Sample records for image quality finally

  1. Positional Quality Assessment of Orthophotos Obtained from Sensors Onboard Multi-Rotor UAV Platforms

    PubMed Central

    Mesas-Carrascosa, Francisco Javier; Rumbao, Inmaculada Clavero; Berrocal, Juan Alberto Barrera; Porras, Alfonso García-Ferrer

    2014-01-01

    In this study we explored the positional quality of orthophotos obtained by an unmanned aerial vehicle (UAV). A multi-rotor UAV was used to obtain images using a vertically mounted digital camera. The flight was processed taking into account the photogrammetry workflow: perform the aerial triangulation, generate a digital surface model, orthorectify individual images and finally obtain a mosaic image or final orthophoto. The UAV orthophotos were assessed with various spatial quality tests used by national mapping agencies (NMAs). Results showed that the orthophotos satisfactorily passed the spatial quality tests and are therefore a useful tool for NMAs in their production flowchart. PMID:25587877

  2. Positional quality assessment of orthophotos obtained from sensors onboard multi-rotor UAV platforms.

    PubMed

    Mesas-Carrascosa, Francisco Javier; Rumbao, Inmaculada Clavero; Berrocal, Juan Alberto Barrera; Porras, Alfonso García-Ferrer

    2014-11-26

    In this study we explored the positional quality of orthophotos obtained by an unmanned aerial vehicle (UAV). A multi-rotor UAV was used to obtain images using a vertically mounted digital camera. The flight was processed taking into account the photogrammetry workflow: perform the aerial triangulation, generate a digital surface model, orthorectify individual images and finally obtain a mosaic image or final orthophoto. The UAV orthophotos were assessed with various spatial quality tests used by national mapping agencies (NMAs). Results showed that the orthophotos satisfactorily passed the spatial quality tests and are therefore a useful tool for NMAs in their production flowchart.

  3. Paediatric x-ray radiation dose reduction and image quality analysis.

    PubMed

    Martin, L; Ruddlesden, R; Makepeace, C; Robinson, L; Mistry, T; Starritt, H

    2013-09-01

    Collaboration of multiple staff groups has resulted in significant reduction in the risk of radiation-induced cancer from radiographic x-ray exposure during childhood. In this study at an acute NHS hospital trust, a preliminary audit identified initial exposure factors. These were compared with European and UK guidance, leading to the introduction of new factors that were in compliance with European guidance on x-ray tube potentials. Image quality was assessed using standard anatomical criteria scoring, and visual grading characteristics analysis assessed the impact on image quality of changes in exposure factors. This analysis determined the acceptability of gradual radiation dose reduction below the European and UK guidance levels. Chest and pelvis exposures were optimised, achieving dose reduction for each age group, with 7%-55% decrease in critical organ dose. Clinicians confirmed diagnostic image quality throughout the iterative process. Analysis of images acquired with preliminary and final exposure factors indicated an average visual grading analysis result of 0.5, demonstrating equivalent image quality. The optimisation process and final radiation doses are reported for Carestream computed radiography to aid other hospitals in minimising radiation risks to children.
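
    The visual grading characteristics (VGC) analysis mentioned above compares ordinal image-quality ratings of two exposure settings; an area under the VGC curve of 0.5 means the two settings give equivalent perceived quality. A minimal sketch of that statistic, assuming simple ordinal ratings (the function name and rating values are illustrative, not from the paper):

```python
def vgc_auc(ratings_ref, ratings_new):
    """Area under the VGC curve: probability that a randomly chosen
    rating of the new images exceeds one of the reference images,
    with ties counted as half. 0.5 means equivalent image quality."""
    wins = 0.0
    for r in ratings_ref:
        for n in ratings_new:
            wins += (n > r) + 0.5 * (n == r)
    return wins / (len(ratings_ref) * len(ratings_new))

# Identical rating distributions give 0.5, i.e. no quality difference.
print(vgc_auc([1, 2, 3, 4], [1, 2, 3, 4]))  # 0.5
```

    Identical rating distributions yield exactly 0.5, which is the sense in which a VGC result of 0.5 demonstrates equivalent image quality.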

  4. Digitizing an Analog Radiography Teaching File Under Time Constraint: Trade-Offs in Efficiency and Image Quality.

    PubMed

    Loehfelm, Thomas W; Prater, Adam B; Debebe, Tequam; Sekhar, Aarti K

    2017-02-01

    We digitized the radiography teaching file at Black Lion Hospital (Addis Ababa, Ethiopia) during a recent trip, using a standard digital camera and a fluorescent light box. Our goal was to photograph every radiograph in the existing library while optimizing the final image size to the maximum resolution of a high-quality tablet computer, preserving the contrast resolution of the radiographs, and minimizing total library file size. A secondary important goal was to minimize the cost and time required to take and process the images. Three workers were able to efficiently remove the radiographs from their storage folders, hang them on the light box, operate the camera, catalog the image, and repack the radiographs into their storage folders. Zoom, focal length, and film speed were fixed, while aperture and shutter speed were manually adjusted for each image, allowing for efficiency and flexibility in image acquisition. Keeping zoom and focal length fixed, which kept the view box at the same relative position in all of the images acquired during a single photography session, allowed unused space to be batch-cropped, saving considerable time in post-processing, at the expense of final image resolution. We present an analysis of the trade-offs in workflow efficiency and final image quality, and demonstrate that a few people with minimal equipment can efficiently digitize a teaching file library.
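
    Because the view box occupied the same pixel region in every frame of a session, one crop box can be applied to the whole batch. A minimal sketch with NumPy arrays standing in for the photographs (the crop coordinates and function name are illustrative, not from the paper's workflow):

```python
import numpy as np

def batch_crop(images, box):
    """Crop every image of a session with the same (top, bottom, left, right)
    box, which is valid when zoom and focal length are fixed so the light
    box stays at the same position in each frame."""
    top, bottom, left, right = box
    return [img[top:bottom, left:right] for img in images]

# Three mock 'photographs' from one session, all 480x640.
session = [np.zeros((480, 640), dtype=np.uint8) for _ in range(3)]
cropped = batch_crop(session, box=(40, 440, 80, 560))
print(cropped[0].shape)  # (400, 480)
```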

  5. TRMM Microwave Imager (TMI) Updates for Final Data Version Release

    NASA Technical Reports Server (NTRS)

    Kroodsma, Rachael A; Bilanow, Stephen; Ji, Yimin; McKague, Darren

    2017-01-01

    The Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) dataset released by the Precipitation Processing System (PPS) will be updated to a final version within the next year. These updates are based on increased knowledge in recent years of radiometer calibration and sensor performance issues. In particular, the Global Precipitation Measurement (GPM) Microwave Imager (GMI) is used as a model for many of the TMI version updates. This paper discusses four aspects of the TMI data product that will be improved: spacecraft attitude, calibration and quality control, along-scan bias corrections, and sensor pointing accuracy. These updates will be incorporated into the final TMI data version, improving the quality of the data product and ensuring accurate geophysical parameters can be derived from TMI.

  6. Predicting perceptual quality of images in realistic scenario using deep filter banks

    NASA Astrophysics Data System (ADS)

    Zhang, Weixia; Yan, Jia; Hu, Shiyong; Ma, Yang; Deng, Dexiang

    2018-03-01

    Classical image perceptual quality assessment models usually resort to natural scene statistics methods, which are based on the assumption that certain reliable statistical regularities hold on undistorted images and are corrupted by introduced distortions. However, these models usually fail to accurately predict the degradation severity of images in realistic scenarios, since such images contain complex, multiple, and interacting authentic distortions. We propose a quality prediction model based on a convolutional neural network. Quality-aware features extracted from filter banks of multiple convolutional layers are aggregated into the image representation. Furthermore, an easy-to-implement and effective feature selection strategy is used to further refine the image representation, and finally a linear support vector regression model is trained to map the image representation to images' subjective perceptual quality scores. Experimental results on benchmark databases demonstrate the effectiveness and generalizability of the proposed model.

  7. Infrared and visible image fusion method based on saliency detection in sparse domain

    NASA Astrophysics Data System (ADS)

    Liu, C. H.; Qi, Y.; Ding, W. R.

    2017-06-01

    Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. First, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained based on sparse coefficients. Then, a saliency detection model is proposed that combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to carry out the fusion. The experimental results show that our method is superior to the state-of-the-art methods in terms of several universal quality evaluation indexes, as well as in visual quality.
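
    The final weighted-fusion step can be illustrated independently of the JSR-based saliency detection: given an integrated saliency map, each fused pixel is a convex combination of the infrared and visible pixels. A minimal sketch (the saliency map below is arbitrary, not produced by the paper's JSR model):

```python
import numpy as np

def saliency_weighted_fusion(ir, vis, saliency):
    """Fuse infrared and visible images with per-pixel weights taken
    from a saliency map assumed to lie in [0, 1]."""
    s = np.clip(saliency, 0.0, 1.0)
    return s * ir + (1.0 - s) * vis

ir = np.full((4, 4), 200.0)   # bright infrared target
vis = np.full((4, 4), 50.0)   # visible-light background
sal = np.zeros((4, 4))
sal[1:3, 1:3] = 1.0           # salient center block
fused = saliency_weighted_fusion(ir, vis, sal)
print(fused[2, 2], fused[0, 0])  # 200.0 50.0
```

    Salient pixels take their value from the infrared image and non-salient pixels from the visible image, which is how the significant infrared information survives into the fused result.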

  8. Self-correcting multi-atlas segmentation

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Wilford, Andrew; Guo, Liang

    2016-03-01

    In multi-atlas segmentation, one typically registers several atlases to the new image, and their respective segmented label images are transformed and fused to form the final segmentation. After each registration, the quality of the registration is reflected by a single global value: the final registration cost. Ideally, if the quality of the registration could be evaluated at each point, independent of the registration process, and that evaluation also provided a direction in which the deformation could be further improved, the overall segmentation performance could be improved. We propose such a self-correcting multi-atlas segmentation method. The method is applied to hippocampus segmentation from brain images, and a statistically significant improvement is observed.
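
    The label-fusion step common to multi-atlas pipelines can be sketched as a per-voxel majority vote over the registered atlas labels; this illustrates the standard fusion baseline, not the paper's self-correcting refinement:

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """Fuse registered atlas segmentations by per-voxel majority vote.
    label_maps: list of integer label arrays of identical shape."""
    stack = np.stack(label_maps)                  # (n_atlases, ...)
    flat = stack.reshape(len(label_maps), -1)
    fused = np.array([np.bincount(col).argmax() for col in flat.T])
    return fused.reshape(label_maps[0].shape)

# Three toy 2x2 'atlas segmentations' of the same image.
a = np.array([[0, 1], [1, 1]])
b = np.array([[0, 1], [0, 1]])
c = np.array([[1, 1], [0, 1]])
print(majority_vote_fusion([a, b, c]).tolist())  # [[0, 1], [0, 1]]
```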

  9. The effect of input data transformations on object-based image analysis

    PubMed Central

    LIPPITT, CHRISTOPHER D.; COULTER, LLOYD L.; FREEMAN, MARY; LAMANTIA-BISHOP, JEFFREY; PANG, WYSON; STOW, DOUGLAS A.

    2011-01-01

    The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest. PMID:21673829

  10. Full-reference quality assessment of stereoscopic images by learning binocular receptive field properties.

    PubMed

    Shao, Feng; Li, Kemeng; Lin, Weisi; Jiang, Gangyi; Yu, Mei; Dai, Qionghai

    2015-10-01

    Quality assessment of 3D images encounters more challenges than that of 2D images, and directly applying 2D image quality metrics is not the solution. In this paper, we propose a new full-reference quality assessment method for stereoscopic images that learns binocular receptive field properties to be more in line with human visual perception. To be more specific, in the training phase, we learn a multiscale dictionary from the training database, so that the latent structure of images can be represented as a set of basis vectors. In the quality estimation phase, we compute a sparse feature similarity index based on the estimated sparse coefficient vectors, considering their phase difference and amplitude difference, and compute a global luminance similarity index that accounts for luminance changes. The final quality score is obtained by incorporating binocular combination based on sparse energy and sparse complexity. Experimental results on five public 3D image quality assessment databases demonstrate that, in comparison with the most closely related existing methods, the devised algorithm achieves high consistency with subjective assessment.
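
    The global luminance similarity index mentioned above is often computed in the SSIM-style form (2·μx·μy + C) / (μx² + μy² + C). A minimal sketch under that assumption (the constant C is illustrative, and the combination with the sparse feature similarity term is not reproduced here):

```python
import numpy as np

def luminance_similarity(x, y, c=1e-4):
    """SSIM-style global luminance similarity between two images,
    equal to 1.0 when the mean luminances match."""
    mx, my = float(np.mean(x)), float(np.mean(y))
    return (2 * mx * my + c) / (mx * mx + my * my + c)

img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
print(luminance_similarity(img, img))              # 1.0
print(luminance_similarity(img, img * 0.5) < 1.0)  # True
```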

  11. Applications of emerging imaging techniques for meat quality and safety detection and evaluation: A review.

    PubMed

    Xiong, Zhenjie; Sun, Da-Wen; Pu, Hongbin; Gao, Wenhong; Dai, Qiong

    2017-03-04

    With improvement in people's living standards, many people nowadays pay more attention to quality and safety of meat. However, traditional methods for meat quality and safety detection and evaluation, such as manual inspection, mechanical methods, and chemical methods, are tedious, time-consuming, and destructive, which cannot meet the requirements of modern meat industry. Therefore, seeking out rapid, non-destructive, and accurate inspection techniques is important for the meat industry. In recent years, a number of novel and noninvasive imaging techniques, such as optical imaging, ultrasound imaging, tomographic imaging, thermal imaging, and odor imaging, have emerged and shown great potential in quality and safety assessment. In this paper, a detailed overview of advanced applications of these emerging imaging techniques for quality and safety assessment of different types of meat (pork, beef, lamb, chicken, and fish) is presented. In addition, advantages and disadvantages of each imaging technique are also summarized. Finally, future trends for these emerging imaging techniques are discussed, including integration of multiple imaging techniques, cost reduction, and developing powerful image-processing algorithms.

  12. No-reference image quality assessment based on statistics of convolution feature maps

    NASA Astrophysics Data System (ADS)

    Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo

    2018-04-01

    We propose a Convolutional Feature Maps (CFM) driven approach to accurately predict image quality. Our motivation is based on the finding that Natural Scene Statistics (NSS) features computed on convolutional feature maps are significantly sensitive to the degree of distortion in an image. In our method, a Convolutional Neural Network (CNN) is trained to obtain kernels for generating the CFM. We design a forward NSS layer which operates on the CFM to better extract NSS features. The quality-aware features derived from the output of the NSS layer effectively describe the type and degree of distortion an image has suffered. Finally, Support Vector Regression (SVR) is employed in our No-Reference Image Quality Assessment (NR-IQA) model to predict the subjective quality score of a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is competitive with state-of-the-art NR-IQA methods.
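
    NSS features of the kind referred to above are typically computed from mean-subtracted contrast-normalized (MSCN) coefficients, whose distribution is sensitive to distortion. A minimal sketch of that normalization (the window width and constant are illustrative defaults; the paper's NSS layer operates on convolutional feature maps, not the raw image):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7/6, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients:
    (I - local_mean) / (local_std + c), a standard NSS front end."""
    img = image.astype(np.float64)
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img * img, sigma) - mu * mu
    std = np.sqrt(np.maximum(var, 0.0))
    return (img - mu) / (std + c)

rng = np.random.default_rng(0)
coeffs = mscn(rng.normal(128.0, 20.0, (64, 64)))
print(coeffs.shape)  # (64, 64)
```

    For an undistorted natural image the MSCN coefficients are roughly zero-mean and Gaussian-like; distortions change the shape of this distribution, which is what makes the features quality-aware.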

  13. Image Quality of the Helioseismic and Magnetic Imager (HMI) Onboard the Solar Dynamics Observatory (SDO)

    NASA Technical Reports Server (NTRS)

    Wachter, R.; Schou, Jesper; Rabello-Soares, M. C.; Miles, J. W.; Duvall, T. L., Jr.; Bush, R. I.

    2011-01-01

    We describe the imaging quality of the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) as measured during the ground calibration of the instrument. We describe the calibration techniques and report our results for the final configuration of HMI. We present the distortion, modulation transfer function, stray light, image shifts introduced by moving parts of the instrument, best focus, field curvature, and the relative alignment of the two cameras. We investigate the gain and linearity of the cameras, and present the measured flat field.

  14. An image analysis of TLC patterns for quality control of saffron based on soil salinity effect: A strategy for data (pre)-processing.

    PubMed

    Sereshti, Hassan; Poursorkh, Zahra; Aliakbarzadeh, Ghazaleh; Zarre, Shahin; Ataolahi, Sahar

    2018-01-15

    Quality of saffron, a valuable food additive, could considerably affect the consumers' health. In this work, a novel preprocessing strategy for image analysis of saffron thin layer chromatographic (TLC) patterns was introduced. This includes performing a series of image pre-processing techniques on TLC images such as compression, inversion, elimination of general baseline (using asymmetric least squares (AsLS)), removing spots shift and concavity (by correlation optimization warping (COW)), and finally conversion to RGB chromatograms. Subsequently, an unsupervised multivariate data analysis including principal component analysis (PCA) and k-means clustering was utilized to investigate the soil salinity effect, as a cultivation parameter, on saffron TLC patterns. This method was used as a rapid and simple technique to obtain the chemical fingerprints of saffron TLC images. Finally, the separated TLC spots were chemically identified using high-performance liquid chromatography-diode array detection (HPLC-DAD). Accordingly, the saffron quality from different areas of Iran was evaluated and classified. Copyright © 2017 Elsevier Ltd. All rights reserved.
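
    The AsLS baseline-elimination step named above can be sketched with Eilers' asymmetric least squares algorithm, which fits a smooth baseline penalized asymmetrically for lying above the signal (the smoothness and asymmetry parameters below are illustrative defaults, not the paper's settings):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least squares (AsLS) baseline of Eilers & Boelens:
    lam controls smoothness, p the asymmetry (p << 0.5 keeps the
    fitted baseline below the peaks)."""
    n = len(y)
    d = sparse.csc_matrix(np.diff(np.eye(n), 2))  # second-difference operator
    w = np.ones(n)
    for _ in range(n_iter):
        big_w = sparse.spdiags(w, 0, n, n)
        z = spsolve(big_w + lam * d.dot(d.T), w * y)
        w = p * (y > z) + (1 - p) * (y < z)
    return z

x = np.arange(200, dtype=float)
drift = 0.005 * x                           # slowly varying baseline
spot = 5.0 * np.exp(-((x - 100) / 5) ** 2)  # chromatographic spot
z = asls_baseline(drift + spot)
print(z.shape)  # (200,)
```

    Subtracting the fitted baseline from each RGB chromatogram leaves the spots on a flat background, which is the purpose of this step before warping and multivariate analysis.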

  15. Examples of subjective image quality enhancement in multimedia

    NASA Astrophysics Data System (ADS)

    Klíma, Miloš; Pazderák, Jiří; Fliegel, Karel

    2007-09-01

    Subjective image quality is an important issue in all multimedia imaging systems, with a significant impact on QoS (Quality of Service). For a long time the image fidelity criterion was widely applied in technical systems, especially in the television and image-compression fields, but optimizing subjective perceptual quality and optimizing fidelity (such as minimizing MSE) are very different tasks. The paper presents experimental testing of several digital techniques for subjective image quality enhancement - color saturation, edge enhancement, denoising operators, and noise addition - well known from both digital photography and video. The evaluation has been done for extensive operator parameterization, and the results are summarized and discussed. It is demonstrated that there are relevant types of image corrections that improve, to some extent, the subjective perception of the image. The above-mentioned techniques have been tested on five test images with significantly different characteristics (fine details, large saturated color areas, high color contrast, easy-to-remember colors, etc.). The experimental results point the way to optimized use of image-enhancing operators. Finally, the concept of impressiveness, as a possible new expression of subjective quality improvement, is presented and discussed.

  16. Restoration of color in a remote sensing image and its quality evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Wang, Zhihe

    2003-09-01

    This paper is focused on the restoration of color remote sensing images (including airborne photos). A complete approach is recommended, proposing that two main aspects be addressed in restoring a remote sensing image: restoration of spatial information and restoration of photometric information. The restoration of spatial information can be performed by using the modulation transfer function (MTF) as the degradation function, in which the MTF is obtained by measuring the edge curve of the original image. The restoration of photometric information can be performed by an improved local maximum entropy algorithm. Moreover, a valid approach to processing color remote sensing images is recommended: split the color image into three monochromatic images corresponding to the three visible light bands, process the three images separately, and then synthesize them under psychological color vision restrictions. Finally, three novel evaluation variables based on image restoration are introduced to evaluate restoration quality in terms of both spatial restoration quality and photometric restoration quality. An evaluation is provided at the end.
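
    Measuring the MTF from an edge curve, as proposed above, typically proceeds from the edge spread function (ESF) to the line spread function (LSF, its derivative) and then to the normalized magnitude of the LSF's Fourier transform. A minimal one-dimensional sketch of that chain (the edge profiles are synthetic, not measured from imagery):

```python
import numpy as np

def mtf_from_edge(esf):
    """MTF estimated from a 1-D edge spread function:
    LSF = d(ESF)/dx, MTF = |FFT(LSF)| normalized to 1 at zero frequency."""
    lsf = np.diff(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

x = np.arange(-32, 32, dtype=float)
sharp_edge = (x >= 0).astype(float)
blurred_edge = 0.5 * (1 + np.tanh(x / 3.0))  # degraded edge

print(mtf_from_edge(sharp_edge)[0])  # 1.0
# The blurred edge loses high-frequency response relative to the sharp one.
print(mtf_from_edge(blurred_edge)[-1] < mtf_from_edge(sharp_edge)[-1])  # True
```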

  17. The effect of image quality, repeated study, and assessment method on anatomy learning.

    PubMed

    Fenesi, Barbara; Mackinnon, Chelsea; Cheng, Lucia; Kim, Joseph A; Wainman, Bruce C

    2017-06-01

    Two-dimensional (2D) images are consistently used to prepare anatomy students for handling real specimens. This study examined whether the quality of 2D images is a critical component in anatomy learning. The visual clarity and consistency of 2D anatomical images was systematically manipulated to produce low-quality and high-quality images of the human hand and human eye. On day 0, participants learned about each anatomical specimen from paper booklets using either low-quality or high-quality images, and then completed a comprehension test using either 2D images or three-dimensional (3D) cadaveric specimens. On day 1, participants relearned each booklet, and on day 2 participants completed a final comprehension test using either 2D images or 3D cadaveric specimens. The effect of image quality on learning varied according to anatomical content, with high-quality images having a greater effect on improving learning of hand anatomy than eye anatomy (high-quality vs. low-quality for hand anatomy P = 0.018; high-quality vs. low-quality for eye anatomy P = 0.247). Also, the benefit of high-quality images on hand anatomy learning was restricted to performance on short-answer (SA) questions immediately after learning (high-quality vs. low-quality on SA questions P = 0.018), but did not apply to performance on multiple-choice (MC) questions (high-quality vs. low-quality on MC questions P = 0.109) or after participants had an additional learning opportunity (24 hours later) with anatomy content (high vs. low on SA questions P = 0.643). This study underscores the limited impact of image quality on anatomy learning, and questions whether investment in enhancing image quality of learning aids significantly promotes knowledge development. Anat Sci Educ 10: 249-261. © 2016 American Association of Anatomists.

  18. A Procedure for High Resolution Satellite Imagery Quality Assessment

    PubMed Central

    Crespi, Mattia; De Vendictis, Laura

    2009-01-01

    Data products generated from High Resolution Satellite Imagery (HRSI) are routinely evaluated during the so-called in-orbit test period, in order to verify whether their quality fits the desired features and, if necessary, to obtain the image correction parameters to be used at the ground processing center. Nevertheless, it is often useful to have tools to evaluate image quality at the final user level as well. Image quality is defined by parameters such as the radiometric resolution and its accuracy, represented by the noise level, and the geometric resolution and sharpness, described by the Modulation Transfer Function (MTF). This paper proposes a procedure to evaluate these image quality parameters; the procedure was implemented in suitable software and tested on high resolution imagery acquired by the QuickBird, WorldView-1 and Cartosat-1 satellites. PMID:22412312

  19. A method for the evaluation of image quality according to the recognition effectiveness of objects in the optical remote sensing image using machine learning algorithm.

    PubMed

    Yuan, Tao; Zheng, Xinqi; Hu, Xuan; Zhou, Wei; Wang, Wei

    2014-01-01

    Objective and effective image quality assessment (IQA) is directly related to the application of optical remote sensing images (ORSI). In this study, a new IQA method that standardizes on the target object recognition rate (ORR) is presented to reflect quality. First, several quality degradation treatments of high-resolution ORSIs are implemented to model ORSIs obtained under different imaging conditions; then, a machine learning algorithm is adopted for recognition experiments on a chosen target object to obtain ORRs; finally, a comparison with commonly used IQA indicators is performed to reveal their applicability and limitations. The results showed that the ORR of the original ORSI was up to 81.95%, whereas the ORR ratios of the quality-degraded images to the original images were 65.52%, 64.58%, 71.21%, and 73.11%. These data can more accurately reflect the advantages and disadvantages of different images for object identification and information extraction when compared with conventional digital image assessment indexes. By recognizing differences in image quality from the perspective of application effect, using a machine learning algorithm to extract regional gray-scale features of typical objects in the image, and quantitatively assessing ORSI quality according to the observed differences, this method provides a new approach for objective ORSI assessment.

  20. Guest Editorial Image Quality

    NASA Astrophysics Data System (ADS)

    Cheatham, Patrick S.

    1982-02-01

    The term image quality can, unfortunately, apply to anything from a public relations firm's discussion to a comparison between corner drugstores' film processing. If we narrow the discussion to optical systems, we clarify the problem somewhat, but only slightly. We are still faced with a multitude of image quality measures all different, and all couched in different terminology. Optical designers speak of MTF values, digital processors talk about summations of before and after image differences, pattern recognition engineers allude to correlation values, and radar imagers use side-lobe response values measured in decibels. Further complexity is introduced by terms such as information content, bandwidth, Strehl ratios, and, of course, limiting resolution. The problem is to compare these different yardsticks and try to establish some concrete ideas about evaluation of a final image. We need to establish the image attributes which are the most important to perception of the image in question and then begin to apply the different system parameters to those attributes.

  21. Quality Control of Structural MRI Images Applied Using FreeSurfer—A Hands-On Workflow to Rate Motion Artifacts

    PubMed Central

    Backhausen, Lea L.; Herting, Megan M.; Buse, Judith; Roessner, Veit; Smolka, Michael N.; Vetter, Nora C.

    2016-01-01

    In structural magnetic resonance imaging motion artifacts are common, especially when not scanning healthy young adults. It has been shown that motion affects the analysis with automated image-processing techniques (e.g., FreeSurfer). This can bias results. Several developmental and adult studies have found reduced volume and thickness of gray matter due to motion artifacts. Thus, quality control is necessary in order to ensure an acceptable level of quality and to define exclusion criteria of images (i.e., determine participants with most severe artifacts). However, information about the quality control workflow and image exclusion procedure is largely lacking in the current literature and the existing rating systems differ. Here, we propose a stringent workflow of quality control steps during and after acquisition of T1-weighted images, which enables researchers dealing with populations that are typically affected by motion artifacts to enhance data quality and maximize sample sizes. As an underlying aim we established a thorough quality control rating system for T1-weighted images and applied it to the analysis of developmental clinical data using the automated processing pipeline FreeSurfer. This hands-on workflow and quality control rating system will aid researchers in minimizing motion artifacts in the final data set, and therefore enhance the quality of structural magnetic resonance imaging studies. PMID:27999528

  22. Imaging quality analysis of computer-generated holograms using the point-based method and slice-based method

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Chen, Siqing; Zheng, Huadong; Sun, Tao; Yu, Yingjie; Gao, Hongyue; Asundi, Anand K.

    2017-06-01

    Computer holography has made notable progress in recent years. The point-based method and the slice-based method are the chief algorithms for generating holograms in holographic display. Although both methods have been validated numerically and optically, the differences in the imaging quality of these methods have not been specifically analyzed. In this paper, we analyze the imaging quality of computer-generated phase holograms generated by point-based Fresnel zone plates (PB-FZP), the point-based Fresnel diffraction algorithm (PB-FDA) and the slice-based Fresnel diffraction algorithm (SB-FDA). The calculation formulas and hologram generation with the three methods are demonstrated. In order to suppress speckle noise, sequential phase-only holograms are generated in our work. Numerically and experimentally reconstructed images are also exhibited. By comparing the imaging quality, the merits and drawbacks of the three methods are analyzed, and conclusions are drawn.

  23. Monte Carlo simulation of PET/MR scanner and assessment of motion correction strategies

    NASA Astrophysics Data System (ADS)

    Işın, A.; Uzun Ozsahin, D.; Dutta, J.; Haddani, S.; El-Fakhri, G.

    2017-03-01

    Positron Emission Tomography (PET) is widely used in three-dimensional imaging of metabolic body function and in tumor detection. Important research efforts are being made to improve this imaging modality, and powerful simulators such as GATE are used to test and develop methods for this purpose. PET requires acquisition times on the order of a few minutes; therefore, natural patient movements such as respiration can adversely affect image quality, which drives scientists to develop motion compensation methods. The goal of this study is to evaluate various image reconstruction methods with a GATE simulation of a PET acquisition of the torso area. The obtained results show the need to compensate for natural respiratory movements in order to obtain an image of similar quality to the reference image. Improvements are still possible in the applied motion-field extraction algorithms. Finally, a statistical analysis should confirm the obtained results.

  24. Light-leaking region segmentation of FOG fiber based on quality evaluation of infrared image

    NASA Astrophysics Data System (ADS)

    Liu, Haoting; Wang, Wei; Gao, Feng; Shan, Lianjie; Ma, Yuzhou; Ge, Wenqian

    2014-07-01

    To improve the assembly reliability of the Fiber Optic Gyroscope (FOG), a light-leakage detection system and method are developed. First, an agile movement control platform is designed to implement pose control of the FOG optical path component in six Degrees of Freedom (DOF). Second, an infrared camera is employed to capture working-state images of the corresponding fibers in the optical path component after the manual assembly of the FOG, so that the entire light transmission process of key sections in the light path can be recorded. Third, an image-quality-evaluation-based region segmentation method is developed for the light leakage images. In contrast to traditional methods, image quality metrics, including region contrast, edge blur, and image noise level, are first considered to distinguish the characteristics of the infrared image; then robust segmentation algorithms, including graph cut and flood fill, are developed for region segmentation according to the specific image quality. Finally, after segmentation of the light leakage region, the typical light-leaking type, such as a point defect, a wedge defect, or a surface defect, can be identified. By using the image-quality-based method, the applicability of our proposed system is improved dramatically. Many experimental results have proved the validity and effectiveness of this method.
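
    The flood-fill stage of the region segmentation can be sketched as a threshold followed by connected-component labeling, keeping the component that contains a seed inside the suspected light-leaking spot (the threshold and seed are illustrative; the paper chooses its segmentation algorithm according to the measured image quality):

```python
import numpy as np
from scipy import ndimage

def flood_region(image, seed, threshold):
    """Binary mask of the connected bright region containing `seed`,
    a simple stand-in for flood-fill segmentation of a light leak."""
    mask = image > threshold
    labels, _ = ndimage.label(mask)
    return labels == labels[seed]

img = np.zeros((8, 8))
img[2:5, 2:5] = 1.0   # leaking spot
img[6, 6] = 1.0       # isolated hot pixel, a separate component
region = flood_region(img, seed=(3, 3), threshold=0.5)
print(int(region.sum()))  # 9
```

    The isolated hot pixel is excluded because it is not connected to the seeded component, which is the property that distinguishes flood fill from plain thresholding.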

  25. In-situ quality monitoring during laser brazing

    NASA Astrophysics Data System (ADS)

    Ungers, Michael; Fecker, Daniel; Frank, Sascha; Donst, Dmitri; Märgner, Volker; Abels, Peter; Kaierle, Stefan

    Laser brazing of zinc-coated steel is a widely established manufacturing process in the automotive sector, where high quality requirements must be fulfilled. The strength, impermeability and surface appearance of the joint are particularly important for judging its quality. The development of an on-line quality control system is highly desired by the industry. This paper presents recent work on the development of such a system, which consists of two cameras operating in different spectral ranges. For the evaluation of the system, seam imperfections are created artificially during experiments. Finally, image processing algorithms for monitoring process parameters based on the captured images are presented.

  6. ESO imaging survey: optical deep public survey

    NASA Astrophysics Data System (ADS)

    Mignano, A.; Miralles, J.-M.; da Costa, L.; Olsen, L. F.; Prandoni, I.; Arnouts, S.; Benoist, C.; Madejsky, R.; Slijkhuis, R.; Zaggia, S.

    2007-02-01

    This paper presents new five-passband (UBVRI) optical wide-field imaging data accumulated as part of the DEEP Public Survey (DPS) carried out as a public survey by the ESO Imaging Survey (EIS) project. Out of the 3 square degrees originally proposed, the survey covers 2.75 square degrees in at least one band (normally R), and 1.00 square degree in all five passbands. The median seeing, as measured in the final stacked images, is 0.97 arcsec, ranging from 0.75 arcsec to 2.0 arcsec. The median limiting magnitudes (AB system, 2″ aperture, 5σ detection limit) are UAB = 25.65, BAB = 25.54, VAB = 25.18, RAB = 24.8 and IAB = 24.12 mag, consistent with those proposed in the original survey design. The paper describes the observations and data reduction using the EIS Data Reduction System and its associated EIS/MVM library. The quality of the individual images was inspected, bad images were discarded, and the remaining images were used to produce final image stacks in each passband, from which sources were extracted. The scientific quality of these final images and associated catalogs was then assessed qualitatively by visual inspection and quantitatively by comparing statistical measures derived from these data with those of other authors as well as model predictions, and by direct comparison with the results obtained from the reduction of the same dataset using an independent (hands-on) software system. Finally, to illustrate one application of this survey, the results of a preliminary effort to identify sub-mJy radio sources are reported. To the limiting magnitude reached in the R and I passbands, the success rate ranges from 66 to 81% (depending on the field). These data are publicly available at CDS. Based on observations carried out at the European Southern Observatory, La Silla, Chile, under program Nos. 164.O-0561, 169.A-0725, and 267.A-5729. Appendices A, B and C are only available in electronic form at http://www.aanda.org

  7. A whole-heart motion-correction algorithm: Effects on CT image quality and diagnostic accuracy of mechanical valve prosthesis abnormalities.

    PubMed

    Suh, Young Joo; Kim, Young Jin; Kim, Jin Young; Chang, Suyon; Im, Dong Jin; Hong, Yoo Jin; Choi, Byoung Wook

    2017-11-01

    We aimed to determine the effect of a whole-heart motion-correction algorithm (new-generation snapshot freeze, NG SSF) on the image quality of cardiac computed tomography (CT) images in patients with mechanical valve prostheses, compared to standard images without motion correction, and to compare the diagnostic accuracy of NG SSF and standard CT image sets for the detection of prosthetic valve abnormalities. A total of 20 patients with 32 mechanical valves who underwent wide-coverage detector cardiac CT with single-heartbeat acquisition were included. The CT image quality for subvalvular (below the prosthesis) and valvular regions (valve leaflets) of mechanical valves was assessed by two observers on a four-point scale (1 = poor, 2 = fair, 3 = good, and 4 = excellent). Paired t-tests or Wilcoxon signed rank tests were used to compare image quality scores and the number of diagnostic phases (image quality score ≥ 3) between the standard and NG SSF image sets. Diagnostic performance for the detection of prosthetic valve abnormalities was compared between the two image sets, with the final diagnosis, established by re-operation or clinical findings, as the reference standard. NG SSF image sets had better image quality scores than standard image sets for both valvular and subvalvular regions (P < 0.05 for both). The number of phases of diagnostic image quality per patient was significantly greater in the NG SSF image set than in the standard image set for both valvular and subvalvular regions (P < 0.0001). The diagnostic performance of NG SSF image sets for the detection of prosthetic abnormalities (20 pannus and two paravalvular leaks) was greater than that of standard image sets (P < 0.05). Application of NG SSF can improve CT image quality and diagnostic accuracy in patients with mechanical valves compared to standard images. Copyright © 2017 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
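
    The paired comparison of per-patient quality scores can be reproduced with a Wilcoxon signed-rank test. The scores below are hypothetical stand-ins on the same 4-point scale, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-patient quality scores (1 = poor ... 4 = excellent),
# one pair per patient; illustrative only.
standard = np.array([2, 2, 3, 1, 2, 3, 2, 2, 1, 2, 3, 2])
ng_ssf = np.array([3, 3, 4, 2, 3, 3, 3, 4, 2, 3, 4, 3])

# Wilcoxon signed-rank test on the paired differences.
stat, p = wilcoxon(ng_ssf, standard)
print(stat, round(p, 4))
```

    Since every non-tied pair favors NG SSF here, the statistic (the smaller signed-rank sum) is zero and the difference is significant.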

  8. Effect of masking phase-only holograms on the quality of reconstructed images.

    PubMed

    Deng, Yuanbo; Chu, Daping

    2016-04-20

    A phase-only hologram modulates the phase of the incident light and diffracts it efficiently with low energy loss because of minimal absorption. Much research attention has been focused on how to generate phase-only holograms, but little work has been done to understand the effect and limitations of their partial implementation, possibly due to physical defects and constraints, in particular in practical situations where a phase-only hologram is confined or needs to be sliced or tiled. The present study simulates the effect of masking phase-only holograms on the quality of reconstructed images in three different scenarios with different filling factors, filling positions, and illumination intensity profiles. Quantitative analysis confirms that, as expected, the width of the image point spread function increases and the image quality decreases when the filling factor decreases, while the image quality remains the same for different filling positions. The width of the image point spread function derived from different filling factors is consistent with that measured directly from the reconstructed image, especially as the filling factor becomes small. Finally, mask profiles of different shapes and intensity distributions are shown to have more complicated effects on the image point spread function, which in turn affect the quality and textures of the reconstructed image.
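
    The widening of the point spread function with a shrinking filling factor can be sketched with a Fourier-optics toy model: the reconstruction is taken as the FFT of a (trivially flat) phase-only aperture, and a centred square mask sets the filling factor. The grid size and the width measure are assumptions, not the paper's setup.

```python
import numpy as np

def psf_width(filling_factor, n=256):
    """Reconstruct a point from a uniform phase-only hologram confined
    to a centred square covering `filling_factor` of the plane, and
    return the number of pixels at or above half maximum along the
    central row of the PSF."""
    side = max(1, int(round(n * np.sqrt(filling_factor))))
    aperture = np.zeros((n, n))
    lo = (n - side) // 2
    aperture[lo:lo + side, lo:lo + side] = 1.0   # masked, uniform phase
    field = np.fft.fftshift(np.fft.fft2(aperture))
    psf = np.abs(field) ** 2
    row = psf[n // 2]
    return int((row >= row.max() / 2).sum())

# PSF width grows as the filling factor shrinks.
for f in (1.0, 0.0625, 0.015625):
    print(f, psf_width(f))
```

    The widths follow the expected reciprocal scaling: a sixteen-fold smaller aperture area roughly quadruples the half-maximum extent of the point spread function.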

  9. Medical Image Processing Server applied to Quality Control of Nuclear Medicine.

    NASA Astrophysics Data System (ADS)

    Vergara, C.; Graffigna, J. P.; Marino, E.; Omati, S.; Holleywell, P.

    2016-04-01

    This paper is framed within the area of medical image processing and aims to present the process of installation, configuration and implementation of a Medical Image Processing Server (MIPS) at the Fundación Escuela de Medicina Nuclear (FUESMEN), located in Mendoza, Argentina. It was developed in the Gabinete de Tecnología Médica (GA.TE.ME), Facultad de Ingeniería, Universidad Nacional de San Juan. MIPS is a software system that, using the DICOM standard, can receive medical imaging studies from different modalities or viewing stations, execute algorithms on them, and finally return the results to other devices. To achieve these objectives, preliminary tests were conducted in the laboratory, and the tools were then installed remotely in the clinical environment. Once the suitable algorithms were defined, appropriate protocols for setting them up and using them in the different services were established. Finally, it is important to highlight the implementation and training provided at FUESMEN, using nuclear medicine quality control processes. Results of the implementation are presented in this work.

  10. Multiresolution generalized N dimension PCA for ultrasound image denoising

    PubMed Central

    2014-01-01

    Background Ultrasound images are usually affected by speckle noise, a type of random multiplicative noise. Reducing speckle and improving visual image quality are therefore vital to obtaining a better diagnosis. Method In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N dimension PCA (MR-GND-PCA), is presented. In this method, a Gaussian pyramid and multiscale image stacks on each level are built first. GND-PCA, a multilinear subspace learning method, is used for denoising, and the levels are then combined via a Laplacian pyramid to obtain the final denoised image. Results The proposed method was tested on synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, were used to evaluate its performance. Conclusion Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise while preserving structure. The method is also robust for images with much higher levels of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917
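
    The evaluation metrics named in the Results section (MSE, SNR, PSNR) can be written down directly; the synthetic multiplicative-speckle check below is illustrative, not the paper's data.

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference and a test image."""
    return float(np.mean((ref.astype(float) - img.astype(float)) ** 2))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    m = mse(ref, img)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def snr(ref, img):
    """Signal-to-noise ratio in dB, with MSE as the noise power."""
    noise = mse(ref, img)
    signal = float(np.mean(ref.astype(float) ** 2))
    return float('inf') if noise == 0 else 10.0 * np.log10(signal / noise)

# Synthetic check: multiplicative speckle, as in ultrasound imaging.
rng = np.random.default_rng(0)
clean = np.full((128, 128), 100.0)
speckled = clean * (1.0 + 0.2 * rng.standard_normal(clean.shape))
print(round(psnr(clean, speckled), 1), round(snr(clean, speckled), 1))
```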

  11. Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Rukundo, Olivier

    2018-04-01

    This paper proposes a novel enhancement method for capsule endoscopy images based exclusively on the bilinear interpolation algorithm. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate an average pixel value that is identical for each pixel in that particular group. After these calculations, groups of identical pixels are overlapped successively in the horizontal and vertical directions to achieve a preliminary enhanced image. The final enhanced image is achieved by halving the sum of the original and preliminary enhanced image pixels. Quantitative and qualitative experiments were conducted, focusing on pairwise comparisons between original and enhanced images. The final enhanced images generally had the best diagnostic quality and gave more detail about the visibility of vessels and structures in capsule endoscopy images.
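
    One reading of the scheme above can be sketched per channel: every pixel takes the mean of an overlapped 2x2 group (half-unit weight per axis), and the final value averages that smoothed image with the original. The exact group/overlap convention is an assumption about the paper's description.

```python
import numpy as np

def half_unit_enhance(channel):
    """Sketch of the half-unit weighted bilinear enhancement for one
    RGB channel: 2x2 group means (weight 1/4 per pixel) overlapped via
    edge padding, then averaged with the original channel."""
    c = channel.astype(float)
    padded = np.pad(c, ((0, 1), (0, 1)), mode='edge')
    group_mean = (padded[:-1, :-1] + padded[:-1, 1:] +
                  padded[1:, :-1] + padded[1:, 1:]) / 4.0
    return (c + group_mean) / 2.0            # halve the sum of both images

# Tiny example channel.
ch = np.array([[0.0, 4.0], [8.0, 12.0]])
print(half_unit_enhance(ch))
```

    For a full image the same function would be applied to the R, G and B planes independently, matching the abstract's no-color-conversion claim.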

  12. A method to incorporate the effect of beam quality on image noise in a digitally reconstructed radiograph (DRR) based computer simulation for optimisation of digital radiography

    NASA Astrophysics Data System (ADS)

    Moore, Craig S.; Wood, Tim J.; Saunderson, John R.; Beavis, Andrew W.

    2017-09-01

    The use of computer-simulated digital x-radiographs for optimisation purposes has become widespread in recent years. To make these optimisation investigations effective, it is vital that simulated radiographs contain accurate anatomical and system noise. Computer algorithms that simulate radiographs based solely on the incident detector x-ray intensity (‘dose’) have been reported extensively in the literature. However, while it has been established for digital mammography that x-ray beam quality is an important factor when modelling noise in simulated images, there are no such studies for diagnostic imaging of the chest, abdomen and pelvis. This study investigates the influence of beam quality on image noise in a digital radiography (DR) imaging system, and incorporates these effects into a digitally reconstructed radiograph (DRR) computer simulator. Image noise was measured on a real DR imaging system as a function of dose (absorbed energy) over a range of clinically relevant beam qualities. Simulated ‘absorbed energy’ and ‘beam quality’ DRRs were then created for each patient and tube voltage under investigation. Simulated noise images, corrected for dose and beam quality, were subsequently produced from the absorbed energy and beam quality DRRs using the measured noise, absorbed energy and beam quality relationships. The noise images were superimposed onto the noiseless absorbed energy DRRs to create the final images. Signal-to-noise measurements in simulated chest, abdomen and spine images were within 10% of the corresponding measurements in real images. This compares favourably to our previous algorithm, where images corrected for dose only were all within 20%.
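
    The noise-injection step can be sketched as a lookup of measured noise standard deviation against dose at each beam quality, interpolated over the dose map and used to scale Gaussian noise. All calibration numbers below are hypothetical stand-ins for the measured relationships, not the paper's data.

```python
import numpy as np

# Hypothetical calibration: measured noise (std dev) vs. absorbed
# energy ('dose') at two beam qualities (tube voltages).
doses = np.array([1.0, 2.0, 4.0, 8.0])        # arbitrary dose units
noise_70kv = np.array([8.0, 5.7, 4.0, 2.8])    # roughly dose**-0.5
noise_120kv = np.array([6.0, 4.2, 3.0, 2.1])

def simulated_noise_std(dose_map, kv):
    """Interpolate the measured noise level for each pixel's dose."""
    table = {70: noise_70kv, 120: noise_120kv}[kv]
    return np.interp(dose_map, doses, table)

rng = np.random.default_rng(1)
dose_map = np.full((64, 64), 4.0)              # noiseless 'absorbed energy' DRR
noise = rng.standard_normal(dose_map.shape) * simulated_noise_std(dose_map, 120)
noisy_drr = dose_map + noise                   # superimpose onto the DRR
print(round(float(noise.std()), 1))
```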

  13. MO-DE-207-04: Imaging educational program on solutions to common pediatric imaging challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamurthy, R.

    This imaging educational program will focus on solutions to common pediatric imaging challenges. The speakers will present collective knowledge on best practices in pediatric imaging from their experience at dedicated children’s hospitals. The educational program will begin with a detailed discussion of the optimal configuration of fluoroscopes for general pediatric procedures. Following this introduction will be a focused discussion on the utility of Dual Energy CT for imaging children. The third lecture will address the substantial challenge of obtaining consistent image post-processing in pediatric digital radiography. The fourth and final lecture will address best practices in pediatric MRI, including a discussion of ancillary methods to reduce sedation and anesthesia rates. Learning Objectives: To learn techniques for optimizing radiation dose and image quality in pediatric fluoroscopy; To become familiar with the unique challenges and applications of Dual Energy CT in pediatric imaging; To learn solutions for consistent post-processing quality in pediatric digital radiography; To understand the key components of an effective MRI safety and quality program for the pediatric practice.

  14. Astronomical Instrumentation Systems Quality Management Planning: AISQMP

    NASA Astrophysics Data System (ADS)

    Goldbaum, Jesse

    2017-06-01

    The capability of small aperture astronomical instrumentation systems (AIS) to make meaningful scientific contributions has never been better. The purpose of AIS quality management planning (AISQMP) is to ensure the quality of these contributions such that they are both valid and reliable. The first step involved with AISQMP is to specify objective quality measures not just for the AIS final product, but also for the instrumentation used in its production. The next step is to set up a process to track these measures and control for any unwanted variation. The final step is continual effort applied to reducing variation and obtaining measured values near optimal theoretical performance. This paper provides an overview of AISQMP while focusing on objective quality measures applied to astronomical imaging systems.

  15. Astronomical Instrumentation Systems Quality Management Planning: AISQMP (Abstract)

    NASA Astrophysics Data System (ADS)

    Goldbaum, J.

    2017-12-01

    (Abstract only) The capability of small aperture astronomical instrumentation systems (AIS) to make meaningful scientific contributions has never been better. The purpose of AIS quality management planning (AISQMP) is to ensure the quality of these contributions such that they are both valid and reliable. The first step involved with AISQMP is to specify objective quality measures not just for the AIS final product, but also for the instrumentation used in its production. The next step is to set up a process to track these measures and control for any unwanted variation. The final step is continual effort applied to reducing variation and obtaining measured values near optimal theoretical performance. This paper provides an overview of AISQMP while focusing on objective quality measures applied to astronomical imaging systems.

  16. Image sharpness assessment based on wavelet energy of edge area

    NASA Astrophysics Data System (ADS)

    Li, Jin; Zhang, Hong; Zhang, Lei; Yang, Yifan; He, Lei; Sun, Mingui

    2018-04-01

    Image quality assessment is needed in multiple image processing areas, and blur is one of the key causes of image deterioration. Although effective full-reference image quality assessment metrics have been proposed in the past few years, no-reference assessment is still an area of active research. To address this problem, this paper proposes a no-reference sharpness assessment method based on the wavelet transform which focuses on the edge areas of an image. Based on two simple characteristics of the human visual system, weights are introduced to calculate the weighted log-energy of each wavelet subband. The final score is given by the ratio of high-frequency energy to total energy. The algorithm is tested on multiple databases. Compared with several state-of-the-art metrics, the proposed algorithm performs better and consumes less runtime.
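
    The core of such a score can be sketched with a single-level Haar transform implemented directly in NumPy: detail-subband energy over total energy. The human-vision weighting and edge-area restriction from the paper are omitted here, so this is a simplified stand-in, not the proposed metric.

```python
import numpy as np

def haar_subbands(img):
    """One level of the 2-D Haar transform via 2x2 averaging and
    differencing (the image is cropped to even dimensions first)."""
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2].astype(float)
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0       # approximation
    lh = (a - b + c - d) / 2.0       # horizontal detail
    hl = (a + b - c - d) / 2.0       # vertical detail
    hh = (a - b - c + d) / 2.0       # diagonal detail
    return ll, lh, hl, hh

def sharpness_score(img):
    """Ratio of detail (high-frequency) energy to total wavelet energy."""
    ll, lh, hl, hh = haar_subbands(img)
    high = (lh ** 2).sum() + (hl ** 2).sum() + (hh ** 2).sum()
    return high / (high + (ll ** 2).sum())

def box_blur(img, k=5):
    """Simple k x k box blur with edge padding."""
    pad = np.pad(img, k // 2, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# A sharp vertical edge scores higher than its blurred copy.
img = np.zeros((64, 64))
img[:, 31:] = 255.0
print(sharpness_score(img) > sharpness_score(box_blur(img)))
```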

  17. Diffuse prior monotonic likelihood ratio test for evaluation of fused image quality measures.

    PubMed

    Wei, Chuanming; Kaplan, Lance M; Burks, Stephen D; Blum, Rick S

    2011-02-01

    This paper introduces a novel method to score how well proposed fused image quality measures (FIQMs) indicate the effectiveness of humans to detect targets in fused imagery. The human detection performance is measured via human perception experiments. A good FIQM should relate to perception results in a monotonic fashion. The method computes a new diffuse prior monotonic likelihood ratio (DPMLR) to facilitate the comparison of the H(1) hypothesis that the intrinsic human detection performance is related to the FIQM via a monotonic function against the null hypothesis that the detection and image quality relationship is random. The paper discusses many interesting properties of the DPMLR and demonstrates the effectiveness of the DPMLR test via Monte Carlo simulations. Finally, the DPMLR is used to score FIQMs with test cases considering over 35 scenes and various image fusion algorithms.

  18. Concave omnidirectional imaging device for cylindrical object based on catadioptric panoramic imaging

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojun; Wu, Yumei; Wen, Peizhi

    2018-03-01

    To obtain information about the outer surface of a cylindrical object, we propose a catadioptric panoramic imaging system based on the principle of uniform spatial resolution for vertical scenes. First, the influence of the projection-equation coefficients on the spatial resolution and astigmatism of the panoramic system is discussed. Through parameter optimization, we obtain appropriate coefficients for the projection equation, so that the imaging quality of the entire imaging system reaches an optimum. Finally, the system projection equation is calibrated, and an undistorted rectangular panoramic image is obtained using the cylindrical-surface projection expansion method. The proposed 360-deg panoramic imaging device overcomes the shortcomings of existing surface panoramic imaging methods, and it has the advantages of low cost, simple structure, high imaging quality, and small distortion. The experimental results show the effectiveness of the proposed method.
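
    The cylindrical-surface projection expansion can be sketched as unwrapping the annulus of a catadioptric image into a rectangle by sampling along rays. Nearest-neighbour sampling and the ring radii below are assumptions for illustration; a calibrated system would use its projection equation instead.

```python
import numpy as np

def unwrap_panorama(img, center, r_in, r_out, width=360):
    """Expand the annulus between radii r_in..r_out of a catadioptric
    image into a rectangular panorama (nearest-neighbour sampling)."""
    cy, cx = center
    height = int(r_out - r_in)
    out = np.zeros((height, width), dtype=img.dtype)
    for row in range(height):
        r = r_out - row                       # top row = outer radius
        for col in range(width):
            theta = 2.0 * np.pi * col / width
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                out[row, col] = img[y, x]
    return out

# Synthetic annular image whose intensity encodes the radius.
n = 201
cy = cx = 100
yy, xx = np.mgrid[0:n, 0:n]
rr = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
ring = np.where((rr >= 40) & (rr <= 90), rr, 0.0)
pano = unwrap_panorama(ring, (cy, cx), 40, 90)
print(pano.shape)
```

    Each panorama row samples a constant radius, so in this synthetic case every row is nearly constant, which makes the unwrapping easy to verify.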

  19. New segmentation-based tone mapping algorithm for high dynamic range image

    NASA Astrophysics Data System (ADS)

    Duan, Weiwei; Guo, Huinan; Zhou, Zuofeng; Huang, Huimin; Cao, Jianzhong

    2017-07-01

    Traditional tone mapping algorithms for the display of high dynamic range (HDR) images have the drawback of losing the impression of brightness, contrast and color information. To overcome this, we propose a new tone mapping algorithm based on dividing the image into different exposure regions. First, the over-exposure region is determined using the Local Binary Pattern information of the HDR image. Then, based on the peak and average gray levels of the histogram, the under-exposure and normal-exposure regions of the HDR image are selected separately. Finally, the different exposure regions are mapped by differentiated tone mapping methods to obtain the final result. Experimental results show that the proposed algorithm achieves better performance than other algorithms in both visual quality and an objective contrast criterion.
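
    The region-splitting step can be sketched as producing three disjoint masks. The paper keys the over-exposed region on Local Binary Patterns and the others on histogram peak/average; plain percentile thresholds stand in for those rules here, so this is an assumption, not the paper's criteria.

```python
import numpy as np

def exposure_regions(gray, over_pct=95, under_pct=5):
    """Split an image into over-, under- and normal-exposure masks
    using simple percentile thresholds (illustrative stand-in)."""
    hi = np.percentile(gray, over_pct)
    lo = np.percentile(gray, under_pct)
    over = gray >= hi
    under = gray <= lo
    normal = ~(over | under)                  # masks partition the image
    return over, under, normal

rng = np.random.default_rng(2)
gray = rng.uniform(0, 255, size=(100, 100))
over, under, normal = exposure_regions(gray)
print(round(float(over.mean()), 3), round(float(under.mean()), 3))
```

    Each mask would then receive its own tone curve, and the three mapped regions would be recombined into the final displayable image.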

  20. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing broad scope in product advertising. Mixing real world images with those rendered from virtual space software shows a more or less visible mismatch between the corresponding image quality performances. Rendered images are produced by software whose quality is limited only by the output resolution. Real world images are taken with cameras subject to some amount of image degradation from factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all these image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of image degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real world image qualities. The system MTF is determined by the slanted edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
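
    The final filtering step can be sketched directly: a perfectly sharp rendered edge is degraded with a Gaussian approximation of the measured PSF. The sigma of 1.5 px is a hypothetical value standing in for one derived from a slanted-edge MTF measurement.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def match_rendered_to_real(rendered, psf_sigma_px):
    """Degrade a rendered image with a Gaussian approximation of the
    measured system PSF so it matches the real camera's sharpness."""
    return gaussian_filter(rendered.astype(float), sigma=psf_sigma_px)

# A perfectly sharp rendered edge, blurred with a hypothetical
# sigma of 1.5 px.
rendered = np.zeros((32, 64))
rendered[:, 32:] = 255.0
matched = match_rendered_to_real(rendered, 1.5)

# Count transition pixels along one row: the hard edge now spreads.
transition = int(np.count_nonzero((matched[0] > 2) & (matched[0] < 253)))
print(transition)
```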

  1. High Contrast Ultrafast Imaging of the Human Heart

    PubMed Central

    Papadacci, Clement; Pernot, Mathieu; Couade, Mathieu; Fink, Mathias; Tanter, Mickael

    2014-01-01

    Non-invasive ultrafast imaging of the human heart is a major challenge for cardiac applications, where it is needed to image intrinsic waves such as electromechanical waves or remotely induced shear waves in elastography imaging techniques. In this paper we propose to perform ultrafast imaging of the heart with an adapted sector size by using diverging waves emitted from a classical transthoracic cardiac phased array probe. As in ultrafast imaging with plane wave coherent compounding, diverging waves can be summed coherently to obtain high-quality images of the entire heart at high frame rate in a full field of view. To image shear wave propagation at high SNR, the field of view can be adapted by changing the angular aperture of the transmitted wave. Backscattered echoes from successive circular wave acquisitions are coherently summed at every location in the image to improve image quality while maintaining very high frame rates. The transmitted diverging waves, angular apertures and subaperture sizes are tested in simulation, and ultrafast coherent compounding is implemented on a commercial scanner. The improvement in imaging quality is quantified in a phantom and in vivo on the human heart. Imaging shear wave propagation at 2500 frames/s using 5 diverging waves provides a strong increase in the signal-to-noise ratio of the tissue velocity estimates while maintaining a high frame rate. Finally, ultrafast imaging with 1 to 5 diverging waves is used to image the human heart at a frame rate of 900 frames/s over an entire cardiac cycle. Thanks to spatial coherent compounding, a strong improvement of imaging quality is obtained with a small number of transmitted diverging waves and a high frame rate, which allows imaging the propagation of electromechanical and shear waves with good image quality. PMID:24474135

  2. Image transfer by cascaded stack of photonic crystal and air layers.

    PubMed

    Shen, C; Michielsen, K; De Raedt, H

    2006-01-23

    We demonstrate image transfer by a cascaded stack consisting of two and three triangular-lattice photonic crystal slabs separated by air. The quality of the image transferred by the stack is sensitive to the air/photonic crystal interface termination and the frequency. Depending on the frequency and the surface termination, the image can be transferred by the stack with very little deterioration of the resolution; that is, the resolution of the final image is approximately the same as the resolution of the image formed behind a single photonic crystal slab.

  3. Fast high resolution reconstruction in multi-slice and multi-view cMRI

    NASA Astrophysics Data System (ADS)

    Velasco Toledo, Nelson; Romero Castro, Eduardo

    2015-01-01

    Cardiac magnetic resonance imaging (cMRI) is a useful tool in diagnosis, prognosis and research, since it functionally tracks the heart structure. Although useful, this imaging technique is limited in spatial resolution because the heart is a constantly moving organ; other uncontrolled conditions, such as patient movements and volumetric changes during the apnea periods in which data are acquired, further limit the time available to capture high-quality information. This paper presents a very fast and simple strategy to reconstruct high resolution 3D images from a set of low resolution series of 2D images. The strategy is based on an information reallocation algorithm which uses the DICOM header to relocate voxel intensities on a regular grid. An interpolation method is applied to fill empty places with estimated data; the interpolation resamples the low resolution information to estimate the missing information. As a final step, a Gaussian filter denoises the final result. The reconstructed image is evaluated using a super-resolution reconstruction as reference. The evaluation reveals that the method maintains the general heart structure with a small loss of detailed information (edge sharpening and blurring); some artifacts related to the input information quality are detected. The proposed method requires little time and few computational resources.

  4. Application of Oversampling to obtain the MTF of Digital Radiology Equipment.

    NASA Astrophysics Data System (ADS)

    Narváez, M.; Graffigna, J. P.; Gómez, M. E.; Romo, R.

    2016-04-01

    Within the objectives of the project Medical Image Processing for Quality Assessment of X-Ray Imaging, the present research work is aimed at developing a phantom X-ray image and its associated processing algorithms in order to evaluate the image quality rendered by digital X-ray equipment. These tools are used to measure various image parameters, among which spatial resolution is a fundamental property that can be characterized by the Modulation Transfer Function (MTF) of an imaging system [1]. After a thorough survey of Argentine and international publications on imaging quality control in digital X-ray imaging, it was decided to adopt for this work the Norm IEC 62220-1:2003, which recommends using an image edge as the testing method. In order to obtain the characterizing MTF, a protocol was designed to unify the conditions under which the images are acquired for later evaluation. The protocol implied acquiring a radiographic image by means of a specific reference technique, i.e. referred to voltage, current, time, focus-to-plate/film distance, or another reference parameter, and interpreting the image through a computed radiology or direct digital radiology system. The contribution of the work stems from the fact that, even though the traditional way of evaluating X-ray image quality has relied mostly on subjective methods, this work presents an objective evaluation tool for the images obtained with a given piece of equipment, followed by a contrastive analysis with the renderings from other X-ray imaging sets. Once the images were obtained, the specific calculations were carried out. Finally, we present the results obtained on different equipment.
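
    The core computation behind the edge method of IEC 62220-1 can be sketched as: differentiate the edge spread function (ESF) to get the line spread function (LSF), then take its Fourier magnitude. The standard's full procedure builds an oversampled ESF by projecting a slightly tilted edge; here a synthetic Gaussian-blurred ESF is used instead, with a hypothetical blur of 4 samples.

```python
import numpy as np
from scipy.special import erf

def mtf_from_esf(esf):
    """ESF -> LSF -> MTF (normalized to DC)."""
    lsf = np.diff(esf)                       # differentiate the ESF
    lsf = lsf * np.hanning(lsf.size)         # window to curb spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# Synthetic ESF: an ideal edge blurred by a Gaussian detector response.
x = np.arange(256)
sigma = 4.0                                  # blur in samples (hypothetical)
esf = 0.5 * (1.0 + erf((x - 128) / (sigma * np.sqrt(2.0))))
mtf = mtf_from_esf(esf)
print(round(float(mtf[10]), 2))
```

    The resulting MTF starts at 1 at zero frequency and falls off monotonically, as expected for a Gaussian blur.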

  5. Formation of the image on the receiver of thermal radiation

    NASA Astrophysics Data System (ADS)

    Akimenko, Tatiana A.

    2018-04-01

    The formation of the thermal picture of the observed scene, together with verification of the quality of the thermal images obtained, is one of the important stages of the technological process that determine the quality of a thermal imaging observation system. This article proposes a model for the formation of the thermal picture of a scene which must take into account the features of the object of observation as the source of the signal, and signal transmission through the physical elements of the thermal imaging system, which process the signal at the optical, photoelectronic and electronic stages and determine the final parameters of the signal and its compliance with the requirements for thermal information and measurement systems.

  6. Independent transmission of sign language interpreter in DVB: assessment of image compression

    NASA Astrophysics Data System (ADS)

    Zatloukal, Petr; Bernas, Martin; Dvořák, Lukáš

    2015-02-01

    Sign language on television provides deaf viewers with information that they cannot get from the audio content. If the sign language interpreter is transmitted over an independent data stream, the aim is to ensure sufficient intelligibility and subjective image quality of the interpreter with a minimum bit rate. This work deals with ROI-based video compression of a Czech sign language interpreter, implemented in the x264 open source library. The results of this approach are verified in subjective tests with deaf viewers. The tests examine the intelligibility of sign language expressions containing minimal pairs at different levels of compression and various resolutions of the interpreter image, and evaluate the subjective quality of the final image for a good viewing experience.

  7. Reconstruction of color images via Haar wavelet based on digital micromirror device

    NASA Astrophysics Data System (ADS)

    Liu, Xingjiong; He, Weiji; Gu, Guohua

    2015-10-01

    A digital micromirror device (DMD) is introduced to form a Haar wavelet basis that is projected onto the color target image using structured illumination with red, green and blue light. The light intensity signals reflected from the target image are received synchronously by a bucket detector with no spatial resolution, converted into voltage signals and then transferred to a PC [1]. To achieve synchronization, several synchronization steps are added during data acquisition. During data collection, in accordance with the wavelet tree structure, the locations of significant coefficients at the finer scale are predicted by comparing the coefficients sampled at the coarsest scale with a threshold. Monochrome grayscale images are obtained under red, green and blue structured illumination, respectively, using the inverse Haar wavelet transform. A color fusion algorithm is then applied to the three monochrome grayscale images to obtain the final color image. Following this imaging principle, an experimental demonstration device was assembled. The letter "K" and the X-rite Color Checker Passport were projected and reconstructed as target images, and the final reconstructed color images are of good quality. By using Haar wavelet reconstruction, this approach reduces the sampling rate considerably, and it provides color information without compromising the resolution of the final image.

  8. High-contrast multilayer imaging of biological organisms through dark-field digital refocusing.

    PubMed

    Faridian, Ahmad; Pedrini, Giancarlo; Osten, Wolfgang

    2013-08-01

    We have developed an imaging system to extract high contrast images from different layers of biological organisms. Utilizing a digital holographic approach, the system works without scanning through the layers of the specimen. In dark-field illumination, scattered light makes the main contribution to image formation, but with coherent illumination this creates strong speckle noise that reduces the image quality. To remove this restriction, the specimen has been illuminated with various speckle fields and a hologram has been recorded for each speckle field. Each hologram has been analyzed separately and the corresponding intensity image has been reconstructed. The final image has been derived by averaging over the reconstructed images. A correlation approach has been utilized to determine the number of speckle fields required to achieve a desired contrast and image quality. The reconstructed intensity images in different object layers are shown for different sea urchin larvae. Two multimedia files are attached to illustrate the process of digital focusing.

  9. Fast and efficient molecule detection in localization-based super-resolution microscopy by parallel adaptive histogram equalization.

    PubMed

    Li, Yiming; Ishitsuka, Yuji; Hedde, Per Niklas; Nienhaus, G Ulrich

    2013-06-25

    In localization-based super-resolution microscopy, individual fluorescent markers are stochastically photoactivated and subsequently localized within a series of camera frames, yielding a final image with a resolution far beyond the diffraction limit. Yet, before localization can be performed, the subregions within the frames where the individual molecules are present have to be identified, oftentimes in the presence of high background. In this work, we address the importance of reliable molecule identification for the quality of the final reconstructed super-resolution image. We present a fast and robust algorithm (a-livePALM) that vastly improves the molecule detection efficiency while minimizing false assignments that can lead to image artifacts.

  10. Radiopharmaceutical considerations for using Tc-99m MAA in lung transplant patients.

    PubMed

    Ponto, James A

    2010-01-01

    To elucidate radiopharmaceutical considerations for using technetium Tc-99m albumin aggregated (Tc-99m MAA) in lung transplant patients and to establish an appropriate routine dose and preparation procedure. Tertiary care academic hospital during May 2007 to May 2009. Nuclear pharmacist working in nuclear medicine department. Radiopharmaceutical considerations deemed important for the use of Tc-99m MAA in lung transplant patients included radioactivity dose, particulate dose, rate of the radiolabeling reaction (preparation time), and final radiochemical purity. Evaluation of our initial 12-month experience, published literature, and professional practice guidelines provided the basis for establishing an appropriate dose and preparation procedure of Tc-99m MAA for use in lung transplant patients. Radiochemical purity at typical incubation times and image quality in subsequent lung transplant patients imaged during the next 12 months. Based on considerations of radioactivity dose, particulate dose, rate of the radiolabeling reaction (preparation time), and final radiochemical purity, a routine dose consisting of 3 mCi (111 MBq) and 100,000 particles of Tc-99m MAA for planar perfusion lung imaging of adult lung transplant patients was established as reasonable and appropriate. MAA kits were prepared with a more reasonable amount of Tc-99m and yielded high radiochemical purity values in typical incubation times. Images have continued to be of high diagnostic quality. Tc-99m MAA used for lung transplant imaging can be readily prepared with high radiochemical purity to provide a dose of 3 mCi (111 MBq)/100,000 particles, which provides images of high diagnostic quality.

  11. Application of machine learning for the evaluation of turfgrass plots using aerial images

    NASA Astrophysics Data System (ADS)

    Ding, Ke; Raheja, Amar; Bhandari, Subodh; Green, Robert L.

    2016-05-01

    Historically, investigation of turfgrass characteristics has been limited to visual ratings. Although relevant information may result from such evaluations, final inferences may be questionable because of the subjective way in which the data are collected. Recent advances in computer vision techniques allow researchers to objectively measure turfgrass characteristics such as percent ground cover, turf color, and turf quality from digital images. This paper focuses on developing a methodology for automated assessment of turfgrass quality from aerial images. Images of several turfgrass plots of varying quality were gathered using a camera mounted on an unmanned aerial vehicle. The quality of these plots was also evaluated based on visual ratings. The goal was to use the aerial images to generate quality evaluations on a regular basis for the optimization of water treatment. The aerial images are used to train a neural network, a nonlinear classifier commonly used in machine learning, with features such as intensity, color, and texture extracted from the images. The output of the trained neural network model is the rating of the grass, which is compared with the visual ratings. Currently, the quality and the color of the turfgrass, measured as the greenness of the grass, are evaluated. Textures are calculated using the Gabor filter and the co-occurrence matrix. Other classifiers, such as support vector machines, and simpler linear models, such as Ridge regression and LARS regression, are also used, and the performance of each model is compared. The results show encouraging potential for using machine learning techniques for the evaluation of turfgrass quality and color.
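
    The co-occurrence-matrix texture feature mentioned above can be sketched as follows (a minimal numpy version; the quantization depth and pixel offset are illustrative choices, not the paper's settings):

```python
import numpy as np

def glcm_contrast(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix and its contrast feature.

    img: 2D uint8 array; (dx, dy): pixel offset; levels: quantization bins.
    """
    q = img.astype(int) * levels // 256            # quantize to `levels` bins
    h, w = q.shape
    i = q[: h - dy, : w - dx].ravel()              # reference pixels
    j = q[dy:, dx:].ravel()                        # offset neighbours
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (i, j), 1)                     # count co-occurrences
    glcm /= glcm.sum()                             # normalize to probabilities
    a, b = np.indices((levels, levels))
    contrast = ((a - b) ** 2 * glcm).sum()         # classic Haralick contrast
    return glcm, contrast
```

    A uniform plot yields zero contrast, while a highly textured plot concentrates mass off the matrix diagonal; such scalar features can feed the neural network or SVM alongside color statistics.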

  12. Quantitative evaluation of 3D images produced from computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Sheerin, David T.; Mason, Ian R.; Cameron, Colin D.; Payne, Douglas A.; Slinger, Christopher W.

    1999-08-01

    Advances in computing and optical modulation techniques now make it possible to anticipate the generation of near real-time, reconfigurable, high quality, three-dimensional images using holographic methods. Computer generated holography (CGH) is the only technique which holds promise of producing synthetic images having the full range of visual depth cues. These realistic images will be viewable by several users simultaneously, without the need for headtracking or special glasses. Such a data visualization tool will be key to speeding up the manufacture of new commercial and military equipment by negating the need for the production of physical 3D models in the design phase. DERA Malvern has been involved in designing and testing fixed CGH in order to understand the connection between the complexity of the CGH, the algorithms used to design them, the processes employed in their implementation and the quality of the images produced. This poster describes results from CGH containing up to 10^8 pixels. The methods used to evaluate the reconstructed images are discussed and quantitative measures of image fidelity made. An understanding of the effect of the various system parameters upon final image quality enables a study of the possible system trade-offs to be carried out. Such an understanding of CGH production and resulting image quality is key to effective implementation of a reconfigurable CGH system currently under development at DERA.

  13. Chromaticity based smoke removal in endoscopic images

    NASA Astrophysics Data System (ADS)

    Tchaka, Kevin; Pawar, Vijay M.; Stoyanov, Danail

    2017-02-01

    In minimally invasive surgery, image quality is a critical prerequisite to ensure a surgeon's ability to perform a procedure. In endoscopic procedures, image quality can deteriorate for a number of reasons, such as fogging due to the temperature gradient after intra-corporeal insertion, lack of focus, and smoke generated when using electro-cautery to dissect tissues without bleeding. In this paper we investigate the use of vision processing techniques to remove surgical smoke and improve the clarity of the image. We model the image formation process by introducing a haze medium to account for the degradation of visibility. For simplicity and computational efficiency we use an adapted dark-channel prior method combined with histogram equalization to remove smoke artifacts, recover the radiance image, and enhance the contrast and brightness of the final result. Our initial results on images from robotic-assisted procedures are promising and show that the proposed approach may be used to enhance image quality during surgery without additional suction devices. In addition, the processing pipeline may be used as an important part of a robust surgical vision pipeline that can continue working in the presence of smoke.
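
    A minimal sketch of the dark-channel-prior step, in the style of He et al.'s dehazing formulation (the patch size, omega, and transmission floor are illustrative assumptions, and the paper's histogram-equalization stage is omitted):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def desmoke(img, patch=15, omega=0.95, t0=0.1):
    """Simplified dark-channel-prior smoke/haze removal.

    img: float RGB image in [0, 1]. Returns the estimated radiance image.
    """
    # dark channel: per-pixel min over color channels, then a local min filter
    dark = minimum_filter(img.min(axis=2), size=patch)
    # atmospheric light A: mean color of the brightest dark-channel pixels
    flat = img.reshape(-1, 3)
    idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    A = flat[idx].mean(axis=0)
    # transmission estimate from the dark channel of the normalized image
    norm_dark = minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(1.0 - omega * norm_dark, t0, 1.0)
    # invert the haze model I = J*t + A*(1 - t)
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)
```

    In the smoke-free limit the recovered radiance stays close to the input; contrast/brightness enhancement would follow as a separate step.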

  14. No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.

    PubMed

    Liu, Tsung-Jung; Liu, Kuan-Hsien

    2018-03-01

    A no-reference (NR) learning-based approach to assessing image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) that can predict scores. Scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, an ensemble method combines the prediction results from the selected scorers. Two multiple-scale versions of the proposed approach are presented along with the single-scale one; they turn out to perform better than the original single-scale method. Because features are drawn from five different domains at multiple image scales, and the outputs (scores) of the selected score prediction models are used as features for multi-scale or cross-scale fusion (i.e., the ensemble), the proposed NR image quality assessment models are robust to more than 24 image distortion types. They can also be used to evaluate images with authentic distortions. Extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.

  15. Retinex based low-light image enhancement using guided filtering and variational framework

    NASA Astrophysics Data System (ADS)

    Zhang, Shi; Tang, Gui-jin; Liu, Xiao-hua; Luo, Su-huai; Wang, Da-dong

    2018-03-01

    A new image enhancement algorithm based on Retinex theory is proposed to address the poor visual quality of images captured in low-light conditions. First, the image is converted from the RGB color space to the HSV color space to obtain the V channel. Next, two illumination estimates are computed on the V channel, one by guided filtering and one by a variational framework, and combined into a new illumination map using the average gradient. The new reflectance is calculated from the V channel and the new illumination. A new V channel, obtained by multiplying the new illumination and reflectance, is then processed with contrast-limited adaptive histogram equalization (CLAHE). Finally, the image in HSV space is converted back to RGB space to obtain the enhanced image. Experimental results show that the proposed method achieves better subjective and objective quality than existing methods.

  16. Infrared thermal imaging figures of merit

    NASA Technical Reports Server (NTRS)

    Kaplan, Herbert

    1989-01-01

    Commercially available types of infrared thermal imaging instruments, both viewers (qualitative) and imagers (quantitative) are discussed. The various scanning methods by which thermal images (thermograms) are generated will be reviewed. The performance parameters (figures of merit) that define the quality of performance of infrared radiation thermometers will be introduced. A discussion of how these parameters are extended and adapted to define the performance of thermal imaging instruments will be provided. Finally, the significance of each of the key performance parameters of thermal imaging instruments will be reviewed and procedures currently used for testing to verify performance will be outlined.

  17. How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995, digital still cameras in expensive SLR formats had 6 mega-pixels and produced high-quality images (with significant image processing). In 2005, significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010, film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” in which the driving feature of the cameras was the pixel count: even moderate-cost (~120) DSCs would have 14 mega-pixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only single-lens-reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper explores why larger pixels and sensors are key to the future of DSCs.

  18. Segmentation quality evaluation using region-based precision and recall measures for remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Xueliang; Feng, Xuezhi; Xiao, Pengfeng; He, Guangjun; Zhu, Liujun

    2015-04-01

    Segmentation of remote sensing images is a critical step in geographic object-based image analysis. Evaluating the performance of segmentation algorithms is essential to identify effective segmentation methods and optimize their parameters. In this study, we propose region-based precision and recall measures and use them to compare two image partitions for the purpose of evaluating segmentation quality. The two measures are calculated based on region overlapping and presented as a point or a curve in a precision-recall space, which can indicate segmentation quality in both geometric and arithmetic respects. Furthermore, the precision and recall measures are combined by using four different methods. We examine and compare the effectiveness of the combined indicators through geometric illustration, in an effort to reveal segmentation quality clearly and capture the trade-off between the two measures. In the experiments, we adopted the multiresolution segmentation (MRS) method for evaluation. The proposed measures are compared with four existing discrepancy measures to further confirm their capabilities. Finally, we suggest using a combination of the region-based precision-recall curve and the F-measure for supervised segmentation evaluation.
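
    The region-overlap computation can be sketched as below; the paper's exact precision/recall definitions are not reproduced here, so this area-overlap variant and its F-measure combination are assumptions:

```python
import numpy as np

def region_pr(seg, ref):
    """Area-overlap precision/recall between two label images.

    One assumption-laden variant: precision credits each segmented region
    with the area of its single best-matching reference region; recall
    does the same with the roles of seg and ref swapped.
    seg, ref: 2D int arrays of region labels starting at 0.
    """
    ns, nr = seg.max() + 1, ref.max() + 1
    # overlap[i, j] = number of pixels with seg label i and ref label j
    overlap = np.zeros((ns, nr))
    np.add.at(overlap, (seg.ravel(), ref.ravel()), 1)
    precision = overlap.max(axis=1).sum() / overlap.sum()
    recall = overlap.max(axis=0).sum() / overlap.sum()
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f
```

    Over-segmentation leaves precision high but drops recall, and vice versa for under-segmentation, which is exactly the trade-off a precision-recall curve over segmentation scales makes visible.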

  19. [Wireless digital radiography detectors in the emergency area: an efficacious solution].

    PubMed

    Garrido Blázquez, M; Agulla Otero, M; Rodríguez Recio, F J; Torres Cabrera, R; Hernando González, I

    2013-01-01

    To evaluate the implementation of a flat-panel digital radiography (DR) system with WiFi technology in an emergency radiology area in which a computed radiography (CR) system was previously used. We analyzed aspects related to image quality, radiation dose, workflow, and ergonomics. We compared the image quality obtained with the CR and WiFi DR systems, using both phantom images and radiologists' evaluations of radiological images obtained in real patients. We also analyzed the time required for image acquisition and the workflow with the two technological systems. Finally, we analyzed the data related to the radiation dose in patients before and after the implementation of the new equipment. Image quality improved both in the tests carried out with a phantom and in radiological images obtained in patients, for which it increased from 3 to 4.5 on a 5-point scale. The average time required for image acquisition decreased by 25 seconds per image. The flat panel required less radiation in practically all the techniques carried out using automatic dosimetry, although statistically significant differences were found in only some of the techniques (chest, thoracic spine, and lumbar spine). Implementing the WiFi DR system has brought benefits: image quality has improved and the radiation dose to patients has decreased. The new system also has advantages in terms of functionality, ergonomics, and performance. Copyright © 2011 SERAM. Published by Elsevier Espana. All rights reserved.

  20. Prestack depth migration for complex 2D structure using phase-screen propagators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, P.; Huang, Lian-Jie; Burch, C.

    1997-11-01

    We present results for the phase-screen propagator method applied to prestack depth migration of the Marmousi synthetic data set. The data were migrated as individual common-shot records, and the resulting partial images were superposed to obtain the final complete image. Tests were performed to determine the minimum number of frequency components required to achieve the best quality image, and this in turn provided estimates of the minimum computing time. Running on a single-processor SUN SPARC Ultra I, high quality images were obtained in as little as 8.7 CPU hours and adequate images in as little as 4.4 CPU hours. Different methods were tested for choosing the reference velocity used for the background phase-shift operation and for defining the slowness perturbation screens. Although the depths of some of the steeply dipping, high-contrast features were shifted slightly, the overall image quality was fairly insensitive to the choice of the reference velocity. Our tests show the phase-screen method to be a reliable and fast algorithm for imaging complex geologic structures, at least for complex 2D synthetic data where the velocity model is known.
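
    One split-step of a phase-screen propagator alternates a background phase shift in the wavenumber domain with a thin "screen" correcting for the slowness perturbation. A sketch with one lateral dimension (illustrative; sign and normalization conventions vary between implementations):

```python
import numpy as np

def phase_screen_step(field, dx, dz, freq, v0, v):
    """One split-step of a phase-screen (split-step Fourier) propagator.

    field: complex wavefield sampled every dx meters; v0: reference
    velocity; v: lateral velocity profile (same length as field).
    """
    w = 2 * np.pi * freq                      # angular frequency
    k0 = w / v0                               # reference wavenumber
    kx = 2 * np.pi * np.fft.fftfreq(field.size, d=dx)
    # vertical wavenumber; emath.sqrt goes complex for evanescent waves,
    # so those components are damped rather than wrapped
    kz = np.emath.sqrt(k0**2 - kx**2)
    shifted = np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * dz))
    # thin phase screen for the slowness perturbation 1/v - 1/v0
    screen = np.exp(1j * w * dz * (1.0 / v - 1.0 / v0))
    return shifted * screen
```

    In a homogeneous medium (v equal to v0 everywhere) the screen is unity and a plane wave only picks up the background phase, which makes a convenient sanity check.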

  1. TH-C-18A-06: Combined CT Image Quality and Radiation Dose Monitoring Program Based On Patient Data to Assess Consistency of Clinical Imaging Across Scanner Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christianson, O; Winslow, J; Samei, E

    2014-06-15

    Purpose: One of the principal challenges of clinical imaging is to achieve an ideal balance between image quality and radiation dose across multiple CT models. The number of scanners and protocols at large medical centers necessitates an automated quality assurance program to facilitate this objective. Therefore, the goal of this work was to implement an automated CT image quality and radiation dose monitoring program based on actual patient data and to use this program to assess consistency of protocols across CT scanner models. Methods: Patient CT scans are routed to a HIPAA-compliant quality assurance server. CTDI, extracted using optical character recognition, and patient size, measured from the localizers, are used to calculate SSDE. A previously validated noise measurement algorithm determines the noise in uniform areas of the image across the scanned anatomy to generate a global noise level (GNL). Using this program, 2358 abdominopelvic scans acquired on three commercial CT scanners were analyzed. Median SSDE and GNL were compared across scanner models, and trends in SSDE and GNL with patient size were used to determine the impact of differing automatic exposure control (AEC) algorithms. Results: There was a significant difference in both SSDE and GNL across scanner models (9–33% and 15–35% for SSDE and GNL, respectively). Adjusting all protocols to achieve the same image noise would reduce patient dose by 27–45% depending on scanner model. Additionally, differences in AEC methodologies across vendors resulted in disparate relationships of SSDE and GNL with patient size. Conclusion: The difference in noise across scanner models indicates that protocols are not optimally matched to achieve consistent image quality. Our results indicate substantial potential for dose reduction while achieving more consistent image appearance. Finally, the difference in AEC methodologies suggests the need for size-specific CT protocols to minimize variability in image quality across CT vendors.
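
    For reference, SSDE is obtained from CTDIvol via a size-dependent conversion factor; a sketch using the exponential fit associated with AAPM Report 204 for the 32 cm reference phantom (coefficient values as commonly quoted; verify against the report before any clinical use):

```python
import math

def ssde(ctdi_vol_mgy, effective_diameter_cm):
    """Size-specific dose estimate from CTDIvol (32 cm phantom reference).

    Conversion factor f = a * exp(-b * d), with a, b from the AAPM
    Report 204 fit as commonly quoted (assumption; verify before use).
    """
    a, b = 3.704369, 0.03671937
    f = a * math.exp(-b * effective_diameter_cm)
    return ctdi_vol_mgy * f
```

    The factor exceeds 1 for small patients and falls below 1 for large ones, which is why comparing raw CTDIvol across patient sizes misstates dose.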

  2. 3D printing X-Ray Quality Control Phantoms. A Low Contrast Paradigm

    NASA Astrophysics Data System (ADS)

    Kapetanakis, I.; Fountos, G.; Michail, C.; Valais, I.; Kalyvas, N.

    2017-11-01

    Current 3D printing technology products may be usable in various biomedical applications. One such application is the creation of X-ray quality control phantoms. In this work a self-assembled 3D printer (Geeetech i3) was used to build a simple low-contrast phantom. The printing material was polylactic acid (PLA) at 100% printing density. The low-contrast scheme was achieved by creating air holes with different diameters and thicknesses, ranging from 1 mm to 9 mm. The phantom was irradiated on a Philips Diagnost 93 fluoroscopic installation at 40-70 kV in semi-automatic mode. The images were recorded with an Agfa CR30-X CR system and assessed with ImageJ software. The best contrast value observed was approximately 33%. In the low-contrast detectability check, the 1 mm diameter hole was always visible for thicknesses greater than or equal to 4 mm. A reason for not being able to distinguish the 1 mm hole at smaller thicknesses might be the presence of printing patterns in the final image, which increased the structure noise. In conclusion, the construction of a contrast-resolution phantom with a 3D printer is feasible. The quality of the final product depends upon the printer accuracy and the material characteristics.

  3. Study on polarization image methods in turbid medium

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong

    2014-11-01

    In addition to the traditional intensity information, polarization imaging detection technology provides multi-dimensional polarization information, improving the probability of target detection and recognition. Image fusion applied to polarization images of targets in turbid media helps to obtain high-quality images. Using laser illumination at visible wavelengths, the corresponding linearly polarized intensity images were acquired by rotating the angle of a polarizer, and the polarization parameters of targets in turbid media with concentrations ranging from 5% to 10% were obtained. Image fusion processing was then introduced: the main work was to process the acquired polarization images with different polarization-image fusion methods, to identify several fusion methods with superior performance for turbid media, and to give the processing results and data tables. Pixel-level, feature-level, and decision-level fusion algorithms were applied to the DOLP (degree of linear polarization) images. The results show that as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, while the fused images show clearly improved contrast compared with a single image. Finally, the reasons for the contrast improvement are analyzed.
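
    The DOLP images that feed the fusion step follow from the standard Stokes-parameter formulas for intensity measurements at four polarizer angles:

```python
import numpy as np

def dolp(i0, i45, i90, i135):
    """Degree of linear polarization from intensity images taken at
    polarizer angles of 0, 45, 90, and 135 degrees."""
    s0 = (i0 + i45 + i90 + i135) / 2.0   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical
    s2 = i45 - i135                      # +45 vs -45 degrees
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
```

    Fully linearly polarized light gives DOLP = 1 and unpolarized light gives 0; scattering in the turbid medium depolarizes the light, which is what the fusion methods try to exploit.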

  4. Fast subsurface fingerprint imaging with full-field optical coherence tomography system equipped with a silicon camera

    NASA Astrophysics Data System (ADS)

    Auksorius, Egidijus; Boccara, A. Claude

    2017-09-01

    Images recorded below the surface of a finger can have more detail and be of higher quality than conventional surface fingerprint images. This is particularly true when the quality of the surface fingerprints is compromised by, for example, moisture or surface damage. However, there is an unmet need for an inexpensive fingerprint sensor able to acquire high-quality images deep below the surface in a short time. To this end, we report on a cost-effective full-field optical coherence tomography system comprising a silicon camera and a powerful near-infrared LED light source. The system is able, for example, to record 1.7 cm×1.7 cm en face images in 0.12 s with a spatial sampling rate of 2116 dots per inch and a sensitivity of 93 dB. We show that the system can be used to image internal fingerprints and sweat ducts with good contrast. Finally, to demonstrate its biometric performance, we acquired subsurface fingerprint images from 240 individual fingers and estimated the equal error rate to be ˜0.8%. Because of its high sensitivity, the developed instrument could also be used in other en face deep-tissue imaging applications, such as in vivo skin imaging.

  5. Gravity packaging final waste recovery based on gravity separation and chemical imaging control.

    PubMed

    Bonifazi, Giuseppe; Serranti, Silvia; Potenza, Fabio; Luciani, Valentina; Di Maio, Francesco

    2017-02-01

    Plastic polymers are characterized by a high calorific value. Post-consumer plastic waste can thus be considered, in many cases, a typical secondary solid fuel according to the European Commission directive on End of Waste (EoW). In Europe, incineration is considered one of the solutions for waste disposal, for energy recovery and, as a consequence, for reducing the waste sent to landfill. A full characterization of these products is the first step to utilizing them profitably and correctly. Several techniques were investigated in this paper to separate and characterize post-consumer plastic packaging waste toward these goals: gravity separation (i.e., a Reflux Classifier), FT-IR spectroscopy, NIR HyperSpectral Imaging (HSI) based techniques, and calorimetric tests. The study demonstrated that the proposed separation technique and the HyperSpectral NIR Imaging approach make it possible to separate and recognize the different polymers (i.e., PolyVinyl Chloride (PVC), PolyStyrene (PS), PolyEthylene (PE), PolyEthylene Terephthalate (PET), and PolyPropylene (PP)) in order to maximize the removal of the PVC fraction from plastic waste and to perform full quality control of the resulting products. These techniques can be profitably utilized to set up analytical and control strategies aimed at obtaining a low PVC content in the final Solid Recovered Fuel (SRF), thus enhancing SRF quality, increasing its value and reducing the "final waste". Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. DSM Generation from ALOS/PRISM Images Using SAT-PP

    NASA Astrophysics Data System (ADS)

    Wolff, Kirsten; Gruen, Armin

    2008-11-01

    Accurate DSMs are among the most important products of ALOS/PRISM image data. To exploit the full resolution of PRISM for DSM generation, a highly developed image matcher is needed. As a member of the validation and calibration team for PRISM, we published earlier results of DSM generation using PRISM image triplets in combination with our software package SAT-PP. The overall accuracy across all object and image features for all tests lies between 1 and 5 pixels in matching, depending primarily on surface roughness, vegetation, image texture, and image quality. Here we discuss some new results, focusing on four topics: the use of two different evaluation methods, the difference between a 5 m and a 10 m GSD for the final PRISM DSM, the influence of the level of initial information, and a comparison of the quality of different combinations of the three views (forward, nadir, and backward). All tests were conducted with our test field Bern/Thun, Switzerland.

  7. WE-G-209-03: PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kemp, B.

    2016-06-15

    Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.

  8. WE-G-209-02: CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kofler, J.

    2016-06-15

    Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.

  9. WE-G-209-04: MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pooley, R.

    2016-06-15

    Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.

  10. Nondestructive Detection of the Internal Quality of Apples Using X-Ray and Machine Vision

    NASA Astrophysics Data System (ADS)

    Yang, Fuzeng; Yang, Liangliang; Yang, Qing; Kang, Likui

    The internal quality of apples cannot be inspected visually during sorting, which allows defective apples to reach the market and lowers overall quality. This paper describes an instrument using X-rays and machine vision. The following steps were used to process the X-ray image in order to identify mould-core apples. First, a lifting wavelet transform was used to obtain one low-frequency image and three high-frequency images. Second, the low-frequency image was enhanced through histogram equalization. Then, the edges in each apple's image were detected using the Canny operator. Finally, a threshold was set to distinguish mould-core from normal apples according to the diameter of the apple core. The experimental results show that this method can detect mould-core apples on-line in less than 0.03 seconds per apple, with an accuracy of 92%.
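
    The enhancement step in this pipeline is plain global histogram equalization, which can be sketched in a few lines of numpy (illustrative; the paper applies it to the low-frequency wavelet image):

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[np.nonzero(hist)[0][0]]        # first occupied gray level
    # map each gray level through the normalized cumulative histogram
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-12) * 255)
    return lut.clip(0, 255).astype(np.uint8)[img]
```

    The equalized image then feeds the Canny edge detector, and the measured core diameter is compared against the threshold to flag mould-core apples.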

  11. Research on Wide-field Imaging Technologies for Low-frequency Radio Array

    NASA Astrophysics Data System (ADS)

    Lao, B. Q.; An, T.; Chen, X.; Wu, X. C.; Lu, Y.

    2017-09-01

    Wide-field imaging with low-frequency radio telescopes is subject to a number of difficult problems. A particularly pernicious one is the non-coplanar baseline effect: ignoring the phase term in the w direction (the w-term) distorts the final image, and the degradation grows with the field of view. This paper summarizes and analyzes several w-term correction methods and their technical principles, comparing their advantages and disadvantages in terms of computational cost and complexity. We simulate two of these methods, faceting and w-projection, based on the configuration of the first-phase Square Kilometre Array (SKA) low-frequency array, and compare the resulting images with the two-dimensional Fourier transform method. The results show that in wide-field imaging both faceting and w-projection yield better image quality and correctness than the two-dimensional Fourier transform method. We evaluate how image quality and run time depend on the number of facets and the number of w steps, and find that both must be chosen reasonably. Finally, we analyze the effect of data size on the run time of faceting and w-projection; both need to be optimized before processing massive amounts of data. This work initiates an analysis of wide-field imaging techniques for existing and future low-frequency arrays and supports their application in broader fields.

  12. Image enhancement using the hypothesis selection filter: theory and application to JPEG decoding.

    PubMed

    Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya

    2013-03-01

    We introduce the hypothesis selection filter (HSF) as a new approach for image quality enhancement. We assume that a set of filters has been selected a priori to improve the quality of a distorted image containing regions with different characteristics. At each pixel, HSF uses a locally computed feature vector to predict the relative performance of the filters in estimating the corresponding pixel intensity in the original undistorted image. The prediction result then determines the proportion of each filter used to obtain the final processed output. In this way, the HSF serves as a framework for combining the outputs of a number of different user selected filters, each best suited for a different region of an image. We formulate our scheme in a probabilistic framework where the HSF output is obtained as the Bayesian minimum mean square error estimate of the original image. Maximum likelihood estimates of the model parameters are determined from an offline fully unsupervised training procedure that is derived from the expectation-maximization algorithm. To illustrate how to apply the HSF and to demonstrate its potential, we apply our scheme as a post-processing step to improve the decoding quality of JPEG-encoded document images. The scheme consistently improves the quality of the decoded image over a variety of image content with different characteristics. We show that our scheme results in quantitative improvements over several other state-of-the-art JPEG decoding methods.
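
The core idea, combining several pre-selected filters with per-pixel weights that sum to one, can be sketched as follows. The two filters and the gradient-based weighting below are illustrative stand-ins, not the learned Bayesian posterior weights of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((32, 32))

# Two hypothetical pre-selected filters: a box blur and the identity.
def box_blur(x):
    h, w = x.shape
    p = np.pad(x, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

outputs = np.stack([box_blur(img), img])       # (2, H, W) candidate estimates

# Stand-in for the per-pixel filter-selection rule: weight the smoother more
# in flat regions (low local gradient), the identity more near edges.
gy, gx = np.gradient(img)
edge = np.hypot(gx, gy)
w_sharp = edge / (edge.max() + 1e-12)
weights = np.stack([1.0 - w_sharp, w_sharp])   # convex weights, sum to 1 per pixel

fused = (weights * outputs).sum(axis=0)        # per-pixel mixture of filter outputs
print(fused.shape)
```

The paper replaces the hand-made weights with a learned prediction of each filter's relative performance, but the fusion step has this convex-combination form.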

  13. Cameras and settings for optimal image capture from UAVs

    NASA Astrophysics Data System (ADS)

    Smith, Mike; O'Connor, James; James, Mike R.

    2017-04-01

    Aerial image capture has become very common within the geosciences due to the increasing affordability of low payload (<20 kg) Unmanned Aerial Vehicles (UAVs) for consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured from consumer grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion which can lead to experiments being difficult to accurately reproduce. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well exposed and suitable imagery are derived. This then leads to discussion of how to optimise the platform, camera, lens and imaging settings relevant to image quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and have these comparable with future studies. We recommend providing open access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.

  14. Low-rate image coding using vector quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makur, A.

    1990-01-01

    This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
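
The baseline the thesis improves upon, vector quantization of spatial image blocks with an unweighted squared-error distortion, can be sketched in a few lines (random codebook and image are placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)
codebook = rng.random((16, 16))      # 16 codewords for 4x4 blocks (16-dim vectors)
img = rng.random((32, 32))

# Split the image into non-overlapping 4x4 blocks -> one 16-dim vector per block.
blocks = img.reshape(8, 4, 8, 4).swapaxes(1, 2).reshape(-1, 16)

# Encode: nearest codeword under squared-error distortion (the basic, unweighted case).
d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
indices = d.argmin(axis=1)           # log2(16) = 4 bits per 16-pixel block
                                     # -> 0.25 bit per pixel

# Decode: table lookup, then reassemble the blocks.
decoded = codebook[indices].reshape(8, 8, 4, 4).swapaxes(1, 2).reshape(32, 32)
print(indices.shape, decoded.shape)
```

The exhaustive distance computation in the encode step is exactly the cost the thesis attacks; its modifications (distributed blocks, weighted distortion, post-filtering, neural-network search) change how `blocks`, `d`, and the decoded output are formed.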

  15. Distributed ISAR Subimage Fusion of Nonuniform Rotating Target Based on Matching Fourier Transform.

    PubMed

    Li, Yuanyuan; Fu, Yaowen; Zhang, Wenpeng

    2018-06-04

    In real applications, the image quality of conventional monostatic Inverse Synthetic Aperture Radar (ISAR) for a maneuvering target is subject to strong fluctuation of the Radar Cross Section (RCS), as the target aspect varies enormously. Meanwhile, the maneuvering target introduces nonuniform rotation after translational motion compensation, which degrades the cross-range imaging performance of the conventional Fourier Transform (FT)-based method. In this paper, a method which combines the distributed ISAR technique and the Matching Fourier Transform (MFT) is proposed to overcome these problems. Firstly, according to the characteristics of distributed ISAR, multiple channel echoes of the nonuniformly rotating target are acquired from different observation angles. Then, by applying the MFT to the echo of each channel, the defocusing that is inevitable with FT-based imaging of a nonuniformly rotating target is avoided. Finally, after preprocessing, scaling and rotation of all subimages, a noncoherent fusion image containing the RCS information from all channels is obtained, with the accumulation coefficients of the subimages calculated adaptively according to their image quality. Simulation and experimental data are used to validate the effectiveness of the proposed approach, and a fusion image with improved recognizability is obtained. Therefore, by using the distributed ISAR technique and the MFT, subimages of a highly maneuvering target can be obtained from different observation angles. Meanwhile, by employing the adaptive subimage fusion method, the RCS fluctuation can be alleviated and a more recognizable final image can be obtained.
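
The final step, a noncoherent fusion where each channel's subimage is weighted by its quality, might be sketched like this. Image contrast is used here as an assumed quality measure, and the registered subimages are synthetic; the paper's actual quality metric and preprocessing are not specified in the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical registered subimage magnitudes from 3 distributed channels.
subimages = [np.abs(rng.normal(size=(64, 64)) + s * rng.random((64, 64)))
             for s in (0.5, 1.0, 2.0)]

def contrast(img):
    """Image contrast, a common (here assumed) quality measure for ISAR images."""
    m = img.mean()
    return np.sqrt(((img - m) ** 2).mean()) / m

q = np.array([contrast(s) for s in subimages])
w = q / q.sum()                      # adaptive accumulation coefficients

fused = sum(wi * s for wi, s in zip(w, subimages))   # noncoherent weighted sum
print(w.round(3), fused.shape)
```

Channels with stronger RCS returns (higher contrast) contribute more, which is how the weighting alleviates RCS fluctuation across aspect angles.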

  16. Applications of hyperspectral imaging in chicken meat safety and quality detection and evaluation: a review.

    PubMed

    Xiong, Zhenjie; Xie, Anguo; Sun, Da-Wen; Zeng, Xin-An; Liu, Dan

    2015-01-01

    Currently, food safety and quality are great public concerns. In order to satisfy the demands of consumers and obtain superior food quality, non-destructive and fast methods are required for quality evaluation. As one of these methods, the hyperspectral imaging (HSI) technique has emerged as a smart and promising analytical tool for quality evaluation and has attracted much interest for non-destructive analysis of different food products. With the main advantage of combining spectroscopy and imaging, HSI is well suited to objective detection and evaluation of chicken meat quality. Moreover, developing a quality evaluation system based on HSI technology would bring economic benefits to the chicken meat industry. Therefore, in recent years, many studies have been conducted on using HSI technology for the safety and quality detection and evaluation of chicken meat. The aim of this review is thus to give a detailed overview of HSI and to focus on recently developed methods in HSI technology for microbiological spoilage detection and quality classification of chicken meat. Moreover, the usefulness of the HSI technique for detecting fecal contamination and bone fragments in chicken carcasses is presented. Finally, some viewpoints on future research and applicability in the modern poultry industry are proposed.

  17. Enhance the Quality of Crowdsensing for Fine-Grained Urban Environment Monitoring via Data Correlation

    PubMed Central

    Kang, Xu; Liu, Liang; Ma, Huadong

    2017-01-01

    Monitoring the status of urban environments, which provides fundamental information for a city, yields crucial insights into various fields of urban research. Recently, with the popularity of smartphones and vehicles equipped with onboard sensors, a people-centric scheme, namely “crowdsensing”, for city-scale environment monitoring is emerging. This paper proposes a data-correlation-based crowdsensing approach for fine-grained urban environment monitoring. To demonstrate urban status, we generate sensing images via a crowdsensing network, and then enhance the quality of the sensing images via data correlation. Specifically, to achieve higher-quality sensing images, we not only utilize the temporal correlation of mobile sensing nodes but also fuse the sensory data with correlated environment data by introducing a collective tensor decomposition approach. Finally, we conduct a series of numerical simulations and a case study based on a real dataset. The results validate that our approach outperforms the traditional spatial interpolation-based method. PMID:28054968

  18. Memory preservation made prestigious but easy

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Debus, Christina; Sandhaus, Philipp

    2011-01-01

    Preserving memories combined with story-telling, using either photo books for multiple images or high-quality products such as one or a few images printed on canvas or mounted on acrylic to create high-quality wall decorations, is gradually becoming more popular than classical 4*6 prints and classical silver halide posters. Digital printing via electrophotography and ink jet is increasingly replacing classical silver halide technology as the dominant production technology for these kinds of products. Maintaining a consistent and comparable quality of output is more challenging than with silver halide paper, for both prints and posters. This paper describes a unique approach that combines desktop software to initiate a compelling project with online capabilities to finalize and optimize that project in a community process. A comparison of consumer behavior between online and desktop-based solutions for generating photo books is also presented.

  19. 31 CFR 240.6 - Provisional credit; first examination; declination; final payment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... alteration without examining the original check or a better quality image of the check and Treasury is on... after the check is presented to a Federal Reserve Processing Center for payment, Treasury will be deemed...

  20. 31 CFR 240.6 - Provisional credit; first examination; declination; final payment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... alteration without examining the original check or a better quality image of the check and Treasury is on... after the check is presented to a Federal Reserve Processing Center for payment, Treasury will be deemed...

  1. 31 CFR 240.6 - Provisional credit; first examination; declination; final payment.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... alteration without examining the original check or a better quality image of the check and Treasury is on... after the check is presented to a Federal Reserve Processing Center for payment, Treasury will be deemed...

  2. 31 CFR 240.6 - Provisional credit; first examination; declination; final payment.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... alteration without examining the original check or a better quality image of the check and Treasury is on... after the check is presented to a Federal Reserve Processing Center for payment, Treasury will be deemed...

  3. 31 CFR 240.6 - Provisional credit; first examination; declination; final payment.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... alteration without examining the original check or a better quality image of the check and Treasury is on... after the check is presented to a Federal Reserve Processing Center for payment, Treasury will be deemed...

  4. Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens.

    PubMed

    Bueno, Juan M; Skorsetz, Martin; Bonora, Stefano; Artal, Pablo

    2018-05-28

    A multi-actuator adaptive lens (AL) was incorporated into a multi-photon (MP) microscope to improve the quality of images of thick samples. Through a hill-climbing procedure the AL corrected for the specimen-induced aberrations enhancing MP images. The final images hardly differed when two different metrics were used, although the sets of Zernike coefficients were not identical. The optimized MP images acquired with the AL were also compared with those obtained with a liquid-crystal-on-silicon spatial light modulator. Results have shown that both devices lead to similar images, which corroborates the usefulness of this AL for MP imaging.
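
A sensorless hill-climbing search of the kind used to set aberration-correction coefficients can be sketched generically. The quadratic `image_metric` below is a toy stand-in for a real image-sharpness metric evaluated on acquired MP images, and `target` plays the role of the unknown best coefficient set:

```python
import numpy as np

rng = np.random.default_rng(4)
target = rng.normal(size=8)          # unknown "best" coefficients (toy stand-in)

def image_metric(coeffs):
    # Stand-in for a sharpness metric of the acquired image; peaks at `target`.
    return -np.sum((coeffs - target) ** 2)

coeffs = np.zeros(8)                 # start from a flat correction
step = 0.5
best = image_metric(coeffs)
while step > 1e-3:
    improved = False
    for i in range(coeffs.size):     # perturb one coefficient at a time
        for delta in (+step, -step):
            trial = coeffs.copy()
            trial[i] += delta
            m = image_metric(trial)
            if m > best:             # keep the change only if the metric improves
                coeffs, best, improved = trial, m, True
    if not improved:
        step /= 2                    # refine the search once no move helps
print(best)
```

In a real microscope each `image_metric` call costs one image acquisition, which is why the paper's observation that two different metrics give nearly the same final image matters in practice.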

  5. A statistically harmonized alignment-classification in image space enables accurate and robust alignment of noisy images in single particle analysis.

    PubMed

    Kawata, Masaaki; Sato, Chikara

    2007-06-01

    In determining the three-dimensional (3D) structure of macromolecular assemblies in single particle analysis, a large representative dataset of two-dimensional (2D) average images from a huge number of raw images is key for high resolution. Because alignments prior to averaging are computationally intensive, currently available multireference alignment (MRA) software does not survey every possible alignment. This leads to misaligned images, creating blurred averages and reducing the quality of the final 3D reconstruction. We present a new method, in which multireference alignment is harmonized with classification (multireference multiple alignment: MRMA). This method enables a statistical comparison of multiple alignment peaks, reflecting the similarities between each raw image and a set of reference images. Among the selected alignment candidates for each raw image, misaligned images are statistically excluded, based on the principle that aligned raw images of similar projections have a dense distribution around the correctly aligned coordinates in image space. The newly developed method was examined for accuracy and speed using model image sets with various signal-to-noise ratios, and with electron microscope images of the Transient Receptor Potential C3 channel and the sodium channel. In every data set, the newly developed method outperformed conventional methods in robustness against noise and in speed, creating 2D average images of higher quality. This statistically harmonized alignment-classification combination should greatly improve the quality of single particle analysis.

  6. An exposure indicator for digital radiography: AAPM Task Group 116 (executive summary).

    PubMed

    Shepard, S Jeff; Wang, Jihong; Flynn, Michael; Gingold, Eric; Goldman, Lee; Krugh, Kerry; Leong, David L; Mah, Eugene; Ogden, Kent; Peck, Donald; Samei, Ehsan; Wang, Jihong; Willis, Charles E

    2009-07-01

    Digital radiographic imaging systems, such as those using photostimulable storage phosphor, amorphous selenium, amorphous silicon, CCD, and MOSFET technology, can produce adequate image quality over a much broader range of exposure levels than that of screen/film imaging systems. In screen/film imaging, the final image brightness and contrast are indicative of over- and underexposure. In digital imaging, brightness and contrast are often determined entirely by digital postprocessing of the acquired image data. Overexposure and underexposure are not readily recognizable. As a result, patient dose has a tendency to gradually increase over time after a department converts from screen/film-based imaging to digital radiographic imaging. The purpose of this report is to recommend a standard indicator which reflects the radiation exposure that is incident on a detector after every exposure event and that reflects the noise levels present in the image data. The intent is to facilitate the production of consistent, high quality digital radiographic images at acceptable patient doses. This should be based not on image optical density or brightness but on feedback regarding the detector exposure provided and actively monitored by the imaging system. A standard beam calibration condition is recommended that is based on RQA5 but uses filtration materials that are commonly available and simple to use. Recommendations on clinical implementation of the indices to control image quality and patient dose are derived from historical tolerance limits and presented as guidelines.

  7. An exposure indicator for digital radiography: AAPM Task Group 116 (Executive Summary)

    PubMed Central

    Shepard, S. Jeff; Wang, Jihong; Flynn, Michael; Gingold, Eric; Goldman, Lee; Krugh, Kerry; Leong, David L.; Mah, Eugene; Ogden, Kent; Peck, Donald; Samei, Ehsan; Wang, Jihong; Willis, Charles E.

    2009-01-01

    Digital radiographic imaging systems, such as those using photostimulable storage phosphor, amorphous selenium, amorphous silicon, CCD, and MOSFET technology, can produce adequate image quality over a much broader range of exposure levels than that of screen/film imaging systems. In screen/film imaging, the final image brightness and contrast are indicative of over- and underexposure. In digital imaging, brightness and contrast are often determined entirely by digital postprocessing of the acquired image data. Overexposure and underexposure are not readily recognizable. As a result, patient dose has a tendency to gradually increase over time after a department converts from screen/film-based imaging to digital radiographic imaging. The purpose of this report is to recommend a standard indicator which reflects the radiation exposure that is incident on a detector after every exposure event and that reflects the noise levels present in the image data. The intent is to facilitate the production of consistent, high quality digital radiographic images at acceptable patient doses. This should be based not on image optical density or brightness but on feedback regarding the detector exposure provided and actively monitored by the imaging system. A standard beam calibration condition is recommended that is based on RQA5 but uses filtration materials that are commonly available and simple to use. Recommendations on clinical implementation of the indices to control image quality and patient dose are derived from historical tolerance limits and presented as guidelines. PMID:19673189

  8. Initial products of Akatsuki 1-μm camera

    NASA Astrophysics Data System (ADS)

    Iwagami, Naomoto; Sakanoi, Takeshi; Hashimoto, George L.; Sawai, Kenta; Ohtsuki, Shoko; Takagi, Seiko; Uemizu, Kazunori; Ueno, Munetaka; Kameda, Shingo; Murakami, Shin-ya; Nakamura, Masato; Ishii, Nobuaki; Abe, Takumi; Satoh, Takehiko; Imamura, Takeshi; Hirose, Chikako; Suzuki, Makoto; Hirata, Naru; Yamazaki, Atsushi; Sato, Takao M.; Yamada, Manabu; Yamamoto, Yukio; Fukuhara, Tetsuya; Ogohara, Kazunori; Ando, Hiroki; Sugiyama, Ko-ichiro; Kashimura, Hiroki; Kouyama, Toru

    2018-01-01

    The status and initial products of the 1-μm camera onboard the Akatsuki mission to Venus are presented. After the successful second attempt at Venus orbit insertion in Dec. 2015 (5 years after the failure in Dec. 2010), and after a long cruise under intense radiation, damage to the detector appears small and fortunately insignificant for the final quality of the images. More than 600 dayside images have been obtained since the beginning of regular operation in Apr. 2016, although nightside images are less numerous (about 150 in total at three wavelengths) due to light scattered from the bright dayside. However, data acquisition stopped after December 07, 2016, due to a malfunction of the electronics and has not resumed since then. The 0.90-µm dayside images are of sufficient quality for the cloud-tracking procedure to retrieve the wind field in the cloud region. The results appear similar to those reported by previous 1-μm imaging by Galileo and Venus Express. The representative altitude sampled by such dayside images is estimated to be 51-55 km. Also, the quality of the nightside 1.01-µm images is sufficient for a search for active volcanism, since interference due to cloud inhomogeneity appears to be insignificant. The quality of the 0.97-µm images may be insufficient to achieve the expected spatial resolution for the near-surface H2O mixing ratio retrievals.

  9. Figure of merit for macrouniformity based on image quality ruler evaluation and machine learning framework

    NASA Astrophysics Data System (ADS)

    Wang, Weibao; Overall, Gary; Riggs, Travis; Silveston-Keith, Rebecca; Whitney, Julie; Chiu, George; Allebach, Jan P.

    2013-01-01

    Assessment of macro-uniformity is a capability that is important for the development and manufacture of printer products. Our goal is to develop a metric that will predict macro-uniformity, as judged by human subjects, by scanning and analyzing printed pages. We consider two different machine learning frameworks for the metric: linear regression and the support vector machine. We have implemented the image quality ruler, based on the recommendations of the INCITS W1.1 macro-uniformity team. Using 12 subjects at Purdue University and 20 subjects at Lexmark, evenly balanced with respect to gender, we conducted subjective evaluations with a set of 35 uniform b/w prints from seven different printers with five levels of tint coverage. Our results suggest that the image quality ruler method provides a reliable means to assess macro-uniformity. We then defined and implemented separate features to measure graininess, mottle, large area variation, jitter, and large-scale non-uniformity. The algorithms that we used are largely based on ISO image quality standards. Finally, we used these features computed for a set of test pages and the subjects' image quality ruler assessments of these pages to train the two different predictors - one based on linear regression and the other based on the support vector machine (SVM). Using five-fold cross-validation, we confirmed the efficacy of our predictor.

  10. Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-10-01

    Conventional compressive sensing uses non-adaptive linear projections, and the number of measurements is usually set empirically, which affects the quality of image reconstruction. Firstly, block-based compressed sensing (BCS) with the conventional choice of the number of measurements is reviewed. Then an estimation method for image sparsity is proposed based on the two-dimensional discrete cosine transform (2D DCT). Given an energy threshold, the DCT coefficients are energy-normalized and sorted in descending order, and the sparsity of the image is obtained as the proportion of dominant coefficients. Simulation results show that the method estimates image sparsity effectively and provides a sound basis for selecting the number of compressive observations. Because the number of observations is chosen from the sparsity estimated under the given energy threshold, the proposed method can ensure the quality of image reconstruction.
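
The sparsity estimate described here, 2D DCT, energy normalization, descending sort, then counting coefficients up to an energy threshold, can be sketched as follows. The toy image and the 99% threshold are assumptions for illustration:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

rng = np.random.default_rng(5)
n = 32
C = dct_matrix(n)

# Toy image: a few strong low-frequency components plus weak noise.
coef_true = np.zeros((n, n))
coef_true[:3, :3] = rng.normal(size=(3, 3)) * 10
img = C.T @ coef_true @ C + 0.01 * rng.normal(size=(n, n))

# Estimate sparsity: 2-D DCT, energy-normalize, sort descending, count the
# dominant coefficients that capture a preset fraction of the energy.
coef = C @ img @ C.T
energy = np.sort((coef ** 2).ravel())[::-1]
energy /= energy.sum()
k = int(np.searchsorted(np.cumsum(energy), 0.99) + 1)   # 99% energy threshold
sparsity = k / img.size
print(k, round(sparsity, 4))
```

For a BCS system, `sparsity` would then drive the number of measurements allocated to the block instead of a fixed empirical setting.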

  11. Fast and robust wavelet-based dynamic range compression and contrast enhancement model with color restoration

    NASA Astrophysics Data System (ADS)

    Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur

    2009-05-01

    Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes with non-uniform lighting conditions. The fast image enhancement algorithm that provides dynamic range compression, while preserving the local contrast and tonal rendition, is also a good candidate for real time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the proposed algorithm fails to produce color constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback. Hence, a different approach is required for the final color restoration process. In this paper the latest version of the proposed algorithm, which deals with this issue is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.

  12. Monolayer-crystal streptavidin support films provide an internal standard of cryo-EM image quality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Bong-Gyoon; Watson, Zoe; Cate, Jamie H. D.

    Analysis of images of biotinylated Escherichia coli 70S ribosome particles, bound to streptavidin affinity grids, demonstrates that the image-quality of particles can be predicted by the image-quality of the monolayer crystalline support film. Also, the quality of the Thon rings is a good predictor of the image-quality of particles, but only when images of the streptavidin crystals extend to relatively high resolution. When the estimated resolution of streptavidin was 5 Å or worse, for example, the ribosomal density map obtained from 22,697 particles went to only 9.5 Å, while the resolution of the map reached 4.0 Å for the same number of particles, when the estimated resolution of streptavidin crystal was 4 Å or better. It thus is easy to tell which images in a data set ought to be retained for further work, based on the highest resolution seen for Bragg peaks in the computed Fourier transforms of the streptavidin component. The refined density map obtained from 57,826 particles obtained in this way extended to 3.6 Å, a marked improvement over the value of 3.9 Å obtained previously from a subset of 52,433 particles obtained from the same initial data set of 101,213 particles after 3-D classification. These results are consistent with the hypothesis that interaction with the air-water interface can damage particles when the sample becomes too thin. Finally, streptavidin monolayer crystals appear to provide a good indication of when that is the case.

  13. WE-G-209-00: Identifying Image Artifacts, Their Causes, and How to Fix Them

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.

  14. Online image classification under monotonic decision boundary constraint

    NASA Astrophysics Data System (ADS)

    Lu, Cheng; Allebach, Jan; Wagner, Jerry; Pitta, Brandi; Larson, David; Guo, Yandong

    2015-01-01

    Image classification is a prerequisite for copy quality enhancement in an all-in-one (AIO) device that comprises a printer and a scanner and can be used to scan, copy and print. Different processing pipelines are provided in an AIO printer, each designed specifically for one type of input image to achieve the optimal output image quality. A typical approach to this problem is to apply a Support Vector Machine (SVM) to classify the input image and feed it to its corresponding processing pipeline. Online SVM training can help improve classification performance as input images accumulate. At the same time, we want to make a quick decision on the input image to speed up classification, which means the AIO device sometimes does not need to scan the entire image before making a final decision. These two constraints, online SVM training and quick decision, raise questions regarding: 1) what features are suitable for classification; and 2) how the decision boundary should be controlled in online SVM training. This paper discusses the compatibility of online SVM training and quick-decision capability.

  15. From SPOT 5 to Pleiades HR: evolution of the instrumental specifications

    NASA Astrophysics Data System (ADS)

    Rosak, A.; Latry, C.; Pascal, V.; Laubier, D.

    2017-11-01

    Image quality specifications should aim to fulfil the high resolution mission requirements of remote sensing satellites at minimum cost. The most important trade-off to be taken into account is between the Modulation Transfer Function, radiometric noise and the sampling scheme. This compromise is the main driver during design optimisation and requirement definition in order to achieve good performance and to minimise the mission cost. For the SPOT 5 satellite, a new compromise was chosen: the supermode principle of imagery (sampling at 2.5 meter with a pixel size of 5 meter) improves the resolution by a factor of four compared with the SPOT 4 satellite (10 meter resolution). This paper presents the image quality specifications of the HRG-SPOT 5 instrument. We introduce all the efforts made on the instrument to achieve good image quality and low radiometric noise, then we compare the results with the SPOT 4 instrument's performance to highlight the improvements achieved. Then, the in-orbit performance is described. Finally, we present the new image quality specification goals for the new Pleiades-HR earth observation satellite (0.7 meter resolution) and the instrument concept.

  16. Novel Approaches to Improve Iris Recognition System Performance Based on Local Quality Evaluation and Feature Fusion

    PubMed Central

    2014-01-01

    To build a new iris template, this paper proposes a strategy that fuses different portions of the iris, using a machine learning method to evaluate the local quality of the iris. There are three novelties compared to previous work. First, the normalized segmented iris is divided into multiple tracks, and each track is evaluated individually to analyze its recognition accuracy rate (RAR). Second, six local quality evaluation parameters are adopted to analyze the texture information of each track. In addition, particle swarm optimization (PSO) is employed to obtain the weights of these evaluation parameters and the corresponding weighted coefficients of the different tracks. Finally, the information from all tracks is fused according to the track weights. Experimental results based on subsets of three public and one private iris image databases demonstrate three contributions of this paper. (1) The results show that a partial iris image cannot fully replace the entire iris image in an iris recognition system. (2) The proposed quality evaluation algorithm is self-adaptive: it automatically optimizes its parameters according to the characteristics of the iris image samples. (3) The feature information fusion strategy effectively improves the performance of the iris recognition system. PMID:24693243

  17. Novel approaches to improve iris recognition system performance based on local quality evaluation and feature fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; Chen, Huiling; He, Fei; Pang, Yutong

    2014-01-01

    To build a new iris template, this paper proposes a strategy that fuses different portions of the iris, using a machine learning method to evaluate the local quality of the iris. There are three novelties compared to previous work. First, the normalized segmented iris is divided into multiple tracks, and each track is evaluated individually to analyze its recognition accuracy rate (RAR). Second, six local quality evaluation parameters are adopted to analyze the texture information of each track. In addition, particle swarm optimization (PSO) is employed to obtain the weights of these evaluation parameters and the corresponding weighted coefficients of the different tracks. Finally, the information from all tracks is fused according to the track weights. Experimental results based on subsets of three public and one private iris image databases demonstrate three contributions of this paper. (1) The results show that a partial iris image cannot fully replace the entire iris image in an iris recognition system. (2) The proposed quality evaluation algorithm is self-adaptive: it automatically optimizes its parameters according to the characteristics of the iris image samples. (3) The feature information fusion strategy effectively improves the performance of the iris recognition system.

  18. Dream Images and Creation.

    PubMed

    Masson, Céline; Schauder, Silke; Sausse, Simone Korff

    2017-02-01

    This article links contemporary psychoanalytic theories of the dream, especially Bion's, with the work of the American video artist Bill Viola, who is deeply influenced by altered states of consciousness and produces images of dreamlike quality. We discuss the oneiric and infantile roots of creativity and artistic inspiration, finally taking Viola's monumental artwork The Passing (1991) as paradigmatic of the artist's aesthetic and philosophical elaboration of the relationship between life and death.

  19. [Impact of exposure dose reduction of radiation treatment planning CT using low tube voltage technique].

    PubMed

    Kouno, Takuya; Kuga, Noriyuki; Enzaki, Masahiro; Yamashita, Yuuki; Kitazato, Yumiko; Shimotabira, Haruhiko; Jinnouchi, Takashi; Kusuhara, Kazuo; Kawamura, Shinji

    2015-04-01

    The aim of this study was to reduce the exposure dose of radiotherapy treatment planning computed tomography (CT) by using a low tube voltage technique. We used tube voltages of 80 kV, 100 kV, and 120 kV. First, we evaluated the exposure dose at each voltage using the CT dose index (CTDI). Second, we compared image quality indexes such as the modulation transfer function (MTF), noise power spectrum (NPS), and contrast-to-noise ratio (CNR) of phantom images at each voltage. Third, CT-number-to-electron-density tables were measured at the three voltages and monitor unit (MU) values were calculated for clinical cases. Finally, the CT surface dose to the chest skin was measured with thermoluminescent dosimeters (TLD). In the image evaluation, MTF and NPS were approximately equal across voltages; CNR decreased slightly, by 2.0% at 100 kV. We checked the radiation dose accuracy for each tube voltage with model phantoms and found no unacceptable difference in MU values. Finally, compared with 120 kV, CTDIvol and TLD values showed markedly reduced radiation dose: 60% lower at 80 kV and 30% lower at 100 kV. A low tube voltage technique, especially 100 kV, is useful in radiotherapy treatment planning to obtain a 20% dose reduction without compromising 120 kV image quality.
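    For reference, one common definition of the contrast-to-noise ratio used in such phantom evaluations (the paper's exact formula may differ) can be computed from two regions of interest:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """CNR = |mean_signal - mean_background| / std_background.
    One common definition; the paper's exact formula may differ."""
    s = np.asarray(signal_roi, dtype=float)
    b = np.asarray(background_roi, dtype=float)
    return abs(s.mean() - b.mean()) / b.std()

rng = np.random.default_rng(1)
# Synthetic ROIs: a 100 HU contrast insert vs. water, noise sigma = 10 HU.
sig = rng.normal(100.0, 10.0, size=(20, 20))
bkg = rng.normal(0.0, 10.0, size=(20, 20))
print(f"CNR = {cnr(sig, bkg):.1f}")  # about 10 for these parameters
```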

  20. Automated assembly of camera modules using active alignment with up to six degrees of freedom

    NASA Astrophysics Data System (ADS)

    Bräuniger, K.; Stickler, D.; Winters, D.; Volmer, C.; Jahn, M.; Krey, S.

    2014-03-01

    With the advent of Ultra High Definition (UHD) cameras, accurate alignment of the optical system with respect to the UHD image sensor becomes increasingly important. Even with a perfect objective lens, image quality deteriorates when the lens is poorly aligned to the sensor. The Modulation Transfer Function (MTF) is the most widely accepted test for evaluating imaging quality. The first part describes how the alignment errors that lead to low imaging quality can be measured. Collimators with crosshairs at defined field positions, or a test chart, are used as object generators for infinite-finite or finite-finite conjugation, respectively. The process of aligning the image sensor accurately to the optical system is then described: the focus position, shift, tilt and rotation of the image sensor are automatically corrected to obtain an optimized MTF at all field positions, including the center. The software algorithm that grabs images, calculates the MTF and adjusts the image sensor in six degrees of freedom within less than 30 seconds per UHD camera module is described. The resulting accuracy is better than 2 arcmin for image sensor rotation and better than 2 μm for position alignment in x, y and z. Finally, the process of gluing and UV-curing, and how it is managed within the integrated process, is described.

  1. Fast subsurface fingerprint imaging with full-field optical coherence tomography system equipped with a silicon camera.

    PubMed

    Auksorius, Egidijus; Boccara, A Claude

    2017-09-01

    Images recorded below the surface of a finger can contain more detail and be of higher quality than conventional surface fingerprint images. This is particularly true when the quality of the surface fingerprints is compromised by, for example, moisture or surface damage. However, there is an unmet need for an inexpensive fingerprint sensor that can acquire high-quality images deep below the surface in a short time. To this end, we report on a cost-effective full-field optical coherence tomography system comprising a silicon camera and a powerful near-infrared LED light source. The system, for example, is able to record 1.7 cm × 1.7 cm en face images in 0.12 s with a spatial sampling rate of 2116 dots per inch and a sensitivity of 93 dB. We show that the system can be used to image internal fingerprints and sweat ducts with good contrast. Finally, to demonstrate its biometric performance, we acquired subsurface fingerprint images from 240 individual fingers and estimated the equal error rate to be ∼0.8%. The developed instrument could also be used in other en face deep-tissue imaging applications, such as in vivo skin imaging, because of its high sensitivity. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
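    The equal error rate quoted at the end is the operating point where the false accept and false reject rates coincide. A minimal sketch of its estimation from synthetic genuine/impostor score distributions (the score model below is hypothetical, not the paper's matcher):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Threshold sweep: return the error rate where the false accept
    rate (FAR) and false reject rate (FRR) cross."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0

rng = np.random.default_rng(2)
# Hypothetical matcher scores (higher = more similar); the separation
# here is illustrative only.
genuine = rng.normal(0.8, 0.08, 2000)
impostor = rng.normal(0.4, 0.08, 2000)
print(f"EER ~ {equal_error_rate(genuine, impostor):.4f}")
```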

  2. Role of Negative Trans-Thoracic Echocardiography in the Diagnosis of Infective Endocarditis.

    PubMed

    Leitman, Marina; Peleg, Eli; Shmueli, Ruthie; Vered, Zvi

    2016-07-01

    The search for vegetations in patients with suspected infective endocarditis is a major indication for trans-esophageal echocardiographic (TEE) examinations. Advances in harmonic imaging and ongoing improvements in modern echocardiographic systems allow adequate diagnostic image quality in most patients. Our objective was to investigate whether TEE examinations are always necessary for the assessment of patients with suspected infective endocarditis. During 2012-2014, 230 trans-thoracic echo (TTE) exams were performed at our center in patients with suspected infective endocarditis. Demographic, epidemiological, clinical and echocardiographic data were collected and analyzed, and the final clinical diagnosis and outcome were determined. Of the 230 patients, 24 had definite infective endocarditis by clinical assessment. TEE examination was undertaken in 76 of the 230 patients based on the clinical decision of the attending physician. All TTE exams were classified as: (i) positive, i.e., vegetations present; (ii) clearly negative; or (iii) non-conclusive. Of the 92 patients with clearly negative TTE exams, 20 underwent TEE and all were negative. All clearly negative patients had native valves and adequate quality images, and in all 92 the final diagnosis was not infective endocarditis. Thus, the negative predictive value of a clearly negative TTE examination was 100%. In patients with native cardiac valves referred for evaluation of infective endocarditis, an adequate quality TTE with a clearly negative examination may be sufficient for diagnosis.

  3. Performance prediction of optical image stabilizer using SVM for shaker-free production line

    NASA Astrophysics Data System (ADS)

    Kim, HyungKwan; Lee, JungHyun; Hyun, JinWook; Lim, Haekeun; Kim, GyuYeol; Moon, HyukSoo

    2016-04-01

    Recent smartphones adopt camera modules with an optical image stabilizer (OIS) to enhance imaging quality under handshake conditions. However, compared to a non-OIS camera module, the cost of implementing an OIS module is still high. One reason is that the production line for OIS camera modules requires a highly precise shaker table in the final test process, which increases the unit cost of production. In this paper, we propose a framework for OIS quality prediction, trained with a support vector machine on the following module-characterizing features: the noise spectral density of the gyroscope, and the optically measured linearity and cross-axis movement of the Hall sensor and actuator. The classifier was tested on an actual production line and achieved an 88% recall rate.

  4. Automatic assessment of the quality of patient positioning in mammography

    NASA Astrophysics Data System (ADS)

    Bülow, Thomas; Meetz, Kirsten; Kutra, Dominik; Netsch, Thomas; Wiemker, Rafael; Bergtholdt, Martin; Sabczynski, Jörg; Wieberneit, Nataly; Freund, Manuela; Schulze-Wenck, Ingrid

    2013-02-01

    Quality assurance has been recognized as crucial for the success of population-based breast cancer screening programs using x-ray mammography. Quality guidelines and criteria have been defined in the US as well as in the European Union to ensure the quality of breast cancer screening. Taplin et al. report that incorrect positioning of the breast is the major image quality issue in screening mammography. Consequently, guidelines and criteria for correct positioning and for the assessment of positioning quality in mammograms play an important role in the quality standards. In this paper we present a system for the automatic evaluation of positioning quality in mammography according to the existing standardized criteria. This involves the automatic detection of anatomic landmarks in medio-lateral oblique (MLO) and cranio-caudal (CC) mammograms, namely the pectoral muscle, the mammilla, and the infra-mammary fold. Furthermore, the detected landmarks are assessed with respect to their proper presentation in the image. Finally, the geometric relations between the detected landmarks are investigated to assess the positioning quality. This includes evaluating whether the pectoral muscle is imaged down to the mammilla level, and whether the posterior nipple line diameter of the breast is consistent between the different views (MLO and CC) of the same breast. Results of the computerized assessment are compared to ground truth collected from two expert readers.
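    As an illustration of the last criterion group, the posterior nipple line (PNL) length on an MLO view can be computed as the perpendicular distance from the nipple to a straight-line fit of the pectoral muscle edge. A minimal sketch with hypothetical pixel coordinates (the paper's landmark detectors would supply these):

```python
import numpy as np

def pnl_length(nipple, pect_p1, pect_p2):
    """Posterior nipple line length on an MLO view: perpendicular distance
    from the nipple to the straight pectoral muscle edge through two points.
    Coordinates are hypothetical pixel positions."""
    p, a, b = (np.asarray(v, dtype=float) for v in (nipple, pect_p1, pect_p2))
    d = b - a          # direction of the pectoral edge
    w = p - a
    # 2-D cross product magnitude / edge length = point-to-line distance.
    return abs(d[0] * w[1] - d[1] * w[0]) / np.linalg.norm(d)

# Example: vertical pectoral edge at x = 100, nipple at x = 400.
print(pnl_length((400.0, 250.0), (100.0, 0.0), (100.0, 600.0)))  # → 300.0
```

    The MLO and CC measurements can then be compared per breast; positioning criteria typically require the two to agree to within about 1 cm.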

  5. Landsat Image Map Production Methods at the U. S. Geological Survey

    USGS Publications Warehouse

    Kidwell, R.D.; Binnie, D.R.; Martin, S.

    1987-01-01

    To maintain consistently high quality in satellite image map production, the U. S. Geological Survey (USGS) has developed standard procedures for the photographic and digital production of Landsat image mosaics, and for lithographic printing of multispectral imagery. This paper gives a brief review of the photographic, digital, and lithographic procedures currently in use for producing image maps from Landsat data. It is shown that consistency in the printing of image maps is achieved by standardizing the materials and procedures that affect the image detail and color balance of the final product. Densitometric standards are established by printing control targets using the pressplates, inks, pre-press proofs, and paper to be used for printing.

  6. SIRF: Simultaneous Satellite Image Registration and Fusion in a Unified Framework.

    PubMed

    Chen, Chen; Li, Yeqing; Liu, Wei; Huang, Junzhou

    2015-11-01

    In this paper, we propose a novel method for image fusion with a high-resolution panchromatic image and a low-resolution multispectral (Ms) image at the same geographical location. The fusion is formulated as a convex optimization problem which minimizes a linear combination of a least-squares fitting term and a dynamic gradient sparsity regularizer. The former preserves accurate spectral information of the Ms image, while the latter keeps the sharp edges of the high-resolution panchromatic image. We further propose to simultaneously register the two images during the fusing process, which is naturally achieved by virtue of the dynamic gradient sparsity property. An efficient algorithm is then devised to solve the optimization problem, achieving linear computational complexity per iteration in the size of the output image. We compare our method against six state-of-the-art image fusion methods on Ms image data sets from four satellites. Extensive experimental results demonstrate that the proposed method substantially outperforms the others in terms of both spatial and spectral quality. We also show that our method can provide high-quality products from coarsely registered real-world IKONOS data sets. Finally, a MATLAB implementation is provided to facilitate future research.
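    Schematically, and as one plausible transcription of the abstract (the exact operator definitions are in the paper; the symbols \(\Phi\), \(y\) and \(\bar{P}\) below are assumptions), the fusion objective has the form of a least-squares data term plus a dynamic-gradient-sparsity term:

```latex
\min_{x}\; \frac{1}{2}\,\bigl\|\Phi x - y\bigr\|_2^2
\;+\; \lambda \sum_{i} \Bigl\|\bigl[\nabla\,(x - \bar{P})\bigr]_i\Bigr\|_2
```

    Here \(x\) is the fused high-resolution Ms image, \(\Phi\) a blur/downsampling operator, \(y\) the observed low-resolution Ms image, and \(\bar{P}\) the panchromatic image replicated across bands; grouping the gradient components at each pixel \(i\) inside an \(\ell_2\) norm encourages the fused image's edges to align with those of the panchromatic image.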

  7. Focus measure method based on the modulus of the gradient of the color planes for digital microscopy

    NASA Astrophysics Data System (ADS)

    Hurtado-Pérez, Román; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso; Aguilar-Valdez, J. Félix; Ortega-Mendoza, Gabriel

    2018-02-01

    The modulus of the gradient of the color planes (MGC) is implemented to transform multichannel information into a grayscale image. This digital technique is used in two applications: (a) focus measurement during the autofocusing (AF) process, and (b) extending the depth of field (EDoF) by means of multifocus image fusion. In the first case, the MGC procedure is based on an edge detection technique and is implemented in over 15 focus metrics that are typically used in digital microscopy. The MGC approach is tested on color images of histological sections for the selection of in-focus images. An appealing attribute of all the AF metrics working in the MGC space is their monotonic behavior even up to a magnification of 100×. Further advantages of the MGC method are its computational simplicity and inherent parallelism. In the second application, a multifocus image fusion algorithm based on the MGC approach has been implemented on graphics processing units (GPUs). The resulting fused images are evaluated using a no-reference image quality metric. The proposed fusion method yields a high-quality image even under faulty illumination during image acquisition. Finally, the three-dimensional visualization of the in-focus image is shown.
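    A plausible reading of the MGC transform (not the authors' exact code) combines the per-channel gradient components into a single gradient-modulus map, whose mean can then serve as a simple monotonic focus measure:

```python
import numpy as np

def mgc(rgb):
    """Modulus of the gradient of the color planes: per-channel x/y gradients
    combined into one grayscale edge-strength map (a plausible reading of
    the abstract, not the authors' exact code)."""
    rgb = np.asarray(rgb, dtype=float)
    gx = np.gradient(rgb, axis=1)
    gy = np.gradient(rgb, axis=0)
    return np.sqrt((gx ** 2 + gy ** 2).sum(axis=2))

def focus_measure(rgb):
    """Simple focus metric on top of MGC: sharper images score higher."""
    return mgc(rgb).mean()

rng = np.random.default_rng(3)
sharp = rng.uniform(0.0, 1.0, (64, 64, 3))
# Simulate defocus with 1-2-1 binomial smoothing along both axes.
blurred = sharp.copy()
for axis in (0, 1):
    blurred = (np.roll(blurred, 1, axis) + 2 * blurred
               + np.roll(blurred, -1, axis)) / 4.0
print(focus_measure(sharp) > focus_measure(blurred))  # → True
```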

  8. Stable image acquisition for mobile image processing applications

    NASA Astrophysics Data System (ADS)

    Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker

    2015-02-01

    Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance to their users. Their performance as well as their versatility increases over time. This creates the opportunity to use such devices for more specific tasks, like image processing, in an industrial context. For image analysis, requirements such as image quality (blur, illumination, etc.) and a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach to overcome these obstacles and stabilize the image capturing process such that image analysis on mobile devices is significantly improved. To this end, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide the user in moving the device to a defined position. Second, the sensor data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated: it is triggered depending on the alignment of the device and the object, as well as on the image quality that can be achieved under consideration of motion and environmental effects.

  9. Robustness of speckle imaging techniques applied to horizontal imaging scenarios

    NASA Astrophysics Data System (ADS)

    Bos, Jeremy P.

    Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction to improve the quality of imagery available to operators. To be effective, these systems must operate over significant variations in turbulence conditions while also being subject to other variations caused by operation by novice users. Systems that meet these requirements, and are otherwise designed to be immune to the factors that cause variation in performance, are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for a minimum level of computational complexity. Speckle imaging methods are one of a variety of methods recently proposed for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. This performance evaluation is made possible by a novel technique for simulating anisoplanatic image formation. I find that incorporating as few as 15 image frames and 4 estimates of the object phase per reconstructed frame provides an average 45% reduction in Mean Squared Error (MSE) and a 68% reduction in the deviation of the MSE. In addition, the Knox-Thompson phase recovery method is demonstrated to produce images in half the time required by the bispectrum. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate reconstruction quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in performance due to user action.

  10. WE-G-209-01: Digital Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schueler, B.

    Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.

  11. Development and characterization of a scintillating cell imaging dish for radioluminescence microscopy.

    PubMed

    Sengupta, Debanti; Kim, Tae Jin; Almasi, Sepideh; Miller, Stuart; Marton, Zsolt; Nagarkar, Vivek; Pratx, Guillem

    2018-04-16

    Radioluminescence microscopy is an emerging modality that can be used to image radionuclide probes with micron-scale resolution. This technique is particularly useful as a way to probe the metabolic behavior of single cells and to screen and characterize radiopharmaceuticals, but the quality of the images is critically dependent on the scintillator material used to image the cells. In this paper, we detail the development of a microscopy dish made of a thin-film scintillating material, Lu2O3:Eu, that could be used as the blueprint for a future consumable product. After developing a simple quality control method based on long-lived alpha and beta sources, we characterize the radioluminescence properties of various thin-film scintillator samples. We find consistent performance for most samples, but also identify a few samples that do not meet the specifications, thus stressing the need for routine quality control prior to biological experiments. In addition, we test and quantify the transparency of the material, and demonstrate that transparency correlates with thickness. Finally, we evaluate the biocompatibility of the material and show that the microscopy dish can produce radioluminescent images of live single cells.

  12. Fast imaging of live organisms with sculpted light sheets

    NASA Astrophysics Data System (ADS)

    Chmielewski, Aleksander K.; Kyrsting, Anders; Mahou, Pierre; Wayland, Matthew T.; Muresan, Leila; Evers, Jan Felix; Kaminski, Clemens F.

    2015-04-01

    Light-sheet microscopy is an increasingly popular technique in the life sciences due to its fast 3D imaging capability for fluorescent samples with low phototoxicity compared to confocal methods. In this work we present a new, fast, flexible and simple-to-implement method to optimize the illumination light-sheet to the requirements at hand. A telescope composed of two electrically tuneable lenses enables us to define the thickness and position of the light-sheet independently and accurately within milliseconds, and therefore to optimize the image quality of the features of interest interactively. We demonstrate the practical benefit of this technique by (1) assembling large fields of view from tiled single exposures, each with individually optimized illumination settings, and (2) sculpting the light-sheet to trace complex sample shapes within single exposures. The technique proved compatible with confocal line-scanning detection, further improving image contrast and resolution. Finally, we determined the effect of light-sheet optimization in the context of scattering tissue, devising procedures for balancing image quality, field of view and acquisition speed.

  13. Chemistry of the Konica Dry Color System

    NASA Astrophysics Data System (ADS)

    Suda, Yoshihiko; Ohbayashi, Keiji; Onodera, Kaoru

    1991-08-01

    While silver halide photosensitive materials offer superiority in image quality -- both in color and black-and-white -- they require chemical solutions for processing, and this can be a drawback. To overcome this, researchers turned to the thermal development of silver halide photographic materials, and met their first success with black-and-white images. Later, with the development of the Konica Dry Color System, color images were finally obtained from a completely dry thermal development system, without the use of water or chemical solutions. The dry color system is characterized by a novel chromogenic color image-forming technology and comprises four processes. (1) With the application of heat, a color developer precursor (CDP) decomposes to generate a p-phenylenediamine color developer (CD). (2) The CD then develops silver salts. (3) Oxidized CD then reacts with couplers to generate color image dyes. (4) Finally, the dyes diffuse from the system's photosensitive sheet to its image-receiving sheet. The authors have analyzed the kinetics of each of the system's four processes. In this paper, they report the kinetics of the system's first process, color developer (CD) generation.

  14. 2D and 3D visualization methods of endoscopic panoramic bladder images

    NASA Astrophysics Data System (ADS)

    Behrens, Alexander; Heisterklaus, Iris; Müller, Yannick; Stehle, Thomas; Gross, Sebastian; Aach, Til

    2011-03-01

    While several mosaicking algorithms have been developed to compose endoscopic images of the internal urinary bladder wall into panoramic images, the quantitative evaluation of these output images in terms of geometrical distortion has often not been discussed. However, visualization of the distortion level is highly desirable for an objective image-based medical diagnosis. Thus, we present in this paper a method to create quality maps from the characteristics of the transformation parameters which were applied to the endoscopic images during the registration step of the mosaicking algorithm. For a global first impression, the quality maps are overlaid on the panoramic image and highlight image regions in pseudo-colors according to their local distortions. This illustration helps surgeons to easily identify geometrically distorted structures in the panoramic image, allowing more objective medical interpretation of tumor tissue shape and size. Aside from introducing quality maps in 2-D, we also discuss a visualization method that maps panoramic images onto a 3-D spherical bladder model. Reference points are manually selected by the surgeon in the panoramic image and the 3-D model. Then the panoramic image is mapped onto the 3-D surface by the Hammer-Aitoff equal-area projection using texture mapping. Finally, the textured bladder model can be freely moved in a virtual environment for inspection. Using a two-hemisphere bladder representation, references between panoramic image regions and their corresponding spatial coordinates within the bladder model are reconstructed. This additional spatial 3-D information assists the surgeon in navigation, documentation, and surgical planning.
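    The Hammer-Aitoff projection named above has a simple closed form. A sketch of the forward mapping (longitude/latitude in radians to map coordinates), independent of the authors' texture-mapping pipeline:

```python
import numpy as np

def hammer_aitoff(lon, lat):
    """Forward Hammer-Aitoff equal-area projection (angles in radians).
    A sketch of the projection named in the abstract, independent of the
    authors' texture-mapping code."""
    denom = np.sqrt(1.0 + np.cos(lat) * np.cos(lon / 2.0))
    x = 2.0 * np.sqrt(2.0) * np.cos(lat) * np.sin(lon / 2.0) / denom
    y = np.sqrt(2.0) * np.sin(lat) / denom
    return x, y

# The map center projects to the origin; the equatorial edge to x = 2*sqrt(2).
x0, y0 = hammer_aitoff(0.0, 0.0)
print(f"({x0:.3f}, {y0:.3f})")  # → (0.000, 0.000)
```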

  15. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
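    For context, the classical maximum likelihood estimator under a Poisson noise model is the Richardson-Lucy iteration; the paper's algorithm extends this idea to multiple frames with regularization and PSF estimation. A single-frame, unregularized sketch with a synthetic point source:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Single-frame Richardson-Lucy deconvolution: the classical maximum
    likelihood estimate under a Poisson noise model (the paper's method
    extends this idea to multiple frames with regularization)."""
    Fpsf = np.fft.fft2(np.fft.ifftshift(psf))   # PSF centered, same size
    est = np.full(observed.shape, observed.mean())
    for _ in range(iterations):
        conv = np.real(np.fft.ifft2(Fpsf * np.fft.fft2(est)))
        ratio = observed / np.maximum(conv, 1e-12)
        # Multiply by the adjoint of the PSF (conjugate in Fourier space).
        est = est * np.real(np.fft.ifft2(np.conj(Fpsf) * np.fft.fft2(ratio)))
    return est

# Point source blurred by a Gaussian PSF, with Poisson counting noise.
n = 32
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2.0 * 2.0 ** 2))
psf /= psf.sum()
truth = np.zeros((n, n))
truth[n // 2, n // 2] = 1000.0
blurred = np.maximum(np.real(np.fft.ifft2(
    np.fft.fft2(np.fft.ifftshift(psf)) * np.fft.fft2(truth))), 0.0)
rng = np.random.default_rng(4)
observed = rng.poisson(blurred).astype(float)
restored = richardson_lucy(observed, psf)
print(restored.max() > observed.max())  # deconvolution re-concentrates flux
```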

  16. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503

  17. TU-EF-204-02: High Quality and Sub-mSv Cerebral CT Perfusion Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ke; Niu, Kai; Wu, Yijing

    2015-06-15

    Purpose: CT Perfusion (CTP) imaging is of great importance in acute ischemic stroke management due to its potential to detect hypoperfused yet salvageable tissue and distinguish it from definitely unsalvageable tissue. However, current CTP imaging suffers from poor image quality and high radiation dose (up to 5 mSv). The purpose of this work was to demonstrate that technical innovations such as Prior Image Constrained Compressed Sensing (PICCS) have the potential to address these challenges and achieve high quality and sub-mSv CTP imaging. Methods: (1) A spatial-temporal 4D cascaded system model was developed to identify the bottlenecks in the current CTP technology; (2) A task-based framework was developed to optimize the CTP system parameters; (3) Guided by (1) and (2), PICCS was customized for the reconstruction of CTP source images. Digital anthropomorphic perfusion phantoms, animal studies, and preliminary human subject studies were used to validate and evaluate the potential of using these innovations to advance the CTP technology. Results: The 4D cascaded model was validated in both phantom and canine stroke models. Based upon this cascaded model, it was discovered that, as long as the spatial resolution and noise properties of the 4D source CT images are given, the 3D MTF and NPS of the final CTP maps can be analytically derived for a given set of processing methods and parameters. The cascaded model analysis also identified that the most critical technical factor in CTP is how to acquire and reconstruct high quality source images; it has very little to do with the denoising techniques often used after parametric perfusion calculations. This explained why PICCS resulted in a five-fold dose reduction or a substantial improvement in image quality. Conclusion: Technical innovations generated promising results towards achieving high quality and sub-mSv CTP imaging for reliable and safe assessment of acute ischemic strokes. K. Li, K. Niu, Y. Wu: Nothing to disclose. G.-H. Chen: Research funded, GE Healthcare; Research funded, Siemens AX.

  18. Application of the Critical Success Factor Methodology to DoD Organization.

    DTIC Science & Technology

    1984-09-01

    high technology manufacturing, banking, airline, insurance, railway, and automobile. Sullen (6t22-25) lists the current CSFs of the U.S. automobile ...industry as image, quality dealer system, cost control, and meeting energy standards. However, in 1981 the automobile CSFs included only styling, quality...bearing on current car purchases as well as future car buys. And finally cost control influenced the auto industry as a CSF, since profit per automobile had

  19. Pre-processing, registration and selection of adaptive optics corrected retinal images.

    PubMed

    Ramaswamy, Gomathy; Devaney, Nicholas

    2013-07-01

    In this paper, the aim is to demonstrate enhanced processing of sequences of fundus images obtained using a commercial AO flood illumination system. The purpose of the work is to (1) correct for uneven illumination at the retina, (2) automatically select the best quality images and (3) precisely register the best images. Adaptive optics corrected retinal images are pre-processed to correct uneven illumination using different methods: subtracting or dividing by the average filtered image, homomorphic filtering and a wavelet-based approach. These images are evaluated to measure the image quality using various parameters, including sharpness, variance, power spectrum kurtosis and contrast. We have carried out the registration in two stages: a coarse stage using cross-correlation, followed by fine registration using two approaches: parabolic interpolation on the peak of the cross-correlation and maximum-likelihood estimation. The angle of rotation of the images is measured using a combination of peak tracking and Procrustes transformation. We have found that a wavelet approach (Daubechies 4 wavelet at 6th level decomposition) provides good illumination correction with clear improvement in image sharpness and contrast. The assessment of image quality using a 'Designer metric' works well when compared to visual evaluation, although it is highly correlated with other metrics. In image registration, sub-pixel translation measured using parabolic interpolation on the peak of the cross-correlation function and maximum-likelihood estimation are found to give very similar results (RMS difference 0.047 pixels). We have confirmed that correcting rotation of the images provides a significant improvement, especially at the edges of the image. We observed that selecting the better quality frames (e.g. best 75% images) for image registration gives improved resolution, at the expense of poorer signal-to-noise.
The sharpness map of the registered and de-rotated images shows increased sharpness over most of the field of view. Adaptive optics assisted images of the cone photoreceptors can be better pre-processed using a wavelet approach. These images can be assessed for image quality using a 'Designer Metric'. Two-stage image registration including correcting for rotation significantly improves the final image contrast and sharpness. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
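The two-stage registration described above, coarse alignment from the cross-correlation peak followed by sub-pixel refinement via parabolic interpolation, can be sketched in NumPy. This is an illustrative reconstruction under stated assumptions (FFT-based circular correlation, a hypothetical `subpixel_shift` name), not the authors' code:

```python
import numpy as np

def subpixel_shift(ref, img):
    """Estimate the (dy, dx) translation of img relative to ref.

    Coarse shift: peak of the FFT-based cross-correlation.
    Fine shift: parabola fitted through the peak and its two
    neighbours along each axis (sub-pixel refinement).
    """
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    corr = np.fft.fftshift(corr)
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def vertex(cm, c0, cp):
        # Vertex of the parabola through three equally spaced samples
        denom = cm - 2.0 * c0 + cp
        return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

    dy = vertex(corr[py - 1, px], corr[py, px], corr[py + 1, px])
    dx = vertex(corr[py, px - 1], corr[py, px], corr[py, px + 1])
    cy, cx = ref.shape[0] // 2, ref.shape[1] // 2
    return py - cy + dy, px - cx + dx
```

For a purely integer translation the parabolic refinement is close to zero; on real frames it recovers the fractional part of the shift.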

  20. Directional sinogram interpolation for motion weighted 4D cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Kruis, Matthijs; Sonke, Jan-Jakob

    2017-03-01

    The image quality of respiratory sorted four-dimensional (4D) cone-beam (CB) computed tomography (CT) is often limited by streak artifacts due to insufficient projections. A motion weighted reconstruction (MWR) method is proposed to decrease streak artifacts and improve image quality. Firstly, respiratory correlated CBCT projections were interpolated by directional sinogram interpolation (DSI) to generate additional CB projections for each phase and subsequently reconstructed. Secondly, local motion was estimated by deformable image registration of the interpolated 4D CBCT. Thirdly, a regular 3D FDK CBCT was reconstructed from the non-interpolated projections. Finally, weights were assigned to each voxel based on the local motion, and used to combine the 3D FDK CBCT and interpolated 4D CBCT into the final 4D image. The MWR method was compared with regular 4D CBCT scans as well as McKinnon and Bates (MKB) based reconstructions. Comparisons were made in terms of (1) the steepness of a profile extracted at the boundary of the region-of-interest (ROI), (2) the contrast-to-noise ratio (CNR) inside certain ROIs, and (3) the root-mean-square error (RMSE) between the planning CT and CBCT inside a homogeneous moving region. Comparisons were made for both a phantom and four patient scans. In a 4D phantom, RMSE was reduced by 24.7% and 38.7% for MKB and MWR respectively, compared to conventional 4D CBCT. Meanwhile, interpolation-induced blur was minimal in static regions for MWR based reconstructions. In regions with considerable respiratory motion, image blur using MWR is less than with the MKB and 3D Feldkamp (FDK) methods. In the lung cancer patients, average CNRs of MKB, DSI and MWR improved by factors of 1.7, 2.8 and 3.5 respectively relative to 4D FDK. MWR effectively reduces RMSE in 4D cone-beam CT and improves the image quality in both static and moving regions compared to the 4D FDK and MKB methods.

  1. Directional sinogram interpolation for motion weighted 4D cone-beam CT reconstruction.

    PubMed

    Zhang, Hua; Kruis, Matthijs; Sonke, Jan-Jakob

    2017-03-21

    The image quality of respiratory sorted four-dimensional (4D) cone-beam (CB) computed tomography (CT) is often limited by streak artifacts due to insufficient projections. A motion weighted reconstruction (MWR) method is proposed to decrease streak artifacts and improve image quality. Firstly, respiratory correlated CBCT projections were interpolated by directional sinogram interpolation (DSI) to generate additional CB projections for each phase and subsequently reconstructed. Secondly, local motion was estimated by deformable image registration of the interpolated 4D CBCT. Thirdly, a regular 3D FDK CBCT was reconstructed from the non-interpolated projections. Finally, weights were assigned to each voxel based on the local motion, and used to combine the 3D FDK CBCT and interpolated 4D CBCT into the final 4D image. The MWR method was compared with regular 4D CBCT scans as well as McKinnon and Bates (MKB) based reconstructions. Comparisons were made in terms of (1) the steepness of a profile extracted at the boundary of the region-of-interest (ROI), (2) the contrast-to-noise ratio (CNR) inside certain ROIs, and (3) the root-mean-square error (RMSE) between the planning CT and CBCT inside a homogeneous moving region. Comparisons were made for both a phantom and four patient scans. In a 4D phantom, RMSE was reduced by 24.7% and 38.7% for MKB and MWR respectively, compared to conventional 4D CBCT. Meanwhile, interpolation-induced blur was minimal in static regions for MWR based reconstructions. In regions with considerable respiratory motion, image blur using MWR is less than with the MKB and 3D Feldkamp (FDK) methods. In the lung cancer patients, average CNRs of MKB, DSI and MWR improved by factors of 1.7, 2.8 and 3.5 respectively relative to 4D FDK. MWR effectively reduces RMSE in 4D cone-beam CT and improves the image quality in both static and moving regions compared to the 4D FDK and MKB methods.
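The comparison metrics used above, CNR inside an ROI and RMSE between CBCT and planning CT inside a mask, have standard definitions; a minimal sketch (function names are ours, and using the background region's standard deviation as the noise estimate is an assumption):

```python
import numpy as np

def cnr(img, roi_mask, bg_mask):
    """Contrast-to-noise ratio between an ROI and a background region."""
    roi, bg = img[roi_mask], img[bg_mask]
    return abs(roi.mean() - bg.mean()) / bg.std()

def rmse(cbct, planning_ct, mask):
    """Root-mean-square error between CBCT and planning CT inside a mask."""
    diff = cbct[mask] - planning_ct[mask]
    return np.sqrt(np.mean(diff ** 2))
```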

  2. Evaluation of non-rigid constrained CT/CBCT registration algorithms for delineation propagation in the context of prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Rubeaux, Mathieu; Simon, Antoine; Gnep, Khemara; Colliaux, Jérémy; Acosta, Oscar; de Crevoisier, Renaud; Haigron, Pascal

    2013-03-01

    Image-Guided Radiation Therapy (IGRT) aims at increasing the precision of radiation dose delivery. In the context of prostate cancer, a planning Computed Tomography (CT) image with manually defined prostate and organs at risk (OAR) delineations is usually associated with daily Cone Beam Computed Tomography (CBCT) follow-up images. The CBCT images allow visualization of the prostate position so the patient can be repositioned accordingly. They should also be used to evaluate the dose received by the organs at each fraction of the treatment. To do so, the first step is a prostate and OAR segmentation on the daily CBCTs, which is very time-consuming. To simplify this task, CT to CBCT non-rigid registration can be used to propagate the original CT delineations onto the CBCT images. To this end, we compared several non-rigid registration methods. They are all based on the Mutual Information (MI) similarity measure and use a B-spline transformation model, but we add different constraints to this global scheme in order to evaluate their impact on the final results. These algorithms are investigated on two real datasets, representing a total of 70 CBCTs on which reference delineations have been produced. The evaluation uses the Dice Similarity Coefficient (DSC) as a quality criterion. The experiments show that a rigid penalty term on the bones improves the final registration result, providing high quality propagated delineations.
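The Dice Similarity Coefficient used as the quality criterion is defined as twice the overlap of the two masks divided by the sum of their sizes; a minimal sketch for binary masks:

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice Similarity Coefficient between two binary segmentation masks."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    # Two empty masks are in perfect agreement by convention
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```

DSC ranges from 0 (no overlap) to 1 (identical delineations).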

  3. Seamless Image Mosaicking via Synchronization

    NASA Astrophysics Data System (ADS)

    Santellani, E.; Maset, E.; Fusiello, A.

    2018-05-01

    This paper proposes an innovative method to create high-quality seamless planar mosaics. The developed pipeline ensures good robustness against many common mosaicking problems (e.g., misalignments, colour distortion, moving objects, parallax) and differs from other works in the literature because a global approach, known as synchronization, is used for both image registration and colour correction. To better conceal the mosaic seamlines, images are cut along specific paths, computed using a Voronoi decomposition of the mosaic area and a shortest path algorithm. Results obtained on challenging real datasets show that the colour correction significantly mitigates the colour variations between the original images, and that the seams on the final mosaic are not evident.

  4. Performance evaluation of a two detector camera for real-time video.

    PubMed

    Lochocki, Benjamin; Gambín-Regadera, Adrián; Artal, Pablo

    2016-12-20

    Single pixel imaging can be the preferred method over traditional 2D-array imaging in spectral ranges where conventional cameras are not available. However, when it comes to real-time video imaging, single pixel imaging cannot compete with the framerates of conventional cameras, especially when high-resolution images are desired. Here we evaluate the performance of an imaging approach using two detectors simultaneously. First, we present theoretical results on how low SNR affects final image quality, followed by experimentally determined results. The obtained video framerates were doubled compared to state-of-the-art systems, ranging from 22 Hz at 32×32 resolution down to 0.75 Hz at 128×128 resolution. Additionally, the two-detector imaging technique enables the acquisition of 256×256 images in less than 3 s.

  5. Ghost detection and removal based on super-pixel grouping in exposure fusion

    NASA Astrophysics Data System (ADS)

    Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun

    2014-09-01

    A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts are introduced into the final HDR image if the scene is not static. In this paper, a super-pixel grouping based method is proposed to detect ghosts in the image sequences. We introduce the zero mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The calculation of ZNCC is implemented at the super-pixel level, and the super-pixels which have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images which can be shown directly on conventional display devices, with details preserved and ghosting reduced. Experimental results show that the proposed method generates high quality images which have fewer ghost artifacts and provide better visual quality than previous approaches.
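The ZNCC measure used for super-pixel level ghost detection is invariant to affine intensity changes, which is what makes it usable across differently exposed images; a minimal sketch (the grouping into super-pixels and the weight-map adjustment are omitted):

```python
import numpy as np

def zncc(patch, ref):
    """Zero mean normalized cross correlation between two same-size regions.

    Returns a value in [-1, 1]. Each region is centred on its own mean
    and scaled by its own energy, so a pure brightness/contrast change
    (as between exposures) leaves the score near 1.
    """
    p = np.asarray(patch, dtype=float) - np.mean(patch)
    r = np.asarray(ref, dtype=float) - np.mean(ref)
    denom = np.sqrt((p ** 2).sum() * (r ** 2).sum())
    return 0.0 if denom == 0 else float((p * r).sum() / denom)
```

A super-pixel that merely changed exposure still scores near 1, while one containing a moving object scores low and can be down-weighted in the fusion.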

  6. Concentric Rings K-Space Trajectory for Hyperpolarized 13C MR Spectroscopic Imaging

    PubMed Central

    Jiang, Wenwen; Lustig, Michael; Larson, Peder E.Z.

    2014-01-01

    Purpose To develop a robust and rapid imaging technique for hyperpolarized 13C MR Spectroscopic Imaging (MRSI) and investigate its performance. Methods A concentric rings readout trajectory with constant angular velocity is proposed for hyperpolarized 13C spectroscopic imaging and its properties are analyzed. Quantitative analyses of design tradeoffs are presented for several imaging scenarios. The first applications of concentric rings to 13C phantom and in vivo animal hyperpolarized 13C MRSI studies were performed to demonstrate the feasibility of the proposed method. Finally, a parallel imaging accelerated concentric rings study is presented. Results The concentric rings MRSI trajectory has the advantage of shorter acquisition time compared to echo-planar spectroscopic imaging (EPSI). It provides sufficient spectral bandwidth with relatively high SNR efficiency compared to EPSI and spiral techniques. Phantom and in vivo animal studies showed good image quality with half the scan time and reduced pulsatile flow artifacts compared to EPSI. Parallel imaging accelerated concentric rings showed advantages over Cartesian sampling in g-factor simulations and demonstrated aliasing-free image quality in a hyperpolarized 13C in vivo study. Conclusion The concentric rings trajectory is a robust and rapid imaging technique that fits very well with the speed, bandwidth, and resolution requirements of hyperpolarized 13C MRSI. PMID:25533653

  7. Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths.

    PubMed

    Ingaramo, Maria; York, Andrew G; Hoogendoorn, Eelco; Postma, Marten; Shroff, Hari; Patterson, George H

    2014-03-17

    We use Richardson-Lucy (RL) deconvolution to combine multiple images of a simulated object into a single image in the context of modern fluorescence microscopy techniques. RL deconvolution can merge images with very different point-spread functions, such as in multiview light-sheet microscopes [1, 2], while preserving the best resolution information present in each image. We show that RL deconvolution is also easily applied to merge high-resolution, high-noise images with low-resolution, low-noise images, relevant when complementing conventional microscopy with localization microscopy. We also use RL deconvolution to merge images produced by different simulated illumination patterns, relevant to structured illumination microscopy (SIM) [3, 4] and image scanning microscopy (ISM). The quality of our ISM reconstructions is at least as good as reconstructions using standard inversion algorithms for ISM data, but our method follows a simpler recipe that requires no mathematical insight. Finally, we apply RL deconvolution to merge a series of ten images with varying signal and resolution levels. This combination is relevant to gated stimulated-emission depletion (STED) microscopy, and shows that merges of high-quality images are possible even in cases for which a non-iterative inversion algorithm is unknown. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
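The multi-image RL merge described above can be sketched as alternating standard RL updates over the views; a 1-D toy version under stated assumptions (per-view sequential updates, symmetric normalized PSFs), not the authors' implementation:

```python
import numpy as np

def multiview_rl(images, psfs, n_iter=50):
    """Joint Richardson-Lucy deconvolution of several views of one object.

    Each view is modelled as images[i] = psfs[i] convolved with the
    object. One RL multiplicative update per view per iteration lets
    the estimate accumulate the best-resolved information from every
    view. A 1-D sketch; real microscopy data would use 2-D/3-D FFT
    convolutions.
    """
    est = np.full_like(images[0], images[0].mean())  # flat positive start
    eps = 1e-12
    for _ in range(n_iter):
        for img, psf in zip(images, psfs):
            blur = np.convolve(est, psf, mode="same")
            ratio = img / (blur + eps)
            # Correlate ratio with the PSF (flip = adjoint of convolution)
            est = est * np.convolve(ratio, psf[::-1], mode="same")
    return est
```

The multiplicative form keeps the estimate non-negative, one reason RL tolerates such heterogeneous inputs.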

  8. A 3D THz image processing methodology for a fully integrated, semi-automatic and near real-time operational system

    NASA Astrophysics Data System (ADS)

    Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.

    2012-05-01

    The present study proposes a fully integrated, semi-automatic and near real-time image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with center frequencies around 100 GHz and 300 GHz. The quality control of aeronautic multi-layered composite materials and structures using Non-Destructive Testing is the main focus of this work. Image processing is applied to the 3-D images to extract useful information. The data are processed by extracting areas of interest. The detected areas are subjected to image analysis for more detailed investigation managed by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.

  9. Noise characteristics of CT perfusion imaging: how does noise propagate from source images to final perfusion maps?

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2016-03-01

    Cerebral CT perfusion (CTP) imaging is playing an important role in the diagnosis and treatment of acute ischemic strokes. Meanwhile, the reliability of CTP-based ischemic lesion detection has been challenged due to the noisy appearance and low signal-to-noise ratio of CTP maps. To reduce noise and improve image quality, a rigorous study on the noise transfer properties of CTP systems is highly desirable to provide the needed scientific guidance. This paper concerns how noise in the CTP source images propagates to the final CTP maps. Both theoretical derivations and subsequent validation experiments demonstrated that the noise level of the background frames plays a dominant role in the noise of the cerebral blood volume (CBV) maps. This is in direct contradiction with the general belief that noise of non-background image frames is of greater importance in CTP imaging. The study found that when the radiation doses delivered to the background frames and to all non-background frames are equal, the lowest noise variance is achieved in the final CBV maps. This novel equality condition provides a practical means to optimize radiation dose delivery in CTP data acquisition: radiation exposures should be modulated between background frames and non-background frames so that the above equality condition is satisfied. For several typical CTP acquisition protocols, numerical simulations and an in vivo canine experiment demonstrated that noise of CBV can be effectively reduced using the proposed exposure modulation method.

  10. High quality image-pair-based deblurring method using edge mask and improved residual deconvolution

    NASA Astrophysics Data System (ADS)

    Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting

    2017-04-01

    Image deconvolution problem is a challenging task in the field of image process. Using image pairs could be helpful to provide a better restored image compared with the deblurring method from a single blurred image. In this paper, a high quality image-pair-based deblurring method is presented using the improved RL algorithm and the gain-controlled residual deconvolution technique. The input image pair includes a non-blurred noisy image and a blurred image captured for the same scene. With the estimated blur kernel, an improved RL deblurring method based on edge mask is introduced to obtain the preliminary deblurring result with effective ringing suppression and detail preservation. Then the preliminary deblurring result is served as the basic latent image and the gain-controlled residual deconvolution is utilized to recover the residual image. A saliency weight map is computed as the gain map to further control the ringing effects around the edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the preliminary deblurring result with the recovered residual image. An optical experimental vibration platform is set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework obtains a superior performance in both subjective and objective assessments and has a wide application in many image deblurring fields.

  11. Aspheric glass lens modeling and machining

    NASA Astrophysics Data System (ADS)

    Johnson, R. Barry; Mandina, Michael

    2005-08-01

    The incorporation of aspheric lenses in complex lens systems can provide significant image quality improvement, reduction of the number of lens elements, smaller size, and lower weight. Recently, it has become practical to manufacture aspheric glass lenses using diamond-grinding methods. The evolution of the manufacturing technology is discussed for a specific aspheric glass lens. When a prototype all-glass lens system (80 mm efl, F/2.5) was fabricated and tested, it was observed that the image quality was significantly worse than predicted by the optical design software. The cause of the degradation was identified as the large aspheric element in the lens. Identification was possible by precision mapping of the spatial coordinates of the lens surface and then transforming this data into an appropriate optical surface defined by derived grid sag data. The resulting optical analysis yielded a modeled image consistent with that observed when testing the prototype lens system in the laboratory. This insight into a localized slope-error problem allowed improvements in the fabrication process to be implemented. In the second fabrication attempt, the resulting aspheric lens provided a remarkable improvement in the observed image quality, although still falling somewhat short of the desired image quality goal. In parallel with the fabrication enhancement effort, optical modeling of the surface was undertaken to determine how much surface error, and which error types, were allowable while still achieving the desired image quality goal. With this knowledge, final improvements were made to the fabrication process. The third prototype lens achieved the goal of optical performance. Rapid development of the aspheric glass lens was made possible by the interactive relationship between the optical designer, diamond-grinding personnel, and the metrology personnel. With rare exceptions, the subsequent production lenses were optically acceptable and afforded reasonable manufacturing costs.

  12. DEEP U BAND AND R IMAGING OF GOODS-SOUTH: OBSERVATIONS, DATA REDUCTION AND FIRST RESULTS ,

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nonino, M.; Cristiani, S.; Vanzella, E.

    2009-08-01

    We present deep imaging in the U band covering an area of 630 arcmin{sup 2} centered on the southern field of the Great Observatories Origins Deep Survey (GOODS). The data were obtained with the VIMOS instrument at the European Southern Observatory (ESO) Very Large Telescope. The final images reach a magnitude limit U {sub lim} {approx} 29.8 (AB, 1{sigma}, in a 1'' radius aperture), and have good image quality, with full width at half-maximum {approx}0.''8. They are significantly deeper than previous U-band images available for the GOODS fields, and better match the sensitivity of other multiwavelength GOODS photometry. The deeper U-band data yield significantly improved photometric redshifts, especially in key redshift ranges such as 2 < z < 4, and deeper color-selected galaxy samples, e.g., Lyman break galaxies at z {approx} 3. We also present the co-addition of archival ESO VIMOS R-band data, with R {sub lim} {approx} 29 (AB, 1{sigma}, 1'' radius aperture), and image quality {approx}0.''75. We discuss the strategies for the observations and data reduction, and present the first results from the analysis of the co-added images.

  13. Image analysis for maintenance of coating quality in nickel electroplating baths--real time control.

    PubMed

    Vidal, M; Amigo, J M; Bro, R; van den Berg, F; Ostra, M; Ubide, C

    2011-11-07

    The aim of this paper is to show how it is possible to extract analytical information from images acquired with a flatbed scanner and make use of this information for real time control of a nickel plating process. Digital images of plated steel sheets in a nickel bath are used to follow the process under degradation of specific additives. Dedicated software has been developed to make the obtained results accessible to process operators. This includes obtaining the RGB image, selecting the red channel data exclusively, calculating the histogram of the red channel data, and calculating the mean colour value (MCV) and the standard deviation of the red channel data. MCV is then used by the software to determine the concentration of the additives Supreme Plus Brightner (SPB) and SA-1 (for confidentiality reasons, the chemical contents cannot be further detailed) present in the bath (these two additives degrade and their concentration changes during the process). Finally, the software informs the operator when the bath is generating unsuitable quality plating and suggests the amount of SPB and SA-1 to be added in order to recover the original plating quality. Copyright © 2011 Elsevier B.V. All rights reserved.
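The red-channel statistics described above are straightforward to compute; a minimal sketch (function name is ours, and the calibration from MCV to SPB/SA-1 concentrations is proprietary and therefore omitted):

```python
import numpy as np

def mean_colour_value(rgb_image):
    """Mean colour value (MCV), standard deviation and histogram of the
    red channel of an H x W x 3 uint8 scanner image, as described above.
    """
    red = np.asarray(rgb_image)[:, :, 0].astype(float)
    hist, _ = np.histogram(red, bins=256, range=(0, 256))  # operator display
    return red.mean(), red.std(), hist
```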

  14. Improved telescope focus using only two focus images

    NASA Astrophysics Data System (ADS)

    Barrick, Gregory; Vermeulen, Tom; Thomas, James

    2008-07-01

    In an effort to reduce the amount of time spent focusing the telescope and to improve the quality of the focus, a new procedure has been investigated and implemented at the Canada-France-Hawaii Telescope (CFHT). The new procedure is based on a paper by Tokovinin and Heathcote and requires only two out-of-focus images to determine the best focus for the telescope. Using only two images provides a great time savings over the five or more images required for a standard through-focus sequence. In addition, it has been found that this method is significantly less sensitive to seeing variations than the traditional through-focus procedure, so the quality of the resulting focus is better. Finally, the new procedure relies on a second moment calculation and so is computationally easier and more robust than methods using a FWHM calculation. The new method has been implemented for WIRCam for the past 18 months, for MegaPrime for the past year, and has recently been implemented for ESPaDOnS.
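The second-moment size measure underlying the two-image focus method can be sketched as an RMS radius about the image centroid; this is a generic computation, not CFHT's implementation, and subtracting the image minimum as a crude background estimate is a simplification:

```python
import numpy as np

def second_moment_radius(img):
    """RMS image radius from second moments, a robust alternative to FWHM."""
    img = np.asarray(img, dtype=float)
    img = img - img.min()          # crude background removal (simplification)
    total = img.sum()
    yy, xx = np.indices(img.shape)
    cy = (yy * img).sum() / total  # intensity-weighted centroid
    cx = (xx * img).sum() / total
    r2 = ((yy - cy) ** 2 + (xx - cx) ** 2) * img
    return np.sqrt(r2.sum() / total)
```

Assuming image size grows roughly linearly with defocus, the sizes s1, s2 measured at two focus positions z1, z2 straddling focus place best focus near z = (z1*s2 + z2*s1)/(s1 + s2); the actual Tokovinin and Heathcote estimator is more elaborate than this interpolation.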

  15. Next-generation Event Horizon Telescope developments: new stations for enhanced imaging

    NASA Astrophysics Data System (ADS)

    Palumbo, Daniel; Johnson, Michael; Doeleman, Sheperd; Chael, Andrew; Bouman, Katherine

    2018-01-01

    The Event Horizon Telescope (EHT) is a multinational Very Long Baseline Interferometry (VLBI) network of dishes joined to resolve general relativistic behavior near a supermassive black hole. The imaging quality of the EHT is largely dependent upon the sensitivity and spatial frequency coverage of the many baselines between its constituent telescopes. The EHT already contains many highly sensitive dishes, including the crucial Atacama Large Millimeter/Submillimeter Array (ALMA), making it viable to add smaller, cheaper telescopes to the array, greatly improving future capabilities of the EHT. We develop tools for optimizing the positions of new dishes in planned arrays. We also explore the feasibility of adding small orbiting dishes to the EHT, and develop orbital optimization tools for space-based VLBI imaging. Unlike the Millimetron mission planned to be at L2, we specifically treat near-earth orbiters, and find rapid filling of spatial frequency coverage across a large range of baseline lengths. Finally, we demonstrate significant improvement in image quality when adding small dishes to planned arrays in simulated observations.

  16. Local motion-compensated method for high-quality 3D coronary artery reconstruction

    PubMed Central

    Liu, Bo; Bai, Xiangzhi; Zhou, Fugen

    2016-01-01

    The 3D reconstruction of coronary arteries from X-ray angiograms rotationally acquired on a C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from residual motion. This work proposed a new local motion-compensated reconstruction method to handle this issue. An initial image was first reconstructed using a regularized iterative reconstruction method. Then a 3D/2D registration method was proposed to estimate the residual vessel motion. Finally, the residual motion was compensated in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that a high-quality 3D reconstruction could be obtained, with results comparable to a state-of-the-art method. PMID:28018741

  17. Automatic initialization and quality control of large-scale cardiac MRI segmentations.

    PubMed

    Albà, Xènia; Lekadir, Karim; Pereañez, Marco; Medrano-Gracia, Pau; Young, Alistair A; Frangi, Alejandro F

    2018-01-01

    Continuous advances in imaging technologies enable ever more comprehensive phenotyping of human anatomy and physiology. Concomitant reduction of imaging costs has resulted in widespread use of imaging in large clinical trials and population imaging studies. Magnetic Resonance Imaging (MRI), in particular, offers one-stop-shop multidimensional biomarkers of cardiovascular physiology and pathology. A wide range of analysis methods offer sophisticated cardiac image assessment and quantification for clinical and research studies. However, most methods have only been evaluated on relatively small databases often not accessible for open and fair benchmarking. Consequently, published performance indices are not directly comparable across studies and their translation and scalability to large clinical trials or population imaging cohorts is uncertain. Most existing techniques still rely on considerable manual intervention for the initialization and quality control of the segmentation process, becoming prohibitive when dealing with thousands of images. The contributions of this paper are three-fold. First, we propose a fully automatic method for initializing cardiac MRI segmentation, by using image features and random forests regression to predict an initial position of the heart and key anatomical landmarks in an MRI volume. In processing a full imaging database, the technique predicts the optimal corrective displacements and positions in relation to the initial rough intersections of the long and short axis images. Second, we introduce for the first time a quality control measure capable of identifying incorrect cardiac segmentations with no visual assessment. The method uses statistical, pattern and fractal descriptors in a random forest classifier to detect failures to be corrected or removed from subsequent statistical analysis. Finally, we validate these new techniques within a full pipeline for cardiac segmentation applicable to large-scale cardiac MRI databases. 
The results obtained based on over 1200 cases from the Cardiac Atlas Project show the promise of fully automatic initialization and quality control for population studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Redundancy Analysis of Capacitance Data of a Coplanar Electrode Array for Fast and Stable Imaging Processing

    PubMed Central

    Wen, Yintang; Zhang, Zhenda; Zhang, Yuyan; Sun, Dongtao

    2017-01-01

    A coplanar electrode array sensor is established for the imaging of composite-material adhesive-layer defect detection. The sensor is based on the capacitive edge effect, which leads to capacitance data that are considerably weak and susceptible to environmental noise. The inverse problem of coplanar array electrical capacitance tomography (C-ECT) is ill-conditioned, so a small error in the capacitance data can seriously affect the quality of reconstructed images. In order to achieve a stable image reconstruction process, a redundancy analysis method for capacitance data is proposed. The proposed method is based on contribution rate and anti-interference capability. According to the redundancy analysis, the capacitance data are divided into valid and invalid data. When the image is reconstructed from valid data only, the sensitivity matrix needs to be changed accordingly. In order to evaluate the effectiveness of the sensitivity map, singular value decomposition (SVD) is used. Finally, the two-dimensional (2D) and three-dimensional (3D) images are reconstructed by the Tikhonov regularization method. Compared with images reconstructed from the raw capacitance data, the stability of the image reconstruction process is improved while the quality of the reconstructed images is not degraded. As a result, much of the invalid data need not be collected, and the data acquisition time can be reduced. PMID:29295537
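The final Tikhonov step amounts to a regularised least-squares solve of the linearised C-ECT problem; a minimal sketch (matrix and variable names are ours, and the identity regulariser is the simplest choice). Restricting to valid data simply means dropping the corresponding rows of the sensitivity matrix and capacitance vector before the solve:

```python
import numpy as np

def tikhonov(S, c, alpha):
    """Tikhonov-regularised solution of the linearised ECT problem S g = c.

    S: sensitivity matrix (n_capacitances x n_pixels), c: capacitance
    vector, alpha: regularisation parameter. Solves
    (S^T S + alpha I) g = S^T c, stabilising the ill-conditioned inverse
    problem at the cost of some smoothing.
    """
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + alpha * np.eye(n), S.T @ c)
```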

  19. Infrared and visible image fusion scheme based on NSCT and low-level visual features

    NASA Astrophysics Data System (ADS)

    Li, Huafeng; Qiu, Hongmei; Yu, Zhengtao; Zhang, Yafei

    2016-05-01

    Multi-scale transform (MST) is an efficient tool for image fusion. Recently, many fusion methods have been developed based on different MSTs, and they have shown potential in many application fields. In this paper, we propose an effective infrared and visible image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain, in which the NSCT is first employed to decompose each of the source images into a series of high-frequency subbands and one low-frequency subband. To improve the fusion performance, we design two new activity measures for the fusion of the lowpass and highpass subbands. These measures build on the fact that the human visual system (HVS) perceives image quality mainly according to certain low-level features. Selection principles for the different subbands are then presented based on the corresponding activity measures. Finally, the merged subbands are constructed according to the selection principles, and the final fused image is produced by applying the inverse NSCT to these merged subbands. Experimental results demonstrate the effectiveness and superiority of the proposed method over state-of-the-art fusion methods in terms of both visual effect and objective evaluation results.
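
    A minimal sketch of the decompose/select/reconstruct pipeline above, with one important substitution: a single-level low/high split via a box blur stands in for the multi-level NSCT, and the activity measures are simplified to averaging (lowpass) and max-absolute-coefficient (highpass). All function names are ours.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple separable box filter used here as the lowpass decomposition."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(ir, vis):
    # Decompose each source into one lowpass and one highpass band
    # (a single-level stand-in for the NSCT decomposition).
    low_ir, low_vis = box_blur(ir), box_blur(vis)
    high_ir, high_vis = ir - low_ir, vis - low_vis
    # Lowpass rule: simple average (stand-in activity measure).
    low_f = 0.5 * (low_ir + low_vis)
    # Highpass rule: keep the coefficient with the larger absolute activity.
    high_f = np.where(np.abs(high_ir) >= np.abs(high_vis), high_ir, high_vis)
    # The "inverse transform" is just the sum of the merged bands here.
    return low_f + high_f

rng = np.random.default_rng(1)
ir = rng.random((32, 32))
vis = rng.random((32, 32))
fused = fuse(ir, vis)
```

    A sanity property of any such scheme is that fusing an image with itself returns the image (up to floating-point rounding).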

  20. Automatic Detection of Welding Defects using Deep Neural Network

    NASA Astrophysics Data System (ADS)

    Hou, Wenhui; Wei, Ye; Guo, Jie; Jin, Yi; Zhu, Chang'an

    2018-01-01

    In this paper, we propose an automatic three-stage detection scheme for weld defects in x-ray images. First, a preprocessing procedure locates the weld region in each image. Then, a classification model based on a deep neural network is trained and tested on patches cropped from the x-ray images; this model learns the intrinsic features of the images without extra hand-crafted computation. Finally, a sliding-window approach applies the trained model to detect defects across whole images. In order to evaluate the performance of the model, we carry out several experiments. The results demonstrate that the proposed classification model is effective in assessing the quality of welded joints.
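
    The third stage (sliding-window detection with the trained patch classifier) can be sketched as follows; `classify` is a hypothetical stand-in for the trained network, here replaced by a simple mean-intensity rule on a toy image.

```python
import numpy as np

def sliding_window_detect(image, classify, win=32, stride=16):
    """Scan the image with a fixed-size window and collect positive hits.

    `classify` stands in for the trained deep-network patch classifier;
    it maps a (win, win) patch to a defect probability in [0, 1].
    """
    h, w = image.shape
    hits = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = image[y:y + win, x:x + win]
            if classify(patch) > 0.5:
                hits.append((y, x))
    return hits

# Toy image: a bright square plays the role of a weld defect, and the
# stand-in classifier simply fires on high mean intensity.
img = np.zeros((128, 128))
img[40:72, 40:72] = 1.0
detections = sliding_window_detect(img, lambda p: p.mean())
```

    With stride half the window size, the detector fires on every window whose overlap with the defect exceeds half its area.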

  1. Hyperspectral imaging of water quality - past applications and future directions.

    NASA Astrophysics Data System (ADS)

    Ross, M. R. V.; Pavelsky, T.

    2017-12-01

    Inland waters control the delivery of sediment, carbon, and nutrients from land to ocean by transforming, depositing, and transporting constituents downstream. However, the dominant in situ conditions that control these processes are poorly constrained, especially at larger spatial scales. Hyperspectral imaging, a remote sensing technique that uses reflectance in hundreds of narrow spectral bands, can be used to estimate water quality parameters like sediment and carbon concentration over larger water bodies. Here, we review methods and applications for using hyperspectral imagery to generate near-surface two-dimensional models of water quality in lakes and rivers. Further, we show applications using newly available data from the National Ecological Observation Network aerial observation platform in the Black Warrior and Tombigbee Rivers, Alabama. We demonstrate large spatial variation in chlorophyll, colored dissolved organic matter, and turbidity in each river and uneven mixing of water quality constituents for several kilometers. Finally, we demonstrate some novel techniques using hyperspectral imagery to deconvolve dissolved organic matter spectral signatures to specific organic matter components.

  2. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    NASA Astrophysics Data System (ADS)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing the given raster image. The second strategy is based on predicting noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the parameters of the transformations the image will be subjected to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode, but it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on a large set of real-life color images acquired by digital cameras and shown to provide a more than twofold increase in average CR compared to SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.
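
    As a concrete illustration of "a scaling factor that determines quantization steps for the default JPEG table", here is the common libjpeg-style quality-to-table scaling applied to the standard luminance table from ITU-T T.81 Annex K. The paper's adaptive, noise-driven selection of the factor is not reproduced; this only shows what the scaling factor does.

```python
# Default JPEG luminance quantization table (ITU-T T.81, Annex K),
# in row-major order.
BASE_TABLE = [
    16, 11, 10, 16, 24, 40, 51, 61,
    12, 12, 14, 19, 26, 58, 60, 55,
    14, 13, 16, 24, 40, 57, 69, 56,
    14, 17, 22, 29, 51, 87, 80, 62,
    18, 22, 37, 56, 68, 109, 103, 77,
    24, 35, 55, 64, 81, 104, 113, 92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103, 99,
]

def scaled_table(quality):
    """Scale the default table by a quality factor (libjpeg convention)."""
    quality = max(1, min(100, quality))
    s = 5000 // quality if quality < 50 else 200 - 2 * quality
    # Larger s -> larger quantization steps -> stronger compression.
    return [max(1, min(255, (q * s + 50) // 100)) for q in BASE_TABLE]

table_hq = scaled_table(95)   # near "super-high quality"
table_lq = scaled_table(40)   # more aggressive compression
```

    Quality 50 reproduces the base table exactly, and quality 100 collapses every step to 1 (essentially lossless quantization).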

  3. An approach to optimize sample preparation for MALDI imaging MS of FFPE sections using fractional factorial design of experiments.

    PubMed

    Oetjen, Janina; Lachmund, Delf; Palmer, Andrew; Alexandrov, Theodore; Becker, Michael; Boskamp, Tobias; Maass, Peter

    2016-09-01

    A standardized workflow for matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI imaging MS) is a prerequisite for the routine use of this promising technology in clinical applications. We present an approach to develop standard operating procedures for MALDI imaging MS sample preparation of formalin-fixed and paraffin-embedded (FFPE) tissue sections based on a novel quantitative measure of dataset quality. To cover many parts of the complex workflow and simultaneously test several parameters, experiments were planned according to a fractional factorial design of experiments (DoE). The effect of ten different experiment parameters was investigated in two distinct DoE sets, each consisting of eight experiments. FFPE rat brain sections were used as standard material because of low biological variance. The mean peak intensity and a recently proposed spatial complexity measure were calculated for a list of 26 predefined peptides obtained by in silico digestion of five different proteins and served as quality criteria. A five-way analysis of variance (ANOVA) was applied on the final scores to retrieve a ranking of experiment parameters with increasing impact on data variance. Graphical abstract MALDI imaging experiments were planned according to fractional factorial design of experiments for the parameters under study. Selected peptide images were evaluated by the chosen quality metric (structure and intensity for a given peak list), and the calculated values were used as an input for the ANOVA. The parameters with the highest impact on the quality were deduced and SOPs recommended.
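
    The experimental layout above (ten parameters covered by two sets of eight runs) follows a standard 2^(k-p) fractional factorial construction, which can be sketched as follows; the generator choices below are illustrative, not the ones used in the paper.

```python
from itertools import product
from math import prod

def fractional_factorial(k_base, generators):
    """Build a 2^(k-p) two-level fractional factorial design.

    The k_base base factors are enumerated in full; each generator, a
    tuple of base-factor indices, defines one aliased factor as the
    product of those columns (e.g. (0, 1) means the extra factor D = AB).
    Levels are coded as -1 / +1.
    """
    runs = []
    for base in product((-1, 1), repeat=k_base):
        extras = tuple(prod(base[i] for i in g) for g in generators)
        runs.append(base + extras)
    return runs

# Eight runs covering five factors: a 2^(5-2) design with D = AB and E = AC.
design = fractional_factorial(3, [(0, 1), (0, 2)])
```

    Each factor column is balanced (equal numbers of -1 and +1), which is what lets the subsequent ANOVA separate main effects with only eight runs.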

  4. Evaluation and testing of image quality of the Space Solar Extreme Ultraviolet Telescope

    NASA Astrophysics Data System (ADS)

    Peng, Jilong; Yi, Zhong; Zhou, Shuhong; Yu, Qian; Hou, Yinlong; Wang, Shanshan

    2018-01-01

    For the space solar extreme ultraviolet telescope, a star point test cannot be performed in the 19.5 nm band because no sufficiently bright light source is available. In this paper, the point spread function of the optical system is calculated to evaluate the imaging performance of the telescope system. Taking into account the surface errors produced by the actual fabrication processes, such as small grinding head polishing and magnetorheological finishing, the optical design software Zemax and the data analysis software Matlab are used to directly calculate the point spread function of the telescope system. Matlab code generates the required surface error grid data, which are loaded onto the specified surface of the telescope system through Dynamic Data Exchange (DDE), a communication technique connecting Zemax and Matlab. Because different fabrication methods lead to surface errors of different magnitude, distribution, and spatial frequency, their impact on imaging also differs. Therefore, the characteristics of the surface errors produced by different machining methods are studied; combined with the position of each surface in the optical system, simulating their influence on image quality provides valuable guidance for choosing the fabrication technology. Additionally, we analyze the relationship between surface error and image quality evaluation. To ensure that the final polished mirror meets the image quality requirements, one or several evaluation methods for the surface error should be chosen according to its spatial frequency characteristics.
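
    The core computation (a point spread function from a pupil carrying a surface-error phase map) can be sketched in NumPy instead of the Zemax/Matlab DDE pipeline; the grid size, the white-noise error statistics, and the direct wavefront-error-to-phase conversion below are simplifying assumptions.

```python
import numpy as np

def psf_from_surface_error(n=64, wavelength=19.5e-9, error_rms=0.0, seed=0):
    """PSF of a circular-pupil system with a random wavefront-error map.

    The wavefront error (in meters) becomes a phase screen, and the PSF is
    |FFT(pupil * exp(i*phi))|^2, normalized to unit total energy.
    """
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    pupil = (x**2 + y**2 <= 1.0).astype(float)
    rng = np.random.default_rng(seed)
    wfe = error_rms * rng.standard_normal((n, n))   # surface-error grid data
    phi = 2 * np.pi * wfe / wavelength              # phase error
    field = pupil * np.exp(1j * phi)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return psf / psf.sum()

psf_ideal = psf_from_surface_error(error_rms=0.0)
psf_bad = psf_from_surface_error(error_rms=2e-9)    # 2 nm rms error
# Strehl-like ratio: peak of the aberrated PSF over the ideal peak.
strehl = psf_bad.max() / psf_ideal.max()
```

    At 19.5 nm even a 2 nm rms error noticeably depresses the PSF peak, which is why the spatial-frequency content of each fabrication method's error matters so much at this wavelength.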

  5. A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images

    PubMed Central

    Tang, Yunwei; Jing, Linhai; Ding, Haifeng

    2017-01-01

    The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA). Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and limitations of the spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. The indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs of the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. Through comparison and visual analysis, the results verified the effectiveness of the proposed method and demonstrated the reliability and improvements of this method with respect to other methods. PMID:29064416
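
    The final combination step (two indicators merged into one global score via the Mahalanobis distance) can be sketched as follows; the indicator values, ideal point, and covariance below are illustrative, not taken from the paper.

```python
import numpy as np

def combined_quality_score(indicators, ideal, cov):
    """Combine the two segmentation indicators into one global score.

    The score is the Mahalanobis distance between the observed indicator
    vector (heterogeneity, autocorrelation) and the ideal point, so the
    trade-off between the two indicators is weighted by their covariance.
    Smaller distance means better segmentation quality.
    """
    inv_cov = np.linalg.inv(cov)
    d = np.asarray(indicators, float) - np.asarray(ideal, float)
    return float(np.sqrt(d @ inv_cov @ d))

# Hypothetical indicator values for two candidate segmentations.
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
good = combined_quality_score([0.9, 0.85], ideal=[1.0, 1.0], cov=cov)
bad = combined_quality_score([0.5, 0.30], ideal=[1.0, 1.0], cov=cov)
```

    Unlike a plain Euclidean distance, this weighting prevents the noisier indicator from dominating the combined score.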

  6. Performance Analysis of Visible Light Communication Using CMOS Sensors.

    PubMed

    Do, Trong-Hop; Yoo, Myungsik

    2016-02-29

    This paper elucidates the fundamentals of visible light communication systems that use the rolling shutter mechanism of CMOS sensors. All related information involving different subjects, such as photometry, camera operation, photography and image processing, are studied in tandem to explain the system. Then, the system performance is analyzed with respect to signal quality and data rate. To this end, a measure of signal quality, the signal to interference plus noise ratio (SINR), is formulated. Finally, a simulation is conducted to verify the analysis.
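
    The SINR measure reduces to the standard ratio of signal power to interference-plus-noise power, usually quoted in decibels; a one-line sketch with made-up powers:

```python
import math

def sinr_db(signal_power, interference_power, noise_power):
    """Signal to interference plus noise ratio, in decibels."""
    return 10 * math.log10(signal_power / (interference_power + noise_power))

# Hypothetical received powers (arbitrary linear units) for one row of
# rolling-shutter pixels.
value = sinr_db(signal_power=1.0, interference_power=0.05, noise_power=0.05)
```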

  7. Performance Analysis of Visible Light Communication Using CMOS Sensors

    PubMed Central

    Do, Trong-Hop; Yoo, Myungsik

    2016-01-01

    This paper elucidates the fundamentals of visible light communication systems that use the rolling shutter mechanism of CMOS sensors. All related information involving different subjects, such as photometry, camera operation, photography and image processing, are studied in tandem to explain the system. Then, the system performance is analyzed with respect to signal quality and data rate. To this end, a measure of signal quality, the signal to interference plus noise ratio (SINR), is formulated. Finally, a simulation is conducted to verify the analysis. PMID:26938535

  8. [About professional education of future military physicians].

    PubMed

    Tegubov, V N

    2013-08-01

    One of the effective methods of professional education of future military physicians is art. Art, as a form of artistic-image reflection of reality, fully discloses the specifics of the activities of military-medical specialists in times of peace and war. The variety of art forms and their availability for use in the educational process promote the final self-determination of students in their choice of profession, improve the quality of their professional development, and ensure the formation of their personal qualities as defenders of the homeland.

  9. An improved NAS-RIF algorithm for image restoration

    NASA Astrophysics Data System (ADS)

    Gao, Weizhe; Zou, Jianhua; Xu, Rong; Liu, Changhai; Li, Hengnian

    2016-10-01

    Space optical images are inevitably degraded by atmospheric turbulence, errors of the optical system, and motion. In order to recover the true image, a novel nonnegativity and support constraints recursive inverse filtering (NAS-RIF) algorithm is proposed to restore the degraded image. First, image noise is weakened by a Contourlet denoising algorithm. Second, reliable estimation of the object support region is used to accelerate convergence of the algorithm; we introduce an optimal threshold segmentation technique to improve the support region estimate. Finally, an object construction limit and a logarithm function are added to enhance the stability of the algorithm. Experimental results demonstrate that the proposed algorithm increases the PSNR and improves the quality of the restored images, and that it converges faster than the original NAS-RIF algorithm.
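
    The nonnegativity and support constraints at the heart of NAS-RIF can be illustrated with a simplified, non-blind variant: projected Landweber iterations with a known blur kernel. The blind inverse-filter update of the real algorithm, and the Contourlet denoising and threshold-segmentation steps, are deliberately omitted; only the two constraint projections are demonstrated.

```python
import numpy as np

def restore(g, kernel, support, iters=50, step=0.5):
    """Projected Landweber deconvolution with NAS-RIF-style constraints.

    After each data-fit update the estimate is clipped to be nonnegative
    and forced to zero outside the object support region.
    """
    H = np.fft.fft2(kernel, g.shape)   # circular blur model
    f = np.zeros_like(g)
    for _ in range(iters):
        residual = g - np.real(np.fft.ifft2(np.fft.fft2(f) * H))
        grad = np.real(np.fft.ifft2(np.fft.fft2(residual) * np.conj(H)))
        f = np.clip(f + step * grad, 0.0, None) * support   # the two projections
    return f

# Synthetic scene: a bright square, blurred by a 3x3 box kernel.
true = np.zeros((32, 32)); true[12:20, 12:20] = 1.0
support = np.zeros((32, 32)); support[8:24, 8:24] = 1.0
kernel = np.zeros((32, 32)); kernel[:3, :3] = 1.0 / 9.0
g = np.real(np.fft.ifft2(np.fft.fft2(true) * np.fft.fft2(kernel, true.shape)))
f_hat = restore(g, kernel, support)
```

    The constraints act as a strong regularizer: the restored image is nonnegative, vanishes outside the support, and is closer to the original than the blurred observation.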

  10. The on-site quality-assurance system for Hyper Suprime-Cam: OSQAH

    NASA Astrophysics Data System (ADS)

    Furusawa, Hisanori; Koike, Michitaro; Takata, Tadafumi; Okura, Yuki; Miyatake, Hironao; Lupton, Robert H.; Bickerton, Steven; Price, Paul A.; Bosch, James; Yasuda, Naoki; Mineo, Sogo; Yamada, Yoshihiko; Miyazaki, Satoshi; Nakata, Fumiaki; Koshida, Shintaro; Komiyama, Yutaka; Utsumi, Yousuke; Kawanomoto, Satoshi; Jeschke, Eric; Noumaru, Junichi; Schubert, Kiaina; Iwata, Ikuru; Finet, Francois; Fujiyoshi, Takuya; Tajitsu, Akito; Terai, Tsuyoshi; Lee, Chien-Hsiu

    2018-01-01

    We have developed an automated quick data analysis system for data quality assurance (QA) for Hyper Suprime-Cam (HSC). The system was commissioned in 2012-2014, and has been offered for general observations, including the HSC Subaru Strategic Program, since 2014 March. The system provides observers with data quality information, such as seeing, sky background level, and sky transparency, based on quick analysis as data are acquired. Quick-look images and validation of image focus are also provided through an interactive web application. The system is responsible for the automatic extraction of QA information from acquired raw data into a database, to assist with observation planning, assess progress of all observing programs, and monitor long-term efficiency variations of the instrument and telescope. Enhancements of the system are being planned to facilitate final data analysis, to improve the HSC archive, and to provide legacy products for astronomical communities.

  11. Integrated editing system for Japanese text and image information "Linernote"

    NASA Astrophysics Data System (ADS)

    Tanaka, Kazuto

    The integrated Japanese text editing system "Linernote", developed by Toyo Industries Co., is explained. The system was developed on the concept of electronic publishing. It is composed of an NEC PC-9801 VX personal computer and other peripherals. Sentence, drawing, and image data are input and edited under the integrated operating environment, and the final text is printed on a laser printer. The handling efficiency of time-consuming work such as pattern input or page make-up has been improved by a draft-image indication method on the CRT. It is an up-to-date DTP system equipped with three major functions, namely, typesetting for high-quality text editing, easy drawing/tracing, and high-speed image processing.

  12. Automatic quality assessment of apical four-chamber echocardiograms using deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Abdi, Amir H.; Luong, Christina; Tsang, Teresa; Allan, Gregory; Nouranian, Saman; Jue, John; Hawley, Dale; Fleming, Sarah; Gin, Ken; Swift, Jody; Rohling, Robert; Abolmaesumi, Purang

    2017-02-01

    Echocardiography (echo) is the most common test for diagnosis and management of patients with cardiac conditions. While most medical imaging modalities benefit from a relatively automated procedure, this is not the case for echo, where the quality of the final view depends on the competency and experience of the sonographer. It is not uncommon that the sonographer does not have adequate experience to adjust the transducer and acquire a high-quality echo, which may further affect the clinical diagnosis. In this work, we aim to aid the operator during image acquisition by automatically assessing the quality of the echo and generating an Automatic Echo Score (AES). This quality assessment method is based on a deep convolutional neural network, trained end-to-end on a large dataset of apical four-chamber (A4C) echo images. For this project, an expert cardiologist reviewed 2,904 A4C images obtained from independent studies and graded their condition on a 6-level scale, with scores ranging from 0 to 5 and an almost uniform distribution across the 6 levels. The network was then trained on 80% of the data (2,345 samples). The average absolute error of the trained model in calculating the AES was 0.8 ± 0.72. The computation time of the GPU implementation of the neural network was estimated at 5 ms per frame, which is sufficient for real-time deployment.

  13. (LMRG): Microscope Resolution, Objective Quality, Spectral Accuracy and Spectral Un-mixing

    PubMed Central

    Bayles, Carol J.; Cole, Richard W.; Eason, Brady; Girard, Anne-Marie; Jinadasa, Tushare; Martin, Karen; McNamara, George; Opansky, Cynthia; Schulz, Katherine; Thibault, Marc; Brown, Claire M.

    2012-01-01

    The second study by the LMRG focuses on measuring confocal laser scanning microscope (CLSM) resolution, objective lens quality, spectral imaging accuracy and spectral un-mixing. Affordable test samples for each aspect of the study were designed, prepared and sent to 116 labs from 23 countries across the globe. Detailed protocols were designed for the three tests and customized for most of the major confocal instruments being used by the study participants. One protocol developed for measuring resolution and objective quality was recently published in Nature Protocols (Cole, R. W., T. Jinadasa, et al. (2011). Nature Protocols 6(12): 1929–1941). The first study involved 3D imaging of sub-resolution fluorescent microspheres to determine the microscope point spread function. Results of the resolution studies as well as point spread function quality (i.e. objective lens quality) from 140 different objective lenses will be presented. The second study of spectral accuracy looked at the reflection of the laser excitation lines into the spectral detection in order to determine the accuracy of these systems to report back the accurate laser emission wavelengths. Results will be presented from 42 different spectral confocal systems. Finally, samples with double orange beads (orange core and orange coating) were imaged spectrally and the imaging software was used to un-mix fluorescence signals from the two orange dyes. Results from 26 different confocal systems will be summarized. Time will be left to discuss possibilities for the next LMRG study.

  14. Building large mosaics of confocal endomicroscopic images using visual servoing.

    PubMed

    Rosa, Benoît; Erden, Mustafa Suphi; Vercauteren, Tom; Herman, Benoît; Szewczyk, Jérôme; Morel, Guillaume

    2013-04-01

    Probe-based confocal laser endomicroscopy provides real-time microscopic images of tissues contacted by a small probe that can be inserted in vivo through a minimally invasive access. Mosaicking consists in sweeping the probe in contact with the tissue to be imaged while collecting the video stream, then processing the images to assemble them into a large mosaic. While most of the literature in this field has focused on image processing, little attention has been paid so far to the way the probe motion can be controlled. This is a crucial issue, since the precision of the probe trajectory control drastically influences the quality of the final mosaic. Robotically controlled motion has the potential of providing enough precision to perform mosaicking. In this paper, we emphasize the difficulties of implementing such an approach. First, probe-tissue contacts generate deformations that prevent the image trajectory from being properly controlled. Second, in the context of the minimally invasive procedures targeted by our research, robotic devices are likely to exhibit limited quality of distal probe motion control at the microscopic scale. To cope with these problems, visual servoing from real-time endomicroscopic images is proposed in this paper. It is implemented on two different devices (a high-accuracy industrial robot and a prototype minimally invasive device). Experiments on different kinds of environments (printed paper and ex vivo tissues) show that the quality of the visually servoed probe motion is sufficient to build mosaics with minimal distortion in spite of disturbances.

  15. Drawing a baseline in aesthetic quality assessment

    NASA Astrophysics Data System (ADS)

    Rubio, Fernando; Flores, M. Julia; Puerta, Jose M.

    2018-04-01

    Aesthetic classification of images is an inherently subjective task. There does not exist a validated collection of images/photographs labeled as having good or bad quality by experts. Nowadays, the closest approximation is to use databases of photos where a group of users rate each image. Hence, there is not a unique good/bad label but a rating distribution given by user votes. Due to this peculiarity, it is not possible to state the problem of binary aesthetic supervised classification in as direct a manner as other Computer Vision tasks. Recent literature follows an approach in which researchers take the average rating for each image and establish an arbitrary threshold to determine its class or label: images above the threshold are considered of good quality, while images below it are labeled as bad quality. This paper analyzes the current literature and reviews the attributes able to represent an image, distinguishing three families: specific, general, and deep features. Among those which have proved most competitive, we have selected a representative subset, our main goal being to establish a clear experimental framework. Finally, once the features were selected, we used them on the full AVA dataset. To perform validation we report not only accuracy values, which are not very informative in this case, but also metrics able to evaluate classification power on imbalanced datasets. We have conducted a series of experiments in which distinct well-known classifiers are learned from the data. This paper thus provides what we consider valuable and valid baseline results for the given problem.

  16. Metal Artifact Suppression in Dental Cone Beam Computed Tomography Images Using Image Processing Techniques.

    PubMed

    Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh

    2018-01-01

    Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unsuitable for use. Streaking artifacts and cavities around teeth are the main causes of degradation. In this article, we propose a new artifact reduction algorithm with three parallel components. The first component extracts teeth based on modeling the image histogram with a Gaussian mixture model. The streaking artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Finally, the results of these three components are combined in a fusion step to create a visually good image that is more compatible with the human visual system. Results show that the proposed algorithm reduces the artifacts of dental CBCT images and produces clean images.
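
    The polar-domain step relies on a Cartesian-to-polar resampling, under which streaks radiating from the image center become near-vertical structures that 1-D morphological filters can attack. A nearest-neighbor sketch of that resampling (the morphological filtering itself is omitted):

```python
import numpy as np

def to_polar(img, n_r=64, n_theta=128):
    """Resample a square image onto an (r, theta) grid, nearest neighbor.

    Rows index radius, columns index angle, so a radial streak through the
    center maps to a (roughly) constant column.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.linspace(0, min(cy, cx), n_r)
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]

# A synthetic radial streak: a bright line through the image center.
img = np.zeros((65, 65))
img[32, :] = 1.0
polar = to_polar(img)
```

    In the polar image the streak occupies two full columns (theta = 0 and theta = pi), which is exactly the structure a vertical morphological opening can remove.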

  17. Metal Artifact Suppression in Dental Cone Beam Computed Tomography Images Using Image Processing Techniques

    PubMed Central

    Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh

    2018-01-01

    Background: Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unsuitable for use. Streaking artifacts and cavities around teeth are the main causes of degradation. Methods: In this article, we propose a new artifact reduction algorithm with three parallel components. The first component extracts teeth based on modeling the image histogram with a Gaussian mixture model. The streaking artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Results: The results of these three components are combined in a fusion step to create a visually good image that is more compatible with the human visual system. Conclusions: The proposed algorithm reduces the artifacts of dental CBCT images and produces clean images. PMID:29535920

  18. Scattering Removal for Finger-Vein Image Restoration

    PubMed Central

    Yang, Jinfeng; Zhang, Ben; Shi, Yihua

    2012-01-01

    Finger-vein recognition has received increased attention recently. However, the finger-vein images are always captured in poor quality. This certainly makes finger-vein feature representation unreliable, and further impairs the accuracy of finger-vein recognition. In this paper, we first give an analysis of the intrinsic factors causing finger-vein image degradation, and then propose a simple but effective image restoration method based on scattering removal. To give a proper description of finger-vein image degradation, a biological optical model (BOM) specific to finger-vein imaging is proposed according to the principle of light propagation in biological tissues. Based on BOM, the light scattering component is sensibly estimated and properly removed for finger-vein image restoration. Finally, experimental results demonstrate that the proposed method is powerful in enhancing the finger-vein image contrast and in improving the finger-vein image matching accuracy. PMID:22737028

  19. SU-D-206-04: Iterative CBCT Scatter Shading Correction Without Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Y; Wu, P; Mao, T

    2016-06-15

    Purpose: To estimate and remove the scatter contamination in the acquired projections of cone-beam CT (CBCT), suppressing the shading artifacts and improving image quality without prior information. Methods: The uncorrected CBCT images containing shading artifacts are reconstructed by applying the standard FDK algorithm to the CBCT raw projections. The uncorrected image is then segmented to generate an initial template image. To estimate the scatter signal, the differences are calculated by subtracting the simulated projections of the template image from the raw projections. Since scatter signals are dominantly continuous and low-frequency in the projection domain, they are estimated by low-pass filtering the difference signals and subtracted from the raw CBCT projections to achieve the scatter correction. Finally, the corrected CBCT image is reconstructed from the corrected projection data. Since an accurate template image is not readily segmented from the uncorrected CBCT image, the proposed scheme is iterated until the produced template no longer changes. Results: The proposed scheme is evaluated on Catphan©600 phantom data and CBCT images acquired from a pelvis patient. The results show that shading artifacts are effectively suppressed by the proposed method. Using multi-detector CT (MDCT) images as reference, quantitative analysis is performed to measure the quality of the corrected images. Compared to images without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 50 HU and increases the spatial uniformity. Conclusion: An iterative strategy that does not rely on prior information is proposed in this work to remove the shading artifacts due to scatter contamination in the projection domain. The method is evaluated in phantom and patient studies, and the results show that image quality is remarkably improved.
The proposed method is efficient and practical for addressing the poor image quality of CBCT images. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001) and the National High-tech R&D Program for Young Scientists of the Ministry of Science and Technology of China (Grant No. 2015AA020917).
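
    The iterative scatter-estimation loop can be sketched in 1-D along a detector row; `template_project`, which stands in for segmenting the current reconstruction and re-projecting the template, is supplied by the caller, and the signals below are synthetic.

```python
import numpy as np

def low_pass(x, k=15):
    """Moving-average filter: scatter is dominantly smooth and low-frequency."""
    return np.convolve(x, np.ones(k) / k, mode="same")

def scatter_correct(raw_proj, template_project, n_iter=3):
    """Iteratively estimate and subtract the scatter signal.

    Each pass re-projects a template from the current corrected data,
    low-pass filters the raw-minus-template difference to isolate the
    smooth scatter component, and subtracts it from the raw projection.
    """
    proj = raw_proj.copy()
    for _ in range(n_iter):
        primary = template_project(proj)       # scatter-free template projection
        scatter = low_pass(raw_proj - primary) # keep only the smooth part
        proj = raw_proj - scatter              # correct the raw data
    return proj

# Synthetic test: a known primary signal plus smooth scatter contamination.
x = np.linspace(0.0, 1.0, 200)
primary_true = 1.0 + 0.5 * (np.abs(x - 0.5) < 0.2)
scatter_true = 0.3 * np.exp(-((x - 0.5) ** 2) / 0.1)
raw = primary_true + scatter_true
# An idealized template projector (returns the true primary) closes the loop.
corrected = scatter_correct(raw, lambda p: primary_true)
```

    With an accurate template the low-pass step recovers almost all of the smooth scatter, leaving the sharp primary signal intact.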

  20. A professional and cost effective digital video editing and image storage system for the operating room.

    PubMed

    Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N

    2007-06-01

    We propose an easy-to-construct digital video editing system, ideal for producing video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits videography to be obtained quickly and easily. By mixing the different streams of video input from the devices in use in the operating room and applying filters and effects, a final, professional end-product is obtained. Recording on a DVD provides an inexpensive, portable, and easy-to-use medium for storing, re-editing, or taping at a later time. From the stored videography it is easy to extract high-quality still images useful for teaching, presentations, and publications. In conclusion, digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recordings. The use of FireWire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest-priced products available today.

  1. Self-recovery reversible image watermarking algorithm

    PubMed Central

    Sun, He; Gao, Shangbing; Jin, Shenghua

    2018-01-01

    The integrity of image content is essential, although most watermarking algorithms can achieve image authentication but not automatically repair damaged areas or restore the original image. In this paper, a self-recovery reversible image watermarking algorithm is proposed to recover the tampered areas effectively. First of all, the original image is divided into homogeneous blocks and non-homogeneous blocks through multi-scale decomposition, and the feature information of each block is calculated as the recovery watermark. Then, the original image is divided into 4×4 non-overlapping blocks classified into smooth blocks and texture blocks according to image textures. Finally, the recovery watermark generated by homogeneous blocks and error-correcting codes is embedded into the corresponding smooth block by mapping; watermark information generated by non-homogeneous blocks and error-correcting codes is embedded into the corresponding non-embedded smooth block and the texture block via mapping. The correlation attack is detected by invariant moments when the watermarked image is attacked. To determine whether a sub-block has been tampered with, its feature is calculated and the recovery watermark is extracted from the corresponding block. If the image has been tampered with, it can be recovered. The experimental results show that the proposed algorithm can effectively recover the tampered areas with high accuracy and high quality. The algorithm is characterized by sound visual quality and excellent image restoration. PMID:29920528
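
    The block classification step (4×4 non-overlapping blocks labeled smooth or textured before choosing embedding locations) can be sketched with a simple variance criterion; both the threshold and the variance rule are illustrative stand-ins for the paper's texture measure.

```python
import numpy as np

def classify_blocks(image, block=4, threshold=25.0):
    """Split the image into non-overlapping blocks and label each one.

    A block is "texture" if its intensity variance exceeds the threshold,
    otherwise "smooth"; the watermark embedding rules then differ per label.
    """
    h, w = image.shape
    labels = {}
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            patch = image[y:y + block, x:x + block]
            labels[(y, x)] = "texture" if patch.var() > threshold else "smooth"
    return labels

# Toy image: flat left half, noisy right half.
rng = np.random.default_rng(3)
img = np.zeros((16, 16))
img[:, 8:] = rng.integers(0, 256, size=(16, 8))
labels = classify_blocks(img)
n_texture = sum(1 for v in labels.values() if v == "texture")
```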

  2. Image Quality Assessment Based on Local Linear Information and Distortion-Specific Compensation.

    PubMed

    Wang, Hanli; Fu, Jie; Lin, Weisi; Hu, Sudeng; Kuo, C-C Jay; Zuo, Lingxuan

    2016-12-14

    Image Quality Assessment (IQA) is a fundamental yet constantly developing task in computer vision and image processing. Most IQA evaluation mechanisms are based on the agreement between subjective and objective estimation. Each image distortion type has its own property correlated with human perception; however, this intrinsic property may not be fully exploited by existing IQA methods. In this paper, we make two main contributions to the IQA field. First, a novel IQA method is developed based on a local linear model that examines the distortion between the reference and the distorted images for better alignment with human visual experience. Second, a distortion-specific compensation strategy is proposed to offset the negative effect on IQA modeling caused by different image distortion types. These score offsets are learned from several known distortion types; for an image with an unknown distortion type, a Convolutional Neural Network (CNN) based method is proposed to compute the score offset automatically. Finally, an integrated IQA metric is proposed by combining these two ideas. Extensive experiments verify the proposed IQA metric and demonstrate that the local linear model is useful in human perception modeling, especially for individual image distortions, and that the overall IQA method outperforms several state-of-the-art IQA approaches.

  3. Roles of body image-related experiential avoidance and uncommitted living in the link between body image and women's quality of life.

    PubMed

    Trindade, Inês A; Ferreira, Cláudia; Pinto-Gouveia, José

    2018-01-01

    The current study aimed to test whether the associations of body mass index, body image discrepancy, and social comparison based on physical appearance with women's psychological quality of life (QoL) would be explained by the mechanisms of body image-related experiential avoidance and patterns of uncommitted living. The sample was collected from October 2014 to March 2015 and included 737 female college students (aged between 18 and 25 years) who completed validated self-report measures. Results demonstrated that the final path model explained 43% of psychological QoL and revealed an excellent fit. Body image-related experiential avoidance had a mediational role in the association between body image discrepancy and psychological QoL. Further, the link between social comparison based on physical appearance and psychological QoL was partially mediated by body image-related experiential avoidance and uncommitted living. These findings indicate that the key mechanisms of the relationship between body image and young women's QoL were those related to maladaptive emotion regulation. It thus seems that interventions aiming to promote mental health in this population should promote acceptance of internal experiences related to physical appearance (e.g., sensations, thoughts, or emotions) and engagement in behaviors committed to life values.

  4. Color image definition evaluation method based on deep learning method

    NASA Astrophysics Data System (ADS)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, VGG16 net is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and image labels are then used to train the BP neural network, which finally performs the color image definition evaluation. The method is evaluated on images from the CSIQ database, blurred at different levels to give 4,000 images in total, divided into three categories, each representing a blur level. Of the high-dimensional feature samples, 300 out of 400 are used to train the VGG16 net and BP neural network, and the remaining 100 samples are used for testing. The experimental results show that the method takes full advantage of the learning and characterization capability of deep learning. In contrast to the major existing image clarity evaluation methods, which manually design and extract features, the method in this paper extracts image features automatically and achieves excellent image quality classification accuracy on the test data set, with an accuracy rate of 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.
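
The training stage described above (a fixed feature extractor followed by a BP neural network classifier) can be sketched as follows; the random 16-D vectors stand in for the 4,096-D VGG16 features, and the network sizes, learning rate, and epoch count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for VGG16 features: 3 blur classes, 20 samples each, 16-D vectors
# (the real pipeline uses 4,096-D VGG16 activations).
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(20, 16)) for c in (-1, 0, 1)])
y = np.repeat([0, 1, 2], 20)

# One-hidden-layer network trained with plain backpropagation (a "BP network")
W1 = rng.normal(scale=0.1, size=(16, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 3));  b2 = np.zeros(3)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)

lr = 0.5
T = np.eye(3)[y]                       # one-hot targets (the blur levels)
for _ in range(500):
    h, p = forward(X)
    g2 = (p - T) / len(X)              # gradient at the softmax output
    gW2, gb2 = h.T @ g2, g2.sum(0)
    gh = g2 @ W2.T * (1 - h ** 2)      # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

accuracy = (forward(X)[1].argmax(axis=1) == y).mean()
```

With well-separated stand-in features the classifier reaches high training accuracy; the paper's reported 96% test accuracy depends on the real VGG16 features and data split.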

  5. Training models of anatomic shape variability

    PubMed Central

    Merck, Derek; Tracton, Gregg; Saboo, Rohit; Levy, Joshua; Chaney, Edward; Pizer, Stephen; Joshi, Sarang

    2008-01-01

    Learning probability distributions of the shape of anatomic structures requires fitting shape representations to human expert segmentations from training sets of medical images. The quality of statistical segmentation and registration methods is directly related to the quality of this initial shape fitting, yet the subject is largely overlooked or described in an ad hoc way. This article presents a set of general principles to guide such training. Our novel method is to jointly estimate both the best geometric model for any given image and the shape distribution for the entire population of training images by iteratively relaxing purely geometric constraints in favor of the converging shape probabilities as the fitted objects converge to their target segmentations. The geometric constraints are carefully crafted both to obtain legal, non-self-interpenetrating shapes and to impose the model-to-model correspondences required for useful statistical analysis. The paper closes with example applications of the method to synthetic and real patient CT image sets, including same-patient male pelvis and head-and-neck images, and cross-patient kidney and brain images. Finally, we outline how this shape training serves as the basis for our approach to IGRT/ART. PMID:18777919

  6. Evaluation of Laser Stabilization and Imaging Systems for LCLS-II - Final Paper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barry, Matthew

    2015-08-20

    By combining the top-performing commercial laser beam stabilization system with the most ideal optical imaging configuration, the beamline for the Linear Accelerator Coherent Light Source II (LCLS-II) will deliver the highest quality and most stable beam to the cathode. To determine the optimal combination, LCLS-II beamline conditions were replicated and the systems tested with a He-Ne laser. The Guidestar-II and MRC active laser beam stabilization systems were evaluated for their ideal positioning and stability. Two- and four-lens optical imaging configurations were then evaluated for beam imaging quality, magnification properties, and natural stability. In their best performances when tested over fifteen hours, Guidestar-II kept the beam stable over approximately 70-110 um while the MRC system kept it stable over approximately 90-100 um. During short periods of time, Guidestar-II kept the beam stable between 10-20 um but was more susceptible to drift over time, while the MRC system maintained the beam between 30-50 um with less overall drift. The best optical imaging configuration proved to be a four-lens system that images to the iris located in the cathode room and from there to the cathode. The magnification from the iris to the cathode was 2:1, within an acceptable tolerance of the expected 2.1:1 magnification. The two-lens configuration was slightly more stable over small periods of time (less than 10 minutes) without the assistance of a stability system, approximately 55 um compared to approximately 70 um, but the four-lens configuration's beam image had a significantly flatter intensity distribution compared to the two-lens configuration, which had a Gaussian distribution. A final test still needs to be run with both stability systems running at the same time through the four-lens system. With this data, the optimal laser beam stabilization system can be determined for the beamline of LCLS-II.

  7. Lensless Photoluminescence Hyperspectral Camera Employing Random Speckle Patterns.

    PubMed

    Žídek, Karel; Denk, Ondřej; Hlubuček, Jiří

    2017-11-10

    We propose and demonstrate a spectrally-resolved photoluminescence imaging setup based on the so-called single pixel camera - a technique of compressive sensing, which enables imaging by using a single-pixel photodetector. The method relies on encoding an image by a series of random patterns. In our approach, the image encoding was maintained via laser speckle patterns generated by an excitation laser beam scattered on a diffusor. By using a spectrometer as the single-pixel detector we attained a realization of a spectrally-resolved photoluminescence camera with unmatched simplicity. We present reconstructed hyperspectral images of several model scenes. We also discuss parameters affecting the imaging quality, such as the correlation degree of speckle patterns, pattern fineness, and number of datapoints. Finally, we compare the presented technique to hyperspectral imaging using sample scanning. The presented method enables photoluminescence imaging for a broad range of coherent excitation sources and detection spectral areas.
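
The encoding-and-reconstruction principle of the single-pixel camera can be sketched numerically; here random 0/1 masks stand in for the laser speckle patterns, and a plain least-squares inversion replaces the compressive-sensing recovery that would be used with fewer patterns (a simplification, not the authors' reconstruction):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 16 * 16                                # a 16x16 scene, flattened
scene = np.zeros((16, 16))
scene[4:12, 6:10] = 1.0                    # a bright rectangle
x = scene.ravel()

# Random 0/1 patterns play the role of the laser speckle masks
A = (rng.random((n, n)) > 0.5).astype(float)

# Each "single-pixel" measurement is the total light transmitted by one pattern
y = A @ x

# With as many patterns as pixels, least squares recovers the scene exactly;
# a compressive setup (fewer patterns) would need a sparsity prior instead
x_hat = np.linalg.lstsq(A, y, rcond=None)[0]
err = np.abs(x_hat - x).max()
```

In the paper each measurement is a full spectrum rather than a scalar, so the same reconstruction is repeated per wavelength to obtain the hyperspectral image.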

  8. Spaceborne SAR Imaging Algorithm for Coherence Optimized.

    PubMed

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm that maximizes coherence, building on existing SAR imaging algorithms. The basic idea of conventional SAR imaging is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm can achieve the best focusing effect, but introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper instead applies consistent imaging parameters to the SAR echoes during focusing. Although the SNR of the output signal is slightly reduced, coherence is largely preserved, and an interferogram of high quality is finally obtained. Two scenes of Envisat ASAR data over Zhangbei are used to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application.

  9. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm that maximizes coherence, building on existing SAR imaging algorithms. The basic idea of conventional SAR imaging is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm can achieve the best focusing effect, but introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper instead applies consistent imaging parameters to the SAR echoes during focusing. Although the SNR of the output signal is slightly reduced, coherence is largely preserved, and an interferogram of high quality is finally obtained. Two scenes of Envisat ASAR data over Zhangbei are used to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446

  10. Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM

    NASA Astrophysics Data System (ADS)

    Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang

    2016-03-01

    A combined method using the 2D pseudo Wigner-Ville distribution (2D-PWVD) and the structural similarity (SSIM) index is proposed for fusing a low-resolution multi-spectral (MS) image with a high-resolution panchromatic (PAN) image. First, the intensity component of the multi-spectral image is extracted with the generalized IHS (GIHS) transform. Then, the spectrum diagrams of the intensity component of the multi-spectral image and of the panchromatic image are obtained with the 2D-PWVD. Different fusion rules are designed for different frequency information of the spectrum diagrams; the SSIM index is used to evaluate the high-frequency information of the spectrum diagrams so as to assign the weights in the fusion processing adaptively. After the new spectrum diagram is obtained according to the fusion rules, the final fused image is produced by the inverse 2D-PWVD and inverse GIHS transforms. Experimental results show that the proposed method can obtain high-quality fusion images.
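
The generalized IHS step of this pipeline can be sketched as below; the 2D-PWVD spectrum analysis and the SSIM-weighted fusion rules are omitted, so this is only the forward/inverse GIHS transform around which the paper's fusion rules operate:

```python
import numpy as np

def gihs_pansharpen(ms, pan):
    """Generalized IHS fusion: inject the PAN detail (PAN minus the MS
    intensity) into every multispectral band. This is the simplified
    transform only; the paper adds 2D-PWVD/SSIM fusion rules on top."""
    intensity = ms.mean(axis=2)            # intensity component of the MS image
    detail = pan - intensity               # high-resolution detail to inject
    return ms + detail[:, :, None]

rng = np.random.default_rng(2)
ms = rng.random((8, 8, 3))                 # toy low-resolution MS image
pan = ms.mean(axis=2) + 0.1                # toy PAN: intensity plus extra detail
fused = gihs_pansharpen(ms, pan)
```

By construction the intensity of the fused image equals the PAN image, which is the defining property of the GIHS substitution.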

  11. Diffusion-Weighted Imaging Outside the Brain: Consensus Statement From an ISMRM-Sponsored Workshop

    PubMed Central

    Taouli, Bachir; Beer, Ambros J.; Chenevert, Thomas; Collins, David; Lehman, Constance; Matos, Celso; Padhani, Anwar R.; Rosenkrantz, Andrew B.; Shukla-Dave, Amita; Sigmund, Eric; Tanenbaum, Lawrence; Thoeny, Harriet; Thomassin-Naggara, Isabelle; Barbieri, Sebastiano; Corcuera-Solano, Idoia; Orton, Matthew; Partridge, Savannah C.; Koh, Dow-Mu

    2016-01-01

    The significant advances in magnetic resonance imaging (MRI) hardware and software, sequence design, and postprocessing methods have made diffusion-weighted imaging (DWI) an important part of body MRI protocols and have fueled extensive research on quantitative diffusion outside the brain, particularly in the oncologic setting. In this review, we summarize the most up-to-date information on DWI acquisition and clinical applications outside the brain, as discussed in an ISMRM-sponsored symposium held in April 2015. We first introduce recent advances in acquisition, processing, and quality control; then review scientific evidence in major organ systems; and finally describe future directions. PMID:26892827

  12. A stopping criterion to halt iterations at the Richardson-Lucy deconvolution of radiographic images

    NASA Astrophysics Data System (ADS)

    Almeida, G. L.; Silvani, M. I.; Souza, E. S.; Lopes, R. T.

    2015-07-01

    Radiographic images, like any experimentally acquired images, are affected by spoiling agents that degrade their final quality. Degradation caused by agents of a systematic character can be reduced by some kind of treatment, such as an iterative deconvolution. This approach requires two parameters, namely the system resolution and the best number of iterations needed to achieve the best final image. This work proposes a novel procedure to estimate the best number of iterations, which replaces cumbersome visual inspection with a comparison of numbers. These numbers are deduced from the image histograms, taking into account the global difference G between them for two subsequent iterations. The developed algorithm, including a Richardson-Lucy deconvolution procedure, has been embodied in a Fortran program capable of plotting the 1st derivative of G as the processing progresses and stopping it automatically when this derivative - within the data dispersion - reaches zero. The radiograph of a specially chosen object, acquired with thermal neutrons from the Argonauta research reactor at the Instituto de Engenharia Nuclear - CNEN, Rio de Janeiro, Brazil, has undergone this treatment with fair results.
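
A minimal sketch of a Richardson-Lucy loop halted by a histogram-difference criterion is shown below; the tolerance, bin count, and minimum iteration count are hypothetical choices, not values from the paper:

```python
import numpy as np

def conv(signal, kernel):
    return np.convolve(signal, kernel, mode='same')

def rl_deconvolve(blurred, psf, max_iter=200, min_iter=10, tol=1e-3):
    """Richardson-Lucy deconvolution halted when the global histogram
    difference G between consecutive iterates levels off (a simplified
    reading of the stopping criterion above; tol/min_iter are
    illustrative assumptions)."""
    est = np.full_like(blurred, max(blurred.mean(), 1e-6))
    psf_flip = psf[::-1]
    hist_range = (0.0, max(blurred.max(), 1e-6) * 2)
    prev_hist = np.histogram(est, bins=32, range=hist_range)[0]
    prev_G = None
    for it in range(1, max_iter + 1):
        # Standard RL multiplicative update
        ratio = blurred / np.maximum(conv(est, psf), 1e-12)
        est = est * conv(ratio, psf_flip)
        # Global histogram difference G between consecutive iterates
        hist = np.histogram(est, bins=32, range=hist_range)[0]
        G = np.abs(hist - prev_hist).sum()
        prev_hist = hist
        # Stop when the change in G (its "1st derivative") is ~zero
        if it >= min_iter and prev_G is not None and abs(G - prev_G) <= tol * max(prev_G, 1):
            return est, it
        prev_G = G
    return est, max_iter

# Toy check: blur two spikes, then deconvolve
truth = np.zeros(64); truth[20] = 1.0; truth[40] = 0.6
psf = np.array([0.25, 0.5, 0.25])
restored, iters = rl_deconvolve(conv(truth, psf), psf)
```

On this noiseless toy signal the loop sharpens the blurred spikes back toward their true positions and halts itself; real radiographs would also need the system resolution (the PSF) estimated beforehand.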

  13. Optimization of Fat-Reduced Puff Pastry Using Response Surface Methodology.

    PubMed

    Silow, Christoph; Zannini, Emanuele; Axel, Claudia; Belz, Markus C E; Arendt, Elke K

    2017-02-22

    Puff pastry is a high-fat bakery product with fat playing a key role, both during the production process and in the final pastry. In this study, response surface methodology (RSM) was successfully used to evaluate puff pastry quality for the development of a fat-reduced version. The technological parameters modified included the level of roll-in fat, the number of fat layers (50-200) and the final thickness (1.0-3.5 mm) of the laminated dough. Quality characteristics of puff pastry were measured using the Texture Analyzer with an attached Extended Craft Knife (ECK) and Multiple Puncture Probe (MPP), the VolScan and the C-Cell imaging system. The number of fat layers and final dough thickness, in combination with the amount of roll-in fat, had a significant impact on the internal and external structural quality parameters. With technological changes alone, a fat-reduced (≥30%) puff pastry was developed. The qualities of fat-reduced puff pastries were comparable to conventional full-fat (33 wt %) products. A sensory acceptance test revealed no significant differences in taste of fatness or 'liking of mouthfeel'. Additionally, the fat-reduced puff pastry resulted in a significant ( p < 0.05) positive correlation to 'liking of flavor' and overall acceptance by the assessors.

  14. Optimization of Fat-Reduced Puff Pastry Using Response Surface Methodology

    PubMed Central

    Silow, Christoph; Zannini, Emanuele; Axel, Claudia; Belz, Markus C. E.; Arendt, Elke K.

    2017-01-01

    Puff pastry is a high-fat bakery product with fat playing a key role, both during the production process and in the final pastry. In this study, response surface methodology (RSM) was successfully used to evaluate puff pastry quality for the development of a fat-reduced version. The technological parameters modified included the level of roll-in fat, the number of fat layers (50–200) and the final thickness (1.0–3.5 mm) of the laminated dough. Quality characteristics of puff pastry were measured using the Texture Analyzer with an attached Extended Craft Knife (ECK) and Multiple Puncture Probe (MPP), the VolScan and the C-Cell imaging system. The number of fat layers and final dough thickness, in combination with the amount of roll-in fat, had a significant impact on the internal and external structural quality parameters. With technological changes alone, a fat-reduced (≥30%) puff pastry was developed. The qualities of fat-reduced puff pastries were comparable to conventional full-fat (33 wt %) products. A sensory acceptance test revealed no significant differences in taste of fatness or ‘liking of mouthfeel’. Additionally, the fat-reduced puff pastry resulted in a significant (p < 0.05) positive correlation to ‘liking of flavor’ and overall acceptance by the assessors. PMID:28231095

  15. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of the proposed algorithm. PMID:27253083
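
The threshold clustering step mentioned in this abstract might look like the following sketch (the threshold values are hypothetical, and the depth estimation and multi-view color-variance analysis are omitted):

```python
import numpy as np

def cluster_specular(intensity, spec_t=0.8, sat_t=0.98):
    """Label pixels as diffuse, 'unsaturated' specular, or 'saturated'
    specular with a simple threshold rule. Thresholds are hypothetical;
    the paper couples this step with light-field depth estimation."""
    labels = np.zeros(intensity.shape, dtype=int)   # 0 = diffuse
    labels[intensity >= spec_t] = 1                 # 1 = unsaturated specular
    labels[intensity >= sat_t] = 2                  # 2 = saturated specular
    return labels

# Toy 2x2 intensity image with one specular and one saturated pixel
img = np.array([[0.2, 0.85],
                [0.99, 0.5]])
labels = cluster_specular(img)
```

In the full method, the two specular categories are then treated differently: unsaturated pixels are recovered by color variance across views, saturated ones by local color refinement.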

  16. Feature-Based Retinal Image Registration Using D-Saddle Feature

    PubMed Central

    Hasikin, Khairunnisa; A. Karim, Noor Khairiah; Ahmedy, Fatimah

    2017-01-01

    Retinal image registration is important to assist diagnosis and monitor retinal diseases, such as diabetic retinopathy and glaucoma. However, registering retinal images for various registration applications requires the detection and distribution of feature points on the low-quality regions that consist of vessels of varying contrast and sizes. A recent feature detector known as Saddle detects feature points on vessels, but these are poorly distributed and densely positioned on strong-contrast vessels. Therefore, we propose a multiresolution difference of Gaussian pyramid with the Saddle detector (D-Saddle) to detect feature points on low-quality regions that consist of vessels with varying contrast and sizes. D-Saddle is tested on the Fundus Image Registration (FIRE) Dataset, which consists of 134 retinal image pairs. Experimental results show that D-Saddle successfully registered 43% of the retinal image pairs with an average registration accuracy of 2.329 pixels, while a lower success rate is observed in four other state-of-the-art retinal image registration methods: GDB-ICP (28%), Harris-PIIFD (4%), H-M (16%), and Saddle (16%). Furthermore, the registration accuracy of D-Saddle has the weakest correlation (Spearman) with the intensity uniformity metric among all methods. Finally, a paired t-test shows that D-Saddle significantly improved the overall registration accuracy of the original Saddle. PMID:29204257
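
The multiresolution difference-of-Gaussian front end of a D-Saddle-style detector can be sketched as below (the Saddle feature test itself and the registration stage are omitted; the kernel construction and sigma values are illustrative):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1D Gaussian kernel truncated at ~3 sigma."""
    radius = int(3 * sigma) + 1
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur: filter rows, then columns."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)

def dog_pyramid(img, sigmas=(1.0, 2.0, 4.0)):
    """Difference-of-Gaussian responses at several scales -- the
    multiresolution front end on which a D-Saddle-style detector would
    then run its saddle-point test."""
    blurred = [blur(img, s) for s in sigmas]
    return [blurred[i] - blurred[i + 1] for i in range(len(blurred) - 1)]

rng = np.random.default_rng(3)
img = rng.random((32, 32))          # stand-in for a fundus image
levels = dog_pyramid(img)
```

Each DoG level emphasizes vessel-like structure at a different scale, which is what lets the detector pick up features on both faint and strong-contrast vessels.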

  17. Incomplete projection reconstruction of computed tomography based on the modified discrete algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Gao, Zongzhao; Yang, YaFei

    2018-02-01

    Based on the discrete algebraic reconstruction technique (DART), this study aims to develop and test an improved algorithm applied to incomplete projection data to generate high-quality reconstruction images with reduced artifacts and noise in computed tomography. For the incomplete projections, an augmented Lagrangian method based on compressed sensing is first used in the initial reconstruction for the segmentation step of DART, to obtain higher-contrast graphics for boundary and non-boundary pixels. Then, a block-matching 3D filtering operator is used to suppress the noise and improve the gray distribution of the reconstructed image. Finally, simulation studies on a polychromatic spectrum are performed to test the performance of the new algorithm. Study results show a significant improvement in the signal-to-noise ratios (SNRs) and average gradients (AGs) of the images reconstructed from incomplete data: the SNRs and AGs of the new images reconstructed by DART-ALBM were on average 30%-40% and 10% higher, respectively, than those of images reconstructed by DART algorithms. Since the improved DART-ALBM algorithm is more robust to limited-view reconstruction, making the edge of the image clear and improving the gray distribution of non-boundary pixels, it has the potential to improve image quality from incomplete or sparse projections.

  18. Information theoretical assessment of visual communication with wavelet coding

    NASA Astrophysics Data System (ADS)

    Rahman, Zia-ur

    1995-06-01

    A visual communication channel can be characterized by the efficiency with which it conveys information and by the quality of the images restored from the transmitted data. Efficient data representation requires the use of the constraints of the visual communication channel. Our information-theoretic analysis combines the design of the wavelet compression algorithm with the design of the visual communication channel. Shannon's communication theory, Wiener's restoration filter, and the critical design factors of image gathering and display are combined to provide metrics for measuring the efficiency of data transmission and for quantitatively assessing the visual quality of the restored image. These metrics are: a) the mutual information (Eta) between the radiance field and the restored image, and b) the efficiency of the channel, which can be roughly measured as the ratio (Eta)/H, where H is the average number of bits used to transmit the data. Huck, et al. (Journal of Visual Communication and Image Representation, Vol. 4, No. 2, 1993) have shown that channels designed to maximize (Eta) also maximize the visual quality of the restored image. Our assessment provides a framework for designing channels which provide the highest possible visual quality for a given amount of data under the critical design limitations of the image gathering and display devices. Results show that a trade-off exists between the maximum realizable information of the channel and its efficiency: an increase in one leads to a decrease in the other. The final selection of which of these quantities to maximize is, of course, application dependent.
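
The two metrics named above can be estimated from data with simple histogram estimators; the sketch below computes a histogram-based mutual information (Eta) and the ratio (Eta)/H for a simulated noisy channel (bin counts and noise level are illustrative assumptions, not the paper's estimators):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def mutual_information(a, b, bins=16):
    """Histogram estimate of the mutual information (in bits) between
    two images -- the role played by (Eta) in the assessment above."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(pxy.ravel())

rng = np.random.default_rng(4)
radiance = rng.random((64, 64))                               # radiance field
restored = radiance + 0.05 * rng.normal(size=radiance.shape)  # noisy channel output
eta = mutual_information(radiance, restored)

# Proxy for H: bits per transmitted value, from the output's marginal entropy
H = entropy(np.histogram(restored, bins=16)[0] / restored.size)
efficiency = eta / H
```

Since mutual information never exceeds either marginal entropy, the ratio stays in (0, 1], matching its reading as a channel efficiency.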

  19. Normalizing Heterogeneous Medical Imaging Data to Measure the Impact of Radiation Dose.

    PubMed

    Silva, Luís A Bastião; Ribeiro, Luís S; Santos, Milton; Neves, Nuno; Francisco, Dulce; Costa, Carlos; Oliveira, José Luis

    2015-12-01

    The production of medical imaging is a continuing trend in healthcare institutions. Quality assurance for planned radiation exposure situations (e.g. X-ray, computed tomography) requires examination-specific set-ups according to several parameters, such as the patient's age and weight, body region, and clinical indication. These data are normally stored in several formats and with different nomenclatures, which hinders the continuous and automatic monitoring of these indicators and the comparison between institutions and equipment. This article proposes a framework that aggregates, normalizes and provides different views over the collected indicators. The developed tool can be used to improve the quality of radiologic procedures and also for benchmarking and auditing purposes. Finally, a case study and several experimental results related to radiation exposure and productivity are presented and discussed.

  20. Technical Directions In High Resolution Non-Impact Printers

    NASA Astrophysics Data System (ADS)

    Dunn, S. Thomas; Dunn, Patrice M.

    1987-04-01

    There are several factors to consider when addressing the issue of non-impact printer resolution. One will find differences between the imaging resolution and the final output resolution, and most assuredly differences exist between the advertised and actual resolution of many of these systems. Beyond that, some of the technical factors that affect the resolution of a system include: scan line density, overlap, spot size, energy profile, and symmetry of imaging. Generally speaking, the user of graphic arts equipment is best advised to view output to determine the degree of acceptable quality.

  1. Transmissive liquid-crystal device for correcting primary coma aberration and astigmatism in biospecimen in two-photon excitation laser scanning microscopy

    NASA Astrophysics Data System (ADS)

    Tanabe, Ayano; Hibi, Terumasa; Ipponjima, Sari; Matsumoto, Kenji; Yokoyama, Masafumi; Kurihara, Makoto; Hashimoto, Nobuyuki; Nemoto, Tomomi

    2016-12-01

    All aberrations produced inside a biospecimen can degrade the quality of a three-dimensional image in two-photon excitation laser scanning microscopy. Previously, we developed a transmissive liquid-crystal device to correct spherical aberrations that improved the image quality of a fixed-mouse-brain slice treated with an optical clearing reagent. In this study, we developed a transmissive device that corrects primary coma aberration and astigmatism. The motivation for this study is that asymmetric aberration can be induced by the shape of a biospecimen and/or by a complicated refractive-index distribution in a sample; this can considerably degrade optical performance even near the sample surface. The device's performance was evaluated by observing fluorescence beads. The device was inserted between the objective lens and microscope revolver and succeeded in improving the spatial resolution and fluorescence signal of a bead image that was originally degraded by asymmetric aberration. Finally, we implemented the device for observing a fixed whole mouse brain with a sloping surface shape and complicated internal refractive-index distribution. The correction with the device improved the spatial resolution and increased the fluorescence signal by ~2.4×. The device can provide a simple approach to acquiring higher-quality images of biospecimens.

  2. Analysis and modeling of atmospheric turbulence on the high-resolution space optical systems

    NASA Astrophysics Data System (ADS)

    Lili, Jiang; Chen, Xiaomei; Ni, Guoqiang

    2016-09-01

    Modeling and simulation of optical remote sensing systems plays a significant role in remote sensing mission prediction, imaging system design, and image quality assessment, and has become a hot research topic at home and abroad. The influence of atmospheric turbulence on optical systems has received more and more attention as remote sensing technologies have developed. In order to study the influence of atmospheric turbulence on Earth observation systems, the atmospheric structure parameter was calculated using a weak-atmospheric-turbulence model; the relationship between the atmospheric coherence length and the high-resolution remote sensing optical system was established; the influence of atmospheric turbulence on the coefficient r0h of the ground resolution of the optical remote sensing system was then derived; and finally, the effect of atmospheric turbulence on the imaging quality of high-resolution optical systems at different orbit heights was analyzed. Results show that the influence of atmospheric turbulence on high-resolution remote sensing optical systems, whose resolution has reached the sub-meter level (0.5 m, 0.35 m, and even 0.15 m) in recent years, can be quite serious; in such cases the influence of the atmospheric turbulence must be corrected. Simulation algorithms for the PSF are presented based on the above results, and experimental and analytical results are presented.

  3. Free-breathing pediatric chest MRI: Performance of self-navigated golden-angle ordered conical ultrashort echo time acquisition.

    PubMed

    Zucker, Evan J; Cheng, Joseph Y; Haldipur, Anshul; Carl, Michael; Vasanawala, Shreyas S

    2018-01-01

    To assess the feasibility and performance of conical k-space trajectory free-breathing ultrashort echo time (UTE) chest magnetic resonance imaging (MRI) versus four-dimensional (4D) flow and the effects of 50% data subsampling and soft-gated motion correction. Thirty-two consecutive children who underwent both 4D flow and UTE ferumoxytol-enhanced chest MR (mean age: 5.4 years, range: 6 days to 15.7 years) in one 3T exam were recruited. From UTE k-space data, three image sets were reconstructed: 1) one with all data, 2) one using the first 50% of data, and 3) a final set with soft-gating motion correction, leveraging the signal magnitude immediately after each excitation. Two radiologists in blinded fashion independently scored image quality of anatomical landmarks on a 5-point scale. Ratings were compared using Wilcoxon rank-sum, Wilcoxon signed-ranks, and Kruskal-Wallis tests. Interobserver agreement was assessed with the intraclass correlation coefficient (ICC). For fully sampled UTE, mean scores for all structures were ≥4 (good-excellent). Full UTE surpassed 4D flow for lungs and airways (P < 0.001), with similar pulmonary artery (PA) quality (P = 0.62). 50% subsampling only slightly degraded all landmarks (P < 0.001), as did motion correction. Subsegmental PA visualization was possible in >93% of scans for all techniques (P = 0.27). Interobserver agreement was excellent for combined scores (ICC = 0.83). High-quality free-breathing conical UTE chest MR is feasible, surpassing 4D flow for lungs and airways, with equivalent PA visualization. Data subsampling only mildly degraded images, favoring lesser scan times. Soft-gating motion correction overall did not improve image quality. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:200-209. © 2017 International Society for Magnetic Resonance in Medicine.

  4. Image editing with Adobe Photoshop 6.0.

    PubMed

    Caruso, Ronald D; Postel, Gregory C

    2002-01-01

    The authors introduce Photoshop 6.0 for radiologists and demonstrate basic techniques of editing gray-scale cross-sectional images intended for publication and for incorporation into computerized presentations. For basic editing of gray-scale cross-sectional images, the Tools palette and the History/Actions palette pair should be displayed. The History palette may be used to undo a step or series of steps. The Actions palette is a menu of user-defined macros that save time by automating an action or series of actions. Converting an image to 8-bit gray scale is the first editing function. Cropping is the next action. Both decrease file size. Use of the smallest file size necessary for the purpose at hand is recommended. Final file size for gray-scale cross-sectional neuroradiologic images (8-bit, single-layer TIFF [tagged image file format] at 300 pixels per inch) intended for publication varies from about 700 Kbytes to 3 Mbytes. Final file size for incorporation into computerized presentations is about 10-100 Kbytes (8-bit, single-layer, gray-scale, high-quality JPEG [Joint Photographic Experts Group]), depending on source and intended use. Editing and annotating images before they are inserted into presentation software is highly recommended, both for convenience and flexibility. Radiologists should find that image editing can be carried out very rapidly once the basic steps are learned and automated. Copyright RSNA, 2002
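The quoted file sizes can be sanity-checked with simple arithmetic; the 3 × 3 inch figure dimensions below are a hypothetical example, not taken from the article:

```python
# Uncompressed size of an 8-bit, single-layer grayscale image at 300 ppi.
width_in, height_in, ppi = 3.0, 3.0, 300       # hypothetical figure size
pixels = int(width_in * ppi) * int(height_in * ppi)   # 900 x 900 pixels
size_kb = pixels / 1024                        # 1 byte per pixel at 8-bit depth
```

A 3 × 3 inch figure comes to about 791 KB uncompressed, consistent with the 700 KB to 3 MB range the authors report for publication-bound TIFFs.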

  5. Low dose CT image restoration using a database of image patches

    NASA Astrophysics Data System (ADS)

    Ha, Sungsoo; Mueller, Klaus

    2015-01-01

Reducing the radiation dose in CT imaging has become an active research topic, and many solutions have been proposed to remove the significant noise and streak artifacts in the reconstructed images. Most of these methods operate within the domain of the image that is subject to restoration. This, however, poses limitations on the extent of filtering possible. We advocate taking into consideration the vast body of external knowledge that exists in the domain of already acquired medical CT images, since, after all, this is what radiologists do when they examine these low-quality images. We can incorporate this knowledge by creating a database of prior scans, either of the same patient or a diverse corpus of different patients, to assist in the restoration process. Our paper follows up on our previous work that used a database of images. Using whole images, however, is challenging, since it requires tedious and error-prone registration and alignment. Our new method eliminates these problems by storing a diverse set of small image patches in conjunction with a localized similarity matching scheme. We also show empirically that it is sufficient to store these patches without anatomical tags, since their statistics are sufficiently strong to yield good similarity matches from the database and, as a direct effect, produce image restorations of high quality. A final experiment demonstrates that our global database approach can recover image features that are difficult to preserve with conventional denoising approaches.
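A minimal sketch of the localized similarity-matching idea (the data here are invented; the actual system stores a large corpus of patches harvested from prior CT scans):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "database" of 500 clean 4x4 patches (flattened), standing in
# for patches harvested from previously acquired high-quality scans.
database = rng.normal(0.0, 1.0, size=(500, 16))

def restore_patch(noisy_patch, database, k=5):
    """Replace a noisy patch by the mean of its k nearest database patches,
    matched by Euclidean distance."""
    d = np.linalg.norm(database - noisy_patch.ravel(), axis=1)
    nearest = database[np.argsort(d)[:k]]
    return nearest.mean(axis=0).reshape(noisy_patch.shape)

noisy = rng.normal(0.0, 1.0, size=(4, 4))
restored = restore_patch(noisy, database)
```

In practice the matching is also localized spatially, so only patches from anatomically plausible neighborhoods compete.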

  6. Composite SAR imaging using sequential joint sparsity

    NASA Astrophysics Data System (ADS)

    Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.

    2017-06-01

This paper investigates accurate and efficient ℓ1 regularization methods for generating synthetic aperture radar (SAR) images. Although ℓ1 regularization algorithms are already employed in SAR imaging, practical and efficient implementation in terms of real-time imaging remains a challenge. Here we demonstrate that fast numerical operators can be used to robustly implement ℓ1 regularization methods that are as efficient as or more efficient than traditional approaches such as back projection, while providing superior image quality. In particular, we develop a sequential joint sparsity model for composite SAR imaging which naturally combines the joint sparsity methodology with composite SAR. Our technique, which can be implemented using standard, fractional, or higher-order total variation regularization, is able to reduce the effects of speckle and other noisy artifacts with little additional computational cost. Finally, we show that generalizing total variation regularization to non-integer and higher orders provides improved flexibility and robustness for SAR imaging.
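The total-variation regularization at the core of such methods can be sketched with a plain first-order, gradient-descent TV denoiser; this illustrates standard TV only, not the paper's sequential joint sparsity model or its fast operators:

```python
import numpy as np

def tv_denoise(f, lam=0.2, step=0.1, iters=200, eps=1e-3):
    """Minimize 0.5*||u - f||^2 + lam*TV(u) by gradient descent on a
    smoothed total-variation term (eps avoids division by zero)."""
    u = f.copy()
    for _ in range(iters):
        gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx**2 + gy**2 + eps)
        px, py = gx / mag, gy / mag
        # divergence of (px, py) approximates the (negative) TV gradient
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - step * ((u - f) - lam * div)
    return u

rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.2 * rng.normal(size=clean.shape)
denoised = tv_denoise(noisy)
```

Fractional or higher-order variants replace the first-difference operators with fractional or higher-order difference stencils; the iteration structure is otherwise similar.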

  7. Cuckoo search algorithm based satellite image contrast and brightness enhancement using DWT-SVD.

    PubMed

    Bhandari, A K; Soni, V; Kumar, A; Singh, G K

    2014-07-01

This paper presents a new contrast enhancement approach based on the Cuckoo Search (CS) algorithm and DWT-SVD for quality improvement of low-contrast satellite images. The input image is decomposed into four frequency subbands through the Discrete Wavelet Transform (DWT), and the CS algorithm is used to optimize each DWT subband; the singular value matrix of the thresholded low-low subband image is then obtained, and finally the enhanced image is reconstructed by applying the IDWT. The singular value matrix carries the intensity information of the image, and any modification of the singular values changes the intensity of the given image. The experimental results show the superiority of the proposed method in terms of PSNR, MSE, mean, and standard deviation over conventional and state-of-the-art techniques. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
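A stripped-down version of the DWT-SVD pipeline can be sketched with a hand-rolled one-level Haar transform and a fixed singular-value gain standing in for the Cuckoo-Search-optimized weighting of the paper:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    a = np.zeros((h, 2 * w)); d = np.zeros((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    img = np.zeros((2 * h, 2 * w))
    img[0::2, :] = a + d; img[1::2, :] = a - d
    return img

rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.5, size=(16, 16))   # a low-contrast stand-in image
ll, lh, hl, hh = haar_dwt2(img)
U, s, Vt = np.linalg.svd(ll, full_matrices=False)
gain = 1.3   # fixed gain here; the paper finds this weighting via Cuckoo Search
enhanced = haar_idwt2(U @ np.diag(s * gain) @ Vt, lh, hl, hh)
```

Scaling the singular values of the LL band stretches the global intensity range, which is exactly why modifying the singular value matrix changes image intensity.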

  8. Pansharpening in coastal ecosystems using Worldview-2 imagery

    NASA Astrophysics Data System (ADS)

    Ibarrola-Ulzurrun, Edurne; Marcello-Ruiz, Javier; Gonzalo-Martin, Consuelo

    2016-10-01

Both climate change and anthropogenic pressure are producing a decline in ecosystem natural resources. In this work, a vulnerable coastal ecosystem, the Maspalomas Natural Reserve (Canary Islands, Spain), is analyzed. The development of advanced image processing techniques, applied to new satellites with very high resolution (VHR) sensors, is essential to obtain accurate and systematic information about such natural areas. Thus, remote sensing offers a practical and cost-effective means for good environmental management, although some improvements are needed through the application of pansharpening techniques. A preliminary assessment was performed, selecting classical and new algorithms that could achieve good performance with WorldView-2 imagery. Moreover, different quality indices were used in order to assess which pansharpening technique gives the better fused image. A total of 7 pansharpening algorithms were analyzed using 6 spectral and spatial quality indices. The quality assessment was implemented for the whole set of multispectral bands, and for those bands covered by the wavelength range of the panchromatic image and outside of it. After an extensive evaluation, the most suitable algorithm was the Weighted Wavelet `à trous' through Fractal Dimension Maps technique, which provided the best compromise between spectral and spatial quality for the image. Finally, a Quality Map Analysis was performed in order to study the fusion in each band at the local level. In conclusion, a novel analysis has been conducted covering the evaluation of fusion methods in shallow-water areas, and the results of this study have been applied to the generation of challenging thematic maps of protected coastal and dune areas.
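As an example of the kind of spectral quality index used in such assessments (ERGAS is a common choice, though the paper's exact set of six indices is not listed in this abstract), a minimal sketch:

```python
import numpy as np

def ergas(reference, fused, ratio=4):
    """ERGAS spectral quality index for a (rows, cols, bands) image pair:
    lower is better, 0 means the fused image matches the reference exactly.
    `ratio` is the panchromatic-to-multispectral resolution ratio."""
    terms = []
    for b in range(reference.shape[2]):
        rmse = np.sqrt(np.mean((reference[:, :, b] - fused[:, :, b])**2))
        terms.append((rmse / reference[:, :, b].mean())**2)
    return 100.0 / ratio * np.sqrt(np.mean(terms))

rng = np.random.default_rng(2)
ref = rng.uniform(50, 200, size=(32, 32, 4))   # invented 4-band reflectances
score_same = ergas(ref, ref)
score_noisy = ergas(ref, ref + rng.normal(0, 5, size=ref.shape))
```

Spatial indices (e.g. correlation against the panchromatic band) complement this spectral measure, which is how a compromise between the two is judged.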

  9. Imaging of Mercury and Venus from a flyby

    USGS Publications Warehouse

    Murray, B.C.; Belton, M.J.S.; Danielson, G. Edward; Davies, M.E.; Kuiper, G.P.; O'Leary, B. T.; Suomi, V.E.; Trask, N.J.

    1971-01-01

    This paper describes the results of study of an imaging experiment planned for the 1973 Mariner Venus/Mercury flyby mission. Scientific objectives, mission constraints, analysis of alternative systems, and the rationale for final choice are presented. Severe financial constraints ruled out the best technical alternative for flyby imaging, a film/readout system, or even significant re-design of previous Mariner vidicon camera/tape recorder systems. The final selection was a vidicon camera quite similar to that used for Mariner Mars 1971, but with the capability of real time transmission during the Venus and Mercury flybys. Real time data return became possible through dramatic increase in the communications bandwidth at only modest sacrifice in the quality of the returned pictures. Two identical long focal length cameras (1500 mm) were selected and it will be possible to return several thousand pictures from both planets at resolutions ranging from equivalent to Earthbased to tenths of a kilometer at encounter. Systematic high resolution ultraviolet photography of Venus is planned after encounter in an attempt to understand the nature of the mysterious ultraviolet markings and their apparent 4- to 5-day rotation period. Full disk coverage in mosaics will produce pictures of both planets similar in quality to Earthbased telescopic pictures of the Moon. The increase of resolution, more than three orders of magnitude, will yield an exciting first look at two planets whose closeup appearance is unknown. ?? 1971.

  10. Data acquisition with a positron emission tomograph

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freifelder, R.; Karp, J.S.

    1997-12-31

Positron Emission Tomography (PET) is a clinical imaging modality used in Nuclear Medicine. PET measures functionality rather than anatomical features and is therefore invaluable in the treatment of diseases which are characterized by functional changes in organs rather than anatomical changes. Typical diseases for which PET is used are cancer, epilepsy, and heart disease. While the scanners are not very complex, the performance demands on the devices are high. Excellent spatial resolution, 4-5 mm, and high sensitivity are key to maintaining high image quality. Compensation or suppression of scattered radiation is also necessary for good image quality. The ability to acquire data under high counting rates is also necessary in order to minimize the injected dose to the patient, minimize the patient's time in the scanner, and finally to minimize blurring due to patient motion. We have adapted various techniques in our data acquisition system which will be reported on in this talk. These include pulse clipping using lumped delay lines, flash ADCs with short sampling time, the use of a local positioning algorithm to limit the number of data words being used in subsequent second-level software triggers and calculations, and finally the use of high-speed dedicated calculator boards for on-line rebinning and reduction of the data. Modifications to the system to allow for transmission scanning will also be discussed.

  11. TU-B-19A-01: Image Registration II: TG132-Quality Assurance for Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brock, K; Mutic, S

    2014-06-15

AAPM Task Group 132 was charged with a review of the current approaches and solutions for image registration in radiotherapy and to provide recommendations for quality assurance and quality control of these clinical processes. As the results of image registration are always used as the input of another process for planning or delivery, it is important for the user to understand and document the uncertainty associated with the algorithm in general and the result of a specific registration. The recommendations of this task group, which at the time of abstract submission are currently being reviewed by the AAPM, include the following components. The user should understand the basic image registration techniques and methods of visualizing image fusion. The disclosure of the basic components of the image registration by commercial vendors is critical in this respect. The physicists should perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system. A comprehensive commissioning process should be performed and documented by the physicist prior to clinical use of the system. As documentation is important to the safe implementation of this process, a request and report system should be integrated into the clinical workflow. Finally, a patient-specific QA practice should be established for efficient evaluation of image registration results. The implementation of these recommendations will be described and illustrated during this educational session. Learning Objectives: Highlight the importance of understanding the image registration techniques used in the clinic. Describe the end-to-end tests needed for stand-alone registration systems. Illustrate a comprehensive commissioning program using both phantom data and clinical images. Describe a request and report system to ensure communication and documentation. Demonstrate a clinically efficient patient QA practice for evaluation of image registration.
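An end-to-end check of a rigid registration algorithm can be as simple as applying a known shift to phantom data and verifying that it is recovered; a minimal sketch using phase correlation (an illustration of the testing idea, not a TG-132 prescription):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Recover the integer translation taking image b to image a (here b is
    assumed to be a circularly shifted copy of a) via the phase-correlation
    peak in the Fourier domain."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy, dx

rng = np.random.default_rng(3)
phantom = rng.normal(size=(64, 64))            # stand-in phantom image
shifted = np.roll(phantom, (5, 9), axis=(0, 1))  # known ground-truth shift
recovered = phase_correlation_shift(shifted, phantom)
```

The commissioning test then compares the recovered shift against the applied one and documents the residual as part of the system's registration uncertainty.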

  12. An improved robust blind motion de-blurring algorithm for remote sensing images

    NASA Astrophysics Data System (ADS)

    He, Yulong; Liu, Jin; Liang, Yonghui

    2016-10-01

Shift-invariant motion blur can be modeled as a convolution of the true latent image and the blur kernel with additive noise. Blind motion de-blurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion de-blurring algorithm that proves well suited to processing remote sensing images. We find that an inaccurate blur kernel is the main factor behind low-quality restored images. To improve image quality, we make the following contributions. For robust kernel estimation, first, we adapt the multi-scale scheme to ensure that the edge map can be constructed accurately; second, an effective salient-edge selection method based on RTV (Relative Total Variation) is used to extract salient structure from texture; third, an alternating iterative method is introduced to perform kernel optimization; in this step, we adopt the l1 and l0 norms as priors to remove noise and ensure the continuity of the blur kernel. For the final latent image reconstruction, an improved adaptive deconvolution algorithm based on the TV-l2 model is used to recover the latent image; we control the regularization weight adaptively in different regions according to the local image characteristics in order to preserve tiny details and eliminate noise and ringing artifacts. Synthetic remote sensing images are used to test the proposed algorithm, and results demonstrate that it obtains an accurate blur kernel and achieves better de-blurring results.
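For the final non-blind reconstruction step, a much simpler Fourier-domain Wiener filter illustrates how the latent image is recovered once a kernel is in hand; the paper's adaptive TV-l2 deconvolution is more sophisticated than this constant-weight sketch:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, balance=0.01):
    """Non-blind deconvolution in the Fourier domain; `balance` plays the
    role of the regularization weight (held constant here, not adaptive)."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H)**2 + balance)
    return np.fft.ifft2(F).real

rng = np.random.default_rng(4)
sharp = rng.uniform(size=(32, 32))
kernel = np.zeros((32, 32)); kernel[0, :3] = 1 / 3.0   # horizontal motion blur
blurred = np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel)).real
restored = wiener_deconvolve(blurred, kernel)
```

With an inaccurate kernel this inversion amplifies errors into ringing, which is exactly why the paper invests most of its effort in kernel estimation.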

  13. A laser-based vision system for weld quality inspection.

    PubMed

    Huang, Wei; Kovacevic, Radovan

    2011-01-01

Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems have been studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence, positions, and sizes of weld defects can be accurately identified; therefore, non-destructive weld quality inspection can be achieved.
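The triangulation geometry can be sketched with a toy calculation (the angle and scale below are hypothetical; a real sensor calibrates the full camera-laser geometry rather than using this simplified model):

```python
import math

def height_from_offset(pixel_offset, mm_per_pixel, laser_angle_deg):
    """Simplified laser triangulation: the laser sheet strikes the surface
    at laser_angle_deg from vertical while the camera looks straight down,
    so a lateral shift d of the imaged line maps to height h = d / tan(angle)."""
    d = pixel_offset * mm_per_pixel
    return d / math.tan(math.radians(laser_angle_deg))

# A 12-pixel line shift at 0.05 mm/pixel with a 30-degree laser angle:
h = height_from_offset(12, 0.05, 30.0)   # about 1.04 mm of surface height
```

Scanning the laser line along the weld and stacking these per-column heights yields the 3D profile from which defect positions and sizes are measured.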

  14. A Laser-Based Vision System for Weld Quality Inspection

    PubMed Central

    Huang, Wei; Kovacevic, Radovan

    2011-01-01

Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems have been studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence, positions, and sizes of weld defects can be accurately identified; therefore, non-destructive weld quality inspection can be achieved. PMID:22344308

  15. Robust scatter correction method for cone-beam CT using an interlacing-slit plate

    NASA Astrophysics Data System (ADS)

    Huang, Kui-Dong; Xu, Zhe; Zhang, Ding-Hua; Zhang, Hua; Shi, Wen-Long

    2016-06-01

Cone-beam computed tomography (CBCT) has been widely used in medical imaging and industrial nondestructive testing, but the presence of scattered radiation causes significant reduction of image quality. In this article, a robust scatter correction method for CBCT using an interlacing-slit plate (ISP) is developed for convenient practice. Firstly, a Gaussian filtering method is proposed to compensate for the missing data of the inner scatter image, while simultaneously avoiding excessively large calculated inner-scatter values and smoothing the inner scatter field. Secondly, an interlacing-slit scan without detector gain correction is carried out to enhance the practicality and convenience of the scatter correction method. Finally, a denoising step for scatter-corrected projection images is added to the process flow to control the noise amplification. The experimental results show that the improved method not only makes the scatter correction more robust and convenient, but also achieves good quality of the scatter-corrected slice images. Supported by National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), Aeronautical Science Fund of China (2014ZE53059), and Fundamental Research Funds for Central Universities of China (3102014KYJD022)
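The first step, smoothly filling a scatter field from sparse measurements, can be illustrated with a normalized Gaussian-weighted average (a stand-in for the paper's Gaussian filtering; the slit positions and scatter values below are invented):

```python
import numpy as np

def gaussian_fill(samples_x, samples_v, grid_x, sigma=8.0):
    """Fill a full 1-D scatter profile from sparse slit-shadow samples using
    a normalized Gaussian-weighted average. The result is smooth and always
    bounded by the sampled values, which avoids excessively large estimates."""
    w = np.exp(-0.5 * ((grid_x[:, None] - samples_x[None, :]) / sigma)**2)
    return (w @ samples_v) / w.sum(axis=1)

slit_x = np.array([0.0, 16, 32, 48, 64])         # detector columns behind slits
scatter = np.array([10.0, 14, 18, 14, 10])       # scatter measured there
grid = np.arange(65, dtype=float)
profile = gaussian_fill(slit_x, scatter, grid)
```

The filled profile is then subtracted from the open-field projection before reconstruction; the 2-D case applies the same weighting over both detector axes.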

  16. Research on marine and freshwater fish identification model based on hyper-spectral imaging technology

    NASA Astrophysics Data System (ADS)

    Fu, Yan; Guo, Pei-yuan; Xiang, Ling-zi; Bao, Man; Chen, Xing-hai

    2013-08-01

With the gradual maturation of hyperspectral imaging technology, its application to nondestructive detection and recognition of meat has become a current research focus. In this paper, marine and freshwater fish were studied: the collected spectral curve data were pre-processed and features were extracted, and, combined with BP and LVQ network structures, a predictive model of hyperspectral image data for marine and freshwater fish was initially established, finally realizing qualitative analysis and identification of marine and freshwater fish quality. The results of this study show that hyperspectral imaging technology combined with BP and LVQ artificial neural network models can be used for the detection and identification of marine and freshwater fish. Hyperspectral data acquisition can be carried out without any pretreatment of the samples; hyperspectral imaging is thus a lossless, high-accuracy, and rapid method for assessing fish quality. In this study, only 30 samples were used for exploratory qualitative identification; although satisfactory results were achieved, we will further increase the sample size to perform quantitative identification and verify the feasibility of this approach.
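The LVQ side of such a model is simple enough to sketch; below is a generic LVQ1 trainer on synthetic "spectral feature" clusters, not the study's actual data or network configuration:

```python
import numpy as np

def lvq_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    """LVQ1: for each sample, move the winning prototype toward the sample
    if the labels match, away from it if they do not."""
    P = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = np.argmin(np.linalg.norm(P - x, axis=1))
            sign = 1.0 if proto_labels[i] == label else -1.0
            P[i] += sign * lr * (x - P[i])
    return P

rng = np.random.default_rng(8)
# Invented 5-dimensional spectral features for two well-separated classes.
marine = rng.normal(0.0, 0.3, size=(30, 5))
fresh = rng.normal(2.0, 0.3, size=(30, 5))
X = np.vstack([marine, fresh]); y = np.array([0] * 30 + [1] * 30)
P = lvq_train(X, y, np.array([[0.5] * 5, [1.5] * 5]), np.array([0, 1]))
pred = [int(np.argmin(np.linalg.norm(P - x, axis=1))) for x in X]
```

A BP (back-propagation) network would replace the prototype update with gradient descent on a multilayer perceptron; the feature extraction front-end is shared.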

  17. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    NASA Astrophysics Data System (ADS)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
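The underlying iteration resembles Richardson-Lucy deconvolution; a plain (non-ordered-subset, motion-free-PSF) version, as a sketch of the idea rather than the paper's implementation:

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=30):
    """Plain Richardson-Lucy deconvolution via FFT-based circular
    convolution (the paper accelerates a comparable iteration with ordered
    subsets and a motion-derived system matrix)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(iters):
        forward = np.fft.ifft2(np.fft.fft2(estimate) * H).real
        ratio = blurred / np.maximum(forward, 1e-12)
        # conj(H) performs the correlation (back-projection) step
        estimate *= np.fft.ifft2(np.fft.fft2(ratio) * np.conj(H)).real
    return estimate

rng = np.random.default_rng(5)
truth = np.zeros((32, 32)); truth[12:20, 12:20] = 1.0   # toy "activity" patch
psf = np.zeros((32, 32)); psf[:2, :2] = 0.25            # small blur kernel
blurred = np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf)).real
restored = richardson_lucy(blurred, psf)
```

In the motion-correction setting the PSF is built from the measured head trajectory, so deblurring is only as good as the tracking data, as the paper's phantom experiments quantify.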

  18. A review of consensus test methods for established medical imaging modalities and their implications for optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Pfefer, Joshua; Agrawal, Anant

    2012-03-01

    In recent years there has been increasing interest in development of consensus, tissue-phantom-based approaches for assessment of biophotonic imaging systems, with the primary goal of facilitating clinical translation of novel optical technologies. Well-characterized test methods based on tissue phantoms can provide useful tools for performance assessment, thus enabling standardization and device inter-comparison during preclinical development as well as quality assurance and re-calibration in the clinical setting. In this review, we study the role of phantom-based test methods as described in consensus documents such as international standards for established imaging modalities including X-ray CT, MRI and ultrasound. Specifically, we focus on three image quality characteristics - spatial resolution, spatial measurement accuracy and image uniformity - and summarize the terminology, metrics, phantom design/construction approaches and measurement/analysis procedures used to assess these characteristics. Phantom approaches described are those in routine clinical use and tend to have simplified morphology and biologically-relevant physical parameters. Finally, we discuss the potential for applying knowledge gained from existing consensus documents in the development of standardized, phantom-based test methods for optical coherence tomography.
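Of the three image quality characteristics discussed, image uniformity has a particularly compact metric; a sketch of the NEMA-style integral uniformity often applied to uniform-region phantom ROIs (the exact formula varies between the standards reviewed):

```python
import numpy as np

def integral_uniformity(roi):
    """Integral uniformity over a uniform-region ROI:
    (max - min) / (max + min); 0 means perfectly uniform."""
    return (roi.max() - roi.min()) / (roi.max() + roi.min())

flat = np.full((32, 32), 100.0)            # ideal uniform phantom region
shaded = flat.copy(); shaded[:, -1] = 90.0  # 10% shading artifact on one edge
u_flat = integral_uniformity(flat)
u_shaded = integral_uniformity(shaded)
```

Spatial resolution and measurement accuracy require structured targets (bar patterns, point sources, distance fiducials) rather than a single formula, which is why phantom design features so prominently in these consensus documents.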

  19. Derivation of the scan time requirement for maintaining a consistent PET image quality

    NASA Astrophysics Data System (ADS)

    Kim, Jin Su; Lee, Jae Sung; Kim, Seok-Ki

    2015-05-01

Objectives: the image quality of PET for larger patients is relatively poor, even though the injection dose is optimized considering the NECR characteristics of the PET scanner. This poor image quality is due to the lower level of maximum NECR that can be achieved in these large patients. The aim of this study was to optimize the PET scan time to obtain a consistent PET image quality regardless of body size, based on the relationship between the patient-specific NECR (pNECR) and body weight. Methods: eighty patients (M/F=53/27, body weight: 059 ± 1 kg) underwent whole-body FDG PET scans using a Philips GEMINI GS PET/CT scanner after an injection of 0.14 mCi/kg FDG. The relationship between the scatter fraction (SF) and body weight was determined by repeated Monte Carlo simulations using a NEMA scatter phantom, the size of which varied according to the relationship between the abdominal circumference and body weight. Using this information, the pNECR was calculated from the prompt and delayed PET sinograms to obtain the prediction equation of NECR vs. body weight. The time scaling factor (FTS) for the scan duration was finally derived to make PET images with equivalent SNR levels. Results: the SF and NECR had the following nonlinear relationships with body weight: SF = 0.15·(body weight)^0.3 and NECR = 421.36·(body weight)^-0.84. The equation derived for FTS was FTS = 0.01·(body weight) + 0.2, which means that, for example, a 120-kg person should be scanned 1.8 times longer than a 70-kg person, or the scan time for a 40-kg person can be reduced by 30%. Conclusion: the equation of the relative time demand derived in this study will be useful for maintaining consistent PET image quality in clinics.
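Taking the fitted relations at face value, they can be evaluated directly; this is just a transcription of the abstract's formulas, not code from the study:

```python
def scatter_fraction(weight_kg):
    """SF = 0.15 * weight^0.3, the fit reported in the abstract."""
    return 0.15 * weight_kg ** 0.3

def necr(weight_kg):
    """NECR = 421.36 * weight^-0.84, the fit reported in the abstract."""
    return 421.36 * weight_kg ** -0.84

def scan_time_factor(weight_kg):
    """F_TS = 0.01 * weight + 0.2, the relative scan-duration scaling."""
    return 0.01 * weight_kg + 0.2

# Example: scatter rises and NECR falls with weight, so the scan-time factor
# grows to hold SNR roughly constant across body sizes.
f70, f120 = scan_time_factor(70), scan_time_factor(120)
```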

  20. Geodetic glacier mass balances at the push of a button: application of Structure from Motion technology on aerial images in mountain regions

    NASA Astrophysics Data System (ADS)

    Bolch, T.; Mölg, N.

    2017-12-01

The application of Structure-from-Motion (SfM) to generate digital terrain models (DTMs) from images of various kinds of sources has strongly increased in recent years. The major reason for this is its easy handling in comparison to conventional photogrammetry. In glaciology, DTMs are intensely used, among others, to calculate geodetic mass balances. Few studies have investigated the application of SfM to aerial images in mountainous terrain, and results look promising. We tested this technique in a demanding environment in the Swiss Alps including very steep slopes and snow- and ice-covered terrain. SfM (using the commercial software packages Agisoft Photoscan and Pix4DMapper) and conventional photogrammetry (ERDAS Photogrammetry) were applied to archival aerial images for nine dates between 1946 and 2005, and the results were compared regarding bundle adjustment and final DTM quality. The overall precision of the DTMs could be determined with the use of a modern, high-quality reference DTM by Swisstopo. Results suggest a high performance of SfM in producing DTMs of similar quality to conventional photogrammetry. At comparable quality (little noise and few artefacts), the ground resolution can be up to 50% higher, with 3-6 times less user effort. However, the controls in the commercial SfM software packages are limited in comparison to ERDAS Photogrammetry. SfM performs less reliably when few images with little overlap are processed. Overall, the uncertainties of DTMs from the different software packages are comparable and mostly within the uncertainty range of the reference DTM, making them highly valuable for glaciological purposes. Even though SfM facilitates the largely automated production of high-quality DTMs, the user is not exempt from a thorough quality check, at best with reference data where available.
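Once two DTMs are in hand, the geodetic mass balance itself is simple arithmetic; a sketch assuming a volume-to-mass conversion density of 850 kg/m^3 (a commonly used value, not quoted in the abstract) and invented DTM values:

```python
import numpy as np

def geodetic_mass_balance(dtm_old, dtm_new, cell_area_m2, years,
                          density=850.0):
    """Mean specific geodetic mass balance in m w.e. per year from two
    co-registered DTMs over the glacier, using a volume-to-mass conversion
    density in kg/m^3 (850 is a common assumption)."""
    dh = dtm_new - dtm_old                       # elevation change per cell, m
    volume_change = dh.sum() * cell_area_m2      # m^3
    area = dh.size * cell_area_m2                # glacier area, m^2
    return volume_change / area * density / 1000.0 / years

dtm_1946 = np.full((10, 10), 3000.0)             # invented elevations, m
dtm_2005 = dtm_1946 - 30.0                       # uniform 30 m of thinning
mb = geodetic_mass_balance(dtm_1946, dtm_2005, 25.0**2, 2005 - 1946)
```

The DTM precision assessed against the Swisstopo reference feeds directly into the uncertainty of `dh`, which is why the quality comparison between SfM and conventional photogrammetry matters for the mass-balance result.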

  1. Radioactive Quality Evaluation and Cross Validation of Data from the HJ-1A/B Satellites' CCD Sensors

    PubMed Central

    Zhang, Xin; Zhao, Xiang; Liu, Guodong; Kang, Qian; Wu, Donghai

    2013-01-01

Data from multiple sensors are frequently used in Earth science to gain a more complete understanding of spatial information changes. Higher quality and mutual consistency are prerequisites when multiple sensors are jointly used. The HJ-1A/B satellites were successfully launched on 6 September 2008. There are four charge-coupled device (CCD) sensors with identical spatial resolution and spectral range onboard the HJ-1A/B satellites. Whether these data are mutually consistent is a major issue that must be addressed before they are used. This research aims to evaluate the data consistency and radioactive quality of the four CCDs. First, images of urban, desert, lake, and ocean areas are chosen as the objects of evaluation. Second, objective evaluation variables, such as mean, variance, and angular second moment, are used to characterize image performance. Finally, a cross-validation method is used to examine the correlation between the data from the four HJ-1A/B CCDs and data gathered from the moderate resolution imaging spectro-radiometer (MODIS). The results show that the image quality of the HJ-1A/B CCDs is stable, and the digital number distribution of the CCD data is relatively low. In cross validation with MODIS, the root mean square errors of bands 1, 2 and 3 range from 0.055 to 0.065, and for band 4 it is 0.101. The data from the HJ-1A/B CCDs show good consistency. PMID:23881127
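The cross-validation statistic is a per-band RMSE between matched observations; a minimal sketch with invented reflectance samples (the study's matched CCD/MODIS pairs are not reproduced here):

```python
import numpy as np

def band_rmse(ccd, modis):
    """Per-band root mean square error between matched observations;
    rows are matched samples, columns are spectral bands."""
    return np.sqrt(np.mean((ccd - modis)**2, axis=0))

rng = np.random.default_rng(6)
modis = rng.uniform(0.0, 0.4, size=(100, 4))                # 4 bands
ccd = modis + rng.normal(0.0, 0.06, size=modis.shape)       # simulated offset
rmse = band_rmse(ccd, modis)
```

A higher RMSE in one band relative to the others (as reported for band 4 here) flags where the sensors agree least.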

  2. Radioactive quality evaluation and cross validation of data from the HJ-1A/B satellites' CCD sensors.

    PubMed

    Zhang, Xin; Zhao, Xiang; Liu, Guodong; Kang, Qian; Wu, Donghai

    2013-07-05

Data from multiple sensors are frequently used in Earth science to gain a more complete understanding of spatial information changes. Higher quality and mutual consistency are prerequisites when multiple sensors are jointly used. The HJ-1A/B satellites were successfully launched on 6 September 2008. There are four charge-coupled device (CCD) sensors with identical spatial resolution and spectral range onboard the HJ-1A/B satellites. Whether these data are mutually consistent is a major issue that must be addressed before they are used. This research aims to evaluate the data consistency and radioactive quality of the four CCDs. First, images of urban, desert, lake, and ocean areas are chosen as the objects of evaluation. Second, objective evaluation variables, such as mean, variance, and angular second moment, are used to characterize image performance. Finally, a cross-validation method is used to examine the correlation between the data from the four HJ-1A/B CCDs and data gathered from the moderate resolution imaging spectro-radiometer (MODIS). The results show that the image quality of the HJ-1A/B CCDs is stable, and the digital number distribution of the CCD data is relatively low. In cross validation with MODIS, the root mean square errors of bands 1, 2 and 3 range from 0.055 to 0.065, and for band 4 it is 0.101. The data from the HJ-1A/B CCDs show good consistency.

  3. Uses of megavoltage digital tomosynthesis in radiotherapy

    NASA Astrophysics Data System (ADS)

    Sarkar, Vikren

With the advent of intensity-modulated radiotherapy, radiation treatment plans are becoming more conformal to the tumor, with decreasing margins. It is therefore of prime importance that the patient be positioned correctly prior to treatment, and image-guided treatment is necessary for intensity-modulated radiotherapy plans to be implemented successfully. Current advanced imaging devices require costly hardware and software upgrades, and radiation imaging solutions, such as cone-beam computed tomography, may deliver extra radiation dose to the patient in order to acquire better-quality images. Thus, there is a need to extend the capabilities of existing imaging devices while reducing cost and radiation dose. Existing electronic portal imaging devices can be used to generate computed-tomography-like tomograms from projection images acquired over a small angle using the technique of cone-beam digital tomosynthesis. Since it uses a fraction of the images required for computed tomography reconstruction, this technique correspondingly delivers only a fraction of the imaging dose to the patient. Furthermore, cone-beam digital tomosynthesis can be offered as a software-only solution as long as a portal imaging device is available. In this study, the feasibility of performing digital tomosynthesis using individually acquired megavoltage images from a charge-coupled-device-based electronic portal imaging device was investigated. Three digital tomosynthesis reconstruction algorithms, shift-and-add, filtered back-projection, and the simultaneous algebraic reconstruction technique, were compared with respect to final image quality and imaging radiation dose. A software platform, DART, was created using a combination of the Matlab and C++ languages. The platform allows for the registration of a reference cone-beam digital tomosynthesis (CBDT) image against a daily acquired set to determine how to shift the patient prior to treatment.
Finally, the software was extended to investigate whether the digital tomosynthesis dataset could be used in an adaptive radiotherapy regimen by using the Pinnacle treatment planning software to recalculate the delivered dose. The feasibility study showed that the megavoltage CBDT images visually agreed with corresponding megavoltage computed tomography images. The comparative study showed that the best compromise between image quality and imaging dose is obtained when 11 projection images, acquired over an imaging angle of 40°, are used with the filtered back-projection algorithm. DART was successfully used to register reference and daily image sets to within 1 mm in-plane and 2.5 mm out-of-plane. The DART platform was also effectively used to generate updated files that the Pinnacle treatment planning system used to calculate the updated dose in a rigidly shifted patient. These doses were then used to compute a cumulative dose distribution that a physician could use as a reference when deciding whether the treatment plan should be updated. In conclusion, this study showed that a software solution can extend existing electronic portal imaging devices to function as cone-beam digital tomosynthesis devices and meet the daily imaging requirements of image guided intensity modulated radiotherapy treatments. The DART platform also has the potential to be used as part of an adaptive radiotherapy solution.
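Of the three reconstruction algorithms compared, shift-and-add is the simplest and can be sketched in a few lines. The geometry below (a pure lateral shift proportional to plane depth and gantry angle, a one-row detector, and the `shift_and_add` helper name) is an illustrative assumption, not a detail of the DART platform:

```python
import numpy as np

def shift_and_add(projections, angles_deg, plane_depth):
    """Bring one plane into focus: shift each projection so features at
    `plane_depth` align across gantry angles, then average."""
    recon = np.zeros_like(projections[0], dtype=float)
    for img, ang in zip(projections, np.deg2rad(angles_deg)):
        # Under a parallel-shift model, a feature at depth d appears
        # displaced by d*tan(angle); roll it back to realign it.
        recon += np.roll(img, -int(round(plane_depth * np.tan(ang))), axis=-1)
    return recon / len(projections)

# Tiny demo: a point at depth 5 drifts across 11 projections spanning 40 deg;
# reconstructing at the correct depth realigns it into a sharp peak.
angles = np.linspace(-20, 20, 11)
projections = np.zeros((11, 64))
for i, a in enumerate(np.deg2rad(angles)):
    projections[i, 32 + int(round(5.0 * np.tan(a)))] = 1.0
focused = shift_and_add(projections, angles, plane_depth=5.0)
```

Reconstructing the same stack at a wrong depth smears the point across columns, which is exactly the plane-selective blur that tomosynthesis relies on.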

  4. Automatic Camera Orientation and Structure Recovery with Samantha

    NASA Astrophysics Data System (ADS)

    Gherardi, R.; Toldo, R.; Garro, V.; Fusiello, A.

    2011-09-01

SAMANTHA is a software system capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process either calibrated or uncalibrated images; in the latter case an autocalibration routine is run. Pictures are organized into a hierarchical tree which has single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom up until it reaches the root node, corresponding to the final result. This framework is one order of magnitude faster than sequential approaches, inherently parallel, and less sensitive to the error accumulation that causes drift. We have verified the quality of our reconstructions both qualitatively, by producing compelling point clouds, and quantitatively, by comparing them with laser scans serving as ground truth.

  5. Flash X-ray with image enhancement applied to combustion events

    NASA Astrophysics Data System (ADS)

    White, K. J.; McCoy, D. G.

    1983-10-01

Flow visualization of interior ballistic processes by use of X-rays has placed more stringent requirements on flash X-ray techniques. The problem of improving the radiographic contrast of propellants in X-ray transparent chambers was studied by devising techniques for evaluating, measuring and reducing the effects of scattering from both the test object and structures in the test area. X-ray film and processing are reviewed, and techniques for evaluating and calibrating them are outlined. Finally, after the X-ray techniques were optimized, the application of image enhancement processing to improve image quality is described. This technique was applied to X-ray studies of the combustion of very high burning rate (VHBR) propellants and stick propellant charges.

  6. Research on Bayes matting algorithm based on Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Quan, Wei; Jiang, Shan; Han, Cheng; Zhang, Chao; Jiang, Zhengang

    2015-12-01

The digital matting problem is a classical problem in imaging. It aims at separating non-rectangular foreground objects from a background image and compositing them onto a new background image. Accurate matting determines the quality of the composited image. A Bayesian matting algorithm based on a Gaussian mixture model is proposed to solve this matting problem. Firstly, the traditional Bayesian framework is improved by introducing a Gaussian mixture model. Then, a weighting factor is added in order to suppress noise in the composited images. Finally, the result is further improved by adjusting the user's input. The algorithm was applied to matting jobs on classical images and the results were compared with the traditional Bayesian method. It is shown that our algorithm performs better on details such as hair and eliminates noise well, and that it is very effective for objects with intricate boundaries.
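The compositing model behind any Bayesian matting variant is C = αF + (1 − α)B. As a hedged illustration, the sketch below shows only the classical per-pixel alpha step with foreground and background colors held fixed, not the GMM-weighted estimator the abstract proposes:

```python
import numpy as np

def alpha_given_fb(C, F, B):
    """Least-squares alpha for an observed color C under the compositing
    model C = alpha*F + (1 - alpha)*B, with foreground F and background B
    fixed (one half of the alternating Bayesian-matting solve)."""
    C, F, B = (np.asarray(v, dtype=float) for v in (C, F, B))
    d = F - B
    alpha = np.dot(C - B, d) / np.dot(d, d)
    return float(np.clip(alpha, 0.0, 1.0))

# A pixel that is 30% foreground red composited over a blue background.
a = alpha_given_fb(C=(0.3, 0.0, 0.7), F=(1.0, 0.0, 0.0), B=(0.0, 0.0, 1.0))
```

In the full algorithm this step alternates with re-estimating F and B from the color distributions, which is where the Gaussian mixture model enters.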

  7. Software electron counting for low-dose scanning transmission electron microscopy.

    PubMed

    Mittelberger, Andreas; Kramberger, Christian; Meyer, Jannik C

    2018-05-01

The performance of the detector is of key importance for low-dose imaging in transmission electron microscopy, and counting every single electron can be considered the ultimate goal. In scanning transmission electron microscopy, low-dose imaging can be realized by very fast scanning; however, this also introduces artifacts and a loss of resolution in the scan direction. We have developed a software approach to correct for artifacts introduced by fast scans, making use of a scintillator and photomultiplier response that extends over several pixels. The parameters for this correction can be directly extracted from the raw image. Finally, the images can be converted into electron counts. This approach enables low-dose imaging in the scanning transmission electron microscope via high scan speeds while retaining the image quality of artifact-free slower scans. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Multi-Focus Image Fusion Based on NSCT and NSST

    NASA Astrophysics Data System (ADS)

    Moonon, Altan-Ulzii; Hu, Jianwen

    2015-12-01

In this paper, a multi-focus image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and the nonsubsampled shearlet transform (NSST) is proposed. The source images are first decomposed by the NSCT and NSST into low frequency coefficients and high frequency coefficients. Then, the average method is used to fuse the low frequency coefficients of the NSCT. To obtain a more accurate salience measurement, the high frequency coefficients of the NSST and NSCT are combined to measure salience. The high frequency coefficients of the NSCT with larger salience are selected as the fused high frequency coefficients. Finally, the fused image is reconstructed by the inverse NSCT. We adopt three metrics (QAB/F, Qe and Qw) to evaluate the quality of the fused images. The experimental results demonstrate that the proposed method outperforms other methods and retains highly detailed edges and contours.
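NSCT and NSST implementations are not part of any standard library, so the sketch below substitutes a simple two-band box-blur decomposition to illustrate only the two fusion rules the abstract names: averaging the low-frequency coefficients and picking the high-frequency coefficient with the larger salience (here plain magnitude). All function names are assumptions:

```python
import numpy as np

def two_band(img, k=5):
    """Stand-in decomposition: separable box-blur low-pass plus residual high-pass."""
    kern = np.ones(k) / k
    low = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    low = np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, low)
    return low, img - low

def fuse(img_a, img_b):
    low_a, high_a = two_band(np.asarray(img_a, dtype=float))
    low_b, high_b = two_band(np.asarray(img_b, dtype=float))
    low_f = 0.5 * (low_a + low_b)  # average rule for the low-frequency band
    # max-salience rule for the high-frequency band
    high_f = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    return low_f + high_f

# Sanity check: fusing an image with itself must return the image unchanged.
img = np.outer(np.arange(16.0), np.ones(16))
fused_same = fuse(img, img)
```

The same two rules carry over unchanged when the box-blur bands are replaced by genuine NSCT/NSST subbands.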

  9. Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding

    2018-02-01

The photoacoustic (PA) signal of an ideal optical absorbing particle is a single N-shape wave, and the PA signals of complicated biological tissue can be considered a combination of individual N-shape waves. However, the N-shape wave basis not only complicates subsequent work but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing applied directly to the raw signals, including deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) which is measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals under two constraints, positive polarity and spectrum consistency. With our proposed method, the resulting PA images yield more detailed structural information; micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point-source model and a blood vessel model. In the future, our study might hold potential for clinical PA imaging, as it can help to distinguish micro-structures in the optimized images and even measure the size of objects from the deconvolved signals.
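The deconvolution step can be sketched with a standard frequency-domain Wiener filter. The three-tap PSF and the SNR constant below are illustrative assumptions, not the measured system PSF the paper uses:

```python
import numpy as np

def wiener_deconvolve(signal, psf, snr=100.0):
    """Deconvolve a raw 1-D signal with a measured PSF using a Wiener
    filter, conj(H) / (|H|^2 + 1/SNR), which avoids noise blow-up at
    frequencies where the PSF response is weak."""
    n = len(signal)
    H = np.fft.rfft(psf, n)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.fft.irfft(np.fft.rfft(signal) * W, n)

# Demo: an impulse smeared by a 3-tap PSF is re-sharpened.
psf = np.array([0.2, 0.6, 0.2])
clean = np.zeros(64)
clean[10] = 1.0
blurred = np.convolve(clean, psf)[:64]
restored = wiener_deconvolve(blurred, psf)
```

The regularizing 1/SNR term is what distinguishes this from a naive inverse filter, which would divide by near-zero values of H.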

  10. Integrated thermal disturbance analysis of optical system of astronomical telescope

    NASA Astrophysics Data System (ADS)

    Yang, Dehua; Jiang, Zibo; Li, Xinnan

    2008-07-01

During operation, an astronomical telescope undergoes thermal disturbance, which may degrade image quality; this is especially serious in solar telescopes. This drives careful investigation of thermal loads and measures to assess their effect on final image quality during the design phase. Integrated modeling analysis boosts the process of finding a comprehensively optimal design scheme through software simulation. In this paper, we focus on the Finite Element Analysis (FEA) software ANSYS for thermal disturbance analysis and the optical design software ZEMAX for optical system design. The integrated model based on ANSYS and ZEMAX is first described from an overview point of view. Afterwards, we discuss the establishment of the thermal model. A complete power series polynomial in the spatial coordinates is introduced to represent the temperature field analytically. We also borrow the linear interpolation technique derived from shape functions in finite element theory to interface the thermal model with the structural model, and further to apply the temperatures to the structural model nodes. Thereby, the thermal loads are transferred with as high fidelity as possible. Data interface and communication between the two software packages are discussed, mainly for mirror surfaces and hence for optical figure representation and transformation. We compare and comment on two different methods, Zernike polynomials and power series expansion, for representing a deformed optical surface and transferring it to ZEMAX. Additionally, the application of these methods to surfaces with non-circular apertures is discussed. At the end, an optical telescope with a parabolic primary mirror of 900 mm in diameter is analyzed to illustrate the above discussion. A finite element model of the most relevant parts of the telescope is generated in ANSYS with the necessary structural simplifications and equivalences.
Thermal analysis is performed, the resulting positions and figures of the optics are retrieved and transferred to ZEMAX, and the final image quality under thermal disturbance is evaluated.

  11. A fingerprint classification algorithm based on combination of local and global information

    NASA Astrophysics Data System (ADS)

    Liu, Chongjin; Fu, Xiang; Bian, Junjie; Feng, Jufu

    2011-12-01

Fingerprint recognition is one of the most important technologies in biometric identification and has been widely applied in commercial and forensic areas. Fingerprint classification, as a fundamental procedure in fingerprint recognition, can sharply decrease the number of candidates for fingerprint matching and improve the efficiency of fingerprint recognition. Most fingerprint classification algorithms are based on the number and position of singular points. Because singular point detection commonly considers only local information, such classification algorithms are sensitive to noise. In this paper, we propose a novel fingerprint classification algorithm combining the local and global information of the fingerprint. Firstly, we use local information to detect singular points and measure their quality, considering orientation structure and image texture in adjacent areas. Furthermore, a global orientation model is adopted to measure the reliability of the singular point group. Finally, the local quality and global reliability are weighted to classify the fingerprint. Experiments demonstrate the accuracy and effectiveness of our algorithm, especially for poor quality fingerprint images.

  12. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data differs somewhat from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) of the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
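The global quality measure used throughout the dissertation can be stated compactly. The exact normalization convention is an assumption here (difference-image energy relative to original-image energy):

```python
import numpy as np

def nmse(original, reconstructed):
    """Normalized mean-square error of the difference image: the energy of
    (original - reconstructed) relative to the energy of the original."""
    original = np.asarray(original, dtype=float)
    diff = original - np.asarray(reconstructed, dtype=float)
    return float(np.sum(diff ** 2) / np.sum(original ** 2))

perfect = nmse([3.0, 4.0], [3.0, 4.0])  # identical reconstruction
worst = nmse([3.0, 4.0], [0.0, 0.0])    # all image information lost
```

Being normalized by the original's energy, the measure is comparable across images of different sizes and dynamic ranges, which matters when tabulating 380 reconstructions.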

  13. Preparing Colorful Astronomical Images III: Cosmetic Cleaning

    NASA Astrophysics Data System (ADS)

    Frattare, L. M.; Levay, Z. G.

    2003-12-01

    We present cosmetic cleaning techniques for use with mainstream graphics software (Adobe Photoshop) to produce presentation-quality images and illustrations from astronomical data. These techniques have been used on numerous images from the Hubble Space Telescope when producing photographic, print and web-based products for news, education and public presentation as well as illustrations for technical publication. We expand on a previous paper to discuss the treatment of various detector-attributed artifacts such as cosmic rays, chip seams, gaps, optical ghosts, diffraction spikes and the like. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to final presentation images. Other pixel-to-pixel applications such as filter smoothing and global noise reduction will be discussed.

  14. Diffraction imaging for in situ characterization of double-crystal X-ray monochromators

    DOE PAGES

    Stoupin, Stanislav; Liu, Zunping; Heald, Steve M.; ...

    2015-10-30

In this paper, imaging of the Bragg-reflected X-ray beam is proposed and validated as an in situ method for characterization of the performance of double-crystal monochromators under the heat load of intense synchrotron radiation. A sequence of images is collected at different angular positions on the reflectivity curve of the second crystal and analyzed. The method provides rapid evaluation of the wavefront of the exit beam, which relates to local misorientation of the crystal planes along the beam footprint on the thermally distorted first crystal. The measured misorientation can be directly compared with the results of finite element analysis. Finally, the imaging method offers additional insight into the local intrinsic crystal quality over the footprint of the incident X-ray beam.

  15. Research on the range side lobe suppression method for modulated stepped frequency radar signals

    NASA Astrophysics Data System (ADS)

    Liu, Yinkai; Shan, Tao; Feng, Yuan

    2018-05-01

The magnitude of the time-domain range sidelobes of modulated stepped frequency radar affects the imaging quality of inverse synthetic aperture radar (ISAR). In this paper, the cause of high sidelobes in modulated stepped frequency radar imaging in a real environment is analyzed first. Then, chaos particle swarm optimization (CPSO) is used to select the amplitude and phase compensation factors according to a minimum-sidelobe criterion. Finally, the compensated one-dimensional range images are obtained. Experimental results show that the amplitude-phase compensation method based on the CPSO algorithm can effectively reduce the sidelobe peak value of one-dimensional range images, outperforming common sidelobe suppression methods and avoiding the masking of weak scattering points by the high sidelobes of strong scattering points.
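Chaos PSO differs from plain PSO mainly in seeding and perturbing particles with a chaotic map. The minimal swarm below (standard inertia/cognitive/social update, a toy quadratic cost standing in for the minimum-sidelobe criterion, all constants assumed) shows only the search pattern that the compensation-factor selection relies on:

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=150, lo=-1.0, hi=1.0, seed=0):
    """Plain particle swarm: each particle tracks its personal best and is
    pulled toward the swarm's global best while keeping some inertia."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pcost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        gbest = pbest[np.argmin(pcost)].copy()
    return gbest, float(pcost.min())

# Toy stand-in cost: distance of the compensation factors from an optimum.
best, best_cost = pso_minimize(lambda p: float(np.sum((p - 0.3) ** 2)), dim=2)
```

In the paper's setting the cost function would instead evaluate the peak sidelobe level of the range profile compensated by the candidate amplitude and phase factors.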

  16. An excitation wavelength-scanning spectral imaging system for preclinical imaging

    NASA Astrophysics Data System (ADS)

    Leavesley, Silas; Jiang, Yanan; Patsekin, Valery; Rajwa, Bartek; Robinson, J. Paul

    2008-02-01

Small-animal fluorescence imaging is a rapidly growing field, driven by applications in cancer detection and pharmaceutical therapies. However, the practical use of this imaging technology is limited by image-quality issues related to autofluorescence background from animal tissues, as well as attenuation of the fluorescence signal due to scatter and absorption. To combat these problems, spectral imaging and analysis techniques are being employed to separate the fluorescence signal from background autofluorescence. To date, these technologies have focused on detecting the fluorescence emission spectrum at a fixed excitation wavelength. We present an alternative to this technique, an imaging spectrometer that detects the fluorescence excitation spectrum at a fixed emission wavelength. The advantages of this approach include increased available information for discrimination of fluorescent dyes, decreased optical radiation dose to the animal, and the ability to scan a continuous wavelength range instead of sampling discrete wavelengths. This excitation-scanning imager utilizes an acousto-optic tunable filter (AOTF), with supporting optics, to scan the excitation spectrum. Advanced image acquisition and analysis software has also been developed for classification and unmixing of the spectral image sets. Filtering has been implemented in a single-pass configuration with a bandwidth (full width at half maximum) of 16 nm at a 550 nm central diffracted wavelength. We have characterized AOTF filtering over a wide range of incident light angles, much wider than has been previously reported in the literature, and we show how changes in incident light angle can be used to attenuate AOTF side lobes and alter bandwidth. A new parameter, the in-band to out-of-band ratio, was defined to assess the quality of the filtered excitation light. Additional parameters were measured to allow objective characterization of the AOTF and the imager as a whole.
This is necessary for comparing the excitation-scanning imager to other spectral and fluorescence imaging technologies. The effectiveness of the hyperspectral imager was tested by imaging and analysis of mice with injected fluorescent dyes. Finally, a discussion of the optimization of spectral fluorescence imagers is given, relating the effects of filter quality on fluorescence images collected and the analysis outcome.

  17. MO-DE-207A-05: Dictionary Learning Based Reconstruction with Low-Rank Constraint for Low-Dose Spectral CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Q; Stanford University School of Medicine, Stanford, CA; Liu, H

Purpose: Spectral CT enabled by an energy-resolved photon-counting detector outperforms conventional CT in terms of material discrimination, contrast resolution, etc. One reconstruction method for spectral CT is to generate a color image from a reconstructed component in each energy channel. However, given the radiation dose, the number of photons in each channel is limited, which will result in strong noise in each channel and affect the final color reconstruction. Here we propose a novel dictionary learning method for spectral CT that combines a dictionary-based sparse representation method and a patch-based low-rank constraint to simultaneously improve the reconstruction in each channel and to address the inter-channel correlations to further improve the reconstruction. Methods: The proposed method has two important features: (1) guarantee of patch-based sparsity in each energy channel, which is the result of the dictionary-based sparse representation constraint; (2) explicit consideration of the correlations among different energy channels, which is realized by a patch-by-patch nuclear-norm-based low-rank constraint. For each channel, the dictionary consists of two sub-dictionaries. One is learned from the average of the images in all energy channels, and the other is learned from the average of the images in all energy channels except the current channel. With the averaging operation reducing noise, these two dictionaries can effectively preserve the structural details and get rid of artifacts caused by noise. Combining them can express all structural information in the current channel. Results: Dictionary learning based methods obtain better results than FBP and the TV-based method. With the low-rank constraint, the image quality can be further improved in the channels with more noise. The final color result of the proposed method has the best visual quality.
Conclusion: The proposed method can effectively improve the image quality of low-dose spectral CT. This work is partially supported by the National Natural Science Foundation of China (No. 61302136) and the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2014JQ8317).

  18. Third order harmonic imaging for biological tissues using three phase-coded pulses.

    PubMed

    Ma, Qingyu; Gong, Xiufen; Zhang, Dong

    2006-12-22

Compared with fundamental and second harmonic imaging, third harmonic imaging shows significant improvements in image quality due to its better resolution, but it is degraded by lower sound pressure and signal-to-noise ratio (SNR). In this study, a phase-coded pulse technique is proposed that selectively enhances the sound pressure of the third harmonic by 9.5 dB while the fundamental and second harmonic components are efficiently suppressed; SNR is also increased by 4.7 dB. Based on the solution of the KZK nonlinear equation, the axial and lateral beam profiles of harmonics radiated from a planar piston transducer were theoretically simulated and experimentally examined. Finally, third harmonic images using this technique were acquired for several biological tissues and compared with the images obtained by fundamental and second harmonic imaging. Results demonstrate that the phase-coded pulse technique yields dramatically cleaner and sharper contrast images.
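The selectivity of phase coding is easy to demonstrate with a toy polynomial nonlinearity (a stand-in for KZK propagation, with made-up harmonic coefficients): transmitting three pulses whose initial phases step by 120° makes the third harmonic terms add in phase (3 × 120° = 360°), while the fundamental and second harmonic phasors sum to zero:

```python
import numpy as np

fs, f0, n = 8000, 100, 8000  # sample rate (Hz), fundamental (Hz), 1 s of samples
t = np.arange(n) / fs

def echo(phase):
    """Received echo of one pulse after weakly nonlinear propagation,
    modeled by an assumed polynomial x + 0.3*x^2 + 0.1*x^3."""
    x = np.cos(2 * np.pi * f0 * t + phase)
    return x + 0.3 * x ** 2 + 0.1 * x ** 3

# Sum the echoes of three transmissions phase-coded by 0, 120 and 240 degrees.
summed = sum(echo(k * 2 * np.pi / 3) for k in range(3))
spectrum = np.abs(np.fft.rfft(summed)) / n
fund, second, third = spectrum[f0], spectrum[2 * f0], spectrum[3 * f0]
```

In this model the fundamental and second harmonic cancel to numerical precision, while the third harmonic survives at three times its single-pulse amplitude.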

  19. Multimodal Medical Image Fusion by Adaptive Manifold Filter.

    PubMed

    Geng, Peng; Liu, Shuaiqi; Zhuang, Shanna

    2015-01-01

Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. Modified local contrast information is proposed to fuse multimodal medical images. Firstly, an adaptive manifold filter is introduced to filter the source images, providing the low-frequency part of the modified local contrast. Secondly, the modified spatial frequency of the source images is adopted as the high-frequency part of the modified local contrast. Finally, the pixel with the larger modified local contrast is selected into the fused image. The presented scheme outperforms the guided filter method in the spatial domain, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, the mutual information values of the presented method are on average 55%, 41%, and 62% higher than those of the three methods, and its edge-based similarity measure values are on average 13%, 33%, and 14% higher than those of the three methods for the six pairs of source images.

  20. Motion-Blurred Particle Image Restoration for On-Line Wear Monitoring

    PubMed Central

    Peng, Yeping; Wu, Tonghai; Wang, Shuo; Kwok, Ngaiming; Peng, Zhongxiao

    2015-01-01

On-line images of wear debris contain important information for real-time condition monitoring, and a dynamic imaging technique can eliminate the particle overlaps commonly found in static images, for instance those acquired using ferrography. However, dynamic wear debris images captured in a running machine are unavoidably blurred because the particles in the lubricant are in motion. Hence, it is difficult to acquire reliable images of wear debris with an adequate resolution for particle feature extraction. In order to obtain sharp wear particle images, an image processing approach is proposed. Blurred particles were first separated from the static background by utilizing a background subtraction method. Second, the point spread function was estimated using the power cepstrum to determine the blur direction and length. Then, the Wiener filter algorithm was adopted to perform image restoration and improve the image quality. Finally, experiments were conducted with a large number of dynamic particle images to validate the effectiveness of the proposed method, and its performance was evaluated. This study provides a new practical approach to acquiring clear images for on-line wear monitoring. PMID:25856328
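Once the blur direction and length have been estimated, the restoration step is a standard Wiener filter. The sketch below (horizontal blur only, circular convolution for simplicity, an assumed SNR constant) is a hedged illustration, not the paper's implementation:

```python
import numpy as np

def motion_psf(length, width):
    """Horizontal motion-blur kernel: a uniform smear over `length` pixels."""
    psf = np.zeros(width)
    psf[:length] = 1.0 / length
    return psf

def wiener_restore_rows(blurred, psf, snr=500.0):
    """Row-wise Wiener filter, W = conj(H)/(|H|^2 + 1/SNR), with a known PSF."""
    n = blurred.shape[1]
    H = np.fft.rfft(psf, n)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.fft.irfft(np.fft.rfft(blurred, axis=1) * W[None, :], n, axis=1)

# Demo: a sharp vertical stripe (a particle edge) smeared by an 8-pixel blur.
sharp = np.zeros((16, 64))
sharp[:, 30:34] = 1.0
H = np.fft.rfft(motion_psf(8, 64))
blurred = np.fft.irfft(np.fft.rfft(sharp, axis=1) * H[None, :], 64, axis=1)
restored = wiener_restore_rows(blurred, motion_psf(8, 64))
```

For an arbitrary blur direction the same filter is applied after rotating the image (or the PSF) so that the smear is axis-aligned.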

  1. Clustering of Farsi sub-word images for whole-book recognition

    NASA Astrophysics Data System (ADS)

    Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier

    2015-01-01

Redundancy of word and sub-word occurrences in large documents can be effectively utilized in an OCR system to improve recognition results. Most OCR systems employ language modeling techniques as a post-processing step; however, these techniques do not use the important pictorial information that exists in the text image. In large-scale recognition of degraded documents, this information is even more valuable. In our previous work, we proposed a sub-word image clustering method for applications dealing with large printed documents. In our clustering method, the ideal case is when all equivalent sub-word images lie in one cluster. To overcome the issues of low print quality, the clustering method uses an image matching algorithm to measure the distance between two sub-word images. The measured distance, together with a set of simple shape features, was used to cluster all sub-word images. In this paper, we analyze the effects of adding more shape features on processing time, purity of clustering, and the final recognition rate. Previously published experiments have shown the efficiency of our method on one book. Here we present extended experimental results and evaluate our method on another book with a totally different typeface. We also show that the number of newly created clusters on a page can be used as a criterion for assessing print quality and evaluating preprocessing phases.

  2. Applications of the JPEG standard in a medical environment

    NASA Astrophysics Data System (ADS)

    Wittenberg, Ulrich

    1993-10-01

JPEG is a very versatile image coding and compression standard for single images. Medical images place higher demands on image quality and precision than the usual 'pretty pictures'. In this paper the potential applications of the various JPEG coding modes in a medical environment are evaluated. For legal reasons the lossless modes are especially interesting. The spatial modes are equally important because medical data may well exceed the maximum of 12-bit precision allowed for the DCT modes. The performance of the spatial predictors is investigated. From the user's point of view, the progressive modes, which provide a fast but coarse approximation of the final image, reduce the subjective time one has to wait for it, and so reduce the user's frustration. Even the lossy modes will find some applications, but they have to be handled with care, because repeated lossy coding and decoding leads to a degradation of image quality. The amount of this degradation is investigated. The JPEG standard alone is not sufficient for a PACS because it does not store enough additional data, such as creation data or details of the imaging modality. Therefore it will be an embedded coding format within standards like TIFF or ACR/NEMA. It is concluded that the JPEG standard is versatile enough to match the requirements of the medical community.

  3. An automated method for tracking clouds in planetary atmospheres

    NASA Astrophysics Data System (ADS)

    Luz, D.; Berry, D. L.; Roos-Serote, M.

    2008-05-01

We present an automated method for cloud tracking which can be applied to planetary images. The method is based on a digital correlator which compares two or more consecutive images and identifies patterns by maximizing correlations between image blocks. This approach bypasses the problem of feature detection. Four variations of the algorithm are tested on real cloud images of Jupiter's white ovals from the Galileo mission, previously analyzed in Vasavada et al. [Vasavada, A.R., Ingersoll, A.P., Banfield, D., Bell, M., Gierasch, P.J., Belton, M.J.S., Orton, G.S., Klaasen, K.P., Dejong, E., Breneman, H.H., Jones, T.J., Kaufman, J.M., Magee, K.P., Senske, D.A. 1998. Galileo imaging of Jupiter's atmosphere: the great red spot, equatorial region, and white ovals. Icarus, 135, 265, doi:10.1006/icar.1998.5984]. Direct correlation, using the sum of squared differences between image radiances as a distance estimator (the baseline case), yields displacement vectors very similar to this previous analysis. Combining this distance estimator with the method of order ranks results in a technique which is more robust in the presence of outliers and noise and gives better-quality results. Finally, we introduce a distance metric which, combined with order ranks, provides results of similar quality to the baseline case and is faster. The new approach can be applied to data from a number of space-based imaging instruments with a non-negligible gain in computing time.
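The baseline correlator can be sketched as exhaustive block matching with the SSD distance; the block size, search radius, and synthetic shifted frame below are assumptions for illustration:

```python
import numpy as np

def ssd_displacement(block, search_img, top, left, radius):
    """Exhaustive search: slide `block` over a (2*radius+1)^2 neighborhood
    of its original position (top, left) and return the (dy, dx)
    displacement minimizing the sum of squared radiance differences."""
    h, w = block.shape
    best_ssd, best = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > search_img.shape[0] or x + w > search_img.shape[1]:
                continue  # candidate window falls outside the image
            ssd = float(np.sum((search_img[y:y + h, x:x + w] - block) ** 2))
            if ssd < best_ssd:
                best_ssd, best = ssd, (dy, dx)
    return best

# Demo: a cloud field advected by (2, 3) pixels between two frames.
rng = np.random.default_rng(1)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, (2, 3), axis=(0, 1))
motion = ssd_displacement(frame1[8:16, 8:16], frame2, top=8, left=8, radius=5)
```

The rank-order variant described above replaces the raw radiances in the SSD with their ranks within each block, which is what buys robustness to outliers.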

  4. Studies on design of 351  nm focal plane diagnostic system prototype and focusing characteristic of SGII-upgraded facility at half achievable energy performance.

    PubMed

    Liu, Chong; Ji, Lailin; Yang, Lin; Zhao, Dongfeng; Zhang, Yanfeng; Liu, Dong; Zhu, Baoqiang; Lin, Zunqi

    2016-04-01

In order to obtain the intensity distribution of a 351 nm focal spot and the smoothing by spectral dispersion (SSD) focal plane profile of the SGII-upgraded facility, an off-axis imaging system with three spherical mirrors, suitable for imaging a source point at finite distance near the diffraction limit, has been designed. The quality factor of the imaging system is 1.6 times the diffraction limit, tested with a 1053 nm point source. Because no 351 nm point source was available, a Collins diffraction imaging integral with respect to λ=351 nm can be used, corresponding to a quality factor of 3.8 times the diffraction limit at 351 nm. The calibration results show that at least a ±10 mrad range of field angle and ±50 mm along the axial direction around the optimum object distance can be covered with a near diffraction-limited image consistent with the design value. Using this imaging system, the No. 2 beam of the SGII-upgraded facility has been tested. The test result for the focal spot of the final optics assembly (FOA) at 351 nm indicates that about 80% of the energy is encompassed within 14.1 times the diffraction limit, while the output energy of the No. 2 beam is 908 J at 1053 nm. According to the convolution theorem, the true 351 nm focal spot of the FOA is about 12 times the diffraction limit once the influence of the quality factor is accounted for. Further experimental studies indicate that the RMS value along the smoothing direction is less than 15.98% in the SSD spot test experiment. Computer simulations show that the quality factor of the imaging system used in the experiment has almost no effect on the SSD focal spot test; the imaging system would noticeably distort the SSD focal spot distribution only if its quality factor were 15 times worse than the diffraction limit.
The distorted image shows a steep slope in the contour of the SSD focal spot along the smoothing direction that otherwise has a relatively flat top region around the focal spot center.

  5. Teleretinal screening for diabetic retinopathy in six Los Angeles urban safety-net clinics: final study results.

    PubMed

    Ogunyemi, Omolola; George, Sheba; Patty, Lauren; Teklehaimanot, Senait; Baker, Richard

    2013-01-01

    In a previous paper, we presented initial findings from a study on the feasibility and challenges of implementing teleretinal screening for diabetic retinopathy in an urban safety net setting facing eyecare specialist shortages. This paper presents some final results from that study, which involved six South Los Angeles safety net clinics. A total of 2,732 unique patients were screened for diabetic retinopathy by three ophthalmologist readers, with 1035 receiving a recommendation for referral to specialty care. Referrals included 48 for proliferative diabetic retinopathy, 115 for severe non-proliferative diabetic retinopathy (NPDR), 247 for moderate NPDR, 246 for mild NPDR, 97 for clinically significant macular edema, and 282 for a non-diabetic condition, such as glaucoma. Image quality was also assessed, with ophthalmologist readers grading 4% to 13% of retinal images taken at the different clinics as being inadequate for any diagnostic interpretation.

  6. Teleretinal Screening for Diabetic Retinopathy in Six Los Angeles Urban Safety-Net Clinics: Final Study Results

    PubMed Central

    Ogunyemi, Omolola; George, Sheba; Patty, Lauren; Teklehaimanot, Senait; Baker, Richard

    2013-01-01

    In a previous paper, we presented initial findings from a study on the feasibility and challenges of implementing teleretinal screening for diabetic retinopathy in an urban safety net setting facing eyecare specialist shortages. This paper presents some final results from that study, which involved six South Los Angeles safety net clinics. A total of 2,732 unique patients were screened for diabetic retinopathy by three ophthalmologist readers, with 1035 receiving a recommendation for referral to specialty care. Referrals included 48 for proliferative diabetic retinopathy, 115 for severe non-proliferative diabetic retinopathy (NPDR), 247 for moderate NPDR, 246 for mild NPDR, 97 for clinically significant macular edema, and 282 for a non-diabetic condition, such as glaucoma. Image quality was also assessed, with ophthalmologist readers grading 4% to 13% of retinal images taken at the different clinics as being inadequate for any diagnostic interpretation. PMID:24551394

  7. 3D City Transformations by Time Series of Aerial Images

    NASA Astrophysics Data System (ADS)

    Adami, A.

    2015-02-01

    Recent photogrammetric applications, based on dense image matching algorithms, make it possible to use not only images acquired by digital cameras, amateur or not, but also to recover the vast heritage of analogue photographs. This opens up many opportunities for the use and enhancement of the existing photographic heritage. The search for the original appearance of old buildings, the virtual reconstruction of disappeared architecture, and the study of urban development are some of the application areas that exploit the great cultural heritage of photography. Nevertheless, there are restrictions on the use of historical images for automatic reconstruction of buildings, such as image quality, availability of camera parameters, and ineffective geometry of image acquisition. These constraints are very hard to overcome, and for the above reasons it is difficult to find good datasets in the case of terrestrial close-range photogrammetry. Even the photographic archives of museums and superintendencies, while retaining a wealth of documentation, have no datasets suited to a dense image matching approach. Compared to the vast collection of historical photos, the class of aerial photos meets both criteria stated above. In this paper, historical aerial photographs are used with dense image matching algorithms to build 3D models of a city in different years. The models can be used to study the urban development of the city and its changes through time. The application relates to the city centre of Verona, for which several time series of aerial photographs have been retrieved. The models obtained in this way immediately allowed observation of the urban development of the city, the places of expansion, and new urban areas. A more interesting aspect emerged from the analytical comparison between models: the difference, as the Euclidean distance between two models, gives information about new buildings or demolitions. Regarding accuracy, it must be pointed out that the quality of the final observations from model comparison depends on several aspects, such as image quality, image scale, and marker accuracy from cartography.
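
    The Euclidean-distance comparison between two epochs of a city model can be sketched as a nearest-neighbour distance between point clouds; the toy clouds and the brute-force search below are purely illustrative, not the paper's processing chain:

```python
import numpy as np

def cloud_to_cloud_distance(model_a, model_b):
    """For each point in model_a, the Euclidean distance to its
    nearest neighbour in model_b (brute force, fine for small clouds)."""
    # (Na, 1, 3) - (1, Nb, 3) -> (Na, Nb) pairwise distance matrix
    d = np.linalg.norm(model_a[:, None, :] - model_b[None, :, :], axis=2)
    return d.min(axis=1)

# Two toy "city models": identical except one new 10 m-high point in model_b
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
b = np.vstack([a, [[5.0, 5.0, 10.0]]])

# Distances from the newer model to the older one flag the change
change = cloud_to_cloud_distance(b, a)
```

    Large distances mark new buildings (when measured from the newer epoch) or demolitions (when measured from the older one).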

  8. Landsat surface reflectance quality assurance extraction (version 1.7)

    USGS Publications Warehouse

    Jones, J.W.; Starbuck, M.J.; Jenkerson, Calli B.

    2013-01-01

    The U.S. Geological Survey (USGS) Land Remote Sensing Program is developing an operational capability to produce Climate Data Records (CDRs) and Essential Climate Variables (ECVs) from the Landsat Archive to support a wide variety of science and resource management activities from regional to global scale. The USGS Earth Resources Observation and Science (EROS) Center is charged with prototyping systems and software to generate these high-level data products. Various USGS Geographic Science Centers are charged with particular ECV algorithm development and (or) selection as well as the evaluation and application demonstration of various USGS CDRs and ECVs. Because it is a foundation for many other ECVs, the first CDR in development is the Landsat Surface Reflectance Product (LSRP). The LSRP incorporates data quality information in a bit-packed structure that is not readily accessible without postprocessing services performed by the user. This document describes two general methods of LSRP quality-data extraction for use in image processing systems. Helpful hints for the installation and use of software originally developed for manipulation of Hierarchical Data Format (HDF) produced through the National Aeronautics and Space Administration (NASA) Earth Observing System are first provided for users who wish to extract quality data into separate HDF files. Next, steps follow to incorporate these extracted data into an image processing system. Finally, an alternative example is illustrated in which the data are extracted within a particular image processing system.
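
    Bit-packed quality data of this kind is typically unpacked with bitwise shifts and masks; the sketch below uses hypothetical bit positions (the real LSRP layout is defined in the product documentation), so only the mechanism, not the offsets, should be taken from it:

```python
import numpy as np

# Hypothetical bit positions in a 16-bit packed QA band; consult the
# actual product guide for the real LSRP flag layout.
CLOUD_BIT = 1
CLOUD_SHADOW_BIT = 3

def extract_bit(qa_band, bit):
    """Return a 0/1 mask for a single flag packed into a QA band."""
    return (qa_band >> bit) & 1

qa = np.array([[0b0000, 0b0010],
               [0b1010, 0b0000]], dtype=np.uint16)

cloud = extract_bit(qa, CLOUD_BIT)
shadow = extract_bit(qa, CLOUD_SHADOW_BIT)
```

    Each extracted mask can then be written out as its own band for use in an image processing system.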

  9. Super-resolved refocusing with a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zhou, Zhiliang; Yuan, Yan; Bin, Xiangli; Qian, Lulu

    2011-03-01

    This paper presents an approach to enhancing the resolution of refocused images by super-resolution methods. In plenoptic imaging, we demonstrate that the raw sensor image can be divided into a number of low-resolution angular images with sub-pixel shifts between each other. The sub-pixel shift, which defines the super-resolving ability, is mathematically derived by considering the plenoptic camera as an equivalent camera array. We implement a simulation to demonstrate the imaging process of a plenoptic camera. A high-resolution image is then reconstructed using a maximum a posteriori (MAP) super-resolution algorithm. Without other degradation effects in simulation, the super-resolved image achieves a resolution as high as predicted by the proposed model. We also build an experimental setup to acquire light fields. With traditional refocusing methods, the image is rendered at a rather low resolution; in contrast, with the super-resolved refocusing method we recover an image with more spatial detail. To evaluate the performance of the proposed method, we finally compare the reconstructed images using image quality metrics such as peak signal-to-noise ratio (PSNR).
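
    PSNR, the metric used above to score the reconstructions, has a standard definition that is easy to compute; this is a generic sketch, not the authors' evaluation code:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 128, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] += 16  # a single corrupted pixel
value = psnr(ref, noisy)
```

    Higher PSNR means the super-resolved result is closer to the reference; typical good reconstructions of 8-bit images land in the 30-50 dB range.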

  10. Adaptive enhancement for nonuniform illumination images via nonlinear mapping

    NASA Astrophysics Data System (ADS)

    Wang, Yanfang; Huang, Qian; Hu, Jing

    2017-09-01

    Nonuniform illumination images suffer from degraded details because of underexposure, overexposure, or a combination of both. To improve the visual quality of color images, underexposed regions should be lightened, whereas overexposed areas need to be dimmed properly. However, discriminating between underexposure and overexposure is troublesome. Compared with traditional methods that produce a fixed demarcation value throughout an image, the proposed demarcation changes as local luminance varies and is thus suitable for manipulating complicated illumination. Based on this locally adaptive demarcation, a nonlinear modification is applied to image luminance. Further, with the modified luminance, we propose a nonlinear process to reconstruct a luminance-enhanced color image. For every pixel, this nonlinear process takes the luminance change and the original chromaticity into account, thus avoiding exaggerated colors in dark areas and depressed colors in very bright regions. Finally, to improve image contrast, a local and image-dependent exponential technique is designed and applied to the RGB channels of the obtained color image. Experimental results demonstrate that our method produces good contrast and vivid color for both nonuniform illumination images and images with normal illumination.
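
    A locally adaptive demarcation of the kind described can be sketched by letting a gamma-style exponent follow the local mean luminance; the specific box blur and gamma formula below are illustrative assumptions, not the paper's mapping:

```python
import numpy as np

def box_blur(img, k=5):
    """Crude local-mean estimate via a k x k box filter (edge-padded)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def adaptive_enhance(lum):
    """Lighten dark regions and dim bright ones, with a demarcation that
    follows the LOCAL mean rather than one global threshold.
    The gamma formula is an illustrative choice, not the paper's."""
    local = box_blur(lum)   # locally adaptive demarcation
    gamma = 0.5 + local     # dark surround -> gamma < 1 (lighten)
    return np.power(lum, gamma)

enhanced_dark = adaptive_enhance(np.full((8, 8), 0.1))   # gets lightened
enhanced_bright = adaptive_enhance(np.full((8, 8), 0.9)) # gets dimmed
```

    Because the exponent is computed per pixel from its surround, one region of the image can be lightened while another is dimmed in the same pass.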

  11. Analysis on the Effect of Sensor Views in Image Reconstruction Produced by Optical Tomography System Using Charge-Coupled Device.

    PubMed

    Jamaludin, Juliza; Rahim, Ruzairi Abdul; Fazul Rahiman, Mohd Hafiz; Mohd Rohani, Jemmy

    2018-04-01

    Optical tomography (OPT) is a method of capturing a cross-sectional image based on data obtained by sensors distributed around the periphery of the analyzed system. The system is based on measuring the final light attenuation, or absorption of radiation, after it crosses the measured objects. The number of sensor views affects the results of image reconstruction: a high number of sensor views per projection gives high image quality. This research presents an application of a charge-coupled device (CCD) linear sensor and a laser diode in an OPT system. Experiments in detecting solid and transparent objects in crystal-clear water were conducted. Two sensor-view counts, 160 and 320, were evaluated for reconstructing the images. The image reconstruction algorithm used was a filtered linear back projection algorithm. Comparison of the simulated and experimental image results shows that 320 views give a smaller area error than 160 views, suggesting that a higher number of views yields higher-resolution image reconstruction.
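
    Back projection can be illustrated in its simplest form with just two orthogonal views (row and column sums); a real OPT system with 160 or 320 views smears many more projections, so the unfiltered two-view sketch below only shows the smearing step, not the full filtered algorithm:

```python
import numpy as np

def linear_back_projection(proj_rows, proj_cols):
    """Toy linear back projection from two orthogonal sensor views
    (row sums and column sums): smear each 1-D projection back
    across the image plane and average the views."""
    back = proj_rows[:, None] + proj_cols[None, :]
    return back / 2.0

# Phantom with one absorbing object
phantom = np.zeros((4, 4))
phantom[1, 2] = 1.0
rows, cols = phantom.sum(axis=1), phantom.sum(axis=0)
recon = linear_back_projection(rows, cols)
peak = np.unravel_index(recon.argmax(), recon.shape)
```

    Even with two views, the reconstruction peaks at the object's location; adding views sharpens the peak and suppresses the smearing streaks, which is why 320 views outperform 160.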

  12. Characterization of adaptive statistical iterative reconstruction algorithm for dose reduction in CT: A pediatric oncology perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brady, S. L.; Yee, B. S.; Kaufman, R. A.

    Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects on image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk in a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%-100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using (1, 5, 10) yr old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence with the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that images had lower-frequency noise, lower noise variance, and coarser graininess at progressively higher percentages of ASiR™ reconstruction; in spite of the similar magnitudes of noise, images reconstructed with 50% or more ASiR™ presented a more smoothed appearance than the pre-ASiR™ 100% FBP image. Finally, relative to non-ASiR™ images at 100% of standard dose across the pediatric phantom age spectrum, similar noise levels were obtained at a dose reduction of 48% with 40% ASiR™ and a dose reduction of 82% with 100% ASiR™. Conclusions: The authors' work was conducted to identify the dose reduction limits of ASiR™ for a pediatric oncology population using automatic tube current modulation. Improvements in noise levels from ASiR™ reconstruction were adapted to provide lower radiation exposure (i.e., lower mA) instead of improved image quality. We have demonstrated that, for the image quality standards required at our institution, a maximum dose reduction of 82% can be achieved using 100% ASiR™; however, to negate changes in the appearance of images reconstructed using ASiR™ with a medium-to-low-frequency noise-preserving reconstruction filter (i.e., standard), 40% ASiR™ was implemented in our clinic for a 42%-48% dose reduction at all pediatric ages without a visually perceptible change in image quality or image noise.
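
    At the image level, the ASiR blending described above is a weighted average of the iterative and FBP reconstructions; the sketch below assumes a simple linear blend, which matches the "0%-100% ASiR blended with FBP" description but is not GE's proprietary implementation:

```python
import numpy as np

def blend_asir(fbp_img, ir_img, percent):
    """Blend an iterative reconstruction with FBP at an ASiR-style
    percentage (0 = pure FBP, 100 = pure iterative). Illustrative only."""
    w = percent / 100.0
    return (1.0 - w) * fbp_img + w * ir_img

fbp = np.array([[100.0, 120.0], [110.0, 130.0]])  # noisier FBP image (HU)
ir = np.array([[102.0, 118.0], [108.0, 132.0]])   # smoother IR image (HU)
img40 = blend_asir(fbp, ir, 40)                   # the 40% setting used clinically
```

    The blend weight trades FBP's familiar noise texture against the iterative image's lower variance, which is why intermediate settings such as 40% preserve appearance while permitting a dose reduction.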

  13. Improving automated 3D reconstruction methods via vision metrology

    NASA Astrophysics Data System (ADS)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
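
    The centroid measurement that VM tools contribute can be sketched as an intensity-weighted mean over a target blob, which is what gives sub-pixel precision; this is a generic formulation, not the specific VM software used in the paper:

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (row, col) of a target blob -
    the sub-pixel measurement at the heart of vision metrology."""
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

# Toy target: a blob straddling two pixels, so the centroid is sub-pixel
target = np.zeros((5, 5))
target[2, 2] = 4.0
target[2, 3] = 4.0
r, c = centroid(target)
```

    These sub-pixel centroids feed the photogrammetric network adjustment, which in turn yields the precision statistics used to assess the orientation step.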

  14. Sample Preparation for Mass Spectrometry Imaging of Plant Tissues: A Review

    PubMed Central

    Dong, Yonghui; Li, Bin; Malitsky, Sergey; Rogachev, Ilana; Aharoni, Asaph; Kaftan, Filip; Svatoš, Aleš; Franceschi, Pietro

    2016-01-01

    Mass spectrometry imaging (MSI) is a mass spectrometry based molecular ion imaging technique. It provides the means for ascertaining the spatial distribution of a large variety of analytes directly on tissue sample surfaces without any labeling or staining agents. These advantages make it an attractive molecular histology tool in medical, pharmaceutical, and biological research. Likewise, MSI has started gaining popularity in plant sciences; yet, information regarding sample preparation methods for plant tissues is still limited. Sample preparation is a crucial step that is directly associated with the quality and authenticity of the imaging results; it therefore demands in-depth studies based on the characteristics of plant samples. In this review, a sample preparation pipeline is discussed in detail and illustrated through selected practical examples. In particular, special concerns regarding sample preparation for plant imaging are critically evaluated. Finally, the applications of MSI techniques in plants are reviewed according to different classes of plant metabolites. PMID:26904042

  15. Optimizing MRI for imaging peripheral arthritis.

    PubMed

    Hodgson, Richard J; O'Connor, Philip J; Ridgway, John P

    2012-11-01

    MRI is increasingly used for the assessment of both inflammatory arthritis and osteoarthritis. The wide variety of MRI systems in use ranges from low-field, low-cost extremity units to whole-body high-field 7-T systems, each with different strengths for specific applications. The availability of dedicated radiofrequency phased-array coils allows the rapid acquisition of high-resolution images of one or more peripheral joints. MRI is uniquely flexible in its ability to manipulate image contrast, and individual MR sequences may be combined into protocols to sensitively visualize multiple features of arthritis including synovitis, bone marrow lesions, erosions, cartilage changes, and tendinopathy. Careful choice of the imaging parameters allows images to be generated with optimal quality while minimizing unwanted artifacts. Finally, there are many novel MRI techniques that can quantify disease levels in arthritis in tissues including the synovium and cartilage. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  16. See-through ophthalmoscope for retinal imaging

    NASA Astrophysics Data System (ADS)

    Carpentras, Dino; Moser, Christophe

    2017-05-01

    With the miniaturization of scanning mirrors and the emergence of wearable health monitoring, an intriguing step is to investigate the potential of a laser scanning ophthalmoscope (LSO) for retinal imaging with wearable glasses. In addition to providing morphological information of the retina, such as vasculature, LSO images could also be used to provide information on general health conditions. A compact eyeglass with LSO capability would give access, on demand, to retinal parameters without disturbing the subject's activity. One of the main challenges in this field is the creation of a device that does not interrupt the user's field of view. We report, to our knowledge, the first see-through ophthalmoscope. The system is analyzed with three-dimensional simulations and tested in a proof-of-concept setup with the same key parameters of a wearable device. Finally, image quality is analyzed by acquiring images of an ex-vivo human eye sample.

  17. Chapter 14: Electron Microscopy on Thin Films for Solar Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Manuel; Abou-Ras, Daniel; Nichterwitz, Melanie

    2016-07-22

    This chapter overviews the various techniques applied in scanning electron microscopy (SEM) and transmission electron microscopy (TEM) and highlights their possibilities as well as their limitations. It presents the various imaging and analysis techniques applied on a scanning electron microscope. The chapter shows that imaging divides into techniques making use of secondary electrons (SEs) and of backscattered electrons (BSEs), resulting in different contrasts in the images and thus providing information on compositions, microstructures, and surface potentials. Whenever aiming for imaging and analyses at scales down to the angstrom range, TEM and its related techniques are the appropriate tools. In many cases, SEM techniques also provide access to various material properties of the individual layers, without requiring specimen preparation as time-consuming as that for TEM. Finally, the chapter is dedicated to cross-sectional specimen preparation for electron microscopy; the preparation indeed determines the quality of imaging and analyses.

  18. Building conservation base on assessment of facade quality on Basuki Rachmat Street, Malang

    NASA Astrophysics Data System (ADS)

    Kurniawan, E. B.; Putri, R. Y. A.; Wardhani, D. K.

    2017-06-01

    Visual quality covers aspects of imageability, which is associated with the visual system and elements of distinction. Within the visual system of a specific area, physical quality may lead to a strong image; here, physical quality is one of the important factors that create urban aesthetics. To build a discussion of the visual system of an urban area, this paper aims to identify the factors that define the facade visual quality of heritage buildings on Jend. Basuki Rahmat Street, Malang City, East Java, Indonesia. This street is a main road of the Malang city center, built by the Dutch colonial government and designed by Ir. Thomas Karsten, and it is known as one of the Malang areas with good visual quality. To identify the influencing factors, this paper employs multiple linear regression as the analysis tool. The examined candidate factors result from architecture and urban design experts' assessments of each building segment on Jend. Basuki Rahmat. Finally, this paper reveals that the influencing factors are color, rhythm, and proportion, as demonstrated by the resulting model: visual quality (Y) = 0.304 + 0.21 color (X5) + 0.221 rhythm (X6) + 0.304 proportion (X7). Furthermore, recommendations for the building facades will be made based on this model and a study of the history and typology of buildings on Basuki Rachmat Street.
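
    The reported model can be turned directly into a predictor for a facade segment; only the coefficients below come from the paper, while the input scores are hypothetical:

```python
# Visual quality model reported in the paper:
# Y = 0.304 + 0.21*X5 (color) + 0.221*X6 (rhythm) + 0.304*X7 (proportion)
def visual_quality(color, rhythm, proportion):
    return 0.304 + 0.21 * color + 0.221 * rhythm + 0.304 * proportion

# Hypothetical expert scores (on an assumed 0-1 scale) for one facade segment
y = visual_quality(color=0.8, rhythm=0.5, proportion=0.6)
```

    The near-equal weights on proportion and the intercept suggest that, in this model, proportion is the single strongest controllable contributor to perceived facade quality.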

  19. Synthesis and quality control of fluorodeoxyglucose and performance assessment of Siemens MicroFocus 220 small animal PET scanner

    NASA Astrophysics Data System (ADS)

    Phaterpekar, Siddhesh Nitin

    The scope of this article is to cover the synthesis and quality control procedures involved in the production of fludeoxyglucose (18F-FDG). The article also describes the cyclotron production of the 18F radioisotope and gives a brief overview of the operation of a fixed-energy medical cyclotron. The quality control procedures for FDG involve radiochemical and radionuclidic purity tests, pH tests, chemical purity tests, sterility tests, and endotoxin tests. Each of these procedures was carried out for multiple batches of FDG, with a passing rate of 95% among 20 batches. The article also covers the quality assurance steps for the Siemens MicroPET Focus 220 scanner using a Jaszczak phantom. We carried out spatial resolution tests on the scanner, obtaining an average transaxial resolution of 1.775 mm at a 2-3 mm offset. Tests also involved detector efficiency, blank scan sinograms, and transmission sinograms. A series of radioactivity distribution tests was carried out on a uniform phantom, quantifying the variations in radioactivity and uniformity using cylindrical ROIs in the transverse region of the final image. The purpose of these quality control tests is to make sure the manufactured FDG is biocompatible with the human body; quality assurance tests are carried out on PET scanners to ensure efficient performance and that the quality of the acquired images reflects the radioactivity distribution in the subject of interest.

  20. Construction of mammographic examination process ontology using bottom-up hierarchical task analysis.

    PubMed

    Yagahara, Ayako; Yokooka, Yuki; Jiang, Guoqian; Tsuji, Shintarou; Fukuda, Akihisa; Nishimoto, Naoki; Kurowarabi, Kunio; Ogasawara, Katsuhiko

    2018-03-01

    Describing complex mammography examination processes is important for improving the quality of mammograms. It is often difficult for experienced radiologic technologists to explain the process because their techniques depend on their experience and intuition. In our previous study, we analyzed the process using a new bottom-up hierarchical task analysis and identified key components of the process. Leveraging the results of the previous study, the purpose of this study was to construct a mammographic examination process ontology to formally describe the relationships between the process and image evaluation criteria to improve the quality of mammograms. First, we identified and created root classes: task, plan, and clinical image evaluation (CIE). Second, we described an "is-a" relation referring to the result of the previous study and the structure of the CIE. Third, the procedural steps in the ontology were described using the new properties: "isPerformedBefore," "isPerformedAfter," and "isPerformedAfterIfNecessary." Finally, the relationships between tasks and CIEs were described using the "isAffectedBy" property to represent the influence of the process on image quality. In total, there were 219 classes in the ontology. By introducing new properties related to the process flow, a sophisticated mammography examination process could be visualized. In relationships between tasks and CIEs, it became clear that the tasks affecting the evaluation criteria related to positioning were greater in number than those for image quality. We developed a mammographic examination process ontology that makes knowledge explicit for a comprehensive mammography process. Our research will support education and help promote knowledge sharing about mammography examination expertise.
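
    The procedural properties ("isPerformedBefore", "isAffectedBy") can be sketched outside any ontology language as plain precedence and influence maps; all task and criterion names below are invented for illustration and are not taken from the actual ontology:

```python
# Hypothetical mammography tasks chained by an isPerformedBefore-style relation
is_performed_before = {
    "explain_procedure": "position_breast",
    "position_breast": "apply_compression",
    "apply_compression": "expose",
}

# Hypothetical isAffectedBy-style links from tasks to CIE criteria
affects = {
    "position_breast": ["pectoral_muscle_shown", "nipple_in_profile"],
    "expose": ["adequate_contrast"],
}

def ordered_tasks(start):
    """Follow the precedence links from a starting task to list the
    examination steps in order."""
    chain = [start]
    while chain[-1] in is_performed_before:
        chain.append(is_performed_before[chain[-1]])
    return chain
```

    Walking the precedence map reproduces the process flow, and the influence map makes explicit which steps a failed evaluation criterion points back to.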

  1. Characteristics of knowledge content in a curated online evidence library.

    PubMed

    Varada, Sowmya; Lacson, Ronilda; Raja, Ali S; Ip, Ivan K; Schneider, Louise; Osterbur, David; Bain, Paul; Vetrano, Nicole; Cellini, Jacqueline; Mita, Carol; Coletti, Margaret; Whelan, Julia; Khorasani, Ramin

    2018-05-01

    To describe types of recommendations represented in a curated online evidence library, report on the quality of evidence-based recommendations pertaining to diagnostic imaging exams, and assess underlying knowledge representation. The evidence library is populated with clinical decision rules, professional society guidelines, and locally developed best practice guidelines. Individual recommendations were graded based on a standard methodology and compared using chi-square test. Strength of evidence ranged from grade 1 (systematic review) through grade 5 (recommendations based on expert opinion). Finally, variations in the underlying representation of these recommendations were identified. The library contains 546 individual imaging-related recommendations. Only 15% (16/106) of recommendations from clinical decision rules were grade 5 vs 83% (526/636) from professional society practice guidelines and local best practice guidelines that cited grade 5 studies (P < .0001). Minor head trauma, pulmonary embolism, and appendicitis were topic areas supported by the highest quality of evidence. Three main variations in underlying representations of recommendations were "single-decision," "branching," and "score-based." Most recommendations were grade 5, largely because studies to test and validate many recommendations were absent. Recommendation types vary in amount and complexity and, accordingly, the structure and syntax of statements they generate. However, they can be represented in single-decision, branching, and score-based representations. In a curated evidence library with graded imaging-based recommendations, evidence quality varied widely, with decision rules providing the highest-quality recommendations. The library may be helpful in highlighting evidence gaps, comparing recommendations from varied sources on similar clinical topics, and prioritizing imaging recommendations to inform clinical decision support implementation.

  2. Evaluation of image deblurring methods via a classification metric

    NASA Astrophysics Data System (ADS)

    Perrone, Daniele; Humphreys, David; Lamb, Robert A.; Favaro, Paolo

    2012-09-01

    The performance of single image deblurring algorithms is typically evaluated via a certain discrepancy measure between the reconstructed image and the ideal sharp image. The choice of metric, however, has been a source of debate and has also led to alternative metrics based on human visual perception. While fixed metrics may fail to capture some small but visible artifacts, perception-based metrics may favor reconstructions with artifacts that are visually pleasant. To overcome these limitations, we propose to assess the quality of reconstructed images via a task-driven metric. In this paper we consider object classification as the task and therefore use the rate of classification as the metric to measure deblurring performance. In our evaluation we use data with different types of blur in two cases: Optical Character Recognition (OCR), where the goal is to recognise characters in a black and white image, and object classification with no restrictions on pose, illumination and orientation. Finally, we show how off-the-shelf classification algorithms benefit from working with deblurred images.

  3. Finger vein verification system based on sparse representation.

    PubMed

    Xin, Yang; Liu, Zhi; Zhang, Haixia; Zhang, Hong

    2012-09-01

    Finger vein verification is a promising biometric pattern for personal identification in terms of security and convenience. The recognition performance of this technology heavily relies on the quality of finger vein images and on the recognition algorithm. To achieve efficient recognition performance, a special finger vein imaging device is developed, and a finger vein recognition method based on sparse representation is proposed. The motivation for the proposed method is that finger vein images exhibit a sparse property. In the proposed system, the regions of interest (ROIs) in the finger vein images are segmented and enhanced. Sparse representation and sparsity preserving projection on ROIs are performed to obtain the features. Finally, the features are measured for recognition. An equal error rate of 0.017% was achieved based on the finger vein image database, which contains images that were captured by using the near-IR imaging device that was developed in this study. The experimental results demonstrate that the proposed method is faster and more robust than previous methods.

  4. Quantitative analysis and temperature-induced variations of moiré pattern in fiber-coupled imaging sensors.

    PubMed

    Karbasi, Salman; Arianpour, Ashkan; Motamedi, Nojan; Mellette, William M; Ford, Joseph E

    2015-06-10

    Imaging fiber bundles can map the curved image surface formed by some high-performance lenses onto flat focal plane detectors. The relative alignment between the focal plane array pixels and the quasi-periodic fiber-bundle cores can impose an undesirable space variant moiré pattern, but this effect may be greatly reduced by flat-field calibration, provided that the local responsivity is known. Here we demonstrate a stable metric for spatial analysis of the moiré pattern strength, and use it to quantify the effect of relative sensor and fiber-bundle pitch, and that of the Bayer color filter. We measure the thermal dependence of the moiré pattern, and the achievable improvement by flat-field calibration at different operating temperatures. We show that a flat-field calibration image at a desired operating temperature can be generated using linear interpolation between white images at several fixed temperatures, comparing the final image quality with an experimentally acquired image at the same temperature.
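
    Generating a flat-field (white) image at an intermediate temperature by linear interpolation, and then dividing it out, can be sketched as follows; the pixel values and temperatures are toy numbers, not measured data:

```python
import numpy as np

def white_at_temperature(temps, whites, t):
    """Linearly interpolate, per pixel, between white (flat-field) images
    acquired at several fixed temperatures to a desired temperature t."""
    temps = np.asarray(temps, dtype=np.float64)
    i = int(np.clip(np.searchsorted(temps, t) - 1, 0, len(temps) - 2))
    w = (t - temps[i]) / (temps[i + 1] - temps[i])
    return (1.0 - w) * whites[i] + w * whites[i + 1]

def flat_field_correct(raw, white):
    """Divide out the fixed-pattern (moire) response captured in the white."""
    return raw * (white.mean() / white)

temps = [10.0, 30.0]
whites = np.stack([np.full((2, 2), 100.0), np.full((2, 2), 120.0)])
w20 = white_at_temperature(temps, whites, 20.0)

# A raw frame carrying the same fixed pattern as its white flattens out
raw = np.array([[90.0, 110.0], [110.0, 90.0]])
flat = flat_field_correct(raw, raw)
```

    Interpolating the calibration frame avoids re-acquiring a white image at every operating temperature, which is the practical benefit the paper measures.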

  5. INVITED TOPICAL REVIEW: Parallel magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Larkman, David J.; Nunes, Rita G.

    2007-04-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. We show how to recognize potential failure modes and their associated artefacts. Well-established applications, including angiography, cardiac imaging and applications using echo planar imaging, are reviewed, and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed.

  6. Single-image super-resolution based on Markov random field and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Liu, Zheng; Gueaieb, Wail; He, Xiaohai

    2011-04-01

    Learning-based methods are widely adopted in image super-resolution. In this paper, we propose a new learning-based approach using the contourlet transform and a Markov random field. The proposed algorithm employs the contourlet transform rather than the conventional wavelet to represent image features and takes into account the correlation between adjacent pixels or image patches through the Markov random field (MRF) model. The input low-resolution (LR) image is decomposed with the contourlet transform and fed to the MRF model together with the contourlet transform coefficients from the low- and high-resolution image pairs in the training set. The unknown high-frequency components/coefficients for the input low-resolution image are inferred by a belief propagation algorithm. Finally, the inverse contourlet transform converts the LR input and the inferred high-frequency coefficients into the super-resolved image. The effectiveness of the proposed method is demonstrated with experiments on facial, vehicle plate, and real-scene images. Better visual quality is achieved in terms of peak signal-to-noise ratio and the structural similarity (SSIM) measure.

  7. Guided filter-based fusion method for multiexposure images

    NASA Astrophysics Data System (ADS)

    Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei

    2016-11-01

    It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range camera. A weighted sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a single high-quality image. This method mainly includes three parts. First, two image features, i.e., gradient and well-exposedness, are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is used as the guidance image. This process reduces the noise in the initial weight maps and preserves texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of the source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of guided filter-based weight-map refinement, which together provide accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
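
    The three-step pipeline can be sketched in a few dozen lines. The following is a minimal grayscale illustration under assumed parameter choices (box-filter radius, well-exposedness Gaussian width), using the standard guided-filter formulation; it is not the authors' implementation.

```python
import numpy as np

def box_mean(a, r):
    """Mean filter with a (2r+1)x(2r+1) window and edge padding."""
    p = np.pad(a, r, mode='edge')
    h, w = a.shape
    out = np.zeros(a.shape, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def guided_filter(I, p, r=2, eps=1e-3):
    """Edge-preserving smoothing of p, guided by image I."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mI * mI
    cov_Ip = box_mean(I * p, r) - mI * mp
    a = cov_Ip / (var_I + eps)
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

def fuse_exposures(images, r=2, eps=1e-3):
    """Weighted-sum fusion: gradient x well-exposedness weights,
    refined by a guided filter with each source image as guidance."""
    weights = []
    for im in images:
        gy, gx = np.gradient(im)
        grad = np.abs(gx) + np.abs(gy)                      # gradient feature
        well = np.exp(-((im - 0.5) ** 2) / (2 * 0.2 ** 2))  # well-exposedness
        w = guided_filter(im, grad * well + 1e-12, r, eps)
        weights.append(np.clip(w, 1e-12, None))
    weights = np.array(weights)
    weights /= weights.sum(axis=0)                          # per-pixel normalization
    return sum(w * im for w, im in zip(weights, images))
```

    Inputs are grayscale exposures scaled to [0, 1]; for color images the same weights would be applied to each channel.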

  8. Remote Sensing Image Fusion Method Based on Nonsubsampled Shearlet Transform and Sparse Representation

    NASA Astrophysics Data System (ADS)

    Moonon, Altan-Ulzii; Hu, Jianwen; Li, Shutao

    2015-12-01

    Remote sensing image fusion is an important preprocessing technique in remote sensing image processing. In this paper, a remote sensing image fusion method based on the nonsubsampled shearlet transform (NSST) with sparse representation (SR) is proposed. First, the low-resolution multispectral (MS) image is upsampled and its color space is transformed from Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS). Then, the high-resolution panchromatic (PAN) image and the intensity component of the MS image are decomposed by NSST into high- and low-frequency coefficients. The low-frequency coefficients of the PAN image and the intensity component are fused by SR with a learned dictionary. The high-frequency coefficients of the intensity component and the PAN image are fused by a local-energy-based fusion rule. Finally, the fused result is obtained by performing the inverse NSST and inverse IHS transforms. The experimental results on IKONOS and QuickBird satellite images demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than other remote sensing image fusion methods, in both visual effect and objective evaluation.
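
    The intensity-component idea underlying this kind of pansharpening can be illustrated with the much simpler classical fast-IHS substitution (no NSST or SR, which are the paper's actual contributions): compute an intensity component from the upsampled MS bands, and inject the PAN detail into every band.

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """Classical fast IHS fusion sketch.
    ms: (H, W, 3) upsampled multispectral image, pan: (H, W) panchromatic."""
    intensity = ms.mean(axis=2)       # simple intensity component
    detail = pan - intensity          # spatial detail missing from the MS bands
    return ms + detail[..., None]     # add the same detail to every band
```

    By construction, the per-pixel mean of the fused bands equals the PAN image, which is the substitution property the intensity/PAN fusion steps above generalize.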

  9. Large-scale retrieval for medical image analytics: A comprehensive review.

    PubMed

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

    Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images produced at ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling such huge amounts of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, covering a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. The meaning of body experience evaluation in oncology.

    PubMed

    Slatman, Jenny

    2011-12-01

    Evaluation of quality of life, psychic and bodily well-being is becoming increasingly important in oncology aftercare. This type of assessment is mainly carried out by medical psychologists. In this paper I will seek to show that body experience valuation has, besides its psychological usefulness, a normative and practical dimension. Body experience evaluation aims at establishing the way a person experiences and appreciates his or her physical appearance, intactness and competence. This valuation constitutes one's 'body image'. While, first, interpreting the meaning of body image and, second, indicating the limitations of current psychological body image assessment, I argue that the normative aspect of body image is related to the experience of bodily wholeness or bodily integrity. Since this experience is contextualized by a person's life story, evaluation should also focus on narrative aspects. I finally suggest that the interpretation of body experience is not only valuable to assess a person's quality of life after treatment, but that it can also be useful in counseling prior to interventions, since it can support patients in making decisions about interventions that will change their bodies. To apply this type of evaluation to oncology practice, a rich and tailored vocabulary of body experiences has to be developed.

  11. A Freehand Ultrasound Elastography System with Tracking for In-vivo Applications

    PubMed Central

    Foroughi, Pezhman; Kang, Hyun-Jae; Carnegie, Daniel A.; van Vledder, Mark G.; Choti, Michael A.; Hager, Gregory D.; Boctor, Emad M.

    2012-01-01

    Ultrasound transducers are commonly tracked in modern ultrasound navigation/guidance systems. In this paper, we demonstrate the advantages of incorporating tracking information into ultrasound elastography for clinical applications. First, we address a common limitation of freehand palpation: speckle decorrelation due to out-of-plane probe motion. We show that automatically selecting pairs of radio-frequency (RF) frames with minimal lateral and out-of-plane motion, combined with a fast and robust displacement estimation technique, greatly improves in-vivo elastography results. We also use tracking information and an image quality measure to fuse multiple images with similar strain, taken from roughly the same location, into a high-quality elastography image. Finally, we show that tracking information can be used to give the user partial control over the rate of compression. Our methods were tested on a tissue-mimicking phantom, and experiments were conducted on intra-operative data acquired during animal and human experiments involving liver ablation. Our results suggest that in challenging clinical conditions, our proposed method produces reliable strain images and eliminates the need for a manual search through the ultrasound data to find RF pairs suitable for elastography. PMID:23257351
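
    The frame-pair selection step can be sketched with plain geometry on tracked probe positions. This is a hedged illustration with made-up tolerances (usable axial compression range, maximum in-plane/out-of-plane offset), not the paper's actual selection criterion, which also uses a displacement-estimation quality measure.

```python
def select_rf_pairs(poses, axial_range=(0.5, 2.0), max_offaxis=0.2):
    """poses: list of (x, y, z) tracked probe positions in mm, z = axial.
    Return (i, j) frame pairs whose axial motion lies inside the usable
    compression range and whose off-axis motion is small."""
    pairs = []
    for i in range(len(poses)):
        for j in range(i + 1, len(poses)):
            dx = poses[j][0] - poses[i][0]
            dy = poses[j][1] - poses[i][1]
            dz = abs(poses[j][2] - poses[i][2])
            if axial_range[0] <= dz <= axial_range[1] and \
               (dx * dx + dy * dy) ** 0.5 <= max_offaxis:
                pairs.append((i, j))
    # best candidates first: least off-axis (decorrelating) motion
    pairs.sort(key=lambda p: (poses[p[1]][0] - poses[p[0]][0]) ** 2
                           + (poses[p[1]][1] - poses[p[0]][1]) ** 2)
    return pairs
```

    In practice the per-frame displacement estimation would then be run only on the returned pairs, replacing a manual search through the RF data.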

  12. PET motion correction in context of integrated PET/MR: Current techniques, limitations, and future projections.

    PubMed

    Gillman, Ashley; Smith, Jye; Thomas, Paul; Rose, Stephen; Dowson, Nicholas

    2017-12-01

    Patient motion is an important consideration in modern PET image reconstruction. Advances in PET technology mean motion has an increasingly important influence on resulting image quality. Motion-induced artifacts can have adverse effects on clinical outcomes, including missed diagnoses and oversized radiotherapy treatment volumes. This review aims to summarize the wide variety of motion correction techniques available in PET and combined PET/CT and PET/MR, with a focus on the latter. A general framework for the motion correction of PET images is presented, consisting of acquisition, modeling, and correction stages. Methods for measuring, modeling, and correcting motion and associated artifacts, both in the literature and commercially available, are presented, and their relative merits are contrasted. Identified limitations of current methods include modeling of aperiodic and/or unpredictable motion, attaining adequate temporal resolution for motion correction in dynamic kinetic modeling acquisitions, and maintaining availability of the MR in PET/MR scans for diagnostic acquisitions. Finally, avenues for future investigation are discussed, with a focus on developments that could improve PET image quality and that are practical in the clinical environment. © 2017 American Association of Physicists in Medicine.

  13. Efficient Text Encryption and Hiding with Double-Random Phase-Encoding

    PubMed Central

    Sang, Jun; Ling, Shenggui; Alam, Mohammad S.

    2012-01-01

    In this paper, a text encryption and hiding method based on the double-random phase-encoding technique is proposed. First, the secret text is transformed into a 2-dimensional array; the higher bits of the elements in the transformed array are used to store the bit stream of the secret text, while the lower bits are filled with specific values. Then, the transformed array is encoded with the double-random phase-encoding technique. Finally, the encoded array is superimposed on an expanded host image to obtain the image embedded with hidden data. The performance of the proposed technique, including the hiding capacity, the recovery accuracy of the secret text, and the quality of the image embedded with hidden data, is tested via analytical modeling and a test data stream. Experimental results show that the secret text can be recovered either accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient. By using optical information processing techniques, the proposed method has been found to significantly improve the security of text information transmission, while ensuring hiding capacity at a prescribed level. PMID:23202003
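
    The core double-random phase-encoding (DRPE) transform is easy to sketch numerically: multiply by a random phase mask in the spatial domain, a second one in the Fourier domain, and invert both to decode. The sketch below skips the paper's higher/lower-bit packing and host-image superimposition steps and stores raw byte values in the array; it is an illustration of the DRPE roundtrip only.

```python
import numpy as np

def drpe_encode(data, phi1, phi2):
    """DRPE: f -> IFFT( FFT(f * e^{i*phi1}) * e^{i*phi2} )."""
    return np.fft.ifft2(np.fft.fft2(data * np.exp(1j * phi1)) * np.exp(1j * phi2))

def drpe_decode(enc, phi1, phi2):
    """Invert both phase masks to recover the hidden array exactly."""
    return np.fft.ifft2(np.fft.fft2(enc) * np.exp(-1j * phi2)) * np.exp(-1j * phi1)

def text_to_array(text, shape):
    """Pack text bytes into a 2-D float array (zero-padded)."""
    buf = np.zeros(shape, dtype=float)
    b = text.encode()
    buf.flat[:len(b)] = list(b)
    return buf

def array_to_text(arr, n):
    """Recover n bytes of text, rounding away numerical error."""
    return bytes(int(round(v)) for v in arr.real.flat[:n]).decode()
```

    Without knowledge of both phase masks (`phi1`, `phi2`), the encoded complex field is statistically white, which is what gives DRPE its security.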

  14. Single-exposure color digital holography

    NASA Astrophysics Data System (ADS)

    Feng, Shaotong; Wang, Yanhui; Zhu, Zhuqing; Nie, Shouping

    2010-11-01

    In this paper, we report a method for color image reconstruction by recording only a single multi-wavelength hologram. In the recording process, three lasers of different wavelengths emitting in the red, green and blue regions illuminate the object, and the object diffraction fields arrive at the hologram plane simultaneously. Three reference beams with different spatial angles interfere with the corresponding object diffraction fields on the hologram plane, respectively. Finally, a series of sub-holograms is incoherently superimposed on the CCD and recorded as a multi-wavelength hologram. Angular division multiplexing is applied to the reference beams so that the spatial spectra of the multiple recordings are separated in the Fourier plane. In the reconstruction process, the multi-wavelength hologram is Fourier transformed into its Fourier plane, where the spatial spectra of the different wavelengths are separated and can easily be extracted by frequency filtering. The extracted spectra are used to reconstruct the corresponding monochromatic complex amplitudes, which are synthesized to reconstruct the color image. As a single-exposure recording technique, the method is convenient for real-time image processing applications. However, the quality of the reconstructed images is affected by speckle noise, and improving it requires further research.
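
    The frequency-filtering step can be sketched as masking one off-axis order in the Fourier plane and inverse transforming. This is a generic off-axis holography illustration (circular mask, known carrier location), not the authors' reconstruction code.

```python
import numpy as np

def extract_order(holo, center, radius):
    """Isolate one wavelength's off-axis order in the Fourier plane and
    return the reconstructed complex field for that wavelength.
    center is the (row, col) of the order in the fftshifted spectrum."""
    F = np.fft.fftshift(np.fft.fft2(holo))
    yy, xx = np.indices(F.shape)
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return np.fft.ifft2(np.fft.ifftshift(F * mask))
```

    For three angularly multiplexed reference beams, this would be called three times with the three carrier locations, yielding the monochromatic complex amplitudes to be synthesized into the color image.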

  15. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    PubMed

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

    With computing capability and display size growing, mobile devices have been used as tools to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device alone cannot provide a satisfactory quality of experience for radiologists. This paper presents a medical system that retrieves medical images from the picture archiving and communication system (PACS) on a mobile device over a wireless network. In the proposed application, the mobile device obtains patient information and medical images through a proxy server connected to the PACS server. Meanwhile, the proxy server integrates a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to provide shape, brightness, depth and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that automatically changes remote rendering parameters to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of medical images over the wireless network in the proposed application are also discussed. The results demonstrated that the proposed medical application can provide a smooth interactive experience over WLAN and 3G networks.

  16. A Genetic Algorithm for the Generation of Packetization Masks for Robust Image Communication

    PubMed Central

    Zapata-Quiñones, Katherine; Duran-Faundez, Cristian; Gutiérrez, Gilberto; Lecuire, Vincent; Arredondo-Flores, Christopher; Jara-Lipán, Hugo

    2017-01-01

    Image interleaving has proven to be an effective way to improve the robustness of image communication systems when resource limitations make reliable protocols unsuitable (e.g., in wireless camera sensor networks); however, the search for optimal interleaving patterns is scarcely tackled in the literature. In 2008, Rombaut et al. presented an interesting approach introducing a packetization mask generator based on Simulated Annealing (SA), including a cost function that allows assessing the suitability of a packetization pattern without extensive simulations. In this work, we present a complementary study of the non-trivial problem of generating optimal packetization patterns. We propose a genetic algorithm as an alternative to the cited work, adopting the mentioned cost function, and compare it to the SA approach and a torus automorphism interleaver. In addition, we validate the cost function and provide results assessing its implications for the quality of reconstructed images. Several scenarios based on visual sensor network applications were tested in a computer application. Results in terms of the selected cost function and the PSNR image quality metric show that our algorithm performs similarly to the other approaches. Finally, we discuss the obtained results and comment on open research challenges. PMID:28452934

  17. Development of a Human Brain Diffusion Tensor Template

    PubMed Central

    Peng, Huiling; Orlichenko, Anton; Dawe, Robert J.; Agam, Gady; Zhang, Shengwei; Arfanakis, Konstantinos

    2009-01-01

    The development of a brain template for diffusion tensor imaging (DTI) is crucial for comparisons of neuronal structural integrity and brain connectivity across populations, as well as for the development of a white matter atlas. Previous efforts to produce a DTI brain template have been compromised by factors related to image quality, the effectiveness of the image registration approach, the appropriateness of subject inclusion criteria, and the completeness and accuracy of the information summarized in the final template. The purpose of this work was to develop a DTI human brain template using techniques that address the shortcomings of previous efforts. Therefore, data containing minimal artifacts were first obtained on 67 healthy human subjects selected from an age-group with relatively similar diffusion characteristics (20–40 years of age), using an appropriate DTI acquisition protocol. Non-linear image registration based on mean diffusion-weighted and fractional anisotropy images was employed. DTI brain templates containing median and mean tensors were produced in ICBM-152 space and made publicly available. The resulting set of DTI templates is characterized by higher image sharpness, provides the ability to distinguish smaller white matter fiber structures, and contains fewer image artifacts than previously developed templates, and, to our knowledge, is one of only two templates produced based on a relatively large number of subjects. Furthermore, median tensors were shown to better preserve the diffusion characteristics at the group level than mean tensors. Finally, white matter fiber tractography was applied on the template and several fiber-bundles were traced. PMID:19341801

  18. Development of a human brain diffusion tensor template.

    PubMed

    Peng, Huiling; Orlichenko, Anton; Dawe, Robert J; Agam, Gady; Zhang, Shengwei; Arfanakis, Konstantinos

    2009-07-15

    The development of a brain template for diffusion tensor imaging (DTI) is crucial for comparisons of neuronal structural integrity and brain connectivity across populations, as well as for the development of a white matter atlas. Previous efforts to produce a DTI brain template have been compromised by factors related to image quality, the effectiveness of the image registration approach, the appropriateness of subject inclusion criteria, and the completeness and accuracy of the information summarized in the final template. The purpose of this work was to develop a DTI human brain template using techniques that address the shortcomings of previous efforts. Therefore, data containing minimal artifacts were first obtained on 67 healthy human subjects selected from an age-group with relatively similar diffusion characteristics (20-40 years of age), using an appropriate DTI acquisition protocol. Non-linear image registration based on mean diffusion-weighted and fractional anisotropy images was employed. DTI brain templates containing median and mean tensors were produced in ICBM-152 space and made publicly available. The resulting set of DTI templates is characterized by higher image sharpness, provides the ability to distinguish smaller white matter fiber structures, and contains fewer image artifacts than previously developed templates, and, to our knowledge, is one of only two templates produced based on a relatively large number of subjects. Furthermore, median tensors were shown to better preserve the diffusion characteristics at the group level than mean tensors. Finally, white matter fiber tractography was applied on the template and several fiber-bundles were traced.

  19. Opto-mechanical design and gravity-deformation analysis on optical telescope in laser communication system

    NASA Astrophysics Data System (ADS)

    Fu, Sen; Du, Jindan; Song, Yiwei; Gao, Tianyu; Zhang, Daqing; Wang, Yongzhi

    2017-11-01

    In space laser communication, the optical antenna is one of the main components, and its required precision is very high. In this paper, the design and simulation of the optical lenses and supporting truss are carried out for a Ritchey-Chrétien (R-C) telescope, according to the system parameters. A finite element method (FEM) was used to analyze the deformation of the optical lenses. Finally, Zernike polynomials were introduced to fit the surface of the primary mirror, which has a diameter of 250 mm. The objective of this study is to determine whether the wave-front aberration of the primary mirror can meet the imaging-quality requirements. The results show that the imaging quality is degraded by the gravity-induced deformation of the primary and secondary mirrors. At the same time, the optical deviation of the optical antenna increases with the diameter of the pupil.
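
    Fitting a deformed mirror surface with Zernike terms reduces to a linear least-squares problem once the FEM sag values are sampled on the unit disk. The sketch below uses only the first few Cartesian-form terms (piston, tilts, defocus, astigmatism) as an illustration; a real analysis would use a normalized Zernike set over the full aperture.

```python
import numpy as np

def zernike_basis(x, y):
    """First few Zernike-like terms on the unit disk, Cartesian form:
    piston, tilt-x, tilt-y, defocus, astigmatism 0/45 deg."""
    r2 = x ** 2 + y ** 2
    return np.stack([np.ones_like(x), x, y,
                     2 * r2 - 1, x ** 2 - y ** 2, 2 * x * y], axis=1)

def fit_zernike(x, y, sag):
    """Least-squares fit of surface deformation (sag) by Zernike terms.
    x, y: normalized pupil coordinates; sag: FEM-derived displacements."""
    A = zernike_basis(x, y)
    coeffs, *_ = np.linalg.lstsq(A, sag, rcond=None)
    rms_residual = np.sqrt(np.mean((sag - A @ coeffs) ** 2))
    return coeffs, rms_residual
```

    The fitted coefficients directly give the wave-front aberration contributions (e.g., defocus from axial gravity sag), and the RMS residual indicates how well the chosen terms describe the deformation.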

  20. A mask quality control tool for the OSIRIS multi-object spectrograph

    NASA Astrophysics Data System (ADS)

    López-Ruiz, J. C.; Vaz Cedillo, Jacinto Javier; Ederoclite, Alessandro; Bongiovanni, Ángel; González Escalera, Víctor

    2012-09-01

    The OSIRIS multi-object spectrograph uses a set of user-customised masks, which are manufactured on demand. The manufacturing process consists of drilling the specified slits in the mask with the required accuracy, and ensuring that the slits are in the right place when observing is of vital importance. We present a tool for checking the quality of the mask manufacturing process, based on analyzing instrument images obtained with the manufactured masks in place. The tool extracts the slit information from these images, relates the specifications to the extracted slit information, and finally reports to the operator whether the manufactured mask fulfills the expectations of the mask designer. The proposed tool has been built using scripting languages and standard libraries such as opencv, pyraf and scipy. The software architecture, advantages and limitations of this tool in the lifecycle of a multi-object acquisition are presented.

  1. Branding a college of pharmacy.

    PubMed

    Rupp, Michael T

    2012-11-12

    In a possible future of supply-demand imbalance in pharmacy education, a brand that positively differentiates a college or school of pharmacy from its competitors may be the key to its survival. The nominal group technique, a structured group problem-solving and decision-making process, was used during a faculty retreat to identify and agree on the core qualities that define the brand image of Midwestern University's College of Pharmacy in Glendale, AZ. Results from the retreat were provided to the faculty and students, who then proposed 168 mottos that embodied these qualities. Mottos were voted on by faculty members and pharmacy students. The highest ranked 24 choices were submitted to the faculty, who then selected the top 10 finalists. A final vote by students was used to select the winning motto. The methods described here may be useful to other colleges and schools of pharmacy that want to better define their own brand image and strengthen their organizational culture.

  2. Automatic multiresolution age-related macular degeneration detection from fundus images

    NASA Astrophysics Data System (ADS)

    Garnier, Mickaël.; Hurtut, Thomas; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Age-related Macular Degeneration (AMD) is a leading cause of legal blindness. As the disease progresses, visual loss occurs rapidly; therefore, early diagnosis is required for timely treatment. Automatic, fast and robust screening of this widespread disease should allow early detection. Most of the automatic diagnosis methods in the literature are based on a complex segmentation of the drusen, targeting a specific symptom of the disease. In this paper, we present a preliminary study for AMD detection from color fundus photographs using multiresolution texture analysis. We analyze the texture at several scales by using a wavelet decomposition in order to identify all the relevant texture patterns. Textural information is captured using both the sign and magnitude components of the completed model of Local Binary Patterns. An image is finally described with the textural pattern distributions of the wavelet coefficient images obtained at each level of decomposition. We use Linear Discriminant Analysis for feature dimension reduction, to avoid the curse of dimensionality, and for image classification. Experiments were conducted on a dataset containing 45 images (23 healthy and 22 diseased) of variable quality captured by different cameras. Our method achieved a recognition rate of 93.3%, with a specificity of 95.5% and a sensitivity of 91.3%. This approach shows promising results at low cost, in agreement with medical experts, as well as robustness to both image quality and fundus camera model.
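
    The feature-extraction idea (LBP histograms over wavelet subbands at several scales) can be sketched with a basic Haar decomposition and the plain sign-component LBP. This is a simplified illustration: the paper uses the completed LBP model (sign and magnitude) and a proper wavelet; parameter choices below are assumptions.

```python
import numpy as np

def haar_decompose(a):
    """One level of the 2-D Haar transform: approximation + 3 detail bands."""
    a = a[:a.shape[0] // 2 * 2, :a.shape[1] // 2 * 2].astype(float)
    q00, q01 = a[0::2, 0::2], a[0::2, 1::2]
    q10, q11 = a[1::2, 0::2], a[1::2, 1::2]
    ll = (q00 + q01 + q10 + q11) / 4      # approximation
    lh = (q00 + q01 - q10 - q11) / 4      # horizontal detail
    hl = (q00 - q01 + q10 - q11) / 4      # vertical detail
    hh = (q00 - q01 - q10 + q11) / 4      # diagonal detail
    return ll, lh, hl, hh

def lbp_histogram(img):
    """Normalized histogram of 8-neighbour local binary pattern codes."""
    c = img[1:-1, 1:-1]
    code = np.zeros(c.shape, dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def multiresolution_features(img, levels=2):
    """Concatenate LBP histograms of every subband at each wavelet level."""
    feats, current = [], img.astype(float)
    for _ in range(levels):
        ll, lh, hl, hh = haar_decompose(current)
        for band in (ll, lh, hl, hh):
            feats.append(lbp_histogram(band))
        current = ll                       # recurse on the approximation
    return np.concatenate(feats)
```

    The resulting high-dimensional descriptor is what the dimensionality-reduction step (LDA in the paper) would then compress before classification.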

  3. Ongoing quality control in digital radiography: Report of AAPM Imaging Physics Committee Task Group 151.

    PubMed

    Jones, A Kyle; Heintz, Philip; Geiser, William; Goldman, Lee; Jerjian, Khachig; Martin, Melissa; Peck, Donald; Pfeiffer, Douglas; Ranger, Nicole; Yorkston, John

    2015-11-01

    Quality control (QC) in medical imaging is an ongoing process and not just a series of infrequent evaluations of medical imaging equipment. The QC process involves designing and implementing a QC program, collecting and analyzing data, investigating results that are outside the acceptance levels for the QC program, and taking corrective action to bring these results back to an acceptable level. The QC process involves key personnel in the imaging department, including the radiologist, radiologic technologist, and the qualified medical physicist (QMP). The QMP performs detailed equipment evaluations and helps with oversight of the QC program, while the radiologic technologist is responsible for the day-to-day operation of the QC program. The continued need for ongoing QC in digital radiography has been highlighted in the scientific literature. The charge of this task group was to recommend consistency tests designed to be performed by a medical physicist or a radiologic technologist under the direction of a medical physicist to identify problems with an imaging system that need further evaluation by a medical physicist, including a fault tree to define actions that need to be taken when certain fault conditions are identified. The focus of this final report is the ongoing QC process, including rejected image analysis, exposure analysis, and artifact identification. These QC tasks are vital for the optimal operation of a department performing digital radiography.
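
    Rejected-image analysis, one of the ongoing QC tasks named above, boils down to tallying reject rates per exam type and reason and flagging rates above an action level. The sketch below uses an assumed 8% action level and a made-up log format; TG-151 itself defines the actual tests and thresholds.

```python
def reject_analysis(log, threshold=0.08):
    """log: iterable of (exam_type, rejected_bool, reason) entries.
    Returns per-exam totals, reject rate, reason tallies, and an
    'action' flag for exam types above the action level."""
    stats = {}
    for exam, rejected, reason in log:
        s = stats.setdefault(exam, {'total': 0, 'rejected': 0, 'reasons': {}})
        s['total'] += 1
        if rejected:
            s['rejected'] += 1
            s['reasons'][reason] = s['reasons'].get(reason, 0) + 1
    for s in stats.values():
        s['rate'] = s['rejected'] / s['total']
        s['action'] = s['rate'] > threshold   # needs investigation
    return stats
```

    Flagged exam types (and their dominant reject reasons, e.g. positioning vs exposure) tell the QMP where further equipment or technique evaluation is warranted.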

  4. Large Scale Textured Mesh Reconstruction from Mobile Mapping Images and LIDAR Scans

    NASA Astrophysics Data System (ADS)

    Boussaha, M.; Vallet, B.; Rives, P.

    2018-05-01

    The representation of 3D geometric and photometric information of the real world is one of the most challenging and extensively studied research topics in the photogrammetry and robotics communities. In this paper, we present a fully automatic framework for high-quality, large-scale urban texture mapping using oriented images and LiDAR scans acquired by a terrestrial Mobile Mapping System (MMS). First, the acquired points and images are sliced into temporal chunks, ensuring a reasonable size and time consistency between geometry (points) and photometry (images). Then, a simple, fast and scalable 3D surface reconstruction relying on the sensor space topology is performed on each chunk after an isotropic sampling of the point cloud obtained from the raw LiDAR scans. Finally, the algorithm proposed in (Waechter et al., 2014) is adapted to texture the reconstructed surface with the images acquired simultaneously, ensuring a high-quality texture with no seams and global color adjustment. We evaluate our full pipeline on a dataset of 17 km of acquisition in Rouen, France, resulting in nearly 2 billion points and 40,000 full HD images. We are able to reconstruct and texture the whole acquisition in less than 30 computing hours, the entire process being highly parallel as each chunk can be processed independently in a separate thread or computer.

  5. Multi-acoustic lens design methodology for a low cost C-scan photoacoustic imaging camera

    NASA Astrophysics Data System (ADS)

    Chinni, Bhargava; Han, Zichao; Brown, Nicholas; Vallejo, Pedro; Jacobs, Tess; Knox, Wayne; Dogra, Vikram; Rao, Navalgund

    2016-03-01

    We have designed and implemented a novel acoustic-lens-based focusing technology in a prototype photoacoustic imaging camera. All photoacoustically generated waves from laser-exposed absorbers within a small volume are focused simultaneously by the lens onto an image plane, and a multi-element ultrasound transducer array captures the focused photoacoustic signals. The acoustic lens eliminates the need for expensive data acquisition hardware, is faster than electronic focusing, and enables real-time image reconstruction. Using this photoacoustic imaging camera, we have imaged more than 150 ex-vivo human prostate, kidney and thyroid specimens, several centimeters in size, at millimeter resolution for cancer detection. In this paper, we share our lens design strategy and how we evaluate the resulting quality metrics (on- and off-axis point spread function, depth of field and modulation transfer function) through simulation. An advanced toolbox in MATLAB was adapted and used to simulate a two-dimensional gridded model that incorporates realistic photoacoustic signal generation and acoustic wave propagation through the lens, with medium properties defined at each grid point. Two-dimensional point spread functions have been generated and compared with experiments to demonstrate the utility of our design strategy. Finally, we present results from work in progress on the use of a two-lens system aimed at further improving some of the quality metrics of our system.

  6. Ongoing quality control in digital radiography: Report of AAPM Imaging Physics Committee Task Group 151

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, A. Kyle, E-mail: kyle.jones@mdanderson.org; Geiser, William; Heintz, Philip

    Quality control (QC) in medical imaging is an ongoing process and not just a series of infrequent evaluations of medical imaging equipment. The QC process involves designing and implementing a QC program, collecting and analyzing data, investigating results that are outside the acceptance levels for the QC program, and taking corrective action to bring these results back to an acceptable level. The QC process involves key personnel in the imaging department, including the radiologist, radiologic technologist, and the qualified medical physicist (QMP). The QMP performs detailed equipment evaluations and helps with oversight of the QC program; the radiologic technologist is responsible for the day-to-day operation of the QC program. The continued need for ongoing QC in digital radiography has been highlighted in the scientific literature. The charge of this task group was to recommend consistency tests designed to be performed by a medical physicist, or by a radiologic technologist under the direction of a medical physicist, to identify problems with an imaging system that need further evaluation by a medical physicist, including a fault tree to define actions that need to be taken when certain fault conditions are identified. The focus of this final report is the ongoing QC process, including rejected image analysis, exposure analysis, and artifact identification. These QC tasks are vital for the optimal operation of a department performing digital radiography.
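Rejected-image analysis, one of the ongoing QC tasks the report highlights, amounts to trending an overall reject rate together with a per-reason breakdown. A minimal sketch, using a hypothetical record format (the field names `rejected` and `reason` are illustrative assumptions, not from the report):

```python
def reject_analysis(records):
    """Rejected-image analysis (sketch): overall reject rate and a
    per-reason breakdown, the kind of consistency metric an ongoing
    QC program would trend over time and compare against action levels."""
    total = len(records)
    rejected = [r for r in records if r["rejected"]]
    rate = len(rejected) / total if total else 0.0
    by_reason = {}
    for r in rejected:
        by_reason[r["reason"]] = by_reason.get(r["reason"], 0) + 1
    return rate, by_reason
```

A QC program would typically flag the department for investigation when `rate` drifts outside its historically established acceptance band.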

  7. Sparse coded image super-resolution using K-SVD trained dictionary based on regularized orthogonal matching pursuit.

    PubMed

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2015-01-01

    Image super-resolution (SR) plays a vital role in medical imaging, allowing a more efficient and effective diagnosis process. Diagnosis is usually difficult and inaccurate from low-resolution (LR) and noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of subsequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting OMP with ROMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization leads to a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is better than that of other state-of-the-art schemes.
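The regularized selection step that distinguishes ROMP from plain OMP can be sketched in a few lines of NumPy. This is a generic illustration of the published ROMP algorithm, not the authors' K-SVD training code; the stopping budget of twice the target sparsity is an assumption of this sketch.

```python
import numpy as np

def romp(D, y, sparsity, tol=1e-6):
    """Regularized Orthogonal Matching Pursuit (generic sketch).

    D: dictionary with unit-norm columns; y: signal; sparsity: target K.
    Returns a sparse coefficient vector x with D @ x ~= y.
    """
    n_atoms = D.shape[1]
    support, x = [], np.zeros(n_atoms)
    r = y.copy()
    # ROMP is allowed to select up to ~2K atoms in total
    while np.linalg.norm(r) > tol and len(support) < 2 * sparsity:
        u = np.abs(D.T @ r)
        cand = np.argsort(u)[::-1][:sparsity]      # K largest correlations
        v = u[cand]
        # regularization: among the sorted candidates, keep the group of
        # "comparable" magnitudes (max <= 2 * min) with maximal energy
        best_energy, best_group = -1.0, cand[:1]
        for i in range(len(cand)):
            j = i
            while j < len(cand) and v[j] >= v[i] / 2:
                j += 1
            energy = float(np.sum(v[i:j] ** 2))
            if energy > best_energy:
                best_energy, best_group = energy, cand[i:j]
        support = sorted(set(support) | set(best_group.tolist()))
        # least-squares fit on the enlarged support, then update residual
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x = np.zeros(n_atoms)
        x[support] = coef
        r = y - D @ x
    return x
```

The "comparable magnitudes" rule is what gives ROMP its robustness: instead of committing to the single largest correlation, it accepts a whole group of similar-strength atoms per iteration.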

  8. Parts-based stereoscopic image assessment by learning binocular manifold color visual properties

    NASA Astrophysics Data System (ADS)

    Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi

    2016-11-01

    Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, and color information is not sufficiently considered. Yet color is one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. More specifically, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows a parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method achieves much higher consistency with subjective evaluations than state-of-the-art SIQA methods.

  9. Image Fusion of CT and MR with Sparse Representation in NSST Domain

    PubMed Central

    Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan

    2017-01-01

    Multimodal image fusion techniques can integrate the information from different medical images to obtain an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. Fusing CT images with different MR modalities is studied in this paper. Firstly, the CT and MR images are both transformed into the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. Then the high-frequency components are merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation- (SR-) based approach. A dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation. PMID:29250134
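The absolute-maximum merge rule for the high-frequency components is simple to state in code. The sketch below assumes the NSST decomposition has already been performed by some library and operates on two corresponding subband arrays; it illustrates only the merge rule, not the authors' full pipeline.

```python
import numpy as np

def fuse_highpass_absmax(hA, hB):
    """Absolute-maximum rule for merging two high-frequency subbands:
    at each coefficient position, keep whichever source coefficient
    has the larger magnitude (i.e., the stronger local detail)."""
    return np.where(np.abs(hA) >= np.abs(hB), hA, hB)
```

The same coefficient-wise rule is applied to every high-frequency subband of every scale and direction before the inverse transform.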

  10. Image Fusion of CT and MR with Sparse Representation in NSST Domain.

    PubMed

    Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan; Xia, Shunren

    2017-01-01

    Multimodal image fusion techniques can integrate the information from different medical images to obtain an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. Fusing CT images with different MR modalities is studied in this paper. Firstly, the CT and MR images are both transformed into the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. Then the high-frequency components are merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation- (SR-) based approach. A dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation.

  11. Embedding intensity image into a binary hologram with strong noise resistant capability

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-11-01

    A digital hologram can be employed as a host image in image watermarking applications to protect information security. Past research demonstrates that a gray-level intensity image can be embedded into a binary Fresnel hologram by the error diffusion method or the bit-truncation coding method. However, the fidelity of the watermark image retrieved from the binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers serious noise contamination. Numerical simulation results show that the image quality of the retrieved intensity image with our proposed method is superior to that of the state-of-the-art work reported.

  12. A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging

    PubMed Central

    Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.

    2012-01-01

    Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine the two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, “CS+GRAPPA,” to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets, reconstructed each subset using CS, and averaged the results to get a final CS k-space reconstruction. We used both a standard CS reconstruction and an edge- and joint-sparsity-guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring. However, the blurring was effectively minimized using our new edge- and joint-sparsity-guided CS with two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality as compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique achieved the same Case-PDM scores as standard GRAPPA using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065
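The decomposition idea, splitting a coherent equidistant sampling pattern into random subsets whose per-subset CS reconstructions are then averaged, can be sketched as follows. The function names and index-array representation are illustrative assumptions; the CS reconstruction of each subset is left abstract.

```python
import numpy as np

def decompose_equidistant(sample_idx, n_subsets, seed=0):
    """Split a regular (equidistant) k-space sampling pattern into
    random, disjoint subsets, so that each subset on its own looks
    incoherent enough for a CS reconstruction."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(sample_idx)
    return [np.sort(s) for s in np.array_split(perm, n_subsets)]

def average_subset_recons(recons):
    """Average the per-subset CS reconstructions (e.g. k-space arrays)
    into a single estimate, which then calibrates the GRAPPA weights."""
    return np.mean(np.stack(recons), axis=0)
```

As the abstract notes, two subsets were the sweet spot: more subsets further decorrelate the sampling but each subset becomes too sparse, introducing blur.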

  13. WE-B-BRC-03: Risk in the Context of Medical Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samei, E.

    Prospective quality management techniques, long used by engineering and industry, have become a growing aspect of efforts to improve quality management and safety in healthcare. These techniques are of particular interest to medical physics as scope and complexity of clinical practice continue to grow, thus making the prescriptive methods we have used harder to apply and potentially less effective for our interconnected and highly complex healthcare enterprise, especially in imaging and radiation oncology. An essential part of most prospective methods is the need to assess the various risks associated with problems, failures, errors, and design flaws in our systems. We therefore begin with an overview of risk assessment methodologies used in healthcare and industry and discuss their strengths and weaknesses. The rationale for use of process mapping, failure modes and effects analysis (FMEA) and fault tree analysis (FTA) by TG-100 will be described, as well as suggestions for the way forward. This is followed by discussion of radiation oncology specific risk assessment strategies and issues, including the TG-100 effort to evaluate IMRT and other ways to think about risk in the context of radiotherapy. Incident learning systems, local as well as the ASTRO/AAPM ROILS system, can also be useful in the risk assessment process. Finally, risk in the context of medical imaging will be discussed. Radiation (and other) safety considerations, as well as lack of quality and certainty, all contribute to the potential risks associated with suboptimal imaging. The goal of this session is to summarize a wide variety of risk analysis methods and issues to give the medical physicist access to tools which can better define risks (and their importance) which we work to mitigate with both prescriptive and prospective risk-based quality management methods.
Learning Objectives: Description of risk assessment methodologies used in healthcare and industry; discussion of radiation oncology-specific risk assessment strategies and issues; evaluation of risk in the context of medical imaging and image quality. E. Samei: Research grants from Siemens and GE.

  14. Propagation of registration uncertainty during multi-fraction cervical cancer brachytherapy

    NASA Astrophysics Data System (ADS)

    Amir-Khalili, A.; Hamarneh, G.; Zakariaee, R.; Spadinger, I.; Abugharbieh, R.

    2017-10-01

    Multi-fraction cervical cancer brachytherapy is a form of image-guided radiotherapy that heavily relies on 3D imaging during treatment planning, delivery, and quality control. In this context, deformable image registration can increase the accuracy of dosimetric evaluations, provided that one can account for the uncertainties associated with the registration process. To enable such capability, we propose a mathematical framework that first estimates the registration uncertainty and subsequently propagates the effects of the computed uncertainties from the registration stage through to the visualizations, organ segmentations, and dosimetric evaluations. To ensure the practicality of our proposed framework in real world image-guided radiotherapy contexts, we implemented our technique via a computationally efficient and generalizable algorithm that is compatible with existing deformable image registration software. In our clinical context of fractionated cervical cancer brachytherapy, we perform a retrospective analysis on 37 patients and present evidence that our proposed methodology for computing and propagating registration uncertainties may be beneficial during therapy planning and quality control. Specifically, we quantify and visualize the influence of registration uncertainty on dosimetric analysis during the computation of the total accumulated radiation dose on the bladder wall. We further show how registration uncertainty may be leveraged into enhanced visualizations that depict the quality of the registration and highlight potential deviations from the treatment plan prior to the delivery of radiation treatment. Finally, we show that we can improve the transfer of delineated volumetric organ segmentation labels from one fraction to the next by encoding the computed registration uncertainties into the segmentation labels.

  15. An image-guided tool to prevent hospital acquired infections

    NASA Astrophysics Data System (ADS)

    Nagy, Melinda; Szilágyi, László; Lehotsky, Ákos; Haidegger, Tamás; Benyó, Balázs

    2011-03-01

    Hospital Acquired Infections (HAI) represent the fourth leading cause of death in the United States, and claim hundreds of thousands of lives annually in the rest of the world. This paper presents a novel low-cost mobile device, called Stery-Hand, that helps to avoid HAI by improving hand hygiene control through an objective evaluation of the quality of hand washing. The use of the system is intuitive: after hand washing with a soap mixed with UV reflective powder, the skin appears brighter under UV illumination on the disinfected surfaces. Washed hands are inserted into the Stery-Hand box, where a digital image is taken under UV lighting. Automated image processing algorithms are employed in three steps to evaluate the quality of hand washing. First, the contour of the hand is extracted in order to distinguish the hand from the background. Next, a semi-supervised clustering algorithm classifies the pixels of the hand into three groups, corresponding to clean, partially clean and dirty areas. The clustering algorithm is derived from the histogram-based quick fuzzy c-means approach, using a priori information extracted from reference images evaluated by experts. Finally, the identified areas are adjusted to suppress shading effects, and quantified in order to give a verdict on hand disinfection quality. The proposed methodology was validated through tests using hundreds of images recorded in our laboratory. The proposed system was found robust and accurate, producing correct estimations for over 98% of the test cases. Stery-Hand may be employed in general practice, and it may also serve educational purposes.
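The histogram-based clustering step can be illustrated with a minimal sketch: cluster the 256 gray levels, weighted by their histogram counts, rather than every pixel, which is what makes the "quick" variant fast. This is a generic quantile-initialized fuzzy c-means, not the authors' semi-supervised variant with expert-derived priors.

```python
import numpy as np

def histogram_fcm(image, n_clusters=3, m=2.0, n_iter=50):
    """Histogram-accelerated fuzzy c-means on gray levels (sketch).

    Clusters the 256 gray levels, weighted by their histogram counts,
    instead of every pixel; pixels then inherit the hard label of their
    gray level. Centers start at quantiles of the pixel values.
    """
    levels = np.arange(256, dtype=float)
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    centers = np.quantile(image, np.linspace(0.1, 0.9, n_clusters)).astype(float)
    for _ in range(n_iter):
        d = np.abs(levels[None, :] - centers[:, None]) + 1e-9   # (c, 256)
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=0)                                   # fuzzy memberships
        um = u ** m
        centers = (um * hist * levels).sum(axis=1) / (um * hist).sum(axis=1)
    labels = np.argmax(u[:, image], axis=0)   # hard label per pixel
    return labels, centers
```

With three clusters, the three centers would correspond to the dirty, partially clean and clean intensity modes of the UV image.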

  16. Preparation of scanning tunneling microscopy tips using pulsed alternating current etching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valencia, Victor A.; Thaker, Avesh A.; Derouin, Jonathan

    An electrochemical method using pulsed alternating current etching (PACE) to produce atomically sharp scanning tunneling microscopy (STM) tips is presented. An Arduino Uno microcontroller was used to control the number and duration of the alternating current (AC) pulses, allowing for ready optimization of the procedures for both Pt:Ir and W tips using a single apparatus. W tips prepared using constant and pulsed AC power were compared. Tips fashioned using PACE were sharper than those etched with continuous AC power alone. Pt:Ir tips were prepared with an initial coarse etching stage using continuous AC power followed by fine etching using PACE. The number and potential of the finishing AC pulses were varied, and scanning electron microscope imaging was used to compare the results. Finally, tip quality using the optimized procedures was verified by UHV-STM imaging. With PACE, at least 70% of the W tips and 80% of the Pt:Ir tips were of sufficiently high quality to obtain atomically resolved images of HOPG or Ni(111).

  17. Automated optical inspection of liquid crystal display anisotropic conductive film bonding

    NASA Astrophysics Data System (ADS)

    Ni, Guangming; Du, Xiaohui; Liu, Lin; Zhang, Jing; Liu, Juanxiu; Liu, Yong

    2016-10-01

    Anisotropic conductive film (ACF) bonding is widely used in the liquid crystal display (LCD) industry. It implements the circuit connection between screens and flexible printed circuits or integrated circuits. Conductive microspheres in ACF are a key factor influencing LCD quality, because the quantity and shape deformation rate of the conductive microspheres affect the interconnection resistance. Although this issue has been studied extensively by prior work, quick and accurate methods to inspect the quality of ACF bonding are still missing in the actual production process. We propose a method to inspect ACF bonding effectively using automated optical inspection. The method has three steps. First, images of the detection zones are acquired using a differential interference contrast (DIC) imaging system. Second, the conductive microspheres and their shape deformation rate are identified through quantitative analysis of the characteristics of the DIC images. Finally, ACF bonding is inspected using a back-propagation-trained neural network. The results show that the miss rate is lower than 0.1% and the false inspection rate is lower than 0.05%.

  18. New procedures to evaluate visually lossless compression for display systems

    NASA Astrophysics Data System (ADS)

    Stolitzka, Dale F.; Schelkens, Peter; Bruylants, Tim

    2017-09-01

    Visually lossless image coding in isochronous display streaming or plesiochronous networks reduces link complexity and power consumption and increases available link bandwidth. A new set of codecs developed within the last four years promises a new level of coding quality, but requires evaluation techniques that are sufficiently sensitive to the small artifacts or color variations induced by this new breed of codecs. This paper begins with a summary of the new ISO/IEC 29170-2, a procedure for evaluation of lossless coding, and reports new work by JPEG to extend the procedure in two important ways: for HDR content, and for evaluating the differences between still images, panning images and image sequences. ISO/IEC 29170-2 relies on processing test images through a well-defined process chain for subjective, forced-choice psychophysical experiments. The procedure sets an acceptable quality level equal to one just-noticeable difference. Traditional image and video coding evaluation techniques, such as those used for television evaluation, have not proven sufficiently sensitive to the small artifacts that may be induced by this breed of codecs. In 2015, JPEG received new requirements to expand evaluation of visually lossless coding for high dynamic range images, slowly moving images (i.e., panning), and image sequences. These requirements are the basis for the new amendments of the ISO/IEC 29170-2 procedures described in this paper. These amendments promise to be highly useful for the new content in television and cinema mezzanine networks. The amendments passed the final ballot in April 2017 and are on track to be published in 2018.

  19. The Impact of Quality Assurance Assessment on Diffusion Tensor Imaging Outcomes in a Large-Scale Population-Based Cohort

    PubMed Central

    Roalf, David R.; Quarmley, Megan; Elliott, Mark A.; Satterthwaite, Theodore D.; Vandekar, Simon N.; Ruparel, Kosha; Gennatas, Efstathios D.; Calkins, Monica E.; Moore, Tyler M.; Hopson, Ryan; Prabhakaran, Karthik; Jackson, Chad T.; Verma, Ragini; Hakonarson, Hakon; Gur, Ruben C.; Gur, Raquel E.

    2015-01-01

    Background Diffusion tensor imaging (DTI) is applied in investigation of brain biomarkers for neurodevelopmental and neurodegenerative disorders. However, the quality of DTI measurements, as with other neuroimaging techniques, is susceptible to several confounding factors (e.g. motion, eddy currents), which have only recently come under scrutiny. These confounds are especially relevant in adolescent samples where data quality may be compromised in ways that confound interpretation of maturation parameters. The current study aims to leverage DTI data from the Philadelphia Neurodevelopmental Cohort (PNC), a sample of 1,601 youths aged 8–21 who underwent neuroimaging, to: 1) establish quality assurance (QA) metrics for the automatic identification of poor DTI image quality; 2) examine the performance of these QA measures in an external validation sample; 3) document the influence of data quality on developmental patterns of typical DTI metrics. Methods All diffusion-weighted images were acquired on the same scanner. Visual QA was performed on all subjects completing DTI; images were manually categorized as Poor, Good, or Excellent. Four image quality metrics were automatically computed and used to predict manual QA status: Mean voxel intensity outlier count (MEANVOX), Maximum voxel intensity outlier count (MAXVOX), mean relative motion (MOTION) and temporal signal-to-noise ratio (TSNR). Classification accuracy for each metric was calculated as the area under the receiver-operating characteristic curve (AUC). A threshold was generated for each measure that best differentiated visual QA status and applied in a validation sample. The effects of data quality on sensitivity to expected age effects in this developmental sample were then investigated using the traditional MRI diffusion metrics: fractional anisotropy (FA) and mean diffusivity (MD). Finally, our method of QA is compared to DTIPrep. 
Results TSNR (AUC=0.94) best differentiated Poor data from Good and Excellent data. MAXVOX (AUC=0.88) best differentiated Good from Excellent DTI data. At the optimal threshold, 88% of Poor data and 91% of Good/Excellent data were correctly identified. Use of these thresholds on a validation dataset (n=374) indicated high accuracy. In the validation sample, 83% of Poor data and 94% of Excellent data were identified using thresholds derived from the training sample. Both FA and MD were affected by the inclusion of poor data in an analysis of age, sex and race in a matched comparison sample. In addition, we show that the inclusion of poor data results in significant attenuation of the correlation between diffusion metrics (FA and MD) and age during a critical neurodevelopmental period. We find higher correspondence between our QA method and DTIPrep for Poor data, but we find our method to be more robust for apparently high-quality images. Conclusion Automated QA of DTI can facilitate large-scale, high-throughput quality assurance by reliably identifying both scanner and subject induced imaging artifacts. The results present a practical example of the confounding effects of artifacts on DTI analysis in a large population-based sample, and suggest that estimates of data quality should not only be reported but also accounted for in data analysis, especially in studies of development. PMID:26520775
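Of the four automated metrics, TSNR is the simplest to reproduce. The sketch below computes a mean-over-std temporal SNR for a 4D diffusion series; this is a simplified stand-in for the study's metric, which would additionally restrict the average to brain voxels and compare the result against the AUC-derived threshold.

```python
import numpy as np

def temporal_snr(dwi):
    """Temporal signal-to-noise ratio for a 4D series (x, y, z, volumes):
    per-voxel mean over volumes divided by std over volumes, averaged
    across voxels. Low TSNR flags acquisitions likely to be rated Poor."""
    mean = dwi.mean(axis=-1)
    std = dwi.std(axis=-1)
    voxelwise = np.divide(mean, std, out=np.zeros_like(mean), where=std > 0)
    return float(voxelwise[std > 0].mean())
```

In a QA pipeline, scans whose TSNR falls below the threshold learned on the training sample would be routed to manual review rather than silently included in the analysis.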

  20. Ultrahigh resolution retinal imaging by visible light OCT with longitudinal achromatization

    PubMed Central

    Chong, Shau Poh; Zhang, Tingwei; Kho, Aaron; Bernucci, Marcel T.; Dubra, Alfredo; Srinivasan, Vivek J.

    2018-01-01

    Chromatic aberrations are an important design consideration in high resolution, high bandwidth, refractive imaging systems that use visible light. Here, we present a fiber-based spectral/Fourier domain, visible light OCT ophthalmoscope corrected for the average longitudinal chromatic aberration (LCA) of the human eye. Analysis of complex speckles from in vivo retinal images showed that achromatization resulted in a speckle autocorrelation function that was ~20% narrower in the axial direction, but unchanged in the transverse direction. In images from the improved, achromatized system, the separation between Bruch’s membrane (BM), the retinal pigment epithelium (RPE), and the outer segment tips clearly emerged across the entire 6.5 mm field-of-view, enabling segmentation and morphometry of BM and the RPE in a human subject. Finally, cross-sectional images depicted distinct inner retinal layers with high resolution. Thus, with chromatic aberration compensation, visible light OCT can achieve volume resolutions and retinal image quality that matches or exceeds ultrahigh resolution near-infrared OCT systems with no monochromatic aberration compensation. PMID:29675296

  1. Low-illumination image denoising method for wide-area search of nighttime sea surface

    NASA Astrophysics Data System (ADS)

    Song, Ming-zhu; Qu, Hong-song; Zhang, Gui-xiang; Tao, Shu-ping; Jin, Guang

    2018-05-01

    In order to suppress the complex mixed noise in low-illumination images for wide-area search of the nighttime sea surface, a model based on total variation (TV) and split Bregman is proposed in this paper. A fidelity term based on the L1 norm and a fidelity term based on the L2 norm are designed considering the differences between noise types, and a regularization mixing first-order and second-order TV is designed to balance the influence of detail information such as texture and edges in sea-surface images. The final detection result is obtained by combining, through the wavelet transform, the high-frequency component solved under the L1 norm and the low-frequency component solved under the L2 norm. The experimental results show that the proposed denoising model performs very well on artificially degraded and real low-illumination images, and the image quality assessment indices of the denoised images are superior to those of the compared models.
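The effect of the TV regularization term can be illustrated with a much simpler solver than the paper's split-Bregman scheme: plain gradient descent on a smoothed isotropic TV energy with a single L2 fidelity term. All parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np

def tv_denoise(f, lam=0.15, step=0.1, n_iter=300, eps=1e-3):
    """Gradient-descent sketch of TV denoising:
    minimize 0.5*||u - f||^2 + lam * sum(sqrt(|grad u|^2 + eps)).
    A simple stand-in for the paper's split-Bregman solver with mixed
    L1/L2 fidelity; it shows only the TV regularization idea."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        # forward differences (Neumann boundary) and smoothed TV field
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # backward-difference divergence (adjoint of the forward gradient)
        dx = px.copy(); dx[:, 1:] -= px[:, :-1]
        dy = py.copy(); dy[1:, :] -= py[:-1, :]
        u -= step * ((u - f) - lam * (dx + dy))
    return u
```

TV's appeal for sea-surface imagery is visible even in this crude version: noise in flat regions is averaged away while sharp intensity steps (edges) survive, because the penalty depends on gradient magnitude rather than its square.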

  2. Ultrasonography in diagnosing chronic pancreatitis: New aspects

    PubMed Central

    Dimcevski, Georg; Erchinger, Friedemann G; Havre, Roald; Gilja, Odd Helge

    2013-01-01

    The course and outcome are poor for most patients with pancreatic diseases. Advances in pancreatic imaging are important for the detection of pancreatic diseases at early stages. Ultrasonography as a diagnostic tool has undergone, virtually speaking, a technical revolution in medical imaging in the new millennium. It has not only become the preferred method for first-line imaging, but is also used increasingly to clarify the interpretation of other imaging modalities and support efficient clinical decisions. We review ultrasonography modalities, focusing on advanced pancreatic imaging and its potential to substantially improve the diagnosis of pancreatic diseases at earlier stages. In the first section, we describe scanning techniques and examination protocols. Their consequences for image quality and the ability to obtain complete and detailed visualization of the pancreas are discussed. In the second section we outline ultrasonographic characteristics of pancreatic diseases, with emphasis on chronic pancreatitis. Finally, new developments in ultrasonography of the pancreas, such as contrast-enhanced ultrasound and elastography, are highlighted. PMID:24259955

  3. The ship edge feature detection based on high and low threshold for remote sensing image

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Shengyang

    2018-05-01

    In this paper, a method based on high and low thresholds is proposed to detect ship edge features, addressing the low accuracy caused by noise. We analyze the relationship between the human visual system and target features, and determine the ship target by detecting its edge features. Firstly, a second-order differential method is used to enhance image quality. Secondly, to improve the edge operator, we introduce high and low thresholds to enhance the contrast between edge and non-edge points; treating edges as the foreground and non-edges as the background, image segmentation is used to achieve edge detection and remove false edges. Finally, the edge features are described based on the detection result, and the ship target is determined. The experimental results show that the proposed method can effectively reduce the number of false edges in edge detection and has high accuracy for remote sensing ship edge detection.
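The high/low double-threshold idea is essentially hysteresis thresholding as used in the Canny detector: pixels above the high threshold seed a region growing that keeps weaker edge pixels only when they connect to a seed, which is what suppresses isolated false edges. A minimal sketch (generic, not the authors' implementation):

```python
import numpy as np
from collections import deque

def hysteresis_threshold(grad, low, high):
    """High/low (hysteresis) thresholding of a gradient-magnitude map:
    pixels >= high are seed edges; pixels >= low are kept only if
    8-connected to a seed, so isolated noise responses are discarded."""
    strong = grad >= high
    weak = grad >= low
    edges = strong.copy()
    q = deque(zip(*np.nonzero(strong)))
    h, w = grad.shape
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and weak[ny, nx] and not edges[ny, nx]:
                    edges[ny, nx] = True
                    q.append((ny, nx))
    return edges
```

A weak response adjacent to a strong ship edge is kept, while an equally weak but isolated response (sea clutter) is rejected, which is exactly the false-edge suppression the abstract describes.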

  4. New generation ICG-based contrast agents for ultrasound-switchable fluorescence imaging

    PubMed Central

    Yu, Shuai; Cheng, Bingbing; Yao, Tingfeng; Xu, Cancan; Nguyen, Kytai T.; Hong, Yi; Yuan, Baohong

    2016-01-01

    Recently, we developed a new technology, ultrasound-switchable fluorescence (USF), for high-resolution imaging in centimeter-deep tissues via fluorescence contrast. The success of USF imaging relies heavily on excellent contrast agents. ICG-encapsulated poly(N-isopropylacrylamide) nanoparticles (ICG-NPs) are among the most successful families of near-infrared (NIR) USF contrast agents. However, the first-generation ICG-NPs have a short shelf life (<1 month). This work significantly increases the shelf life of the new-generation ICG-NPs (>6 months). In addition, we have conjugated hydroxyl or carboxyl functional groups onto the ICG-NPs for future molecular targeting. Finally, we have demonstrated the effect of the temperature-switching threshold (Tth) and the background temperature (TBG) on the quality of USF images. We estimate that the Tth of the ICG-NPs should be controlled at ~38–40 °C (slightly above the body temperature of 37 °C) for future in vivo USF imaging. Addressing these challenges further reduces the application barriers of USF imaging. PMID:27775014

  5. High-quality 3D correction of ring and radiant artifacts in flat panel detector-based cone beam volume CT imaging

    NASA Astrophysics Data System (ADS)

    Abu Anas, Emran Mohammad; Kim, Jae Gon; Lee, Soo Yeol; Kamrul Hasan, Md

    2011-10-01

    The use of an x-ray flat panel detector is increasingly becoming popular in 3D cone beam volume CT machines. Due to the deficient semiconductor array manufacturing process, the cone beam projection data are often corrupted by different types of abnormalities, which cause severe ring and radiant artifacts in a cone beam reconstruction image, and as a result, the diagnostic image quality is degraded. In this paper, a novel technique is presented for the correction of error in the 2D cone beam projections due to abnormalities often observed in 2D x-ray flat panel detectors. Template images are derived from the responses of the detector pixels using their statistical properties and then an effective non-causal derivative-based detection algorithm in 2D space is presented for the detection of defective and mis-calibrated detector elements separately. An image inpainting-based 3D correction scheme is proposed for the estimation of responses of defective detector elements, and the responses of the mis-calibrated detector elements are corrected using the normalization technique. For real-time implementation, a simplification of the proposed off-line method is also suggested. Finally, the proposed algorithms are tested using different real cone beam volume CT images and the experimental results demonstrate that the proposed methods can effectively remove ring and radiant artifacts from cone beam volume CT images compared to other reported techniques in the literature.
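    The detection step above relies on the statistics of detector responses: a defective or mis-calibrated column shows up as an outlier against its neighbors. A toy version of that idea, flagging columns of a flat-field projection whose mean deviates abnormally (the robust-sigma threshold is an assumed value, not the paper's algorithm):

```python
import numpy as np

def detect_defective_columns(projection, thresh=5.0):
    """Flag detector columns whose mean response is a statistical outlier.

    Columns of a flat-field projection should have similar means; a column
    whose mean lies more than `thresh` robust sigmas (MAD-based) from the
    median column mean is flagged as defective or mis-calibrated.
    """
    col_mean = projection.mean(axis=0)
    med = np.median(col_mean)
    mad = np.median(np.abs(col_mean - med)) * 1.4826  # robust sigma estimate
    return np.abs(col_mean - med) > thresh * mad
```

    The flagged columns would then be handed to a correction step (inpainting for defective elements, renormalization for mis-calibrated ones), as the abstract describes.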

  6. 3D Point Cloud Model Colorization by Dense Registration of Digital Images

    NASA Astrophysics Data System (ADS)

    Crombez, N.; Caron, G.; Mouaddib, E.

    2015-02-01

    Architectural heritage is a historic and artistic property that has to be protected, preserved, restored and shown to the public. Modern tools like 3D laser scanners are increasingly used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera, which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from arbitrary viewpoints on point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image's intensities under a photometric visual and virtual servoing (VVS) framework. The camera's extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera that took the digital image. Finally, once the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is proven in simulation and in real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
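    The final colorization step amounts to projecting each 3D point with the estimated camera model and sampling the image. A minimal sketch with a plain pinhole camera and nearest-pixel sampling (visibility/occlusion handling, which a real pipeline needs, is omitted; `f`, `cx`, `cy` are hypothetical intrinsics):

```python
import numpy as np

def colorize_points(points, image, f, cx, cy):
    """Assign image colors to 3D points expressed in camera coordinates.

    Each point is projected with a pinhole model and takes the color of
    the nearest pixel; points projecting outside the frame stay
    uncolored (NaN).
    """
    h, w = image.shape[:2]
    colors = np.full((len(points), 3), np.nan)
    u = np.round(cx + f * points[:, 0] / points[:, 2]).astype(int)
    v = np.round(cy + f * points[:, 1] / points[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors[inside] = image[v[inside], u[inside]]
    return colors
```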

  7. Imaging-based logics for ornamental stone quality chart definition

    NASA Astrophysics Data System (ADS)

    Bonifazi, Giuseppe; Gargiulo, Aldo; Serranti, Silvia; Raspi, Costantino

    2007-02-01

    Ornamental stone products are commercially classified on the market according to several factors related both to intrinsic lithologic characteristics and to their visible pictorial attributes. Sometimes the latter prevail in the definition and assessment of quality criteria. Pictorial attributes are in any case also influenced by the working actions performed and the tools selected to realize the final manufactured stone product. Stone surface finishing is a critical task because it can enhance certain aesthetic features of the stone itself. The study was aimed at developing an innovative set of methodologies and techniques able to quantify the aesthetic quality level of stone products, taking into account both the physical and the aesthetic characteristics of the stones. In particular, the degree of polishing of the stone surfaces and the presence of defects were evaluated by applying digital image processing strategies. Morphological and color parameters were extracted by developing specific software architectures. Results showed that the proposed approaches make it possible to quantify the degree of polishing and to identify surface defects related to the intrinsic characteristics of the stone and/or the working actions performed.

  8. Improving Vintage Seismic Data Quality through Implementation of Advance Processing Techniques

    NASA Astrophysics Data System (ADS)

    Latiff, A. H. Abdul; Boon Hong, P. G.; Jamaludin, S. N. F.

    2017-10-01

    It is essential in petroleum exploration to have high-resolution subsurface images, both vertically and horizontally, to uncover new geological and geophysical aspects of the subsurface. Past lack of success may have stemmed from poor imaging quality, which led to inaccurate analysis and interpretation. In this work, we re-processed the existing seismic dataset with an emphasis on two objectives: first, to produce better 3D seismic data quality, with full retention of relative amplitudes, and to significantly reduce seismic and structural uncertainty; second, to facilitate further prospect delineation through enhanced data resolution, fault definition and event continuity, particularly in the syn-rift section and basement cover contacts, and in turn to better understand the geology of the subsurface, especially the distribution of the fluvial and channel sands. By adding recent, state-of-the-art broadband processing techniques such as source and receiver de-ghosting, high-density velocity analysis and shallow-water de-multiple, the final results showed better overall reflection detail and frequency content in specific target zones, particularly in the deeper section.

  9. Adaptive color demosaicing and false color removal

    NASA Astrophysics Data System (ADS)

    Guarnera, Mirko; Messina, Giuseppe; Tomaselli, Valeria

    2010-04-01

    Color interpolation solutions drastically influence the quality of the whole image generation pipeline, so they must guarantee the rendering of high quality pictures by avoiding typical artifacts such as blurring, zipper effects, and false colors. Moreover, demosaicing should avoid emphasizing typical artifacts of real sensors data, such as noise and green imbalance effect, which would be further accentuated by the subsequent steps of the processing pipeline. We propose a new adaptive algorithm that decides the interpolation technique to apply to each pixel, according to its neighborhood analysis. Edges are effectively interpolated through a directional filtering approach that interpolates the missing colors, selecting the suitable filter depending on edge orientation. Regions close to edges are interpolated through a simpler demosaicing approach. Thus flat regions are identified and low-pass filtered to eliminate some residual noise and to minimize the annoying green imbalance effect. Finally, an effective false color removal algorithm is used as a postprocessing step to eliminate residual color errors. The experimental results show how sharp edges are preserved, whereas undesired zipper effects are reduced, improving the edge resolution itself and obtaining superior image quality.
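    The edge-directed interpolation described above can be illustrated on the green channel of an RGGB Bayer mosaic: at each red/blue site, interpolate along the direction (horizontal or vertical) with the smaller green gradient. This is a generic textbook sketch, not the paper's full adaptive pipeline:

```python
import numpy as np

def interpolate_green(bayer):
    """Edge-directed green interpolation for an RGGB Bayer mosaic (toy sketch).

    At each red/blue site the horizontal and vertical green gradients are
    compared, and the two neighbors along the flatter direction are
    averaged, which preserves edges better than plain bilinear
    interpolation. Border pixels are left untouched for brevity.
    """
    h, w = bayer.shape
    green = bayer.astype(float).copy()
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if (r + c) % 2 == 0:  # red/blue site in RGGB: green is missing
                dh = abs(green[r, c - 1] - green[r, c + 1])
                dv = abs(green[r - 1, c] - green[r + 1, c])
                if dh < dv:
                    green[r, c] = (green[r, c - 1] + green[r, c + 1]) / 2
                else:
                    green[r, c] = (green[r - 1, c] + green[r + 1, c]) / 2
    return green
```

    Across a vertical intensity edge the vertical gradient is near zero, so the filter averages vertically and avoids smearing the edge, the zipper artifact the abstract targets.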

  10. First results from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)

    NASA Technical Reports Server (NTRS)

    Vane, Gregg

    1987-01-01

    After engineering flights aboard the NASA U-2 research aircraft in the winter of 1986 to 1987 and spring of 1987, extensive data collection across the United States was begun with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) in the summer of 1987 in support of a NASA data evaluation and technology assessment program. This paper presents some of the first results obtained from AVIRIS. Examples of spectral imagery acquired over Mountain View and Mono Lake, California, and the Cuprite Mining District in western Nevada are presented. Sensor performance and data quality are described, and in the final section of this paper, plans for the future are discussed.

  11. TESS Data Processing and Quick-look Pipeline

    NASA Astrophysics Data System (ADS)

    Fausnaugh, Michael; Huang, Xu; Glidden, Ana; Guerrero, Natalia; TESS Science Office

    2018-01-01

    We describe the data analysis procedures and pipelines for the Transiting Exoplanet Survey Satellite (TESS). We briefly review the processing pipeline developed and implemented by the Science Processing Operations Center (SPOC) at NASA Ames, including pixel/full-frame image calibration, photometric analysis, pre-search data conditioning, transiting planet search, and data validation. We also describe data-quality diagnostic analyses and photometric performance assessment tests. Finally, we detail a "quick-look pipeline" (QLP) that has been developed by the MIT branch of the TESS Science Office (TSO) to provide a fast and adaptable routine to search for planet candidates in the 30 minute full-frame images.

  12. Characterization of fiber diameter using image analysis

    NASA Astrophysics Data System (ADS)

    Baheti, S.; Tunak, M.

    2017-10-01

    Due to their high surface area and porosity, applications of nanofibers have increased in recent years. In the production process, determination of the average fiber diameter and fiber orientation is crucial for quality assessment. The objective of the present study was to compare the relative performance of different methods discussed in the literature for estimating fiber diameter. In this work, existing automated fiber diameter analysis approaches from the literature were implemented and validated on simulated images of known fiber diameter. Finally, all methods were compared for reliable and accurate estimation of fiber diameter in electrospun nanofiber membranes, based on the obtained mean and standard deviation.
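    One standard automated approach of the kind the study compares (this is a generic illustration, not necessarily one of the paper's methods) estimates diameter from the Euclidean distance transform of a binary fiber mask: on the fiber ridge, the distance value is the local radius.

```python
import numpy as np
from scipy import ndimage

def mean_fiber_diameter(binary):
    """Estimate mean fiber diameter (in pixels) from a binary fiber mask.

    The Euclidean distance transform gives, on the fiber ridge, the local
    radius; twice the mean ridge value approximates the mean diameter,
    up to roughly one pixel of discretization error.
    """
    dist = ndimage.distance_transform_edt(binary)
    # Ridge pixels: local maxima of the distance map inside the fibers.
    local_max = ndimage.maximum_filter(dist, size=3)
    ridge = binary & (dist == local_max) & (dist > 0)
    return 2 * dist[ridge].mean()
```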

  13. Airborne digital-image data for monitoring the Colorado River corridor below Glen Canyon Dam, Arizona, 2009 - Image-mosaic production and comparison with 2002 and 2005 image mosaics

    USGS Publications Warehouse

    Davis, Philip A.

    2012-01-01

    Airborne digital-image data were collected for the Arizona part of the Colorado River ecosystem below Glen Canyon Dam in 2009. These four-band image data are similar in wavelength band (blue, green, red, and near infrared) and spatial resolution (20 centimeters) to image collections of the river corridor in 2002 and 2005. These periodic image collections are used by the Grand Canyon Monitoring and Research Center (GCMRC) of the U.S. Geological Survey to monitor the effects of Glen Canyon Dam operations on the downstream ecosystem. The 2009 collection used the latest model of the Leica ADS40 airborne digital sensor (the SH52), which uses a single optic for all four bands and collects and stores band radiance in 12-bits, unlike the image sensors that GCMRC used in 2002 and 2005. This study examined the performance of the SH52 sensor, on the basis of the collected image data, and determined that the SH52 sensor provided superior data relative to the previously employed sensors (that is, an early ADS40 model and Zeiss Imaging's Digital Mapping Camera) in terms of band-image registration, dynamic range, saturation, linearity to ground reflectance, and noise level. The 2009 image data were provided as orthorectified segments of each flightline to constrain the size of the image files; each river segment was covered by 5 to 6 overlapping, linear flightlines. Most flightline images for each river segment had some surface-smear defects and some river segments had cloud shadows, but these two conditions did not generally coincide in the majority of the overlapping flightlines for a particular river segment. Therefore, the final image mosaic for the 450-kilometer (km)-long river corridor required careful selection and editing of numerous flightline segments (a total of 513 segments, each 3.2 km long) to minimize surface defects and cloud shadows. The final image mosaic has a total of only 3 km of surface defects. 
The final image mosaic for the western end of the corridor has areas of cloud shadow because of persistent inclement weather during data collection. This report presents visual comparisons of the 2002, 2005, and 2009 digital-image mosaics for various physical, biological, and cultural resources within the Colorado River ecosystem. All of the comparisons show the superior quality of the 2009 image data. In fact, the 2009 four-band image mosaic is perhaps the best image dataset that exists for the entire Arizona part of the Colorado River.

  14. Technical advances of interventional fluoroscopy and flat panel image receptor.

    PubMed

    Lin, Pei-Jan Paul

    2008-11-01

    In the past decade, various radiation reducing devices and control circuits have been implemented on fluoroscopic imaging equipment. Because of the potential for lengthy fluoroscopic procedures in interventional cardiovascular angiography, these devices and control circuits have been developed for the cardiac catheterization laboratories and interventional angiography suites. Additionally, fluoroscopic systems equipped with image intensifiers have benefited from technological advances in x-ray tube, x-ray generator, and spectral shaping filter technologies. The high heat capacity x-ray tube, the medium frequency inverter generator with high performance switching capability, and the patient dose reduction spectral shaping filter had already been implemented on the image intensified fluoroscopy systems. These three underlying technologies together with the automatic dose rate and image quality (ADRIQ) control logic allow patients undergoing cardiovascular angiography procedures to benefit from "lower patient dose" with "high image quality." While photoconductor (or phosphor plate) x-ray detectors and signal capture thin film transistor (TFT) and charge coupled device (CCD) arrays are analog in nature, the advent of the flat panel image receptor allowed for fluoroscopy procedures to become more streamlined. With the analog-to-digital converter built into the data lines, the flat panel image receptor appears to become a digital device. While the transition from image intensified fluoroscopy systems to flat panel image receptor fluoroscopy systems is part of the on-going "digitization of imaging," the value of a flat panel image receptor may have to be evaluated with respect to patient dose, image quality, and clinical application capabilities. The advantage of flat panel image receptors has yet to be fully explored. 
For instance, the flat panel image receptor has its disadvantages as compared to the image intensifiers; the cost of the equipment is probably the most obvious. On the other hand, due to its wide dynamic range and linearity, lowering of patient dose beyond current practice could be achieved through the calibration process of the flat panel input dose rate being set to, for example, one half or less of current values. In this article various radiation saving devices and control circuits are briefly described. This includes various types of fluoroscopic systems designed to strive for reduction of patient exposure with the application of spectral shaping filters. The main thrust is to understand the ADRIQ control logic, through equipment testing, as it relates to clinical applications, and to show how this ADRIQ control logic "ties" those three technological advancements together to provide low radiation dose to the patient with high quality fluoroscopic images. Finally, rotational angiography with computed tomography (CT) and three dimensional (3-D) images utilizing flat panel technology will be reviewed as they pertain to diagnostic imaging in cardiovascular disease.

  15. Image quality, threshold contrast and mean glandular dose in CR mammography

    NASA Astrophysics Data System (ADS)

    Jakubiak, R. R.; Gamba, H. R.; Neves, E. B.; Peixoto, J. E.

    2013-09-01

    In many countries, computed radiography (CR) systems represent the majority of equipment used in digital mammography. This study presents a method for optimizing image quality and dose in CR mammography of patients with breast thicknesses between 45 and 75 mm. Initially, clinical images of 67 patients (group 1) were analyzed by three experienced radiologists, reporting about anatomical structures, noise and contrast in low and high pixel value areas, and image sharpness and contrast. Exposure parameters (kV, mAs and target/filter combination) used in the examinations of these patients were reproduced to determine the contrast-to-noise ratio (CNR) and mean glandular dose (MGD). The parameters were also used to radiograph a CDMAM (version 3.4) phantom (Artinis Medical Systems, The Netherlands) for image threshold contrast evaluation. After that, different breast thicknesses were simulated with polymethylmethacrylate layers and various sets of exposure parameters were used in order to determine optimal radiographic parameters. For each simulated breast thickness, optimal beam quality was defined as giving a target CNR to reach the threshold contrast of CDMAM images for acceptable MGD. These results were used for adjustments in the automatic exposure control (AEC) by the maintenance team. Using optimized exposure parameters, clinical images of 63 patients (group 2) were evaluated as described above. Threshold contrast, CNR and MGD for such exposure parameters were also determined. Results showed that the proposed optimization method was effective for all breast thicknesses studied in phantoms. The best result was found for breasts of 75 mm. While in group 1 there was no detection of the 0.1 mm critical diameter detail with threshold contrast below 23%, after the optimization, detection occurred in 47.6% of the images. There was also an average MGD reduction of 7.5%. 
The clinical image quality criteria were met in 91.7% of cases for all breast thicknesses evaluated in both patient groups. Finally, this study also concluded that use of the AEC of the x-ray unit based on a constant dose to the detector may make it difficult for CR systems to operate under optimal conditions. More studies must be performed so that the compatibility between systems and optimization methodologies, as well as this optimization method itself, can be evaluated. Most methods are developed on phantoms, so comparative studies including clinical images must also be carried out.
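The contrast-to-noise ratio used as the optimization target above has a simple standard definition in mammography QA, computed from a signal region of interest and a background region of interest:

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio between two regions of interest.

    CNR = (mean_signal - mean_background) / std_background, with the
    masks given as boolean arrays over the image.
    """
    s = image[signal_mask].mean()
    b = image[background_mask].mean()
    return (s - b) / image[background_mask].std()
```

In the optimization described above, exposure parameters are chosen so that this CNR reaches a target value for each simulated breast thickness while keeping the mean glandular dose acceptable.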

  16. Automatic Classification of Station Quality by Image Based Pattern Recognition of Ppsd Plots

    NASA Astrophysics Data System (ADS)

    Weber, B.; Herrnkind, S.

    2017-12-01

    The number of seismic stations is growing, and it has become common practice to share station waveform data in real time with the main data centers such as IRIS, GEOFON, ORFEUS and RESIF. This has made analyzing station performance increasingly important for automatic real-time processing and station selection. The value of a station depends on different factors: the quality and quantity of the data, the location of the site, the general station density in the surrounding area and, finally, the type of application it can be used for. The approach described by McNamara and Boaz (2006) became standard in the last decade. It incorporates a probability density function (PDF) to display the distribution of seismic power spectral density (PSD). The low noise model (LNM) and high noise model (HNM) introduced by Peterson (1993) are also displayed in the PPSD plots introduced by McNamara and Boaz, allowing an estimation of the station quality. Here we describe how we established an automatic station quality classification module using image-based pattern recognition on PPSD plots. The plots were split into 4 bands: short-period characteristics (0.1-0.8 s), body wave characteristics (0.8-5 s), microseismic characteristics (5-12 s) and long-period characteristics (12-100 s). The module sqeval connects to a SeedLink server, checks available stations, and requests PPSD plots through the Mustang service from IRIS or PQLX/SQLX, or from GIS (gempa Image Server), a module to generate different kinds of images such as trace plots, map plots, helicorder plots or PPSD plots. It compares the image-based quality patterns for the different period bands with the retrieved PPSD plot. The quality of a station is divided into 5 classes for each of the 4 bands. Classes A, B, C, D define regular quality between LNM and HNM, while the fifth class represents out-of-order stations with gain problems, missing data etc. 
Over all period bands about 100 different patterns are required to classify most of the stations available on the IRIS server. The results are written to a file and stations can be filtered by quality. AAAA represents the best quality in all 4 bands. Also a differentiation between instrument types as broad band and short period stations is possible. A regular check using the IRIS SeedLink and Mustang service allow users to be informed about new stations with a specific quality.
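The PSD computation underlying these PPSD plots can be sketched with a Welch estimate summarized per period band. This is a minimal illustration (summarizing each band by its mean level in dB), whereas full McNamara-Boaz PPSDs accumulate the distribution of many PSD segments; the band edges follow the abstract:

```python
import numpy as np
from scipy import signal

def band_power_db(trace, fs, period_bands):
    """Mean power spectral density (dB) of a waveform per period band.

    Welch-estimate the PSD, then summarize each period band (given in
    seconds, e.g. 0.1-0.8 s for the short-period band) by the mean PSD
    level in dB.
    """
    freqs, psd = signal.welch(trace, fs=fs, nperseg=min(len(trace), 4096))
    out = {}
    for name, (tmin, tmax) in period_bands.items():
        # Convert the period band [tmin, tmax] seconds to frequencies f = 1/T.
        mask = (freqs >= 1.0 / tmax) & (freqs <= 1.0 / tmin)
        out[name] = 10 * np.log10(psd[mask].mean())
    return out
```

A station with strong energy at, say, 2 Hz (0.5 s period) shows an elevated level in the short-period band relative to the others.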

  17. A motion deblurring method with long/short exposure image pairs

    NASA Astrophysics Data System (ADS)

    Cui, Guangmang; Hua, Weiping; Zhao, Jufeng; Gong, Xiaoli; Zhu, Liyao

    2018-01-01

    In this paper, a motion deblurring method using long/short exposure image pairs is presented. The image pairs are captured for the same scene under different exposure times and serve as the input of the deblurring method, so that more information can be used to obtain a deblurred result with high image quality. First, luminance equalization is applied to the short-exposure image, and the blur kernel is estimated from the image pair under the maximum a posteriori (MAP) framework using a conjugate gradient algorithm. Then an L0 image-smoothing-based denoising method is applied to the luminance-equalized image, and the final deblurred result is obtained through gain-controlled residual image deconvolution, with the edge map as the gain map. Furthermore, a real experimental optical system is built to capture the image pairs, acquired under different exposure times and camera gain settings, in order to demonstrate the effectiveness of the proposed framework. Experimental results show that the proposed method provides superior deblurring results in both subjective and objective assessments compared with other approaches.
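    Once the blur kernel has been estimated from the image pair, the deconvolution step can be illustrated with the classic frequency-domain Wiener filter. This is a simplified stand-in for the paper's gain-controlled residual deconvolution, with an assumed regularization constant `k`:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, k=1e-3):
    """Frequency-domain Wiener deconvolution with a known blur kernel.

    The latent image is recovered in the Fourier domain as
    X = conj(H) * B / (|H|^2 + k), where H is the kernel spectrum, B the
    blurred image spectrum, and k a noise regularization constant.
    """
    H = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    X = np.conj(H) * B / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(X))
```

    Larger `k` suppresses noise amplification at frequencies where the kernel response is weak, at the cost of residual blur, which is the trade-off the gain-controlled scheme manages adaptively.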

  18. SVM Pixel Classification on Colour Image Segmentation

    NASA Astrophysics Data System (ADS)

    Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.

    2018-04-01

    The aim of image segmentation is to simplify the representation of an image, with the help of pixel clusters, into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image, and more precisely to label every pixel so that each pixel has an independent identity. SVM pixel classification for colour image segmentation is the topic highlighted in this paper. It holds useful applications in the fields of concept-based image retrieval, machine vision, medical imaging and object detection. The process is accomplished step by step. At first we need to recognize the type of colour and texture used as input to the SVM classifier. These inputs are extracted via a local spatial similarity measure model and a steerable filter, also known as a Gabor filter. The classifier is then trained using FCM (Fuzzy C-Means). Both the pixel-level information of the image and the ability of the SVM classifier are combined through the algorithm to form the final image. The method yields a well-developed segmented image, with increased quality and faster processing compared with segmentation methods proposed earlier. One of the latest application results is the Light L16 camera.
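    The core idea of SVM pixel classification can be sketched by training an SVM on a few labeled pixels and predicting a label for every pixel. This toy version uses only RGB values as features (the paper adds texture features from steerable/Gabor filters, omitted here) and scikit-learn's `SVC`:

```python
import numpy as np
from sklearn.svm import SVC

def segment_by_color(image, labeled_pixels):
    """Segment an RGB image by training an SVM on a few labeled pixels.

    `labeled_pixels` is a list of ((row, col), class_label) pairs; each
    pixel's RGB triple serves as its feature vector, and the trained SVM
    assigns a class label to every pixel of the image.
    """
    coords, labels = zip(*labeled_pixels)
    X = np.array([image[r, c] for r, c in coords], dtype=float)
    clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
    flat = image.reshape(-1, 3).astype(float)
    return clf.predict(flat).reshape(image.shape[:2])
```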

  19. An improved quantum watermarking scheme using small-scale quantum circuits and color scrambling

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Zhao, Ya; Xiao, Hong; Cao, Maojun

    2017-05-01

    In order to solve the problem of embedding a watermark into a quantum color image, in this paper an improved scheme using small-scale quantum circuits and color scrambling is proposed. Both the color carrier image and the color watermark image are represented using the novel enhanced quantum representation. The image sizes for the carrier and watermark are assumed to be 2^{n+1} × 2^{n+2} and 2^n × 2^n, respectively. At first, the color of the pixels in the watermark image is scrambled using controlled rotation gates; then the scrambled watermark, with image size 2^n × 2^n and 24-qubit gray scale, is expanded to an image with image size 2^{n+1} × 2^{n+2} and 3-qubit gray scale. Finally, the expanded watermark image is embedded into the carrier image by controlled-NOT gates. The extraction of the watermark is the reverse process of embedding, achieved by applying the operations in the reverse order. Simulation-based experimental results show that the proposed scheme is superior to other similar algorithms in terms of three items: visual quality, scrambling effect of the watermark image, and noise resistibility.

  20. Generation of synthetic image sequences for the verification of matching and tracking algorithms for deformation analysis

    NASA Astrophysics Data System (ADS)

    Bethmann, F.; Jepping, C.; Luhmann, T.

    2013-04-01

    This paper reports on a method for the generation of synthetic image data for almost arbitrary static or dynamic 3D scenarios. Image data generation is based on pre-defined 3D objects, object textures, camera orientation data and their imaging properties. The procedure does not focus on the creation of photo-realistic images under consideration of complex imaging and reflection models as used by common computer graphics programs. In contrast, the method is designed with main emphasis on geometrically correct synthetic images without radiometric impact. The calculation process includes photogrammetric distortion models, hence cameras with arbitrary geometric imaging characteristics can be applied. Consequently, image sets can be created that are consistent with mathematical photogrammetric models, to be used as sub-pixel accurate data for the assessment of high-precision photogrammetric processing methods. In the first instance the paper describes the process of image simulation under consideration of colour value interpolation, MTF/PSF and so on. Subsequently the geometric quality of the synthetic images is evaluated with ellipse operators. Finally, simulated image sets are used to investigate matching and tracking algorithms as they have been developed at IAPG for deformation measurement in car safety testing.
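    The photogrammetric imaging model mentioned above can be sketched as pinhole projection followed by Brown-style radial distortion. This is a minimal one-term illustration (real calibrations add k2, k3 and tangential terms; `f`, `cx`, `cy`, `k1` are hypothetical parameters):

```python
import numpy as np

def project_points(points, f, cx, cy, k1):
    """Project 3D camera-frame points to pixel coordinates.

    Pinhole projection (x, y) = (X/Z, Y/Z) followed by one radial
    distortion term: (u, v) = (cx, cy) + f * (x, y) * (1 + k1 * r^2).
    """
    x = points[:, 0] / points[:, 2]
    y = points[:, 1] / points[:, 2]
    r2 = x * x + y * y
    scale = 1 + k1 * r2  # radial distortion factor
    u = cx + f * x * scale
    v = cy + f * y * scale
    return np.column_stack([u, v])
```

    Rendering with such a model, rather than an ideal pinhole, is what makes the synthetic images consistent with the photogrammetric processing they are meant to test.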

  1. Evaluation of conventional imaging performance in a research whole-body CT system with a photon-counting detector array.

    PubMed

    Yu, Zhicong; Leng, Shuai; Jorgensen, Steven M; Li, Zhoubo; Gutjahr, Ralf; Chen, Baiyu; Halaweish, Ahmed F; Kappler, Steffen; Yu, Lifeng; Ritman, Erik L; McCollough, Cynthia H

    2016-02-21

    This study evaluated the conventional imaging performance of a research whole-body photon-counting CT system and investigated its feasibility for imaging using clinically realistic levels of x-ray photon flux. This research system was built on the platform of a 2nd generation dual-source CT system: one source coupled to an energy integrating detector (EID) and the other coupled to a photon-counting detector (PCD). Phantom studies were conducted to measure CT number accuracy and uniformity for water, CT number energy dependency for high-Z materials, spatial resolution, noise, and contrast-to-noise ratio. The results from the EID and PCD subsystems were compared. The impact of high photon flux, such as pulse pile-up, was assessed by studying the noise-to-tube-current relationship using a neonate water phantom and high x-ray photon flux. Finally, clinical feasibility of the PCD subsystem was investigated using anthropomorphic phantoms, a cadaveric head, and a whole-body cadaver, which were scanned at dose levels equivalent to or higher than those used clinically. Phantom measurements demonstrated that the PCD subsystem provided comparable image quality to the EID subsystem, except that the PCD subsystem provided slightly better longitudinal spatial resolution and about 25% improvement in contrast-to-noise ratio for iodine. The impact of high photon flux was found to be negligible for the PCD subsystem: only subtle high-flux effects were noticed for tube currents higher than 300 mA in images of the neonate water phantom. Results of the anthropomorphic phantom and cadaver scans demonstrated comparable image quality between the EID and PCD subsystems. There were no noticeable ring, streaking, or cupping/capping artifacts in the PCD images. In addition, the PCD subsystem provided spectral information. 
Our experiments demonstrated that the research whole-body photon-counting CT system is capable of providing clinical image quality at clinically realistic levels of x-ray photon flux.
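The noise-to-tube-current test described above exploits the quantum-limited scaling law: image noise should fall as the inverse square root of the tube current, and departures from this curve at high mA indicate count-rate (pile-up) effects. A minimal sketch of the expected-noise model:

```python
import numpy as np

def expected_noise(noise_ref, ma_ref, ma):
    """Quantum-limited noise predicted at tube current `ma`.

    Given a reference noise level `noise_ref` measured at current
    `ma_ref`, ideal Poisson statistics give
    noise(ma) = noise_ref * sqrt(ma_ref / ma).
    """
    return noise_ref * np.sqrt(ma_ref / ma)
```

Measured noise significantly above this prediction at high tube currents would flag pulse pile-up; the study found only subtle departures above 300 mA for the PCD subsystem.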

  2. Evaluation of conventional imaging performance in a research whole-body CT system with a photon-counting detector array

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Leng, Shuai; Jorgensen, Steven M.; Li, Zhoubo; Gutjahr, Ralf; Chen, Baiyu; Halaweish, Ahmed F.; Kappler, Steffen; Yu, Lifeng; Ritman, Erik L.; McCollough, Cynthia H.

    2016-02-01

    This study evaluated the conventional imaging performance of a research whole-body photon-counting CT system and investigated its feasibility for imaging using clinically realistic levels of x-ray photon flux. This research system was built on the platform of a 2nd generation dual-source CT system: one source coupled to an energy integrating detector (EID) and the other coupled to a photon-counting detector (PCD). Phantom studies were conducted to measure CT number accuracy and uniformity for water, CT number energy dependency for high-Z materials, spatial resolution, noise, and contrast-to-noise ratio. The results from the EID and PCD subsystems were compared. The impact of high photon flux, such as pulse pile-up, was assessed by studying the noise-to-tube-current relationship using a neonate water phantom and high x-ray photon flux. Finally, clinical feasibility of the PCD subsystem was investigated using anthropomorphic phantoms, a cadaveric head, and a whole-body cadaver, which were scanned at dose levels equivalent to or higher than those used clinically. Phantom measurements demonstrated that the PCD subsystem provided comparable image quality to the EID subsystem, except that the PCD subsystem provided slightly better longitudinal spatial resolution and about 25% improvement in contrast-to-noise ratio for iodine. The impact of high photon flux was found to be negligible for the PCD subsystem: only subtle high-flux effects were noticed for tube currents higher than 300 mA in images of the neonate water phantom. Results of the anthropomorphic phantom and cadaver scans demonstrated comparable image quality between the EID and PCD subsystems. There were no noticeable ring, streaking, or cupping/capping artifacts in the PCD images. In addition, the PCD subsystem provided spectral information. 
Our experiments demonstrated that the research whole-body photon-counting CT system is capable of providing clinical image quality at clinically realistic levels of x-ray photon flux.
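
The contrast-to-noise ratio quoted above is conventionally computed from ROI statistics. A minimal sketch using the common definition |mean(ROI) − mean(background)| / std(background); the study's exact ROI protocol is not given here, so the function and variable names are illustrative:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean(ROI) - mean(background)| / std(background)."""
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    return abs(roi.mean() - background.mean()) / background.std()
```

A ~25% CNR improvement, as reported for the PCD subsystem with iodine, would correspond to this ratio increasing by a factor of about 1.25 at matched dose.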

  3. Sensitivity analysis for future space missions with segmented telescopes for high-contrast imaging

    NASA Astrophysics Data System (ADS)

    Leboulleux, Lucie; Pueyo, Laurent; Sauvage, Jean-François; Mazoyer, Johan; Soummer, Remi; Fusco, Thierry; Sivaramakrishnan, Anand

    2018-01-01

    The detection and analysis of biomarkers on Earth-like planets using direct imaging will require both high-contrast imaging and spectroscopy at very close angular separation (a star-to-planet flux ratio of 10^10 at separations of a few 0.1”). This goal can only be achieved with large telescopes in space to overcome atmospheric turbulence, often combined with a coronagraphic instrument with wavefront control. Large segmented space telescopes such as those studied for the LUVOIR mission will generate segment-level instabilities and cophasing errors in addition to local mirror surface errors and other aberrations of the overall optical system. These effects contribute directly to the degradation of the final image quality and contrast. We present an analytical model that produces coronagraphic images of a segmented-pupil telescope in the presence of segment phasing aberrations expressed as Zernike polynomials. This model relies on a pair-based projection of the segmented pupil and provides results that match an end-to-end simulation with an rms error on the final contrast of ~3%. The analytical model can be applied to both static and dynamic modes, in either monochromatic or broadband light. It removes the need for the end-to-end Monte-Carlo simulations otherwise required to build a rigorous error budget, by enabling quasi-instantaneous analytical evaluations. The ability to invert the analytical model directly provides constraints and tolerances on all segment-level phasing errors and aberrations.

  4. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly on the basis of random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to <0.5 nm, it becomes crucial to also include systematic error contributions, which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections, and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1 nm or less. It is shown theoretically and in simulations that the metrology may significantly enhance the effect of overlay mark asymmetry and lead to a metrology inaccuracy of ~10 nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st-order diffraction-based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than that of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  5. Scatter measurement and correction method for cone-beam CT based on single grating scan

    NASA Astrophysics Data System (ADS)

    Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua

    2017-06-01

    In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single-grating scan. First, according to the characteristics of CBCT imaging, the single-grating scan method and the design requirements of the grating are analyzed and specified. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid additional scans, an angle-interpolation method for scatter images is proposed to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of slices can still be controlled within 3%.
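
The angle-interpolation idea can be sketched as simple linear interpolation between scatter images measured at sparse projection angles. This is a hedged illustration: the array layout, names, and the linear scheme are assumptions, not taken from the paper.

```python
import numpy as np

def interpolate_scatter(scatter_by_angle, measured_angles, query_angle):
    """Linearly interpolate a 2-D scatter image at an arbitrary projection
    angle from scatter images measured at sparse angles (e.g. every 30 deg)."""
    measured_angles = np.asarray(measured_angles, dtype=float)
    # find the bracketing measured angles
    idx = np.searchsorted(measured_angles, query_angle)
    if idx == 0:
        return scatter_by_angle[0]
    if idx >= len(measured_angles):
        return scatter_by_angle[-1]
    a0, a1 = measured_angles[idx - 1], measured_angles[idx]
    w = (query_angle - a0) / (a1 - a0)
    # weighted average of the two neighboring scatter images
    return (1 - w) * scatter_by_angle[idx - 1] + w * scatter_by_angle[idx]
```

A corrected projection is then obtained by subtracting the interpolated scatter image from the measured total projection at that angle.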

  6. Validation of a Monte Carlo simulation of the Philips Allegro/GEMINI PET systems using GATE

    NASA Astrophysics Data System (ADS)

    Lamare, F.; Turzo, A.; Bizais, Y.; Cheze LeRest, C.; Visvikis, D.

    2006-02-01

    A newly developed simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop a Monte Carlo simulation of a fully three-dimensional (3D) clinical PET scanner. The Philips Allegro/GEMINI PET systems were simulated in order to (a) allow a detailed study of the parameters affecting the system's performance under various imaging conditions, (b) study the optimization and quantitative accuracy of emission acquisition protocols for dynamic and static imaging, and (c) further validate the potential of GATE for the simulation of clinical PET systems. A model of the detection system and its geometry was developed. The accuracy of the developed detection model was tested through the comparison of simulated and measured results obtained with the Allegro/GEMINI systems for a number of NEMA NU2-2001 performance protocols including spatial resolution, sensitivity and scatter fraction. In addition, an approximate model of the system's dead time at the level of detected single events and coincidences was developed in an attempt to simulate the count rate related performance characteristics of the scanner. The developed dead-time model was assessed under different imaging conditions using the count rate loss and noise equivalent count rates performance protocols of standard and modified NEMA NU2-2001 (whole body imaging conditions) and NEMA NU2-1994 (brain imaging conditions) comparing simulated with experimental measurements obtained with the Allegro/GEMINI PET systems. Finally, a reconstructed image quality protocol was used to assess the overall performance of the developed model. An agreement of <3% was obtained in scatter fraction, with a difference between 4% and 10% in the true and random coincidence count rates respectively, throughout a range of activity concentrations and under various imaging conditions, resulting in <8% differences between simulated and measured noise equivalent count rates performance. 
    Finally, the image quality validation study revealed good agreement in signal-to-noise ratio and contrast recovery coefficients for a number of spheres of different volumes and two different (clinical-level-based) tumour-to-background ratios. In conclusion, these results support the accurate modelling of the Philips Allegro/GEMINI PET systems using GATE in combination with a dead-time model for the signal flow description, which leads to an agreement of <10% in coincidence count rates under different imaging conditions and clinically relevant activity concentration levels.

  7. Integrated satellite data fusion and mining for monitoring lake water quality status of the Albufera de Valencia in Spain.

    PubMed

    Doña, Carolina; Chang, Ni-Bin; Caselles, Vicente; Sánchez, Juan M; Camacho, Antonio; Delegido, Jesús; Vannah, Benjamin W

    2015-03-15

    Lake eutrophication is a critical issue in the interplay of water supply, environmental management, and ecosystem conservation. Integrated sensing, monitoring, and modeling for a holistic lake water quality assessment with respect to multiple constituents is in acute need. The aim of this paper is to develop an integrated algorithm for data fusion and mining of satellite remote sensing images to generate daily estimates of water quality parameters of interest, such as chlorophyll a concentration and water transparency, to be applied to the assessment of the hypertrophic Albufera de Valencia. The Albufera de Valencia is the largest freshwater lake in Spain; it can often present chlorophyll a concentrations over 200 mg m⁻³ and transparency (Secchi disk, SD) values as low as 20 cm. Remote sensing data from Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) images were fused to carry out an integrative near-real-time water quality assessment on a daily basis. Landsat images are useful for studying the spatial variability of the water quality parameters, given their 30 m spatial resolution, in comparison to the low spatial resolution (250/500 m) of MODIS. While Landsat offers a high spatial resolution, its low temporal resolution of 16 days is a significant drawback for a near-real-time monitoring system. This gap may be bridged by using MODIS images, which have a high temporal resolution of 1 day in spite of their low spatial resolution. Synthetic Landsat images were therefore generated by fusion for dates with no Landsat overpass over the study area. Finally, with a suite of ground truth data, a few genetic programming (GP) models were derived to estimate water quality using the fused surface reflectance data as inputs.
    The GP model for chlorophyll a estimation yielded an R² of 0.94, with a root mean square error (RMSE) of 8 mg m⁻³, and the GP model for water transparency estimation using the Secchi disk showed an R² of 0.89, with an RMSE of 4 cm. With this effort, the spatiotemporal variations of water transparency and chlorophyll a concentration may be assessed simultaneously on a daily basis throughout the lake for environmental management. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Automatic page layout using genetic algorithms for electronic albuming

    NASA Astrophysics Data System (ADS)

    Geigel, Joe; Loui, Alexander C. P.

    2000-12-01

    In this paper, we describe a flexible system for automatic page layout that makes use of genetic algorithms for albuming applications. The system is divided into two modules: a page creator module, which is responsible for distributing images amongst various album pages, and an image placement module, which positions images on individual pages. Final page layouts are specified in textual form using XML for printing or viewing over the Internet. The system makes use of genetic algorithms, a class of search and optimization algorithms based on the concepts of biological evolution, to generate solutions whose fitness is based on graphic design preferences supplied by the user. The genetic page layout algorithm has been incorporated into a web-based prototype system for interactive page layout over the Internet. The prototype system is built using a client-server architecture and is implemented in Java. The system described in this paper has demonstrated the feasibility of using genetic algorithms for automated page layout in albuming and web-based imaging applications. We believe that the system adequately proves the validity of the concept, providing creative layouts in a reasonable number of iterations. By optimizing the layout parameters of the fitness function, we hope to further improve the quality of the final layout in terms of user preference and computation speed.
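
The genetic-algorithm idea behind the page creator module can be illustrated with a toy version: distribute N images across P pages, with a fitness rewarding even page occupancy. The fitness function, operators, and parameters below are illustrative assumptions, not the paper's actual design-preference model.

```python
import random

def fitness(assignment, pages):
    """Higher is better: negative variance of per-page image counts."""
    counts = [assignment.count(p) for p in range(pages)]
    mean = sum(counts) / pages
    return -sum((c - mean) ** 2 for c in counts)

def evolve(n_images=12, pages=3, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    # each individual assigns every image index to a page index
    pop = [[rng.randrange(pages) for _ in range(n_images)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, pages), reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_images)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.2:                # mutation
                child[rng.randrange(n_images)] = rng.randrange(pages)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda ind: fitness(ind, pages))

best = evolve()
```

A real albuming fitness would instead score graphic-design preferences (balance, spacing, emphasis) supplied by the user, as the abstract describes.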

  9. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs.

    PubMed

    Sensakovic, William F; O'Dell, M Cody; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-10-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that these image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images of a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, GE Healthcare, Waukesha, WI; and MUSICA², Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer pairing GE-processed and Agfa-processed images of equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed, and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched the image quality of GE-processed images with Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent quality at lower doses.
Those purchasing projection radiography systems or third-party image-processing software should be aware that image processing can significantly impact image quality when settings are left near default values.

  10. Pleiades image quality: from users' needs to products definition

    NASA Astrophysics Data System (ADS)

    Kubik, Philippe; Pascal, Véronique; Latry, Christophe; Baillarin, Simon

    2005-10-01

    Pleiades is the highest-resolution civilian Earth-observing system ever developed in Europe. This imagery programme is conducted by the French National Space Agency, CNES. In 2008-2009 it will operate two agile satellites designed to provide optical images to civilian and defence users. Images will be acquired simultaneously in panchromatic (PA) and multispectral (XS) mode, which allows, under Nadir acquisition conditions, delivery of 20 km wide, false- or natural-colored scenes with a 70 cm ground sampling distance after PA+XS fusion. Imaging capabilities have been highly optimized in order to acquire along-track mosaics, stereo pairs and triplets, and multi-targets. To fulfill the operational requirements and ensure quick access to information, the ground processing has to perform the radiometric and geometric corrections automatically. Since ground processing capabilities were taken into account very early in the programme development, it has been possible to relax some costly on-board component requirements and achieve a cost-effective on-board/ground compromise. Starting from an overview of the system characteristics, this paper deals with the image product definitions (raw level, perfect sensor, orthoimage and along-track orthomosaics) and the main processing steps. It shows how each system performance results from the satellite performance followed by appropriate ground processing. Finally, it focuses on the radiometric performance of the final products, which is intimately linked to the following processing steps: radiometric corrections, PA restoration, image resampling and pan-sharpening.

  11. An Adaptive Immune Genetic Algorithm for Edge Detection

    NASA Astrophysics Data System (ADS)

    Li, Ying; Bai, Bendu; Zhang, Yanning

    An adaptive immune genetic algorithm (AIGA) based on a cost-minimization technique for edge detection is proposed. The proposed AIGA recommends the use of adaptive probabilities of crossover, mutation and the immune operation, and a geometric annealing schedule in the immune operator, to realize the twin goals of maintaining diversity in the population and sustaining a fast convergence rate when solving complex problems such as edge detection. Furthermore, AIGA can effectively exploit prior knowledge and information about the local edge structure in the edge image to make vaccines, which gives AIGA much better local search ability than the canonical genetic algorithm. Experimental results on gray-scale images show that the proposed algorithm performs well in terms of the quality of the final edge image, the rate of convergence, and robustness to noise.
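
The adaptive crossover/mutation probabilities can be sketched with a Srinivas-Patnaik-style rule, the classic scheme that adaptive GAs of this kind commonly build on; the constants and exact form here are illustrative, not the paper's.

```python
def adaptive_rate(f, f_max, f_avg, k_above=1.0, k_below=1.0):
    """Srinivas-Patnaik-style adaptive probability: above-average
    individuals get a rate that shrinks toward 0 as f approaches f_max
    (protecting good solutions), while below-average individuals keep a
    high fixed rate (promoting diversity)."""
    if f < f_avg or f_max == f_avg:   # below average, or degenerate population
        return k_below
    return k_above * (f_max - f) / (f_max - f_avg)
```

The same rule can drive the crossover, mutation, and immune-operation probabilities with different constants for each operator.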

  12. Reliability of a visual scoring system with fluorescent tracers to assess dermal pesticide exposure.

    PubMed

    Aragon, Aurora; Blanco, Luis; Lopez, Lylliam; Liden, Carola; Nise, Gun; Wesseling, Catharina

    2004-10-01

    We modified Fenske's semi-quantitative 'visual scoring system' for fluorescent tracer deposited on the skin of pesticide applicators and evaluated its reproducibility in the Nicaraguan setting. The body surface of 33 farmers, divided into 31 segments, was videotaped in the field after spraying with a pesticide solution containing a fluorescent tracer. A portable UV lamp was used for illumination in a foldaway dark room. The videos of five farmers were randomly selected. The scoring was based on a matrix with the extension of fluorescent patterns (scale 0-5) on the ordinate and intensity (scale 0-5) on the abscissa, with the product of these two ranks as the final score for each body segment (0-25). After 4 h of training, five medical students rated 155 video images and evaluated their quality. Cronbach alpha coefficients and two-way random-effects intraclass correlation coefficients (ICC) with absolute agreement were computed to assess inter-rater reliability. Consistency was high (Cronbach alpha = 0.96), but the scores differed substantially between raters. The overall ICC was satisfactory [0.75; 95% confidence interval (CI) = 0.62-0.83], but it was lower for intensity (0.54; 95% CI = 0.40-0.66) and higher for extension (0.80; 95% CI = 0.71-0.86). ICCs were lowest for images with low scores that were evaluated as low quality, and highest for images with high scores and high quality. The inter-rater reliability coefficients indicate repeatability of the scoring system. However, field conditions for recording fluorescence should be improved to achieve higher-quality images, and training should emphasize a better mechanism for reading body areas with low contamination.
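
The scoring matrix described above reduces to a simple product of the two ranks; a sketch (the function name is illustrative):

```python
def segment_score(extension, intensity):
    """Exposure score for one body segment: the product of the extension
    rank (0-5) and the intensity rank (0-5), giving a score of 0-25."""
    if not (0 <= extension <= 5 and 0 <= intensity <= 5):
        raise ValueError("ranks must be on the 0-5 scale")
    return extension * intensity
```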

  13. A virtual image chain for perceived image quality of medical display

    NASA Astrophysics Data System (ADS)

    Marchessoux, Cédric; Jung, Jürgen

    2006-03-01

    This paper describes a virtual image chain for medical display (project VICTOR, granted in the 5th Framework Programme by the European Commission). The chain starts from the raw data of an image digitizer (CR, DR) or from synthetic patterns, and covers image enhancement (MUSICA by Agfa) and both display possibilities: hardcopy (film on a viewing box) and softcopy (monitor). The key feature of the chain is a completely image-wise approach. A first prototype is implemented in an object-oriented software platform. The display chain consists of several modules. Raw images are either taken from scanners (CR-DR) or from a pattern generator, in which the characteristics of DR-CR systems are introduced through their MTF and their dose-dependent Poisson noise. The image undergoes image enhancement and is then displayed. For softcopy display, color and monochrome monitors are used in the simulation. The image is down-sampled. The non-linear response of a color monitor is taken into account by the GOG or S-curve model, whereas the DICOM Grayscale Standard Display Function is used for monochrome display. The MTF of the monitor is applied to the image in intensity levels. For hardcopy display, the combination of film, printer, lightbox and viewing conditions is modeled. The image is up-sampled and the DICOM GSDF or a Kanamori look-up table is applied. An anisotropic model for the MTF of the printer is applied to the image in intensity levels. The density-dependent color (XYZ) of the hardcopy film is introduced by look-up tables. Finally, a human visual system model is applied to the intensity images (XYZ in terms of cd/m²) in order to eliminate non-visible differences. Comparison yields the visible differences, which are quantified by higher-order image quality metrics. A specific image viewer is used for the visualization of the intensity image and the visual difference maps.
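
One recurring step in the chain, applying a display MTF to an image in intensity levels, can be sketched as frequency-domain filtering. The Gaussian MTF below is an illustrative stand-in for the measured monitor/printer MTFs the chain actually uses:

```python
import numpy as np

def apply_mtf(image, sigma_freq=0.2):
    """Filter an intensity image by a radially symmetric MTF in the
    frequency domain. MTF(0) = 1, so the mean intensity is preserved."""
    fy = np.fft.fftfreq(image.shape[0])
    fx = np.fft.fftfreq(image.shape[1])
    f = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)   # radial frequency
    mtf = np.exp(-(f ** 2) / (2 * sigma_freq ** 2))    # illustrative Gaussian MTF
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mtf))
```

An anisotropic printer MTF, as mentioned in the abstract, would simply replace the radial `f` with separate weightings of `fx` and `fy`.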

  14. Design of a rear anamorphic attachment for digital cinematography

    NASA Astrophysics Data System (ADS)

    Cifuentes, A.; Valles, A.

    2008-09-01

    Digital taking systems for HDTV and now for the film industry present a particularly challenging design problem for rear adapters in general. The thick 3-channel prism block in the camera provides an important challenge in the design. In this paper the design of a 1.33x rear anamorphic attachment is presented. The new design departs significantly from the traditional Bravais condition due to the thick dichroic prism block. Design strategies for non-rotationally symmetric systems and fields of view are discussed. Anamorphic images intrinsically have a lower contrast and less resolution than their rotationally symmetric counterparts, therefore proper image evaluation must be considered. The interpretation of the traditional image quality methods applied to anamorphic images is also discussed in relation to the design process. The final design has a total track less than 50 mm, maintaining the telecentricity of the digital prime lens and taking full advantage of the f/1.4 prism block.

  15. Optical image encryption based on real-valued coding and subtracting with the help of QR code

    NASA Astrophysics Data System (ADS)

    Deng, Xiaopeng

    2015-08-01

    A novel optical image encryption method based on real-valued coding and subtraction is proposed with the help of a quick response (QR) code. In the encryption process, the original image to be encoded is first transformed into the corresponding QR code, which is then encoded into two phase-only masks (POMs) using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and results collected with an actual smartphone show that the method is feasible and has strong tolerance to noise, phase difference, and the ratio between the intensities of the two decryption light beams.
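
The general principle of splitting a value into two phase-only quantities and recovering it from the intensity of their subtraction can be illustrated per pixel. This is a generic sketch of that idea, not the paper's exact vector-operation algorithm:

```python
import numpy as np

def encode_pixel(a, theta=0.7):
    """Split a value a in [0, 2] into two unit-magnitude phasors p1, p2
    whose sum has magnitude a (the phase-only decomposition idea)."""
    half = np.arccos(a / 2.0)
    p1 = np.exp(1j * (theta + half))
    p2 = np.exp(1j * (theta - half))
    return p1, p2

def decode_pixel(p1, p2):
    """Recover a from the intensity of the subtraction:
    |p1 - p2|^2 = 4*sin^2(half) = 4 - a^2, so a = sqrt(4 - intensity)."""
    intensity = np.abs(p1 - p2) ** 2
    return np.sqrt(np.clip(4.0 - intensity, 0.0, None))
```

Applied pixel-wise to a binary QR image, such a decomposition yields two phase-only arrays from which the QR code can be rebuilt and then scanned.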

  16. Recent Developments in Computed Tomography for Urolithiasis: Diagnosis and Characterization

    PubMed Central

    Mc Laughlin, P. D.; Crush, L.; Maher, M. M.; O'Connor, O. J.

    2012-01-01

    Objective. To critically evaluate the current literature in an effort to establish the current role of radiologic imaging, advances in computed tomography (CT) and standard film radiography in the diagnosis, and characterization of urinary tract calculi. Conclusion. CT has a valuable role when utilized prudently during surveillance of patients following endourological therapy. In this paper, we outline the basic principles relating to the effects of exposure to ionizing radiation as a result of CT scanning. We discuss the current developments in low-dose CT technology, which have resulted in significant reductions in CT radiation doses (to approximately one-third of what they were a decade ago) while preserving image quality. Finally, we will discuss an important recent development now commercially available on the latest generation of CT scanners, namely, dual energy imaging, which is showing promise in urinary tract imaging as a means of characterizing the composition of urinary tract calculi. PMID:22952473

  17. Optical Design with Narrow-Band Imaging for a Capsule Endoscope.

    PubMed

    Yen, Chih-Ta; Lai, Zong-Wei; Lin, Yu-Ting; Cheng, Hsu-Chih

    2018-01-01

    This study proposes a narrow-band imaging (NBI) lens design at 415 nm and 540 nm for a capsule endoscope (CE). Previous research shows that, in terms of accuracy in detecting and screening neoplastic and nonneoplastic intestinal lesions, NBI systems outperform traditional endoscopes and rival chromoendoscopes. In the proposed NBI CE optical system, the simulation results show that the field of view (FOV) was 109.8°; the modulation transfer function (MTF) could achieve 12.5% at 285 lp/mm and 34.1% at 144 lp/mm. The relative illumination reaches more than 60%, and the system's total length is less than 4 mm. Finally, this design provides high-quality images for a 300-megapixel 1/4″ CMOS image sensor with a pixel size of 1.75 μm.

  18. Very low-dose adult whole-body tumor imaging with F-18 FDG PET/CT

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Naveed, Muhammad; McGrath, Mary; Lisi, Michele; Lavalley, Cathy; Feiglin, David

    2015-03-01

    The aim of this study was to evaluate whether the effective radiation dose due to the PET component in adult whole-body tumor imaging with time-of-flight F-18 FDG PET/CT could be significantly reduced. We retrospectively analyzed data for 10 patients with body mass indices ranging from 25 to 50. We simulated F-18 FDG dose reduction to 25% of the ACR-recommended dose by reconstructing simulated shorter acquisition-time-per-bed-position scans from the acquired list data. F-18 FDG whole-body scans were reconstructed using a time-of-flight OSEM algorithm and advanced system modeling. Two groups of images were obtained: group A with the standard dose of F-18 FDG and standard reconstruction parameters, and group B with the simulated 25% dose and modified reconstruction parameters. Three nuclear medicine physicians blinded to the simulated activity independently reviewed the images and compared their diagnostic quality. Based on the input from the physicians, we selected optimal modified reconstruction parameters for group B. In the images so obtained, all the lesions observed in group A were also visible in group B. Tumor SUV values differed between groups A and B. However, no significant differences were reported in the final interpretation of the images from the two groups. In conclusion, for a small number of patients, we have demonstrated that F-18 FDG dose reduction to 25% of the ACR-recommended dose, accompanied by appropriate modification of the reconstruction parameters, provided adequate diagnostic quality of PET images acquired on a time-of-flight PET/CT.

  19. A novel high-frequency encoding algorithm for image compression

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at the compression stage and a new concurrent binary search algorithm at the decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC coefficients, reducing each block by 2/3 and resulting in a minimized array; (3) build a look-up table of probability data to enable the recovery of the original high frequencies at the decompression stage; (4) apply a delta or differential operator to the list of DC components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At the decompression stage, the look-up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC coefficients, while the DC components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images, including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG, with quality equivalent to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
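
Step (1) of the pipeline, the blockwise DCT, can be sketched with an orthonormal DCT-II matrix applied to 8×8 blocks. The block size and normalization are standard JPEG-style assumptions; the abstract does not fix them:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix: C[k, m] = sqrt(2/n)*cos(pi*(2m+1)k/(2n)),
    with the k = 0 row scaled to sqrt(1/n) so that C @ C.T = I."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def block_dct2(image, n=8):
    """Apply a 2-D DCT to each n x n block: block -> C @ block @ C.T."""
    h, w = image.shape
    assert h % n == 0 and w % n == 0
    C = dct_matrix(n)
    out = np.empty_like(image, dtype=float)
    for i in range(0, h, n):
        for j in range(0, w, n):
            out[i:i+n, j:j+n] = C @ image[i:i+n, j:j+n] @ C.T
    return out
```

Because the matrix is orthonormal, the inverse DCT in the final decompression step is simply `C.T @ block @ C` per block.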

  20. WE-G-204-07: Automated Characterization of Perceptual Quality of Clinical Chest Radiographs: Improvements in Lung, Spine, and Hardware Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, J; Zhang, L; Samei, E

    Purpose: To develop and validate more robust methods for automated lung, spine, and hardware detection in AP/PA chest images. This work is part of a continuing effort to automatically characterize the perceptual image quality of clinical radiographs. [Y. Lin et al. Med. Phys. 39, 7019–7031 (2012)] Methods: Our previous implementation of lung/spine identification was applicable to only one vendor. A more generalized routine was devised based on three primary components: lung boundary detection, fuzzy c-means (FCM) clustering, and a clinically-derived lung pixel probability map. Boundary detection was used to constrain the lung segmentations. FCM clustering produced grayscale- and neighborhood-based pixel classification probabilities, which are weighted by the clinically-derived probability maps to generate a final lung segmentation. Lung centerlines were set along the left-right lung midpoints. Spine centerlines were estimated as a weighted average of body contour, lateral lung contour, and intensity-based centerline estimates. Centerline estimation was tested on 900 clinical AP/PA chest radiographs which included inpatient/outpatient, upright/bedside, men/women, and adult/pediatric images from multiple imaging systems. Our previous implementation further did not account for the presence of medical hardware (pacemakers, wires, implants, staples, stents, etc.) potentially biasing image quality analysis. A hardware detection algorithm was developed using a gradient-based thresholding method. The training and testing paradigm used a set of 48 images from which 1920 51×51-pixel² ROIs with and 1920 ROIs without hardware were manually selected. Results: Acceptable lung centerlines were generated in 98.7% of radiographs while spine centerlines were acceptable in 99.1% of radiographs. Following threshold optimization, the hardware detection software yielded average true positive and true negative rates of 92.7% and 96.9%, respectively.
    Conclusion: Updated segmentation and centerline estimation methods, in addition to new gradient-based hardware detection software, provide improved data integrity control and error-checking for automated clinical chest image quality characterization across multiple radiography systems.

  1. TU-F-CAMPUS-I-04: Head-Only Asymmetric Gradient System Evaluation: ACR Image Quality and Acoustic Noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weavers, P; Shu, Y; Tao, S

    Purpose: A high-performance head-only magnetic resonance imaging gradient system with an acquisition volume of 26 cm employing an asymmetric design for the transverse coils has been developed. It is able to reach a magnitude of 85 mT/m at a slew rate of 700 T/m/s, but was operated at 80 mT/m and 500 T/m/s for this test. A challenge resulting from this asymmetric design is that the gradient nonlinearity exhibits both odd- and even-ordered terms, and as the full imaging field of view is often used, the nonlinearity is pronounced. The purpose of this work is to show the system can produce clinically useful images after an on-site gradient nonlinearity calibration and correction, and to show that acoustic noise levels fall within non-significant risk (NSR) limits for standard clinical pulse sequences. Methods: The head-only gradient system was inserted into a standard 3T wide-bore scanner without acoustic damping. The ACR phantom was scanned in an 8-channel receive-only head coil and the standard American College of Radiology (ACR) MRI quality control (QC) test was performed. Acoustic noise levels were measured for several standard pulse sequences. Results: Images acquired with the head-only gradient system passed all ACR MR image quality tests; both even- and odd-order gradient distortion correction terms were required for the asymmetric gradients to pass. Acoustic noise measurements were within the FDA NSR guidelines of 99 dBA A-weighted (with assumed 20 dBA hearing protection) and 140 dB peak for all but one sequence. Note that the gradient system was installed without any shroud or acoustic batting; we expect final system integration to greatly reduce the noise experienced by the patient. Conclusion: A high-performance head-only asymmetric gradient system operating at 80 mT/m and 500 T/m/s conforms to FDA acoustic noise limits in all but one case, and passes all the ACR MR image quality control tests. This work was supported in part by NIH grant 5R01EB010065.

  2. Pan Sharpening Quality Investigation of Turkish In-Operation Remote Sensing Satellites: Applications with Rasat and GÖKTÜRK-2 Images

    NASA Astrophysics Data System (ADS)

    Ozendi, Mustafa; Topan, Hüseyin; Cam, Ali; Bayık, Çağlar

    2016-10-01

    Recently, two optical remote sensing satellites, RASAT and GÖKTÜRK-2, were successfully launched by the Republic of Turkey. RASAT has 7.5 m panchromatic and 15 m visible bands, whereas GÖKTÜRK-2 has 2.5 m panchromatic and 5 m VNIR (Visible and Near Infrared) bands. Bands with different resolutions can be fused by pan-sharpening methods, an important application area of optical remote sensing imagery: the high geometric resolution of the panchromatic band and the high spectral resolution of the VNIR bands are merged. Many pan-sharpening methods exist in the literature; however, there is no standard framework for quality investigation of pan-sharpened imagery. The aim of this study is to investigate the pan-sharpening performance of RASAT and GÖKTÜRK-2 images. For this purpose, pan-sharpened images are first generated using the most popular pan-sharpening methods, IHS, Brovey and PCA. This procedure is followed by quantitative evaluation of the pan-sharpened images using the Correlation Coefficient (CC), Root Mean Square Error (RMSE), Relative Average Spectral Error (RASE), Spectral Angle Mapper (SAM) and Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) metrics. For generation of pan-sharpened images and computation of metrics, the SharpQ tool, developed in the MATLAB computing language, is used. According to the metrics, the PCA-derived pan-sharpened image is the most similar to the multispectral image for RASAT, and the Brovey-derived pan-sharpened image is the most similar for GÖKTÜRK-2. Finally, the pan-sharpened images are evaluated qualitatively in terms of object availability and completeness for various land covers (such as urban, forest and flat areas) by a group of operators experienced in remote sensing imagery.
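
The quantitative metrics named above are standard in the pan-sharpening literature. Below is a minimal numpy sketch of CC, RMSE, and ERGAS (the SharpQ tool itself is MATLAB and is not reproduced; the band values are hypothetical):

```python
import numpy as np

def band_cc(ref, fused):
    # Pearson correlation coefficient between reference and fused band
    return float(np.corrcoef(ref.ravel(), fused.ravel())[0, 1])

def band_rmse(ref, fused):
    return float(np.sqrt(np.mean((ref.astype(float) - fused.astype(float)) ** 2)))

def ergas(ref_bands, fused_bands, ratio):
    """ERGAS: 100 * (pan GSD / MS GSD) * sqrt(mean_k (RMSE_k / mean_k)^2).

    For RASAT the resolution ratio would be 7.5 / 15 = 0.5.
    """
    terms = [(band_rmse(r, f) / float(np.mean(r))) ** 2
             for r, f in zip(ref_bands, fused_bands)]
    return 100.0 * ratio * float(np.sqrt(np.mean(terms)))

ref = np.array([[10.0, 20.0], [30.0, 40.0]])   # hypothetical reference band
fused = ref + 1.0                              # a fused band with constant bias
```

Note that a constant bias leaves CC at 1 while RMSE and ERGAS are nonzero, which is why several complementary metrics are reported.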

  3. Characterization of cervigram image sharpness using multiple self-referenced measurements and random forest classifiers

    NASA Astrophysics Data System (ADS)

    Jaiswal, Mayoore; Horning, Matt; Hu, Liming; Ben-Or, Yau; Champlin, Cary; Wilson, Benjamin; Levitz, David

    2018-02-01

    Cervical cancer is the fourth most common cancer among women worldwide and is especially prevalent in low-resource settings due to a lack of screening and treatment options. Visual inspection with acetic acid (VIA) is a widespread and cost-effective screening method for cervical pre-cancer lesions, but accuracy depends on the experience level of the health worker. Digital cervicography, capturing images of the cervix, enables review by an off-site expert or potentially a machine learning algorithm. These reviews require images of sufficient quality; however, image quality varies greatly across users. A novel algorithm was developed to evaluate the sharpness of images captured with MobileODT's digital cervicography device (EVA System), in order to eventually provide feedback to the health worker. The key challenges are that the algorithm evaluates only a single image of each cervix, it needs to be robust to the variability in cervix images and fast enough to run in real time on a mobile device, and the machine learning model needs to be small enough to fit in a mobile device's memory and train on a small imbalanced dataset. In this paper, the focus scores of a preprocessed image and a Gaussian-blurred version of the image are calculated using established methods and used as features. A feature selection metric is proposed to select the top features, which are then used in a random forest classifier to produce the final focus score. The resulting model, based on nine calculated focus scores, achieved significantly better accuracy than any single focus measure when tested on a holdout set of images. The area under the receiver operating characteristic curve was 0.9459.
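
The self-referenced idea of comparing an image's focus score against that of a deliberately blurred copy can be illustrated with one classical focus measure (variance of the Laplacian); the paper's nine focus scores and its random forest are not reproduced here, and the box filter is a stand-in for the Gaussian blur:

```python
import numpy as np

def laplacian_variance(img):
    # variance of the 4-neighbour Laplacian: a classical sharpness measure
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def box_blur(img):
    # 3x3 mean filter standing in for the Gaussian blur used in the paper
    p = np.pad(img.astype(float), 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

sharp = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)  # checkerboard test image
focus_orig = laplacian_variance(sharp)
focus_blur = laplacian_variance(box_blur(sharp))
# a sharp input loses most of its focus score when blurred again, so this
# ratio is small for sharp images and near 1 for already-blurry ones
self_ref_feature = focus_blur / focus_orig
```

Features of this kind need no second image of the same cervix, which is what makes the approach "self-referenced".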

  4. Digital X-Ray Imager Final Report CRADA No. TSB-1161-95

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Logan, C.; Toker, E.

    The global objective of this cooperation was to lower the cost and improve the quality of breast health care in the United States. We planned to achieve it by designing a very high performance digital radiography unit for breast surgical specimen radiography in the operating room. These technical goals needed to be achieved at reasonable manufacturing costs to enable MedOptics to achieve high market penetration at a profit.

  5. An imaging method of wavefront coding system based on phase plate rotation

    NASA Astrophysics Data System (ADS)

    Yi, Rigui; Chen, Xi; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2018-01-01

    Wave-front coding has great prospects for extending the depth of field of an optical imaging system and reducing optical aberrations, but image quality and noise performance are inevitably reduced. Based on theoretical analysis of the wave-front coding system and the phase function expression of the cubic phase plate, this paper exploits the fact that the phase function expression is invariant in the new coordinate system when the phase plate is rotated by different angles around the z-axis, and we propose a method based on rotation of the phase plate and image fusion. First, the phase plate is rotated by a certain angle around the z-axis; the shape and distribution of the PSF obtained on the image surface remain unchanged, and its rotation angle and direction follow those of the phase plate. Then, the intermediate blurred image is filtered using the point spread function of the rotation adjustment. Finally, the restored images are fused by the Laplacian pyramid image fusion method and the Fourier transform spectrum fusion method, and the results are evaluated subjectively and objectively. We used Matlab to simulate the images. With the Laplacian pyramid fusion method, the signal-to-noise ratio of the image is increased by 19%-27%, the clarity by 11%-15%, and the average gradient by 4%-9%. With the Fourier transform spectrum fusion method, the signal-to-noise ratio is increased by 14%-23%, the clarity by 6%-11%, and the average gradient by 2%-6%. The experimental results show that processing images with the above method can improve the quality of the restored image, improving image clarity while effectively preserving image information.
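
A minimal two-image Laplacian pyramid fusion, of the kind applied to the restored images above, might look like the following sketch (a 3×3 box filter stands in for the usual Gaussian kernel, and the max-magnitude detail rule is a common choice, not necessarily the authors'):

```python
import numpy as np

def _blur(img):
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def _down(img):
    return _blur(img)[::2, ::2]

def _up(img, shape):
    big = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return big[:shape[0], :shape[1]]

def laplacian_fuse(a, b, levels=2):
    """Fuse two registered images: at each detail level keep the
    larger-magnitude coefficient; average the coarse residuals."""
    det_a, det_b = [], []
    for _ in range(levels):
        da, db = _down(a), _down(b)
        det_a.append(a - _up(da, a.shape))
        det_b.append(b - _up(db, b.shape))
        a, b = da, db
    fused = 0.5 * (a + b)
    for la, lb in zip(reversed(det_a), reversed(det_b)):
        fused = _up(fused, la.shape) + np.where(np.abs(la) >= np.abs(lb), la, lb)
    return fused

x = np.arange(16.0).reshape(4, 4)
```

Fusing an image with itself reconstructs it exactly (up to floating-point error), which is a quick sanity check that the pyramid decomposition and recombination are consistent.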

  6. Heterogeneous sharpness for cross-spectral face recognition

    NASA Astrophysics Data System (ADS)

    Cao, Zhicheng; Schmid, Natalia A.

    2017-05-01

    Matching images acquired in different electromagnetic bands remains a challenging problem. An example of this type of comparison is matching active or passive infrared (IR) images against a gallery of visible face images, known as cross-spectral face recognition. Among many unsolved issues is the quality disparity of the heterogeneous images. Images acquired in different spectral bands are of unequal image quality due to distinct imaging mechanisms, standoff distances, imaging environments, etc. To reduce the effect of quality disparity on recognition performance, one can manipulate images to either improve the quality of poor-quality images or degrade the high-quality images to the level of their heterogeneous counterparts. To estimate the level of discrepancy in quality of two heterogeneous images, a quality metric such as image sharpness is needed; it provides guidance on how much quality improvement or degradation is appropriate. In this work we consider sharpness as a relative measure of heterogeneous image quality. We propose a generalized definition of sharpness by first achieving image quality parity and then finding and building a relationship between the image quality of two heterogeneous images; the new sharpness metric is therefore named heterogeneous sharpness. Image quality parity is achieved by experimentally finding the optimal cross-spectral face recognition performance while the quality of the heterogeneous images is varied using a Gaussian smoothing function with different standard deviations. This relationship is established using two models, one involving a regression model and the other a neural network. To train, test and validate the models, we use composite operators developed in our lab to extract features from heterogeneous face images and use the sharpness metric to evaluate the face image quality within each band. Images from three spectral bands (visible light, near infrared, and short-wave infrared) are considered in this work. Both the error of the regression model and the validation error of the neural network are analyzed.

  7. Contribution of cardiac-induced brain pulsation to the noise of the diffusion tensor in Turboprop diffusion tensor imaging (DTI).

    PubMed

    Gui, Minzhi; Tamhane, Ashish A; Arfanakis, Konstantinos

    2008-05-01

    To assess the effects of cardiac-induced brain pulsation on the noise of the diffusion tensor in Turboprop (a form of periodically rotated overlapping parallel lines with enhanced reconstruction [PROPELLER] imaging) diffusion tensor imaging (DTI). A total of six healthy human subjects were imaged with cardiac-gated as well as nongated Turboprop DTI. Gated and nongated Turboprop DTI datasets were also simulated using actual data acquired exclusively during the diastolic or systolic period of the cardiac cycle. The total variance of the diffusion tensor (TVDT) was measured and compared between acquisitions. The TVDT near the ventricles was significantly reduced in cardiac-gated compared to nongated Turboprop DTI acquisitions. Furthermore, the effects of brain pulsation were reduced, but not eliminated, when increasing the amount of data collected. Finally, data corrupted by cardiac-induced pulsation were not consistently detected by the step of the conventional Turboprop reconstruction algorithm that evaluates the quality of data in different blades. Thus, the inherent quality weighting of the conventional Turboprop reconstruction algorithm was unable to compensate for the increased noise in the diffusion tensor due to brain pulsation. Cardiac-induced brain pulsation increases the TVDT in Turboprop DTI. Use of cardiac gating to limit data acquisition to the diastolic period of the cardiac cycle reduces the TVDT at the expense of imaging time. (c) 2008 Wiley-Liss, Inc.

  8. Adaptive Intuitionistic Fuzzy Enhancement of Brain Tumor MR Images

    NASA Astrophysics Data System (ADS)

    Deng, He; Deng, Wankai; Sun, Xianping; Ye, Chaohui; Zhou, Xin

    2016-10-01

    Image enhancement techniques are able to improve the contrast and visual quality of magnetic resonance (MR) images. However, conventional methods cannot make up for deficiencies encountered by the respective brain tumor MR imaging modes. In this paper, we propose an adaptive intuitionistic fuzzy sets-based scheme, called AIFE, which takes information provided by different MR acquisitions and tries to enhance the normal and abnormal structural regions of the brain while displaying the enhanced results as a single image. The AIFE scheme first separates an input image into several sub-images, then divides each sub-image into object and background areas. After that, different novel fuzzification, hyperbolization and defuzzification operations are applied to each object/background area, and finally an enhanced result is achieved via nonlinear fusion operators. The fuzzy implementations can be processed in parallel. Real-data experiments demonstrate that the AIFE scheme not only effectively fuses information from images acquired with different MR sequences into a single image, but also achieves better enhancement performance than conventional baseline algorithms. This indicates that the proposed AIFE scheme has potential for improving the detection and diagnosis of brain tumors.
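
A stripped-down fuzzification / nonlinear modification / defuzzification pass, loosely in the spirit of the pipeline above, could look like this; the Sugeno-type negation and the tuning constants `lam` and `power` are illustrative assumptions, not the paper's operators:

```python
import numpy as np

def fuzzy_enhance(img, lam=0.9, power=2.0):
    """Fuzzify intensities, apply an intuitionistic-style modification,
    then defuzzify back to gray levels. lam controls the Sugeno-type
    non-membership; power applies a hyperbolization-like stretch."""
    g_min, g_max = float(img.min()), float(img.max())
    mu = (img - g_min) / (g_max - g_min)       # fuzzification into [0, 1]
    nu = (1.0 - mu) / (1.0 + lam * mu)         # non-membership (hesitation built in)
    mu_mod = (1.0 - nu) ** power               # nonlinear contrast modification
    return g_min + (g_max - g_min) * mu_mod    # defuzzification

img = np.array([[10.0, 60.0], [120.0, 200.0]])
enhanced = fuzzy_enhance(img)
```

The mapping fixes the extreme gray levels and monotonically redistributes the intermediate ones, which is the basic contract any such enhancement operator must keep before fusion.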

  9. Adapting smartphones for low-cost optical medical imaging

    NASA Astrophysics Data System (ADS)

    Pratavieira, Sebastião.; Vollet-Filho, José D.; Carbinatto, Fernanda M.; Blanco, Kate; Inada, Natalia M.; Bagnato, Vanderlei S.; Kurachi, Cristina

    2015-06-01

    Optical images have been used in several medical situations to improve diagnosis of lesions or to monitor treatments. However, most systems employ expensive scientific (CCD or CMOS) cameras and need computers to display and save the images, usually resulting in a high final cost for the system. Additionally, operating this sort of apparatus tends to be complex, requiring increasingly specialized technical knowledge from the operator. Currently, the number of people using smartphone-like devices with built-in high-quality cameras is increasing, which might allow using such devices as efficient, lower-cost, portable imaging systems for medical applications. Thus, we aim to develop methods for adapting those devices to optical medical imaging techniques, such as fluorescence imaging. In particular, smartphone covers were adapted to connect a smartphone-like device to widefield fluorescence imaging systems. These systems were used to detect lesions in different tissues, such as cervix and mouth/throat mucosa, and to monitor ALA-induced protoporphyrin-IX formation for photodynamic treatment of cervical intraepithelial neoplasia. This approach may contribute significantly to low-cost, portable and simple clinical optical image collection.

  10. Compressive sensing imaging through a drywall barrier at sub-THz and THz frequencies in transmission and reflection modes

    NASA Astrophysics Data System (ADS)

    Takan, Taylan; Özkan, Vedat A.; Idikut, Fırat; Yildirim, Ihsan Ozan; Şahin, Asaf B.; Altan, Hakan

    2014-10-01

    In this work, sub-terahertz imaging of targets placed behind a visibly opaque barrier using Compressive Sensing (CS) techniques is demonstrated both experimentally and theoretically. Using a multiplied Schottky-diode-based millimeter wave source working at 118 GHz, metal cutout targets were illuminated in both reflection and transmission configurations, with and without barriers made of drywall. In both modes the image is spatially discretized using laser-machined, 10 × 10 pixel metal apertures to demonstrate the technique of compressive sensing. The images were collected by modulating the source and measuring the transmitted flux through the apertures using a Golay cell. Experimental results were compared to simulations of the expected transmission through the metal apertures. Image quality decreases as expected when going from the non-obscured transmission case to the obscured transmission case and finally to the obscured reflection case. However, in all instances the image is recovered from measurements below the Nyquist rate, which demonstrates that this technique is a viable option for Through the Wall Reflection Imaging (TWRI) applications.
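
Single-detector compressive imaging of a sparse scene can be sketched as follows, with Gaussian patterns standing in for the laser-machined binary apertures and orthogonal matching pursuit as one generic sparse-recovery solver (the abstract does not specify the reconstruction algorithm actually used):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 50, 3          # 10 x 10 scene, 50 coded measurements, 3 bright pixels

scene = np.zeros(n)
scene[[12, 45, 77]] = 1.0     # sparse metal-cutout target behind the barrier

# Gaussian patterns stand in for the binary aperture masks of the experiment
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ scene                 # one Golay-cell reading per mask

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy sparse recovery from m < n measurements."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

recon = omp(A, y, k)
```

With 50 measurements for 100 unknowns, exact recovery of the 3-sparse scene is expected with high probability, which is the sub-Nyquist point the abstract makes.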

  11. Laboratory-based x-ray phase-contrast tomography enables 3D virtual histology

    NASA Astrophysics Data System (ADS)

    Töpperwien, Mareike; Krenkel, Martin; Quade, Felix; Salditt, Tim

    2016-09-01

    Due to their large penetration depth and small wavelength, hard x-rays offer a unique potential for 3D biomedical and biological imaging, combining high resolution with large sample volume. However, in classical absorption-based computed tomography, soft tissue shows only weak contrast, limiting the achievable resolution. With the advent of phase-contrast methods, the much stronger phase shift induced by the sample can now be exploited. For high resolution, free-space propagation behind the sample is particularly well suited to making the phase shift visible. Contrast formation is based on self-interference of the transmitted beam, resulting in object-induced intensity modulations in the detector plane. As this method requires a sufficiently high degree of spatial coherence, it was long perceived as a synchrotron-based imaging technique. In this contribution we show that by combining high-brightness liquid-metal jet microfocus sources with suitable sample preparation techniques, as well as optimized geometry, detection and phase retrieval, excellent three-dimensional image quality can be obtained, revealing the anatomy of a cobweb spider in high detail. This opens up new opportunities for 3D virtual histology of small organisms. Importantly, the image quality is finally augmented to a level accessible to automatic 3D segmentation.

  12. Remote Sensing Image Quality Assessment Experiment with Post-Processing

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.

    2018-04-01

    This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are tested, and the digital images serving as image processing input are produced by this imaging system with the same imaging system parameters. The gathered optically sampled images with the tested imaging parameters are processed by three digital image processing steps: calibration pre-processing, lossy compression with different compression ratios, and image post-processing with different kernels. The image quality assessment method used is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and post-processing on image quality can be found. The six JND subjective assessment experimental datasets validate each other. The main conclusions are: image post-processing can improve image quality; it can improve image quality even with lossy compression, although image quality with a higher compression ratio improves less than with a lower ratio; and with our image post-processing method, image quality is better when the camera MTF lies within a small range.

  13. Imaging cells and sub-cellular structures with ultrahigh resolution full-field X-ray microscopy.

    PubMed

    Chien, C C; Tseng, P Y; Chen, H H; Hua, T E; Chen, S T; Chen, Y Y; Leng, W H; Wang, C H; Hwu, Y; Yin, G C; Liang, K S; Chen, F R; Chu, Y S; Yeh, H I; Yang, Y C; Yang, C S; Zhang, G L; Je, J H; Margaritondo, G

    2013-01-01

    Our experimental results demonstrate that full-field hard-X-ray microscopy is finally able to investigate the internal structure of cells in tissues. This result was made possible by three main factors: the use of a coherent (synchrotron) source of X-rays, the exploitation of contrast mechanisms based on the real part of the refractive index and the magnification provided by high-resolution Fresnel zone-plate objectives. We specifically obtained high-quality microradiographs of human and mouse cells with 29 nm Rayleigh spatial resolution and verified that tomographic reconstruction could be implemented with a final resolution level suitable for subcellular features. We also demonstrated that a phase retrieval method based on a wave propagation algorithm could yield good subcellular images starting from a series of defocused microradiographs. The concluding discussion compares cellular and subcellular hard-X-ray microradiology with other techniques and evaluates its potential impact on biomedical research. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. The effect of image sharpness on quantitative eye movement data and on image quality evaluation while viewing natural images

    NASA Astrophysics Data System (ADS)

    Vuori, Tero; Olkkonen, Maria

    2006-01-01

    The aim of the study is to test both customer image quality ratings (subjective image quality) and physical measurement of user behavior (eye movement tracking) to find customer satisfaction differences between imaging technologies. The methodological aim is to find out whether eye movements could be used quantitatively in image quality preference studies. In general, we want to map objective or physically measurable image quality to subjective evaluations and eye movement data. We conducted a series of image quality tests in which the test subjects evaluated image quality while we recorded their eye movements. Results show that eye movement parameters change consistently according to the instructions given to the user and according to physical image quality; e.g., saccade duration increased with increasing blur. The results indicate that eye movement tracking could be used to differentiate the image quality evaluation strategies that users adopt. They also show that eye movements would help map between technological and subjective image quality. Furthermore, these results give some empirical emphasis to top-down perceptual processes in image quality perception and evaluation by showing differences between perceptual processes in situations where the cognitive task varies.

  15. The w-effect in interferometric imaging: from a fast sparse measurement operator to superresolution

    NASA Astrophysics Data System (ADS)

    Dabbech, A.; Wolz, L.; Pratley, L.; McEwen, J. D.; Wiaux, Y.

    2017-11-01

    Modern radio telescopes, such as the Square Kilometre Array, will probe the radio sky over large fields of view, which results in large w-modulations of the sky image. This effect complicates the relationship between the measured visibilities and the image under scrutiny. In algorithmic terms, it gives rise to massive memory and computational time requirements. Yet, it can be a blessing in terms of reconstruction quality of the sky image. In recent years, several works have shown that large w-modulations promote the spread spectrum effect. Within the compressive sensing framework, this effect increases the incoherence between the sensing basis and the sparsity basis of the signal to be recovered, leading to better estimation of the sky image. In this article, we revisit the w-projection approach using convex optimization in realistic settings, where the measurement operator couples the w-terms in Fourier and the de-gridding kernels. We provide sparse, thus fast, models of the Fourier part of the measurement operator through adaptive sparsification procedures. Consequently, memory requirements and computational cost are significantly alleviated at the expense of introducing errors on the radio interferometric data model. We present a first investigation of the impact of the sparse variants of the measurement operator on the image reconstruction quality. We finally analyse the interesting superresolution potential associated with the spread spectrum effect of the w-modulation, and showcase it through simulations. Our C++ code is available on GitHub.

  16. Fast and precise dense grid size measurement method based on coaxial dual optical imaging system

    NASA Astrophysics Data System (ADS)

    Guo, Jiping; Peng, Xiang; Yu, Jiping; Hao, Jian; Diao, Yan; Song, Tao; Li, Ameng; Lu, Xiaowei

    2015-10-01

    Test sieves with a dense grid structure are widely used in many fields; accurate grid size calibration is critical for the success of grading analysis and test sieving. Traditional calibration methods, however, suffer from low measurement efficiency and a shortage of sampled grids, which can lead to quality judgment risk. Here, a fast and precise test sieve inspection method is presented. First, a coaxial imaging system with low- and high-magnification optical probes is designed to capture grid images of the test sieve. Then, a scaling ratio between the low- and high-magnification probes is obtained from the corresponding grids in the captured images. With this, all grid dimensions in the low-magnification image can be obtained by measuring a few corresponding grids in the high-magnification image with high accuracy. Finally, by scanning the stage of the tri-axis platform of the measuring apparatus, the whole surface of the test sieve can be quickly inspected. Experimental results show that the proposed method can measure test sieves with higher efficiency than traditional methods, measuring 0.15 million grids (grid size 0.1 mm) within 60 seconds, and it can measure grid sizes from 20 μm to 5 mm precisely. In short, the presented method can calibrate the grid size of a test sieve automatically with high efficiency and accuracy, so that surface evaluation based on statistical methods can be effectively implemented and quality judgment becomes more reasonable.
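
The scaling-ratio idea reduces to a few lines: calibrate against a handful of grids seen by both probes, then convert every low-magnification measurement with one multiplication (all numbers below are hypothetical):

```python
# grids measured by both probes (hypothetical values): pixel sizes in the
# low-magnification image and calibrated sizes from the high-magnification probe
low_mag_px = [48.0, 50.0, 49.0]       # pixels
high_mag_um = [99.2, 103.5, 101.1]    # micrometres

# average per-grid scale factor -> one scaling ratio for the whole image
ratio = sum(h / l for h, l in zip(high_mag_um, low_mag_px)) / len(low_mag_px)

# every remaining grid in the low-magnification image needs only one multiply
all_grids_px = [47.5, 50.5, 52.0]
all_grids_um = [round(p * ratio, 1) for p in all_grids_px]
```

Only the few calibration grids need the slow high-magnification measurement; the thousands of remaining grids inherit its accuracy through the ratio.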

  17. A 3D image sensor with adaptable charge subtraction scheme for background light suppression

    NASA Astrophysics Data System (ADS)

    Shin, Jungsoon; Kang, Byongmin; Lee, Keechang; Kim, James D. K.

    2013-02-01

    We present a 3D ToF (Time-of-Flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high-resolution color image and a high-quality depth map in each frame. In depth mode, the sensor requires a sufficiently long integration time for accurate depth acquisition, but saturation will occur under strong background illumination. We propose to divide the integration time into N sub-integration times adaptively. In each sub-integration time, the sensor captures an image without saturation and subtracts the background charge to keep the pixel from saturating. The subtraction results are then accumulated over the N sub-integrations, yielding a final image free of background illumination at the full integration time. Experimental results with our own ToF sensor show high background suppression performance. We also propose in-pixel storage and a column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
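
The adaptive charge subtraction can be illustrated with a toy pixel model; the rates, the full-well capacity, and the assumption of a known background estimate are all illustrative, not the sensor's actual parameters:

```python
FULL_WELL = 1000.0        # pixel saturation level (arbitrary charge units)
SIGNAL_RATE = 30.0        # charge per unit time from the modulated light
BACKGROUND_RATE = 400.0   # charge per unit time from ambient light
T = 4.0                   # required depth-mode integration time

def integrate(n_sub):
    """Split T into n_sub sub-integrations; subtract the (assumed known)
    background charge after each one, before it can saturate the pixel."""
    acc = 0.0
    dt = T / n_sub
    for _ in range(n_sub):
        charge = (SIGNAL_RATE + BACKGROUND_RATE) * dt
        if charge > FULL_WELL:
            return None            # saturated: depth information lost
        acc += charge - BACKGROUND_RATE * dt
    return acc

single = integrate(1)   # one long exposure: 1720 units > full well -> saturates
split = integrate(4)    # four sub-integrations of 430 units each survive
```

Splitting the exposure keeps every sub-frame below the full well while the accumulated, background-free signal still corresponds to the full integration time.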

  18. New imaging systems in nuclear medicine. Final report, January 1, 1993--December 31, 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-12-31

    The aim of this program has been to improve the performance of positron emission tomography (PET) to achieve high resolution with high sensitivity. Towards this aim, the authors have carried out the following studies: (1) explored new techniques for detection of annihilation radiation, including new detector materials and system geometries; specific areas studied include factors related to resolution and sensitivity of PET instrumentation (geometry, detection materials and coding) and techniques to improve image quality by use of depth of interaction and increased sampling; (2) completed much of the final testing of PCR-II, an analog-coded cylindrical positron tomograph, developed and constructed during the current funding period; (3) developed the design of a positron microtomograph with mm resolution for quantitative studies in small animals; a single-slice version of this device has been designed and studied by computer simulation; (4) continued and expanded the program of biological studies in animal models. Current studies have included imaging of animal models of Parkinson's and Huntington's disease and cancer. These studies have included new radiopharmaceuticals and techniques involving molecular biology.

  19. TU-F-18A-06: Dual Energy CT Using One Full Scan and a Second Scan with Very Few Projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: Conventional dual energy CT (DECT) requires two full CT scans at different energy levels, resulting in increased dose as well as imaging errors from patient motion between the two scans. To shorten the scan time of DECT and thus overcome these drawbacks, we propose a new DECT algorithm using one full scan and a second scan with very few projections, preserving structural information. Methods: We first reconstruct a CT image from the full scan using a standard filtered-backprojection (FBP) algorithm. We then use a compressed sensing (CS) based iterative algorithm on the second scan for reconstruction from very few projections. The edges extracted from the first scan are used as weights in the objective function of the CS-based reconstruction to substantially improve the image quality of the CT reconstruction. The basis material images are then obtained by an iterative image-domain decomposition method, and an electron density map is finally calculated. The proposed method is evaluated on phantoms. Results: On the Catphan 600 phantom, the mean CT reconstruction errors using the proposed method on 20 and 5 projections are 4.76% and 5.02%, respectively. Compared with conventional iterative reconstruction, the proposed edge weighting preserves object structures and achieves better spatial resolution. With basis materials of iodine and Teflon, our method on 20 projections obtains decomposed material images of similar quality to FBP on a full scan, and the mean error of electron density in the selected regions of interest is 0.29%. Conclusion: We propose an effective method for reducing projections, and therefore scan time, in DECT. We show that a full scan plus a 20-projection scan are sufficient to provide DECT images and electron density maps of similar quality to two full scans. Our future work includes more phantom studies to validate the performance of our method.
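
Once the two energy images are reconstructed, image-domain decomposition amounts to solving a 2×2 linear system per pixel; the attenuation coefficients below are hypothetical placeholders, and the direct solve stands in for the paper's iterative decomposition:

```python
import numpy as np

# hypothetical linear attenuation coefficients (1/cm) of the basis materials
# at the low and high energies: columns are [iodine, Teflon]
M = np.array([[0.40, 0.25],
              [0.30, 0.22]])

def decompose(img_low, img_high):
    """Solve M @ [f_iodine, f_teflon] = [mu_low, mu_high] for every pixel at once."""
    stacked = np.stack([img_low.ravel(), img_high.ravel()])   # shape (2, N)
    fractions = np.linalg.solve(M, stacked)
    return (fractions[0].reshape(img_low.shape),
            fractions[1].reshape(img_low.shape))

# synthetic check: forward-project known fraction maps, then invert
f_iodine = np.array([[1.0, 0.0], [0.5, 0.2]])
f_teflon = np.array([[0.0, 1.0], [0.5, 0.3]])
low = M[0, 0] * f_iodine + M[0, 1] * f_teflon
high = M[1, 0] * f_iodine + M[1, 1] * f_teflon
r_iodine, r_teflon = decompose(low, high)
```

In practice the per-pixel system is ill-conditioned and noisy, which is why the paper uses an iterative, regularized decomposition rather than this direct inversion.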

  20. SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    Purpose: With ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric, and even a standard for comparing candidates remains elusive. This study, for the first time, designs a quantification to assess the quality of relevance metrics, based on a novel perspective of the metric as a surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model that relates surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, via a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate the model parameters to reveal the key factors contributing to a surrogate's ability to prognosticate the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogate quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with the surrogate metric exemplified by several widely used image similarity metrics, i.e., MSD, NCC, and (N)MI. The surrogates' behavior in selecting the most relevant atlases was assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC and (N)MI, with an eCNR of 0.12, yielded statistically better segmentation (mean DSC of about 0.85; first and third quartiles of 0.83 and 0.89) than MSD, with an eCNR of 0.10 (mean DSC of 0.84; first and third quartiles of 0.81 and 0.89). Conclusion: The designed eCNR is capable of characterizing a surrogate metric's quality in prognosticating the oracle relevance value. It has been demonstrated to correlate with the performance of relevant atlas selection and the ultimate label fusion.
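    The surrogate-versus-oracle idea can be illustrated with a toy example (invented data): an image-space similarity (NCC) ranks candidate atlases, and the ranking is checked against the label-space oracle (Dice):

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation between two images."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float(np.mean(a * b))

    def dice(x, y):
        """Oracle geometric agreement between two binary label maps."""
        return 2 * np.logical_and(x, y).sum() / (x.sum() + y.sum())

    rng = np.random.default_rng(1)
    target = np.zeros((32, 32))
    target[8:24, 8:24] = 1.0

    atlases = []
    for shift in (0, 2, 6):                      # increasing misalignment
        img = np.roll(target, shift, axis=0) + 0.05 * rng.normal(size=target.shape)
        lab = np.roll(target, shift, axis=0) > 0.5
        atlases.append((ncc(target, img), dice(target > 0.5, lab)))

    nccs = [m for m, _ in atlases]
    dices = [d for _, d in atlases]
    # a useful surrogate ranks atlases in the same order as the oracle
    print(nccs == sorted(nccs, reverse=True) and dices == sorted(dices, reverse=True))  # → True
    ```

    The eCNR proposed in the abstract quantifies how reliably such a surrogate ordering tracks the oracle ordering under noise.
    
    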

  1. Quality indicators for musculoskeletal injury management in the emergency department: a systematic review.

    PubMed

    Strudwick, Kirsten; Nelson, Mark; Martin-Khan, Melinda; Bourke, Michael; Bell, Anthony; Russell, Trevor

    2015-02-01

    There is increasing importance placed on quality of health care for musculoskeletal injuries in emergency departments (EDs). This systematic review aimed to identify existing musculoskeletal quality indicators (QIs) developed for ED use and to critically evaluate their methodological quality. MEDLINE, EMBASE, CINAHL, and the gray literature, including relevant organizational websites, were searched in 2013. English-language articles were included that described the development of at least one QI related to the ED care of musculoskeletal injuries. Data extraction of each included article was conducted. A quality assessment was then performed by rating each relevant QI against the Appraisal of Indicators through Research and Evaluation (AIRE) Instrument. QIs with similar definitions were grouped together and categorized according to the health care quality frameworks of Donabedian and the Institute of Medicine. The search revealed 1,805 potentially relevant articles, of which 15 were finally included in the review. The number of relevant QIs per article ranged from one to 11, resulting in a total of 71 QIs overall. Pain (n = 17) and fracture management (n = 13) QIs were predominant. Ten QIs scored at least 50% across all AIRE Instrument domains, and these related to pain management and appropriate imaging of the spine. The methodological quality of the development of most QIs is poor. A recommendation of a core set of QIs addressing the complete spectrum of musculoskeletal injury management in emergency medicine is not yet possible, and more work is needed. Currently, the QIs with the highest methodological quality are in the areas of pain management and medical imaging. © 2015 by the Society for Academic Emergency Medicine.

  2. Penalized maximum likelihood reconstruction for x-ray differential phase-contrast tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brendel, Bernhard, E-mail: bernhard.brendel@philips.com; Teuffenbach, Maximilian von; Noël, Peter B.

    2016-01-15

    Purpose: The purpose of this work is to propose a cost function with regularization to iteratively reconstruct attenuation, phase, and scatter images simultaneously from differential phase contrast (DPC) acquisitions, without the need of phase retrieval, and to examine its properties. Furthermore, this reconstruction method is applied to an acquisition pattern that is suitable for a DPC tomographic system with a continuously rotating gantry (sliding window acquisition), overcoming the severe smearing in noniterative reconstruction. Methods: We derive a penalized maximum likelihood reconstruction algorithm to directly reconstruct attenuation, phase, and scatter images from the measured detector values of a DPC acquisition. The proposed penalty comprises, for each of the three images, an independent smoothing prior. Image quality of the proposed reconstruction is compared to images generated with FBP and iterative reconstruction after phase retrieval. Furthermore, the influence between the priors is analyzed. Finally, the proposed reconstruction algorithm is applied to experimental sliding window data acquired at a synchrotron, and the results are compared to reconstructions based on phase retrieval. Results: The results show that the proposed algorithm significantly increases image quality in comparison to reconstructions based on phase retrieval. No significant mutual influence between the proposed independent priors could be observed. Furthermore, it was shown that the iterative reconstruction of a sliding window acquisition results in images with substantially reduced smearing artifacts. Conclusions: Although the proposed cost function is inherently nonconvex, it can be used to reconstruct images with fewer aliasing and streak artifacts than reconstruction methods based on phase retrieval. Furthermore, the proposed method can be used to reconstruct images of sliding window acquisitions with negligible smearing artifacts.
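    In generic notation (mine, not the paper's), a penalized maximum-likelihood cost of the kind described, with one independent smoothing prior per image and a Gaussian likelihood assumed for illustration, can be written as:

    ```latex
    \min_{\mu,\,\phi,\,\epsilon}\;
    \sum_i \frac{\left( y_i - \hat{y}_i(\mu,\phi,\epsilon) \right)^2}{2\sigma_i^2}
    \;+\; \beta_\mu R(\mu) \;+\; \beta_\phi R(\phi) \;+\; \beta_\epsilon R(\epsilon)
    ```

    Here $\mu$, $\phi$, and $\epsilon$ denote the attenuation, phase, and scatter images, $\hat{y}_i$ is the forward model of the $i$-th DPC detector value $y_i$ with noise scale $\sigma_i$, $R(\cdot)$ is a smoothing prior, and the $\beta$'s are the per-image regularization weights whose mutual influence the abstract reports to be negligible.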

  3. Automatic retinal interest evaluation system (ARIES).

    PubMed

    Yin, Fengshou; Wong, Damon Wing Kee; Yow, Ai Ping; Lee, Beng Hai; Quan, Ying; Zhang, Zhuo; Gopalakrishnan, Kavitha; Li, Ruoying; Liu, Jiang

    2014-01-01

    In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases such as glaucoma, age-related macular degeneration and diabetic retinopathy. However, in practice, retinal image quality is a big concern as automatic systems without consideration of degraded image quality will likely generate unreliable results. In this paper, an automatic retinal image quality assessment system (ARIES) is introduced to assess both image quality of the whole image and focal regions of interest. ARIES achieves 99.54% accuracy in distinguishing fundus images from other types of images through a retinal image identification step in a dataset of 35342 images. The system employs high level image quality measures (HIQM) to perform image quality assessment, and achieves areas under curve (AUCs) of 0.958 and 0.987 for whole image and optic disk region respectively in a testing dataset of 370 images. ARIES acts as a form of automatic quality control which ensures good quality images are used for processing, and can also be used to alert operators of poor quality images at the time of acquisition.

  4. Naturalness and interestingness of test images for visual quality evaluation

    NASA Astrophysics Data System (ADS)

    Halonen, Raisa; Westman, Stina; Oittinen, Pirkko

    2011-01-01

    Balanced and representative test images are needed to study perceived visual quality in various application domains. This study investigates naturalness and interestingness as image quality attributes in the context of test images. Taking a top-down approach, we aim to find the dimensions that constitute naturalness and interestingness in test images and the relationship between these high-level quality attributes. We compare existing collections of test images (e.g. Sony sRGB images, ISO 12640 images, Kodak images, Nokia images and test images developed within our group) in an experiment combining quality sorting and structured interviews. Based on the data gathered, we analyze the viewer-supplied criteria for naturalness and interestingness across image types, quality levels and judges. This study advances our understanding of subjective image quality criteria and enables the validation of current test images, furthering their development.

  5. Evaluation of the visual performance of image processing pipes: information value of subjective image attributes

    NASA Astrophysics Data System (ADS)

    Nyman, G.; Häkkinen, J.; Koivisto, E.-M.; Leisti, T.; Lindroos, P.; Orenius, O.; Virtanen, T.; Vuori, T.

    2010-01-01

    Subjective image quality data for 9 image processing pipes and 8 image contents (taken with a mobile phone camera; 72 natural scene test images altogether) were collected from 14 test subjects. A triplet comparison setup and a hybrid qualitative/quantitative methodology were applied. MOS data and spontaneous, subjective image quality attributes were recorded for each test image. The use of positive and negative image quality attributes by the experimental subjects suggested a significant difference between the subjective spaces of low and high image quality. The robustness of the attribute data was shown by correlating the DMOS data of the test images against their corresponding average subjective attribute vector length data. The findings demonstrate the information value of spontaneous, subjective image quality attributes in evaluating image quality at variable quality levels. We discuss the implications of these findings for the development of sensitive performance measures and methods for profiling image processing systems and their components, especially at high image quality levels.

  6. ESO imaging survey: infrared observations of CDF-S and HDF-S

    NASA Astrophysics Data System (ADS)

    Olsen, L. F.; Miralles, J.-M.; da Costa, L.; Benoist, C.; Vandame, B.; Rengelink, R.; Rité, C.; Scodeggio, M.; Slijkhuis, R.; Wicenec, A.; Zaggia, S.

    2006-06-01

    This paper presents infrared data obtained from observations carried out at the ESO 3.5 m New Technology Telescope (NTT) of the Hubble Deep Field South (HDF-S) and the Chandra Deep Field South (CDF-S). These data were taken as part of the ESO Imaging Survey (EIS) program, a public survey conducted by ESO to promote follow-up observations with the VLT. In the HDF-S field the infrared observations cover an area of ~53 square arcmin, encompassing the HST WFPC2 and STIS fields, in the JHKs passbands. The seeing measured in the final stacked images ranges from 0.79 arcsec to 1.22 arcsec and the median limiting magnitudes (AB system, 2'' aperture, 5σ detection limit) are J_AB˜23.0, H_AB˜22.8 and K_AB˜23.0 mag. Less complete data are also available in JKs for the adjacent HST NICMOS field. For CDF-S, the infrared observations cover a total area of ~100 square arcmin, reaching median limiting magnitudes (as defined above) of J_AB˜23.6 and K_AB˜22.7 mag. For one CDF-S field, H band data are also available. This paper describes the observations and presents the results of new reductions carried out entirely through the un-supervised, high-throughput EIS Data Reduction System and its associated EIS/MVM C++-based image processing library developed, over the past 5 years, by the EIS project and now publicly available. The paper also presents source catalogs extracted from the final co-added images, which are used to evaluate the scientific quality of the survey products, and hence the performance of the software. This is done by comparing the results obtained in the present work with those obtained by other authors from independent data and/or reductions carried out with different software packages and techniques. The final science-grade catalogs together with the astrometrically and photometrically calibrated co-added images are available at CDS.

  7. Fluorescence confocal microscopy for pathologists.

    PubMed

    Ragazzi, Moira; Piana, Simonetta; Longo, Caterina; Castagnetti, Fabio; Foroni, Monica; Ferrari, Guglielmo; Gardini, Giorgio; Pellacani, Giovanni

    2014-03-01

    Confocal microscopy is a non-invasive method of optical imaging that can provide microscopic images of untreated tissue corresponding almost perfectly to hematoxylin- and eosin-stained slides. Two confocal imaging systems are currently available: (1) reflectance confocal microscopy, based on the natural differences in the refractive indices of subcellular structures within the tissue; and (2) fluorescence confocal microscopy, based on the use of fluorochromes, such as acridine orange, to increase epithelium-stroma contrast. In clinical practice to date, confocal microscopy has been used with the goal of obviating the need for excision biopsies, thereby reducing the need for pathological examination. The aim of our study was to test fluorescence confocal microscopy on different types of surgical specimens, specifically breast, lymph node, thyroid, and colon. The confocal images were correlated with the corresponding histological sections in order to provide a morphologic parallel and to highlight the current limitations and possible applications of this technology in surgical pathology practice. As a result, neoplastic tissues were easily distinguishable from normal structures and reactive processes such as fibrosis; the use of fluorescence enhanced contrast and image quality in confocal microscopy without compromising the final histologic evaluation. Finally, the fluorescence confocal microscopy images of adipose tissue were as accurate as those of conventional histology and were devoid of the frozen-section artefacts that can compromise intraoperative evaluation. Despite some limitations, mainly related to the black-and-white images, which require training in image interpretation, this study confirms that fluorescence confocal microscopy may represent an alternative to frozen sections in the assessment of margin status in selected settings or when conservation of the specimen is crucial.
    This is the first study to employ fluorescence confocal microscopy on surgical specimens other than skin and to evaluate the diagnostic capability of this technology from the pathologists' viewpoint.

  8. Quality control in the recycling stream of PVC from window frames by hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Luciani, Valentina; Serranti, Silvia; Bonifazi, Giuseppe; Di Maio, Francesco; Rem, Peter

    2013-05-01

    Polyvinyl chloride (PVC) is one of the most widely used thermoplastic materials in terms of worldwide polymer consumption. PVC is mainly used in the building and construction sector; products such as pipes, window frames, cable insulation, floors, coverings, and roofing sheets are made from it. In recent years, the problem of PVC waste disposal has gained increasing importance in public discussion. The quantity of used PVC items entering the waste stream has gradually increased as progressively greater numbers of PVC products approach the end of their useful economic lives. The quality of recycled PVC depends on the characteristics of the recycling process and the quality of the input waste. Not all PVC-containing waste streams have the same economic value, and a transparent relation between value and composition is required to decide whether the recycling process is cost effective for a particular waste stream. An objective and reliable quality control technique is therefore needed in the recycling industry for monitoring both recycled flow streams and final products in the plant. In this work, hyperspectral imaging in the near-infrared (NIR) range (1000-1700 nm) was applied to identify unwanted plastic contaminants and rubber present in PVC from window frame waste, in order to establish a quality control procedure for the recycling process. Results showed that PVC, PE and rubber can be identified using the NIR-HSI approach.
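    The identification step can be illustrated with a toy spectral classifier. The reference spectra below are invented placeholders, and the paper does not specify its classification rule, so the common spectral angle mapper (SAM) is used here as a stand-in:

    ```python
    import numpy as np

    def spectral_angle(s, r):
        """Angle between a pixel spectrum s and a reference spectrum r."""
        cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    wavelengths = np.linspace(1000, 1700, 50)    # nm, NIR range as in the paper
    refs = {                                      # made-up smooth spectra, NOT real material data
        "PVC":    np.exp(-((wavelengths - 1400) / 120) ** 2),
        "PE":     np.exp(-((wavelengths - 1200) / 120) ** 2),
        "rubber": 0.3 + 0.0002 * (wavelengths - 1000),
    }

    # a noisy "pixel" drawn from the PVC reference
    pixel = refs["PVC"] + 0.02 * np.random.default_rng(2).normal(size=50)
    label = min(refs, key=lambda k: spectral_angle(pixel, refs[k]))
    print(label)  # → PVC
    ```

    In a real NIR-HSI pipeline this decision runs per pixel of the hyperspectral cube, producing the contaminant map used for quality control.
    
    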

  9. Implementing and validating of pan-sharpening algorithms in open-source software

    NASA Astrophysics Data System (ADS)

    Pesántez-Cobos, Paúl; Cánovas-García, Fulgencio; Alonso-Sarría, Francisco

    2017-10-01

    Several approaches have been used in remote sensing to integrate images with different spectral and spatial resolutions in order to obtain fused, enhanced images. The objective of this research is three-fold: to implement three image fusion techniques (High Pass Filter, Principal Component Analysis and Gram-Schmidt) in R; to apply these techniques to merge multispectral and panchromatic bands from five images with different spatial resolutions; and, finally, to evaluate the results using the universal image quality index (Q index) and the ERGAS index. As regards the qualitative analysis, Landsat-7 and Landsat-8 show greater colour distortion with the three pan-sharpening methods, although the results for the other images were better. The Q index revealed that HPF fusion performs best for the QuickBird, IKONOS and Landsat-7 images, followed by GS fusion, whereas for the Landsat-8 and Natmur-08 images the results were more even. Regarding the ERGAS spatial index, the PCA algorithm performed best for the QuickBird, IKONOS, Landsat-7 and Natmur-08 images, followed closely by the GS algorithm; only for the Landsat-8 image did GS fusion give the best result. In the evaluation of the spectral components, HPF results tended to be better and PCA results worse; the opposite was the case for the spatial components. Better quantitative results were obtained for the Landsat-7 and Landsat-8 images with the three fusion methods than for the QuickBird, IKONOS and Natmur-08 images. This contrasts with the qualitative evaluation, reflecting the importance of separating the two evaluation approaches (qualitative and quantitative): significant disagreement may arise when different methodologies are used to assess the quality of an image fusion. Moreover, it is not possible to designate a given algorithm as the best a priori, not only because of the different characteristics of the sensors, but also because of the different atmospheric conditions and peculiarities of the different study areas, among other reasons.
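    The Q index named in the abstract (the universal image quality index) has a simple closed form; a minimal whole-image sketch follows (in practice it is computed per band, typically over sliding windows and then averaged):

    ```python
    import numpy as np

    def q_index(x, y):
        """Universal image quality index: 1.0 iff the images are identical."""
        x = x.astype(float).ravel()
        y = y.astype(float).ravel()
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = np.mean((x - mx) * (y - my))
        # combines correlation loss, luminance distortion, and contrast distortion
        return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

    a = np.arange(16.0).reshape(4, 4) + 1
    print(q_index(a, a))       # identical images give Q ≈ 1.0
    print(q_index(a, 2 * a))   # any distortion pushes Q below 1
    ```

    ERGAS follows a similar per-band pattern, aggregating band-wise RMSE relative to band means and the resolution ratio.
    
    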

  10. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    NASA Astrophysics Data System (ADS)

    Jiang, Lu; Piao, Yan

    2018-04-01

    The use of multi-view image arrays combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integrated imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. First, the depth information of the reference viewpoint image is quickly obtained, with SAD chosen as the similarity measure. The reference image is then layered and the parallax calculated from the depth information. Based on the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and shifted. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as its high-precision requirements on the depth map and its complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range, and the rendering speed is also very impressive. On average the method achieves satisfactory image quality: the mean SSIM of the results relative to real viewpoint images reaches 0.9525, the PSNR reaches 38.353, and the image histogram similarity reaches 93.77%.
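    The SAD matching step can be sketched as a one-dimensional disparity search (block size and search range below are hypothetical):

    ```python
    import numpy as np

    def sad_disparity(left, right, row, col, block=5, max_d=8):
        """Find the horizontal disparity of a block by minimizing the
        sum of absolute differences (SAD) between the two views."""
        h = block // 2
        ref = left[row - h:row + h + 1, col - h:col + h + 1]
        costs = []
        for d in range(max_d + 1):
            cand = right[row - h:row + h + 1, col - d - h:col - d + h + 1]
            costs.append(np.abs(ref - cand).sum())
        return int(np.argmin(costs))

    rng = np.random.default_rng(3)
    right = rng.random((32, 32))
    left = np.roll(right, 4, axis=1)              # left view shifted by 4 pixels
    print(sad_disparity(left, right, 16, 20))     # → 4
    ```

    Repeating this per pixel yields the depth map from which the image layers and their parallax shifts are derived.
    
    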

  11. Analytical three-point Dixon method: With applications for spiral water-fat imaging.

    PubMed

    Wang, Dinghui; Zwart, Nicholas R; Li, Zhiqiang; Schär, Michael; Pipe, James G

    2016-02-01

    The goal of this work is to present a new three-point analytical approach with flexible even or uneven echo increments for water-fat separation and to evaluate its feasibility with spiral imaging. Two sets of possible solutions for water and fat are first found analytically. Then, two field maps of the B0 inhomogeneity are obtained by linear regression. The initial identification of the true solution is facilitated by the root-mean-square error of the linear regression and the incorporation of a fat spectrum model. The resolved field map, after a region-growing algorithm, is refined iteratively for spiral imaging. The final water and fat images are recalculated using a joint water-fat separation and deblurring algorithm. Successful implementations were demonstrated with three-dimensional gradient-echo head imaging and single-breathhold abdominal imaging. Spiral, high-resolution T1-weighted brain images were shown with sharpness comparable to the reference Cartesian images. With appropriate choices of uneven echo increments, it is feasible to resolve the aliasing of the field map voxel-wise. High-quality water-fat spiral imaging can be achieved with the proposed approach. © 2015 Wiley Periodicals, Inc.
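    One ingredient of the method, fitting the field map by linear regression of phase against echo time, can be sketched for a single water-only voxel (toy values; the actual approach must additionally handle fat signal, the ambiguity between candidate solutions, and region growing):

    ```python
    import numpy as np

    # hypothetical uneven echo increments (seconds) and true off-resonance (Hz)
    t = np.array([0.0, 1.2e-3, 2.6e-3])
    psi_true = 40.0

    # phase accrued by B0 inhomogeneity at each echo (no wrapping at these values)
    phase = 2 * np.pi * psi_true * t

    # linear regression of phase vs. echo time recovers the field map value
    slope, _ = np.polyfit(t, phase, 1)
    print(round(slope / (2 * np.pi), 1))  # → 40.0
    ```

    With fat present, the regression residual (root-mean-square error) helps pick the true of the two analytical candidate solutions, as the abstract describes.
    
    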

  12. Multi-focus image fusion using a guided-filter-based difference image.

    PubMed

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu

    2016-03-20

    The aim of multi-focus image fusion is to integrate several partially focused images into one all-in-focus image. To this end, a new multi-focus image fusion method based on a guided filter is proposed, with an efficient salient feature extraction method as the main contribution of the present work. For salient feature extraction, the guided filter is first used to acquire a smoothed image containing the sharpest regions. To obtain the initial fusion map, we compose a mixed focus measure combining the variance of the image intensities with the energy of the image gradient. The initial fusion map is then processed by a morphological filter to obtain a refined fusion map. Lastly, the final fusion map is determined from the refined fusion map and optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves fusion performance compared to previous fusion methods and is competitive with, or even outperforms, state-of-the-art fusion methods in terms of both subjective visual effects and objective quality metrics.
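    The mixed focus measure can be sketched as follows (a global toy version; the actual method evaluates it over local regions of guided-filter outputs to build the fusion map):

    ```python
    import numpy as np

    def focus_measure(img):
        """Mixed focus measure: intensity variance plus gradient energy.
        Both terms are larger where the image is in focus."""
        gy, gx = np.gradient(img.astype(float))
        return img.var() + np.mean(gx**2 + gy**2)

    rng = np.random.default_rng(4)
    sharp = rng.random((16, 16))
    # crude stand-in for defocus: average each pixel with its neighbor
    blurred = 0.5 * (sharp + np.roll(sharp, 1, axis=1))

    print(focus_measure(sharp) > focus_measure(blurred))  # → True
    ```

    Comparing this measure between the partially focused inputs, pixel region by pixel region, decides which source image contributes to each part of the fused result.
    
    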

  13. Lower-upper-threshold correlation for underwater range-gated imaging self-adaptive enhancement.

    PubMed

    Sun, Liang; Wang, Xinwei; Liu, Xiaoquan; Ren, Pengdao; Lei, Pingshun; He, Jun; Fan, Songtao; Zhou, Yan; Liu, Yuliang

    2016-10-10

    In underwater range-gated imaging (URGI), enhancement of low-brightness, low-contrast images is critical for human observation. Traditional histogram equalization over-enhances images, with the result that details are lost. To suppress over-enhancement, a lower-upper-threshold correlation method is proposed for self-adaptive enhancement in URGI, based on double-plateau histogram equalization. The lower threshold determines image details and suppresses over-enhancement, and it is correlated with the upper threshold: first, the upper threshold is updated in real time by searching for the local maximum, and then the lower threshold is calculated from the upper threshold and the number of nonzero units selected from a filtered histogram. With this method, the backgrounds of underwater images are constrained while details are enhanced. Finally, proof-of-concept experiments were performed. Peak signal-to-noise ratio, variance, contrast, and human visual properties are used to evaluate the objective quality of the global images and regions of interest. The evaluation results demonstrate that the proposed method adaptively selects proper upper and lower thresholds under different conditions and contributes effective image enhancement for human observation in URGI.
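    A minimal sketch of the underlying double-plateau histogram equalization, with the two thresholds fixed by hand rather than derived adaptively as the paper proposes:

    ```python
    import numpy as np

    def double_plateau_he(img, t_low, t_up, bins=256):
        """Histogram equalization with an upper plateau (clips dominant
        background bins) and a lower plateau (boosts weak detail bins)."""
        hist, _ = np.histogram(img, bins=bins, range=(0, bins))
        clipped = np.clip(hist, 0, t_up)                   # compress over-enhancement
        clipped[(hist > 0) & (clipped < t_low)] = t_low    # preserve weak details
        cdf = np.cumsum(clipped).astype(float)
        lut = np.round((bins - 1) * cdf / cdf[-1]).astype(np.uint8)
        return lut[img]

    rng = np.random.default_rng(5)
    img = rng.integers(60, 100, size=(64, 64), dtype=np.uint8)  # low-contrast frame
    out = double_plateau_he(img, t_low=5, t_up=200)
    print(int(img.min()), int(img.max()), "->", int(out.min()), int(out.max()))
    ```

    The proposed method's novelty is in choosing `t_up` (local histogram maximum, tracked in real time) and deriving `t_low` from it; the equalization mechanics are as above.
    
    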

  14. WE-B-BRC-01: Current Methodologies in Risk Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rath, F.

    Prospective quality management techniques, long used by engineering and industry, have become a growing aspect of efforts to improve quality management and safety in healthcare. These techniques are of particular interest to medical physics as the scope and complexity of clinical practice continue to grow, thus making the prescriptive methods we have used harder to apply and potentially less effective for our interconnected and highly complex healthcare enterprise, especially in imaging and radiation oncology. An essential part of most prospective methods is the need to assess the various risks associated with problems, failures, errors, and design flaws in our systems. We therefore begin with an overview of risk assessment methodologies used in healthcare and industry and discuss their strengths and weaknesses. The rationale for the use of process mapping, failure modes and effects analysis (FMEA) and fault tree analysis (FTA) by TG-100 will be described, as well as suggestions for the way forward. This is followed by a discussion of radiation oncology-specific risk assessment strategies and issues, including the TG-100 effort to evaluate IMRT and other ways to think about risk in the context of radiotherapy. Incident learning systems, local as well as the ASTRO/AAPM ROILS system, can also be useful in the risk assessment process. Finally, risk in the context of medical imaging will be discussed. Radiation (and other) safety considerations, as well as lack of quality and certainty, all contribute to the potential risks associated with suboptimal imaging. The goal of this session is to summarize a wide variety of risk analysis methods and issues to give the medical physicist access to tools which can better define risks (and their importance) which we work to mitigate with both prescriptive and prospective risk-based quality management methods.
    Learning Objectives: (1) Description of risk assessment methodologies used in healthcare and industry; (2) Discussion of radiation oncology-specific risk assessment strategies and issues; (3) Evaluation of risk in the context of medical imaging and image quality. E. Samei: Research grants from Siemens and GE.
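    The FMEA scoring that TG-100 adapts can be illustrated with a toy risk table. The failure modes and scores below are invented for illustration only:

    ```python
    # FMEA sketch: each failure mode is scored for occurrence (O),
    # severity (S), and lack of detectability (D), typically on 1-10
    # scales, and ranked by risk priority number RPN = O * S * D.
    failure_modes = [
        ("wrong patient selected",       2, 9, 4),
        ("MLC calibration drift",        4, 6, 6),
        ("plan transferred incorrectly", 3, 8, 3),
    ]

    ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
    for name, o, s, d in ranked:
        print(f"RPN={o * s * d:3d}  {name}")
    ```

    The ranking directs quality management effort toward the highest-RPN modes first, which is the prospective (rather than prescriptive) logic described in the session abstract.
    
    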

  15. WE-B-BRC-00: Concepts in Risk-Based Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Prospective quality management techniques, long used by engineering and industry, have become a growing aspect of efforts to improve quality management and safety in healthcare. These techniques are of particular interest to medical physics as the scope and complexity of clinical practice continue to grow, thus making the prescriptive methods we have used harder to apply and potentially less effective for our interconnected and highly complex healthcare enterprise, especially in imaging and radiation oncology. An essential part of most prospective methods is the need to assess the various risks associated with problems, failures, errors, and design flaws in our systems. We therefore begin with an overview of risk assessment methodologies used in healthcare and industry and discuss their strengths and weaknesses. The rationale for the use of process mapping, failure modes and effects analysis (FMEA) and fault tree analysis (FTA) by TG-100 will be described, as well as suggestions for the way forward. This is followed by a discussion of radiation oncology-specific risk assessment strategies and issues, including the TG-100 effort to evaluate IMRT and other ways to think about risk in the context of radiotherapy. Incident learning systems, local as well as the ASTRO/AAPM ROILS system, can also be useful in the risk assessment process. Finally, risk in the context of medical imaging will be discussed. Radiation (and other) safety considerations, as well as lack of quality and certainty, all contribute to the potential risks associated with suboptimal imaging. The goal of this session is to summarize a wide variety of risk analysis methods and issues to give the medical physicist access to tools which can better define risks (and their importance) which we work to mitigate with both prescriptive and prospective risk-based quality management methods.
    Learning Objectives: (1) Description of risk assessment methodologies used in healthcare and industry; (2) Discussion of radiation oncology-specific risk assessment strategies and issues; (3) Evaluation of risk in the context of medical imaging and image quality. E. Samei: Research grants from Siemens and GE.

  16. WE-B-BRC-02: Risk Analysis and Incident Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fraass, B.

    Prospective quality management techniques, long used by engineering and industry, have become a growing aspect of efforts to improve quality management and safety in healthcare. These techniques are of particular interest to medical physics as the scope and complexity of clinical practice continue to grow, thus making the prescriptive methods we have used harder to apply and potentially less effective for our interconnected and highly complex healthcare enterprise, especially in imaging and radiation oncology. An essential part of most prospective methods is the need to assess the various risks associated with problems, failures, errors, and design flaws in our systems. We therefore begin with an overview of risk assessment methodologies used in healthcare and industry and discuss their strengths and weaknesses. The rationale for the use of process mapping, failure modes and effects analysis (FMEA) and fault tree analysis (FTA) by TG-100 will be described, as well as suggestions for the way forward. This is followed by a discussion of radiation oncology-specific risk assessment strategies and issues, including the TG-100 effort to evaluate IMRT and other ways to think about risk in the context of radiotherapy. Incident learning systems, local as well as the ASTRO/AAPM ROILS system, can also be useful in the risk assessment process. Finally, risk in the context of medical imaging will be discussed. Radiation (and other) safety considerations, as well as lack of quality and certainty, all contribute to the potential risks associated with suboptimal imaging. The goal of this session is to summarize a wide variety of risk analysis methods and issues to give the medical physicist access to tools which can better define risks (and their importance) which we work to mitigate with both prescriptive and prospective risk-based quality management methods.
    Learning Objectives: (1) Description of risk assessment methodologies used in healthcare and industry; (2) Discussion of radiation oncology-specific risk assessment strategies and issues; (3) Evaluation of risk in the context of medical imaging and image quality. E. Samei: Research grants from Siemens and GE.

  17. An improved segmentation method for defects inspection on steel roller surface

    NASA Astrophysics Data System (ADS)

    Xu, Jirui; Li, Xuekun; Cao, Yuzhong; Shi, Depeng; Yang, Jun; Jiang, Sheng; Rong, Yiming

    2018-05-01

    In the field of metal rolling, the quality of the steel roller's surface is significant for the final rolled products, e.g. metal sheets or foils. Besides dimensional accuracy and surface roughness, optical uniformity of the roller surface is also required for high-quality rolling applications. Typical optical defects of rollers after finish grinding include speckles, chatter marks, feed traces, and combinations of the above. Unlike surface roughness, these optical defects can hardly be characterized by topography or scanning electron microscope measurements; inspection by the naked eyes of experienced engineers remains the only effective way to examine optical surface defects on large steel rollers. In this paper, an on-site machine vision system is designed as an add-on to the roller grinding machine to capture the surface image, and an improved optical defect segmentation algorithm is then developed based on the active contour model. Finally, experiments are carried out to verify the efficacy of the improved model.

  18. The multiple sclerosis visual pathway cohort: understanding neurodegeneration in MS.

    PubMed

    Martínez-Lapiscina, Elena H; Fraga-Pumar, Elena; Gabilondo, Iñigo; Martínez-Heras, Eloy; Torres-Torres, Ruben; Ortiz-Pérez, Santiago; Llufriu, Sara; Tercero, Ana; Andorra, Magi; Roca, Marc Figueras; Lampert, Erika; Zubizarreta, Irati; Saiz, Albert; Sanchez-Dalmau, Bernardo; Villoslada, Pablo

    2014-12-15

    Multiple sclerosis (MS) is an immune-mediated disease of the central nervous system with two major underlying etiopathogenic processes: inflammation and neurodegeneration. The latter determines the prognosis of the disease. MS is the main cause of non-traumatic disability in middle-aged populations. The MS-VisualPath Cohort was set up to study the neurodegenerative component of MS using advanced imaging techniques, focusing on analysis of the visual pathway in a middle-aged MS population in Barcelona, Spain. We started recruiting patients in the early phase of MS in 2010 and recruitment remains permanently open. All patients undergo a complete neurological and ophthalmological examination including measurements of physical and cognitive disability (Expanded Disability Status Scale, Multiple Sclerosis Functional Composite and neuropsychological tests), disease activity (relapses) and visual function testing (visual acuity, color vision and visual field). The MS-VisualPath protocol also assesses the presence of anxiety and depressive symptoms (Hospital Anxiety and Depression Scale), general quality of life (SF-36) and visual quality of life (25-Item National Eye Institute Visual Function Questionnaire with the 10-Item Neuro-Ophthalmic Supplement). In addition, the imaging protocol includes both retinal (Optical Coherence Tomography and Wide-Field Fundus Imaging) and brain imaging (Magnetic Resonance Imaging). Finally, multifocal Visual Evoked Potentials are used to perform neurophysiological assessment of the visual pathway. The analysis of the visual pathway with advanced imaging and electrophysiological tools, in parallel with clinical information, will provide significant new knowledge about neurodegeneration in MS and yield new clinical and imaging biomarkers to help monitor disease progression in these patients.

  19. Methodological challenges and solutions in auditory functional magnetic resonance imaging

    PubMed Central

    Peelle, Jonathan E.

    2014-01-01

    Functional magnetic resonance imaging (fMRI) studies involve substantial acoustic noise. This review covers the difficulties posed by such noise for auditory neuroscience, as well as a number of possible solutions that have emerged. Acoustic noise can affect the processing of auditory stimuli by making them inaudible or unintelligible, and can result in reduced sensitivity to auditory activation in auditory cortex. Equally importantly, acoustic noise may also lead to increased listening effort, meaning that even when auditory stimuli are perceived, neural processing may differ from when the same stimuli are presented in quiet. These and other challenges have motivated a number of approaches for collecting auditory fMRI data. Although using a continuous echoplanar imaging (EPI) sequence provides high quality imaging data, these data may also be contaminated by background acoustic noise. Traditional sparse imaging has the advantage of avoiding acoustic noise during stimulus presentation, but at a cost of reduced temporal resolution. Recently, three classes of techniques have been developed to circumvent these limitations. The first is Interleaved Silent Steady State (ISSS) imaging, a variation of sparse imaging that involves collecting multiple volumes following a silent period while maintaining steady-state longitudinal magnetization. The second involves active noise control to limit the impact of acoustic scanner noise. Finally, novel MRI sequences that reduce the amount of acoustic noise produced during fMRI make the use of continuous scanning a more practical option. Together these advances provide unprecedented opportunities for researchers to collect high-quality data of hemodynamic responses to auditory stimuli using fMRI. PMID:25191218

  20. Windows on the human body--in vivo high-field magnetic resonance research and applications in medicine and psychology.

    PubMed

    Moser, Ewald; Meyerspeer, Martin; Fischmeister, Florian Ph S; Grabner, Günther; Bauer, Herbert; Trattnig, Siegfried

    2010-01-01

    Analogous to the evolution of biological sensor-systems, the progress in "medical sensor-systems", i.e., diagnostic procedures, is paradigmatically described. Outstanding highlights of this progress are magnetic resonance imaging (MRI) and spectroscopy (MRS), which enable non-invasive, in vivo acquisition of morphological, functional, and metabolic information from the human body with unsurpassed quality. Recent achievements in high and ultra-high field MR (at 3 and 7 Tesla) are described, and representative research applications in Medicine and Psychology in Austria are discussed. Finally, an overview of current and prospective research in multi-modal imaging, potential clinical applications, as well as current limitations and challenges is given.

  1. Enhancing and Customizing Laboratory Information Systems to Improve/Enhance Pathologist Workflow.

    PubMed

    Hartman, Douglas J

    2015-06-01

    Optimizing pathologist workflow can be difficult because it is affected by many variables. Surgical pathologists must complete many tasks that culminate in a final pathology report. Several software systems can be used to enhance/improve pathologist workflow. These include voice recognition software, pre-sign-out quality assurance, image utilization, and computerized provider order entry. Recent changes in the diagnostic coding and the more prominent role of centralized electronic health records represent potential areas for increased ways to enhance/improve the workflow for surgical pathologists. Additional unforeseen changes to the pathologist workflow may accompany the introduction of whole-slide imaging technology to the routine diagnostic work. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Enhancing and Customizing Laboratory Information Systems to Improve/Enhance Pathologist Workflow.

    PubMed

    Hartman, Douglas J

    2016-03-01

    Optimizing pathologist workflow can be difficult because it is affected by many variables. Surgical pathologists must complete many tasks that culminate in a final pathology report. Several software systems can be used to enhance/improve pathologist workflow. These include voice recognition software, pre-sign-out quality assurance, image utilization, and computerized provider order entry. Recent changes in the diagnostic coding and the more prominent role of centralized electronic health records represent potential areas for increased ways to enhance/improve the workflow for surgical pathologists. Additional unforeseen changes to the pathologist workflow may accompany the introduction of whole-slide imaging technology to the routine diagnostic work. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Appearance and characterization of fruit image textures for quality sorting using wavelet transform and genetic algorithms.

    PubMed

    Khoje, Suchitra

    2018-02-01

    Images of four quality grades of mangoes and guavas are evaluated for color and textural features to characterize and classify them, and to model fruit appearance grading. The paper discusses three approaches to identifying the most discriminating texture features of both fruits. In the first approach, the fruit's color and texture features are selected using the Mahalanobis distance. A total of 20 color features and 40 textural features are extracted for analysis. Using Mahalanobis distance and feature intercorrelation analyses, one best color feature (mean of a* [L*a*b* color space]) and two textural features (energy of a*, contrast of H*) are selected for guava, while two best color features (R std, H std) and one textural feature (energy of b*) are selected for mango as those with the highest discriminative power. The second approach studies some common wavelet families in search of the best classification model for fruit quality grading. Wavelet features extracted from five basic mother wavelets (db, bior, rbior, coif, sym) are explored to characterize fruit texture appearance. In the third approach, a genetic algorithm is used to select, from a large universe of features, only those color and wavelet texture features that are relevant to class separation. The study shows that image color and texture features identified using a genetic algorithm can distinguish between the various quality classes of fruits. The experimental results showed that a support vector machine classifier is selected for guava grading with an accuracy of 97.61% and an artificial neural network for mango grading with an accuracy of 95.65%. The proposed method is a nondestructive fruit quality assessment method, and the experimental results show that a genetic algorithm combined with wavelet texture features has the potential to discriminate fruit quality. Finally, it can be concluded that the discussed method is an accurate, reliable, and objective tool to determine the quality of mango and guava, and might be applicable to in-line sorting systems. © 2017 Wiley Periodicals, Inc.
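A minimal sketch of the wavelet-energy texture features described above, using a single-level Haar transform as a stand-in for the db/bior/rbior/coif/sym families studied in the paper; the function name and the choice of subband energy as the feature are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def haar_texture_features(channel):
    """Energy of the one-level Haar detail subbands of an image channel.

    A simplified stand-in for multi-wavelet texture features: each detail
    subband's mean squared coefficient serves as one texture descriptor.
    """
    # Crop to even dimensions so 2x2 blocks tile the image exactly.
    ch = channel[: channel.shape[0] // 2 * 2, : channel.shape[1] // 2 * 2].astype(float)
    a = (ch[0::2] + ch[1::2]) / 2          # vertical low-pass
    d = (ch[0::2] - ch[1::2]) / 2          # vertical high-pass
    subbands = {
        "LH": (a[:, 0::2] - a[:, 1::2]) / 2,   # horizontal detail
        "HL": (d[:, 0::2] + d[:, 1::2]) / 2,   # vertical detail
        "HH": (d[:, 0::2] - d[:, 1::2]) / 2,   # diagonal detail
    }
    return {name: float(np.mean(b ** 2)) for name, b in subbands.items()}

# Vertical stripes of period 2 concentrate all energy in the LH subband.
stripes = np.tile([0.0, 1.0], (8, 4))
feats = haar_texture_features(stripes)
```

Feature vectors like these (one per color channel) would then feed the genetic-algorithm selection step.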

  4. Rocky Mountain Arsenal, Sections 26 and 25 Contamination Survey. Phase 1

    DTIC Science & Technology

    1987-12-01

    mapping specifications for scale, overlap, density, and image quality. Utilizing the aerial photography and ground control described above, orthophoto ...base maps with superimposed contours will be prepared. ... Orthophoto negatives will be prepared directly at the final... Remedial Investigation/Feasibility Study (RI/FS) at the Rocky Mountain Arsenal. Tasks 4 and 6 were prepared by Environmental Science and Engineering (ESE

  5. Building a 2.5D Digital Elevation Model from 2D Imagery

    NASA Technical Reports Server (NTRS)

    Padgett, Curtis W.; Ansar, Adnan I.; Brennan, Shane; Cheng, Yang; Clouse, Daniel S.; Almeida, Eduardo

    2013-01-01

    When projecting imagery into a georeferenced coordinate frame, one needs to have some model of the geographical region that is being projected to. This model can sometimes be a simple geometrical curve, such as an ellipse or even a plane. However, to obtain accurate projections, one needs to have a more sophisticated model that encodes the undulations in the terrain including things like mountains, valleys, and even manmade structures. The product that is often used for this purpose is a Digital Elevation Model (DEM). The technology presented here generates a high-quality DEM from a collection of 2D images taken from multiple viewpoints, plus pose data for each of the images and a camera model for the sensor. The technology assumes that the images are all of the same region of the environment. The pose data for each image is used as an initial estimate of the geometric relationship between the images, but the pose data is often noisy and not of sufficient quality to build a high-quality DEM. Therefore, the source imagery is passed through a feature-tracking algorithm and multi-plane-homography algorithm, which refine the geometric transforms between images. The images and their refined poses are then passed to a stereo algorithm, which generates dense 3D data for each image in the sequence. The 3D data from each image is then placed into a consistent coordinate frame and passed to a routine that divides the coordinate frame into a number of cells. The 3D points that fall into each cell are collected, and basic statistics are applied to determine the elevation of that cell. The result of this step is a DEM that is in an arbitrary coordinate frame. This DEM is then filtered and smoothed in order to remove small artifacts. 
The final step in the algorithm is to take the initial DEM and rotate and translate it to be in the world coordinate frame [such as UTM (Universal Transverse Mercator), MGRS (Military Grid Reference System), or geodetic] such that it can be saved in a standard DEM format and used for projection.
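The cell-binning step described above (divide the coordinate frame into cells, collect the 3D points falling in each, and apply a basic statistic to get the cell elevation) can be sketched as follows; `points_to_dem` and its arguments are illustrative names, not from the original software, and the median is one reasonable choice of per-cell statistic:

```python
import numpy as np

def points_to_dem(points, cell_size, agg=np.median):
    """Grid scattered 3D points (N x 3 array, columns x, y, z) into a 2.5D DEM.

    Each cell's elevation is a robust statistic (median by default) of the
    z-values of the points that fall inside it; empty cells become NaN.
    """
    x0, y0 = points[:, 0].min(), points[:, 1].min()
    ix = ((points[:, 0] - x0) // cell_size).astype(int)   # column index per point
    iy = ((points[:, 1] - y0) // cell_size).astype(int)   # row index per point
    dem = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    for r, c in set(zip(iy, ix)):                         # each occupied cell
        mask = (iy == r) & (ix == c)
        dem[r, c] = agg(points[mask, 2])
    return dem

# Two points share the first cell (median elevation 2.0); one point sits alone.
pts = np.array([[0.1, 0.1, 1.0], [0.2, 0.2, 3.0], [1.5, 0.1, 5.0]])
dem = points_to_dem(pts, cell_size=1.0)   # → [[2.0, 5.0]]
```

The resulting grid is still in an arbitrary frame; the rotation/translation into UTM, MGRS, or geodetic coordinates would follow as a separate step.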

  6. Investigations of image fusion

    NASA Astrophysics Data System (ADS)

    Zhang, Zhong

    1999-12-01

    The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for the purpose of human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, are used to guide the fusion process. The idea of multiscale grouping is also introduced and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms. So we proposed a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance.
Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information of objects in the space monitored by the system.

  7. Sentinel-2: State of the Image Quality Calibration at the End of the Commissioning

    NASA Astrophysics Data System (ADS)

    Tremas, Thierry; Lonjou, Vincent; Lacherade, Sophie; Gaudel-Vacaresse, Angelique; Languille, Florie

    2016-08-01

    This article summarizes the activity of CNES during the in-orbit calibration phase of Sentinel-2A, as well as the transfer of production of GIPPs (Ground Image Processing Parameters) from CNES to ESRIN. The state of the main calibration parameters and performances, a few months before the PDGS is declared fully operational, is listed and explained. In radiometry, special attention is paid to absolute calibration using the on-board diffuser, and to vicarious calibration methods using instrumented or statistically well-characterized sites and inter-comparisons with other sensors. Regarding geometry, the presentation focuses on the performance of absolute location with and without reference points. The requirements for multi-band and multi-temporal registration are presented. Finally, the construction and the role of the GRI (Ground Reference Images) in the future are explained.

  8. Single image super-resolution using self-optimizing mask via fractional-order gradient interpolation and reconstruction.

    PubMed

    Yang, Qi; Zhang, Yanzhu; Zhao, Tiebiao; Chen, YangQuan

    2017-04-04

    Image super-resolution using self-optimizing mask via fractional-order gradient interpolation and reconstruction aims to recover detailed information from low-resolution images and reconstruct them into high-resolution images. Due to the limited amount of data and information retrieved from low-resolution images, it is difficult to restore clear, artifact-free images, while still preserving enough structure of the image such as the texture. This paper presents a new single image super-resolution method which is based on adaptive fractional-order gradient interpolation and reconstruction. The interpolated image gradient via optimal fractional-order gradient is first constructed according to the image similarity and afterwards the minimum energy function is employed to reconstruct the final high-resolution image. Fractional-order gradient based interpolation methods provide an additional degree of freedom which helps optimize the implementation quality due to the fact that an extra free parameter α-order is being used. The proposed method is able to produce a rich texture detail while still being able to maintain structural similarity even under large zoom conditions. Experimental results show that the proposed method performs better than current single image super-resolution techniques. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Single-pixel imaging by Hadamard transform and its application for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Mizutani, Yasuhiro; Shibuya, Kyuki; Taguchi, Hiroki; Iwata, Tetsuo; Takaya, Yasuhiro; Yasui, Takeshi

    2016-10-01

    In this paper, we report on a comparison of single-pixel imaging using the Hadamard transform (HT) and ghost imaging (GI) from the viewpoint of visibility under weak-light conditions. To compare the two methods, we discuss image quality on the basis of experimental results and numerical analysis. In the HT method, images are detected by illuminating Hadamard-pattern masks and applying the orthogonal transform; the GI method instead detects images by illuminating random patterns and performing a correlation measurement. To compare the two methods at weak light intensity, we controlled the illumination intensity of a DMD projector to a signal-to-noise ratio of about 0.1. Although the HT method processed images faster than the GI method, the GI method has an advantage for detection under weak-light conditions. The essential difference between the HT and GI methods is discussed in terms of the reconstruction process. Finally, we also show a typical application of single-pixel imaging, namely hyperspectral imaging using dual optical frequency combs. The optical setup consists of two fiber lasers, a spatial light modulator for generating pattern illumination, and a single-pixel detector. We successfully detected hyperspectral images over the range 1545-1555 nm at 0.01 nm resolution.
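A minimal numerical sketch of Hadamard-transform single-pixel imaging: each Hadamard row acts as one illumination pattern, the detector records one scalar per pattern, and the orthogonality of the Hadamard matrix inverts the measurements. Function names are illustrative, and the physical realization of the ±1 patterns (two complementary 0/1 masks on the DMD) is abstracted away:

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def single_pixel_capture(scene, H):
    # Each row of H is one illumination pattern; the single-pixel detector
    # records the total transmitted/reflected intensity per pattern.
    return H @ scene.ravel()

def reconstruct(measurements, H, shape):
    # H H^T = n I, so the inverse transform is simply H^T / n.
    n = H.shape[0]
    return (H.T @ measurements / n).reshape(shape)

rng = np.random.default_rng(0)
scene = rng.random((4, 4))            # toy 4x4 scene, 16 pixels
H = hadamard(16)                      # 16 patterns for 16 pixels
y = single_pixel_capture(scene, H)
recon = reconstruct(y, H, scene.shape)
```

In the noiseless case the reconstruction is exact to numerical precision; the HT/GI comparison in the paper concerns how this degrades at low signal-to-noise ratio.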

  10. Fusion of GFP and phase contrast images with complex shearlet transform and Haar wavelet-based energy rule.

    PubMed

    Qiu, Chenhui; Wang, Yuanyuan; Guo, Yanen; Xia, Shunren

    2018-03-14

    Image fusion techniques can integrate information from different imaging modalities to produce a composite image that is more suitable for human visual perception and further image processing tasks. Fusing green fluorescent protein (GFP) and phase contrast images is very important for subcellular localization, functional analysis of proteins and genome expression. A fusion method for GFP and phase contrast images based on the complex shearlet transform (CST) is proposed in this paper. First, the GFP image is converted to the IHS model and its intensity component is obtained. Second, the CST is performed on the intensity component and on the phase contrast image to acquire the low-frequency and high-frequency subbands. The high-frequency subbands are then merged by the absolute-maximum rule, while the low-frequency subbands are merged by the proposed Haar wavelet-based energy (HWE) rule. Finally, the fused image is obtained by performing the inverse CST on the merged subbands and conducting an IHS-to-RGB conversion. The proposed fusion method is tested on a number of GFP and phase contrast images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation. © 2018 Wiley Periodicals, Inc.
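As an illustration of the absolute-maximum detail rule, here is a toy one-level fusion using a plain Haar transform in place of the CST, and a simple average (not the paper's HWE rule) for the low-frequency subband; all names are illustrative:

```python
import numpy as np

def haar2(img):
    """One-level 2D Haar decomposition (image sides must be even)."""
    a = (img[0::2] + img[1::2]) / 2                       # row average
    d = (img[0::2] - img[1::2]) / 2                       # row detail
    LL, LH = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    HL, HH = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((2 * h, 2 * w))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse(img1, img2):
    c1, c2 = haar2(img1), haar2(img2)
    fused = [(c1[0] + c2[0]) / 2]                         # low-pass: average (simplified)
    for b1, b2 in zip(c1[1:], c2[1:]):                    # detail: absolute-maximum rule
        fused.append(np.where(np.abs(b1) >= np.abs(b2), b1, b2))
    return ihaar2(*fused)

img = np.arange(64.0).reshape(8, 8)
same = fuse(img, img)   # fusing an image with itself returns it (up to rounding)
```

The absolute-maximum rule keeps, per coefficient, whichever source image has the stronger local detail, which is why it suits the high-frequency subbands.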

  11. Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images

    NASA Astrophysics Data System (ADS)

    Kamble, V. M.; Bhurchandi, K.

    2018-03-01

    Evaluating the exact amount of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed method obtains an initial estimate of the noise standard deviation present in an image using the median of the wavelet transform coefficients, and then obtains a near-exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of +/-4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield both the noise estimate and an image quality score. Images from the Laboratory for Image and Video Engineering (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian-noise-corrupted images.
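The initial wavelet-median estimate can be sketched with the classic robust MAD rule applied to the finest-scale diagonal Haar subband (sigma ≈ median(|HH|) / 0.6745). This is a standard textbook simplification, not the paper's exact formulation, and it omits the curve-fitting refinement:

```python
import numpy as np

def estimate_noise_sigma(img):
    """Robust estimate of Gaussian noise standard deviation.

    Uses the median absolute value of the finest-scale diagonal (HH) Haar
    coefficients; image content contributes little to HH, so the median
    reflects mostly noise. 0.6745 is the median of |N(0, 1)|.
    """
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    d = img[0::2] - img[1::2]                 # vertical difference
    HH = (d[:, 0::2] - d[:, 1::2]) / 2        # orthonormal diagonal subband
    return float(np.median(np.abs(HH)) / 0.6745)

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 10.0, (256, 256))     # pure noise with sigma = 10
sigma_hat = estimate_noise_sigma(noisy)       # close to 10
```

Smooth structure (e.g. a linear ramp) cancels exactly in the HH subband, which is what makes the estimator robust to image content.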

  12. Automated grain extraction and classification by combining improved region growing segmentation and shape descriptors in electromagnetic mill classification system

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian

    2018-04-01

    In this paper, an automatic method of grain detection and classification is presented. As input, it uses a single digital image obtained from the milling process of copper ore with a high-quality digital camera. Grinding is an extremely energy- and cost-intensive process, so granularity evaluation should be performed efficiently and quickly. The method proposed in this paper is based on three-stage image processing. First, all grains are detected using Seeded Region Growing (SRG) segmentation with the proposed adaptive thresholding based on the Relative Standard Deviation (RSD). In the next step, the detection results are improved using information about the shape of the detected grains obtained from a distance map. Finally, each grain in the sample is classified into one of the predefined granularity classes. The quality of the proposed method has been evaluated using samples of nominal granularity, including a comparison with other methods.
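A toy seeded-region-growing pass with an RSD-style adaptive tolerance might look like the sketch below; the tolerance formula, the 4-connectivity, and all names are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, k=2.0):
    """Grow a region from `seed` (row, col) over 4-connected neighbours.

    A neighbour is accepted if its intensity lies within an adaptive
    tolerance of the running region mean; the tolerance scales with the
    region's relative standard deviation (RSD = std / mean), with a small
    floor so perfectly uniform regions can still grow.
    """
    h, w = img.shape
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    vals = [float(img[seed])]
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx]:
                mean = float(np.mean(vals))
                rsd = float(np.std(vals)) / mean if mean else 0.0
                tol = max(k * rsd * mean, 1.0)       # adaptive tolerance with floor
                if abs(img[ny, nx] - mean) <= tol:
                    grown[ny, nx] = True
                    vals.append(float(img[ny, nx]))
                    queue.append((ny, nx))
    return grown

# A uniform bright blob on a dark background is recovered exactly.
img = np.zeros((10, 10))
img[2:6, 2:6] = 100.0
mask = region_grow(img, (3, 3))
```

Per-grain shape descriptors (e.g. from a distance map) would then refine masks like this one before classification.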

  13. Branding a College of Pharmacy

    PubMed Central

    2012-01-01

    In a possible future of supply-demand imbalance in pharmacy education, a brand that positively differentiates a college or school of pharmacy from its competitors may be the key to its survival. The nominal group technique, a structured group problem-solving and decision-making process, was used during a faculty retreat to identify and agree on the core qualities that define the brand image of Midwestern University’s College of Pharmacy in Glendale, AZ. Results from the retreat were provided to the faculty and students, who then proposed 168 mottos that embodied these qualities. Mottos were voted on by faculty members and pharmacy students. The highest ranked 24 choices were submitted to the faculty, who then selected the top 10 finalists. A final vote by students was used to select the winning motto. The methods described here may be useful to other colleges and schools of pharmacy that want to better define their own brand image and strengthen their organizational culture. PMID:23193330

  14. Quality control of the paracetamol drug by chemometrics and imaging spectroscopy in the near infrared region

    NASA Astrophysics Data System (ADS)

    Baptistao, Mariana; Rocha, Werickson Fortunato de Carvalho; Poppi, Ronei Jesus

    2011-09-01

    In this work, imaging spectroscopy and chemometric tools were used for the development and analysis of paracetamol and excipients in pharmaceutical formulations. Concentration maps were also built to study the distribution of the drug on the tablet surface. Multivariate models based on PLS regression were developed to predict paracetamol and excipient concentrations. For the construction of the models, 31 samples in tablet form containing the active principle in a concentration range of 30.0-90.0% (w/w) were used, and errors below 5% were obtained for the validation samples. Finally, the distribution of the drug was studied through the concentration distribution maps of the active principle and excipients. The analysis of the maps showed the complementarity between the active principle and the excipients in the tablets: a region with a high concentration of one constituent must necessarily have an absence or low concentration of the other. Thus, an alternative method for quality monitoring of the paracetamol drug is presented.

  15. Quality of care and economic considerations of active surveillance of men with prostate cancer

    PubMed Central

    2018-01-01

    The current health care climate mandates the delivery of high-value care for patients considering active surveillance for newly-diagnosed prostate cancer. Value is defined by increasing benefits (e.g., quality) for acceptable costs. This review discusses quality of care considerations for men contemplating active surveillance, and highlights cost implications at the patient, health-system, and societal level related to pursuit of non-interventional management of men diagnosed with localized prostate cancer. In general, most quality measures are focused on prostate cancer care in general, rather than active surveillance patients specifically. However, most prostate cancer quality measures are pertinent to men seeking close observation of their prostate tumors with active surveillance. These include accurate documentation of clinical stage, informed discussion of all treatment options, and appropriate use of imaging for less-aggressive prostate cancer. Furthermore, interventions that may help improve the quality of care for active surveillance patients are reviewed (e.g., quality collaboratives, judicious antibiotic use, etc.). Finally, the potential economic impact and benefits of broad acceptance of active surveillance strategies are highlighted. PMID:29732278

  16. Quality of care and economic considerations of active surveillance of men with prostate cancer.

    PubMed

    Filson, Christopher P

    2018-04-01

    The current health care climate mandates the delivery of high-value care for patients considering active surveillance for newly-diagnosed prostate cancer. Value is defined by increasing benefits (e.g., quality) for acceptable costs. This review discusses quality of care considerations for men contemplating active surveillance, and highlights cost implications at the patient, health-system, and societal level related to pursuit of non-interventional management of men diagnosed with localized prostate cancer. In general, most quality measures are focused on prostate cancer care in general, rather than active surveillance patients specifically. However, most prostate cancer quality measures are pertinent to men seeking close observation of their prostate tumors with active surveillance. These include accurate documentation of clinical stage, informed discussion of all treatment options, and appropriate use of imaging for less-aggressive prostate cancer. Furthermore, interventions that may help improve the quality of care for active surveillance patients are reviewed (e.g., quality collaboratives, judicious antibiotic use, etc.). Finally, the potential economic impact and benefits of broad acceptance of active surveillance strategies are highlighted.

  17. Quality-control issues on high-resolution diagnostic monitors.

    PubMed

    Parr, L F; Anderson, A L; Glennon, B K; Fetherston, P

    2001-06-01

    Previous literature indicates a need for more data collection in the area of quality control of high-resolution diagnostic monitors. Throughout acceptance testing, which began in June 2000, stability of monitor calibration was analyzed. Although image quality on all monitors was found to be acceptable upon initial acceptance testing using VeriLUM software by Image Smiths, Inc (Germantown, MD), it was determined to be unacceptable during the clinical phase of acceptance testing. High-resolution monitors were evaluated for quality assurance on a weekly basis from installation through acceptance testing and beyond. During clinical utilization determination (CUD), monitor calibration was identified as a problem and the manufacturer returned and recalibrated all workstations. From that time through final acceptance testing, high-resolution monitor calibration and monitor failure rate remained a problem. The monitor vendor then returned to the site to address these areas. Monitor defocus was still noticeable and calibration checks were increased to three times per week. White and black level drift on medium-resolution monitors had been attributed to raster size settings. Measurements of white and black level at several different size settings were taken to determine the effect of size on white and black level settings. Black level remained steady with size change. White level appeared to increase by 2.0 cd/m2 for every 0.1 inches decrease in horizontal raster size. This was determined not to be the cause of the observed brightness drift. Frequency of calibration/testing is an issue in a clinical environment. The increased frequency required at our site cannot be sustained. The medical physics division cannot provide dedicated personnel to conduct the quality-assurance testing on all monitors at this interval due to other physics commitments throughout the hospital. Monitor access is also an issue due to radiologists' need to read images. 
    Some workstations are in use 7 AM to 11 PM daily. An appropriate monitor calibration frequency must be established during acceptance testing to ensure unacceptable drift is not masked by excessive calibration frequency. Standards for acceptable black level and white level drift also need to be determined. The monitor vendor and hospital staff agree that, currently, very small printed text is an acceptable method of determining monitor blur; however, a better method of determining monitor blur is being pursued. Although monitors may show acceptable quality during initial acceptance testing, they need to show sustained quality during the clinical acceptance-testing phase. Defocus, black level, and white level are image quality concerns which need to be evaluated during the clinical phase of acceptance testing. Image quality deficiencies can have a negative impact on patient care and raise serious medical-legal concerns. The attention to quality control required of the hospital staff needs to be realistic and not have a significant impact on radiology workflow.

  18. Iterative optimization method for design of quantitative magnetization transfer imaging experiments.

    PubMed

    Levesque, Ives R; Sled, John G; Pike, G Bruce

    2011-09-01

    Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free-form optimal design methods in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.
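The iterative-reduction idea can be illustrated with a toy sketch. This is not the authors' algorithm or the MT signal model: the two-parameter exponential model, the candidate offsets, and the D-optimality criterion below are invented stand-ins, showing only the pattern of greedily dropping the sampling point whose removal best preserves the determinant of the linearized Fisher information:

```python
import numpy as np

def model_jacobian(x, a=1.0, b=0.5):
    # Jacobian (d/da, d/db) of a toy signal model a*exp(-b*x),
    # standing in for the true MT signal model.
    return np.column_stack([np.exp(-b * x), -a * x * np.exp(-b * x)])

def prune_design(x, n_keep):
    """Greedily remove points, keeping det(J^T J) (D-optimality) as large as possible."""
    x = list(x)
    while len(x) > n_keep:
        dets = []
        for i in range(len(x)):
            trial = np.array(x[:i] + x[i + 1:])
            J = model_jacobian(trial)
            dets.append(np.linalg.det(J.T @ J))
        x.pop(int(np.argmax(dets)))  # drop the least informative point
    return np.array(x)

# Prune a dense candidate sampling down to a 6-point design
design = prune_design(np.linspace(0.1, 10.0, 40), n_keep=6)
```

Because points are only ever removed from the initial discrete candidate set, constraints such as "no repeated measurements" are satisfied by construction, which mirrors the advantage over free-form optimization noted in the abstract.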

  19. TH-A-16A-01: Image Quality for the Radiation Oncology Physicist: Review of the Fundamentals and Implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seibert, J; Imbergamo, P

    The expansion and integration of diagnostic imaging technologies such as On Board Imaging (OBI) and Cone Beam Computed Tomography (CBCT) into radiation oncology has required radiation oncology physicists to be responsible for and become familiar with assessing image quality. Unfortunately, many radiation oncology physicists have had little or no training or experience in measuring and assessing image quality, and many have turned to automated QA analysis software without a fundamental understanding of image quality measures. This session will review the basic image quality measures of imaging technologies used in the radiation oncology clinic, such as low-contrast resolution, high-contrast resolution, uniformity, noise, and contrast scale, and how to measure and assess them in a meaningful way. Additionally, a discussion of the implementation of an image quality assurance program in compliance with Task Group recommendations will be presented, along with the advantages and disadvantages of automated analysis methods. Learning Objectives: Review and understanding of the fundamentals of image quality. Review and understanding of the basic image quality measures of imaging modalities used in the radiation oncology clinic. Understand how to implement an image quality assurance program and to assess basic image quality measures in a meaningful way.
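Two of the basic measures named above, contrast-to-noise ratio and uniformity, can be sketched on synthetic region-of-interest data. This is a minimal sketch for orientation only: the ROIs, pixel statistics, and ROI placement below are invented, not taken from any phantom protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
background = rng.normal(100.0, 5.0, size=(50, 50))   # synthetic background ROI
insert     = rng.normal(120.0, 5.0, size=(20, 20))   # synthetic low-contrast insert ROI

# Contrast-to-noise ratio: signal difference relative to background noise
cnr = abs(insert.mean() - background.mean()) / background.std()

# Uniformity: worst deviation of corner ROI means from a central ROI mean
center = background[20:30, 20:30].mean()
edges = [background[:10, :10].mean(), background[:10, -10:].mean(),
         background[-10:, :10].mean(), background[-10:, -10:].mean()]
uniformity_dev = max(abs(e - center) for e in edges)
```

Computing the numbers directly from ROI statistics like this, rather than accepting a pass/fail flag from automated software, is one way to keep the "meaningful understanding" the session calls for.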

  20. Only Image Based for the 3d Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras

    NASA Astrophysics Data System (ADS)

    Pérez Ramos, A.; Robleda Prieto, G.

    2016-06-01

    An indoor Gothic apse provides a complex environment for virtualization using imaging techniques, due to its light conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost, so in most cases scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera, so external high-quality images are often used to build high-resolution textures for these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making the task more affordable by using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 mm lens commonly used for close-range photogrammetry, and adds another using an HDR 360° camera with four lenses that makes the task easier and faster than the previous one. Fieldwork and office work are simplified, and the proposed methodology provides photographs in good enough conditions for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate further deliverables without extra time in the field, for instance immersive virtual tours. To verify the usefulness of the method, it was applied to the apse, which is considered one of the most complex elements of Gothic churches, and it could be extended to the whole building.

  1. High resolution crustal image of South California Continental Borderland: Reverse time imaging including multiples

    NASA Astrophysics Data System (ADS)

    Bian, A.; Gantela, C.

    2014-12-01

    Strong multiples were observed in marine seismic data from the Los Angeles Regional Seismic Experiment (LARSE). It is crucial to eliminate these multiples in conventional ray-based or one-way wave-equation-based depth imaging methods. However, because multiples carry information about the target zone along their travel paths, it is possible to use them as signal to improve illumination coverage and thus enhance the imaging of structural boundaries. Reverse time migration including multiples is a two-way wave-equation-based prestack depth imaging method that uses both primaries and multiples to map structural boundaries. Several factors, including the source wavelet, velocity model, background noise, data acquisition geometry, and preprocessing workflow, may influence image quality. The source wavelet is estimated from the direct arrival of the marine seismic data. The migration velocity model is derived from an integrated model-building workflow, and the sharp velocity interfaces near the sea bottom need to be preserved in order to generate multiples in the forward and backward propagation steps. The strong-amplitude, low-frequency marine background noise needs to be removed before the final imaging process. High-resolution reverse time image sections of LARSE Line 1 and Line 2 show five interfaces: the sea bottom, the base of the sedimentary basins, the top of the Catalina Schist, a deep layer, and a possible pluton boundary. The Catalina Schist shows highs at San Clemente Ridge, Emery Knoll, and Catalina Ridge, under Catalina Basin on both lines, and a minor high under Avalon Knoll. The high of the anticlinal fold in Line 1 is under the north edge of Emery Knoll and under the San Clemente fault zone. An area devoid of reflection features is interpreted as the sides of an igneous pluton.
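At the heart of reverse time migration is the zero-lag cross-correlation imaging condition, image(x) = Σ_t S(x,t)·R(x,t), where S and R are the forward-propagated source and back-propagated receiver wavefields. The toy sketch below illustrates only that correlation step, with invented arrays rather than wavefields propagated by the two-way wave equation:

```python
import numpy as np

# Toy wavefields on a (time, position) grid; in a real RTM these come from
# forward and reverse finite-difference propagation of the wave equation.
rng = np.random.default_rng(0)
nt, nx = 128, 64
source_wf = rng.normal(size=(nt, nx))      # stand-in for the source wavefield S(x,t)
receiver_wf = np.zeros((nt, nx))
receiver_wf[:, 30] = source_wf[:, 30]      # fields coincide only at a "reflector" at x=30

# Zero-lag cross-correlation imaging condition: image(x) = sum_t S(x,t) * R(x,t)
image = np.sum(source_wf * receiver_wf, axis=0)
reflector = int(np.argmax(image))          # the image peaks where the fields correlate
```

Including multiples simply means that the correlating events arrive via longer (bounced) paths as well, which is why they widen the illumination of structural boundaries.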

  2. Sentinel-2A image quality commissioning phase final results: geometric calibration and performances

    NASA Astrophysics Data System (ADS)

    Languille, F.; Gaudel, A.; Dechoz, C.; Greslou, D.; de Lussy, F.; Trémas, T.; Poulain, V.; Massera, S.

    2016-10-01

    In the frame of the Copernicus program of the European Commission, Sentinel-2 offers multispectral high-spatial-resolution optical images over global terrestrial surfaces. In cooperation with ESA, the Centre National d'Etudes Spatiales (CNES) is in charge of the image quality of the project, and thus ensures the CAL/VAL commissioning phase during the months following launch. Sentinel-2 is a constellation of two satellites in a polar sun-synchronous orbit with a revisit time of 5 days (with both satellites), a wide field of view (290 km), 13 spectral bands in the visible and shortwave infrared, and high spatial resolution (10 m, 20 m and 60 m). The Sentinel-2 mission offers global coverage of terrestrial surfaces, and the satellites systematically acquire terrestrial surfaces under the same viewing conditions in order to build temporal image stacks. The first satellite was launched in June 2015; the geometric CAL/VAL commissioning phase then lasted six months. This paper reports observations and results from Sentinel-2 images during the commissioning phase and explains the geometric corrections applied to delivered Sentinel-2 products. It details the calibration sites and the methods used to calibrate the geometric parameters, and presents the associated results on the following topics: viewing frame orientation assessment, focal plane mapping for all spectral bands, geolocation assessment, and multispectral registration. Images are systematically recalibrated against a common reference, a set of S2 images produced during the six months of CAL/VAL. This set of images is presented, together with the geolocation performance and the multitemporal performance after refining over this ground reference.

  3. Design and fabrication of facial prostheses for cancer patient applying computer aided method and manufacturing (CADCAM)

    NASA Astrophysics Data System (ADS)

    Din, Tengku Noor Daimah Tengku; Jamayet, Nafij; Rajion, Zainul Ahmad; Luddin, Norhayati; Abdullah, Johari Yap; Abdullah, Abdul Manaf; Yahya, Suzana

    2016-12-01

    Facial defects are either congenital or caused by trauma or cancer, and most affect the person's appearance. Emotional pressure and low self-esteem are problems commonly associated with facial defects. To overcome these problems, a silicone prosthesis is designed to cover the defective part. This study describes the techniques for designing and fabricating a facial prosthesis applying computer-aided design and manufacturing (CADCAM). The fabrication steps are based on a patient case: the patient was diagnosed with Gorlin-Goltz syndrome and came to Hospital Universiti Sains Malaysia (HUSM) for a prosthesis. A 3D image of the patient was reconstructed from CT data using MIMICS software. Based on the 3D image, the patient's intercanthal and zygomatic measurements were compared with available data in the database to find a suitable nose shape, and a normal nose shape for the patient was retrieved from the nasal digital library. A mirror-imaging technique was used to mirror the facial part. The final design of the facial prosthesis, including eye, nose and cheek, was superimposed to inspect the result virtually. After the final design was confirmed, the mould was designed. The mould of the nasal prosthesis was printed on an Objet 3D printer, and silicone casting was done using the 3D-printed mould. The final prosthesis produced by the computer-aided method was acceptable for use in facial rehabilitation, providing better quality of life.
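The mirror-imaging step amounts to reflecting the healthy side's geometry across the mid-sagittal plane. A minimal sketch, assuming vertices as an (N, 3) array and the mid-sagittal plane at x = 0 (the coordinates below are invented; the clinical workflow performs this operation inside MIMICS):

```python
import numpy as np

def mirror_sagittal(vertices):
    """Reflect mesh vertices across the mid-sagittal plane x = 0.

    vertices: (N, 3) array of (x, y, z) coordinates; returns a reflected copy.
    """
    mirrored = vertices.copy()
    mirrored[:, 0] *= -1.0   # negate the left-right axis only
    return mirrored

# Hypothetical vertex coordinates of the intact facial side
v = np.array([[10.0, 2.0, 3.0], [-4.0, 0.0, 1.0]])
m = mirror_sagittal(v)
```

The mirrored surface is then blended with the retrieved nose shape before the combined design is superimposed on the patient model.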

  4. Emission reduction by multipurpose buffer strips on arable fields.

    PubMed

    Sloots, K; van der Vlies, A W

    2007-01-01

    In the area managed by Hollandse Delta, agriculture is under great pressure and the social awareness of the agricultural sector is increasing steadily. In recent years, a standstill has been observed in water quality in terms of agrochemicals, and concentrations even exceed the standard. To improve the water quality, a multi-purpose Field Margin Regulation was drafted for the Hoeksche Waard island in 2005. The regulation prescribes a crop-free strip, 3.5 m wide, alongside wet drainage ditches. The strip must be sown with mixtures of grasses, flowers or herbs, and no crop protection chemicals or fertilizer may be used on it. A total length of approximately 200 km of buffer strip has now been laid. Besides reducing emissions, the buffer strips also stimulate natural pest control and encourage local tourism. Finally, the strips should lead to an improvement in the farmers' image. The regulation has proved successful: the buffer strips boosted both local tourism and the image of the agricultural sector. Above all, the strips provided a natural shield against emissions to surface water, which will improve water quality and raise the farmers' awareness of water quality and the environment.

  5. Restoration of singularities in reconstructed phase of crystal image in electron holography.

    PubMed

    Li, Wei; Tanji, Takayoshi

    2014-12-01

    Off-axis electron holography can be used to measure the inner potential of a specimen from its reconstructed phase image and is thus a powerful technique for materials scientists. However, abrupt reversals of contrast from white to black may sometimes occur in a digitally reconstructed phase image, which results in inaccurate information. Such phase distortion is mainly due to the digital reconstruction process and weak electron wave amplitude in some areas of the specimen. Therefore, digital image processing can be applied to the reconstruction and restoration of phase images. In this paper, fringe reconnection processing is applied to phase image restoration of a crystal structure image. The disconnection and wrong connection of interference fringes in the hologram that directly cause a 2π phase jump imperfection are correctly reconnected. Experimental results show that the phase distortion is significantly reduced after the processing. The quality of the reconstructed phase image was improved by the removal of imperfections in the final phase. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
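The 2π phase-jump imperfection mentioned above can be illustrated in one dimension: a smooth phase ramp wrapped into (-π, π] shows spurious jumps that simple unwrapping removes. This toy sketch (a synthetic ramp, not holographic data) is only an analogue; the paper's fringe-reconnection method works on the 2-D hologram itself, where disconnected fringes make naive unwrapping insufficient:

```python
import numpy as np

# A smooth "true" phase ramp spanning several multiples of 2*pi
true_phase = np.linspace(0, 6 * np.pi, 200)

# Wrapping into (-pi, pi] is what a digital reconstruction effectively records
wrapped = np.angle(np.exp(1j * true_phase))

# Unwrapping restores continuity by adding back the 2*pi jumps
restored = np.unwrap(wrapped)
max_err = np.max(np.abs(restored - true_phase))
```

In the holographic case the jumps occur along 2-D fringe lines, and a wrong fringe connection propagates a 2π error across a whole region, which is why the reconnection processing operates on the fringes directly.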

  6. Brain tumour classification and abnormality detection using neuro-fuzzy technique and Otsu thresholding.

    PubMed

    Renjith, Arokia; Manjula, P; Mohan Kumar, P

    2015-01-01

    Brain tumour is one of the main causes of increased mortality among children and adults. This paper proposes an improved method for Magnetic Resonance Imaging (MRI) brain image classification and segmentation. Automated classification is motivated by the need for high accuracy when dealing with a human life. Detecting a brain tumour is a challenging problem due to the high diversity in tumour appearance and ambiguous tumour boundaries. MRI images are chosen for detection of brain tumours because of their value in soft tissue assessment. First, image pre-processing is used to enhance image quality. Second, dual-tree complex wavelet transform multi-scale decomposition is used to analyse the texture of the image. Feature extraction then derives features from the image using the gray-level co-occurrence matrix (GLCM). Next, a neuro-fuzzy technique classifies the stage of the brain tumour as benign, malignant or normal based on the texture features. Finally, the tumour location is detected using Otsu thresholding. Classifier performance is evaluated by classification accuracy. Simulation results show that the proposed classifier provides better accuracy than the previous method.

  7. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, this becomes an important issue: e.g., a 1 m² input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of halftoning by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation applied to a JPEG-compressed low-quality image is also described; it de-noises the image and enhances its contours.
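For reference, the baseline spatial-domain form of halftoning by screening is simply a comparison of the image against a tiled threshold mask. The sketch below shows only that baseline operation, on an invented gradient image with a classic 4×4 Bayer mask; the paper's contribution is moving the equivalent threshold computation into the DCT domain, which is not reproduced here:

```python
import numpy as np

def screen_halftone(gray, mask):
    """Binarize a grayscale image by comparing it against a tiled threshold mask."""
    h, w = gray.shape
    mh, mw = mask.shape
    tiled = np.tile(mask, (h // mh + 1, w // mw + 1))[:h, :w]
    return (gray > tiled).astype(np.uint8)  # 1 = white dot, 0 = black dot

# 4x4 Bayer ordered-dither mask, scaled to thresholds in the 0..255 range
bayer4 = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) + 0.5) * 16

# Horizontal gradient test image: dark on the left, bright on the right
gradient = np.tile(np.linspace(0, 255, 64), (64, 1))
ht = screen_halftone(gradient, bayer4)  # dot density grows left to right
```

In the compressed-domain version, this per-pixel comparison is reformulated so it can be evaluated on the 8×8 DCT blocks of the JPEG stream without full decompression.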

  8. Fusion of High Resolution Multispectral Imagery in Vulnerable Coastal and Land Ecosystems.

    PubMed

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier; Garcia-Pedrero, Angel; Rodriguez-Esparragon, Dionisio

    2017-01-25

    Ecosystems provide a wide variety of useful resources that enhance human welfare, but these resources are declining due to climate change and anthropogenic pressure. In this work, three vulnerable ecosystems are studied: shrublands, coastal areas with dune systems, and areas of shallow water. To address this decline, remote sensing and image processing techniques can contribute to the management of these natural resources in a practical and cost-effective way, although some improvements are needed to obtain higher-quality information. An important quality improvement is fusion at the pixel level. Hence, the objective of this work is to assess which pansharpening technique provides the best fused image for the different types of ecosystems. After a preliminary evaluation of twelve classic and novel fusion algorithms, four pansharpening algorithms were analyzed using six quality indices. The quality assessment was carried out not only for the whole set of multispectral bands, but also for the subsets of spectral bands inside and outside the wavelength range of the panchromatic image. Better quality is observed in the fused image when using only the bands covered by the panchromatic band range. Notably, these techniques were applied not only to land and urban areas but also, in a novel analysis, to shallow-water ecosystems. Although the algorithms do not differ greatly over land and coastal areas, coastal ecosystems require simpler algorithms, such as fast intensity hue saturation, whereas more heterogeneous ecosystems need advanced algorithms, such as weighted wavelet 'à trous' through fractal dimension maps for shrublands and mixed ecosystems. Moreover, quality map analysis was carried out in order to study the fusion result in each band at the local level. Finally, to demonstrate the performance of these pansharpening techniques, advanced object-based (OBIA) support vector machine classification was applied, and a thematic map for the shrubland ecosystem was obtained, which corroborates weighted wavelet 'à trous' through fractal dimension maps as the best fusion algorithm for this ecosystem.
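One widely used pixel-level fusion quality index of the kind applied in such assessments is ERGAS (relative dimensionless global error in synthesis). The sketch below is a hedged illustration on invented toy bands; the abstract does not state that ERGAS is among its six indices, and real evaluation compares the fused product against a degraded-resolution reference:

```python
import numpy as np

def ergas(fused, reference, ratio=4):
    """ERGAS index; fused/reference have shape (bands, H, W), ratio = PAN/MS pixel-size ratio.

    Lower values indicate better spectral fidelity; 0 means a perfect match.
    """
    bands = reference.shape[0]
    acc = 0.0
    for b in range(bands):
        rmse = np.sqrt(np.mean((fused[b] - reference[b]) ** 2))
        acc += (rmse / reference[b].mean()) ** 2
    return 100.0 / ratio * np.sqrt(acc / bands)

rng = np.random.default_rng(0)
ref = rng.uniform(50, 200, size=(4, 32, 32))           # four invented spectral bands
perfect = ergas(ref, ref)                              # identical images -> 0
noisy = ergas(ref + rng.normal(0, 5, ref.shape), ref)  # distortion raises the index
```

Computing such an index per band, as in the quality-map analysis above, localises where a fusion algorithm degrades the spectral content.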

  9. Fusion of High Resolution Multispectral Imagery in Vulnerable Coastal and Land Ecosystems

    PubMed Central

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier; Garcia-Pedrero, Angel; Rodriguez-Esparragon, Dionisio

    2017-01-01

    Ecosystems provide a wide variety of useful resources that enhance human welfare, but these resources are declining due to climate change and anthropogenic pressure. In this work, three vulnerable ecosystems are studied: shrublands, coastal areas with dune systems, and areas of shallow water. To address this decline, remote sensing and image processing techniques can contribute to the management of these natural resources in a practical and cost-effective way, although some improvements are needed to obtain higher-quality information. An important quality improvement is fusion at the pixel level. Hence, the objective of this work is to assess which pansharpening technique provides the best fused image for the different types of ecosystems. After a preliminary evaluation of twelve classic and novel fusion algorithms, four pansharpening algorithms were analyzed using six quality indices. The quality assessment was carried out not only for the whole set of multispectral bands, but also for the subsets of spectral bands inside and outside the wavelength range of the panchromatic image. Better quality is observed in the fused image when using only the bands covered by the panchromatic band range. Notably, these techniques were applied not only to land and urban areas but also, in a novel analysis, to shallow-water ecosystems. Although the algorithms do not differ greatly over land and coastal areas, coastal ecosystems require simpler algorithms, such as fast intensity hue saturation, whereas more heterogeneous ecosystems need advanced algorithms, such as weighted wavelet ‘à trous’ through fractal dimension maps for shrublands and mixed ecosystems. Moreover, quality map analysis was carried out in order to study the fusion result in each band at the local level. Finally, to demonstrate the performance of these pansharpening techniques, advanced object-based (OBIA) support vector machine classification was applied, and a thematic map for the shrubland ecosystem was obtained, which corroborates weighted wavelet ‘à trous’ through fractal dimension maps as the best fusion algorithm for this ecosystem. PMID:28125055

  10. Texture-preserved penalized weighted least-squares reconstruction of low-dose CT image via image segmentation and high-order MRF modeling

    NASA Astrophysics Data System (ADS)

    Han, Hao; Zhang, Hao; Wei, Xinzhou; Moore, William; Liang, Zhengrong

    2016-03-01

    In this paper, we propose a low-dose computed tomography (LdCT) image reconstruction method aided by prior knowledge learned from previous high-quality or normal-dose CT (NdCT) scans. The well-established statistical penalized weighted least-squares (PWLS) algorithm was adopted for image reconstruction, with the penalty term formulated by a texture-based Gaussian Markov random field (gMRF) model. The NdCT scan was first segmented into different tissue types by a feature vector quantization (FVQ) approach. Then, for each tissue type, a set of tissue-specific coefficients for the gMRF penalty was statistically learned from the NdCT image via multiple linear regression analysis. We also propose a scheme to adaptively select the order of the gMRF model for coefficient prediction. The tissue-specific gMRF patterns learned from the NdCT image were finally used to form an adaptive MRF penalty for the PWLS reconstruction of the LdCT image. The proposed texture-adaptive PWLS reconstruction algorithm was shown to preserve image textures more effectively than the conventional PWLS algorithm, and we further demonstrated the gain of high-order MRF modeling for texture-preserved LdCT PWLS image reconstruction.
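To fix ideas, the sketch below shows a minimal 1-D PWLS objective with a plain quadratic first-order MRF smoothness penalty, minimised by gradient descent. Everything here is invented for illustration (identity system matrix, unit weights, toy data); the paper's penalty instead uses high-order, tissue-specific gMRF coefficients learned from the NdCT scan:

```python
import numpy as np

def pwls_gd(A, y, w, beta, n_iter=500, lr=0.02):
    """Minimise (y - Ax)^T W (y - Ax) + beta * sum_i (x_i - x_{i+1})^2 by gradient descent."""
    x = np.zeros(A.shape[1])
    W = np.diag(w)
    for _ in range(n_iter):
        grad_fid = -2 * A.T @ W @ (y - A @ x)   # weighted data-fidelity gradient
        d = np.diff(x)
        grad_pen = np.zeros_like(x)             # gradient of the quadratic MRF penalty
        grad_pen[:-1] += -2 * d
        grad_pen[1:] += 2 * d
        x -= lr * (grad_fid + beta * grad_pen)
    return x

A = np.eye(4)                       # identity "system matrix" for the sketch
y = np.array([1.0, 3.0, 1.0, 3.0])  # noisy alternating data
x_smooth = pwls_gd(A, y, w=np.ones(4), beta=5.0)  # penalty pulls values together
```

Replacing the uniform smoothness coefficients with coefficients learned per tissue type is what lets the paper's penalty smooth noise without flattening genuine texture.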

  11. Region-based multifocus image fusion for the precise acquisition of Pap smear images.

    PubMed

    Tello-Mijares, Santiago; Bescós, Jesús

    2018-05-01

    A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolaou smear (Pap smear) images is presented. These images, each captured at a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with high preservation of the original pixel information while keeping fusion artifacts negligibly visible. The method starts by identifying the best-focused image of the sequence; it then performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged into a single combined image; finally, this image is processed with an adaptive artifact removal step. The combination of a region-oriented approach, instead of a block-based one, and minimal modification of the values of focused pixels in the original images yields a highly contrasted image with no visible artifacts, which makes this method especially suitable for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
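The per-region focus decision can be sketched with a common focus measure, the variance of the Laplacian response: the candidate with the strongest high-frequency content wins. In this toy sketch whole invented images play the role of the candidates, and the focus measure is an assumption; the paper scores mean-shift segments across the lens-position stack with its own focus criterion:

```python
import numpy as np

# 3x3 Laplacian kernel for a valid (border-cropped) convolution
LAP = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def laplacian_var(img):
    """Focus score: variance of the Laplacian response over the region."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAP[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out.var()

rng = np.random.default_rng(0)
sharp = rng.uniform(0, 1, (32, 32))                     # high-frequency content
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(sharp, (1, 1), (0, 1))) / 4        # crude 2x2 box blur
best = max([("sharp", sharp), ("blurred", blurred)],
           key=lambda kv: laplacian_var(kv[1]))[0]      # pick the sharper candidate
```

In the full method this comparison runs once per segmented region, and the winning region's original pixels are copied into the combined image, which is what keeps fusion artifacts low.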

  12. Clinical image quality evaluation for panoramic radiography in Korean dental clinics

    PubMed Central

    Choi, Bo-Ram; Choi, Da-Hye; Huh, Kyung-Hoe; Yi, Won-Jin; Heo, Min-Suk; Choi, Soon-Chul; Bae, Kwang-Hak

    2012-01-01

    Purpose The purpose of this study was to investigate the level of clinical image quality of panoramic radiographs and to analyze the parameters that influence overall image quality. Materials and Methods Korean dental clinics were asked to provide three randomly selected panoramic radiographs. An oral and maxillofacial radiology specialist evaluated the images using our self-developed Clinical Image Quality Evaluation Chart, and three evaluators classified the overall image quality of the panoramic radiographs and assessed the causes of imaging errors. Results A total of 297 panoramic radiographs were collected from 99 dental hospitals and clinics. The mean score according to the Clinical Image Quality Evaluation Chart was 79.9. In the classification of overall image quality, 17 images were deemed 'optimal for obtaining diagnostic information,' 153 'adequate for diagnosis,' 109 'poor but diagnosable,' and nine 'unrecognizable and too poor for diagnosis.' The analysis of error causes across all images found 139 positioning errors, 135 processing errors, 50 errors from the radiographic unit, and 13 due to anatomic abnormality. Conclusion Panoramic radiographs taken at local dental clinics generally have a normal or higher level of image quality. The principal factors affecting image quality were patient positioning and image density, sharpness, and contrast. Therefore, when images are taken, the patient's position should be adjusted with great care. Also, standardizing objective criteria for image density, sharpness, and contrast is required to evaluate image quality effectively. PMID:23071969

  13. Validation of 64Cu-DOTA-rituximab injection preparation under good manufacturing practices: a PET tracer for imaging of B-cell non-Hodgkin lymphoma.

    PubMed

    Natarajan, Arutselvan; Arksey, Natasha; Iagaru, Andrei; Chin, Frederick T; Gambhir, Sanjiv Sam

    2015-01-01

    Manufacturing of 64Cu-1,4,7,10-tetraazacyclododecane-N,N',N'',N'''-tetraacetic acid (DOTA)-rituximab injection under good manufacturing practices (GMP) was validated for imaging of patients with CD20+ B-cell non-Hodgkin lymphoma. Rituximab was purified by size-exclusion high-performance liquid chromatography (HPLC) and conjugated to DOTA-mono-(N-hydroxysuccinimidyl) ester. 64CuCl2, buffers, reagents, and other raw materials were obtained in high-grade quality. Following a semi-automated synthesis of 64Cu-DOTA-rituximab, a series of quality control tests was performed. The product was further tested in vivo using micro-positron emission tomography/computed tomography (PET/CT) to assess its targeting of human CD20 in transgenic mice. Three batches of 64Cu-DOTA-rituximab final product were prepared per GMP specifications. The radiolabeling yield from these batches was 93.1 ± 5.8%; they provided final product with radiopharmaceutical yield, purity, and specific activity of 59.2 ± 5.1% (0.9 ± 0.1 GBq of 64Cu), > 95% (by HPLC and radio-thin-layer chromatography), and 229.4 ± 43.3 GBq/µmol (or 1.5 ± 0.3 MBq/µg), respectively. The doses passed apyrogenicity and human serum stability specifications, were sterile for up to 14 days, and retained > 60% immunoreactivity. In vivo micro-PET/CT mouse images at 24 hours postinjection showed that the tracer targeted the intended sites of human CD20 expression. Thus, we have validated the manufacturing of GMP-grade 64Cu-DOTA-rituximab for injection in the clinical setting.

  14. Technical Note: FreeCT_wFBP: A robust, efficient, open-source implementation of weighted filtered backprojection for helical, fan-beam CT.

    PubMed

    Hoffman, John; Young, Stefano; Noo, Frédéric; McNitt-Gray, Michael

    2016-03-01

    With growing interest in quantitative imaging, radiomics, and CAD using CT imaging, the need to explore the impacts of acquisition and reconstruction parameters has grown. Doing so usually requires extensive access to the scanner on which the data were acquired, and scanner workflows are not designed for large-scale reconstruction projects. Therefore, the authors have developed a freely available, open-source software package implementing a common reconstruction method, weighted filtered backprojection (wFBP), for helical fan-beam CT applications. FreeCT_wFBP is a low-dependency, GPU-based reconstruction program utilizing C for the host code and Nvidia CUDA C for the GPU code. The software is capable of reconstructing helical scans acquired with arbitrary pitch values and with sampling techniques such as flying focal spots and a quarter-detector offset. In this work, the software is described and evaluated for reconstruction speed, image quality, and accuracy. Speed was evaluated on acquisitions of the ACR CT accreditation phantom under four different flying-focal-spot configurations. Image quality was assessed using the same phantom by evaluating CT number accuracy, uniformity, and contrast-to-noise ratio (CNR). Finally, reconstructed mass-attenuation coefficient accuracy was evaluated using a simulated scan of a FORBILD thorax phantom and comparing reconstructed values to the known phantom values. The average reconstruction time across all flying-focal-spot configurations was 17.4 ± 1.0 s for a 512 row × 512 column × 32 slice volume. Reconstructions of the ACR phantom met all CT Accreditation Program criteria, including the CT number, CNR, and uniformity tests. Finally, reconstructed mass-attenuation coefficient values of water within the FORBILD thorax phantom agreed with the original phantom values to within 0.0001 mm²/g (0.01%). 
FreeCT_wFBP is a fast, highly configurable reconstruction package for third-generation CT available under the GNU GPL. It shows good performance with both clinical and simulated data.

  15. A new approach for reducing beam hardening artifacts in polychromatic X-ray computed tomography using more accurate prior image.

    PubMed

    Wang, Hui; Xu, Yanan; Shi, Hongli

    2018-03-15

    Metal artifacts severely degrade CT image quality in clinical diagnosis and are difficult to remove, especially beam hardening artifacts. Metal artifact reduction (MAR) methods based on prior images are the most frequently used. However, most prior images contain considerable misclassification caused by the absence of prior information, such as the spectrum distribution of the X-ray beam source, especially when multiple or large metal objects are present. This work aims to identify a more accurate prior image to improve image quality. The proposed method comprises four steps. First, the metal image is segmented by thresholding an initial image, and the metal traces are identified in the initial projection data using the forward projection of the metal image. Second, an accurate absorbent model of the metal image is calculated according to the spectrum distribution of the X-ray beam source and the energy-dependent attenuation coefficients of the metal. Third, a new metal image is reconstructed by a general analytical reconstruction algorithm such as filtered back projection (FBP). The prior image is then obtained by segmenting the difference image between the initial image and the new metal image into air, tissue and bone. Fourth, the initial projection data are normalized by dividing them, pixel by pixel, by the projection data of the prior image. The final corrected image is obtained by interpolation, denormalization and reconstruction. Several clinical images with dental fillings and knee prostheses were used to compare the proposed algorithm with the normalized metal artifact reduction (NMAR) and linear interpolation (LI) methods. The results demonstrate that the proposed method reduces the artifacts efficiently. Because it obtains an exact prior image using the prior information about the X-ray beam source and the energy-dependent attenuation coefficients of the metal, it achieves better reduction of beam hardening artifacts. Moreover, the process is rather simple and adds little extra computational burden, and it outperforms the other algorithms when multiple and/or large implants are included.
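    The normalization step described above (step four) can be sketched in a few lines. This is an illustration only: the sinogram shapes, the `eps` guard and the per-view linear interpolation across the metal trace are assumptions of the sketch, not details taken from the paper.

```python
import numpy as np

def nmar_normalize_and_inpaint(proj, proj_prior, metal_trace, eps=1e-6):
    """NMAR-style normalization sketch.

    proj:        measured sinogram, shape (n_bins, n_angles)
    proj_prior:  forward projection of the segmented prior image
    metal_trace: boolean mask of metal-affected sinogram bins
    Returns the corrected sinogram; FBP reconstruction would follow.
    """
    norm = proj / (proj_prior + eps)        # pixel-by-pixel normalization
    filled = norm.copy()
    idx = np.arange(proj.shape[0])
    for j in range(proj.shape[1]):          # interpolate each view separately
        bad = metal_trace[:, j]
        if bad.any() and not bad.all():
            filled[bad, j] = np.interp(idx[bad], idx[~bad], norm[~bad, j])
    return filled * (proj_prior + eps)      # denormalize
```

The flat normalized sinogram is what makes the interpolation across the metal trace well behaved; dividing by the prior's projections removes most of the anatomical variation before in-painting.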

  16. The study of surgical image quality evaluation system by subjective quality factor method

    NASA Astrophysics Data System (ADS)

    Zhang, Jian J.; Xuan, Jason R.; Yang, Xirong; Yu, Honggang; Koullick, Edouard

    2016-03-01

    The GreenLightTM procedure is an effective and economical treatment for benign prostatic hyperplasia (BPH); almost a million patients have been treated with GreenLightTM worldwide. During the surgical procedure, the surgeon or physician relies on the monitoring video system to survey and confirm surgical progress. A few obstructions can greatly affect the image quality of the monitoring video: laser glare from tissue and body fluid, air bubbles and debris generated by tissue evaporation, and bleeding, to name a few. In order to improve the physician's visual experience of a laser surgical procedure, the system performance parameters related to image quality need to be well defined. However, image quality is the integrated set of perceptions of the overall degree of excellence of an image (the perceptually weighted combination of its significant attributes, such as contrast and graininess, considered in its marketplace or application), and there is no standard definition of overall image or video quality, especially for the no-reference case (without a standard chart as reference). In this study, the Subjective Quality Factor (SQF) and acutance are used for no-reference image quality evaluation. Basic image quality parameters, such as sharpness, color accuracy, size of obstruction and transmission of obstruction, are used as sub-parameters to define the rating scale for image quality evaluation or comparison. Sample image groups were evaluated by human observers according to the rating scale, and surveys of physician groups were conducted with lab-generated sample videos. The study shows that human subjective perception is a trustworthy way of evaluating image quality. A more systematic investigation of the relationship between video quality and the image quality of each frame will be conducted as a future study.
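    Acutance, one of the no-reference measures named above, can be approximated with a simple gradient statistic. The sketch below is an assumption of this listing: the actual SQF additionally weights the system MTF by the human contrast sensitivity function, which this gradient-only form omits.

```python
import numpy as np

def acutance(gray):
    """Mean gradient magnitude: a minimal no-reference sharpness proxy.

    Higher values indicate sharper edges; a constant image scores 0.
    """
    gy, gx = np.gradient(gray.astype(float))
    return float(np.sqrt(gx**2 + gy**2).mean())
```

Blurring an image always lowers this score, which is the basic behaviour a sharpness sub-parameter needs for ranking sample image groups.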

  17. Achieving quality in cardiovascular imaging: proceedings from the American College of Cardiology-Duke University Medical Center Think Tank on Quality in Cardiovascular Imaging.

    PubMed

    Douglas, Pamela; Iskandrian, Ami E; Krumholz, Harlan M; Gillam, Linda; Hendel, Robert; Jollis, James; Peterson, Eric; Chen, Jersey; Masoudi, Frederick; Mohler, Emile; McNamara, Robert L; Patel, Manesh R; Spertus, John

    2006-11-21

    Cardiovascular imaging has enjoyed both rapid technological advances and sustained growth, yet less attention has been focused on quality than in other areas of cardiovascular medicine. To address this deficit, representatives from cardiovascular imaging societies, private payers, government agencies, the medical imaging industry, and experts in quality measurement met, and this report provides an overview of the discussions. A consensus definition of quality in imaging and a convergence of opinion on quality measures across imaging modalities were achieved; these are intended to be the start of a process culminating in the development, dissemination, and adoption of quality measures for all cardiovascular imaging modalities.

  18. Interhospital network system using the worldwide web and the common gateway interface.

    PubMed

    Oka, A; Harima, Y; Nakano, Y; Tanaka, Y; Watanabe, A; Kihara, H; Sawada, S

    1999-05-01

    We constructed an interhospital network system using the worldwide web (WWW) and the Common Gateway Interface (CGI). Original clinical images are digitized and stored as a database for educational and research purposes, and personal computers (PCs) are available for data handling and browsing. Our system is simple: digitized images are stored on a Unix server machine. Images of important and interesting clinical cases are selected and registered into the image database using CGI. The main image format is 8- or 12-bit Joint Photographic Experts Group (JPEG). Original clinical images are finally archived on CD-ROM using a CD recorder. The image viewer can browse all of the images for one case at once as thumbnail pictures, and image quality can be selected depending on the user's purpose. Using the network system, clinical images of interesting cases can be rapidly transmitted to and discussed with other related hospitals. Data transmission from related hospitals takes 1 to 2 minutes per 500 Kbytes of data; more distant hospitals (e.g., Rakusai Hospital, Kyoto) take about 1 minute more. The mean number of accesses to our image database in a recent 3-month period was 470, and there are in total about 200 cases in the database, acquired over the past 2 years. Our system is useful for communication and image handling between hospitals; we describe the elements of our system and image database.

  19. Projections onto Convex Sets Super-Resolution Reconstruction Based on Point Spread Function Estimation of Low-Resolution Remote Sensing Images

    PubMed Central

    Fan, Chong; Wu, Chaoyun; Li, Grand; Ma, Jun

    2017-01-01

    To solve the problem of inaccuracy in estimating the point spread function (PSF) of the ideal original image in traditional projection onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation of low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit visual interpretation of agricultural crops. The PSF of the high-resolution (HR) image is unknown in reality; therefore, analyzing the relationship between the PSF of the HR image and the PSF of the LR image is important for estimating the PSF of the HR image from multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images is proven. In addition, a novel slant knife-edge method is employed, which improves the accuracy of the PSF estimation of LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher-quality reconstructed images than the blind SR method and the bicubic interpolation method. PMID:28208837
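    The knife-edge idea behind the PSF estimation rests on a simple principle: the derivative of an edge spread function (ESF) is a one-dimensional slice of the PSF (the line spread function, LSF). A minimal, non-slanted sketch is below; the paper's slant-edge refinement, which projects pixels onto a sub-pixel grid along the tilted edge, is omitted here.

```python
import numpy as np

def lsf_from_edge(edge_img):
    """Estimate a 1-D line spread function from a vertical knife-edge image.

    Averaging rows gives the edge spread function; differentiating it
    gives the LSF, normalized to unit area.
    """
    esf = edge_img.mean(axis=0)          # average across rows -> ESF
    lsf = np.gradient(esf)               # ESF derivative -> LSF
    s = lsf.sum()
    return lsf / s if s != 0 else lsf
```

The slant in the actual method exists precisely to beat the pixel-pitch sampling limit that this straight-edge version suffers from.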

  20. TH-E-17A-07: Improved Cine Four-Dimensional Computed Tomography (4D CT) Acquisition and Processing Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castillo, S; Castillo, R; Castillo, E

    2014-06-15

    Purpose: Artifacts arising from the 4D CT acquisition and post-processing methods add systematic uncertainty to the treatment planning process. We propose an alternate cine 4D CT acquisition and post-processing method to consistently reduce artifacts, and explore patient parameters indicative of image quality. Methods: In an IRB-approved protocol, 18 patients with primary thoracic malignancies received a standard cine 4D CT acquisition followed by an oversampling 4D CT that doubled the number of images acquired. A second cohort of 10 patients received the clinical 4D CT plus 3 oversampling scans for intra-fraction reproducibility. The clinical acquisitions were processed by the standard phase sorting method. The oversampling acquisitions were processed using Dijkstra's algorithm to optimize an artifact metric over the available image data. Image quality was evaluated with a one-way mixed ANOVA model using a correlation-based artifact metric calculated from the final 4D CT image sets. Spearman correlations and a linear mixed model tested the association between breathing parameters, patient characteristics, and image quality. Results: The oversampling 4D CT scans reduced artifact presence significantly, by 27% and 28% for the first and second cohorts, respectively. In cohort 2, the inter-replicate deviation for the oversampling method was within approximately 13% of the cross-scan average at the 0.05 significance level. Artifact presence for both the clinical and oversampling methods was significantly correlated with breathing period (ρ=0.407, p-value<0.032 clinical; ρ=0.296, p-value<0.041 oversampling). Artifact presence in the oversampling method was significantly correlated with the amount of data acquired (ρ=-0.335, p-value<0.02), indicating decreased artifact presence with increased breathing cycles per scan location.
Conclusion: The 4D CT oversampling acquisition with optimized sorting reduced artifact presence significantly and reproducibly compared to the phase-sorted clinical acquisition.
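    A correlation-based artifact metric of the general kind mentioned above can be sketched simply: adjacent slices of a well-sorted 4D CT should be highly correlated, so one minus the Pearson correlation of neighbouring slices flags sorting discontinuities. The exact metric optimized in the abstract is not specified there; this form is an illustrative assumption.

```python
import numpy as np

def artifact_metric(slices):
    """Mean (1 - r) over adjacent slice pairs, where r is the Pearson
    correlation of pixel values. Near 0 for smooth anatomical transitions;
    larger values suggest duplicated/missing-slice sorting artifacts."""
    scores = []
    for a, b in zip(slices[:-1], slices[1:]):
        r = np.corrcoef(a.ravel(), b.ravel())[0, 1]
        scores.append(1.0 - r)
    return float(np.mean(scores))
```

A Dijkstra-style sorter would then pick, per couch position, the candidate image sequence minimizing the summed metric along the path.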

  1. Blind image quality assessment without training on human opinion scores

    NASA Astrophysics Data System (ADS)

    Mittal, Anish; Soundararajan, Rajiv; Muralidhar, Gautam S.; Bovik, Alan C.; Ghosh, Joydeep

    2013-03-01

    We propose a family of image quality assessment (IQA) models based on natural scene statistics (NSS) that can predict the subjective quality of a distorted image without reference to a corresponding distortionless image and without any training on human opinion scores of distorted images. These "completely blind" models compete well with standard non-blind image quality indices in terms of subjective predictive performance when tested on the large, publicly available LIVE Image Quality database.
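    The NSS features underlying such "completely blind" models start from mean-subtracted contrast-normalized (MSCN) coefficients, whose statistics are remarkably regular for natural images and disturbed by distortion. The sketch below computes only the coefficients; the published models go further and fit a multivariate Gaussian to patch features of them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(gray, sigma=7.0 / 6.0, c=1.0):
    """Mean-subtracted contrast-normalized coefficients of a grayscale image.

    Local mean and deviation are computed with a Gaussian window;
    c is a stabilizing constant for flat regions.
    """
    gray = gray.astype(float)
    mu = gaussian_filter(gray, sigma)                  # local mean
    var = gaussian_filter(gray * gray, sigma) - mu**2  # local variance
    sd = np.sqrt(np.clip(var, 0.0, None))              # local deviation
    return (gray - mu) / (sd + c)
```

For a natural image the resulting coefficients are approximately zero-mean and symmetric; departures from that shape are what the blind models score.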

  2. A two-view ultrasound CAD system for spina bifida detection using Zernike features

    NASA Astrophysics Data System (ADS)

    Konur, Umut; Gürgen, Fikret; Varol, Füsun

    2011-03-01

    In this work, we address a very specific CAD (computer-aided detection/diagnosis) problem: detecting one of the relatively common birth defects, spina bifida, in the prenatal period. Fetal ultrasound images are used as the input imaging modality, which is currently the most convenient. Our approach decides using two particular views of the fetal neural tube: transcerebellar head (i.e., brain) and transverse (axial) spine images are processed to extract features, which are then used to classify healthy (normal), suspicious (probably defective) and non-decidable cases. Decisions raised by the two independent classifiers may be treated individually or, if data for both modalities are available, combined for greater confidence; even more confidence can be attained by using more than two modalities and basing the final decision on all of the resulting classifiers. Our current system relies on feature extraction from images for particular patients. The first step is image preprocessing and segmentation, to discard useless image pixels and represent the input in a more compact domain that is hopefully more representative for good classification performance. Next, features are extracted using Zernike moments computed on either B/W or gray-scale image segments. The aim here is to obtain values for indicative markers that signal the presence of spina bifida; markers differ depending on the image modality being used, and either shape or texture information captured by the moments may yield useful features. Finally, an SVM is trained to act as the decision maker. Our experimental results show that a promising CAD system can be realized for this specific purpose. On the other hand, the performance of such a system depends heavily on the quality of image preprocessing, segmentation and feature extraction, and on the comprehensiveness of the image data.

  3. Cyanoacrylate glue as an alternative mounting medium for resin-embedded semithin sections.

    PubMed

    Liu, Pei-Yun; Phillips, Gael E; Kempf, Margit; Cuttle, Leila; Kimble, Roy M; McMillan, James R

    2010-01-01

    Commercially available generic Superglue (cyanoacrylate glue) can be used as an alternative mounting medium for stained resin-embedded semithin sections. It is colourless and contains a volatile, quick-setting solvent that produces permanent mounts of semithin sections for immediate inspection under the light microscope. Here, we compare the use of cyanoacrylate glue for mounting semithin sections with classical dibutyl phthalate xylene (DPX) in terms of practical usefulness, effectiveness and the quality of the final microscopic image.

  4. National Aeronautics and Space Administration (NASA)/American Society for Engineering Education (ASEE) Summer Faculty Fellowship Program, 1991, Volume 1

    NASA Technical Reports Server (NTRS)

    Hyman, William A. (Editor); Goldstein, Stanley H. (Editor)

    1991-01-01

    Presented here is a compilation of the final reports of the research projects done by the faculty members during the summer of 1991. Topics covered include optical correlation; lunar production and application of solar cells and synthesis of diamond film; software quality assurance; photographic image resolution; target detection using fractal geometry; evaluation of fungal metabolic compounds released to the air in a restricted environment; and planning and resource management in an intelligent automated power management system.

  5. Diagnosis of Pediatric Appendicitis: Is MR Imaging More Appropriate than CT

    DTIC Science & Technology

    2017-04-30

    Abstract No: 16-077 (ePoster). Author(s): James Covelli, Justin Costello, Christian

  6. Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    Because of the low-cost, highly efficient image collection process and the rich 3D and texture information present in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has a promising market for future commercial use such as urban planning or first response. The methodology introduced in this thesis provides a feasible path towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. The point clouds are then refined by noise removal and surface smoothing. Since the point clouds extracted from different image groups use independent coordinate systems, translation, rotation and scale differences exist between them. To determine these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession; the final transform matrix presents the parameters describing the required translation, rotation and scale. The methodology presented in the thesis has been shown to behave well for test data. The robustness of the method is discussed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough result that contains a larger offset than for the test data because of the lower quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points.
Using the method introduced, point clouds extracted from different image groups could be combined with each other to build a more complete point cloud, or be used as a complement to existing point clouds extracted from other sources. This research will both improve the state of the art of 3D city modeling and inspire new ideas in related fields.

  7. Fully automated muscle quality assessment by Gabor filtering of second harmonic generation images

    NASA Astrophysics Data System (ADS)

    Paesen, Rik; Smolders, Sophie; Vega, José Manolo de Hoyos; Eijnde, Bert O.; Hansen, Dominique; Ameloot, Marcel

    2016-02-01

    Although structural changes on the sarcomere level of skeletal muscle are known to occur due to various pathologies, rigorous studies of the reduced sarcomere quality remain scarce. This can possibly be explained by the lack of an objective tool for analyzing and comparing sarcomere images across biological conditions. Recent developments in second harmonic generation (SHG) microscopy and increasing insight into the interpretation of sarcomere SHG intensity profiles have made SHG microscopy a valuable tool to study microstructural properties of sarcomeres. Typically, sarcomere integrity is analyzed by fitting a set of manually selected, one-dimensional SHG intensity profiles with a supramolecular SHG model. To circumvent this tedious manual selection step, we developed a fully automated image analysis procedure to map the sarcomere disorder for the entire image at once. The algorithm relies on a single-frequency wavelet-based Gabor approach and includes a newly developed normalization procedure allowing for unambiguous data interpretation. The method was validated by showing the correlation between the sarcomere disorder, quantified by the M-band size obtained from manually selected profiles, and the normalized Gabor value ranging from 0 to 1 for decreasing disorder. Finally, to elucidate the applicability of our newly developed protocol, Gabor analysis was used to study the effect of experimental autoimmune encephalomyelitis on the sarcomere regularity. We believe that the technique developed in this work holds great promise for high-throughput, unbiased, and automated image analysis to study sarcomere integrity by SHG microscopy.
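    The single-frequency Gabor approach described above probes how strongly an image contains a given spatial frequency at a given orientation, which is exactly what sarcomere striations present. Below is a generic real-valued Gabor kernel; the wavelet normalization specific to the paper's protocol is not reproduced here.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma, size=21):
    """Real part of a Gabor kernel: a Gaussian envelope times a cosine
    carrier of spatial frequency `freq` (cycles/pixel) at angle `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * freq * xr)
    return envelope * carrier
```

Correlating such a kernel (tuned to the sarcomere period) with an SHG image yields a per-pixel regularity response, which a normalization step can then map to a bounded disorder score.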

  8. Endockscope: using mobile technology to create global point of service endoscopy.

    PubMed

    Sohn, William; Shreim, Samir; Yoon, Renai; Huynh, Victor B; Dash, Atreya; Clayman, Ralph; Lee, Hak J

    2013-09-01

    Recent advances and the widespread availability of smartphones have ushered in a new wave of innovations in healthcare. We present our initial experience with Endockscope, a new docking system that optimizes the coupling of the iPhone 4S with modern endoscopes. Using the United States Air Force resolution target, we compared the image resolution (line pairs/mm) of a flexible cystoscope coupled to the Endockscope+iPhone with that of the Storz high-definition (HD) camera (H3-Z Versatile). We then used the Munsell ColorChecker chart to compare the color resolution with a 0° laparoscope. Furthermore, 12 expert endoscopists blindly compared and evaluated images from a porcine model using a cystoscope and ureteroscope for both systems. Finally, we also compared the cost (average of two company listed prices) and weight (lb) of the two systems. Overall, the image resolution allowed by the Endockscope was identical to that of the traditional HD camera (4.49 vs 4.49 lp/mm). Red (ΔE=9.26 vs 9.69) demonstrated better color resolution for the iPhone, but green (ΔE=7.76 vs 10.95) and blue (ΔE=12.35 vs 14.66) revealed better color resolution with the Storz HD camera. Expert reviews of cystoscopic images acquired with the HD camera were superior in image, color, and overall quality (P=0.002, 0.042, and 0.003). In contrast, the ureteroscopic reviews yielded no statistical difference in image, color, and overall quality (P=1, 0.203, and 0.120). The overall cost of the Endockscope+iPhone was $154 compared with $46,623 for a standard HD system. The weight of the mobile-coupled system was 0.47 lb versus 1.01 lb for the Storz HD camera. Endockscope demonstrated the feasibility of coupling endoscopes to a smartphone. The lighter, less expensive Endockscope acquired images of the same resolution and acceptable color resolution. When evaluated by expert endoscopists, the overall quality of the images was equivalent for flexible ureteroscopy and somewhat inferior, but still acceptable, for flexible cystoscopy.
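    The ΔE values reported above are colour differences in CIELAB space. The abstract does not state which ΔE variant was used; the sketch below implements CIE76, the simplest common form (straight Euclidean distance in L*a*b*).

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference between two CIELAB triples (L*, a*, b*).

    A ΔE around 2.3 is often cited as a just-noticeable difference.
    """
    lab1 = np.asarray(lab1, dtype=float)
    lab2 = np.asarray(lab2, dtype=float)
    return float(np.linalg.norm(lab1 - lab2))
```

Comparing each camera's measured patch values against the chart's reference L*a*b* values patch by patch yields per-colour ΔE figures like those quoted for red, green and blue.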

  9. Endockscope: Using Mobile Technology to Create Global Point of Service Endoscopy

    PubMed Central

    Sohn, William; Shreim, Samir; Yoon, Renai; Huynh, Victor B.; Dash, Atreya; Clayman, Ralph

    2013-01-01

    Abstract Background and Purpose Recent advances and the widespread availability of smartphones have ushered in a new wave of innovations in healthcare. We present our initial experience with Endockscope, a new docking system that optimizes the coupling of the iPhone 4S with modern endoscopes. Materials and Methods Using the United States Air Force resolution target, we compared the image resolution (line pairs/mm) of a flexible cystoscope coupled to the Endockscope+iPhone to the Storz high definition (HD) camera (H3-Z Versatile). We then used the Munsell ColorChecker chart to compare the color resolution with a 0° laparoscope. Furthermore, 12 expert endoscopists blindly compared and evaluated images from a porcine model using a cystoscope and ureteroscope for both systems. Finally, we also compared the cost (average of two company listed prices) and weight (lb) of the two systems. Results Overall, the image resolution allowed by the Endockscope was identical to the traditional HD camera (4.49 vs 4.49 lp/mm). Red (ΔE=9.26 vs 9.69) demonstrated better color resolution for iPhone, but green (ΔE=7.76 vs 10.95), and blue (ΔE=12.35 vs 14.66) revealed better color resolution with the Storz HD camera. Expert reviews of cystoscopic images acquired with the HD camera were superior in image, color, and overall quality (P=0.002, 0.042, and 0.003). In contrast, the ureteroscopic reviews yielded no statistical difference in image, color, and overall (P=1, 0.203, and 0.120) quality. The overall cost of the Endockscope+iPhone was $154 compared with $46,623 for a standard HD system. The weight of the mobile-coupled system was 0.47 lb and 1.01 lb for the Storz HD camera. Conclusion Endockscope demonstrated feasibility of coupling endoscopes to a smartphone. The lighter and inexpensive Endockscope acquired images of the same resolution and acceptable color resolution. 
When evaluated by expert endoscopists, the overall quality of the images was equivalent for flexible ureteroscopy and somewhat inferior, but still acceptable, for flexible cystoscopy. PMID:23701228

  10. Point spread function modeling and image restoration for cone-beam CT

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Huang, Kui-Dong; Shi, Yi-Kai; Xu, Zhe

    2015-03-01

    X-ray cone-beam computed tomography (CT) has such notable features as high efficiency and precision and is widely used in the fields of medical imaging and industrial non-destructive testing, but inherent imaging degradation reduces the quality of CT images. Aimed at the problems of projection image degradation and restoration in cone-beam CT, a point spread function (PSF) modeling method is proposed first. A general PSF model of cone-beam CT is established, and based on it, the PSF under arbitrary scanning conditions can be calculated directly for projection image restoration without additional measurement, which greatly improves the application convenience of cone-beam CT. Secondly, a projection image restoration algorithm based on pre-filtering and pre-segmentation is proposed, which makes the edge contours in projection images and slice images clearer after restoration while keeping the noise at a level equivalent to that of the original images. Finally, experiments verified the feasibility and effectiveness of the proposed methods. Supported by the National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), the Young Scientists Fund of the National Natural Science Foundation of China (51105315), the Natural Science Basic Research Program of Shaanxi Province of China (2013JM7003) and the Northwestern Polytechnical University Foundation for Fundamental Research (JC20120226, 3102014KYJD022).
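    Once a PSF is available, projection images can be restored by standard deconvolution. The paper's own algorithm adds pre-filtering and pre-segmentation; as a stand-in, the sketch below uses plain Richardson-Lucy iteration (a classical choice, not the authors' method), with FFT-based circular convolution.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=30):
    """Richardson-Lucy deblurring sketch for a known PSF.

    observed: degraded image (nonnegative); psf: small 2-D kernel, sum 1.
    Uses circular (FFT) convolution for simplicity.
    """
    observed = np.clip(np.asarray(observed, dtype=float), 0.0, None)
    eps = 1e-12
    # Pad the PSF to image size and centre it for FFT convolution.
    pad = np.zeros_like(observed)
    ph, pw = psf.shape
    pad[:ph, :pw] = psf
    pad = np.roll(pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    otf = np.fft.fft2(pad)
    est = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        conv = np.real(np.fft.ifft2(np.fft.fft2(est) * otf))
        ratio = observed / (conv + eps)
        # Multiplicative update: correlate the ratio with the flipped PSF.
        est *= np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
    return est
```

With a noiseless, model-consistent blur the iteration steadily sharpens edge contours, which is the behaviour the restoration step above relies on.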

  11. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality delivered by two popular JPEG2000 programs. The two medical image compression implementations both code images using JPEG2000 but differ in interface, convenience, computation speed, and the characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The quality of compressed medical images from two image compression programs, Apollo and JJ2000, was evaluated extensively using objective metrics. The algorithms were applied to three medical image modalities at compression ratios ranging from 10:1 to 100:1, and the quality of the reconstructed images was then evaluated using five objective metrics. Spearman rank correlation coefficients between the two programs were measured under every metric. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo implementations is statistically equivalent for medical image compression.
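    The abstract does not name its five objective metrics, but a typical one is PSNR, and the cross-program comparison uses Spearman rank correlation. The sketch below shows both pieces under that assumption.

```python
import numpy as np
from scipy.stats import spearmanr

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a
    reconstructed image; infinite for identical images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```

Given per-image metric scores from two codecs, `spearmanr(scores_a, scores_b)` returns the rank correlation r; r > 0.98 across all metrics is what let the authors call the two implementations statistically equivalent.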

  12. In vivo High Angular Resolution Diffusion-Weighted Imaging of Mouse Brain at 16.4 Tesla

    PubMed Central

    Alomair, Othman I.; Brereton, Ian M.; Smith, Maree T.; Galloway, Graham J.; Kurniawan, Nyoman D.

    2015-01-01

    Magnetic Resonance Imaging (MRI) of the rodent brain at ultra-high magnetic fields (>9.4 Tesla) offers a higher signal-to-noise ratio that can be exploited to reduce image acquisition time or provide higher spatial resolution. However, significant challenges are presented due to a combination of longer T1 and shorter T2/T2* relaxation times and increased sensitivity to magnetic susceptibility, resulting in severe local-field inhomogeneity artefacts from air pockets and bone/brain interfaces. The Stejskal-Tanner spin echo diffusion-weighted imaging (DWI) sequence is often used in high-field rodent brain MRI due to its immunity to these artefacts. To accurately determine diffusion-tensor or fibre-orientation distribution, high angular resolution diffusion imaging (HARDI) with strong diffusion weighting (b > 3000 s/mm2) and at least 30 diffusion-encoding directions is required. However, this results in long image acquisition times unsuitable for live animal imaging. In this study, we describe the optimization of HARDI acquisition parameters at 16.4T using a Stejskal-Tanner sequence with echo-planar imaging (EPI) readout. EPI segmentation and partial Fourier encoding acceleration were applied to reduce the echo time (TE), thereby minimizing signal decay and distortion artefacts while maintaining a reasonably short acquisition time. The final HARDI acquisition protocol was achieved with the following parameters: 4-shot EPI, b = 3000 s/mm2, 64 diffusion-encoding directions, 125×150 μm2 in-plane resolution, 0.6 mm slice thickness, and 2 h acquisition time. This protocol was used to image a cohort of adult C57BL/6 male mice, whereby the quality of the acquired data was assessed and diffusion tensor imaging (DTI) derived parameters were measured.
High-quality images with high spatial and angular resolution, low distortion and low variability in DTI-derived parameters were obtained, indicating that EPI-DWI is feasible at 16.4T to study animal models of white matter (WM) diseases. PMID:26110770
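    The b-value quoted above is set by the Stejskal-Tanner relation b = γ²G²δ²(Δ − δ/3), where G is the gradient amplitude, δ the gradient pulse duration, Δ the pulse separation, and γ the gyromagnetic ratio. A small calculator (the example gradient timings below are illustrative, not the protocol's actual values):

```python
def b_value(G, delta, Delta, gamma=2.675e8):
    """Stejskal-Tanner b-value in s/m^2: b = gamma^2 G^2 delta^2 (Delta - delta/3).

    G in T/m, delta and Delta in s, gamma in rad s^-1 T^-1
    (default: proton). Divide by 1e6 for the conventional s/mm^2.
    """
    return gamma**2 * G**2 * delta**2 * (Delta - delta / 3.0)
```

For instance, an assumed G = 0.5 T/m with δ = 2.5 ms and Δ = 12 ms lands in the b ≈ 1200 s/mm2 range, showing why strong preclinical gradients are needed to reach b = 3000 s/mm2 at short TE.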

  13. Radiometry rocks

    NASA Astrophysics Data System (ADS)

    Harvey, James E.

    2012-10-01

    Professor Bill Wolfe was an exceptional mentor for his graduate students, and he made a major contribution to the field of optical engineering by teaching the (largely ignored) principles of radiometry for over forty years. This paper describes an extension of Bill's work on surface scatter behavior and the application of the BRDF to practical optical engineering problems. Most currently-available image analysis codes require the BRDF data as input in order to calculate the image degradation from residual optical fabrication errors. This BRDF data is difficult to measure and rarely available for short EUV wavelengths of interest. Due to a smooth-surface approximation, the classical Rayleigh-Rice surface scatter theory cannot be used to calculate BRDFs from surface metrology data for even slightly rough surfaces. The classical Beckmann-Kirchhoff theory has a paraxial limitation and only provides a closed-form solution for Gaussian surfaces. Recognizing that surface scatter is a diffraction process, and by utilizing sound radiometric principles, we first developed a linear systems theory of non-paraxial scalar diffraction in which diffracted radiance is shift-invariant in direction cosine space. Since random rough surfaces are merely a superposition of sinusoidal phase gratings, it was a straightforward extension of this non-paraxial scalar diffraction theory to develop a unified surface scatter theory that is valid for moderately rough surfaces at arbitrary incident and scattered angles. Finally, the above two steps are combined to yield a linear systems approach to modeling image quality for systems suffering from a variety of image degradation mechanisms. A comparison of image quality predictions with experimental results taken from on-orbit Solar X-ray Imager (SXI) data is presented.

  14. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the effects of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manually editing the images' radiometry (captured at shallow depths) and of selecting the parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m, respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included (a) the definition of parameters for the point cloud filtering and the creation of a reference model, (b) the radiometric editing of images, followed by the creation of three improved models, and (c) the assessment of results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (10 m and 14 m) and different objects (part of a wreck and a small boat's wreck), in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.
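    Radiometric editing of shallow-water imagery often amounts to recovering contrast lost to attenuation and backscatter. As a minimal automated stand-in for the manual editing described above (the paper's actual editing was manual and more involved), a per-channel percentile stretch:

```python
import numpy as np

def percentile_stretch(channel, lo=2, hi=98):
    """Linearly rescale one image channel so the lo/hi percentiles map
    to 0 and 1, clipping outliers; a basic contrast enhancement."""
    a, b = np.percentile(channel, [lo, hi])
    out = (channel.astype(float) - a) / max(b - a, 1e-6)
    return np.clip(out, 0.0, 1.0)
```

Applying such a stretch per colour channel before feature matching tends to increase the number and reliability of tie points, which in turn affects the density of the resulting point cloud.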

  15. Blind image quality assessment based on aesthetic and statistical quality-aware features

    NASA Astrophysics Data System (ADS)

    Jenadeleh, Mohsen; Masaeli, Mohammad Masood; Moghaddam, Mohsen Ebrahimi

    2017-07-01

    The main goal of image quality assessment (IQA) methods is the emulation of human perceptual image quality judgments. Therefore, the correlation between the objective scores of these methods and human perceptual scores is considered their performance metric. Human judgment of image quality implicitly weighs many factors, such as aesthetics, semantics, context, and various types of visual distortions. The main idea of this paper is to use features commonly employed in image aesthetics assessment in order to improve the accuracy of blind image quality assessment (BIQA) methods. We propose an approach that enriches the features of BIQA methods by integrating a set of image aesthetics features with natural image statistics features derived from multiple domains. The proposed features have been used to augment five different state-of-the-art BIQA methods that rely on natural scene statistics features. Experiments were performed on seven benchmark image quality databases. The experimental results showed significant improvement in the accuracy of the methods.

  16. Evaluation of image quality of digital photo documentation of female genital injuries following sexual assault.

    PubMed

    Ernst, E J; Speck, Patricia M; Fitzpatrick, Joyce J

    2011-12-01

    With the patient's consent, physical injuries sustained in a sexual assault are evaluated and treated by the sexual assault nurse examiner (SANE) and documented on preprinted traumagrams and with photographs. Digital imaging is now available to the SANE for documentation of sexual assault injuries, but studies of the image quality of forensic digital imaging of female genital injuries after sexual assault were not found in the literature. The Photo Documentation Image Quality Scoring System (PDIQSS) was developed to rate the image quality of digital photo documentation of female genital injuries after sexual assault. Three expert observers performed evaluations on 30 separate images at two points in time. An image quality score, the sum of eight integral technical and anatomical attributes on the PDIQSS, was obtained for each image. Individual image quality ratings for each attribute were also determined. The results demonstrated a high level of image quality and agreement when measured in all dimensions. For the SANE in clinical practice, the results of this study indicate that a high degree of agreement exists between expert observers when using the PDIQSS to rate image quality of individual digital photographs of female genital injuries after sexual assault. © 2011 International Association of Forensic Nurses.

  17. Word recognition using a lexicon constrained by first/last character decisions

    NASA Astrophysics Data System (ADS)

    Zhao, Sheila X.; Srihari, Sargur N.

    1995-03-01

    In lexicon-based recognition of machine-printed word images, the given lexicon can be quite extensive, and recognition performance is closely related to its size: performance drops quickly as the lexicon grows. Here, we present an algorithm that improves word recognition performance by reducing the size of the given lexicon. The algorithm utilizes the information provided by the first and last characters of a word. Given a word image and a lexicon that contains the word in the image, the first and last characters are segmented and then recognized by a character classifier. The candidates consistent with the classifier's results are selected, yielding a sub-lexicon. A word shape analysis algorithm is then applied to produce the final ranking of the given lexicon. The algorithm was tested on a set of machine-printed gray-scale word images covering a wide range of print types and qualities.
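
    The lexicon-pruning step described above can be sketched in a few lines; the word list and the classifier's candidate sets below are illustrative stand-ins, not the paper's data:

```python
# Hypothetical sketch of lexicon reduction by first/last character decisions.
# The candidate sets stand in for the top outputs of a character classifier.

def reduce_lexicon(lexicon, first_candidates, last_candidates):
    """Keep only words whose first and last characters appear among the
    candidates returned by the character classifier."""
    return [w for w in lexicon
            if w and w[0] in first_candidates and w[-1] in last_candidates]

lexicon = ["quality", "quantity", "query", "banana", "pizza"]
# Suppose the classifier ranks {'q'} for the first character and
# {'y'} for the last character of the word image.
sub_lexicon = reduce_lexicon(lexicon, {"q"}, {"y"})
print(sub_lexicon)  # → ['quality', 'quantity', 'query']
```

    The smaller sub-lexicon is then passed on to the word shape analysis stage for final ranking.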

  18. Space-time light field rendering.

    PubMed

    Wang, Huamin; Sun, Mingxuan; Yang, Ruigang

    2007-01-01

    In this paper, we propose a novel framework called space-time light field rendering, which allows continuous exploration of a dynamic scene in both space and time. Compared to existing light field capture/rendering systems, it offers the capability of using unsynchronized video inputs and the added freedom of controlling the visualization in the temporal domain, such as smooth slow motion and temporal integration. In order to synthesize novel views from any viewpoint at any time instant, we develop a two-stage rendering algorithm. We first interpolate in the temporal domain to generate globally synchronized images using a robust spatial-temporal image registration algorithm followed by edge-preserving image morphing. We then interpolate these software-synchronized images in the spatial domain to synthesize the final view. In addition, we introduce a very accurate and robust algorithm to estimate subframe temporal offsets among input video sequences. Experimental results from unsynchronized videos with or without time stamps show that our approach is capable of maintaining photorealistic quality from a variety of real scenes.

  19. Image aesthetic quality evaluation using convolution neural network embedded learning

    NASA Astrophysics Data System (ADS)

    Li, Yu-xin; Pu, Yuan-yuan; Xu, Dan; Qian, Wen-hua; Wang, Li-peng

    2017-11-01

    A way of embedded learning convolution neural network (ELCNN) based on the image content is proposed to evaluate image aesthetic quality in this paper. Our approach can not only cope with the problem of small-scale data but also score image aesthetic quality. First, we compared Alexnet and VGG_S to determine which is more suitable for this image aesthetic quality evaluation task. Second, to further boost the aesthetic quality classification performance, we employ the image content to train aesthetic quality classification models; however, the training samples become smaller, and fine-tuning only once cannot make full use of the small-scale data set. Third, to solve this problem, we propose fine-tuning twice in succession, based on the aesthetic quality label and the content label respectively; the classification probability of the trained CNN models is then used to evaluate image aesthetic quality. The experiments are carried out on the small-scale Photo Quality data set. The experimental results show that the classification accuracy rates of our approach are higher than those of existing image aesthetic quality evaluation approaches.

  20. Fundamental techniques for resolution enhancement of average subsampled images

    NASA Astrophysics Data System (ADS)

    Shen, Day-Fann; Chiu, Chui-Wen

    2012-07-01

    Although single-image resolution enhancement, otherwise known as super-resolution, is widely regarded as an ill-posed inverse problem, we re-examine the fundamental relationship between a high-resolution (HR) image acquisition module and its low-resolution (LR) counterpart. Analysis shows that partial HR information is attenuated, but still exists in the LR version, through the fundamental averaging-and-subsampling process. As a result, we propose a modified Laplacian filter (MLF) and an intensity correction process (ICP) as the pre- and post-process, respectively, around an interpolation algorithm, to partially restore the attenuated information in a super-resolution (SR) enhanced image. Experiments show that the proposed MLF and ICP provide significant and consistent quality improvements on all 10 test images with three well-known interpolation methods, including bilinear, bi-cubic, and the SR graphical user interface program provided by Ecole Polytechnique Federale de Lausanne. The proposed MLF and ICP are simple to implement and generally applicable to all average-subsampled LR images. MLF and ICP, separately or together, can be integrated into most interpolation methods that attempt to restore the original HR contents. Finally, the idea of MLF and ICP can also be applied to average-subsampled one-dimensional signals.
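
    The averaging-and-subsampling acquisition model the analysis rests on, together with a plain 3x3 Laplacian sharpening standing in for the MLF (the abstract does not give the actual kernel, so this pre-filter is only an assumption-laden illustration), can be sketched as:

```python
# Sketch of the HR -> LR averaging-and-subsampling model, plus a plain
# 4-neighbour Laplacian sharpening as a stand-in for the modified
# Laplacian filter (MLF); the paper's actual kernel is not given here.

def average_subsample(img):
    """2x2 block averaging followed by subsampling: HR -> LR."""
    h, w = len(img), len(img[0])
    return [[(img[2*i][2*j] + img[2*i][2*j+1] +
              img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 4.0
             for j in range(w // 2)] for i in range(h // 2)]

def laplacian_sharpen(img, amount=0.5):
    """img + amount * (4-neighbour Laplacian); borders left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap = (4 * img[i][j] - img[i-1][j] - img[i+1][j]
                   - img[i][j-1] - img[i][j+1])
            out[i][j] = img[i][j] + amount * lap
    return out

hr = [[(i + j) % 7 for j in range(8)] for i in range(8)]
lr = average_subsample(hr)    # 4x4 LR image
pre = laplacian_sharpen(lr)   # pre-filter applied before interpolation
```

    An interpolation method (bilinear, bi-cubic, etc.) would then be applied to `pre`, followed by the intensity correction post-process.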

  1. Combination of image descriptors for the exploration of cultural photographic collections

    NASA Astrophysics Data System (ADS)

    Bhowmik, Neelanjan; Gouet-Brunet, Valérie; Bloch, Gabriel; Besson, Sylvain

    2017-01-01

    The rapid growth of image digitization and collections in recent years makes it challenging and burdensome to organize, categorize, and retrieve similar images from voluminous collections. Content-based image retrieval (CBIR) is immensely convenient in this context. A considerable number of local feature detectors and descriptors are present in the CBIR literature. We propose a model to anticipate the best feature combinations for image retrieval-related applications. Several spatial complementarity criteria of local feature detectors are analyzed and then engaged in a regression framework to find the optimal combination of detectors for a given dataset, better adapted to each given image; the proposed model is also useful for optimally fixing other parameters, such as the k in k-nearest neighbor retrieval. Three public datasets of various contents and sizes are employed to evaluate the proposal, which notably improves retrieval quality compared with classical approaches. Finally, the proposed image search engine is applied to the cultural photographic collections of a French museum, where it demonstrates its added value for the exploration and promotion of these contents at different levels, from their archiving up to their exhibition in or ex situ.

  2. A fast non-local means algorithm based on integral image and reconstructed similar kernel

    NASA Astrophysics Data System (ADS)

    Lin, Zheng; Song, Enmin

    2018-03-01

    Image denoising is one of the essential methods in digital image processing. The non-local means (NLM) approach is a remarkable denoising technique, but its computational time complexity is high. In this paper, we design a fast NLM algorithm based on an integral image and a reconstructed similar kernel. First, the integral image is introduced into the traditional NLM algorithm. In doing so, it eliminates a great deal of repetitive operations in the parallel processing, which greatly improves the running speed of the algorithm. Second, in order to amend the error of the integral image, we construct a similar window resembling the Gaussian kernel in a pyramidal stacking pattern. Finally, in order to eliminate the influence of replacing the Gaussian-weighted Euclidean distance with the plain Euclidean distance, we propose a scheme to construct a similar kernel with a size of 3 x 3 in a neighborhood window, which reduces the effect of noise on a single pixel. Experimental results demonstrate that the proposed algorithm is about seventeen times faster than the traditional NLM algorithm, yet produces comparable results in terms of Peak Signal-to-Noise Ratio (the PSNR increased by 2.9% on average) and perceptual image quality.
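
    The speed-up rests on the summed-area table (integral image): once built, the sum over any rectangular window costs four lookups. A minimal sketch follows; in fast NLM the table would be built over per-offset squared pixel differences, but here only the table itself and an O(1) window sum are shown:

```python
# Summed-area table: ii[i+1][j+1] holds the sum of img[0..i][0..j],
# so any rectangular window sum reduces to four table lookups.

def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0.0] * (w + 1) for _ in range(h + 1)]
    for i in range(h):
        for j in range(w):
            ii[i+1][j+1] = (img[i][j] + ii[i][j+1]
                            + ii[i+1][j] - ii[i][j])
    return ii

def window_sum(ii, top, left, bottom, right):
    """Sum of img[top..bottom][left..right] in O(1)."""
    return (ii[bottom+1][right+1] - ii[top][right+1]
            - ii[bottom+1][left] + ii[top][left])

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
ii = integral_image(img)
print(window_sum(ii, 0, 0, 1, 1))  # → 12.0  (1+2+4+5)
```

    Because every patch-distance window in NLM becomes a constant-time lookup instead of a per-pixel re-summation, the repeated work the abstract mentions disappears.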

  3. A Shearlet-based algorithm for quantum noise removal in low-dose CT images

    NASA Astrophysics Data System (ADS)

    Zhang, Aguan; Jiang, Huiqin; Ma, Ling; Liu, Yumin; Yang, Xiaopeng

    2016-03-01

    Low-dose CT (LDCT) scanning is a potential way to reduce the population's radiation exposure from X-rays, so it is necessary to improve the quality of low-dose CT images. In this paper, we propose an effective algorithm for quantum noise removal in LDCT images using the shearlet transform. Because quantum noise can be modeled as a Poisson process, we first transform it using the Anscombe variance stabilizing transform (VST), producing approximately Gaussian noise with unitary variance. Second, the noise-free shearlet coefficients are obtained by adaptive hard-threshold processing in the shearlet domain. Third, we reconstruct the de-noised image using the inverse shearlet transform. Finally, an inverse Anscombe transform is applied to the de-noised image, producing the improved image. The main contribution is to combine the Anscombe VST with the shearlet transform; in this way, edge coefficients and noise coefficients can be separated from the high-frequency sub-bands effectively. A number of experiments are performed on LDCT images using the proposed method. Both quantitative and visual results show that the proposed method can effectively reduce the quantum noise while enhancing subtle details. It has certain value in clinical application.
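
    The variance-stabilizing step can be sketched with the standard Anscombe transform and its plain algebraic inverse (the paper may well use an unbiased inverse; the textbook form is shown here):

```python
# Anscombe transform: maps Poisson-distributed counts to values with
# approximately unit-variance Gaussian noise, and back.
import math

def anscombe(x):
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0

counts = [0, 5, 50, 500]
stabilized = [anscombe(c) for c in counts]       # threshold in this domain
restored = [inverse_anscombe(y) for y in stabilized]
# the algebraic inverse round-trips exactly (up to float error)
```

    Shearlet-domain thresholding happens between the forward and inverse transforms; stabilization is what lets a single Gaussian threshold handle signal-dependent Poisson noise.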

  4. Image fusion method based on regional feature and improved bidimensional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Hu, Gang; Hu, Kai

    2018-01-01

    The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either in number or in frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and the residue component. Second, for the IMF component, a selection and weighted-average strategy based on local area energy is used to obtain the high-frequency fusion component. Third, for the residue component, a selection and weighted-average strategy based on local average gray difference is used to obtain the low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on the wavelet transform, line- and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.
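
    A minimal sketch of an energy-based rule for fusing the high-frequency (IMF) components, using a pure per-pixel selection strategy in place of the paper's combined selection/weighted-average scheme; the two tiny IMF arrays are illustrative only:

```python
# Per pixel, compare the local 3x3 energy of the two source IMFs and keep
# the coefficient from the IMF with the higher energy (pure selection; the
# paper blends selection with weighted averaging).

def local_energy(img, i, j):
    h, w = len(img), len(img[0])
    return sum(img[y][x] ** 2
               for y in range(max(0, i - 1), min(h, i + 2))
               for x in range(max(0, j - 1), min(w, j + 2)))

def fuse_by_energy(a, b):
    return [[a[i][j] if local_energy(a, i, j) >= local_energy(b, i, j)
             else b[i][j]
             for j in range(len(a[0]))] for i in range(len(a))]

imf_a = [[0.0, 0.1], [0.0, 0.2]]
imf_b = [[0.5, 0.0], [0.4, 0.0]]
fused = fuse_by_energy(imf_a, imf_b)  # imf_b dominates everywhere here
```

    The residue components would be fused with an analogous rule driven by local average gray difference, and the inverse BEMD then recombines the fused components.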

  5. Digital radiography: optimization of image quality and dose using multi-frequency software.

    PubMed

    Precht, H; Gerke, O; Rosendahl, K; Tingberg, A; Waaler, D

    2012-09-01

    New developments in the processing of digital radiographs (DR), including multi-frequency processing (MFP), allow optimization of image quality and radiation dose. This is particularly promising in children, as they are believed to be more sensitive to ionizing radiation than adults. The aim was to examine whether the use of MFP software reduces the radiation dose without compromising quality at DR of the femur in 5-year-old-equivalent anthropomorphic and technical phantoms. A total of 110 images of an anthropomorphic phantom were acquired on a DR system (Canon DR with CXDI-50 C detector and MLT[S] software) and analyzed by three pediatric radiologists using Visual Grading Analysis. In addition, 3,500 images of a technical contrast-detail phantom (CDRAD 2.0) provided an objective image quality assessment. Optimal image quality was maintained at a dose reduction of 61% with MLT(S)-optimized images. Even for images of diagnostic quality, MLT(S) provided a dose reduction of 88% compared with the reference image. The software's impact on image quality was significant for dose (mAs), dynamic-range dark region and frequency band. By optimizing image processing parameters, a significant dose reduction is possible without significant loss of image quality.

  6. Correlation of the clinical and physical image quality in chest radiography for average adults with a computed radiography imaging system.

    PubMed

    Moore, C S; Wood, T J; Beavis, A W; Saunderson, J R

    2013-07-01

    The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson's correlation coefficient. Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography.
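
    The contrast-to-noise ratio can be sketched with its common uniform-phantom definition (the study's exact operational definition may differ); the pixel values below are made up for illustration:

```python
# CNR: difference between object and background mean pixel values,
# divided by the background noise (population standard deviation).
from statistics import mean, pstdev

def cnr(roi_object, roi_background):
    return abs(mean(roi_object) - mean(roi_background)) / pstdev(roi_background)

obj = [110, 112, 111, 109, 113]   # pixels inside a contrast insert
bkg = [100, 102, 98, 101, 99]     # pixels in the uniform background
print(round(cnr(obj, bkg), 2))    # → 7.78
```

    A metric like this, computed on phantom images across tube voltages, is what was correlated against the visual grading analysis scores.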

  7. Windows on the Human Body – in Vivo High-Field Magnetic Resonance Research and Applications in Medicine and Psychology

    PubMed Central

    Moser, Ewald; Meyerspeer, Martin; Fischmeister, Florian Ph. S.; Grabner, Günther; Bauer, Herbert; Trattnig, Siegfried

    2010-01-01

    Analogous to the evolution of biological sensor-systems, the progress in “medical sensor-systems”, i.e., diagnostic procedures, is paradigmatically described. Outstanding highlights of this progress are magnetic resonance imaging (MRI) and spectroscopy (MRS), which enable non-invasive, in vivo acquisition of morphological, functional, and metabolic information from the human body with unsurpassed quality. Recent achievements in high and ultra-high field MR (at 3 and 7 Tesla) are described, and representative research applications in Medicine and Psychology in Austria are discussed. Finally, an overview of current and prospective research in multi-modal imaging, potential clinical applications, as well as current limitations and challenges is given. PMID:22219684

  8. Fast estimation of first-order scattering in a medical x-ray computed tomography scanner using a ray-tracing technique.

    PubMed

    Liu, Xin

    2014-01-01

    This study describes a deterministic method for simulating the first-order scattering in a medical computed tomography scanner. The method was developed based on a physics model of x-ray photon interactions with matter and a ray tracing technique. The results from simulated scattering were compared to the ones from an actual scattering measurement. Two phantoms with homogeneous and heterogeneous material distributions were used in the scattering simulation and measurement. It was found that the simulated scatter profile was in agreement with the measurement result, with an average difference of 25% or less. Finally, tomographic images with artifacts caused by scatter were corrected based on the simulated scatter profiles. The image quality improved significantly.

  9. Process perspective on image quality evaluation

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes; ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape and a countryside. In addition to the pair-wise comparisons, the observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations of the test image content, but not of the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention; the lack of easily recognizable context in the test image may have contributed to it. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  10. Importance of methodology on (99m)technetium dimercapto-succinic acid scintigraphic image quality: imaging pilot study for RIVUR (Randomized Intervention for Children With Vesicoureteral Reflux) multicenter investigation.

    PubMed

    Ziessman, Harvey A; Majd, Massoud

    2009-07-01

    We reviewed our experience with (99m)technetium dimercapto-succinic acid scintigraphy obtained during an imaging pilot study for a multicenter investigation (Randomized Intervention for Children With Vesicoureteral Reflux) of the effectiveness of daily antimicrobial prophylaxis for preventing recurrent urinary tract infection and renal scarring. We analyzed imaging methodology and its relation to diagnostic image quality. (99m)Technetium dimercapto-succinic acid imaging guidelines were provided to participating sites. High-resolution planar imaging with parallel hole or pinhole collimation was required. Two core reviewers evaluated all submitted images. Analysis included appropriate views, presence or lack of patient motion, adequate magnification, sufficient counts and diagnostic image quality. Inter-reader agreement was evaluated. We evaluated 70 (99m)technetium dimercapto-succinic acid studies from 14 institutions. Variability was noted in methodology and image quality. Correlation (r value) between dose administered and patient age was 0.780. For parallel hole collimator imaging good correlation was noted between activity administered and counts (r = 0.800). For pinhole imaging the correlation was poor (r = 0.110). A total of 10 studies (17%) were rejected for quality issues of motion, kidney overlap, inadequate magnification, inadequate counts and poor quality images. The submitting institution was informed and provided with recommendations for improving quality, and resubmission of another study was required. Only 4 studies (6%) were judged differently by the 2 reviewers, and the differences were minor. Methodology and image quality for (99m)technetium dimercapto-succinic acid scintigraphy varied more than expected between institutions. The most common reason for poor image quality was inadequate count acquisition with insufficient attention to the tradeoff between administered dose, length of image acquisition, start time of imaging and resulting image quality. Inter-observer core reader agreement was high. The pilot study ensured good diagnostic quality standardized images for the Randomized Intervention for Children With Vesicoureteral Reflux investigation.

  11. How to design PET experiments to study neurochemistry: application to alcoholism.

    PubMed

    Morris, Evan D; Lucas, Molly V; Petrulli, J Ryan; Cosgrove, Kelly P

    2014-03-01

    Positron Emission Tomography (PET) (and the related Single Photon Emission Computed Tomography) is a powerful imaging tool with a molecular specificity and sensitivity that are unique among imaging modalities. PET excels in the study of neurochemistry in three ways: 1) It can detect and quantify neuroreceptor molecules; 2) it can detect and quantify changes in neurotransmitters; and 3) it can detect and quantify exogenous drugs delivered to the brain. To carry out any of these applications, the user must harness the power of kinetic modeling. Further, the quality of the information gained is only as good as the soundness of the experimental design. This article reviews the concepts behind the three main uses of PET, the rationale behind kinetic modeling of PET data, and some of the key considerations when planning a PET experiment. Finally, some examples of PET imaging related to the study of alcoholism are discussed and critiqued.

  12. Joint detection of anatomical points on surface meshes and color images for visual registration of 3D dental models

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël.; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves

    2015-04-01

    Computer-aided planning for orthodontic treatment requires knowing the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting by extracting corresponding features in both photographs and 3D scans. To achieve this, the dental neck and the occlusion surface are first extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge image detection is improved by an original algorithm that follows edges poorly detected by the Canny detector, using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.

  13. Shrink-wrapped isosurface from cross sectional images

    PubMed Central

    Choi, Y. K.; Hahn, J. K.

    2010-01-01

    Summary This paper addresses a new surface reconstruction scheme for approximating the isosurface from a set of tomographic cross-sectional images. Unlike the well-known Marching Cubes (MC) algorithm, our method does not extract the iso-density surface (isosurface) directly from the voxel data but calculates the iso-density point (isopoint) first. After building a coarse initial mesh approximating the ideal isosurface by the cell-boundary representation, it metamorphoses the mesh into the final isosurface by a relaxation scheme, called the shrink-wrapping process. Compared with the MC algorithm, our method is robust and does not produce any cracks on the surface. Furthermore, since it is possible to utilize many additional isopoints during the surface reconstruction process by extending the adjacency definition, the resulting surface can theoretically be of better quality than that of the MC algorithm. According to experiments, the method proves to be very robust and efficient for isosurface reconstruction from cross-sectional images. PMID:20703361

  14. How to Design PET Experiments to Study Neurochemistry: Application to Alcoholism

    PubMed Central

    Morris, Evan D.; Lucas, Molly V.; Petrulli, J. Ryan; Cosgrove, Kelly P.

    2014-01-01

    Positron Emission Tomography (PET) (and the related Single Photon Emission Computed Tomography) is a powerful imaging tool with a molecular specificity and sensitivity that are unique among imaging modalities. PET excels in the study of neurochemistry in three ways: 1) It can detect and quantify neuroreceptor molecules; 2) it can detect and quantify changes in neurotransmitters; and 3) it can detect and quantify exogenous drugs delivered to the brain. To carry out any of these applications, the user must harness the power of kinetic modeling. Further, the quality of the information gained is only as good as the soundness of the experimental design. This article reviews the concepts behind the three main uses of PET, the rationale behind kinetic modeling of PET data, and some of the key considerations when planning a PET experiment. Finally, some examples of PET imaging related to the study of alcoholism are discussed and critiqued. PMID:24600335

  15. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    NASA Astrophysics Data System (ADS)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

    There are a number of powerful total variation (TV) regularization methods that hold great promise for limited-data cone-beam CT reconstruction with enhanced image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms, and an appropriate way of selecting the values for each individual parameter is suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements an edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm preserves the edges of the reconstructed images better, with fewer sensitive parameters to tune.

  16. High resolution earth observation from geostationary orbit by optical aperture synthesis

    NASA Astrophysics Data System (ADS)

    Mesrine, M.; Thomas, E.; Garin, S.; Blanc, P.; Alis, C.; Cassaing, F.; Laubier, D.

    2017-11-01

    In this paper, we describe Optical Aperture Synthesis (OAS) imaging instrument concepts studied by Alcatel Alenia Space under a CNES R&T contract in terms of technical feasibility. First, a methodology to select the aperture configuration is proposed, based on the definition and quantification of image quality criteria adapted to an OAS instrument for direct imaging of extended objects. The following section presents, for each interferometer type (Michelson and Fizeau), the corresponding optical configurations compatible with a large field of view from GEO orbit. These optical concepts take into account the constraints imposed by the foreseen resolution and the implementation of the co-phasing functions. The fourth section is dedicated to the analysis of the co-phasing methodologies, from configuration deployment to fine stabilization during observation. Finally, we present a trade-off analysis allowing the concept to be selected with respect to the mission specification and the constraints related to instrument accommodation under the launcher shroud and in-orbit deployment.

  17. Rolling Shutter Effect aberration compensation in Digital Holographic Microscopy

    NASA Astrophysics Data System (ADS)

    Monaldi, Andrea C.; Romero, Gladis G.; Cabrera, Carlos M.; Blanc, Adriana V.; Alanís, Elvio E.

    2016-05-01

    Due to the sequential-readout nature of most CMOS sensors, each row of the sensor array is exposed at a different time, resulting in the so-called rolling shutter effect, which induces geometric distortion in the image if the video camera or the object moves during image acquisition. Particularly in digital hologram recording, while the sensor captures each row of the hologram progressively, the interferometric fringes can oscillate due to external vibrations and/or noise, even when the object under study remains motionless. The sensor records each hologram row at a different instant of these disturbances. As a final effect, the phase information is corrupted, degrading the quality of the reconstructed holograms. We present a fast and simple method for compensating this effect based on image processing tools. The method is exemplified with holograms of static microscopic biological objects. The results encourage adopting CMOS sensors over CCDs in Digital Holographic Microscopy, owing to their better resolution and lower cost.

  18. Image Quality Assessment of High-Resolution Satellite Images with Mtf-Based Fuzzy Comprehensive Evaluation Method

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Luo, Z.; Zhang, Y.; Guo, F.; He, L.

    2018-04-01

    A Modulation Transfer Function (MTF)-based fuzzy comprehensive evaluation method is proposed in this paper for evaluating high-resolution satellite image quality. To establish the factor set, two MTF features and seven radiometric features were extracted from the knife-edge region of each image patch: Nyquist MTF, MTF0.5, entropy, peak signal-to-noise ratio (PSNR), average difference, edge intensity, average gradient, contrast and ground sample distance (GSD). After analyzing the statistical distribution of these features, a fuzzy evaluation threshold table and fuzzy evaluation membership functions were established. Experiments for comprehensive quality assessment of different natural and artificial objects were carried out with GF2 image patches. The results showed that the calibration field image has the highest quality score; the water image is closest in quality to the calibration field; and the building image is slightly poorer than the water image but much better than the farmland image. To test the influence of different features on quality evaluation, experiments with different weights were run on GF2 and SPOT7 images. The results showed that different weights yield different evaluation outcomes: when the weights emphasize edge features and GSD, the image quality of GF2 is better than that of SPOT7, whereas when MTF and PSNR are the main factors, the image quality of SPOT7 is better than that of GF2.
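The composition step of a fuzzy comprehensive evaluation like the one described can be sketched in a few lines. The membership matrix, grade set and weights below are illustrative placeholders, not the paper's calibrated values:

```python
import numpy as np

# Hypothetical membership matrix R: rows are factors (MTF at Nyquist, PSNR,
# entropy), columns are quality grades (excellent, good, fair, poor).
R = np.array([
    [0.6, 0.3, 0.1, 0.0],   # MTF at Nyquist
    [0.2, 0.5, 0.2, 0.1],   # PSNR
    [0.1, 0.3, 0.5, 0.1],   # entropy
])

# Factor weights W (summing to 1); changing them shifts the verdict, which is
# the kind of weight experiment the abstract describes on GF2 and SPOT7 images.
W = np.array([0.5, 0.3, 0.2])

# Weighted-average composition operator: B[j] = sum_i W[i] * R[i, j]
B = W @ R
grade = int(np.argmax(B))   # index of the winning quality grade
print(B, grade)
```

With the maximum-membership rule shown, the verdict is simply the grade whose composed membership is largest; a score could also be formed as an expectation over grade values.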

  19. Digitized hand-wrist radiographs: comparison of subjective and software-derived image quality at various compression ratios.

    PubMed

    McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R

    2007-05-01

    The objectives of this study were to assess the effect of JPEG 2000 compression of hand-wrist radiographs on observers' qualitative image quality ratings and to compare those ratings with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and as JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who rated image quality on a scale of 1 to 5. A quantitative analysis was also performed by using readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P ≤ .05). When we compared subjective indexes, JPEG 2000 compression greater than 60:1 significantly reduced image quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R² > 0.92) between the qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.
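The study's quantitative index (Image Quality Measure) models the human visual system; as a much simpler stand-in, the widely used PSNR illustrates how a quantitative index falls monotonically as compression loss grows. The quantization loop below only emulates increasingly lossy compression and is not JPEG 2000:

```python
import numpy as np

def psnr(original, degraded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-size images."""
    mse = np.mean((np.asarray(original, float) - np.asarray(degraded, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(float)

# Crudely emulate increasing compression with coarser quantization steps;
# a real study would decode actual JPEG 2000 files at 20:1 ... 80:1.
scores = {}
for step in (4, 16, 64):
    degraded = np.round(image / step) * step
    scores[step] = psnr(image, degraded)
print({s: round(q, 2) for s, q in scores.items()})
```

The monotone drop in PSNR with quantization step mirrors the study's finding that the quantitative index degrades at every compression ratio, even where subjective ratings do not.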

  20. Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program

    NASA Astrophysics Data System (ADS)

    Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier

    2005-04-01

    Data acquisition and processing for quality controls of gamma cameras and Positron Emission Tomography (PET) cameras are commonly performed with dedicated program packages that run only on the manufacturers' computers and differ from each other depending on the camera company and program version. The aim of this work was to develop a free open-source program (written in the JAVA language) to analyze data for quality control of gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS), and thus on any type of computer in every nuclear medicine department. Based on standard quality control parameters, this program includes 1) for gamma cameras: a center-of-rotation control (extracted from the American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (extracted from the Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); 2) for PET systems: three quality controls recently defined by the French Society of Medical Physics (SFPM), i.e. spatial resolution and uniformity in a reconstructed slice, and scatter fraction. The determination of spatial resolution (via a Point Spread Function, PSF, acquisition) allows computation of the Modulation Transfer Function (MTF) for both camera modalities. All the control functions are included in a tool box, a free ImageJ plugin that will soon be downloadable from the Internet. In addition, the program can save the uniformity quality control results in HTML format, and a warning can be set to automatically inform users of abnormal results. The architecture of the program allows users to easily add any other specific quality control routine. Finally, this toolkit is an easy and robust tool to perform quality control of gamma cameras and PET cameras based on standard computation parameters; it is free, runs on any type of computer and will soon be downloadable from the net (http://rsb.info.nih.gov/ij/plugins or http://nucleartoolkit.free.fr).
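One of the NEMA-style uniformity figures such a plugin computes is integral uniformity over a flood-field image. The sketch below is deliberately simplified (it skips NEMA's 9-point smoothing kernel and the useful/central field-of-view cropping), and the flood image is synthetic:

```python
import numpy as np

def integral_uniformity(flood):
    """NEMA-style integral uniformity (%) of a flood-field image:
    100 * (max - min) / (max + min) over the analyzed region."""
    hi, lo = float(flood.max()), float(flood.min())
    return 100.0 * (hi - lo) / (hi + lo)

# Hypothetical flood acquisition: a 1000-count field with a mild 5% left-right
# gradient, giving extremes of 975 and 1025 counts.
ramp = 1.0 + 0.05 * np.linspace(-0.5, 0.5, 64)
flood = 1000.0 * np.tile(ramp, (64, 1))
iu = integral_uniformity(flood)
print(round(iu, 2))
```

A real control would compare this figure against the acceptance threshold in the relevant norm and raise the HTML-report warning the abstract mentions when it is exceeded.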

  1. Image quality scaling of electrophotographic prints

    NASA Astrophysics Data System (ADS)

    Johnson, Garrett M.; Patil, Rohit A.; Montag, Ethan D.; Fairchild, Mark D.

    2003-12-01

    Two psychophysical experiments were performed scaling overall image quality of black-and-white electrophotographic (EP) images. Six different printers were used to generate the images. There were six different scenes included in the experiment, representing photographs, business graphics, and test-targets. The two experiments were split into a paired-comparison experiment examining overall image quality, and a triad experiment judging overall similarity and dissimilarity of the printed images. The paired-comparison experiment was analyzed using Thurstone's Law, to generate an interval scale of quality, and with dual scaling, to determine the independent dimensions used for categorical scaling. The triad experiment was analyzed using multidimensional scaling to generate a psychological stimulus space. The psychophysical results indicated that the image quality was judged mainly along one dimension and that the relationships among the images can be described with a single dimension in most cases. Regression of various physical measurements of the images to the paired comparison results showed that a small number of physical attributes of the images could be correlated with the psychophysical scale of image quality. However, global image difference metrics did not correlate well with image quality.
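Thurstone's Law (Case V) turns paired-comparison win proportions into an interval quality scale via the inverse normal CDF. The win counts below are invented for three hypothetical printers; the standard library's NormalDist is used rather than scipy:

```python
import numpy as np
from statistics import NormalDist

# Hypothetical counts: wins[i, j] = observers preferring printer i's image over j's.
wins = np.array([
    [ 0, 14, 17],
    [ 6,  0, 12],
    [ 3,  8,  0],
])
n = 20  # observers per pair

# Proportion preferring i over j, clipped away from 0/1 to keep z finite.
P = np.clip(wins / n, 0.02, 0.98)
np.fill_diagonal(P, 0.5)

# Case V: z-transform the proportions; each row mean estimates a scale value.
Z = np.vectorize(NormalDist().inv_cdf)(P)
scale = Z.mean(axis=1)          # higher = better perceived quality
scale -= scale.min()            # anchor the interval scale at zero
print(np.round(scale, 3))
```

The resulting numbers form an interval scale of perceived quality, which is what the paper regresses physical image measurements against.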

  2. SFM Technique and Focus Stacking for Digital Documentation of Archaeological Artifacts

    NASA Astrophysics Data System (ADS)

    Clini, P.; Frapiccini, N.; Mengoni, M.; Nespeca, R.; Ruggeri, L.

    2016-06-01

    Digital documentation and high-quality 3D representation are increasingly requested in many disciplines, thanks to the large number of technologies and data sources available for fast, detailed documentation. This work investigates medium and small sized artefacts and presents a fast, low-cost acquisition system that guarantees the creation of 3D models with a high level of detail, making the digitization of cultural heritage a simple and fast procedure. The 3D models of the artefacts are created with the Structure From Motion photogrammetric technique, which yields, in addition to three-dimensional models, high-definition images for in-depth study and understanding of the artefacts. For the survey of small objects (a few centimetres across), a macro lens is used together with focus stacking, a photographic technique that captures a stack of images at different focus planes for each camera pose so that a final image with a greater depth of field can be obtained. The acquisition with the focus stacking technique was finally validated against an acquisition with a Minolta laser triangulation scanner, which showed agreement within the allowable error for the expected precision.
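The per-pixel core of focus stacking can be sketched with a Laplacian focus measure: for every pixel, keep the value from the slice where local contrast is highest. This toy version assumes pre-aligned slices and uses a wrap-around Laplacian for brevity; production stackers also align and blend across seams:

```python
import numpy as np

def laplacian(img):
    """Simple 4-neighbour Laplacian magnitude as a per-pixel focus measure."""
    return np.abs(
        -4 * img
        + np.roll(img, 1, 0) + np.roll(img, -1, 0)
        + np.roll(img, 1, 1) + np.roll(img, -1, 1)
    )

def focus_stack(stack):
    """Fuse same-pose images focused at different depths by keeping,
    per pixel, the value from the sharpest slice."""
    sharpness = np.stack([laplacian(s) for s in stack])
    best = np.argmax(sharpness, axis=0)          # winning slice index per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols], best

# Two synthetic slices: each is sharp (checkerboard) in one half, defocused
# (flat) in the other, mimicking a shallow macro depth of field.
a = np.zeros((8, 8)); a[:, :4] = np.indices((8, 4)).sum(axis=0) % 2
b = np.zeros((8, 8)); b[:, 4:] = np.indices((8, 4)).sum(axis=0) % 2
fused, best = focus_stack(np.stack([a, b]))
print(best)
```

The `best` map shows the left half taken from the first slice and the right half from the second, i.e. each region comes from the image focused on it.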

  3. Development of a multichannel hyperspectral imaging probe for food property and quality assessment

    NASA Astrophysics Data System (ADS)

    Huang, Yuping; Lu, Renfu; Chen, Kunjie

    2017-05-01

    This paper reports on the development, calibration and evaluation of a new multipurpose, multichannel hyperspectral imaging probe for property and quality assessment of food products. The new multichannel probe consists of a 910 μm fiber as a point light source and 30 light receiving fibers of three sizes (i.e., 50 μm, 105 μm and 200 μm) arranged in a special pattern to enhance signal acquisitions over the spatial distances of up to 36 mm. The multichannel probe allows simultaneous acquisition of 30 spatially-resolved reflectance spectra of food samples with either flat or curved surface over the spectral region of 550-1,650 nm. The measured reflectance spectra can be used for estimating the optical scattering and absorption properties of food samples, as well as for assessing the tissues of the samples at different depths. Several calibration procedures that are unique to this probe were carried out; they included linearity calibrations for each channel of the hyperspectral imaging system to ensure consistent linear responses of individual channels, and spectral response calibrations of individual channels for each fiber size group and between the three groups of different size fibers. Finally, applications of this new multichannel probe were demonstrated through the optical property measurement of liquid model samples and tomatoes of different maturity levels. The multichannel probe offers new capabilities for optical property measurement and quality detection of food and agricultural products.

  4. The Estimation of Precisions in the Planning of Uas Photogrammetric Surveys

    NASA Astrophysics Data System (ADS)

    Passoni, D.; Federici, B.; Ferrando, I.; Gagliolo, S.; Sguerso, D.

    2018-05-01

    The Unmanned Aerial System (UAS) is widely used in photogrammetric surveys of both structures and small areas. Geomatics focuses attention on the metric quality of the final products of the survey, creating several 3D modelling applications from UAS images. As is widely known, the quality of the results derives from the quality of the image acquisition phase, which needs an a priori estimation of the expected precisions. The planning phase is typically managed using dedicated tools adapted from traditional aerial-photogrammetric flight planning, but a UAS flight has completely different characteristics from a traditional one. Hence, the use of UAS for photogrammetric applications today requires improved planning knowledge. The basic idea of this research is to provide a drone photogrammetric flight planning tool that considers the required metric precisions, given a priori the classical parameters of photogrammetric planning: flight altitude, overlaps and the geometric parameters of the camera. The created "office suite" allows realistic planning of a photogrammetric survey, starting from an approximate knowledge of the Digital Surface Model (DSM) and the effective attitude parameters, which change along the route. The planning products are the overlap of the images, the Ground Sample Distance (GSD) and the precision of each pixel, taking the real geometry into account. The different procedures tested, the results obtained and the solution proposed for the a priori estimation of precisions in the particular case of UAS surveys are reported here.
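One planning quantity such a tool outputs, the Ground Sample Distance, follows directly from the camera geometry in the nadir case: GSD = pixel pitch x altitude / focal length. The camera values below are illustrative, not from the paper:

```python
def ground_sample_distance(pixel_pitch_m, focal_length_m, altitude_m):
    """Ground footprint of one pixel for a nadir-pointing frame camera."""
    return pixel_pitch_m * altitude_m / focal_length_m

# Hypothetical UAS setup: 4.4 um pixels, 15 mm lens, 100 m flight altitude.
gsd = ground_sample_distance(4.4e-6, 15e-3, 100.0)
print(round(gsd * 100, 2), "cm/pixel")
```

Over varying terrain the effective altitude (and hence GSD) changes per image, which is why the abstract stresses starting from an approximate DSM and the actual attitude parameters along the route.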

  5. Volume measurement of cryogenic deuterium pellets by Bayesian analysis of single shadowgraphy images

    NASA Astrophysics Data System (ADS)

    Szepesi, T.; Kálvin, S.; Kocsis, G.; Lang, P. T.; Wittmann, C.

    2008-03-01

    In situ commissioning of the Blower-gun injector for launching cryogenic deuterium pellets at the ASDEX Upgrade tokamak was performed. This injector is designed for highly repetitive launch of small pellets for edge localized mode pacing experiments. During the investigation, the final injection geometry was simulated with pellets passing to the torus through a 5.5 m long guiding tube. To investigate pellet quality at launch and after tube passage, laser flash camera shadowgraphy diagnostic units were installed before and after the tube. As an indicator of pellet quality we adopted the pellet mass, represented by the volume of the main remaining pellet fragment. Since only two-dimensional (2D) shadow images were obtained, a reconstruction of the full three-dimensional pellet body had to be performed. For this, the image was first converted into a 1-bit version prescribing an exact 2D contour. From this contour the expected value of the volume was calculated by Bayesian analysis, taking into account the likely cylindrical shape of the pellet. Under appropriate injection conditions, sound pellets with more than half of their nominal mass are detected after acceleration; on average, the tube passage causes an additional loss of about 40% of the launched mass. Analyzing pellets arriving at the tube exit allowed the injector's optimized operational conditions to be derived; under these, more than 90% of the pellets arrived with sound quality when operating in the 5-50 Hz frequency range.
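Before any Bayesian refinement, a first-order volume estimate follows from the 1-bit shadow alone: treat the extent of the contour as the length and diameter of a cylinder. The mask below is synthetic, and the paper's method additionally folds in shape priors and uncertainty:

```python
import numpy as np

def cylinder_volume_from_shadow(mask):
    """Crude volume (in pixel^3) from a binary 2-D shadow of a cylindrical
    pellet: use the shadow's row/column extent as cylinder length x diameter."""
    height = int(np.any(mask, axis=1).sum())    # rows touched = cylinder length
    diameter = int(np.any(mask, axis=0).sum())  # columns touched = diameter
    return np.pi * (diameter / 2.0) ** 2 * height

# Synthetic 1-bit shadow: a 10-pixel-long, 6-pixel-wide rectangular silhouette.
mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 7:13] = True
vol = cylinder_volume_from_shadow(mask)
print(round(vol, 1))
```

Multiplying by the calibrated pixel size cubed converts this to a physical volume, from which the fragment mass follows via the deuterium-ice density.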

  6. [Object Separation from Medical X-Ray Images Based on ICA].

    PubMed

    Li, Yan; Yu, Chun-yu; Miao, Ya-jian; Fei, Bin; Zhuang, Feng-yun

    2015-03-01

    X-ray medical images can reveal a patient's diseased tissue and have important reference value for medical diagnosis. To address the problems of noise, poor tonal depth and aliasing of overlapping organs in traditional X-ray images, this paper proposes a method that introduces multi-spectrum X-ray imaging and an independent component analysis (ICA) algorithm to separate the target object. First, image de-noising preprocessing based on independent component analysis and sparse code shrinkage ensures the accuracy of target extraction. Then, according to the main proportion of each organ in the images, the aliasing thickness matrix of each pixel is isolated. Finally, independent component analysis obtains a convergence matrix to reconstruct the target object using blind source separation theory. With the ICA algorithm, it was found that when the number of iterations exceeds 40, the target objects separate successfully according to a subjective evaluation standard, and when the scale amplitudes lie in the [25, 45] interval, the target images show high contrast and little distortion. A three-dimensional plot of the peak signal-to-noise ratio (PSNR) shows that the number of iterations and the amplitude both have a strong influence on image quality. The contrast and edge information of the experimental images are best with 85 iterations and an amplitude of 35 in the ICA algorithm.

  7. Hyperspectral imaging simulation of object under sea-sky background

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Lin, Jia-xuan; Gao, Wei; Yue, Hui

    2016-10-01

    Remote sensing image simulation plays an important role in spaceborne/airborne payload demonstration and algorithm development. Hyperspectral imaging is valuable in marine monitoring and in search and rescue. To meet the demand for spectral imaging of objects in complex sea scenes, a physics-based method for simulating spectral images of objects in a sea scene is proposed. By developing an imaging simulation model that considers the object, background, atmospheric conditions and sensor, it becomes possible to examine the influence of wind speed, atmospheric conditions and other environmental factors on spectral image quality in complex sea scenes. First, the sea scattering model is established based on the Phillips sea spectral model, rough surface scattering theory and the volume scattering characteristics of water. Measured bidirectional reflectance distribution function (BRDF) data of the objects are fitted to a statistical model. MODTRAN software is used to obtain the solar illumination on the sea, the sky brightness, the atmospheric transmittance from sea to sensor and the atmospheric backscattered radiance, and a Monte Carlo ray tracing method is used to calculate the composite scattering of the sea-surface object and the spectral image. Finally, the object spectrum is obtained by space transformation, radiometric degradation and the addition of noise. The model connects the spectral image with the environmental, object and sensor parameters, providing a tool for payload demonstration and algorithm development.

  8. Automated daily quality control analysis for mammography in a multi-unit imaging center.

    PubMed

    Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli

    2018-01-01

    Background: The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose: To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods: An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results: The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion: Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.

  9. Assessing product image quality for online shopping

    NASA Astrophysics Data System (ADS)

    Goswami, Anjan; Chung, Sung H.; Chittar, Naren; Islam, Atiq

    2012-01-01

    Assessing product-image quality is important in the context of online shopping. A high-quality image that conveys more information about a product can boost the buyer's confidence and attract more attention. However, the notion of image quality for product images is not the same as in other domains: the perceived quality of product images depends not only on various photographic quality features but also on high-level features such as the clarity of the foreground or the suitability of the background. In this paper, we define a notion of product-image quality based on such features. We conduct a crowd-sourced experiment to collect user judgments on thousands of eBay's images. We formulate a multi-class classification problem for modeling image quality by classifying images into good, fair and poor quality based on the guided perceptual notions from the judges. We also conduct regression experiments using the average crowd-sourced human judgment as the target. We compute a pseudo-regression score as the expected average of the predicted classes, as well as a score from the regression technique. We design many experiments with various sampling and voting schemes over the crowd-sourced data and construct various experimental image quality models. Most of our models have reasonable accuracies (≥ 70%) on the test data set. We observe that our computed image quality score has a high rank correlation (0.66) with the average votes from the crowd-sourced human judgments.

  10. Comparative Analysis of Reconstructed Image Quality in a Simulated Chromotomographic Imager

    DTIC Science & Technology

    2014-03-01

    COMPARATIVE ANALYSIS OF RECONSTRUCTED IMAGE QUALITY IN A SIMULATED CHROMOTOMOGRAPHIC IMAGER (thesis). Reconstructed image quality is highly dependent on the initial target hypercube, so a total of 54 initial target hypercubes, built from five basic images (e.g. a backlit bar chart with random intensity, 100 nm separation), were compared for a variety of scenes.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, H; UT Southwestern Medical Center, Dallas, TX; Hilts, M

    Purpose: To commission a multislice computed tomography (CT) scanner for fast and reliable readout of radiation therapy (RT) dose distributions using CT polymer gel dosimetry (PGD). Methods: Commissioning was performed for a 16-slice CT scanner using images acquired through a 1 L cylinder filled with water. Additional images were collected using a single-slice machine for comparison purposes. The variability in CT number associated with the anode heel effect was evaluated and used to define a new slice-by-slice background image subtraction technique. Image quality was assessed for the multislice system by comparing image noise and uniformity to those of the single-slice machine. The consistency of CT number across slices acquired simultaneously using the multislice detector array was also evaluated. Finally, the variability in CT number due to increasing x-ray tube load was measured for the multislice scanner and compared to the tube load effects observed on the single-slice machine. Results: Slice-by-slice background subtraction effectively removes the variability in CT number across images acquired simultaneously using the multislice scanner and is the recommended background subtraction method when using a multislice CT system. Image quality for the multislice machine was found to be comparable to that of the single-slice scanner. Further study showed CT number was consistent across image slices acquired simultaneously using the multislice detector array for each detector configuration and slice thickness examined. In addition, the multislice system was found to eliminate variations in CT number due to increasing x-ray tube load and to reduce scanning time by a factor of 4 compared to imaging a large volume with a single-slice scanner. Conclusion: A multislice CT scanner has been commissioned for CT PGD, allowing images of an entire dose distribution to be acquired in a matter of minutes.
    Funding support provided by the Natural Sciences and Engineering Research Council of Canada (NSERC).
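The recommended slice-by-slice background subtraction amounts to pairing each dose-image slice with the background slice acquired at the same detector-row position, so any slice-dependent offset (such as from the anode heel effect) cancels exactly, unlike a single global background value. A synthetic illustration:

```python
import numpy as np

def slicewise_background_subtract(scan, background):
    """Subtract the matching background slice from each scan slice."""
    return scan - background

# Synthetic 4-slice volumes: a per-slice offset (heel-effect stand-in)
# contaminates the gel scan and the background scan identically.
offsets = np.array([3.0, 1.0, -1.0, -3.0])[:, None, None]
background = np.full((4, 8, 8), 10.0) + offsets
dose_signal = np.zeros((4, 8, 8))
dose_signal[:, 2:6, 2:6] = 5.0
scan = dose_signal + 10.0 + offsets

recovered = slicewise_background_subtract(scan, background)
# A single global background value leaves the slice-dependent offset behind.
global_sub = scan - background.mean()
print(np.allclose(recovered, dose_signal), np.allclose(global_sub, dose_signal))
```

The slicewise result recovers the dose signal exactly, while the global subtraction retains the per-slice bias, which is the abstract's motivation for the new technique.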

  12. Segmenting overlapping nano-objects in atomic force microscopy image

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Han, Yuexing; Li, Qing; Wang, Bing; Konagaya, Akihiko

    2018-01-01

    Recently, nanoparticle techniques have been rapidly developed for various fields, such as materials science, medicine and biology. In particular, image processing methods have been widely used to analyze nanoparticles automatically. A technique to automatically segment overlapping nanoparticles with image processing and machine learning is proposed. Two tasks are necessary: elimination of image noise and separation of the overlapping shapes. For the first task, mean square error and the seed fill algorithm are adopted to remove noise and improve the quality of the original image. For the second task, four steps are needed to segment the overlapping nanoparticles. First, possible split lines are obtained by connecting the high-curvature pixels on the contours. Second, the candidate split lines are classified with a machine learning algorithm. Third, the overlapping regions are detected with density-based spatial clustering of applications with noise (DBSCAN). Finally, the best split lines are selected by a constrained minimum value. We give experimental examples and compare our technique with two other methods; the results show the effectiveness of the proposed technique.
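The overlap-detection step relies on DBSCAN, which groups dense point sets and flags isolated points as noise. A minimal, self-contained version is sketched below (O(n²) distances, adequate for small candidate sets; the authors presumably use a library implementation):

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: label each point with a cluster id (-1 = noise)."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                      # already assigned, or not a core point
        labels[i] = cluster               # grow a new cluster from core point i
        frontier = list(neighbors[i])
        while frontier:
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:
                    frontier.extend(neighbors[j])
        cluster += 1
    return labels

# Two dense blobs of hypothetical contour points plus one isolated noise point.
rng = np.random.default_rng(4)
pts = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
                 rng.normal(5.0, 0.1, (10, 2)),
                 [[20.0, 20.0]]])
labels = dbscan(pts, eps=1.0, min_pts=3)
print(labels)
```

The two blobs receive cluster ids 0 and 1 while the isolated point stays labeled -1, which is how dense overlap regions are separated from stray contour pixels.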

  13. A modified sparse reconstruction method for three-dimensional synthetic aperture radar image

    NASA Astrophysics Data System (ADS)

    Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin

    2018-03-01

    There is increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes a collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. First, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by a modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses a hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm and uses the Newton direction instead of the steepest descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in both reconstruction quality and reconstruction time.
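For reference, the baseline SL0 algorithm that the paper modifies (Gaussian kernel, steepest-descent step plus projection onto {x : Ax = y}) fits in a few lines; the tanh kernel and Newton direction of the modified version are not reproduced here, and the problem sizes and parameters are illustrative:

```python
import numpy as np

def sl0(A, y, sigma_min=1e-4, sigma_decay=0.5, mu=2.0, inner_iters=3):
    """Baseline smoothed-l0 (SL0) sparse recovery."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                       # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # Gradient step on the Gaussian smoothed-l0 surrogate ...
            x = x - mu * x * np.exp(-x ** 2 / (2 * sigma ** 2))
            # ... then project back onto the feasible set {x : Ax = y}.
            x = x - A_pinv @ (A @ x - y)
        sigma *= sigma_decay             # gradually sharpen the l0 approximation
    return x

rng = np.random.default_rng(1)
n, m, k = 64, 32, 4                      # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x_hat = sl0(A, A @ x_true)
print(round(float(np.linalg.norm(x_hat - x_true)), 6))
```

The paper's changes replace the Gaussian surrogate's gradient with one derived from a tanh approximation and swap the steepest-descent step for a Newton step, accelerating convergence on each 2-D slice.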

  14. A Degree Distribution Optimization Algorithm for Image Transmission

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Yang, Junjie

    2016-09-01

    The Luby Transform (LT) code is the first practical implementation of a digital fountain code. The coding behavior of an LT code is mainly decided by the degree distribution, which determines the relationship between source data and codewords. Two degree distributions were suggested by Luby; they work well in typical situations but not optimally in the case of finite encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT codes. First, a selection scheme of sparse degrees for LT codes is introduced; the probability distribution is then optimized over the selected degrees. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause loss of synchronization between the encoder and the decoder, so the proposed algorithm is designed for the image transmission setting. Moreover, optimal class partitioning is studied for image transmission with unequal error protection. The experimental results are quite promising: compared with an LT code using the robust soliton distribution, the proposed algorithm noticeably improves the final quality of the recovered images at the same overhead.
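The baseline the authors compare against, Luby's robust soliton distribution, is fully specified by the number of input symbols k and the tuning constants c and delta, and is easy to generate (mu[d] below is the probability of choosing degree d):

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution for an LT code with k input symbols."""
    S = c * math.log(k / delta) * math.sqrt(k)
    # Ideal soliton component rho(d); index 0 is unused padding.
    rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # Spike/tail correction tau(d) with its spike near d = k/S.
    tau = [0.0] * (k + 1)
    pivot = int(round(k / S))
    for d in range(1, pivot):
        tau[d] = S / (k * d)
    if 1 <= pivot <= k:
        tau[pivot] = S * math.log(S / delta) / k
    beta = sum(rho) + sum(tau)            # normalization constant
    return [(r + t) / beta for r, t in zip(rho, tau)]

mu = robust_soliton(100)
print(round(sum(mu), 6), round(mu[2], 4))
```

Degree 2 dominates, as in the ideal soliton, while the spike near k/S guarantees enough high-degree symbols for the decoder to finish; the paper's algorithm instead optimizes the probabilities over a sparse set of degrees.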

  15. OCT despeckling via weighted nuclear norm constrained non-local low-rank representation

    NASA Astrophysics Data System (ADS)

    Tang, Chang; Zheng, Xiao; Cao, Lijuan

    2017-10-01

    As a non-invasive imaging modality, optical coherence tomography (OCT) plays an important role in medical sciences. However, OCT images are always corrupted by speckle noise, which can mask image features and pose significant challenges for medical analysis. In this work, we propose an OCT despeckling method by using non-local, low-rank representation with weighted nuclear norm constraint. Unlike previous non-local low-rank representation based OCT despeckling methods, we first generate a guidance image to improve the non-local group patches selection quality, then a low-rank optimization model with a weighted nuclear norm constraint is formulated to process the selected group patches. The corrupted probability of each pixel is also integrated into the model as a weight to regularize the representation error term. Note that each single patch might belong to several groups, hence different estimates of each patch are aggregated to obtain its final despeckled result. Both qualitative and quantitative experimental results on real OCT images show the superior performance of the proposed method compared with other state-of-the-art speckle removal techniques.
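The proximal step behind a weighted nuclear norm model is weighted singular value thresholding: each singular value is soft-thresholded by its own weight, so leading (signal) components are shrunk less than trailing (speckle-like) components. A toy patch-group example with invented, increasing weights:

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular value thresholding: shrink the i-th singular value
    by weights[i] (larger weight = stronger shrinkage)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(2)
# Noisy rank-1 matrix standing in for a group of similar OCT patches.
u, v = rng.standard_normal((16, 1)), rng.standard_normal((1, 16))
Y = 5.0 * u @ v + 0.1 * rng.standard_normal((16, 16))

# Small weight on the leading singular value preserves structure; larger
# weights on trailing values suppress the speckle-like noise components.
w = np.linspace(1.0, 4.0, 16)
X = weighted_svt(Y, w)
print(np.linalg.matrix_rank(X))
```

The denoised group collapses back to rank 1, keeping the shared patch structure while discarding the noise subspace; the paper additionally weights the data-fit term by each pixel's corruption probability.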

  16. Flatbed scanners as a source of imaging. Brightness assessment and additives determination in a nickel electroplating bath.

    PubMed

    Vidal, M; Amigo, J M; Bro, R; Ostra, M; Ubide, C; Zuriarrain, J

    2011-05-23

    Desktop flatbed scanners are well-known devices that provide digitized information from flat surfaces. They are present in most laboratories as part of the standard computer equipment. Several quality levels can be found on the market, but all of them can be considered high-performance, low-cost tools. The present paper shows how the information obtained with a scanner from a flat surface can be used, with fine results, for exploratory and quantitative purposes through image analysis. It provides cheap analytical measurements for the assessment of quality parameters of coated metallic surfaces and for monitoring the lifetime of electrochemical coating baths. The samples used were steel sheets nickel-plated in an electrodeposition bath. The quality of the final deposit depends on the bath conditions and, especially, on the concentration of the additives in the bath. Some additives degrade over the bath life, and so does the quality of the plate finish. Analysis of the scanner images can be used to follow the evolution of the metal deposit and the concentration of additives in the bath. Principal component analysis (PCA) is applied to find significant differences in the coating of the sheets, to find the directions of maximum variability and to identify odd samples. The results compare favorably with those obtained by means of specular reflectance (SR), which is used here as a reference technique. The concentrations of the additives SPB and SA-1 along a nickel bath life can also be followed using image data processed with algorithms such as partial least squares (PLS) regression and support vector regression (SVR). The quantitative results obtained with these and other algorithms are compared. All this opens new qualitative and quantitative possibilities for flatbed scanners.
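The PCA step used to spot odd sheets can be sketched with a plain SVD: project mean-centred feature vectors onto the leading principal components and look for extreme scores. The data below are synthetic stand-ins for scanner-derived image features:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centred samples onto their leading principal components."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(3)
# Hypothetical data: 20 sheets x 300 image features (e.g. colour histograms);
# one sample is shifted to mimic a degraded nickel deposit.
X = rng.standard_normal((20, 300))
X[7] += 6.0

scores = pca_scores(X)
outlier = int(np.argmax(np.abs(scores[:, 0])))   # most extreme PC1 score
print(outlier)
```

The degraded sheet dominates the first component's variance and stands out immediately in the score plot, which is exactly the exploratory use of PCA the abstract describes.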

  17. Retinal Image Quality Assessment for Spaceflight-Induced Vision Impairment Study

    NASA Technical Reports Server (NTRS)

    Vu, Amanda Cadao; Raghunandan, Sneha; Vyas, Ruchi; Radhakrishnan, Krishnan; Taibbi, Giovanni; Vizzeri, Gianmarco; Grant, Maria; Chalam, Kakarla; Parsons-Wingerter, Patricia

    2015-01-01

    Long-term exposure to space microgravity poses significant risks for visual impairment. Evidence suggests such vision changes are linked to cephalad fluid shifts, prompting a need to directly quantify microgravity-induced retinal vascular changes. The quality of retinal images used for such vascular remodeling analysis, however, is dependent on imaging methodology. For our exploratory study, we hypothesized that retinal images captured using fluorescein imaging methodologies would be of higher quality in comparison to images captured without fluorescein. A semi-automated image quality assessment was developed using Vessel Generation Analysis (VESGEN) software and MATLAB® image analysis toolboxes. An analysis of ten images found that the fluorescein imaging modality provided a 36% increase in overall image quality (two-tailed p=0.089) in comparison to nonfluorescein imaging techniques.

  18. Can image enhancement allow radiation dose to be reduced whilst maintaining the perceived diagnostic image quality required for coronary angiography?

    PubMed Central

    Joshi, Anuja; Gislason-Lee, Amber J; Keeble, Claire; Sivananthan, Uduvil M

    2017-01-01

    Objective: The aim of this research was to quantify the reduction in radiation dose facilitated by image processing alone for percutaneous coronary intervention (PCI) patient angiograms, without reducing the perceived image quality required to confidently make a diagnosis. Methods: Incremental amounts of image noise were added to five PCI angiograms, simulating the angiogram as having been acquired at corresponding lower dose levels (10–89% dose reduction). 16 observers with relevant experience scored the image quality of these angiograms in 3 states—with no image processing and with 2 different modern image processing algorithms applied. These algorithms are used on state-of-the-art and previous generation cardiac interventional X-ray systems. Ordinal regression allowing for random effects and the delta method were used to quantify the dose reduction possible by the processing algorithms, for equivalent image quality scores. Results: Observers rated the quality of the images processed with the state-of-the-art and previous generation image processing with a 24.9% and 15.6% dose reduction, respectively, as equivalent in quality to the unenhanced images. The dose reduction facilitated by the state-of-the-art image processing relative to previous generation processing was 10.3%. Conclusion: Results demonstrate that statistically significant dose reduction can be facilitated with no loss in perceived image quality using modern image enhancement; the most recent processing algorithm was more effective in preserving image quality at lower doses. Advances in knowledge: Image enhancement was shown to maintain perceived image quality in coronary angiography at a reduced level of radiation dose using computer software to produce synthetic images from real angiograms simulating a reduction in dose. PMID:28124572
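
    The noise-addition step above (simulating a lower-dose acquisition from a full-dose angiogram) can be sketched under the standard assumption that quantum-noise variance scales inversely with dose. This is a hedged illustration, not the authors' exact simulation; the noise level and dose fraction are assumed values.

```python
import numpy as np

# Hedged sketch: to simulate an image acquired at a fraction `f` of the
# original dose, add zero-mean Gaussian noise so that the total
# quantum-noise variance scales as 1/f.
def simulate_dose_reduction(image, f, sigma_full, rng):
    """image: float array at full dose; f: dose fraction in (0, 1];
    sigma_full: noise standard deviation at full dose."""
    extra_var = sigma_full**2 * (1.0 / f - 1.0)
    return image + rng.normal(0.0, np.sqrt(extra_var), image.shape)

rng = np.random.default_rng(1)
img = np.full((256, 256), 100.0) + rng.normal(0, 2.0, (256, 256))
low = simulate_dose_reduction(img, f=0.25, sigma_full=2.0, rng=rng)
# Total variance should now be close to sigma_full**2 / f = 16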

  19. Single-random-phase holographic encryption of images

    NASA Astrophysics Data System (ADS)

    Tsang, P. W. M.

    2017-02-01

    In this paper, a method is proposed for encrypting an optical image onto a phase-only hologram, using a single random phase mask as the private encryption key. The encryption process can be divided into three stages. First, the source image to be encrypted is scaled in size and pasted onto an arbitrary position in a larger global image. The remaining areas of the global image that are not occupied by the source image can be filled with randomly generated content. As such, the global image as a whole is very different from the source image, but at the same time the visual quality of the source image is preserved. Second, a digital Fresnel hologram is generated from the new image and converted into a phase-only hologram based on bidirectional error diffusion. In the final stage, a fixed random phase mask is added to the phase-only hologram as the private encryption key. In the decryption process, the global image, together with the source image it contains, can be reconstructed from the phase-only hologram if it is overlaid with the correct decryption key. The proposed method is highly resistant to various forms of plaintext attacks, which are commonly used to deduce the encryption key in existing holographic encryption methods. In addition, both the encryption and the decryption processes are simple and easy to implement.
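
    The encrypt/decrypt flow can be sketched numerically. This is a heavily simplified illustration, not the paper's algorithm: a Fourier hologram stands in for the Fresnel hologram, hard phase quantization stands in for bidirectional error diffusion, and the image and mask sizes are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
src = np.zeros((N, N)); src[24:40, 24:40] = 1.0   # toy source image

# Diffuse the object with a random phase, take a (Fourier) hologram,
# then keep only the phase -- a crude phase-only hologram.
H = np.fft.fft2(src * np.exp(1j * 2 * np.pi * rng.random((N, N))))
phase_only = np.exp(1j * np.angle(H))

key = np.exp(1j * 2 * np.pi * rng.random((N, N)))  # private random-phase key
encrypted = phase_only * key

# Decryption: overlay the conjugate key, then reconstruct
recon_ok = np.abs(np.fft.ifft2(encrypted * np.conj(key)))
recon_bad = np.abs(np.fft.ifft2(encrypted))        # no/wrong key

corr_ok = np.corrcoef(recon_ok.ravel(), src.ravel())[0, 1]
corr_bad = np.corrcoef(recon_bad.ravel(), src.ravel())[0, 1]
```

    With the correct key the reconstruction correlates with the source image; without it the output is structureless speckle.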

  20. Intelligent identification of remnant ridge edges in region west of Yongxing Island, South China Sea

    NASA Astrophysics Data System (ADS)

    Wang, Weiwei; Guo, Jing; Cai, Guanqiang; Wang, Dawei

    2018-02-01

    Edge detection enables identification of geomorphologic unit boundaries and thus assists with geomorphological mapping. In this paper, an intelligent edge identification method is proposed and image processing techniques are applied to multi-beam bathymetry data. To accomplish this, a color image is generated from the bathymetry, and a weighted method is used to convert the color image to a gray image. As the quality of the image has a significant influence on edge detection, different filter methods are applied to the gray image for de-noising. The peak signal-to-noise ratio and mean square error are calculated to evaluate which filter method is most appropriate for depth image filtering, and the edge is subsequently detected using an image binarization method. Traditional image binarization methods cannot handle the complex, uneven seafloor, and therefore a binarization method is proposed that is based on the differences between image pixel values; the appropriate threshold for image binarization is estimated from the probability distribution of pixel value differences between adjacent pixels in the horizontal and vertical directions, respectively. Finally, an eight-neighborhood frame is adopted to thin the binary image, connect intermittent edges, and implement contour extraction. Experimental results show that the method described here can recognize the main boundaries of geomorphologic units. In addition, the proposed automatic edge identification method avoids subjective judgment, and reduces time and labor costs.
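
    The filter-evaluation step above relies on the standard MSE and PSNR definitions. A minimal sketch with an assumed toy depth image and a 3x3 mean filter as one candidate de-noising method (the filter choice and data are illustrative, not the paper's):

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak**2 / m)

# Toy depth image (a smooth ramp) and a noisy copy
rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))
noisy = clean + rng.normal(0, 10.0, clean.shape)

# Candidate filter: 3x3 mean filter via shifted, weighted sums
k = np.ones((3, 3)) / 9.0
pad = np.pad(noisy, 1, mode='edge')
filtered = sum(pad[i:i+64, j:j+64] * k[i, j]
               for i in range(3) for j in range(3))
```

    Comparing `psnr(clean, filtered)` against `psnr(clean, noisy)` for each candidate filter is exactly the selection criterion the abstract describes.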

  1. The influence of body mass index, age, implants, and dental restorations on image quality of cone beam computed tomography.

    PubMed

    Ritter, Lutz; Mischkowski, Robert A; Neugebauer, Jörg; Dreiseidler, Timo; Scheer, Martin; Keeve, Erwin; Zöller, Joachim E

    2009-09-01

    The aim was to determine the influence of patient age, gender, body mass index (BMI), amount of dental restorations, and implants on the image quality of cone-beam computed tomography (CBCT). Fifty CBCT scans from a preretail version of Galileos (Sirona, Germany) were investigated retrospectively by 4 observers regarding the image quality of 6 anatomic structures, the detection of pathologic findings, subjective exposure quality, and artifacts. Patient age, BMI, gender, amount of dental restorations, and implants were recorded and statistically tested for correlations with image quality. A statistically significant negative effect on image quality was found for age and the amount of dental restorations. None of the investigated image features was obscured by any of the investigated influence factors. Age and the amount of dental restorations appear to have a negative impact on CBCT image quality, whereas gender and BMI do not. The image quality of the mental foramen, mandibular canal, and nasal floor is affected negatively by age but not by the amount of dental restorations. Further studies are required to elucidate influence factors on CBCT image quality.

  2. Image Quality Modeling and Characterization of Nyquist Sampled Framing Systems with Operational Considerations for Remote Sensing

    NASA Astrophysics Data System (ADS)

    Garma, Rey Jan D.

    The trade between detector and optics performance is often conveyed through the Q metric, which is defined as the ratio of detector sampling frequency and optical cutoff frequency. Historically sensors have operated at Q ≈ 1, which introduces aliasing but increases the system modulation transfer function (MTF) and signal-to-noise ratio (SNR). Though mathematically suboptimal, such designs have been operationally ideal when considering system parameters such as pointing stability and detector performance. Substantial advances in read noise and quantum efficiency of modern detectors may compensate for the negative aspects associated with balancing detector/optics performance, presenting an opportunity to revisit the potential for implementing Nyquist-sampled (Q ≈ 2) sensors. A digital image chain simulation is developed and validated against a laboratory testbed using objective and subjective assessments. Objective assessments are accomplished by comparison of the modeled MTF and measurements from slant-edge photographs. Subjective assessments are carried out by performing a psychophysical study where subjects are asked to rate simulation and testbed imagery against a DeltaNIIRS scale with the aid of a marker set. Using the validated model, additional test cases are simulated to study the effects of increased detector sampling on image quality with operational considerations. First, a factorial experiment using Q-sampling, pointing stability, integration time, and detector performance is conducted to measure the main effects and interactions of each on the response variable, DeltaNIIRS. To assess the fidelity of current models, variants of the General Image Quality Equation (GIQE) are evaluated against subject-provided ratings and two modified GIQE versions are proposed. Finally, using the validated simulation and modified IQE, trades are conducted to ascertain the feasibility of implementing Q ≈ 2 designs in future systems.
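
    The Q metric as defined above is the ratio of detector sampling frequency to optical cutoff frequency, which reduces to wavelength x f-number / pixel pitch. A hedged numeric sketch (the wavelength, f-numbers and pixel pitch are assumed example values, not the study's system parameters):

```python
# Q = f_s / f_c, with detector sampling frequency f_s = 1/p (pixel pitch p)
# and incoherent optical cutoff f_c = 1/(lambda * F#).
def q_metric(wavelength_um, f_number, pixel_pitch_um):
    f_s = 1.0 / pixel_pitch_um
    f_c = 1.0 / (wavelength_um * f_number)
    return f_s / f_c  # equals wavelength * F# / pitch

# Visible-band example values (assumed, for illustration only)
q1 = q_metric(0.55, 4.0, 2.2)   # Q near 1: aliased but high MTF/SNR
q2 = q_metric(0.55, 8.0, 2.2)   # slower optics pushes Q toward 2
```

    The example makes the trade concrete: holding the detector fixed, moving from Q = 1 toward the Nyquist-sampled Q = 2 regime costs light and MTF at mid frequencies, which is why detector read-noise and quantum-efficiency improvements matter for the feasibility question the study poses.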

  3. Correlation of the clinical and physical image quality in chest radiography for average adults with a computed radiography imaging system

    PubMed Central

    Wood, T J; Beavis, A W; Saunderson, J R

    2013-01-01

    Objective: The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. Methods: The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson’s correlation coefficient. Results: Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Conclusion: Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. Advances in knowledge: A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography. PMID:23568362
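
    The contrast-to-noise ratio used as a physical metric above is straightforward to compute from two regions of interest. A minimal sketch with assumed toy ROI statistics (one common CNR definition; the study may use a variant):

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean difference| / background noise."""
    return float(abs(signal_roi.mean() - background_roi.mean())
                 / background_roi.std())

# Assumed example: a feature ROI 20 units above a background of 100,
# both with noise of standard deviation 5 -> CNR should be near 4.
rng = np.random.default_rng(4)
background = rng.normal(100.0, 5.0, (50, 50))
feature = rng.normal(120.0, 5.0, (50, 50))
value = cnr(feature, background)
```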

  4. Implementation of a Real-Time Stacking Algorithm in a Photogrammetric Digital Camera for Uavs

    NASA Astrophysics Data System (ADS)

    Audi, A.; Pierrot-Deseilligny, M.; Meynard, C.; Thom, C.

    2017-08-01

    In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (such as cloudy-sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is the motion blur caused by erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality with an equivalent long exposure time, using several images acquired with short exposure times. Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the lightweight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for the resampling of images, the presented method accurately estimates the geometrical relation between the first and the Nth image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, then homologous points in the other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm, such as feature detection and image resampling, in order to achieve real-time performance, as we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, the resulting processing time, resulting images, and block diagrams of the described architecture. The resulting stacked image obtained on real surveys shows no visible degradation. Timing results demonstrate that our algorithm can be used in real time, since its processing time is less than the time needed to write an image to the storage device. An interesting by-product of this algorithm is the 3D rotation between poses, estimated by a photogrammetric method, which can be used to recalibrate the gyrometers of the IMU in real time.
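
    The register-then-average idea behind such stacking can be sketched in a few lines. This is an illustrative stand-in, not the paper's pipeline: pure-translation registration via phase correlation replaces the FAST detector and IMU-aided template matching, and the scene, shifts and noise level are assumed.

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer (dy, dx) translation taking `ref` to `img`."""
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    r = np.abs(np.fft.ifft2(R / (np.abs(R) + 1e-12)))
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    h, w = ref.shape
    return (int(dy - h) if dy > h // 2 else int(dy),
            int(dx - w) if dx > w // 2 else int(dx))

rng = np.random.default_rng(5)
scene = rng.random((128, 128))
true_shifts = [(0, 0), (3, -2), (-5, 4)]   # simulated camera jitter
frames = [np.roll(scene, s, axis=(0, 1)) + rng.normal(0, 0.05, scene.shape)
          for s in true_shifts]

# Register every frame against the first, undo the shift, then average
shifts_est = [phase_correlation_shift(frames[0], f) for f in frames]
stack = sum(np.roll(f, (-dy, -dx), axis=(0, 1))
            for (dy, dx), f in zip(shifts_est, frames)) / len(frames)
```

    Averaging the aligned frames reduces noise roughly as the square root of the frame count, which is the "equivalent long exposure" the abstract refers to.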

  5. Coaxial fundus camera for ophthalmology

    NASA Astrophysics Data System (ADS)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A fundus camera for ophthalmology is a high-definition device that must provide low-light illumination of the human retina, high resolution at the retina and a reflection-free image. Those constraints make its optical design very sophisticated; the most difficult requirements to satisfy are the reflection-free illumination and the final alignment, owing to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, degrade image quality, and poor alignment renders the sophisticated optical design useless. In this work we developed a fully coaxial optical system for a non-mydriatic fundus camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR or visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture-lens module is a CCTV camera with built-in autofocus and zoom, combined with a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.

  6. Highly accelerated single breath-hold noncontrast thoracic MRA: evaluation in a clinical population.

    PubMed

    Lim, Ruth P; Winchester, Priscilla A; Bruno, Mary T; Xu, Jian; Storey, Pippa; McGorty, Kellyanne; Sodickson, Daniel K; Srichai, Monvadi B

    2013-03-01

    The objective of this study was to evaluate the performance of a highly accelerated breath-hold 3-dimensional noncontrast-enhanced steady-state free precession thoracic magnetic resonance angiography (NC-MRA) technique in a clinical population, including assessment of image quality, aortic dimensions, and aortic pathology, compared with electrocardiographically gated gadolinium-enhanced MRA (Gd-MRA). After approval from the institution board and informed consent were obtained, 30 patients (22 men; mean age, 53.4 years) with known or suspected aortic pathology were imaged with NC-MRA followed by Gd-MRA at a single examination at 1.5 T. Images were made anonymous and reviewed by 2 readers for aortic pathology and diagnostic confidence on a 5-point scale (1, worst; 5, best) on a patient basis. Image quality and artifacts were also evaluated in 10 vascular segments: aortic annulus, sinuses of Valsalva, sinotubular junction, ascending aorta, aortic arch, descending aorta, diaphragmatic aorta, great vessel origins, and the left main and right coronary artery origins. Finally, aortic dimensions were measured in each of the 7 aortic segments. The Wilcoxon signed rank test was used to compare diagnostic confidence, image quality, and artifact scores between NC-MRA and Gd-MRA. The paired Student t test and Bland-Altman analysis were used for comparison of aortic dimensions. All patients completed NC-MRA and Gd-MRA successfully. Vascular pathologic findings were concordant with Gd-MRA in 29 of 30 (96.7%) patients and 28 of 30 (93.3%) patients for readers 1 and 2, respectively, with high diagnostic confidence (mean [SD], 4.35 [0.77]) not significantly different from Gd-MRA (4.38 [0.64]; P = 0.74). The image quality and artifact scores were comparable with Gd-MRA in most vascular segments. 
Notable differences were observed at the ascending aorta, where Gd-MRA had superior image quality (4.13 [0.73]) compared with NC-MRA (3.80 [0.88]; P = 0.028), and at the coronary artery origins where NC-MRA was considered superior (NC-MRA vs Gd-MRA, 3.38 [1.47] vs 2.78 [1.21] for the left main artery and NC-MRA vs Gd-MRA, 3.55 [1.40] vs 2.32 [1.16] for the right coronary artery; P < 0.05, both comparisons). The aortic dimensions were comparable, with the only significant difference observed at the ascending aorta, where NC-MRA dimension (4.05 [0.76]) was less than 1 mm smaller than that of Gd-MRA (4.12 [0.7]; P = 0.043). Breath-hold NC-MRA of the thoracic aorta yields good image quality, comparable to Gd-MRA, with high accuracy for aortic dimension and pathology. It can be considered as an alternative to Gd-MRA in patients with relative contraindications to gadolinium contrast or problems with intravenous access.

  7. On pictures and stuff: image quality and material appearance

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.

    2014-02-01

    Realistic images are a puzzle because they serve as visual representations of objects while also being objects themselves. When we look at an image we are able to perceive both the properties of the image and the properties of the objects represented by the image. Research on image quality has typically focused on improving image properties (resolution, dynamic range, frame rate, etc.) while ignoring the issue of whether images are serving their role as visual representations. In this paper we describe a series of experiments that investigate how well images of different quality convey information about the properties of the objects they represent. In the experiments we focus on the effects that two image properties (contrast and sharpness) have on the ability of images to represent the gloss of depicted objects. We found that different experimental methods produced differing results. Specifically, when the stimulus images were presented using simultaneous pair comparison, observers were influenced by the surface properties of the images and conflated changes in image contrast and sharpness with changes in object gloss. On the other hand, when the stimulus images were presented sequentially, observers were able to disregard the image plane properties and more accurately match the gloss of the objects represented by the different quality images. These findings suggest that in understanding image quality it is useful to distinguish between the quality of the imaging medium and the quality of the visual information represented by that medium.

  8. Defining Quality in Cardiovascular Imaging: A Scientific Statement From the American Heart Association.

    PubMed

    Shaw, Leslee J; Blankstein, Ron; Jacobs, Jill E; Leipsic, Jonathon A; Kwong, Raymond Y; Taqueti, Viviany R; Beanlands, Rob S B; Mieres, Jennifer H; Flamm, Scott D; Gerber, Thomas C; Spertus, John; Di Carli, Marcelo F

    2017-12-01

    The aims of the current statement are to refine the definition of quality in cardiovascular imaging and to propose novel methodological approaches to inform the demonstration of quality in imaging in future clinical trials and registries. We propose defining quality in cardiovascular imaging using an analytical framework put forth by the Institute of Medicine whereby quality was defined as testing being safe, effective, patient-centered, timely, equitable, and efficient. The implications of each of these components of quality health care are as essential for cardiovascular imaging as they are for other areas within health care. Our proposed statement may serve as the foundation for integrating these quality indicators into establishing designations of quality laboratory practices and developing standards for value-based payment reform for imaging services. We also include recommendations for future clinical research to fulfill quality aims within cardiovascular imaging, including clinical hypotheses of improving patient outcomes, the importance of health status as an end point, and deferred testing options. Future research should evolve to define novel methods optimized for the role of cardiovascular imaging for detecting disease and guiding treatment and to demonstrate the role of cardiovascular imaging in facilitating healthcare quality. © 2017 American Heart Association, Inc.

  9. Objective quality assessment for multiexposure multifocus image fusion.

    PubMed

    Hassen, Rania; Wang, Zhou; Salama, Magdy M A

    2015-09-01

    There has been a growing interest in image fusion technologies, but how to objectively evaluate the quality of fused images has not been fully understood. Here, we propose a method for objective quality assessment of multiexposure multifocus image fusion based on the evaluation of three key factors of fused image quality: 1) contrast preservation; 2) sharpness; and 3) structure preservation. Subjective experiments are conducted to create an image fusion database, based on which, performance evaluation shows that the proposed fusion quality index correlates well with subjective scores, and gives a significant improvement over the existing fusion quality measures.

  10. 76 FR 8753 - Final Information Quality Guidelines Policy

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-15

    ... DEPARTMENT OF HOMELAND SECURITY Final Information Quality Guidelines Policy AGENCY: Department of Homeland Security. ACTION: Notice and request for public comment on Final Information Quality Guidelines. SUMMARY: These guidelines should be used to ensure and maximize the quality of disseminated information...

  11. Perceptual quality prediction on authentically distorted images using a bag of features approach

    PubMed Central

    Ghadiyaram, Deepti; Bovik, Alan C.

    2017-01-01

    Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a “bag of feature maps” approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies—or departures therefrom—of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features toward improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it is able to achieve good-quality prediction power that is better than other leading models. PMID:28129417

  12. Automated optical testing of LWIR objective lenses using focal plane array sensors

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik; Domagalski, Christian; Peter, Frank; Heinisch, Josef; Dumitrescu, Eugen

    2012-10-01

    The image quality of today's state-of-the-art IR objective lenses is constantly improving, while at the same time the market for thermography and vision is growing strongly. Because of increasing demands on the quality of IR optics and increasing production volumes, the standards for image quality testing rise and tests need to be performed in less time. Most high-precision MTF testing equipment for the IR spectral bands in use today relies on the scanning-slit method, which scans a 1D detector over a pattern in the image generated by the lens under test, followed by image analysis to extract performance parameters. The disadvantages of this approach are that it is relatively slow, it requires highly trained operators to align the sample, and the number of parameters that can be extracted is limited. In this paper we present lessons learned from the R&D process of using focal plane array (FPA) sensors for testing long-wave IR (LWIR, 8–12 µm) optics. Factors that need to be taken into account when switching from scanning slit to FPAs include the thermal background from the environment, the low scene contrast in the LWIR, the need for advanced image processing algorithms to pre-process camera images for analysis, and camera artifacts. Finally, we discuss two measurement systems for LWIR lens characterization that we recently developed for different target applications: 1) a fully automated system suitable for production testing and metrology that uses uncooled microbolometer cameras to automatically measure MTF (on-axis and at several off-axis positions) and parameters like EFL, FFL, autofocus curves, image plane tilt, etc. for LWIR objectives with an EFL between 1 and 12 mm; the measurement cycle time for one sample is typically between 6 and 8 s. 2) A high-precision research-grade system, again using an uncooled LWIR camera as detector, that is very simple to align and operate. A wide range of lens parameters (MTF, EFL, astigmatism, distortion, etc.) can be easily and accurately measured with this system.

  13. PET/CT: underlying physics, instrumentation, and advances.

    PubMed

    Torres Espallardo, I

    Since it was first introduced, the main goal of PET/CT has been to provide both PET and CT images of high clinical quality and to present them to radiologists and specialists in nuclear medicine as a fused, perfectly aligned image. The use of fused PET and CT images quickly became routine in clinical practice, showing the great potential of these hybrid scanners. Thanks to this success, manufacturers have gone beyond considering CT as a mere attenuation corrector for PET, concentrating instead on designing high-performance PET and CT scanners with more interesting features. Since the first commercial PET/CT scanner became available in 2001, both the PET component and the CT component have improved immensely. In the case of PET, faster scintillation crystals with high stopping power, such as LYSO, have enabled more sensitive devices to be built, making it possible to reduce the number of undesired coincidence events and to use time-of-flight (TOF) techniques. All these advances have improved lesion detection, especially in situations with very noisy backgrounds. Iterative reconstruction methods, together with the corrections carried out during reconstruction and the use of the point-spread function, have improved image quality. In parallel, CT instrumentation has also improved significantly, and 64- and 128-row detectors have been incorporated into the most modern PET/CT scanners. This makes it possible to obtain high-quality diagnostic anatomic images in a few seconds that both enable the correction of PET attenuation and provide information for diagnosis. Furthermore, nearly all PET/CT scanners nowadays have a system that modulates the radiation dose the patient receives in the CT study as a function of the region scanned. This article reviews the underlying physics of PET and CT imaging separately, describes the changes in instrumentation and standard protocols in a combined PET/CT system, and finally points out the most important advances in this hybrid imaging modality. Copyright © 2016 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  14. Guidance for Efficient Small Animal Imaging Quality Control.

    PubMed

    Osborne, Dustin R; Kuntner, Claudia; Berr, Stuart; Stout, David

    2017-08-01

    Routine quality control is a critical aspect of properly maintaining high-performance small animal imaging instrumentation. A robust quality control program helps produce more reliable data both for academic purposes and as proof of system performance for contract imaging work. For preclinical imaging laboratories, the combination of costs and available resources often limits their ability to produce efficient and effective quality control programs. This work presents a series of simplified quality control procedures that are accessible to a wide range of preclinical imaging laboratories. Our intent is to provide minimum guidelines for routine quality control that can assist preclinical imaging specialists in setting up an appropriate quality control program for their facility.

  15. Comparison of native high-resolution 3D and contrast-enhanced MR angiography for assessing the thoracic aorta.

    PubMed

    von Knobelsdorff-Brenkenhoff, Florian; Gruettner, Henriette; Trauzeddel, Ralf F; Greiser, Andreas; Schulz-Menger, Jeanette

    2014-06-01

    To avoid the risks of contrast agent administration, native magnetic resonance angiography (MRA) is desirable for assessing the thoracic aorta. The aim was to evaluate a native steady-state free precession (SSFP) three-dimensional (3D) MRA in comparison with contrast-enhanced MRA as the gold standard. Seventy-six prospective patients with known or suspected thoracic aortic disease underwent MRA at 1.5 T using (i) native 3D SSFP MRA with ECG and navigator gating and high isotropic spatial resolution (1.3 × 1.3 × 1.3 mm(3)) and (ii) conventional contrast-enhanced ECG-gated gradient-echo 3D MRA (1.3 × 0.8 × 1.8 mm(3)). Datasets were compared at nine aortic levels regarding image quality (score 0-3: 0 = poor, 3 = excellent) and aortic diameters, as well as observer dependency and final diagnosis. Statistical tests included the paired t-test, correlation analysis, and Bland-Altman analysis. Native 3D MRA was acquired successfully in 70 of 76 subjects (mean acquisition time 8.6 ± 2.7 min), while irregular breathing excluded 6 of 76 subjects. Aortic diameters agreed closely between the two methods at all aortic levels (r = 0.99; bias ± SD -0.12 ± 1.2 mm) with low intra- and inter-observer dependency (intraclass correlation coefficient 0.99). Native MRA studies resulted in the same final diagnosis as contrast-enhanced MRA. The mean image quality score was superior with native compared with contrast-enhanced MRA (2.4 ± 0.6 vs. 1.6 ± 0.5; P < 0.001). The accuracy of aortic size measurements, the certainty in defining the diagnosis and the benefits in image quality at the aortic root underscore the use of the tested high-resolution native 3D SSFP MRA as an appropriate alternative to contrast-enhanced MRA for assessing the thoracic aorta. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2014. For permissions please email: journals.permissions@oup.com.

  16. Task-based measures of image quality and their relation to radiation dose and patient risk

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Hoeschen, Christoph; Kupinski, Matthew A.; Little, Mark P.

    2015-01-01

    The theory of task-based assessment of image quality is reviewed in the context of imaging with ionizing radiation, and objective figures of merit (FOMs) for image quality are summarized. The variation of the FOMs with the task, the observer and especially with the mean number of photons recorded in the image is discussed. Then various standard methods for specifying radiation dose are reviewed and related to the mean number of photons in the image and hence to image quality. Current knowledge of the relation between local radiation dose and the risk of various adverse effects is summarized, and some graphical depictions of the tradeoffs between image quality and risk are introduced. Then various dose-reduction strategies are discussed in terms of their effect on task-based measures of image quality. PMID:25564960

  17. No-reference multiscale blur detection tool for content based image retrieval

    NASA Astrophysics Data System (ADS)

    Ezekiel, Soundararajan; Stocker, Russell; Harrity, Kyle; Alford, Mark; Ferris, David; Blasch, Erik; Gorniak, Mark

    2014-06-01

    In recent years, digital cameras have been widely used for image capture. These devices are embedded in cell phones, laptops, tablets, webcams, etc. Image quality is an important component of digital image analysis. To assess image quality for these mobile products, a standard image is normally required as a reference, in which case Root Mean Square Error and Peak Signal to Noise Ratio can be used to measure quality. However, these methods are not applicable when no reference image exists. In our approach, a discrete wavelet transform is applied to the blurred image, decomposing it into an approximation image and three detail sub-images: horizontal, vertical, and diagonal. We then measure noise on the detail images and blur on the approximation image to assess image quality, computing a noise mean and noise ratio from the detail images, and a blur mean and blur ratio from the approximation image. The Multi-scale Blur Detection (MBD) metric provides an assessment of both the noise and blur content. These values are weighted based on a linear regression against full-reference quality values. From these statistics, we can compare against the statistics of typical useful images without needing a reference image. We then test the validity of the obtained weights by R² analysis, as well as by using them to estimate the quality of an image with a known quality measure. The results show that our method provides acceptable results for images containing low to mid noise levels and blur content.
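The decomposition step described above can be sketched with a one-level 2-D Haar transform (the paper's exact wavelet and MBD weighting are not given in the abstract, so the statistics below are only illustrative):

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition into an approximation image (LL)
    and horizontal/vertical/diagonal detail sub-images (LH, HL, HH)."""
    img = np.asarray(img, float)
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row pairs: average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row pairs: difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

rng = np.random.default_rng(0)
img = rng.normal(128, 20, (64, 64))           # stand-in for a real photo
LL, LH, HL, HH = haar2d(img)
# No-reference statistics in the spirit of the abstract: noise estimated
# from detail-band energy, blur from the smooth approximation band.
noise_mean = np.mean([np.abs(LH).mean(), np.abs(HL).mean(), np.abs(HH).mean()])
```

Noise inflates the detail bands while blur suppresses them, which is what lets the detail/approximation split serve as a no-reference quality cue.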

  18. Determinants of image quality of rotational angiography for on-line assessment of frame geometry after transcatheter aortic valve implantation.

    PubMed

    Rodríguez-Olivares, Ramón; El Faquir, Nahid; Rahhab, Zouhair; Maugenest, Anne-Marie; Van Mieghem, Nicolas M; Schultz, Carl; Lauritsch, Guenter; de Jaegere, Peter P T

    2016-07-01

    To study the determinants of image quality of rotational angiography using dedicated research prototype software for motion compensation, without rapid ventricular pacing, after the implantation of four commercially available catheter-based valves. Prospective observational study including 179 consecutive patients who underwent transcatheter aortic valve implantation (TAVI) with either the Medtronic CoreValve (MCS), Edwards SAPIEN Valve (ESV), Boston Sadra Lotus (BSL) or St. Jude Portico Valve (SJP), in whom rotational angiography (R-angio) with motion-compensated 3D image reconstruction was performed. Image quality was graded from 1 (excellent) to 5 (strongly degraded), with a distinction between good (grades 1-2) and poor image quality (grades 3-5). Clinical (gender, body mass index, Agatston score, heart rate and rhythm, artifacts), procedural (valve type) and technical variables (isocentricity) were related to the image quality assessment. Image quality was good in 128 (72 %) and poor in 51 (28 %) patients. By univariable analysis, only valve type (BSL) and the presence of an artifact negatively affected image quality. By multivariable analysis (in which BMI was forced into the model), BSL valve (OR 3.5, 95 % CI [1.3-9.6], p = 0.02), presence of an artifact (OR 2.5, 95 % CI [1.2-5.4], p = 0.02) and BMI (OR 1.1, 95 % CI [1.0-1.2], p = 0.04) were independent predictors of poor image quality. Rotational angiography with motion-compensated 3D image reconstruction using dedicated research prototype software offers good image quality for the evaluation of frame geometry after TAVI in the majority of patients. Valve type, presence of artifacts and higher BMI negatively affect image quality.

  19. Restoration of Motion-Blurred Image Based on Border Deformation Detection: A Traffic Sign Restoration Model

    PubMed Central

    Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Wang, Qi; Gao, Jing

    2015-01-01

    Due to the rapid development of motor vehicle Driver Assistance Systems (DAS), the safety problems associated with automatic driving have become a hot issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce traffic rules. However, traffic sign image degradation in computer vision is unavoidable during vehicle movement. In order to quickly and accurately recognize traffic signs in motion-blurred images in DAS, a new image restoration algorithm based on border deformation detection in the spatial domain is proposed in this paper. The border of a traffic sign is extracted using color information, and the width of the border is then measured in all directions. From the measured widths and their directions, both the direction and scale of the motion blur can be determined, and this information is used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio is presented to evaluate the restoration quality. Compared with traditional restoration approaches based on blind deconvolution and the Lucy-Richardson method, our method can substantially restore motion-blurred images and improve the correct recognition rate. Our experiments show that the proposed method is able to restore traffic sign information accurately and efficiently. PMID:25849350

  20. Restoration of motion-blurred image based on border deformation detection: a traffic sign restoration model.

    PubMed

    Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Wang, Qi; Gao, Jing

    2015-01-01

    Due to the rapid development of motor vehicle Driver Assistance Systems (DAS), the safety problems associated with automatic driving have become a hot issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce traffic rules. However, traffic sign image degradation in computer vision is unavoidable during vehicle movement. In order to quickly and accurately recognize traffic signs in motion-blurred images in DAS, a new image restoration algorithm based on border deformation detection in the spatial domain is proposed in this paper. The border of a traffic sign is extracted using color information, and the width of the border is then measured in all directions. From the measured widths and their directions, both the direction and scale of the motion blur can be determined, and this information is used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio is presented to evaluate the restoration quality. Compared with traditional restoration approaches based on blind deconvolution and the Lucy-Richardson method, our method can substantially restore motion-blurred images and improve the correct recognition rate. Our experiments show that the proposed method is able to restore traffic sign information accurately and efficiently.
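A ratio of this kind can be sketched as follows. The paper's exact GMG formula is not given in the abstract, so the common "mean gradient magnitude" definition is assumed here, and the stripe pattern is an invented test image rather than a traffic sign:

```python
import numpy as np

def gray_mean_gradient(img):
    """Mean gradient magnitude of a grayscale image (one common definition;
    the paper's exact GMG formula is not stated in the abstract)."""
    img = np.asarray(img, float)
    gy, gx = np.gradient(img)
    return np.sqrt(gx ** 2 + gy ** 2).mean()

def gmg_ratio(restored, blurred):
    """A value > 1 suggests the restoration recovered sharpness lost to blur."""
    return gray_mean_gradient(restored) / gray_mean_gradient(blurred)

# Toy check: horizontally motion-blurring a striped pattern flattens its
# gradients, so sharp-vs-blurred gives a ratio well above 1.
sharp = np.tile(np.repeat([0.0, 1.0], 2), (32, 8))   # vertical stripes, 32x32
kernel = np.ones(5) / 5.0                            # horizontal motion blur
blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"),
                              1, sharp)
ratio = gmg_ratio(sharp, blurred)
```
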

  1. Synthesis of nanostructured barium phosphate and its application in micro-computed tomography of mouse brain vessels in ex vivo

    NASA Astrophysics Data System (ADS)

    Zhu, Bangshang; Yuan, Falei; Yuan, Xiaoya; Bo, Yang; Wang, Yongting; Yang, Guo-Yuan; Drummen, Gregor P. C.; Zhu, Xinyuan

    2014-02-01

    Micro-computed tomography (micro-CT) is a powerful tool for visualizing the vascular systems of tissues, organs, or entire small animals. Vascular contrast agents play a vital role in micro-CT imaging for obtaining clear, high-quality images. In this study, a new kind of nanostructured barium phosphate was fabricated and used as a contrast agent for ex vivo micro-CT imaging of blood vessels in the mouse brain. Nanostructured barium phosphate was synthesized through a simple wet precipitation method using Ba(NO3)2 and (NH4)2HPO4 as starting materials. The physiochemical properties of the barium phosphate were characterized by scanning electron microscopy, transmission electron microscopy, X-ray diffraction, Fourier transform infrared spectroscopy, and thermal analysis. Furthermore, the impact of the produced nanostructures on cell viability was evaluated via the MTT assay, which generally showed low to moderate cytotoxicity. Finally, the animal test images demonstrated that the use of nanostructured barium phosphate as a contrast agent in micro-CT imaging produced sharp images with excellent contrast; both major vessels and the microvasculature were clearly observable in the imaged mouse brain. Overall, the results indicate that nanostructured barium phosphate is a promising vascular contrast agent for micro-CT imaging.

  2. Recent Developments and Applications of Radiation/Detection Technology in Tsinghua University

    NASA Astrophysics Data System (ADS)

    Kang, Ke-Jun

    2010-03-01

    Nuclear technology applications have been important research fields at Tsinghua University (THU) for more than 50 years. This paper describes two major directions and related projects running at THU: nuclear technology applications for radiation imaging and for astrophysics. Radiation imaging is a significant application of nuclear technology for a wide range of real-world needs, including security inspection, anti-smuggling operations, and medicine. The current improved imaging systems give much higher quality radiation images. THU has produced accelerating tubes for both industrial and medical accelerators with energies ranging from 2.5 to 20 MeV. Detectors have been produced for medical and industrial imaging as well as for high energy physics experiments, such as the MRPC with fast timing and position resolution. DR and CT radiation imaging systems have been continuously improved with new system designs and improved algorithms for image reconstruction and processing. Two important new key initiatives are the dual-energy radiography and dual-energy CT systems. Dual-energy CT imaging improves material discrimination by providing both the electron density and the atomic number distribution of scanned objects. Finally, this paper introduces recent developments related to the hard X-ray modulation telescope (HXMT) provided by THU.

  3. Training Midwives to Perform Basic Obstetric Point-of-Care Ultrasound in Rural Areas Using a Tablet Platform and Mobile Phone Transmission Technology-A WFUMB COE Project.

    PubMed

    Vinayak, Sudhir; Sande, Joyce; Nisenbaum, Harvey; Nolsøe, Christian Pállson

    2017-10-01

    Point-of-care ultrasound (POCUS) has become a topical subject and can be applied in a variety of ways with differing outcomes. The cost of all diagnostic procedures, including obstetric ultrasound examinations, is a major factor in the developing world, and POCUS is only useful if it can be equated to good outcomes at a lower cost than a routine obstetric examination. The aim of this study was to assess a number of processes: the accuracy of images and reports generated by midwives, the performance of a tablet-sized ultrasound scanner, the training of midwives to perform the examinations, teleradiology transmission of images via the internet, review of images by a radiologist, communication between midwife and radiologist, use of this technique to identify high-risk patients, and improvement of the education and teleradiology model components. The midwives had no previous experience in ultrasound. They were stationed in rural locations where POCUS was available for the first time. After scanning the patients, an interim report was generated by the midwives and sent electronically, together with all images, to the main hospital for validation. Dedicated software was used to send lossless images by mobile phone using a modem. Transmission times were short and the quality of the transmitted images was excellent. All reports were validated by two experienced radiologists in our department and returned to the centers using the same transmission software. The transmission times, quality of scans, quality of reports and other parameters were recorded and monitored. Analysis showed excellent correlation between provisional and validated reports. The reporting accuracy of scans performed by the midwives was 99.63%. Overall turnaround time (from patient presentation to validated report) was initially 35 min but was reduced to 25 min. The mobile phone transmission was faultless and there was no degradation of image quality. We found excellent correlation between the final outcomes of the pregnancies and the diagnoses based on the reports generated by the midwives. Only one discrepancy was found in the midwives' reports. Scan results versus actual outcomes revealed two discrepancies in the 20 patients identified as high risk. In conclusion, we found that it is valuable to train midwives in POCUS to use an ultrasound tablet device and transmit images and reports via the internet to radiologists for review of accuracy. This focus on the identification of high-risk patients can be valuable in a remote healthcare facility.

  4. Blending online techniques with traditional face to face teaching methods to deliver final year undergraduate radiology learning content.

    PubMed

    Howlett, David; Vincent, Tim; Watson, Gillian; Owens, Emma; Webb, Richard; Gainsborough, Nicola; Fairclough, Jil; Taylor, Nick; Miles, Ken; Cohen, Jon; Vincent, Richard

    2011-06-01

    To review the initial experience of blending a variety of online educational techniques with traditional face-to-face or contact-based teaching methods to deliver final year undergraduate radiology content at a UK medical school. The Brighton and Sussex Medical School opened in 2003 and offers a 5-year undergraduate programme, with the final year spent in several regional centres. Year 5 involves several core clinical specialities, with onsite radiology teaching provided at the regional centres in the form of small-group tutorials, imaging seminars and a one-day course. An online educational module was introduced in 2007 to facilitate equitable delivery of the year 5 curriculum between the regional centres and to support students on placement. This module had a strong radiological emphasis, combining imaging integrated into clinical cases to reflect everyday practice with dedicated radiology cases. For the second cohort of year 5 students in 2008, two additional media-rich online initiatives were introduced to complement the online module, comprising imaging tutorials and an online case discussion room. In the first year, for the 2007/2008 cohort, 490 cases were written, edited and delivered via the medical school's managed learning environment as part of the online module. 253 cases contained some form of image media, of which 195 had a radiological component, with a total of 325 radiology images. Important aspects of radiology practice (e.g. consent, patient safety, contrast toxicity, ionising radiation) were also covered. There were 274,000 student hits on cases in the first year, with students completing a mean of 169 cases each. High levels of student satisfaction were recorded in relation to the online module and the additional online radiology teaching initiatives. Online educational techniques can be effectively blended with other forms of teaching to allow successful undergraduate delivery of radiology. Efficient IT links and good image quality are essential ingredients for successful student/clinician engagement.

  5. Color pictorial serpentine halftone for secure embedded data

    NASA Astrophysics Data System (ADS)

    Curry, Douglas N.

    1998-04-01

    This paper introduces a new rotatable glyph shape for trusted printing applications that has excellent image rendering, data storage and counterfeit deterrence properties. Referred to as a serpentine because it tiles into a meandering line screen, it can produce high-quality images independent of its ability to embed data. The halftone cell is constructed with hyperbolic curves to enhance its dynamic range, and generates low distortion because of rotational tone invariance with its neighbors. An extension to the process allows the data to be formatted into human-readable text patterns, viewable with a magnifying glass and therefore not requiring input scanning. The resultant embedded halftone patterns can be recognized as simple numbers (0-9) or alphanumerics (a-z). The pattern intensity can be offset from the surrounding image field intensity, producing a watermarking effect. We have been able to embed words such as 'original' or license numbers into the background halftone pattern of images, which can be readily observed in the original image and which conveniently disappear upon copying. We have also embedded data blocks with self-clocking codes and error correction data which are machine-readable. Finally, we have successfully printed full color images with both the embedded data and text, simulating a trusted printing application.

  6. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study

    PubMed Central

    Sappa, Angel D.; Carvajal, Juan A.; Aguilera, Cristhian A.; Oliveira, Miguel; Romero, Dennis; Vintimilla, Boris X.

    2016-01-01

    This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR). PMID:27294938
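A minimal sketch of the kind of pipeline evaluated here, using a one-level Haar decomposition with an average rule for the approximation band and a max-abs rule for the detail bands. This combination is only one of many setups such a study compares, not the paper's recommended configuration:

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar decomposition: (LL, LH, HL, HH)."""
    x = np.asarray(x, float)
    a, d = (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2
    return ((a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2)

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(vis, ir):
    """Average the approximation bands; keep the larger-magnitude
    detail coefficient (max-abs rule) from either source image."""
    cv, ci = haar2d(vis), haar2d(ir)
    LL = (cv[0] + ci[0]) / 2
    details = [np.where(np.abs(v) >= np.abs(i), v, i)
               for v, i in zip(cv[1:], ci[1:])]
    return ihaar2d(LL, *details)

rng = np.random.default_rng(1)
vis, ir = rng.random((16, 16)), rng.random((16, 16))   # stand-in image pair
fused = fuse(vis, ir)
```

Because the transform pair is exact, fusing an image with itself returns the image unchanged, a useful sanity check for any such setup.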

  7. A survey of MRI-based medical image analysis for brain tumor studies

    NASA Astrophysics Data System (ADS)

    Bauer, Stefan; Wiest, Roland; Nolte, Lutz-P.; Reyes, Mauricio

    2013-07-01

    MRI-based medical image analysis for brain tumor studies is gaining attention due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods to the analysis of brain tumor images date back almost two decades, current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview, first giving a brief introduction to brain tumors and their imaging. We then review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images, with a focus on gliomas. The objective of segmentation is to outline the tumor, including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied with standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, with special attention to recent developments in radiological tumor assessment guidelines.

  8. [Application of single-band brightness variance ratio to the interference dissociation of cloud for satellite data].

    PubMed

    Qu, Wei-ping; Liu, Wen-qing; Liu, Jian-guo; Lu, Yi-huai; Zhu, Jun; Qin, Min; Liu, Cheng

    2006-11-01

    In satellite remote-sensing detection, cloud acts as an interference that degrades data retrieval. How to discern cloud fields with high fidelity is therefore a prerequisite for subsequent research. A new method rooted in the atmospheric radiation characteristics of the cloud layer is presented in this paper, in which a single-band brightness variance ratio is used to detect the relative intensity of cloud clutter and thereby delineate cloud fields rapidly and exactly. Formulae are also given for the brightness variance ratio of a satellite image, the image reflectance variance ratio, and the brightness temperature variance ratio of a thermal infrared image, enabling cloud elimination to produce data free from cloud interference. Based on the differing penetrating capability of different spectral bands, an objective evaluation of their cloud penetration is made, along with the factors that influence the penetration effect. Finally, a multi-band data fusion task is completed using infrared cloud-penetration image data from cirrus nothus. The reconstructed image data are of good quality and accurately reproduce the real visible-band data covered by the cloud fields. Statistics indicate the consistency of the waveband correlation with the image data after fusion.

  9. Comparison of amplitude-decorrelation, speckle-variance and phase-variance OCT angiography methods for imaging the human retina and choroid

    PubMed Central

    Gorczynska, Iwona; Migacz, Justin V.; Zawadzki, Robert J.; Capps, Arlie G.; Werner, John S.

    2016-01-01

    We compared the performance of three OCT angiography (OCTA) methods: speckle variance, amplitude decorrelation and phase variance, for imaging of the human retina and choroid. Two averaging methods, split spectrum and volume averaging, were compared to assess the quality of the OCTA vascular images. All data were acquired using a swept-source OCT system at 1040 nm central wavelength, operating at 100,000 A-scans/s. We performed a quantitative comparison using a contrast-to-noise ratio (CNR) metric to assess the capability of the three methods to visualize the choriocapillaris layer. For evaluation of static tissue noise suppression in OCTA images, we propose calculating the CNR between the photoreceptor/RPE complex and the choriocapillaris layer. Finally, we demonstrated that combining intensity-based OCT imaging and OCT angiography methods allows visualization of the retinal and choroidal vascular layers known from anatomic studies of retinal preparations. OCT projection imaging of data flattened to selected retinal layers was implemented to visualize the retinal and choroidal vasculature. User-guided vessel tracing was applied to segment the retinal vasculature. The results were visualized in the form of a skeletonized 3D model. PMID:27231598
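The speckle-variance computation and a CNR of the kind used to compare vascular and static layers can be sketched as follows. The paper's exact CNR formula is not given in the abstract, so a common (μ₁ − μ₂)/√(σ₁² + σ₂²) form is assumed, and the frames are simulated rather than real B-scans:

```python
import numpy as np

def speckle_variance(frames):
    """Inter-frame intensity variance over N repeated B-scans; moving
    blood decorrelates the speckle, so vessels show high variance."""
    return np.var(np.asarray(frames, float), axis=0)

def cnr(region_a, region_b):
    """Contrast-to-noise ratio between two regions of an OCTA map,
    here (mean_a - mean_b) / sqrt(var_a + var_b)."""
    a, b = np.asarray(region_a, float), np.asarray(region_b, float)
    return (a.mean() - b.mean()) / np.sqrt(a.var() + b.var())

# Toy data: 4 repeats of a 32x32 B-scan; rows 10-19 contain "flow"
# (large inter-frame fluctuation), the rest is nearly static tissue.
rng = np.random.default_rng(0)
frames = 100.0 + rng.normal(0.0, 1.0, (4, 32, 32))
frames[:, 10:20, :] += rng.normal(0.0, 20.0, (4, 10, 32))
sv = speckle_variance(frames)
score = cnr(sv[10:20], sv[np.r_[0:10, 20:32]])
```
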

  10. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study.

    PubMed

    Sappa, Angel D; Carvajal, Juan A; Aguilera, Cristhian A; Oliveira, Miguel; Romero, Dennis; Vintimilla, Boris X

    2016-06-10

    This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR).

  11. [Algorithm of locally adaptive region growing based on multi-template matching applied to automated detection of hemorrhages].

    PubMed

    Gao, Wei-Wei; Shen, Jian-Xin; Wang, Yu-Liang; Liang, Chun; Zuo, Jing

    2013-02-01

    In order to automatically detect hemorrhages in fundus images and develop an automated diabetic retinopathy screening system, a novel algorithm, locally adaptive region growing based on multi-template matching, was established and studied. Firstly, the spectral signatures of the major anatomical structures in the fundus were studied so that the appropriate channel among the RGB channels could be selected for each segmentation target. Secondly, the fundus image was preprocessed by means of HSV brightness correction and contrast limited adaptive histogram equalization (CLAHE). Then, seeds for region growing were found by removing the optic disc and vessels from the result of normalized cross-correlation (NCC) template matching performed on the preprocessed image with several templates. Finally, locally adaptive region growing segmentation was used to find the exact contours of the hemorrhages, completing the automated detection of the lesions. The approach was tested on 90 fundus images of different resolutions with variable color, brightness and quality. Results suggest that the approach can quickly and effectively detect hemorrhages in fundus images, and that it is stable and robust. As a result, the approach can meet clinical demands.
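The NCC template-matching stage can be sketched as below. Seed selection, channel choice and the locally adaptive growing criterion are omitted, and the small blob template is invented for illustration, not one of the paper's templates:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def ncc_map(image, template):
    """Normalized cross-correlation of a template over an image
    (valid positions only); values lie in [-1, 1]."""
    img = np.asarray(image, float)
    t = np.asarray(template, float)
    t0 = t - t.mean()
    windows = sliding_window_view(img, t.shape)            # (H', W', th, tw)
    w0 = windows - windows.mean(axis=(-2, -1), keepdims=True)
    num = (w0 * t0).sum(axis=(-2, -1))
    den = np.sqrt((w0 ** 2).sum(axis=(-2, -1)) * (t0 ** 2).sum())
    return num / np.maximum(den, 1e-12)

# Toy check: plant the template in a blank image; the NCC peak should
# recover its location exactly, with a score of 1 at the true position.
img = np.zeros((20, 20))
tpl = np.array([[0., 1., 0.], [1., 2., 1.], [0., 1., 0.]])
img[7:10, 5:8] = tpl
m = ncc_map(img, tpl)
peak = np.unravel_index(np.argmax(m), m.shape)
```

In a real screening pipeline, thresholding this map (after masking the optic disc and vessels) yields the candidate seed locations for region growing.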

  12. Digital processing of radiographic images from PACS to publishing.

    PubMed

    Christian, M E; Davidson, H C; Wiggins, R H; Berges, G; Cannon, G; Jackson, G; Chapman, B; Harnsberger, H R

    2001-03-01

    Several studies have addressed the implications of filmless radiologic imaging on telemedicine, diagnostic ability, and electronic teaching files. However, many publishers still require authors to submit hard-copy images for publication of articles and textbooks. This study compares the quality of digital images directly exported from picture archiving and communication systems (PACS) with that of images digitized from radiographic film. The authors evaluated the quality of publication-grade glossy photographs produced from digital radiographic images using 3 different methods: (1) film images digitized using a desktop scanner and then printed, (2) digital images obtained directly from PACS and then printed, and (3) digital images obtained from PACS and processed to improve sharpness prior to printing. Twenty images were printed using each of the 3 methods and rated for quality by 7 radiologists. The results were analyzed for statistically significant differences among the image sets. Subjective evaluations found the filmless images to be of equal or better quality than the digitized images. Direct electronic transfer of PACS images reduces the number of steps involved in creating publication-quality images and provides the means to produce high-quality radiographic images in a digital environment.

  13. Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators

    PubMed Central

    Bai, Xiangzhi

    2015-01-01

    The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including image regions and details, and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229

  14. Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators.

    PubMed

    Bai, Xiangzhi

    2015-07-15

    The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including image regions and details, and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion.
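The toggle operator underlying the alternating operators can be illustrated with a simplified stand-in: a flat-kernel toggle contrast operator that snaps each pixel to its local max (dilation) or local min (erosion), whichever is closer. The paper's actual operators are built from openings and closings, so this is only a sketch of the general idea:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def toggle(img, size=3):
    """Toggle contrast operator with a flat size x size structuring element:
    each pixel moves to the local max or local min, whichever is nearer.
    (A simplified stand-in for the paper's opening/closing based operator.)"""
    f = np.asarray(img, float)
    pad = size // 2
    p = np.pad(f, pad, mode="edge")
    w = sliding_window_view(p, (size, size))
    dil = w.max(axis=(-2, -1))   # flat dilation
    ero = w.min(axis=(-2, -1))   # flat erosion
    return np.where(dil - f < f - ero, dil, ero)

# Toy check: a soft edge is driven toward its extremes, i.e. sharpened.
row = np.array([0., 0., 0., 0.2, 0.5, 0.8, 1., 1., 1.])
soft = np.tile(row, (9, 1))
hard = toggle(soft)
```

Differences between the image and such operator outputs at several scales are one way to obtain the multi-scale features that a fusion rule then combines.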

  15. Human visual system consistent quality assessment for remote sensing image fusion

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Huang, Junyi; Liu, Shuguang; Li, Huali; Zhou, Qiming; Liu, Junchen

    2015-07-01

    Quality assessment for image fusion is essential for remote sensing applications. Commonly used indices require a high spatial resolution multispectral (MS) image as a reference, which is not always available, and fusion quality assessments based on these indices may not be consistent with the Human Visual System (HVS). To overcome this requirement and inconsistency, this paper proposes an HVS-consistent image fusion quality assessment index that works at the highest resolution without a reference MS image, using Gaussian Scale Space (GSS) technology to simulate the HVS. The spatial details and spectral information of the original and fused images are first separated in GSS, and their qualities are evaluated using the proposed spatial and spectral quality indices, respectively. The overall quality is determined without a reference MS image by combining the two proposed indices. Experimental results on various remote sensing images indicate that the proposed index is more consistent with HVS evaluation than other widely used indices, whether or not they require reference images.
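The separation into spatial detail and low-frequency content can be sketched as a single Gaussian smoothing level. The paper's actual GSS construction and its spatial/spectral indices are more elaborate; this only shows the basic coarse/detail split:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian kernel for separable smoothing."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def scale_space_split(img, sigma=2.0):
    """One Gaussian smoothing level (separable convolution) giving a
    coarse layer, plus the residual detail layer; coarse + detail == img."""
    f = np.asarray(img, float)
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(np.convolve, 1, f, k, mode="same")
    coarse = np.apply_along_axis(np.convolve, 0, rows, k, mode="same")
    return coarse, f - coarse

rng = np.random.default_rng(0)
img = rng.random((24, 24))        # stand-in for one band of an MS image
coarse, detail = scale_space_split(img)
```

Comparing the detail layers of the original and fused images assesses spatial quality, while comparing the coarse layers assesses spectral fidelity, which is the division of labor the index above builds on.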

  16. Low-cost oblique illumination: an image quality assessment.

    PubMed

    Ruiz-Santaquiteria, Jesus; Espinosa-Aranda, Jose Luis; Deniz, Oscar; Sanchez, Carlos; Borrego-Ramos, Maria; Blanco, Saul; Cristobal, Gabriel; Bueno, Gloria

    2018-01-01

    We study the effectiveness of several low-cost oblique illumination filters for improving overall image quality, in comparison with standard bright field imaging. For this purpose, a dataset of 3360 diatom images belonging to 21 taxa was acquired. Subjective and objective image quality assessments were performed. The subjective evaluation was carried out by a group of diatom experts via a psychophysical test in which resolution, focus, and contrast were assessed. In addition, several objective no-reference image quality metrics were applied to the same image dataset to complete the study, together with the calculation of several texture features to analyze the effect of these filters on textural properties. Both image quality evaluation methods, subjective and objective, showed better results for images acquired using these illumination filters than for the unfiltered image. These promising results confirm that illumination filters of this kind can be a practical way to improve image quality, thanks to the simplicity and low cost of the design and manufacturing process.
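One widely used objective no-reference metric of the sort applied in such studies is the variance of the Laplacian as a focus/sharpness score. Whether this particular metric was among those used in the paper is not stated in the abstract; the sketch below only illustrates the no-reference idea:

```python
import numpy as np

def laplacian_variance(img):
    """Variance of the 4-neighbour discrete Laplacian response, a common
    no-reference focus/sharpness score (higher = sharper)."""
    f = np.asarray(img, float)
    lap = (-4 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return lap.var()

# Toy check: a maximally sharp checkerboard scores high, a featureless
# flat field scores zero -- no reference image is needed for either.
checker = np.indices((16, 16)).sum(axis=0) % 2
flat = np.full((16, 16), 0.5)
score_sharp = laplacian_variance(checker)
score_flat = laplacian_variance(flat)
```
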

  17. Quantifying the quality of medical x-ray images: An evaluation based on normal anatomy for lumbar spine and chest radiography

    NASA Astrophysics Data System (ADS)

    Tingberg, Anders Martin

    Optimisation in diagnostic radiology requires accurate methods for determination of patient absorbed dose and clinical image quality. Simple methods for evaluation of clinical image quality are at present scarce and this project aims at developing such methods. Two methods are used and further developed; fulfillment of image criteria (IC) and visual grading analysis (VGA). Clinical image quality descriptors are defined based on these two methods: image criteria score (ICS) and visual grading analysis score (VGAS), respectively. For both methods the basis is the Image Criteria of the ``European Guidelines on Quality Criteria for Diagnostic Radiographic Images''. Both methods have proved to be useful for evaluation of clinical image quality. The two methods complement each other: IC is an absolute method, which means that the quality of images of different patients and produced with different radiographic techniques can be compared with each other. The separating power of IC is, however, weaker than that of VGA. VGA is the best method for comparing images produced with different radiographic techniques and has strong separating power, but the results are relative, since the quality of an image is compared to the quality of a reference image. The usefulness of the two methods has been verified by comparing the results from both of them with results from a generally accepted method for evaluation of clinical image quality, receiver operating characteristics (ROC). The results of the comparison between the two methods based on visibility of anatomical structures and the method based on detection of pathological structures (free-response forced error) indicate that the former two methods can be used for evaluation of clinical image quality as efficiently as the method based on ROC. More studies are, however, needed for us to be able to draw a general conclusion, including studies of other organs, using other radiographic techniques, etc. 
The results of the experimental evaluation of clinical image quality are compared with physical quantities calculated with a theoretical model based on a voxel phantom, and correlations are found. The results demonstrate that the computer model can be a useful tool in planning further experimental studies.
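The two descriptors defined above, ICS and VGAS, are in essence averages over observers, images, and criteria. A minimal sketch (the data layout and function names are assumptions for illustration):

```python
def vgas(ratings):
    """Visual grading analysis score: mean of all observer ratings of
    anatomical-structure visibility relative to a reference image,
    e.g. on a -2 .. +2 scale. ratings[o][i] is observer o's rating
    of image i."""
    flat = [r for obs in ratings for r in obs]
    return sum(flat) / len(flat)

def ics(fulfilled):
    """Image criteria score: fraction of fulfilled image criteria,
    averaged over observers and images. fulfilled[o][i][c] is True if
    observer o judged criterion c met on image i."""
    flat = [int(f) for obs in fulfilled for img in obs for f in img]
    return sum(flat) / len(flat)
```

Because ICS is a fraction of absolute criteria, it is comparable across patients and techniques; VGAS is relative to the reference image, which is what gives it its stronger separating power.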

  18. Fast Physically Correct Refocusing for Sparse Light Fields Using Block-Based Multi-Rate View Interpolation.

    PubMed

    Huang, Chao-Tsung; Wang, Yu-Wen; Huang, Li-Ren; Chin, Jui; Chen, Liang-Gee

    2017-02-01

    Digital refocusing has a tradeoff between complexity and quality when using sparsely sampled light fields for low-storage applications. In this paper, we propose a fast physically correct refocusing algorithm to address this issue in a twofold way. First, view interpolation is adopted to provide photorealistic quality at infocus-defocus hybrid boundaries. To address its conventionally high complexity, we devised a fast line-scan method specifically for refocusing, whose 1D kernel can be 30× faster than the benchmark View Synthesis Reference Software (VSRS)-1D-Fast. Second, we propose a block-based multi-rate processing flow for accelerating purely infocused or defocused regions, with which a further 3-34× speedup can be achieved for high-resolution images. Candidate blocks of variable sizes can interpolate different numbers of rendered views and perform refocusing in different subsampled layers. To avoid visible aliasing and block artifacts, we determine these parameters and the simulated aperture filter through a localized filter response analysis using defocus blur statistics. The final quadtree block partitions are then optimized in terms of computation time. Extensive experimental results demonstrate superior refocusing quality and fast computation speed. In particular, the run time is comparable with that of conventional single-image blurring, which causes serious boundary artifacts.
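The baseline operation the paper accelerates, refocusing a light field, is the classic shift-and-add: each sub-aperture view is translated in proportion to its position in the synthetic aperture and the stack is averaged. A minimal sketch (integer pixel shifts and the `slope` parameterization are simplifying assumptions; the paper's method adds view interpolation and block-based multi-rate processing on top of this):

```python
import numpy as np

def refocus(views, positions, slope):
    """Shift-and-add refocusing. `views` are sub-aperture images,
    `positions` their (u, v) aperture coordinates, and `slope` the
    disparity (pixels per unit aperture) selecting the focal plane."""
    acc = np.zeros_like(views[0], dtype=float)
    for view, (u, v) in zip(views, positions):
        dy, dx = int(round(slope * v)), int(round(slope * u))
        # np.roll wraps at the border; a real implementation would pad instead
        acc += np.roll(np.roll(view, dy, axis=0), dx, axis=1)
    return acc / len(views)
```

With a sparsely sampled light field this naive average shows the boundary artifacts the paper targets, which is why intermediate views are interpolated before the sum.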

  19. WELDSMART: A vision-based expert system for quality control

    NASA Technical Reports Server (NTRS)

    Andersen, Kristinn; Barnett, Robert Joel; Springfield, James F.; Cook, George E.

    1992-01-01

    This work was aimed at exploring means for utilizing computer technology in quality inspection and evaluation. Inspection of metallic welds was selected as the main application for this development and primary emphasis was placed on visual inspection, as opposed to other inspection methods, such as radiographic techniques. Emphasis was placed on methodologies with the potential for use in real-time quality control systems. Because quality evaluation is somewhat subjective, despite various efforts to classify discontinuities and standardize inspection methods, the task of using a computer for both inspection and evaluation was not trivial. The work started out with a review of the various inspection techniques that are used for quality control in welding. Among other observations from this review was the finding that most weld defects result in abnormalities that may be seen by visual inspection. This supports the approach of emphasizing visual inspection for this work. Quality control consists of two phases: (1) identification of weld discontinuities (some of which may be severe enough to be classified as defects), and (2) assessment or evaluation of the weld based on the observed discontinuities. Usually the latter phase results in a pass/fail judgement for the inspected piece. It is the conclusion of this work that the first of the above tasks, identification of discontinuities, is the most challenging one. It calls for sophisticated image processing and image analysis techniques, and frequently ad hoc methods have to be developed to identify specific features in the weld image. The difficulty of this task is generally not due to limited computing power. In most cases it was found that a modest personal computer or workstation could carry out most computations in a reasonably short time period. Rather, the algorithms and methods necessary for identifying weld discontinuities were in some cases limited. 
The fact that specific techniques were finally developed and successfully demonstrated to work illustrates that the general approach taken here appears promising for commercial development of computerized quality inspection systems. Inspection based on these techniques may be used to supplement or substitute for more elaborate inspection methods, such as x-ray inspection.
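The two phases the abstract identifies, discontinuity detection followed by a pass/fail judgement, can be sketched in their simplest possible form: flag pixels that deviate strongly from the weld-bead intensity statistics, then accept the weld if the flagged area stays within tolerance. This is an illustrative stand-in for the ad hoc detectors the text describes, not the WELDSMART algorithm itself.

```python
import numpy as np

def find_discontinuities(img, k=3.0):
    # Phase 1: flag pixels deviating from the global weld-bead intensity
    # by more than k standard deviations
    mu, sigma = img.mean(), img.std()
    return np.abs(img - mu) > k * sigma

def pass_fail(mask, max_defect_pixels=10):
    # Phase 2: accept the weld if the flagged area is below a tolerance
    return int(mask.sum()) <= max_defect_pixels
```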

  20. Development of a stationary chest tomosynthesis system using carbon nanotube x-ray source array

    NASA Astrophysics Data System (ADS)

    Shan, Jing

    X-ray imaging systems have proven useful for providing quick and easy imaging in both clinical settings and emergency situations, greatly improving hospital workflow. However, conventional radiography systems lack 3D information in the images: tissue overlap in the 2D projection image results in low sensitivity and specificity. Computed tomography and digital tomosynthesis, the two conventional 3D imaging modalities, both require a complex gantry to mechanically translate the x-ray source to various positions. Over the past decade, our research group has developed a carbon nanotube (CNT) based x-ray source technology. CNT x-ray sources allow multiple x-ray sources to be packed into a single x-ray tube, and each individual source in the array can be switched electronically. This technology enables stationary tomographic imaging modalities without complex mechanical gantries. The goal of this work is to develop a stationary digital chest tomosynthesis (s-DCT) system and implement it in a clinical trial. The feasibility of s-DCT was investigated, and the CNT source array was found to provide sufficient x-ray output for chest imaging. Phantom images showed image quality comparable to conventional DCT. The s-DCT system was then used to study the effect of source array configuration on tomosynthesis image quality, and the feasibility of physiologically gated s-DCT. Using physical measures of spatial resolution, the 2D source configuration was shown to have improved depth resolution and comparable in-plane resolution. The prospectively gated tomosynthesis images showed a substantial reduction of the image blur associated with lung motion. The system was also used to investigate the feasibility of using s-DCT as a diagnostic and monitoring tool for cystic fibrosis patients. A new scatter reduction method for s-DCT was also studied. 
Finally, an s-DCT system was constructed by retrofitting the source array to a Carestream digital radiography system. The system passed the electrical and radiation safety tests and was installed in Marsico Hall. The patient trial started in March of 2015, and the first patient was successfully imaged.
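The basic tomosynthesis reconstruction underlying such a system is shift-and-add: projections from each source position are shifted so that structures in a chosen plane align, then averaged, so that plane appears sharp while other depths blur out. A minimal sketch under a deliberately simplified geometry (linear source array, parallel detector, integer shifts, and a hypothetical `sid` source-to-image distance parameter):

```python
import numpy as np

def shift_and_add(projections, source_x, plane_z, sid=1000.0):
    """Shift-and-add tomosynthesis reconstruction of one plane.
    `projections` are detector images, `source_x` the lateral offsets of
    the (stationary CNT) sources, `plane_z` the height of the plane above
    the detector, and `sid` the source-to-image distance. Structures at
    plane_z project with a parallax of x * plane_z / (sid - plane_z)."""
    acc = np.zeros_like(projections[0], dtype=float)
    for proj, x in zip(projections, source_x):
        shift = int(round(x * plane_z / (sid - plane_z)))
        # np.roll wraps at the border; real reconstructions pad or crop
        acc += np.roll(proj, shift, axis=1)
    return acc / len(projections)
```

Repeating this for a stack of `plane_z` values yields the slice images; the stationary CNT array removes the mechanical source motion but leaves this reconstruction step unchanged.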
