Science.gov

Sample records for 3d-ct imaging processing

  1. Jaw tissues segmentation in dental 3D CT images using fuzzy-connectedness and morphological processing.

    PubMed

    Lloréns, Roberto; Naranjo, Valery; López, Fernando; Alcañiz, Mariano

    2012-11-01

    The success of oral surgery depends on accurate advance planning. To properly plan dental surgery or a suitable implant placement, an accurate segmentation of the jaw tissues is necessary: the teeth, the cortical bone, the trabecular core and, above all, the inferior alveolar nerve. This manuscript presents a new automatic method based on fuzzy-connectedness object extraction and mathematical morphology processing. The method uses computed tomography data to extract different views of the jaw: a pseudo-orthopantomographic view to estimate the path of the nerve and cross-sectional views to segment the jaw tissues. The method has been tested on a ground-truth set consisting of more than 9000 cross-sections from 20 different patients and has been evaluated using four similarity indicators (the Jaccard index, Dice's coefficient, point-to-point and point-to-curve distances), achieving promising results in all of them (0.726±0.031, 0.840±0.019, 0.144±0.023 mm and 0.163±0.025 mm, respectively). The method has proven to be highly automated and accurate, with errors around 5% (of the diameter of the nerve), and is easily integrated into current dental planning systems. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
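
    The abstract above combines fuzzy-connectedness extraction with mathematical morphology. As a rough illustration of the morphological post-processing stage only (not the authors' full pipeline), the sketch below cleans a 3D binary tissue mask with SciPy; the structuring element and iteration counts are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def clean_binary_mask(mask, opening_iters=1, closing_iters=2):
        """Morphological cleanup of a 3D binary segmentation mask."""
        struct = ndimage.generate_binary_structure(3, 1)  # 6-connectivity
        opened = ndimage.binary_opening(mask, structure=struct, iterations=opening_iters)
        closed = ndimage.binary_closing(opened, structure=struct, iterations=closing_iters)
        # Keep only the largest connected component.
        labels, n = ndimage.label(closed)
        if n == 0:
            return closed
        sizes = ndimage.sum(closed, labels, range(1, n + 1))
        return labels == (np.argmax(sizes) + 1)
    ```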

  2. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    The number of emphysema patients has been increasing due to aging populations and smoking. Alveoli destroyed by emphysema cannot be restored, so early detection is essential. We describe an algorithm that extracts emphysematous lesions and quantitatively evaluates their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. By applying the algorithm to baseline and follow-up thoracic 3-D CT images, we demonstrate its potential to assist radiologists and physicians in quantitatively evaluating the distribution of emphysematous lesions and their evolution over time.
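
    LAA extraction of this kind is commonly implemented as a Hounsfield-unit threshold followed by connected-component analysis. A minimal NumPy/SciPy sketch follows; the -950 HU cutoff and minimum cluster size are conventional emphysema-quantification values, not taken from this abstract.

    ```python
    import numpy as np
    from scipy import ndimage

    def laa_percentage(ct_hu, lung_mask, threshold_hu=-950):
        """Percentage of lung voxels below a low-attenuation threshold (%LAA)."""
        lung_voxels = ct_hu[lung_mask]
        return 100.0 * np.count_nonzero(lung_voxels < threshold_hu) / lung_voxels.size

    def laa_candidates(ct_hu, lung_mask, threshold_hu=-950, min_voxels=10):
        """Connected LAA clusters as emphysematous lesion candidates."""
        laa = (ct_hu < threshold_hu) & lung_mask
        labels, n = ndimage.label(laa)
        sizes = ndimage.sum(laa, labels, range(1, n + 1))
        keep = np.flatnonzero(sizes >= min_voxels) + 1  # label ids to keep
        return np.isin(labels, keep)
    ```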

  3. Computation of tooth axes of existent and missing teeth from 3D CT images.

    PubMed

    Wang, Yang; Wu, Lin; Guo, Huayan; Qiu, Tiantian; Huang, Yuanliang; Lin, Bin; Wang, Lisheng

    2015-12-01

    Orientations of tooth axes are important quantitative information used in dental diagnosis and surgery planning. However, their computation is a complex problem, and existing methods have their respective limitations. This paper proposes new methods to compute 3D tooth axes from 3D CT images for existent teeth with single or multiple roots and to estimate 3D tooth axes from 3D CT images for missing teeth. The tooth axis of a single-root tooth is determined by segmenting the pulp cavity of the tooth and computing the principal direction of the pulp cavity, and the estimation of the tooth axes of missing teeth is modeled as an interpolation problem of quaternions along a 3D curve. The proposed methods can either avoid the difficult teeth segmentation problem or improve on the limitations of existing methods. Their effectiveness and practicality are demonstrated by experimental results on different clinical 3D CT images.
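
    Computing the principal direction of a segmented pulp cavity, as the abstract describes for single-root teeth, reduces to a PCA of the voxel coordinates. A minimal sketch, assuming `mask` is the binary pulp-cavity segmentation and `spacing` the voxel size in mm:

    ```python
    import numpy as np

    def principal_axis(mask, spacing=(1.0, 1.0, 1.0)):
        """Unit vector along the dominant elongation of a voxel region."""
        coords = np.argwhere(mask) * np.asarray(spacing)   # voxel -> mm
        centered = coords - coords.mean(axis=0)
        cov = np.cov(centered, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)             # ascending order
        return eigvecs[:, -1]                              # largest eigenvalue
    ```

    The quaternion interpolation used for missing teeth is not shown; it would interpolate orientations between the axes of neighboring existent teeth along the dental arch.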

  4. Segmentation of the ovine lung in 3D CT Images

    NASA Astrophysics Data System (ADS)

    Shi, Lijun; Hoffman, Eric A.; Reinhardt, Joseph M.

    2004-04-01

    Pulmonary CT images can provide detailed information about the regional structure and function of the respiratory system. Prior to any of these analyses, however, the lungs must be identified in the CT data sets. A popular animal model for understanding lung physiology and pathophysiology is the sheep. In this paper we describe a lung segmentation algorithm for CT images of sheep. The algorithm has two main steps. The first step is lung extraction, which identifies the lung region using a technique based on optimal thresholding and connected-components analysis. The second step is lung separation, which separates the left lung from the right lung by identifying the central fissure using an anatomy-based method incorporating dynamic programming and a line-filter algorithm. The lung segmentation algorithm has been validated by comparing our automatic method to manual analysis for five pulmonary CT datasets. The RMS error between the computer-defined and manually traced boundaries is 0.96 mm. The segmentation requires approximately 10 minutes for a 512x512x400 dataset on a PC workstation (2.40 GHz CPU, 2.0 GB RAM), while it takes a human observer approximately two hours to accomplish the same task.
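
    The lung-extraction step (optimal thresholding plus connected-components analysis) can be sketched as follows; Otsu thresholding stands in for the paper's optimal threshold, and the border-clearing and two-largest-components heuristics are common assumptions rather than details from the abstract.

    ```python
    import numpy as np
    from scipy import ndimage
    from skimage.filters import threshold_otsu

    def extract_lungs(ct_hu):
        """Rough lung mask from a 3D CT volume in Hounsfield units."""
        candidate = ct_hu < threshold_otsu(ct_hu)     # air-like voxels
        labels, _ = ndimage.label(candidate)
        # Discard air connected to the volume border (outside the body).
        border = np.unique(np.concatenate([
            labels[0].ravel(), labels[-1].ravel(),
            labels[:, 0].ravel(), labels[:, -1].ravel(),
            labels[:, :, 0].ravel(), labels[:, :, -1].ravel()]))
        candidate &= ~np.isin(labels, border[border > 0])
        # Keep the two largest remaining components (left and right lungs).
        labels, n = ndimage.label(candidate)
        sizes = ndimage.sum(candidate, labels, range(1, n + 1))
        return np.isin(labels, np.argsort(sizes)[-2:] + 1)
    ```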

  5. Calculation of strain images of a breast-mimicking phantom from 3D CT image data.

    PubMed

    Kim, Jae G; Aowlad Hossain, A B M; Shin, Jong H; Lee, Soo Y

    2012-09-01

    Elastography is a medical imaging modality for visualizing the elasticity of soft tissues. Ultrasound and MRI have been used almost exclusively for elastography of soft tissues since they can detect the tissues' minute displacements, on the order of micrometers. It is known that ultrasound and MRI elastography show cancerous tissues with much higher contrast than conventional ultrasound and MRI. To evaluate the possibility of combining elastography with x-ray imaging, we have calculated strain images of a breast-mimicking phantom from its 3D CT image data. We first simulated the x-ray elastography using an FEM model that incorporated both the elasticity and x-ray attenuation behaviors of breast tissues. After validating the x-ray elastography scheme by simulation, we made a breast-mimicking phantom that contained a hard inclusion in a soft background. With a micro-CT, we took 3D images of the phantom twice, changing the compressing force on the phantom. From the two 3D phantom images taken at two different compression ratios, we calculated the displacement vector maps that represented the compression-induced pixel displacements. In calculating the displacement vectors, we tracked the movements of image feature patterns from the less-compressed-phantom images to the more-compressed-phantom images using a 3D image correlation technique. We obtained strain images of the phantom by differentiating the displacement vector maps. The FEM simulation has shown that x-ray strain imaging is possible by tracking image feature patterns in the 3D CT images of the breast-mimicking phantom. The experimental displacement and strain images of a breast-mimicking phantom, obtained from the 3D micro-CT images taken with 0%-3% compression ratios, show behaviors similar to the FEM simulation results. The contrast and noise performance of the strain images improves as the phantom compression ratio increases. We have experimentally shown that we can improve x-ray strain image quality by applying 3D …
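
    The displacement tracking described above (3D image correlation between compression states) can be approximated by exhaustive block matching with normalized cross-correlation. A brute-force sketch, assuming `center` lies far enough from the volume borders; the window and search sizes are illustrative:

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two equally shaped blocks."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def match_block(fixed, moving, center, half=8, search=4):
        """Displacement of one feature block between two 3D volumes."""
        z, y, x = center
        ref = fixed[z - half:z + half, y - half:y + half, x - half:x + half]
        best_score, best_d = -2.0, (0, 0, 0)
        for dz in range(-search, search + 1):
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = moving[z + dz - half:z + dz + half,
                                  y + dy - half:y + dy + half,
                                  x + dx - half:x + dx + half]
                    score = ncc(ref, cand)
                    if score > best_score:
                        best_score, best_d = score, (dz, dy, dx)
        return best_d, best_score
    ```

    Differentiating the resulting displacement field (e.g., with np.gradient) then yields the strain maps.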

  6. 3D CT-Video Fusion for Image-Guided Bronchoscopy

    PubMed Central

    Higgins, William E.; Helferty, James P.; Lu, Kongkuo; Merritt, Scott A.; Rai, Lav; Yu, Kun-Chang

    2008-01-01

    Bronchoscopic biopsy of the central-chest lymph nodes is an important step for lung-cancer staging. Before bronchoscopy, the physician first visually assesses a patient’s three-dimensional (3D) computed tomography (CT) chest scan to identify suspect lymph-node sites. Next, during bronchoscopy, the physician guides the bronchoscope to each desired lymph-node site. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. Thus, the physician must essentially perform biopsy blindly, and the skill levels between different physicians differ greatly. We describe an approach that enables synergistic fusion between the 3D CT data and the bronchoscopic video. Both the integrated planning and guidance system and the internal CT-video registration and fusion methods are described. Phantom, animal, and human studies illustrate the efficacy of the methods. PMID:18096365

  7. 3D CT-video fusion for image-guided bronchoscopy.

    PubMed

    Higgins, William E; Helferty, James P; Lu, Kongkuo; Merritt, Scott A; Rai, Lav; Yu, Kun-Chang

    2008-04-01

    Bronchoscopic biopsy of the central-chest lymph nodes is an important step for lung-cancer staging. Before bronchoscopy, the physician first visually assesses a patient's three-dimensional (3D) computed tomography (CT) chest scan to identify suspect lymph-node sites. Next, during bronchoscopy, the physician guides the bronchoscope to each desired lymph-node site. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. Thus, the physician must essentially perform biopsy blindly, and the skill levels between different physicians differ greatly. We describe an approach that enables synergistic fusion between the 3D CT data and the bronchoscopic video. Both the integrated planning and guidance system and the internal CT-video registration and fusion methods are described. Phantom, animal, and human studies illustrate the efficacy of the methods.

  8. Segmentation of brain blood vessels using projections in 3-D CT angiography images.

    PubMed

    Babin, Danilo; Vansteenkiste, Ewout; Pizurica, Aleksandra; Philips, Wilfried

    2011-01-01

    Segmenting cerebral blood vessels is of great importance in diagnostic and clinical applications, especially in quantitative diagnostics and surgery on aneurysms and arteriovenous malformations (AVM). Segmentation of CT angiography images requires algorithms that are robust to high-intensity noise while remaining able to segment low-contrast vessels. Because of this, most of the existing methods require user intervention. In this work we propose an automatic algorithm for efficient segmentation of 3-D CT angiography images of cerebral blood vessels. Our method is robust to high-intensity noise and is able to accurately segment blood vessels with a wide range of luminance values, as well as low-contrast vessels.

  9. Automatic seed picking for brachytherapy postimplant validation with 3D CT images.

    PubMed

    Zhang, Guobin; Sun, Qiyuan; Jiang, Shan; Yang, Zhiyong; Ma, Xiaodong; Jiang, Haisong

    2017-06-22

    Postimplant validation is an indispensable part of the brachytherapy technique. It provides the necessary feedback to ensure the quality of the operation. The ability to pick implanted seeds relates directly to the accuracy of validation. To address this, an automatic approach is proposed for picking implanted brachytherapy seeds in 3D CT images. In order to pick seed configurations (location and orientation) efficiently, the approach starts with segmentation of the seeds from CT images using a thresholding filter based on the gray-level histogram. Through filtering and denoising, touching seeds and single seeds are classified. The true novelty of this approach is found in the application of Canny edge detection and an improved concave-points matching algorithm to separate touching seeds. Through the computation of image moments, the seed configuration can be determined efficiently. Finally, two different experiments were designed to verify the performance of the proposed approach: (1) a physical phantom with 60 model seeds, and (2) patient data with 16 cases. Through assessment of the validated results by a medical physicist, the proposed method exhibited promising results. The experiment on the phantom demonstrates that the error of seed location and orientation is within ([Formula: see text]) mm and ([Formula: see text])°, respectively. In addition, most seed location and orientation errors are controlled within 0.8 mm and 3.5° in all cases, respectively. The average processing time for seed picking is 8.7 s per 100 seeds. In this paper, an automatic, efficient and robust approach, performed on CT images, is proposed to determine the implanted seed location as well as orientation in a 3D workspace. Through the experiments with phantom and patient data, this approach exhibits good performance.
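
    The moment-based orientation step can be sketched with the inertia (covariance) tensor of each seed's voxels: the centroid gives the location and the dominant eigenvector gives the long axis. A minimal sketch, not the paper's exact formulation:

    ```python
    import numpy as np

    def seed_pose(seed_mask, spacing=(1.0, 1.0, 1.0)):
        """Centroid (mm) and long-axis direction of one segmented seed."""
        coords = np.argwhere(seed_mask) * np.asarray(spacing)
        centroid = coords.mean(axis=0)
        cov = np.cov(coords - centroid, rowvar=False)   # second-order moments
        eigvals, eigvecs = np.linalg.eigh(cov)
        axis = eigvecs[:, -1]                           # elongation direction
        tilt_deg = np.degrees(np.arccos(abs(axis[0])))  # tilt w.r.t. z axis
        return centroid, axis, tilt_deg
    ```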

  10. Inter-plane artifact suppression in tomosynthesis using 3D CT image data

    PubMed Central

    2011-01-01

    … the proposed method. Conclusions: The proposed tomosynthesis technique can improve image contrast with the aid of 3D whole-volume CT images. Even though local tomosynthesis needs an extra 3D CT scan, it may find clinical applications in special situations in which an extra 3D CT scan is already available or allowed. PMID:22151538

  11. Inter-plane artifact suppression in tomosynthesis using 3D CT image data.

    PubMed

    Kim, Jae G; Jin, Seung O; Cho, Min H; Lee, Soo Y

    2011-12-10

    The proposed tomosynthesis technique can improve image contrast with the aid of 3D whole-volume CT images. Even though local tomosynthesis needs an extra 3D CT scan, it may find clinical applications in special situations in which an extra 3D CT scan is already available or allowed.

  12. Reliability analysis of Cobb angle measurements of congenital scoliosis using X-ray and 3D-CT images.

    PubMed

    Tauchi, Ryoji; Tsuji, Taichi; Cahill, Patrick J; Flynn, John M; Flynn, John M; Glotzbecker, Michael; El-Hawary, Ron; Heflin, John A; Imagama, Shiro; Joshi, Ajeya P; Nohara, Ayato; Ramirez, Norman; Roye, David P; Saito, Toshiki; Sawyer, Jeffrey R; Smith, John T; Kawakami, Noriaki

    2016-01-01

    Therapeutic decisions for congenital scoliosis rely on Cobb angle measurements on consecutive radiographs. There have been no studies documenting the variability of measuring the Cobb angle using 3D-CT images in children with congenital scoliosis. The purpose of this study was to compare the reliability and measurement errors of X-ray images and of 3D-CT images. The X-ray and 3D-CT images of 20 patients diagnosed with congenital scoliosis were used to assess the reliability of digital 3D-CT images for measurement of the Cobb angle. Thirteen observers performed the measurements, and each image was analyzed by each observer twice with a minimum interval of 1 week between measurements. Intraobserver variation was expressed as the mean absolute difference (MAD) and standard deviation (SD) between measurements and the intraclass correlation coefficient (IaCC) of the measurements. In addition, interobserver variation was expressed as the MAD and the interclass correlation coefficient (IeCC). The average MAD and SD were 4.5° and 3.2° for the X-ray method and 3.7° and 2.6° for the 3D-CT method. The intraobserver and interobserver correlation coefficients were excellent for both methods (X-ray: IaCC 0.835-0.994, IeCC 0.847; 3D-CT: IaCC 0.819-0.996, IeCC 0.893). There was no significant difference in MAD between X-ray and 3D-CT images in measuring each type of congenital scoliosis by each observer. Cobb angle measurements in patients with congenital scoliosis using frontal-plane X-ray images could be reproduced with almost the same measurement variance (3°-4° measurement error) using 3D-CT images. This suggests that X-ray images are clinically sufficient for assessing any type of congenital scoliosis when measuring the Cobb angle alone. However, since 3D-CT can provide more detailed images of the anterior and posterior components of malformed vertebrae, the volume of information that can be obtained by evaluating them has …
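
    The MAD/SD statistics used above are straightforward to reproduce. A minimal sketch for one observer's two measurement sessions (the ICC computation, which depends on the chosen ICC model, is omitted):

    ```python
    import numpy as np

    def intraobserver_mad(first_session, second_session):
        """Mean absolute difference and its SD between repeated Cobb
        angle measurements (degrees), one entry per curve."""
        diff = np.abs(np.asarray(first_session) - np.asarray(second_session))
        return diff.mean(), diff.std(ddof=1)
    ```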

  13. Geometry-based vs. intensity-based medical image registration: A comparative study on 3D CT data.

    PubMed

    Savva, Antonis D; Economopoulos, Theodore L; Matsopoulos, George K

    2016-02-01

    Spatial alignment of Computed Tomography (CT) data sets is often required in numerous medical applications and it is usually achieved by applying conventional exhaustive registration techniques, which are mainly based on the intensity of the subject data sets. Those techniques consider the full range of data points composing the data, thus negatively affecting the required processing time. Alternatively, alignment can be performed using the correspondence of extracted data points from both sets. Moreover, various geometrical characteristics of those data points can be used, instead of their chromatic properties, for uniquely characterizing each point, by forming a specific geometrical descriptor. This paper presents a comparative study reviewing variations of geometry-based, descriptor-oriented registration techniques, as well as conventional, exhaustive, intensity-based methods for aligning three-dimensional (3D) CT data pairs. In this context, three general image registration frameworks were examined: a geometry-based methodology featuring three distinct geometrical descriptors, an intensity-based methodology using three different similarity metrics, as well as the commonly used Iterative Closest Point algorithm. All techniques were applied on a total of thirty 3D CT data pairs with both known and unknown initial spatial differences. After an extensive qualitative and quantitative assessment, it was concluded that the proposed geometry-based registration framework performed similarly to the examined exhaustive registration techniques. In addition, geometry-based methods dramatically improved processing time over conventional exhaustive registration. Copyright © 2015 Elsevier Ltd. All rights reserved.
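
    Of the frameworks compared above, the Iterative Closest Point algorithm is the most compact to illustrate. A minimal rigid point-to-point ICP sketch using SciPy and the Kabsch solution (simplified relative to any implementation the study may have used):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def kabsch(src, dst):
        """Least-squares rotation R and translation t with dst ~ src @ R.T + t."""
        sc, dc = src.mean(axis=0), dst.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
        d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, dc - R @ sc

    def icp(source, target, iters=50, tol=1e-6):
        """Align a 3D source point cloud to a target point cloud."""
        tree, src, prev = cKDTree(target), source.copy(), np.inf
        for _ in range(iters):
            dists, idx = tree.query(src)                # closest-point pairs
            R, t = kabsch(src, target[idx])
            src = src @ R.T + t
            if abs(prev - dists.mean()) < tol:
                break
            prev = dists.mean()
        return src
    ```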

  14. Segmentation of bone structures in 3D CT images based on continuous max-flow optimization

    NASA Astrophysics Data System (ADS)

    Pérez-Carrasco, J. A.; Acha-Piñero, B.; Serrano, C.

    2015-03-01

    In this paper an algorithm to carry out the automatic segmentation of bone structures in 3D CT images has been implemented. Automatic segmentation of bone structures is of special interest for radiologists and surgeons analyzing bone diseases or planning surgical interventions. This task is very complicated as bones usually present intensities overlapping with those of surrounding tissues. This overlapping is mainly due to the composition of bones and to the presence of diseases such as osteoarthritis, osteoporosis, etc. Moreover, segmentation of bone structures is a very time-consuming task due to the 3D nature of the data. Usually, this segmentation is implemented manually or with algorithms using simple techniques such as thresholding, thus providing poor results. In this paper gray-level information and 3D statistical information have been combined and used as input to a continuous max-flow algorithm. Twenty CT images have been tested and different coefficients have been computed to assess the performance of our implementation. Dice and sensitivity values above 0.91 and 0.97, respectively, were obtained. A comparison with level sets and thresholding techniques has been carried out, and our results outperformed them in terms of accuracy.

  15. Real-time respiratory phase matching between 2D fluoroscopic images and 3D CT images for precise percutaneous lung biopsy.

    PubMed

    Weon, Chijun; Kim, Mina; Park, Chang Min; Ra, Jong Beom

    2017-08-20

    A 3D CT image is used along with real-time 2D fluoroscopic images in the state-of-the-art cone-beam CT system to guide percutaneous lung biopsy (PLB). To improve the guiding accuracy by compensating for respiratory motion, we propose an algorithm for real-time matching of 2D fluoroscopic images to multiple 3D CT images of different respiratory phases that is robust to the small movement and deformation due to cardiac motion. Based on the transformations obtained from non-rigid registration between two 3D CT images acquired at expiratory and inspiratory phases, we first generate sequential 3D CT images (or a 4D CT image) and the corresponding 2D digitally reconstructed radiographs (DRRs) of vessels. We then determine the 3D CT images corresponding to each real-time 2D fluoroscopic image by matching the 2D fluoroscopic image to a 2D DRR. Quantitative evaluations performed with 20 clinical datasets show that registration errors of anatomical features between a 2D fluoroscopic image and its matched 2D DRR are less than 3 mm on average. Registration errors of a target lesion are roughly 3 mm on average for 10 datasets. We propose a real-time matching algorithm to compensate for respiratory motion between a 2D fluoroscopic image and 3D CT images of the lung, regardless of cardiac motion, based on a newly improved matching measure. The proposed algorithm can improve the accuracy of a guiding system for PLB by providing 3D images precisely registered to 2D fluoroscopic images in real time, without time-consuming respiratory-gated or cardiac-gated CT images. This article is protected by copyright. All rights reserved.
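
    The phase-matching idea (compare a fluoroscopic frame against DRRs rendered from each 4D-CT phase) can be sketched with a simple parallel-projection ray sum standing in for proper DRR rendering, and plain NCC standing in for the paper's improved matching measure:

    ```python
    import numpy as np

    def drr(volume, axis=1):
        """Parallel-projection DRR: ray-sum of the CT volume along one axis."""
        return volume.sum(axis=axis)

    def best_phase(fluoro_2d, phase_volumes):
        """Index of the 4D-CT phase whose DRR best matches the 2D frame."""
        def ncc(a, b):
            a = (a - a.mean()) / (a.std() + 1e-9)
            b = (b - b.mean()) / (b.std() + 1e-9)
            return float((a * b).mean())
        scores = [ncc(fluoro_2d, drr(v)) for v in phase_volumes]
        return int(np.argmax(scores)), scores
    ```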

  16. Piecewise-diffeomorphic image registration: application to the motion estimation between 3D CT lung images with sliding conditions.

    PubMed

    Risser, Laurent; Vialard, François-Xavier; Baluwala, Habib Y; Schnabel, Julia A

    2013-02-01

    In this paper, we propose a new strategy for modelling sliding conditions when registering 3D images in a piecewise-diffeomorphic framework. More specifically, our main contribution is the development of a mathematical formalism to perform Large Deformation Diffeomorphic Metric Mapping registration with sliding conditions. We also show how to adapt this formalism to the LogDemons diffeomorphic registration framework. We finally show how to apply this strategy to estimate the respiratory motion between 3D CT pulmonary images. Quantitative tests are performed on 2D and 3D synthetic images, as well as on real 3D lung images from the MICCAI EMPIRE10 challenge. Results show that our strategy estimates accurate mappings of entire 3D thoracic image volumes that exhibit a sliding motion, as opposed to conventional registration methods which are not capable of capturing discontinuous deformations at the thoracic cage boundary. They also show that although the deformations are not smooth across the location of sliding conditions, they are almost always invertible in the whole image domain. This would be helpful for radiotherapy planning and delivery. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.

  17. Automated assessment of breast tissue density in non-contrast 3D CT images without image segmentation based on a deep CNN

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Kano, Takuya; Koyasu, Hiromi; Li, Shuo; Zhou, Xinxin; Hara, Takeshi; Matsuo, Masayuki; Fujita, Hiroshi

    2017-03-01

    This paper describes a novel approach for the automatic assessment of breast density in non-contrast three-dimensional computed tomography (3D CT) images. The proposed approach trains and uses a deep convolutional neural network (CNN) from scratch to classify breast tissue density directly from CT images without segmenting the anatomical structures, which creates a bottleneck in conventional approaches. Our scheme determines breast density in a 3D breast region by decomposing the 3D region into several radial 2D sections from the nipple, and measuring the distribution of breast tissue densities on each 2D section from different orientations. The whole scheme is designed as a compact network without the need for post-processing and provides high robustness and computational efficiency in clinical settings. We applied this scheme to a dataset of 463 non-contrast CT scans obtained from 30- to 45-year-old women in Japan. The density of breast tissue in each CT scan was assigned to one of four categories (glandular tissue within the breast <25%, 25%-50%, 50%-75%, and >75%) by a radiologist as ground truth. We used 405 CT scans for training a deep CNN and the remaining 58 CT scans for testing the performance. The experimental results demonstrated that the findings of the proposed approach and those of the radiologist were the same in 72% of the CT scans among the training samples and 76% among the testing samples. These results demonstrate the potential use of deep CNN for assessing breast tissue density in non-contrast 3D CT images.
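
    The classifier described above maps 2D breast sections to one of four density categories. A minimal PyTorch sketch of such a four-class CNN; the architecture below is an illustrative stand-in, not the paper's network:

    ```python
    import torch
    import torch.nn as nn

    class DensityNet(nn.Module):
        """Small CNN: one-channel 2D section -> 4 density-category logits."""
        def __init__(self, n_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1))
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):                    # x: (batch, 1, H, W)
            return self.classifier(self.features(x).flatten(1))

    # Example: logits for a batch of eight 128x128 radial sections.
    logits = DensityNet()(torch.randn(8, 1, 128, 128))
    ```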

  18. Computer-aided diagnosis for osteoporosis using chest 3D CT images

    NASA Astrophysics Data System (ADS)

    Yoneda, K.; Matsuhiro, M.; Suzuki, H.; Kawata, Y.; Niki, N.; Nakano, Y.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.

    2016-03-01

    Osteoporosis affects about 13 million people in Japan and is one of the problems of an aging society. In order to prevent osteoporosis, early detection and treatment are necessary. Multi-slice CT technology has been improving three-dimensional (3-D) image analysis with higher body-axis resolution and shorter scan times. The 3-D image analysis of multi-slice CT images of the thoracic vertebrae can be used to support the diagnosis of osteoporosis and, at the same time, for lung cancer diagnosis, which may lead to early detection. We develop an automatic extraction and partitioning algorithm for the spinal column by analyzing vertebral body structure, and an analysis algorithm for the vertebral body using shape analysis and bone density measurement for the diagnosis of osteoporosis. The osteoporosis diagnosis support system obtained a high extraction rate of the thoracic vertebrae at both normal and low doses.

  19. Algorithm of pulmonary emphysema extraction using thoracic 3-D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2008-03-01

    The number of emphysema patients tends to increase due to aging and smoking. Emphysema destroys the alveoli, and the damage cannot be repaired, so early detection is essential. The CT value of lung tissue decreases with the destruction of lung structure, falling below that of normal lung; such low-density absorption regions are referred to as Low Attenuation Areas (LAA). So far, the conventional way of extracting LAA by simple thresholding has been proposed. However, CT values fluctuate with measurement conditions, carrying various bias components such as inspiration, expiration and congestion, so these bias components must be considered in the extraction of LAA. We propose an LAA extraction algorithm that removes these bias components. The algorithm has been applied to a phantom image. Then, using low-dose CT (normal: 30 cases, obstructive lung disease: 26 cases), we extracted early-stage LAA and quantitatively analyzed the lung lobes using lung structure.

  20. Geodesic Distance Algorithm for Extracting the Ascending Aorta from 3D CT Images

    PubMed Central

    Jang, Yeonggul; Jung, Ho Yub; Hong, Youngtaek; Cho, Iksung; Shim, Hackjoon; Chang, Hyuk-Jae

    2016-01-01

    This paper presents a method for the automatic 3D segmentation of the ascending aorta from coronary computed tomography angiography (CCTA). The segmentation is performed in three steps. First, the initial seed points are selected by minimizing a newly proposed energy function across the Hough circles. Second, the ascending aorta is segmented by geodesic distance transformation. Third, the seed points are effectively transferred through the next axial slice by a novel transfer function. Experiments are performed using a database composed of 10 patients' CCTA images. For the experiment, the ground truths are annotated manually on the axial image slices by a medical expert. A comparative evaluation with state-of-the-art commercial aorta segmentation algorithms shows that our approach is computationally more efficient and accurate under the DSC (Dice Similarity Coefficient) measurements. PMID:26904151
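
    The Hough-circle seed selection can be sketched with scikit-image; the paper's energy function over the detected circles is omitted, and the radius range below is an assumed stand-in for typical aortic dimensions in pixels:

    ```python
    import numpy as np
    from skimage.feature import canny
    from skimage.transform import hough_circle, hough_circle_peaks

    def aorta_seed(axial_slice, radii_px=range(12, 25)):
        """Most salient circle on one axial slice: (row, col) center and radius."""
        edges = canny(axial_slice, sigma=2.0)
        accumulator = hough_circle(edges, np.asarray(radii_px))
        _, cx, cy, radii = hough_circle_peaks(
            accumulator, np.asarray(radii_px), total_num_peaks=1)
        return (int(cy[0]), int(cx[0])), int(radii[0])
    ```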

  1. Combining population and patient-specific characteristics for prostate segmentation on 3D CT images

    NASA Astrophysics Data System (ADS)

    Ma, Ling; Guo, Rongrong; Tian, Zhiqiang; Venkataraman, Rajesh; Sarkar, Saradwata; Liu, Xiabi; Tade, Funmilayo; Schuster, David M.; Fei, Baowei

    2016-03-01

    Prostate segmentation on CT images is a challenging task. In this paper, we explore the population and patient-specific characteristics for the segmentation of the prostate on CT images. Because population learning does not consider the inter-patient variations and because patient-specific learning may not perform well for different patients, we combine the population and patient-specific information to improve segmentation performance. Specifically, we train a population model based on the population data and train a patient-specific model based on the manual segmentation on three slices of the new patient. We compute the similarity between the two models to explore the influence of applicable population knowledge on the specific patient. By combining the patient-specific knowledge with the influence, we can capture the population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density value of the pixels in the distance transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients. Manual segmentation results from a radiologist serve as the gold standard for the evaluation. Experimental results show that our method achieved an average DSC of 85.1% as compared to the manual segmentation gold standard. This method outperformed the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy.

  2. Combining Population and Patient-Specific Characteristics for Prostate Segmentation on 3D CT Images

    PubMed Central

    Ma, Ling; Guo, Rongrong; Tian, Zhiqiang; Venkataraman, Rajesh; Sarkar, Saradwata; Liu, Xiabi; Tade, Funmilayo; Schuster, David M.; Fei, Baowei

    2016-01-01

    Prostate segmentation on CT images is a challenging task. In this paper, we explore the population and patient-specific characteristics for the segmentation of the prostate on CT images. Because population learning does not consider the inter-patient variations and because patient-specific learning may not perform well for different patients, we combine the population and patient-specific information to improve segmentation performance. Specifically, we train a population model based on the population data and train a patient-specific model based on the manual segmentation on three slices of the new patient. We compute the similarity between the two models to explore the influence of applicable population knowledge on the specific patient. By combining the patient-specific knowledge with the influence, we can capture the population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density value of the pixels in the distance transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients. Manual segmentation results from a radiologist serve as the gold standard for the evaluation. Experimental results show that our method achieved an average DSC of 85.1% as compared to the manual segmentation gold standard. This method outperformed the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy. PMID:27660382

  3. Automated torso organ segmentation from 3D CT images using conditional random field

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Misawa, Kazunari; Mori, Kensaku

    2016-03-01

    This paper presents a segmentation method for torso organs using a conditional random field (CRF) from medical images. Many methods have been proposed to enable automated extraction of organ regions from volumetric medical images. However, their empirical parameters must be adjusted to obtain precise organ regions. In this paper, we propose an organ segmentation method using structured output learning based on a probabilistic graphical model. The proposed method utilizes a CRF on three-dimensional grids as the probabilistic graphical model, with binary features that represent the relationship between voxel intensities and organ labels. We also optimize the weight parameters of the CRF using a stochastic gradient descent algorithm and estimate organ labels for a given image by maximum a posteriori (MAP) estimation. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 6.6%. The DICE coefficients of the right lung, left lung, heart, liver, spleen, right kidney, and left kidney were 0.94, 0.92, 0.65, 0.67, 0.36, 0.38, and 0.37, respectively.

  4. Automated torso organ segmentation from 3D CT images using structured perceptron and dual decomposition

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Mori, Kensaku

    2015-03-01

    This paper presents a method for torso organ segmentation from abdominal CT images using a structured perceptron and dual decomposition. Many methods have been proposed to enable automated extraction of organ regions from volumetric medical images. However, their empirical parameters must be adjusted to obtain precise organ regions. This paper proposes an organ segmentation method using structured output learning. Our method utilizes a graphical model with binary features that represent the relationship between voxel intensities and organ labels. We also optimize the weights of the graphical model by a structured perceptron and estimate the best organ labels for a given image by dynamic programming and dual decomposition. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 4.4%. The DICE coefficients of the left lung, right lung, heart, liver, spleen, pancreas, left kidney, right kidney, and gallbladder were 0.91, 0.95, 0.77, 0.81, 0.74, 0.08, 0.83, 0.84, and 0.03, respectively.

  5. Combining Population and Patient-Specific Characteristics for Prostate Segmentation on 3D CT Images.

    PubMed

    Ma, Ling; Guo, Rongrong; Tian, Zhiqiang; Venkataraman, Rajesh; Sarkar, Saradwata; Liu, Xiabi; Tade, Funmilayo; Schuster, David M; Fei, Baowei

    2016-02-27

    Prostate segmentation on CT images is a challenging task. In this paper, we explore the population and patient-specific characteristics for the segmentation of the prostate on CT images. Because population learning does not consider the inter-patient variations and because patient-specific learning may not perform well for different patients, we combine the population and patient-specific information to improve segmentation performance. Specifically, we train a population model based on the population data and train a patient-specific model based on the manual segmentation on three slices of the new patient. We compute the similarity between the two models to explore the influence of applicable population knowledge on the specific patient. By combining the patient-specific knowledge with the influence, we can capture the population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density value of the pixels in the distance transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients. Manual segmentation results from a radiologist serve as the gold standard for the evaluation. Experimental results show that our method achieved an average DSC of 85.1% as compared to the manual segmentation gold standard. This method outperformed the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy.

  6. Reconstruction of 4D-CT from a Single Free-Breathing 3D-CT by Spatial-Temporal Image Registration

    PubMed Central

    Wu, Guorong; Wang, Qian; Lian, Jun; Shen, Dinggang

    2011-01-01

    In the radiation therapy of lung cancer, a free-breathing 3D-CT image is usually acquired on the treatment day for image-guided patient setup, by registering with the free-breathing 3D-CT image acquired on the planning day. In this way, the optimal dose plan computed on the planning day can be transferred onto the treatment day for cancer radiotherapy. However, patient setup based on the simple registration of the free-breathing 3D-CT images of the planning and treatment days may mislead the radiotherapy, since the free-breathing 3D-CT is actually a mixed-phase image, with different slices often acquired at different respiratory phases. Moreover, a 4D-CT that is generally acquired on the planning day for improvement of dose planning is often ignored for guiding patient setup on the treatment day. To overcome these limitations, we present a novel two-step method to reconstruct the 4D-CT from a single free-breathing 3D-CT of the treatment day, by utilizing the 4D-CT model built on the planning day. Specifically, in the first step, we propose a new spatial-temporal registration algorithm to align all phase images of the 4D-CT acquired on the planning day, for building a 4D-CT model with temporal correspondences established among all respiratory phases. In the second step, we first determine the optimal phase for each slice of the free-breathing (mixed-phase) 3D-CT of the treatment day by comparing with the 4D-CT of the planning day, and thus obtain a sequence of partial 3D-CT images for the treatment day, each with only incomplete image information in certain slices; we then reconstruct a complete 4D-CT for the treatment day by warping the 4D-CT of the planning day (with complete information) to the sequence of partial 3D-CT images of the treatment day, under the guidance of the 4D-CT model built on the planning day. We have comprehensively evaluated our 4D-CT model building algorithm on a public lung image database, achieving the best registration …

  7. Estimation of aortic valve leaflets from 3D CT images using local shape dictionaries and linear coding

    NASA Astrophysics Data System (ADS)

    Liang, Liang; Martin, Caitlin; Wang, Qian; Sun, Wei; Duncan, James

    2016-03-01

    Aortic valve (AV) disease is a significant cause of morbidity and mortality. The preferred treatment modality for severe AV disease is surgical resection and replacement of the native valve with either a mechanical or tissue prosthetic. In order to develop effective and long-lasting treatment methods, computational analyses, e.g., structural finite element (FE) and computational fluid dynamic simulations, are very effective for studying valve biomechanics. These computational analyses are based on mesh models of the aortic valve, which are usually constructed from 3D CT images through many hours of manual annotation, and therefore an automatic valve shape reconstruction method is desired. In this paper, we present a method for estimating the aortic valve shape from 3D cardiac CT images, which is represented by triangle meshes. We propose a pipeline for aortic valve shape estimation which includes novel algorithms for building local shape dictionaries and for building landmark detectors and curve detectors using local shape dictionaries. The method is evaluated on a real patient image dataset using a leave-one-out approach and achieves an average accuracy of 0.69 mm. The work will facilitate automatic patient-specific computational modeling of the aortic valve.

  8. Tracking time interval changes of pulmonary nodules on follow-up 3D CT images via image-based risk score of lung cancer

    NASA Astrophysics Data System (ADS)

    Kawata, Y.; Niki, N.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.; Moriyama, N.

    2013-03-01

    In this paper, we present a computer-aided follow-up (CAF) scheme to support physicians in tracking interval changes of pulmonary nodules on three-dimensional (3D) CT images and in deciding treatment strategies while avoiding under- or over-treatment. Our scheme involves analyzing CT histograms to evaluate the volumetric distribution of CT values within pulmonary nodules. A variational Bayesian mixture modeling framework translates the image-derived features into an image-based risk score for predicting patient recurrence-free survival. By applying our scheme to follow-up 3D CT images of pulmonary nodules, we demonstrate the potential usefulness of the CAF scheme, which can provide trajectories that characterize time interval changes of pulmonary nodules.
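
    The histogram analysis step can be approximated with a variational Bayesian Gaussian mixture over the CT values inside a segmented nodule; scikit-learn's BayesianGaussianMixture stands in for the paper's mixture framework, and the mapping from mixture features to a risk score is omitted:

    ```python
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    def nodule_histogram_features(nodule_hu, max_components=5):
        """Mixture weights/means/variances of CT values within a nodule."""
        x = np.asarray(nodule_hu, dtype=float).reshape(-1, 1)
        vbgmm = BayesianGaussianMixture(
            n_components=max_components, max_iter=500, random_state=0)
        vbgmm.fit(x)
        return vbgmm.weights_, vbgmm.means_.ravel(), vbgmm.covariances_.ravel()
    ```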

  9. A fast rigid-registration method of inferior limb X-ray image and 3D CT images for TKA surgery

    NASA Astrophysics Data System (ADS)

    Ito, Fumihito; O. D. A, Prima; Uwano, Ikuko; Ito, Kenzo

    2010-03-01

    In this paper, we propose a fast rigid-registration method of inferior limb X-ray films (two-dimensional Computed Radiography (CR) images) and three-dimensional Computed Tomography (CT) images for Total Knee Arthroplasty (TKA) surgery planning. The position of each bone, such as the femur and tibia (shin bone), in X-ray film and 3D CT images is slightly different, and care must be taken in how the two different images are used, since the X-ray film image is captured in the standing position and the 3D CT is captured in the decubitus (face-up) position. Though conventional registration mainly uses a cross-correlation function between two images and utilizes optimization techniques, it takes enormous calculation time and is difficult to use in interactive operations. In order to solve these problems, we calculate the center lines (bone axes) of the femur and tibia (shin bone) automatically and use them as initial positions for the registration. We evaluate our registration method using three patients' image data, and we compare our proposed method with a conventional registration that uses the down-hill simplex algorithm. The down-hill simplex method is an optimization algorithm that requires only function evaluations and does not need the calculation of derivatives. Our registration method is more effective than the down-hill simplex method in computational time and convergence stability. We have developed an implant simulation system on a personal computer in order to support the surgeon in preoperative planning of TKA. Our registration method is implemented in the simulation system, and the user can manipulate 2D/3D translucent templates of implant components on X-ray film and 3D CT images.

  10. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method.

    PubMed

    Zhou, Xiangrong; Takayama, Ryosuke; Wang, Song; Hara, Takeshi; Fujita, Hiroshi

    2017-07-21

    We propose a single network trained by pixel-to-label deep learning to address the general issue of automatic multiple organ segmentation in three-dimensional (3D) computed tomography (CT) images. Our method can be described as a voxel-wise multiple-class classification scheme for automatically assigning labels to each pixel/voxel in a 2D/3D CT image. We simplify the segmentation algorithms of anatomical structures (including multiple organs) in a CT image (generally in 3D) to a majority voting scheme over the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. The proposed method inherits the spirit of fully convolutional networks (FCNs) that consist of "convolution" and "deconvolution" layers for 2D semantic image segmentation, and expands the core structure with 3D-2D-3D transformations to adapt to 3D CT image segmentation. All parameters in the proposed network are trained pixel-to-label from a small number of CT cases with human annotations as the ground truth. The proposed network naturally fulfills the requirements of multiple organ segmentation in CT cases of different sizes that cover arbitrary scan regions without any adjustment. The proposed network was trained and validated using the simultaneous segmentation of 19 anatomical structures in the human torso, including 17 major organs and two special regions (the lumen and contents of the stomach). Some of these structures have never been reported in previous research on CT segmentation. A database consisting of 240 (95% for training and 5% for testing) 3D CT scans, together with their manually annotated ground-truth segmentations, was used in our experiments. The results show that the 19 structures of interest were segmented with acceptable accuracy (88.1% and 87.9% of voxels in the training and testing datasets, respectively, were labeled correctly) against the ground truth. We propose a single network based on pixel-to-label deep learning to address the challenging …
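
    The voting stage described above fuses the per-orientation 2D segmentation results back in the 3D grid. A minimal sketch of the majority vote itself, assuming each input is an integer label volume of identical shape (e.g., the FCN outputs from axial, coronal, and sagittal slicing remapped to 3D):

    ```python
    import numpy as np

    def fuse_by_majority(label_volumes):
        """Voxel-wise majority vote over several label volumes."""
        stack = np.stack(label_volumes)            # (n_views, Z, Y, X)
        n_labels = int(stack.max()) + 1
        votes = np.stack([(stack == lab).sum(axis=0)
                          for lab in range(n_labels)])
        return votes.argmax(axis=0).astype(stack.dtype)
    ```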

  11. An Optimized Spline-Based Registration of a 3D CT to a Set of C-Arm Images.

    PubMed

    Jonić, S; Thévenaz, P; Zheng, G; Nolte, L-P; Unser, M

    2006-01-01

    We have developed an algorithm for the rigid-body registration of a CT volume to a set of C-arm images. The algorithm uses a gradient-based iterative minimization of a least-squares measure of dissimilarity between the C-arm images and projections of the CT volume. To compute projections, we use a novel method for fast integration of the volume along rays. To improve robustness and speed, we take advantage of a coarse-to-fine processing of the volume/image pyramids. To compute the projections of the volume, the gradient of the dissimilarity measure, and the multiresolution data pyramids, we use a continuous image/volume model based on cubic B-splines, which ensures a high interpolation accuracy and a gradient of the dissimilarity measure that is well defined everywhere. We show the performance of our algorithm on a human spine phantom, where the true alignment is determined using a set of fiducial markers.

  12. An Optimized Spline-Based Registration of a 3D CT to a Set of C-Arm Images

    PubMed Central

    Thévenaz, P.; Zheng, G.; Nolte, L. -P.; Unser, M.

    2006-01-01

    We have developed an algorithm for the rigid-body registration of a CT volume to a set of C-arm images. The algorithm uses a gradient-based iterative minimization of a least-squares measure of dissimilarity between the C-arm images and projections of the CT volume. To compute projections, we use a novel method for fast integration of the volume along rays. To improve robustness and speed, we take advantage of a coarse-to-fine processing of the volume/image pyramids. To compute the projections of the volume, the gradient of the dissimilarity measure, and the multiresolution data pyramids, we use a continuous image/volume model based on cubic B-splines, which ensures a high interpolation accuracy and a gradient of the dissimilarity measure that is well defined everywhere. We show the performance of our algorithm on a human spine phantom, where the true alignment is determined using a set of fiducial markers. PMID:23165033

  13. A case of boomerang dysplasia with a novel causative mutation in filamin B: identification of typical imaging findings on ultrasonography and 3D-CT imaging.

    PubMed

    Tsutsumi, Seiji; Maekawa, Ayako; Obata, Miyuki; Morgan, Timothy; Robertson, Stephen P; Kurachi, Hirohisa

    2012-01-01

    Boomerang dysplasia is a rare lethal osteochondrodysplasia characterized by disorganized mineralization of the skeleton, leading to complete nonossification of some limb bones and vertebral elements, and a boomerang-like aspect to some of the long tubular bones. As with many short-limbed skeletal dysplasias with accompanying thoracic hypoplasia, the potential lethality of the phenotype can be difficult to ascertain prenatally. We report a case of boomerang dysplasia prenatally diagnosed by ultrasonography and 3D-CT imaging, and we identified a novel mutation in the gene encoding the cytoskeletal protein filamin B (FLNB) postmortem. Findings that aided the radiological diagnosis of this condition in utero included absent ossification of two out of three long bones in each limb and of elements of the vertebrae, and a boomerang-like shape to the ulnae. The identified mutation is the third described for this disorder and is predicted to lead to an amino acid substitution in the actin-binding domain of the filamin B molecule. Copyright © 2012 S. Karger AG, Basel.

  14. Development of a 3D CT scanner using cone beam

    NASA Astrophysics Data System (ADS)

    Endo, Masahiro; Kamagata, Nozomu; Sato, Kazumasa; Hattori, Yuichi; Kobayashi, Shigeo; Mizuno, Shinichi; Jimbo, Masao; Kusakabe, Masahiro

    1995-05-01

    In order to acquire 3D data of high-contrast objects such as bone, lung, and vessels enhanced by contrast media for use in 3D image processing, we have developed a 3D CT scanner using cone-beam x ray. The 3D CT scanner consists of a gantry and a patient couch. The gantry consists of an x-ray tube designed for cone-beam CT and a large-area two-dimensional detector, mounted on a single frame and rotated around an object in 12 seconds. The large-area detector consists of a fluorescent plate and a charge-coupled-device video camera. The size of the detection area was 600 mm × 450 mm, capable of covering the whole chest. While the x-ray tube was rotated around an object, pulsed x rays were emitted 30 times a second and 360 projection images were collected in a 12-second scan. A 256 × 256 × 256 matrix image (1.25 mm × 1.25 mm × 1.25 mm voxels) was reconstructed by a high-speed reconstruction engine. Reconstruction time was approximately 6 minutes. Cylindrical water phantoms, anesthetized rabbits with or without contrast media, and a Japanese macaque were scanned with the 3D CT scanner. The results seem promising because they show high spatial resolution in three directions, though several points remain to be improved. Possible improvements are discussed.

  15. Efficient and robust 3D CT image reconstruction based on total generalized variation regularization using the alternating direction method.

    PubMed

    Chen, Jianlin; Wang, Linyuan; Yan, Bin; Zhang, Hanming; Cheng, Genyang

    2015-01-01

    Iterative reconstruction algorithms for computed tomography (CT) through total variation regularization based on a piecewise constant assumption can produce accurate, robust, and stable results. Nonetheless, this approach is often subject to staircase artifacts and the loss of fine details. To overcome these shortcomings, we introduce a family of novel image regularization penalties called total generalized variation (TGV) for the effective production of high-quality images from incomplete or noisy projection data for 3D reconstruction. We propose a new, fast alternating direction minimization algorithm to solve CT image reconstruction problems through TGV regularization. Based on the theory of sparse-view image reconstruction and the framework of the augmented Lagrangian method, the TGV regularization term is introduced into the CT reconstruction problem and split into three independent optimization variables by introducing auxiliary variables. This new algorithm applies a local linearization and proximity technique to make the FFT-based calculation of the analytical solutions in the frequency domain feasible, thereby significantly reducing the complexity of the algorithm. Experiments with various 3D datasets corresponding to incomplete projection data demonstrate the advantage of our proposed algorithm in terms of preserving fine details and overcoming the staircase effect. The computation cost also suggests that the proposed algorithm is applicable to, and effective for, CBCT imaging. Theoretical and technical optimization should be investigated carefully in terms of both computational efficiency and the high resolution of this algorithm in application-oriented research.

  16. Sparse representation-based volumetric super-resolution algorithm for 3D CT images of reservoir rocks

    NASA Astrophysics Data System (ADS)

    Li, Zhengji; Teng, Qizhi; He, Xiaohai; Yue, Guihua; Wang, Zhengyong

    2017-09-01

    The parameter evaluation of reservoir rocks can help us to identify components and calculate permeability and other parameters, and it plays an important role in the petroleum industry. Until now, computed tomography (CT) has remained an irreplaceable way to acquire the microstructure of reservoir rocks. During evaluation and analysis, large samples and high-resolution images are required in order to obtain accurate results. Owing to the inherent limitations of CT, however, a large field of view results in low-resolution images, and high-resolution images entail a smaller field of view. Our method is a promising solution to these data collection limitations. In this study, a framework for sparse representation-based 3D volumetric super-resolution is proposed to enhance the resolution of 3D voxel images of reservoirs scanned with CT. A single reservoir structure and its downgraded model are divided into a large number of 3D cubes of voxel pairs, and these cube pairs are used to calculate two overcomplete dictionaries and the sparse-representation coefficients in order to estimate the high-frequency component. Furthermore, to improve the results, a new feature-extraction method combining BM4D with a Laplacian filter is introduced. In addition, we conducted a visual evaluation of the method, and used the PSNR and FSIM to evaluate it quantitatively.

  17. ACM-based automatic liver segmentation from 3-D CT images by combining multiple atlases and improved mean-shift techniques.

    PubMed

    Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan

    2013-05-01

    In this paper, we present an autocontext model (ACM)-based automatic liver segmentation algorithm, which combines ACM, multi-atlases, and mean-shift techniques to segment the liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image is segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result is obtained by fusing segmentation results from all atlas spaces via a multi-classifier fusion technique. In particular, in order to speed up segmentation, given a test image we first use an improved mean-shift algorithm to perform over-segmentation and then implement region-based image labeling instead of the original inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.

  18. Automatic segmentation of colon in 3D CT images and removal of opacified fluid using cascade feed forward neural network.

    PubMed

    Gayathri Devi, K; Radhakrishnan, R

    2015-01-01

    Colon segmentation is an essential step in the development of computer-aided diagnosis systems based on computed tomography (CT) images. Detection of polyps lying on the walls of the colon is much needed in medical imaging for the diagnosis of colorectal cancer. The proposed work is focused on designing an efficient automatic colon segmentation algorithm from abdominal slices containing colons, partial volume effect, bowels, and lungs. The challenge lies in determining the exact colon in slices affected by the partial volume effect. In this work, an adaptive thresholding technique is proposed for the segmentation of air pockets; a machine-learning-based cascade feed-forward neural network, enhanced with boundary detection algorithms, is used to differentiate lung segments and the fluids sedimented at the side wall of the colon, and bowels are rejected by a slice-difference removal method. The proposed neural network is trained with a Bayesian regularization algorithm to determine the partial volume effect. Experiments were conducted on CT database images, resulting in 98% accuracy and a minimal error rate. The main contribution of this work is the exploitation of a neural network algorithm for the removal of opacified fluid to attain the desired colon segmentation result.

  19. Improving image-guided radiation therapy of lung cancer by reconstructing 4D-CT from a single free-breathing 3D-CT on the treatment day.

    PubMed

    Wu, Guorong; Lian, Jun; Shen, Dinggang

    2012-12-01

    One of the major challenges of lung cancer radiation therapy is how to reduce the margin of the treatment field while also managing geometric uncertainty from respiratory motion. To this end, 4D-CT imaging has been widely used for treatment planning by providing a full range of respiratory motion for both tumor and normal structures. However, due to the considerable radiation dose and limits on resources and time, typically only a free-breathing 3D-CT image is acquired on the treatment day for image-guided patient setup, which is often determined by the image fusion of the free-breathing treatment-day and planning-day 3D-CT images. Since individual slices of two free-breathing 3D-CTs may be acquired at different phases, the two 3D-CTs often look different, which makes the image registration very challenging. This uncertainty of pretreatment patient setup requires a generous margin of the radiation field in order to cover the tumor sufficiently during the treatment. In order to solve this problem, our main idea is to reconstruct the 4D-CT (with full range of tumor motion) from a single free-breathing 3D-CT acquired on the treatment day. We first build a super-resolution 4D-CT model from a low-resolution 4D-CT on the planning day, with the temporal correspondences also established across respiratory phases. Next, we propose a 4D-to-3D image registration method to warp the 4D-CT model to the treatment-day 3D-CT while also accommodating the new motion detected on the treatment-day 3D-CT. In this way, we can more precisely localize the moving tumor on the treatment day. Specifically, since the free-breathing 3D-CT is actually a mixed-phase image where different slices are often acquired at different respiratory phases, we first determine the optimal phase for each local image patch in the free-breathing 3D-CT to obtain a sequence of partial 3D-CT images (with incomplete image data at each phase) for the treatment day. Then we reconstruct a new 4D-CT for the treatment day by

  20. Automatic organ localizations on 3D CT images by using majority-voting of multiple 2D detections based on local binary patterns and Haar-like features

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Yamaguchi, Shoutarou; Zhou, Xinxin; Chen, Huayue; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2013-02-01

    This paper describes an approach to accomplish fast and automatic localization of the different inner organ regions on 3D CT scans. The proposed approach combines object detection and the majority-voting technique to achieve robust and quick organ localization. The basic idea of the proposed method is to detect a number of 2D partial appearances of a 3D target region on CT images from multiple body directions, on multiple image scales, and in multiple feature spaces, and to vote all the 2D detection results back into the 3D image space to statistically decide one 3D bounding rectangle of the target organ. Ensemble learning was used to train the multiple 2D detectors based on template matching in local binary pattern and Haar-like feature spaces. A collaborative vote was used to decide the corner coordinates of the 3D bounding rectangle of the target organ region based on the coordinate histograms from detection results in three body directions. Since the architecture of the proposed method (multiple independent detections connected to a majority vote) naturally fits the parallel computing paradigm and multi-core CPU hardware, the proposed algorithm easily achieves high computational efficiency for organ localization on a whole-body CT scan using general-purpose computers. We applied this approach to the localization of 12 kinds of major organ regions independently on 1,300 torso CT scans. In our experiments, we randomly selected 300 CT scans (with human-indicated organ and tissue locations) for training and then applied the proposed approach with the training results to localize each of the target regions on the other 1,000 CT scans for performance testing. The experimental results showed the potential of the proposed approach to automatically locate different kinds of organs on whole-body CT scans.
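
    The collaborative vote above picks bounding-box corners from coordinate histograms; a minimal sketch of that idea for a single corner coordinate, with invented detection proposals:

```python
import numpy as np

def vote_coordinate(proposals: np.ndarray, n_bins: int) -> int:
    """Return the histogram peak among integer coordinate proposals from 2D detections."""
    hist = np.bincount(proposals, minlength=n_bins)  # coordinate histogram
    return int(hist.argmax())                        # majority-voted coordinate

# Invented z-coordinate proposals for one corner of a 3D bounding rectangle,
# gathered from axial, coronal, and sagittal 2D detections.
z_proposals = np.array([41, 42, 42, 43, 42, 40, 42])
print(vote_coordinate(z_proposals, n_bins=128))  # -> 42
```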

  1. Significance of functional hepatic resection rate calculated using 3D CT/99mTc-galactosyl human serum albumin single-photon emission computed tomography fusion imaging

    PubMed Central

    Tsuruga, Yosuke; Kamiyama, Toshiya; Kamachi, Hirofumi; Shimada, Shingo; Wakayama, Kenji; Orimo, Tatsuya; Kakisaka, Tatsuhiko; Yokoo, Hideki; Taketomi, Akinobu

    2016-01-01

    AIM: To evaluate the usefulness of the functional hepatic resection rate (FHRR) calculated using 3D computed tomography (CT)/99mTc-galactosyl-human serum albumin (GSA) single-photon emission computed tomography (SPECT) fusion imaging for surgical decision making. METHODS: We enrolled 57 patients who underwent bi- or trisectionectomy at our institution between October 2013 and March 2015. Of these, 26 patients presented with hepatocellular carcinoma, 12 with hilar cholangiocarcinoma, six with intrahepatic cholangiocarcinoma, four with liver metastasis, and nine with other diseases. All patients preoperatively underwent three-phase dynamic multidetector CT and 99mTc-GSA scintigraphy. We compared the parenchymal hepatic resection rate (PHRR) with the FHRR, which was defined as the resection volume counts per total liver volume counts on 3D CT/99mTc-GSA SPECT fusion images. RESULTS: In total, 50 patients underwent bisectionectomy and seven underwent trisectionectomy. Biliary reconstruction was performed in 15 patients, including hepatopancreatoduodenectomy in two. FHRR and PHRR were 38.6 ± 19.9 and 44.5 ± 16.0, respectively; FHRR was strongly correlated with PHRR. The regression coefficient for FHRR on PHRR was 1.16 (P < 0.0001). The ratio of FHRR to PHRR for patients with preoperative therapies (transcatheter arterial chemoembolization, radiation, radiofrequency ablation, etc.), large tumors with a volume of > 1000 mL, and/or macroscopic vascular invasion was significantly smaller than that for patients without these factors (0.73 ± 0.19 vs 0.82 ± 0.18, P < 0.05). Postoperative hyperbilirubinemia was observed in six patients. Major morbidities (Clavien-Dindo grade ≥ 3) occurred in 17 patients (29.8%). There was no case of surgery-related death. CONCLUSION: Our results suggest that FHRR is an important deciding factor for major hepatectomy, because FHRR and PHRR may be discrepant owing to insufficient hepatic inflow and congestion in patients with preoperative

  2. Significance of functional hepatic resection rate calculated using 3D CT/(99m)Tc-galactosyl human serum albumin single-photon emission computed tomography fusion imaging.

    PubMed

    Tsuruga, Yosuke; Kamiyama, Toshiya; Kamachi, Hirofumi; Shimada, Shingo; Wakayama, Kenji; Orimo, Tatsuya; Kakisaka, Tatsuhiko; Yokoo, Hideki; Taketomi, Akinobu

    2016-05-07

    To evaluate the usefulness of the functional hepatic resection rate (FHRR) calculated using 3D computed tomography (CT)/(99m)Tc-galactosyl-human serum albumin (GSA) single-photon emission computed tomography (SPECT) fusion imaging for surgical decision making. We enrolled 57 patients who underwent bi- or trisectionectomy at our institution between October 2013 and March 2015. Of these, 26 patients presented with hepatocellular carcinoma, 12 with hilar cholangiocarcinoma, six with intrahepatic cholangiocarcinoma, four with liver metastasis, and nine with other diseases. All patients preoperatively underwent three-phase dynamic multidetector CT and (99m)Tc-GSA scintigraphy. We compared the parenchymal hepatic resection rate (PHRR) with the FHRR, which was defined as the resection volume counts per total liver volume counts on 3D CT/(99m)Tc-GSA SPECT fusion images. In total, 50 patients underwent bisectionectomy and seven underwent trisectionectomy. Biliary reconstruction was performed in 15 patients, including hepatopancreatoduodenectomy in two. FHRR and PHRR were 38.6 ± 19.9 and 44.5 ± 16.0, respectively; FHRR was strongly correlated with PHRR. The regression coefficient for FHRR on PHRR was 1.16 (P < 0.0001). The ratio of FHRR to PHRR for patients with preoperative therapies (transcatheter arterial chemoembolization, radiation, radiofrequency ablation, etc.), large tumors with a volume of > 1000 mL, and/or macroscopic vascular invasion was significantly smaller than that for patients without these factors (0.73 ± 0.19 vs 0.82 ± 0.18, P < 0.05). Postoperative hyperbilirubinemia was observed in six patients. Major morbidities (Clavien-Dindo grade ≥ 3) occurred in 17 patients (29.8%). There was no case of surgery-related death. Our results suggest that FHRR is an important deciding factor for major hepatectomy, because FHRR and PHRR may be discrepant owing to insufficient hepatic inflow and congestion in patients with preoperative therapies, macroscopic vascular
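
    Both records above define FHRR as resection-volume counts over total-liver counts on the fusion image, mirrored by PHRR computed from plain CT volumes. The arithmetic, with invented numbers chosen to show the discrepancy the authors describe:

```python
# Invented SPECT counts and CT volumes for one illustrative patient.
resection_counts, total_counts = 2.8e6, 8.0e6    # 99mTc-GSA counts on the fusion image
resection_volume, total_volume = 520.0, 1300.0   # parenchymal volumes in mL on CT

fhrr = 100.0 * resection_counts / total_counts   # functional hepatic resection rate, %
phrr = 100.0 * resection_volume / total_volume   # parenchymal hepatic resection rate, %
print(fhrr, phrr, fhrr / phrr)  # 35.0 40.0 0.875 -- FHRR < PHRR when inflow is impaired
```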

  3. New 3D Bolton standards: coregistration of biplane x rays and 3D CT

    NASA Astrophysics Data System (ADS)

    Dean, David; Subramanyan, Krishna; Kim, Eun-Kyung

    1997-04-01

    The Bolton Standards 'normative' cohort (16 males, 16 females) has been invited back to the Bolton-Brush Growth Study Center for new biorthogonal plain-film head x-rays and 3D (three-dimensional) head CT scans. A set of 29 3D landmarks was identified on both their biplane head films and 3D CT images. The current 3D CT image is then superimposed onto the landmarks collected from the current biplane head films. Three post-doctoral fellows have collected 37 3D landmarks from the Bolton Standards' 40-70 year old biplane head films. These films were captured annually during their growing period (ages 3-18). Using 29 of these landmarks, the current 3D CT image is next warped (via thin-plate spline) to landmarks taken from each participant's 18th-year biplane head films, a process that is successively reiterated back to age 3. This process is demonstrated here for one of the Bolton Standards. The outer skull surfaces will be extracted from each warped 3D CT image and an average will be generated for each age/sex group. The resulting longitudinal series of average 'normative' bony skull surface images may be useful for craniofacial patient diagnosis, treatment planning, stereotactic procedures, and outcomes assessment.
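
    The warp above is a thin-plate-spline landmark interpolation; a minimal sketch using SciPy's RBFInterpolator (whose default kernel is the thin plate spline; SciPy >= 1.7 assumed) with invented landmark coordinates:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
src = rng.uniform(0.0, 100.0, size=(29, 3))        # landmarks in the current 3D CT, mm
dst = src + rng.normal(0.0, 2.0, size=src.shape)   # matching landmarks from an earlier film

warp = RBFInterpolator(src, dst, kernel="thin_plate_spline")  # smooth 3D deformation
print(warp(src[:3]))  # maps CT-space points toward the earlier anatomy
```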

  4. Registration of 2D C-Arm and 3D CT Images for a C-Arm Image-Assisted Navigation System for Spinal Surgery

    PubMed Central

    Chang, Chih-Ju; Lin, Geng-Li; Tse, Alex; Chu, Hong-Yu; Tseng, Ching-Shiow

    2015-01-01

    C-Arm image-assisted surgical navigation systems have been broadly applied to spinal surgery. However, accurate path planning on the C-Arm AP-view image is difficult. This research studies 2D-3D image registration methods to obtain the optimum transformation matrix between the C-Arm and CT image frames. Through the transformation matrix, the surgical path planned on preoperative CT images can be transformed and displayed on the C-Arm images for surgical guidance. The positions of surgical instruments will also be displayed on both CT and C-Arm images in real time. Five similarity measures for 2D-3D image registration, including Normalized Cross-Correlation, Gradient Correlation, Pattern Intensity, Gradient Difference Correlation, and Mutual Information, combined with three optimization methods, including Powell's method, the Downhill simplex algorithm, and a genetic algorithm, are evaluated for their performance in convergence range, efficiency, and accuracy. Experimental results show that the combination of the Normalized Cross-Correlation measure with the Downhill simplex algorithm obtains maximum correlation and similarity between C-Arm and Digitally Reconstructed Radiograph (DRR) images. Spine sawbones are used in the experiment to evaluate 2D-3D image registration accuracy. The average error in displacement is 0.22 mm. The success rate is approximately 90% and the average registration time is 16 seconds. PMID:27018859
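
    The winning combination reported above is Normalized Cross-Correlation driven by the Downhill simplex optimizer; a minimal sketch of that pairing, where render_drr(pose) is an assumed caller-supplied DRR renderer for a 6-DOF pose (not part of the paper):

```python
import numpy as np
from scipy.optimize import minimize

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two images of equal shape."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def register_2d3d(render_drr, carm_image: np.ndarray, pose0: np.ndarray) -> np.ndarray:
    """Search for the 6-DOF pose whose DRR best matches the C-Arm image."""
    cost = lambda pose: -ncc(render_drr(pose), carm_image)  # maximize NCC
    result = minimize(cost, pose0, method="Nelder-Mead")    # Downhill simplex search
    return result.x
```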

  5. A positioning QA procedure for 2D/2D (kV/MV) and 3D/3D (CT/CBCT) image matching for radiotherapy patient setup.

    PubMed

    Guan, Huaiqun; Hammoud, Rabih; Yin, Fang-Fang

    2009-10-06

    A positioning QA procedure for Varian's 2D/2D (kV/MV) and 3D/3D (planCT/CBCT) matching was developed. The procedure was to check: (1) the coincidence of the on-board imager (OBI), portal imager (PI), and cone beam CT (CBCT) isocenters (digital graticules) with the linac's isocenter (to a pre-specified accuracy); (2) that the positioning difference detected by 2D/2D (kV/MV) and 3D/3D (planCT/CBCT) matching can be reliably transferred to couch motion. A cube phantom with a 2 mm metal ball (bb) at the center was used. The bb was used to define the isocenter. Two additional bbs were placed on two phantom surfaces in order to define a spatial location 1.5 cm anterior, 1.5 cm inferior, and 1.5 cm right of the isocenter. An axial scan of the phantom was acquired on a multislice CT simulator. The phantom was set at the linac's isocenter (lasers); either AP MV/R Lat kV images or CBCT images were taken for 2D/2D or 3D/3D matching, respectively. For 2D/2D, the accuracy of each device's isocenter was obtained by checking the distance between the central bb and the digital graticule. Then the central bb in orthogonal DRRs was manually moved to overlay the off-axis bbs in the kV/MV images. For 3D/3D, the CBCT was first matched to the planCT to check the isocenter difference between the two CTs. Manual shifts were then made by moving the CBCT such that the point defined by the two off-axis bbs overlaid the central bb in the planCT. (The planCT cannot be moved in the current version of OBI 1.4.) The manual shifts were then applied to remotely move the couch. The room laser was used to check the accuracy of the couch movement. For Trilogy (or Ix-21) linacs, the coincidence of the imager and linac isocenters was better than 1 mm (or 1.5 mm). The couch shift accuracy was better than 2 mm.

  6. A universal approach for automatic organ segmentations on 3D CT images based on organ localization and 3D GrabCut

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Ito, Takaaki; Zhou, Xinxin; Chen, Huayue; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi

    2014-03-01

    This paper describes a universal approach to automatic segmentation of different internal organ and tissue regions in three-dimensional (3D) computerized tomography (CT) scans. The proposed approach combines object localization, a probabilistic atlas, and 3D GrabCut techniques to achieve automatic and quick segmentation. The proposed method first detects a tight 3D bounding box that contains the target organ region in CT images and then estimates the prior of each pixel inside the bounding box belonging to the organ region or background based on a dynamically generated probabilistic atlas. Finally, the target organ region is separated from the background by using an improved 3D GrabCut algorithm. A machine-learning method is used to train a detector to localize the 3D bounding box of the target organ using template matching on a selected feature space. A content-based image retrieval method is used for online generation of a patient-specific probabilistic atlas for the target organ based on a database. A 3D GrabCut algorithm is used for final organ segmentation by iteratively estimating the CT number distributions of the target organ and backgrounds using a graph-cuts algorithm. We applied this approach to localize and segment twelve major organ and tissue regions independently based on a database that includes 1300 torso CT scans. In our experiments, we randomly selected numerous CT scans and manually input nine principal types of inner organ regions for performance evaluation. Preliminary results showed the feasibility and efficiency of the proposed approach for addressing automatic organ segmentation issues on CT images.

  7. Method and phantom to study combined effects of in-plane (x,y) and z-axis resolution for 3D CT imaging.

    PubMed

    Goodenough, David; Levy, Josh; Kristinsson, Smari; Fredriksson, Jesper; Olafsdottir, Hildur; Healy, Austin

    2016-09-01

    Increasingly, the advent of multislice CT scanners, volume CT scanners, and total-body spiral acquisition modes has led to the use of multiplanar reconstruction and 3D datasets. In considering the 3D resolution properties of a CT system, it is important to note that both the in-plane (x,y) resolution and the z-axis resolution (slice thickness) influence the visualization and detection of objects within the scanned volume. This study investigates ways to consider both the in-plane resolution and the z-axis resolution in a single phantom wherein analytic or visual analysis can yield information on these combined effects. A new phantom called the "Wave Phantom" is developed that can be used to sample the 3D resolution properties of a CT image, including in-plane (x,y) and z-axis information. The key development in this Wave Phantom is the incorporation of a z-axis aspect into a more traditional step (bar) resolution gauge phantom. The phantom can be examined visually, wherein a cutoff level may be seen; and/or various characteristics of the waveform profile, including the amplitude, frequency, and slope (rate of climb) of the peaks, can be extracted from the wave pattern using mathematical analysis such as the Fourier transform. The combined effect of changes in in-plane resolution and z-axis thickness is shown, as well as the effect of changes in either in-plane resolution or z-axis thickness alone. Examples are shown of visual images of the wave pattern, as well as the analytic characteristics of the various harmonics of a periodic wave pattern resulting from changes in resolution filter and/or slice thickness and position in the field of view. The Wave Phantom offers a promising way to investigate the 3D resolution resulting from the combined effect of in-plane (x,y) and z-axis resolution, as contrasted with the use of simple 2D resolution gauges that must be used with separate measures of z-axis dependency, such as angled ramps. It offers both a visual pattern as well as a
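
    The analytic path described above reads harmonic amplitudes out of a profile across the wave pattern via the Fourier transform; a minimal sketch on a synthetic profile (4 cycles across 256 samples):

```python
import numpy as np

profile = np.sin(2 * np.pi * 4 * np.linspace(0.0, 1.0, 256, endpoint=False))
amplitudes = 2.0 * np.abs(np.fft.rfft(profile)) / profile.size  # one-sided amplitude spectrum
print(amplitudes.argmax(), round(float(amplitudes.max()), 3))   # -> 4 1.0 (the fundamental)
```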

  8. Method and phantom to study combined effects of in-plane (x,y) and z-axis resolution for 3D CT imaging.

    PubMed

    Goodenough, David; Levy, Josh; Kristinsson, Smari; Fredriksson, Jesper; Olafsdottir, Hildur; Healy, Austin

    2016-09-08

    Increasingly, the advent of multislice CT scanners, volume CT scanners, and total-body spiral acquisition modes has led to the use of multiplanar reconstruction and 3D datasets. In considering the 3D resolution properties of a CT system, it is important to note that both the in-plane (x,y) resolution and the z-axis resolution (slice thickness) influence the visualization and detection of objects within the scanned volume. This study investigates ways to consider both the in-plane resolution and the z-axis resolution in a single phantom wherein analytic or visual analysis can yield information on these combined effects. A new phantom called the "Wave Phantom" is developed that can be used to sample the 3D resolution properties of a CT image, including in-plane (x,y) and z-axis information. The key development in this Wave Phantom is the incorporation of a z-axis aspect into a more traditional step (bar) resolution gauge phantom. The phantom can be examined visually, wherein a cutoff level may be seen; and/or various characteristics of the waveform profile, including the amplitude, frequency, and slope (rate of climb) of the peaks, can be extracted from the wave pattern using mathematical analysis such as the Fourier transform. The combined effect of changes in in-plane resolution and z-axis thickness is shown, as well as the effect of changes in either in-plane resolution or z-axis thickness alone. Examples are shown of visual images of the wave pattern, as well as the analytic characteristics of the various harmonics of a periodic wave pattern resulting from changes in resolution filter and/or slice thickness and position in the field of view. The Wave Phantom offers a promising way to investigate the 3D resolution resulting from the combined effect of in-plane (x,y) and z-axis resolution, as contrasted with the use of simple 2D resolution gauges that must be used with separate measures of z-axis dependency, such as angled ramps. It offers both a visual pattern as well as a

  9. Reliability of the Planned Pedicle Screw Trajectory versus the Actual Pedicle Screw Trajectory using Intra-operative 3D CT and Image Guidance

    PubMed Central

    Ledonio, Charles G.; Hunt, Matthew A.; Siddiq, Farhan; Polly, David W.

    2016-01-01

    Background Technological advances, including navigation, have been made to improve the safety and accuracy of pedicle screw fixation. We evaluated the accuracy of virtual screw placement (Stealth projection) compared to actual screw placement (intra-operative O-Arm) and examined for differences based on the distance from the reference frame. Methods A retrospective evaluation of prospectively collected data was conducted from January 2013 to September 2013. We evaluated thoracic and lumbosacral pedicle screws placed using intraoperative O-arm and Stealth navigation by obtaining virtual screw projections and intraoperative O-arm images after screw placement. The screw trajectory angle to the midsagittal line and superior endplate was compared in the axial and sagittal views, respectively. Percent error and paired t-test statistics were then performed. Results Thirty-one patients with 240 pedicle screws were analyzed. The mean angular difference between the virtual and actual images for all screws was 2.17° ± 2.20° on axial images and 2.16° ± 2.24° on sagittal images. There was excellent agreement between actual and virtual pedicle screw trajectories in the axial and sagittal planes, with ICC = 0.99 (95%CI: 0.992-0.995) (p<0.001) and ICC = 0.81 (95%CI: 0.759-0.855) (p<0.001), respectively. When comparing thoracic and lumbar screws, there was a significant difference in sagittal angulation between the two distributions. No statistically significant differences were found based on distance from the reference frame. Conclusion The virtual projection view is clinically accurate compared to the actual placement on intra-operative CT in both the axial and sagittal views. There is slight imprecision (~2°) in the axial and sagittal planes and a minor difference in the sagittal thoracic and lumbar angulation, although these did not affect clinical outcomes. In general, we find pedicle screw placement using intraoperative cone beam CT and navigation to be accurate and reliable, and as such
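
    The trajectory comparison above reduces to paired angle statistics; a minimal sketch of the mean angular difference and paired t-test on invented virtual/actual angles:

```python
import numpy as np
from scipy import stats

virtual = np.array([12.1, 8.4, 15.0, 10.2, 6.7])  # degrees, invented Stealth projections
actual = np.array([11.5, 9.0, 14.1, 10.9, 7.3])   # degrees, invented O-Arm measurements

diff = np.abs(virtual - actual)
print(diff.mean(), diff.std(ddof=1))     # mean +/- SD angular difference
print(stats.ttest_rel(virtual, actual))  # paired t-test across screws
```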

  10. [Spiral computerized tomography with tridimensional reconstruction (spiral 3D CT) in the study of maxillofacial pathology].

    PubMed

    Mevio, E; Calabrò, P; Preda, L; Di Maggio, E M; Caprotti, A

    1995-12-01

    Three-dimensional computer reconstructions of CT scans provide head and neck surgeons with an exciting interactive display of clinical anatomy. The 3D CT reconstruction of complex maxillofacial anatomic parts permits more specific preoperative analysis and surgical planning. Its delineation of disease extension aids the surgeon in developing a mental three-dimensional image of the regional morphology. Three-dimensional CT permits a clearer perception of the extent of fracture comminution and the resulting displacement of fragments. In the case of maxillofacial tumors, 3D images provide a very clear picture of the extent of erosion involving the adjacent critical organs. Three-dimensional imaging on first-generation 3D scanners did have some limitations, such as long reconstruction times and inadequate resolution. Subsequent generations, in particular spiral 3D CT, have eliminated these drawbacks. Furthermore, costs are comparable with those of other computer reconstruction technologies that might provide similar images. Representative cases demonstrating the use of 3D CT in maxillofacial surgery and its benefits in planning surgery are discussed.

  11. Thoracic Temporal Subtraction Three Dimensional Computed Tomography (3D-CT): Screening for Vertebral Metastases of Primary Lung Cancers

    PubMed Central

    Iwano, Shingo; Ito, Rintaro; Umakoshi, Hiroyasu; Karino, Takatoshi; Inoue, Tsutomu; Li, Yuanzhong; Naganawa, Shinji

    2017-01-01

    Purpose We developed an original, computer-aided diagnosis (CAD) software that subtracts the initial thoracic vertebral three-dimensional computed tomography (3D-CT) image from the follow-up 3D-CT image. The aim of this study was to investigate the efficacy of this CAD software during screening for vertebral metastases on follow-up CT images of primary lung cancer patients. Materials and Methods The interpretation experiment included 30 sets of follow-up CT scans in primary lung cancer patients and was performed by two readers (readers A and B), who each had 2.5 years’ experience reading CT images. In 395 vertebrae from C6 to L3, 46 vertebral metastases were identified as follows: osteolytic metastases (n = 17), osteoblastic metastases (n = 14), combined osteolytic and osteoblastic metastases (n = 6), and pathological fractures (n = 9). Thirty-six lesions were in the anterior component (vertebral body), and 10 lesions were in the posterior component (vertebral arch, transverse process, and spinous process). The area under the curve (AUC) by receiver operating characteristic (ROC) curve analysis and the sensitivity and specificity for detecting vertebral metastases were compared with and without CAD for each observer. Results Reader A detected 47 abnormalities on CT images without CAD, and 33 of them were true-positive metastatic lesions. Using CAD, reader A detected 57 abnormalities, and 38 were true positives. The sensitivity increased from 0.717 to 0.826, and on ROC curve analysis, AUC with CAD was significantly higher than that without CAD (0.849 vs. 0.902, p = 0.021). Reader B detected 40 abnormalities on CT images without CAD, and 36 of them were true-positive metastatic lesions. Using CAD, reader B detected 44 abnormalities, and 39 were true positives. The sensitivity increased from 0.783 to 0.848, and AUC with CAD was nonsignificantly higher than that without CAD (0.889 vs. 0.910, p = 0.341). Both readers detected more osteolytic and osteoblastic
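
    The reader study above compares AUCs with and without CAD; a minimal sketch of that ROC comparison using scikit-learn, with invented per-vertebra labels and reader confidence scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # metastasis present in this vertebra?
score_without_cad = np.array([0.7, 0.4, 0.5, 0.8, 0.3, 0.45, 0.6, 0.5])
score_with_cad = np.array([0.8, 0.3, 0.6, 0.9, 0.2, 0.40, 0.7, 0.4])

print(roc_auc_score(labels, score_without_cad))  # AUC without CAD
print(roc_auc_score(labels, score_with_cad))     # AUC with CAD (higher here)
```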

  12. Two-alternative forced-choice evaluation of 3D CT angiograms

    NASA Astrophysics Data System (ADS)

    Habets, Damiaan F.; Chapman, Brian E.; Fox, Allan J.; Hyde, Derek E.; Holdsworth, David W.

    2001-06-01

    This study describes the development and evaluation of an appropriate methodology to study observer performance when comparing 2D and 3D angiographic techniques. 3D-CT angiograms were obtained from patients with cerebral aneurysms or occlusive carotid artery disease, and perspective rendering of these 3D data was performed to produce maximum intensity projections (MIP) at view angles identical to those of digital subtraction angiography (DSA) images. Two-alternative forced-choice (2AFC) methodology was then used to determine the percent correct (Pc), which is equivalent to the area Az under the receiver operating characteristic (ROC) curve. In a comparison of CTA MIP images and DSA images of the intracranial vasculature, the average value of Pc was 0.90 ± 0.03. Perspective reprojection produces digitally reconstructed radiographs (DRRs) with image quality that is nearly equivalent to conventional DSA, with the additional clinical advantage of providing digitally reconstructed images at an unlimited number of viewing angles.
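
    The 2AFC figure of merit above is simply the fraction of forced-choice trials answered correctly, which ROC theory equates with the area Az; a toy tally (counts invented):

```python
correct_choices = 54   # trials where the observer picked the true-signal alternative
total_trials = 60
pc = correct_choices / total_trials  # percent correct; equals Az under 2AFC theory
print(f"Pc = {pc:.2f}")
```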

  13. Test of 3D CT reconstructions by EM + TV algorithm from undersampled data

    SciTech Connect

    Evseev, Ivan; Ahmann, Francielle; Silva, Hamilton P. da

    2013-05-06

    Computerized tomography (CT) plays an important role in medical imaging for diagnosis and therapy. However, CT imaging involves ionizing radiation exposure of patients. Therefore, dose reduction is an essential issue in CT. In 2011, the Expectation Maximization and Total Variation based model for CT reconstruction (EM+TV) was proposed. This method can reconstruct a better image using fewer CT projections in comparison with the usual filtered back projection (FBP) technique. Thus, it could significantly reduce the overall radiation dose in CT. This work reports the results of an independent numerical simulation for cone-beam CT geometry with alternative virtual phantoms. As in the original report, the 3D CT images of 128 × 128 × 128 virtual phantoms were reconstructed. It was not possible to implement phantoms with larger dimensions because of the slowness of code execution, even on a Core i7 CPU.
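
    The TV term in EM+TV penalizes the total variation of the reconstruction; a minimal sketch of the (anisotropic) TV value that such a penalty measures, on a 2D image for brevity:

```python
import numpy as np

def total_variation(u: np.ndarray) -> float:
    """Anisotropic total variation: sum of absolute finite differences along each axis."""
    return float(np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum())

rng = np.random.default_rng(0)
noisy = rng.normal(size=(64, 64))
flat = np.zeros((64, 64))
print(total_variation(noisy) > total_variation(flat))  # True: noise inflates TV
```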

  14. Association between condylar asymmetry and temporomandibular disorders using 3D-CT

    PubMed Central

    Yáñez-Vico, Rosa M.; Iglesias-Linares, Alejandro; Torres-Lagares, Daniel; Solano-Reina, Enrique

    2012-01-01

    Objectives: Using reconstructed three-dimensional computed tomography (3D-CT) models, the purpose of this study was to analyze and compare mandibular condyle morphology in patients with and without temporomandibular disorder (TMD). Study Design: Thirty-two patients were divided into two groups: the first comprised those with TMD (n=18), and the second those who did not have TMD (n=14). A CT of each patient was obtained and reconstructed as a 3D model. The 64 resulting 3D condylar models were evaluated for possible TMD-associated length, width, and height asymmetries of the condylar process. Descriptive statistics were used to assess the results, and Student's t-tests were applied to compare the two groups. Results: Statistically significant (p<0.05) vertical, mediolateral, and sagittal asymmetries of the condylar process were observed between the TMD and non-TMD groups. TMD patients showed less condylar height (p<0.05) in comparison with their asymptomatic counterparts. Conclusions: Using 3D-CT, it was shown that condylar width, height, and length asymmetries were a common feature of TMD. Key words: Condylar asymmetry, 3D computed tomography, X-ray diagnosis, maxillofacial surgery, orthodontics. PMID:22322511

  15. Development of a 3D CT-scanner using a cone beam and video-fluoroscopic system.

    PubMed

    Endo, M; Yoshida, K; Kamagata, N; Satoh, K; Okazaki, T; Hattori, Y; Kobayashi, S; Jimbo, M; Kusakabe, M; Tateno, Y

    1998-01-01

    We describe the design and implementation of a system that acquires three-dimensional (3D) data of high-contrast objects such as bone, lung, and blood vessels (enhanced by contrast agent). This 3D computed tomography (CT) system is based on a cone beam and video-fluoroscopic system and yields data that is amenable to 3D image processing. An X-ray tube and a large area two-dimensional detector were mounted on a single frame and rotated around objects in 12 seconds. The large area detector consisted of a fluorescent plate and a charge coupled device (CCD) video camera. While the X-ray tube was rotated around the object, a pulsed X-ray was generated (30 pulses per second) and 360 projected images were collected in a 12-second scan. A 256 x 256 x 256 matrix image was reconstructed using a high-speed parallel processor. Reconstruction required approximately 6 minutes. Two volunteers underwent scans of the head or chest. High-contrast objects such as bronchial, vascular, and mediastinal structures in the thorax, or bones and air cavities in the head were delineated in a "real" 3D format. Our 3D CT-scanner appears to produce data useful for clinical imaging and 3D image processing.

  16. Development of 3D-CT System Using MIRRORCLE-6X

    NASA Astrophysics Data System (ADS)

    Sasaki, M.; Takaku, J.; Hirai, T.; Yamada, H.

    2007-03-01

    The technique of computed tomography (CT) has been used in various fields, such as medical imaging, non-destructive testing (NDT), baggage checking, etc. A 3D-CT system based on the portable synchrotron "MIRRORCLE" series will be a novel instrument for these fields. The hard x-rays generated from the "MIRRORCLE" have a wide energy spectrum. Light and thin materials create absorption and refraction contrast in x-ray images through the lower-energy component (< 60 keV), and heavy and thick materials create absorption contrast through the higher-energy component. In addition, images with higher resolutions can be obtained using "MIRRORCLE" with a small source size on the order of microns. Thus, high-resolution 3D-CT images of specimens containing both light and heavy materials can be obtained using "MIRRORCLE" and a 2D detector with a wide dynamic range. In this paper, the development and output of a 3D-CT system using the "MIRRORCLE-6X" and a flat panel detector are reported. A 3D image of a piece of concrete was obtained. The detector was a flat panel detector (VARIAN, PAXSCAN2520) with 254 μm pixel size. The object and the detector were set at 50 cm and 250 cm, respectively, from the x-ray source, so that the magnification was 5x. The x-ray source was a 50 μm Pt rod. The rotation stage and the detector were remote-controlled using a computer program originally created with LabView and Visual Basic software. The exposure time was about 20 minutes. The reconstruction calculation was based on the Feldkamp algorithm, and the pixel size was 50 μm. We could observe sub-mm holes and density differences in the object. Thus, the "MIRRORCLE-CV" with 1 MeV electron energy, which has the same x-ray generation principles, will be an excellent x-ray source for medical diagnostics and NDT.
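
    The cone-beam geometry quoted above fixes the magnification by similar triangles, which in turn sets the effective pixel size at the object plane; the arithmetic, using the distances from the record:

```python
source_to_object_cm = 50.0
source_to_detector_cm = 250.0
magnification = source_to_detector_cm / source_to_object_cm  # = 5.0

detector_pixel_um = 254.0
effective_pixel_um = detector_pixel_um / magnification       # ~50.8 um at the object
print(magnification, effective_pixel_um)  # consistent with the ~50 um reconstruction grid
```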

  17. Development of 3D-CT System Using MIRRORCLE-6X

    SciTech Connect

    Sasaki, M.; Yamada, H.; Takaku, J.; Hirai, T.

    2007-03-30

    The technique of computed tomography (CT) has been used in various fields, such as medical imaging, non-destructive testing (NDT), baggage checking, etc. A 3D-CT system based on the portable synchrotron 'MIRRORCLE' series will be a novel instrument for these fields. The hard x-rays generated from the 'MIRRORCLE' have a wide energy spectrum. Light and thin materials create absorption and refraction contrast in x-ray images through the lower-energy component (< 60 keV), and heavy and thick materials create absorption contrast through the higher-energy component. In addition, images with higher resolutions can be obtained using 'MIRRORCLE' with a small source size on the order of microns. Thus, high-resolution 3D-CT images of specimens containing both light and heavy materials can be obtained using 'MIRRORCLE' and a 2D detector with a wide dynamic range. In this paper, the development and output of a 3D-CT system using the 'MIRRORCLE-6X' and a flat panel detector are reported. A 3D image of a piece of concrete was obtained. The detector was a flat panel detector (VARIAN, PAXSCAN2520) with 254 μm pixel size. The object and the detector were set at 50 cm and 250 cm, respectively, from the x-ray source, so that the magnification was 5x. The x-ray source was a 50 μm Pt rod. The rotation stage and the detector were remote-controlled using a computer program originally created with LabView and Visual Basic software. The exposure time was about 20 minutes. The reconstruction calculation was based on the Feldkamp algorithm, and the pixel size was 50 μm. We could observe sub-mm holes and density differences in the object. Thus, the 'MIRRORCLE-CV' with 1 MeV electron energy, which has the same x-ray generation principles, will be an excellent x-ray source for medical diagnostics and NDT.

  18. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    SciTech Connect

    Ravindran, V. R.; Sreelakshmi, C.; Vibin

    2008-09-26

    The application of Digital Radiography (DR) in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A real-time DR system based on an amorphous silicon flat panel detector was developed a few years back for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The real-time DR system is capable of generating sufficient digital X-ray image data, with object rotation, for CT image reconstruction. In this paper, the indigenous development of CT imaging based on the real-time DR system for solid rocket motors is presented. Studies were also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  19. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    NASA Astrophysics Data System (ADS)

    Ravindran, V. R.; Sreelakshmi, C.; Vibin, Vibin

    2008-09-01

    The application of Digital Radiography (DR) in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A real-time DR system based on an amorphous silicon flat panel detector was developed a few years back for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The real-time DR system is capable of generating sufficient digital X-ray image data, with object rotation, for CT image reconstruction. In this paper, the indigenous development of CT imaging based on the real-time DR system for solid rocket motors is presented. Studies were also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  20. Preoperative dual-phase 3D CT angiography assessment of the right hepatic artery before gastrectomy.

    PubMed

    Yamashita, Keishi; Sakuramoto, Shinichi; Mieno, Hiroaki; Shibata, Tomotaka; Nemoto, Masayuki; Katada, Natsuya; Kikuchi, Shiro; Watanabe, Masahiko

    2014-10-01

    In the current study, we evaluated the efficacy of dual-phase three-dimensional (3D) CT angiography (CTA) in the assessment of the vascular anatomy, especially the right hepatic artery (RHA), before gastrectomy. The study initially included 714 consecutive patients being treated for gastric cancer. A dual-phase contrast-enhanced CT scan using a 32-multidetector-row CT was performed for all patients. Among the 714 patients, 3D CTA clearly identified anomalies with the RHA arising from the superior mesenteric artery (SMA) in 49 cases (6.9%). In Michels' classification type IX, the common hepatic artery (CHA) originates only from the SMA. Such cases exhibit defective anatomy for the CHA in conjunction with the celiac-splenic artery system, resulting in direct exposure of the portal vein beneath the #8a lymph node station, which was retrospectively confirmed by video in laparoscopic gastrectomy cases. Fused images of 3D angiography and venography were obtained that could have predicted this risk preoperatively, and the surgical findings confirmed their usefulness. Preoperative evaluations using 3D CTA can provide more accurate information about the vessel anatomy. The fused images from 3D CTA have the potential to reduce the intraoperative risks of injuries to critical vessels, such as the portal vein, during gastrectomy.

  1. Method of Individual Adjustment for 3D CT Analysis: Linear Measurement

    PubMed Central

    Choi, Dong Hun; Lee, Jeong Woo; Yang, Jung Dug; Chung, Ho Yun; Cho, Byung Chae

    2016-01-01

    Introduction. We aim to regularize measurement values in three-dimensional (3D) computed tomography (CT) reconstructed images for higher-precision 3D analysis, focusing on length-based 3D cephalometric examinations. Methods. We measure the linear distances between points on different skull models using Vernier calipers (real values). We use 10 differently tilted CT scans for 3D CT reconstruction of the models and measure the same linear distances from the picture archiving and communication system (PACS). In both cases, each measurement is performed three times by three doctors, yielding nine measurements. The real values are compared with the PACS values. Each PACS measurement is revised based on the display field of view (DFOV) values and compared with the real values. Results. The real values and the PACS measurement changes according to tilt value have no significant correlations (p > 0.05). However, significant correlations appear between the real values and DFOV-adjusted PACS measurements (p < 0.001). Hence, we obtain a correlation expression that can yield real physical values from PACS measurements. The DFOV value intervals for various age groups are also verified. Conclusion. Precise confirmation of individual preoperative length and precise analysis of postoperative improvements through 3D analysis is possible, which is helpful for facial-bone-surgery symmetry correction. PMID:28070517
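
    The correlation expression mentioned above maps DFOV-adjusted PACS readings to physical lengths; a minimal sketch of fitting such a line, with invented paired measurements:

```python
import numpy as np

pacs_adjusted = np.array([98.2, 101.5, 95.7, 110.3, 103.8])  # mm, DFOV-adjusted PACS
real = np.array([97.9, 101.8, 95.2, 110.9, 104.1])           # mm, Vernier calipers

slope, intercept = np.polyfit(pacs_adjusted, real, 1)  # least-squares line
to_physical = lambda x: slope * x + intercept
print(to_physical(100.0))  # estimated physical length for a 100 mm PACS reading
```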

  2. Method of Individual Adjustment for 3D CT Analysis: Linear Measurement.

    PubMed

    Kim, Dong Kyu; Choi, Dong Hun; Lee, Jeong Woo; Yang, Jung Dug; Chung, Ho Yun; Cho, Byung Chae; Choi, Kang Young

    2016-01-01

    Introduction. We aim to regularize measurement values in three-dimensional (3D) computed tomography (CT) reconstructed images for higher-precision 3D analysis, focusing on length-based 3D cephalometric examinations. Methods. We measure the linear distances between points on different skull models using Vernier calipers (real values). We use 10 differently tilted CT scans for 3D CT reconstruction of the models and measure the same linear distances from the picture archiving and communication system (PACS). In both cases, each measurement is performed three times by three doctors, yielding nine measurements. The real values are compared with the PACS values. Each PACS measurement is revised based on the display field of view (DFOV) values and compared with the real values. Results. The real values and the PACS measurement changes according to tilt value have no significant correlations (p > 0.05). However, significant correlations appear between the real values and DFOV-adjusted PACS measurements (p < 0.001). Hence, we obtain a correlation expression that can yield real physical values from PACS measurements. The DFOV value intervals for various age groups are also verified. Conclusion. Precise confirmation of individual preoperative length and precise analysis of postoperative improvements through 3D analysis is possible, which is helpful for facial-bone-surgery symmetry correction.

  3. Value of 3-D CT in classifying acetabular fractures during orthopedic residency training.

    PubMed

    Garrett, Jeffrey; Halvorson, Jason; Carroll, Eben; Webb, Lawrence X

    2012-05-01

    The complex anatomy of the pelvis and acetabulum has historically made classification and interpretation of acetabular fractures difficult for orthopedic trainees. The addition of 3-dimensional (3-D) computed tomography (CT) scans has gained popularity in the preoperative planning, identification, and teaching of acetabular fractures given their complexity. Therefore, the authors examined the value of 3-D CT compared with conventional radiography in classifying acetabular fractures at different levels of orthopedic training. Their hypothesis was that 3-D CT would improve correct identification of acetabular fractures compared with conventional radiography. The classic Letournel fracture pattern classification system was presented in quiz format to 57 orthopedic residents and 20 fellowship-trained orthopedic traumatologists. A case consisted of (1) plain radiographs and 2-dimensional axial CT scans or (2) 3-D CT scans. All levels of training showed significant improvement in classifying acetabular fractures with 3-D vs 2-D CT, with the greatest benefit from 3-D CT found in junior residents (postgraduate years 1-3). Three-dimensional CT scans can be an effective educational tool for understanding the complex spatial anatomy of the pelvis, learning acetabular fracture patterns, and correctly applying a widely accepted fracture classification system.

  4. 3D CT spine data segmentation and analysis of vertebrae bone lesions.

    PubMed

    Peter, R; Malinsky, M; Ourednicek, P; Jan, J

    2013-01-01

    A method is presented aiming at detecting and classifying bone lesions in 3D CT data of human spine, via Bayesian approach utilizing Markov random fields. A developed algorithm for necessary segmentation of individual possibly heavily distorted vertebrae based on 3D intensity modeling of vertebra types is presented as well.

  5. Interactive navigation-guided ophthalmic plastic surgery: the utility of 3D CT-DCG-guided dacryolocalization in secondary acquired lacrimal duct obstructions

    PubMed Central

    Ali, Mohammad Javed; Singh, Swati; Naik, Milind N; Kaliki, Swathi; Dave, Tarjani Vivek

    2017-01-01

    Aim The aim of this study was to report the preliminary experience with the techniques and utility of navigation-guided, 3D, computed tomography–dacryocystography (CT-DCG) in the management of secondary acquired lacrimal drainage obstructions. Methods Stereotactic surgeries using CT-DCG as the intraoperative image-guiding tool were performed in 3 patients. One patient had nasolacrimal duct obstruction (NLDO) following a complete maxillectomy for a sinus malignancy, and the other 2 had NLDO following extensive maxillofacial trauma. All patients underwent a 3D CT-DCG. Image-guided dacryolocalization (IGDL) was performed using the intraoperative image-guided StealthStation™ system in the electromagnetic mode. All patients underwent navigation-guided powered endoscopic dacryocystorhinostomy (DCR). The utility of intraoperative dacryocystographic guidance and the ability to localize the lacrimal drainage system in the altered endoscopic anatomical milieu were noted. Results Intraoperative geometric localization of the lacrimal sac and the nasolacrimal duct could be easily achieved. Constant orientation of the lacrimal drainage system was possible while navigating in the vicinity of altered endoscopic perilacrimal anatomy. Useful clues with regard to modifications while performing a powered endoscopic DCR could be obtained. Surgeries could be performed with utmost safety and precision, thereby avoiding complications. Detailed preoperative 3D CT-DCG reconstructions with constant intraoperative dacryolocalization were found to be essential for successful outcomes. Conclusion The 3D CT-DCG-guided navigation procedure is very useful while performing endoscopic DCRs in cases of secondary acquired and complex NLDOs. PMID:28115826

  6. Influence of the Alveolar Cleft Type on Preoperative Estimation Using 3D CT Assessment for Alveolar Cleft

    PubMed Central

    Choi, Hang Suk; Choi, Hyun Gon; Kim, Soon Heum; Park, Hyung Jun; Shin, Dong Hyeok; Jo, Dong In; Kim, Cheol Keun

    2012-01-01

    Background The bone graft for the alveolar cleft has been accepted as one of the essential treatments for cleft lip patients. Precise preoperative measurement of the architecture and size of the bone defect in alveolar cleft has been considered helpful for increasing the success rate of bone grafting because those features may vary with the cleft type. Recently, some studies have reported on the usefulness of three-dimensional (3D) computed tomography (CT) assessment of alveolar bone defect; however, no study on the possible implication of the cleft type on the difference between the presumed and actual value has been conducted yet. We aimed to evaluate the clinical predictability of such measurement using 3D CT assessment according to the cleft type. Methods The study consisted of 47 pediatric patients. The subjects were divided according to the cleft type. CT was performed before the graft operation and assessed using image analysis software. The statistical significance of the difference between the preoperative estimation and intraoperative measurement was analyzed. Results The difference between the preoperative and intraoperative values were -0.1±0.3 cm3 (P=0.084). There was no significant intergroup difference, but the groups with a cleft palate showed a significant difference of -0.2±0.3 cm3 (P<0.05). Conclusions Assessment of the alveolar cleft volume using 3D CT scan data and image analysis software can help in selecting the optimal graft procedure and extracting the correct volume of cancellous bone for grafting. Considering the cleft type, it would be helpful to extract an additional volume of 0.2 cm3 in the presence of a cleft palate. PMID:23094242

  7. 3D CT analysis of femoral and tibial tunnel positions after modified transtibial single bundle ACL reconstruction with varus and internal rotation of the tibia.

    PubMed

    Youm, Yoon-Seok; Cho, Sung-Do; Eo, Jin; Lee, Ki-Jae; Jung, Kwang-Hwan; Cha, Jae-Ryong

    2013-08-01

    We analyzed the location of femoral and tibial tunnels on three-dimensional (3D) CT reconstruction images after modified transtibial single-bundle (SB) anterior cruciate ligament (ACL) reconstruction, creating a femoral tunnel with varus and internal rotation of the tibia. Data from 50 patients (50 knees) analyzed by 3D CT after modified transtibial SB ACL reconstruction were evaluated. The 3D CT images were analyzed according to the quadrant method of Bernard at the femur and the technique of Forsythe at the tibia. The mean distance of the femoral tunnel center locations parallel to Blumensaat's line was 29.6%±1.9% along line t, measured from the posterior condylar surface. The mean distance perpendicular to Blumensaat's line was 37.9%±2.5% along line h, measured from Blumensaat's line. At the tibia, the mean anterior-to-posterior distance of the tunnel center location was 37.8%±1.2% and the mean medial-to-lateral distance was 50.4%±0.9%. The femoral and tibial tunnels after modified transtibial SB ACL reconstruction creating a femoral tunnel with varus and internal rotation of the tibia (figure-of-4 position) were located between the anatomical anteromedial and posterolateral footprints. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. [Brain 3D-CT angiography was a useful tool for diagnosis of internal carotid-posterior communicating artery aneurysm: a case of false negative 3D-MRA].

    PubMed

    Ikeda, K; Iwasaki, Y; Murakami, S; Ichikawa, Y

    1999-09-01

    A 75-year-old woman with hypertension suddenly developed ptosis of the left eyelid. Neurological examination revealed left oculomotor nerve palsy. Brain T2-weighted imaging showed an abnormal flow-void sign in the proximal portion of the left middle cerebral artery. Other MRIs, including gadolinium enhancement, were normal. However, brain 3D-MRA, using a time-of-flight sequence, did not disclose any intracranial aneurysms. 3D-CT angiography revealed a left internal carotid-posterior communicating artery (IC-PC) aneurysm. The maximum intensity projection display of CT angiography demonstrated the neck and head portions of the IC-PC aneurysm (size = 8 mm). Furthermore, 3D-CT angiography was beneficial for anatomical evaluation of the aneurysm and the surrounding bony structures. The false-negative 3D-MRA of our patient was thought to result from flow-related artifacts, slow blood flow in the aneurysm, the surrounding noise, and the localization of the aneurysm. False-negative findings of cerebral aneurysms occasionally occur on 3D-MRA or 3D-CT angiography, in comparison with digital subtraction angiography. Thus, we should pay more attention to the assessment of 3D-MRA and 3D-CT angiography in patients who have high risks of cerebral aneurysms.

  9. 3D CT to 2D low dose single-plane fluoroscopy registration algorithm for in-vivo knee motion analysis.

    PubMed

    Akter, Masuma; Lambert, Andrew J; Pickering, Mark R; Scarvell, Jennie M; Smith, Paul N

    2014-01-01

    A limitation to accurate automatic tracking of knee motion is the noise and blurring present in low dose X-ray fluoroscopy images. For more accurate tracking, this noise should be reduced while preserving anatomical structures such as bone. Noise in low dose X-ray images is generated from different sources, however quantum noise is by far the most dominant. In this paper we present an accurate multi-modal image registration algorithm which successfully registers 3D CT to 2D single plane low dose noisy and blurred fluoroscopy images that are captured for healthy knees. The proposed algorithm uses a new registration framework including a filtering method to reduce the noise and blurring effect in fluoroscopy images. Our experimental results show that the extra pre-filtering step included in the proposed approach maintains higher accuracy and repeatability for in vivo knee joint motion analysis.

  10. Crouzon syndrome associated with acanthosis nigricans: prenatal 2D and 3D ultrasound findings and postnatal 3D CT findings

    PubMed Central

    Nørgaard, Pernille; Hagen, Casper Petri; Hove, Hanne; Dunø, Morten; Nissen, Kamilla Rothe; Kreiborg, Sven; Jørgensen, Finn Stener

    2012-01-01

    Crouzon syndrome with acanthosis nigricans (CAN) is a very rare condition with an approximate prevalence of 1 per 1 million newborns. We add the first report on prenatal 2D and 3D ultrasound findings in CAN. In addition we present the postnatal 3D CT findings. The diagnosis was confirmed by molecular testing. PMID:23986840

  11. Crouzon syndrome associated with acanthosis nigricans: prenatal 2D and 3D ultrasound findings and postnatal 3D CT findings.

    PubMed

    Nørgaard, Pernille; Hagen, Casper Petri; Hove, Hanne; Dunø, Morten; Nissen, Kamilla Rothe; Kreiborg, Sven; Jørgensen, Finn Stener

    2012-01-01

    Crouzon syndrome with acanthosis nigricans (CAN) is a very rare condition with an approximate prevalence of 1 per 1 million newborns. We add the first report on prenatal 2D and 3D ultrasound findings in CAN. In addition we present the postnatal 3D CT findings. The diagnosis was confirmed by molecular testing.

  12. The relationship between post-traumatic ossicular injuries and conductive hearing loss: A 3D-CT study.

    PubMed

    Maillot, Olivier; Attyé, Arnaud; Boutet, Claire; Boubagra, Kamel; Perolat, Romain; Zanolla, Marion; Grand, Sylvie; Schmerber, Sébastien; Krainik, Alexandre

    2017-09-01

    After a trauma, the conductive ossicular chain may be disrupted by ossicular luxation or fracture. Recent developments in 3D-CT allow a better understanding of ossicular injuries. In this retrospective study, we compared patients with post-traumatic conductive hearing loss (CHL) with those referred without CHL to evaluate the relationship between ossicular injuries and CHL. We also assessed the added value of 3D reconstructions over the 2D-CT scan for detecting ossicular lesions in surgically managed patients. The CT scans were performed using a 40-section spiral CT scanner in 49 patients, with post-traumatic CHL (n=29) and without CHL (n=20). Three radiologists performed independent blind evaluations of 2D-CT and 3D reconstructions to detect ossicular chain injury. We used the t-test to explore differences regarding the number of subjects with ossicular injury in the two groups. We also estimated the diagnostic accuracy and the inter-rater agreement of the 3D-CT reconstructions associated with the 2D-CT scan. We identified ossicular abnormality in 14 patients out of 29 and in one patient out of 20 in the CHL and non-CHL groups, respectively. There was a significant difference in the number of subjects with ossicular lesions between the two groups (P≤0.01). The diagnostic sensitivity of 3D-CT reconstructions associated with 2D-CT ranged from 66% to 100%, and the inter-reader agreement ranged from 0.85 to 1, depending on the type of lesion. Ossicular lesions were tightly correlated with the presence of CHL. 3D-CT reconstructions of the temporal bone are useful for assessing patients in a post-traumatic context. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
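
    The record reports inter-reader agreement between 0.85 and 1 without naming the statistic; Cohen's kappa is one common choice for paired binary ratings, sketched here with invented per-patient reads:

```python
from sklearn.metrics import cohen_kappa_score

reader1 = [1, 0, 1, 1, 0, 1, 0, 0]  # ossicular lesion seen by reader 1?
reader2 = [1, 0, 1, 0, 0, 1, 0, 0]  # same patients, reader 2
print(cohen_kappa_score(reader1, reader2))  # chance-corrected agreement
```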

  13. Anatomic ACL reconstruction: the normal central tibial footprint position and a standardised technique for measuring tibial tunnel location on 3D CT.

    PubMed

    Parkinson, B; Gogna, R; Robb, C; Thompson, P; Spalding, T

    2017-05-01

    The aim of this study was to define the normal ACL central tibial footprint position and describe a standardised technique for measuring tibial tunnel location on 3D CT for anatomic single-bundle ACL reconstruction. The central position of the ACL tibial attachment site was determined on 76 MRI scans of young individuals. The central footprint position was referenced in the anterior-posterior (A-P) and medial-lateral (M-L) planes on a grid system over the widest portion of the proximal tibia. 3D CT images of 26 young individuals had a simulated tibial tunnel centred within the bony landmarks of the ACL footprint, and the same grid system was applied over the widest portion of the proximal tibia. The MRI central footprint position was compared to the 3D CT central footprint position to validate the technique and results. The median age of the 76 MRI subjects was 24 years, with 32 females and 44 males. The ACL central footprint position was at 39% (±3%) and 48% (±2%) in the A-P and M-L planes, respectively. There was no significant difference in this position between sexes. The median age of the 26 CT subjects was 25.5 years, with 10 females and 16 males. The central position of the bony ACL footprint was at 38% (±2%) and 48% (±2%) in the A-P and M-L planes, respectively. The MRI and CT central footprint positions were not significantly different in the medial position, but differed in the anterior position (A-P 39% vs. 38%, p = 0.01). The absolute difference between the central MRI and CT reference positions was 0.45 mm. The ACL's normal central tibial footprint reference position has been defined, and a technique for measuring tibial tunnel location with a standardised grid system is described. This study will assist surgeons in evaluating tibial tunnel position in anatomic single-bundle ACL reconstruction. Level of evidence: III. A sketch of the grid measurement follows.
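
    To make the grid convention concrete, the following Python sketch expresses a footprint (or tunnel) centre as percentages of the anterior-posterior and medial-lateral extents of the widest proximal-tibia cross-section. The axis assignment (rows = A-P, columns = M-L) is an assumption for illustration, not taken from the paper.

```python
import numpy as np

def footprint_grid_position(center_row, center_col, tibia_mask):
    """Percent position of a point on a grid spanning the bony outline.

    tibia_mask: boolean 2D array of the widest proximal-tibia slice.
    Returns (A-P %, M-L %), measured from the anterior and medial edges
    (which edge is which depends on image orientation -- an assumption here).
    """
    rows, cols = np.nonzero(tibia_mask)
    ap = 100.0 * (center_row - rows.min()) / (rows.max() - rows.min())
    ml = 100.0 * (center_col - cols.min()) / (cols.max() - cols.min())
    return ap, ml
```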

  14. A hybrid approach for fusing 4D-MRI temporal information with 3D-CT for the study of lung and lung tumor motion

    SciTech Connect

    Yang, Y. X.; Van Reeth, E.; Poh, C. L.; Teo, S.-K.; Tan, C. H.; Tham, I. W. K.

    2015-08-15

    Purpose: Accurate visualization of lung motion is important in many clinical applications, such as radiotherapy of lung cancer. Advancement in imaging modalities [e.g., computed tomography (CT) and MRI] has allowed dynamic imaging of lung and lung tumor motion. However, each imaging modality has its advantages and disadvantages. The study presented in this paper aims at generating synthetic 4D-CT dataset for lung cancer patients by combining both continuous three-dimensional (3D) motion captured by 4D-MRI and the high spatial resolution captured by CT using the authors’ proposed approach. Methods: A novel hybrid approach based on deformable image registration (DIR) and finite element method simulation was developed to fuse a static 3D-CT volume (acquired under breath-hold) and the 3D motion information extracted from 4D-MRI dataset, creating a synthetic 4D-CT dataset. Results: The study focuses on imaging of lung and lung tumor. Comparing the synthetic 4D-CT dataset with the acquired 4D-CT dataset of six lung cancer patients based on 420 landmarks, accurate results (average error <2 mm) were achieved using the authors’ proposed approach. Their hybrid approach achieved a 40% error reduction (based on landmarks assessment) over using only DIR techniques. Conclusions: The synthetic 4D-CT dataset generated has high spatial resolution, has excellent lung details, and is able to show movement of lung and lung tumor over multiple breathing cycles.
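
    The core fusion step, resampling the breath-hold CT through each deformation field extracted from 4D-MRI, can be sketched in a few lines. This is an illustrative Python sketch under the assumption of a dense displacement field in voxel units; the paper's actual pipeline also involves deformable registration and finite element simulation, which are not shown.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_ct(static_ct, dvf):
    """Resample a static 3D-CT through a displacement field.

    static_ct: (Z, Y, X) volume acquired under breath-hold.
    dvf: (3, Z, Y, X) per-voxel displacement, in voxel units.
    """
    grid = np.indices(static_ct.shape, dtype=np.float32)
    coords = grid + dvf                    # pull-back sampling positions
    return map_coordinates(static_ct, coords, order=1, mode='nearest')

# One synthetic 4D-CT phase per 4D-MRI motion field (mri_dvfs is hypothetical):
# phases = [warp_ct(ct_volume, dvf_t) for dvf_t in mri_dvfs]
```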

  16. Remote-rendered 3D CT angiography (3DCTA) as an intraoperative aid in cerebrovascular neurosurgery.

    PubMed

    Wilkinson, E P; Shahidi, R; Wang, B; Martin, D P; Adler, J R; Steinberg, G K

    1999-01-01

    To assess the viability and utility of network-based rendering in the treatment of patients with cerebral aneurysms, we implemented an intraoperative rendering system and protocol using both three-dimensional CT angiography (3DCTA) and perspective volume rendering (PVR). A Silicon Graphics InfiniteReality engine was connected via a Fast Ethernet network to a workstation in the neurosurgical operating room. A protocol was developed to isolate bone and vessels using an appropriate transfer function. Three-dimensional CT angiogram images were volume rendered and transmitted to the workstation using a bandwidth-conserving remote rendering system, and were rotated, cut using clipping planes, and viewed using normal and perspective views. Twelve patients with intracranial aneurysms were examined at surgery using this system. Rendering performance at optimal operating bandwidths (50-60 Mb/s) was excellent, with regeneration of a high-resolution image in less than 1 s. Network performance varied in two cases, slowing image regeneration. Surgeons found the images to be useful as an adjunct to conventional imaging in understanding the morphology of complex aneurysms and their relationship to the skull base. Intraoperative volume rendering using 3DCTA is achievable over a network, can reduce hardware costs by amortizing hardware among multiple users, and provides useful imaging information during the surgical treatment of cerebral aneurysms. Future operating suites may incorporate network-transmitted three-dimensional images as additional sources of imaging information. Copyright 1999 Wiley-Liss, Inc.

  17. Intracranial aneurysm segmentation in 3D CT angiography: method and quantitative validation

    NASA Astrophysics Data System (ADS)

    Firouzian, Azadeh; Manniesing, R.; Flach, Z. H.; Risselada, R.; van Kooten, F.; Sturkenboom, M. C. J. M.; van der Lugt, A.; Niessen, W. J.

    2010-03-01

    Accurately quantifying aneurysm shape parameters is of clinical importance, as shape is an important factor in choosing the right treatment modality (i.e. coiling or clipping), in predicting rupture risk and operative risk, and in pre-surgical planning. The first step in aneurysm quantification is to segment it from the other structures present in the image. As manual segmentation is a tedious procedure and prone to inter- and intra-observer variability, there is a need for an automated method that is accurate and reproducible. In this paper a novel semi-automated method for segmenting aneurysms in Computed Tomography Angiography (CTA) data based on Geodesic Active Contours is presented and quantitatively evaluated. Three different image features are used to steer the level set to the boundary of the aneurysm, namely intensity, gradient magnitude and variance in intensity. The method requires minimal user interaction, i.e. clicking a single seed point inside the aneurysm, which is used to estimate the vessel intensity distribution and to initialize the level set. The results show that the developed method is reproducible and performs in the range of interobserver variability in terms of accuracy.
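
    The three steering features can be combined into a single speed map that is large inside the aneurysm and small at its boundary. The sketch below is one plausible construction in Python/NumPy, with illustrative weights and window sizes; the paper embeds such cues in a geodesic active contour level set, which is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_gradient_magnitude, uniform_filter

def speed_image(cta, seed_mean, seed_std, w=(1.0, 1.0)):
    """Combine intensity, gradient magnitude and local variance cues.

    seed_mean/seed_std: vessel intensity statistics estimated around the
    user-clicked seed point. Weights w are illustrative assumptions.
    """
    vol = cta.astype(np.float32)
    smooth = gaussian_filter(vol, sigma=1.0)
    grad = gaussian_gradient_magnitude(vol, sigma=1.0)
    local_mean = uniform_filter(smooth, size=5)
    local_var = uniform_filter(smooth**2, size=5) - local_mean**2
    # High where the intensity matches the seed-point estimate...
    intensity = np.exp(-0.5 * ((smooth - seed_mean) / seed_std) ** 2)
    # ...and low where strong edges or texture suggest a boundary.
    return intensity / (1.0 + w[0] * grad + w[1] * local_var)
```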

  18. Image Processing

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Center for use on the space shuttle Orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them, and downlink images to ground-based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include X-ray and MRI imagery, textile designs and special effects for movies. As of 1/28/98, the company could not be located; therefore, contact/product information is no longer valid.

  19. Mesenteric Vasculature-guided Small Bowel Segmentation on 3D CT

    PubMed Central

    Zhang, Weidong; Liu, Jiamin; Yao, Jianhua; Louie, Adeline; Nguyen, Tan B.; Wank, Stephen; Nowinski, Wieslaw L.; Summers, Ronald M.

    2014-01-01

    Due to its importance and possible applications in visualization, tumor detection, and pre-operative planning, automatic small bowel segmentation is essential for computer-aided diagnosis of small bowel pathology. However, segmenting the small bowel directly on CT scans is very difficult because of the low image contrast, the high tortuosity of the small bowel, and its close proximity to other abdominal organs. Motivated by the intensity characteristics of abdominal CT images, the anatomic relationship between the mesenteric vasculature and the small bowel, and the potential usefulness of the mesenteric vasculature for establishing the path of the small bowel, we propose a novel mesenteric vasculature map-guided method for small bowel segmentation on high-resolution CT angiography scans. The major mesenteric arteries are first segmented using a vessel tracing method based on a multi-linear subspace vessel model and Bayesian inference. Second, multi-view, multi-scale vesselness enhancement filters are used to segment small vessels, and vessels directly or indirectly connected to the superior mesenteric artery are classified as mesenteric vessels. Third, a mesenteric vasculature map is built by linking vessel bifurcation points, and the small bowel is segmented by employing the mesenteric vessel map and fuzzy connectedness. The method was evaluated on 11 abdominal CT scans of patients suspected of having carcinoid tumors, with a manually labeled reference standard. The result, 82.5% volume overlap accuracy compared with the reference standard, shows it is feasible to segment the small bowel on CT scans using the mesenteric vasculature as a roadmap. PMID:23807437
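
    The vesselness enhancement step is commonly implemented with a multi-scale Hessian filter. The sketch below uses scikit-image's Frangi filter as a stand-in for the paper's multi-view, multi-scale enhancement; the scales and normalisation are illustrative assumptions.

```python
import numpy as np
from skimage.filters import frangi

def mesenteric_vesselness(cta_volume, sigmas=(1.0, 2.0, 3.0)):
    """Multi-scale Hessian vesselness on a contrast-enhanced CT volume.

    black_ridges=False because contrast-filled vessels appear bright on CTA.
    The response can then be thresholded and traced toward the superior
    mesenteric artery to build the vasculature map.
    """
    v = frangi(cta_volume.astype(np.float32), sigmas=sigmas, black_ridges=False)
    return v / v.max()   # normalise to [0, 1] for a subsequent threshold
```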

  20. A 3-D CT Analysis of Screw and Suture-Button Fixation of the Syndesmosis.

    PubMed

    Schon, Jason M; Williams, Brady T; Venderley, Melanie B; Dornan, Grant J; Backus, Jonathon D; Turnbull, Travis Lee; LaPrade, Robert F; Clanton, Thomas O

    2017-02-01

    Historically, syndesmosis injuries have been repaired with screw fixation; however, some suggest that suture-button constructs may provide a more accurate anatomic and physiologic reduction. The purpose of this study was to compare changes in the volume of the syndesmotic space following screw or suture-button fixation using a preinjury and postoperative 3-D computed tomography (CT) model. The null hypothesis was that no difference would be observed among repair techniques. Twelve pairs of cadaveric specimens were dissected to identify the syndesmotic ligaments. Specimens were imaged with CT prior to the creation of a complete syndesmosis injury and were subsequently repaired using 1 of 3 randomly assigned techniques: (a) one 3.5-mm cortical screw, (b) one suture-button, or (c) two suture-buttons. Specimens were imaged postoperatively with CT. 3-D models were built from all scans, and tibiofibular joint space volumes were calculated to assess restoration of the native syndesmosis. Analysis of variance and Tukey's method were used to compare least squares mean differences from the intact syndesmosis among repair techniques. For each of the 3 fixation methods, the total postoperative syndesmosis volume was significantly decreased relative to the intact state. The total mean decreases in volume compared with the intact state for the 1-suture-button construct, 2-suture-button construct, and syndesmotic screw were -561 mm³ (95% CI, -878 to -244), -964 mm³ (95% CI, -1281 to -647), and -377 mm³ (95% CI, -694 to -60), respectively. All repairs notably reduced the volume of the syndesmosis beyond the intact state. Fixation with 1 suture-button was not significantly different from screw or 2-suture-button fixation; however, fixation with 2 suture-buttons resulted in significantly decreased volume compared with screw fixation. The results of this study suggest that the 1-suture-button and screw fixation repair techniques were comparable for reduction of the syndesmosis.

  1. Preoperative diagnosis of sentinel lymph node (SLN) metastasis using 3D CT lymphography (CTLG).

    PubMed

    Nakagawa, Misako; Morimoto, Masami; Takechi, Hirokazu; Tadokoro, Yukiko; Tangoku, Akira

    2016-05-01

    Sentinel lymph node biopsy (SLNB) became a standard procedure for patients with early breast cancer; however, applying SLN navigation to metastatic disease may lead to misdiagnosis in staging. Preoperative CTLG with a water-soluble iodinated contrast medium visualizes the correct primary SLNs and their afferent lymphatic channels within the surrounding detailed anatomy, so it can predict LN metastasis by visualizing lymph vessel obstruction or a stain defect of the SLN caused by tumor. The current study presents the value of CTLG for preoperative prediction of SLN status. A total of 228 patients with Tis-T2 breast cancer who did not receive primary chemotherapy were studied. SLN metastasis was diagnosed according to the following staining patterns of SLNs and afferent lymphatic vessels: stain defect of the SLN, and obstruction, stagnation, dilation, or detour of the lymphatic vessels due to tumor occupation. The diagnosis was compared with the pathological results to evaluate the accuracy of prediction of SLN metastasis using CTLG. Twenty-seven of 228 patients had pathologically metastatic SLNs; twenty-five of these were diagnosed as metastatic preoperatively. The accuracy of metastatic diagnosis using CTLG was 89.0%, sensitivity was 92.6%, and specificity was 88.6%. The positive predictive value was 52.1% and the negative predictive value was 98.8%. CTLG can select candidates with truly node-negative disease in early breast cancer patients, because it predicts lymph node metastasis preoperatively from the natural state of the lymphographic image. It might also allow the SLN biopsy itself to be omitted.
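
    For readers who want to check the reported statistics, the counts implied by the abstract can be reconstructed: 25 true positives, 2 false negatives, and, working back from the 52.1% positive predictive value, 23 false positives and 178 true negatives. The Python sketch below recomputes the metrics from those inferred counts (NPV comes out at 98.9%, versus the abstract's 98.8%, a rounding difference).

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard confusion-matrix summary statistics."""
    total = tp + fp + fn + tn
    return {
        "accuracy":    (tp + tn) / total,   # 203/228  = 0.890
        "sensitivity": tp / (tp + fn),      # 25/27    = 0.926
        "specificity": tn / (tn + fp),      # 178/201  = 0.886
        "ppv":         tp / (tp + fp),      # 25/48    = 0.521
        "npv":         tn / (tn + fn),      # 178/180  = 0.989
    }

print(diagnostic_metrics(tp=25, fp=23, fn=2, tn=178))
```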

  2. Swarm Intelligence Integrated Graph-Cut for Liver Segmentation from 3D-CT Volumes

    PubMed Central

    Eapen, Maya; Korah, Reeba; Geetha, G.

    2015-01-01

    The segmentation of organs in CT volumes is a prerequisite for diagnosis and treatment planning. In this paper, we focus on liver segmentation from contrast-enhanced abdominal CT volumes, a challenging task due to intensity overlapping, blurred edges, large variability in liver shape, and a complex background with cluttered features. The algorithm integrates multiple discriminative cues (prior domain information, an intensity model, and regional characteristics of the liver) in a graph-cut image segmentation framework. The paper proposes a swarm intelligence inspired edge-adaptive weight function for regulating the energy minimization of the traditional graph-cut model. The model is validated both qualitatively (by clinicians and radiologists) and quantitatively on publicly available computed tomography (CT) datasets (MICCAI 2007 liver segmentation challenge, 3D-IRCAD). Quantitative evaluation of segmentation results is performed using liver volume calculations, and mean scores of 80.8% and 82.5% are obtained on the MICCAI and IRCAD datasets, respectively. The experimental results illustrate the efficiency and effectiveness of the proposed method. PMID:26689833

  3. Positioning evaluation of corrective osteotomy for the malunited radius: 3-D CT versus 2-D radiographs.

    PubMed

    Vroemen, Joy C; Dobbe, Johannes G G; Strackee, Simon D; Streekstra, Geert J

    2013-02-01

    The authors retrospectively investigated the postoperative position of the distal radius after a corrective osteotomy using 2-dimensional (2-D) and 3-dimensional (3-D) imaging techniques to determine whether malposition correlates with clinical outcome. Twenty-five patients who underwent a corrective osteotomy were available for follow-up. The residual positioning errors of the distal end were determined retrospectively using standard 2-D radiographs and 3-D computed tomography evaluations based on a scan of both forearms, with the contralateral healthy radius serving as reference. For 3-D analysis, use of an anatomical coordinate system for each reference bone allowed the authors to express the residual malalignment parameters in displacements (Δx, Δy, Δz) and rotations (Δφx, Δφy, Δφz) for aligning the affected bone in a standardized way with the corresponding reference bone. The authors investigated possible correlations between malalignment parameters and clinical outcome using patients' questionnaires. Two-dimensional radiographic evaluation showed a radial inclination of 24.9°±6.8°, a palmar tilt of 4.5°±8.6°, and an ulnar variance of 0.8±1.7 mm. With 3-D analysis, residual displacements were 2.6±3 (Δx), 2.4±3 (Δy), and -2.2±4 (Δz) mm. Residual rotations were -6.2°±10° (Δφx), 0.3°±7° (Δφy), and -5.1°±10° (Δφz). The large standard deviation is indicative of persistent malalignment in individual cases. Statistically significant correlations were found between 3-D rotational deficits and clinical outcome but not between 2-D evaluation parameters. Considerable residual malalignments and statistically significant correlations between malalignment parameters and clinical outcome confirm the need for better positioning techniques.

  4. Acceleration of EM-Based 3D CT Reconstruction Using FPGA.

    PubMed

    Choi, Young-Kyu; Cong, Jason

    2016-06-01

    Reducing radiation doses is one of the key concerns in computed tomography (CT) based 3D reconstruction. Although iterative methods such as the expectation maximization (EM) algorithm can be used to address this issue, applying this algorithm to practice is difficult due to the long execution time. Our goal is to decrease this long execution time to an order of a few minutes, so that low-dose 3D reconstruction can be performed even in time-critical events. In this paper we introduce a novel parallel scheme that takes advantage of numerous block RAMs on field-programmable gate arrays (FPGAs). Also, an external memory bandwidth reduction strategy is presented to reuse both the sinogram and the voxel intensity. Moreover, a customized processing engine based on the FPGA is presented to increase overall throughput while reducing the logic consumption. Finally, a hardware and software flow is proposed to quickly construct a design for various CT machines. The complete reconstruction system is implemented on an FPGA-based server-class node. Experiments on actual patient data show that a 26.9× speedup can be achieved over a 16-thread multicore CPU implementation.
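
    The algorithm being accelerated is the classic multiplicative MLEM update, which the FPGA engine parallelises across voxels. A minimal NumPy sketch follows, with a dense system matrix standing in for the on-chip forward/back-projectors; A and y are assumptions for illustration.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Maximum-likelihood EM reconstruction.

    A: (n_rays, n_voxels) system matrix (forward projector).
    y: (n_rays,) measured sinogram counts.
    """
    x = np.ones(A.shape[1])                 # flat initial image
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)           # measured / forward-projected
        x *= (A.T @ ratio) / (sens + eps)   # multiplicative EM update
    return x
```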

  5. Fast and Automatic Heart Isolation in 3D CT Volumes: Optimal Shape Initialization

    NASA Astrophysics Data System (ADS)

    Zheng, Yefeng; Vega-Higuera, Fernando; Zhou, Shaohua Kevin; Comaniciu, Dorin

    Heart isolation (separating the heart from the proximity tissues, e.g., lung, liver, and rib cage) is a prerequisite to clearly visualize the coronary arteries in 3D. Such a 3D visualization provides an intuitive view to physicians to diagnose suspicious coronary segments. Heart isolation is also necessary in radiotherapy planning to mask out the heart for the treatment of lung or liver tumors. In this paper, we propose an efficient and robust method for heart isolation in computed tomography (CT) volumes. Marginal space learning (MSL) is used to efficiently estimate the position, orientation, and scale of the heart. An optimal mean shape (which optimally represents the whole shape population) is then aligned with detected pose, followed by boundary refinement using a learning-based boundary detector. Post-processing is further exploited to exclude the rib cage from the heart mask. A large-scale experiment on 589 volumes (including both contrasted and non-contrasted scans) from 288 patients demonstrates the robustness of the approach. It achieves a mean point-to-mesh error of 1.91 mm. Running at a speed of 1.5 s/volume, it is at least 10 times faster than the previous methods.

  6. Inter-observer reliability of measurements performed on digital long-leg standing radiographs and assessment of validity compared to 3D CT-scan.

    PubMed

    Boonen, B; Kerens, B; Schotanus, M G M; Emans, P; Jong, B; Kort, N P

    2016-01-01

    Long-leg radiographs (LLR) are often used in orthopaedics to assess limb alignment in patients undergoing total knee arthroplasty (TKA). However, there are still concerns about the adequacy of measurements performed on LLR. We assessed the reliability and validity of measurements on LLR using three-dimensional computed tomography (3D CT)-scan as the gold standard. Six different surgeons individually measured the mechanical axis and the position of the femoral and tibial components on 24 LLR. Intraclass correlation coefficients (ICC) were calculated to obtain reliability, and Bland-Altman plots were constructed to assess agreement between measurements on LLR and measurements on 3D CT-scan. ICC agreement for the six observer measurements on LLR was 0.70 for the femoral component and 0.80 for the tibial component. The mean difference between measurements performed on LLR and 3D CT-scan was 0.3° for the femoral component and -1.1° for the tibial component. The variation of the difference between LLR and 3D CT-scan was 1.1° for the femoral component and 0.9° for the tibial component. 95% of the differences between measurements performed on LLR and 3D CT-scan were between -1.9° and 2.4° (femoral component) and between -2.9° and 0.7° (tibial component). Measurements on LLR show moderate to good reliability and, when compared to 3D CT-scan, good validity. Institutional review board: Atrium-Orbis-Zuyd, number 11-T-15. Prospective cohort study, Level II. Copyright © 2015 Elsevier B.V. All rights reserved.
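
    The Bland-Altman quantities cited above (mean difference plus 95% limits of agreement) are simple to reproduce. The sketch below computes them for paired angle measurements; the array contents are hypothetical.

```python
import numpy as np

def bland_altman(llr_deg, ct3d_deg):
    """Bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(llr_deg, float) - np.asarray(ct3d_deg, float)
    bias = diff.mean()                    # e.g. 0.3 deg (femoral component)
    half_width = 1.96 * diff.std(ddof=1)  # 1.96 SD on either side of the bias
    return bias, (bias - half_width, bias + half_width)
```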

  7. Fast intra-operative non-linear registration of 3D-CT to tracked, selected 2D-ultrasound slices

    NASA Astrophysics Data System (ADS)

    Olesch, Janine; Beuthien, Björn; Heldmann, Stefan; Papenberg, Nils; Fischer, Bernd

    2011-03-01

    In navigated liver surgery it is an important task to align intra-operative data to pre-operative planning data. This work describes a method to register pre-operative 3D-CT data to tracked intra-operative 2D US-slices. Instead of reconstructing a 3D volume out of the two-dimensional US-slice sequence, we apply the registration scheme directly to the 2D slices. The advantages of this approach are manifold: we circumvent the time-consuming compounding process, we use only known information, and the complexity of the scheme is reduced drastically. As the liver is a non-rigid organ, we apply non-linear techniques to take care of deformations occurring during the intervention. During surgery, computing time is a crucial issue. As the complexity of the scheme is proportional to the number of acquired slices, we devise a scheme which starts out by selecting a few "key-slices" to be used in the non-linear registration scheme. This step is followed by multi-level/multi-scale strategies and fast optimization techniques. In this abstract we briefly describe the new method and show first convincing results.

  8. Image Processing

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Images are prepared from data acquired by the multispectral scanner aboard Landsat, which views Earth in four ranges of the electromagnetic spectrum, two visible bands and two infrared. The scanner picks up radiation from ground objects and converts the radiation signatures to digital signals, which are relayed to Earth and recorded on tape. Each tape contains "pixels," or picture elements, covering a ground area; computerized equipment processes the tapes and plots each pixel, line by line, to produce the basic image. The image can be further processed to correct sensor errors, to heighten contrast for feature emphasis, or to enhance the end product in other ways. A key factor in the conversion of digital data to visual form is the precision of the processing equipment. Jet Propulsion Laboratory prepared a digital mosaic that was plotted and enhanced by Optronics International, Inc. using the company's C-4300 Colorwrite, a high-precision, high-speed system which manipulates and analyzes digital data and presents it in visual form on film. Optronics manufactures a complete family of image enhancement processing systems to meet all users' needs. Enhanced imagery is useful to geologists, hydrologists, land use planners, agricultural specialists, geographers, and others.

  9. Image Processing

    DTIC Science & Technology

    1999-03-01

    [Scanned report; only fragments are legible:] ...blurs the processed image. Blurring is the primary limitation of low-pass filtering. Figure (10) shows a photo of the famous Taj Mahal with arbitrarily added noise, together with its original histogram (figure prepared in Adobe Photoshop 4.0).

  10. The "sagging rope sign" in avascular necrosis in children's hip diseases--confirmation by 3D CT studies.

    PubMed

    Kim, H T; Eisenhauer, E; Wenger, D R

    1995-01-01

    Growth disturbance of the proximal femoral epiphysis and physis secondary to avascular necrosis (AVN) in a variety of children's hip disorders produces changes in the femoral head and neck that make radiographic interpretation difficult. The enlarged, overhanging femoral head produces radiographic markings on the femoral neck which are sometimes confusing and have been misinterpreted as growth arrest lines. Apley and Wientroub reintroduced Perkins' description of the "sagging rope" sign in AVN of the femoral head, and Clarke clarified that this puzzling radiographic transverse metaphyseal line overlying the femoral neck in fact represents the margin of the femoral head rather than a growth arrest line. Their report was made after studying plain and stereoscopic radiographs alone. Our review of 23 cases of femoral head AVN in children, documented by three-dimensional computed tomographic (3D CT) studies of the femoral head and pelvis, confirms Clarke's view of the nature of the "sagging rope" sign. These sophisticated radiographic studies provide new detail and understanding of the head-neck relationship in AVN, which allows better planning for surgical correction of hip disorders in children.

  11. Mapping motion from 4D-MRI to 3D-CT for use in 4D dose calculations: A technical feasibility study

    SciTech Connect

    Boye, Dirk; Lomax, Tony; Knopf, Antje

    2013-06-15

    Purpose: Target sites affected by organ motion require a time resolved (4D) dose calculation. Typical 4D dose calculations use 4D-CT as a basis. Unfortunately, 4D-CT images have the disadvantage of being a 'snap-shot' of the motion during acquisition and of assuming regularity of breathing. In addition, 4D-CT acquisitions involve a substantial additional dose burden to the patient making many, repeated 4D-CT acquisitions undesirable. Here the authors test the feasibility of an alternative approach to generate patient specific 4D-CT data sets. Methods: In this approach motion information is extracted from 4D-MRI. Simulated 4D-CT data sets [which the authors call 4D-CT(MRI)] are created by warping extracted deformation fields to a static 3D-CT data set. The employment of 4D-MRI sequences for this has the advantage that no assumptions on breathing regularity are made, irregularities in breathing can be studied and, if necessary, many repeat imaging studies (and consequently simulated 4D-CT data sets) can be performed on patients and/or volunteers. The accuracy of 4D-CT(MRI)s has been validated by 4D proton dose calculations. Our 4D dose algorithm takes into account displacements as well as deformations on the originating 4D-CT/4D-CT(MRI) by calculating the dose of each pencil beam based on an individual time stamp of when that pencil beam is applied. According to corresponding displacement and density-variation-maps the position and the water equivalent range of the dose grid points is adjusted at each time instance. Results: 4D dose distributions, using 4D-CT(MRI) data sets as input were compared to results based on a reference conventional 4D-CT data set capturing similar motion characteristics. Almost identical 4D dose distributions could be achieved, even though scanned proton beams are very sensitive to small differences in the patient geometry. In addition, 4D dose calculations have been performed on the same patient, but using 4D-CT(MRI) data sets based on

  12. Laparoscopic resection aided by preoperative 3-D CT angiography for rectosigmoid colon cancer associated with a horseshoe kidney: A case report.

    PubMed

    Maeda, Yoshiaki; Shinohara, Toshiki; Nagatsu, Akihisa; Futakawa, Noriaki; Hamada, Tomonori

    2014-11-01

    We herein report a case of laparoscopic high anterior resection with D3 lymph node dissection for rectosigmoid colon cancer with a horseshoe kidney. A 65-year-old Japanese man referred to our hospital for rectosigmoid colon cancer was found to have a horseshoe kidney on a CT scan. On 3-D CT angiography, an aberrant renal artery was visualized feeding the renal isthmus that arises from the aorta just below the root of the inferior mesenteric artery (IMA). Laparoscopic anterior rectal resection with D3 lymph node dissection was performed. During the operation, the IMA, left ureter, left gonadal vessels and hypogastric nerve plexus could be seen passing over the horseshoe kidney isthmus. With the aid of preoperative 3-D CT angiography, the root of the IMA was identified on the temporal side of the isthmus and divided safely just above the hypogastric nerve. As a horseshoe kidney is often accompanied by aberrant renal arteries and/or abnormal running of the ureter, 3-D CT angiography is useful for determining the location of these structures and avoiding intraoperative injury.

  13. Browsing Through Closed Books: Evaluation of Preprocessing Methods for Page Extraction of a 3-D CT Book Volume

    NASA Astrophysics Data System (ADS)

    Stromer, D.; Christlein, V.; Schön, T.; Holub, W.; Maier, A.

    2017-09-01

    It is often the case that a document cannot be opened, page-turned, or touched anymore due to damage caused by aging processes, moisture, or fire. To counter this, special imaging systems can be used. Our earlier work revealed that a common 3-D X-ray micro-CT scanner is well suited for imaging and reconstructing historical documents written with iron gall ink – an ink containing metallic particles. We acquired a volume of a self-made book with a single 3-D scan, without opening or page-turning. However, when investigating the reconstructed volume, we faced the problem of properly extracting single pages from the volume automatically, in acceptable time and without losing information from the writings. In this work, we evaluate different pre-processing methods with respect to computation time and accuracy, both of which are decisive for a proper extraction of book pages from the reconstructed X-ray volume and the subsequent ink identification. The different methods were tested in an extreme case with low resolution, noisy input data, and wavy pages. Finally, we present results of the page extraction after applying the evaluated methods.

  14. 3D computed tomography of an unusual triple ended xiphoid process.

    PubMed

    Mosca, Heather; Dross, Peter

    2012-03-01

    The sternum is the site of frequent variations and anomalies. Knowledge of the plain film and CT appearance of these variations and anomalies is useful in differentiating from pathologic conditions and in surgical planning. We present a rare case of an unusual triple ended xiphoid process with its plain film and 3D CT volume rendered reconstructed imaging.

  15. Digital image processing.

    PubMed

    Seeram, Euclid

    2004-01-01

    Digital image processing is now commonplace in radiology, nuclear medicine and sonography. This article outlines underlying principles and concepts of digital image processing. After completing this article, readers should be able to: List the limitations of film-based imaging. Identify major components of a digital imaging system. Describe the history and application areas of digital image processing. Discuss image representation and the fundamentals of digital image processing. Outline digital image processing techniques and processing operations used in selected imaging modalities. Explain the basic concepts and visualization tools used in 3-D and virtual reality imaging. Recognize medical imaging informatics as a new area of specialization for radiologic technologists.

  16. Image processing in astronomy

    NASA Astrophysics Data System (ADS)

    Berry, Richard

    1994-04-01

    Today's personal computers are more powerful than the mainframes that processed images during the early days of space exploration. We have entered an age in which anyone can do image processing. Topics covering the following aspects of image processing are discussed: digital-imaging basics, image calibration, image analysis, scaling, spatial enhancements, and compositing.

  17. IS 3D-CT REFORMATION USING FREE SOFTWARE APPLICABLE TO DIAGNOSIS OF BONE CHANGES IN MANDIBULAR CONDYLES?

    PubMed Central

    de Oliveira, Marília Gerhardt; Morais, Luciano Engelmann; Silva, Daniela Nascimento; de Oliveira, Helena Willhelm; Heitz, Cláiton; Gaião, Lêonilson

    2009-01-01

    Objectives: This study evaluated the agreement of computed tomography (CT) imaging using 3D reformations (3DR) with shaded surface display (SSD) and maximum intensity projection (MIP) in the diagnosis of bone changes in mandibular condyles of patients with rheumatoid arthritis (RA), and compared findings with multiplanar reformation (MPR) images, used as the criterion standard. Material and Methods: Axial CT images of 44 temporomandibular joints (TMJs) of 22 patients with RA were used. Images were recorded in DICOM format and assessed using free software (ImageJ). Each sample had its 3DR-SSD and 3DR-MIP results compared in pairs with the MPR results. Results: Slight agreement (k = 0.0374) was found in almost all comparisons. The level of agreement showed that 3DR-SSD and 3DR-MIP yielded a number of false-negative results that was statistically significant when compared with MPR. Conclusions: 3DR-SSD or 3DR-MIP should only be used as adjuvant techniques to MPR in the diagnosis of bone changes in mandibular condyles. PMID:19466245
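
    The agreement statistic reported here is kappa, which corrects raw agreement for chance. A minimal Python sketch follows, with hypothetical per-joint findings coded as labels:

```python
import numpy as np

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two sets of paired ratings."""
    a, b = np.asarray(ratings_a), np.asarray(ratings_b)
    p_obs = np.mean(a == b)                       # observed agreement
    labels = np.unique(np.concatenate([a, b]))
    p_chance = sum(np.mean(a == k) * np.mean(b == k) for k in labels)
    return (p_obs - p_chance) / (1.0 - p_chance)

# e.g. cohens_kappa(findings_3dr_ssd, findings_mpr) close to 0 means agreement
# barely above chance, as with the paper's k = 0.0374.
```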

  18. Image-Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1986-01-01

    Apple Image-Processing Educator (AIPE) explores ability of microcomputers to provide personalized computer-assisted instruction (CAI) in digital image processing of remotely sensed images. AIPE is "proof-of-concept" system, not polished production system. User-friendly prompts provide access to explanations of common features of digital image processing and of sample programs that implement these features.

  19. Digital image processing

    NASA Technical Reports Server (NTRS)

    Bernstein, R.; Ferneyhough, D. G., Jr.

    1975-01-01

    The Federal Systems Division of IBM has developed an image processing facility to experimentally process, view, and record digital image data. This facility has been used to support LANDSAT digital image processing investigations and advanced image processing research and development. A brief description of the facility is presented, some techniques that have been developed to correct the image data are discussed, and some results obtained by users of the facility are described.

  20. Multispectral imaging and image processing

    NASA Astrophysics Data System (ADS)

    Klein, Julie

    2014-02-01

    The color accuracy of conventional RGB cameras is not sufficient for many color-critical applications. One of these applications, namely the measurement of color defects in yarns, is why Prof. Til Aach and the Institute of Image Processing and Computer Vision (RWTH Aachen University, Germany) started off with multispectral imaging. The first acquisition device was a camera using a monochrome sensor and seven bandpass color filters positioned sequentially in front of it. The camera allowed sampling the visible wavelength range more accurately and reconstructing the spectra for each acquired image position. An overview will be given over several optical and imaging aspects of the multispectral camera that have been investigated. For instance, optical aberrations caused by filters and camera lens deteriorate the quality of captured multispectral images. The different aberrations were analyzed thoroughly and compensated based on models for the optical elements and the imaging chain by utilizing image processing. With this compensation, geometrical distortions disappear and sharpness is enhanced, without reducing the color accuracy of multispectral images. Strong foundations in multispectral imaging were laid and a fruitful cooperation was initiated with Prof. Bernhard Hill. Current research topics like stereo multispectral imaging and goniometric multispectral measurements that are further explored with his expertise will also be presented in this work.

  1. 3D CT-based high-dose-rate breast brachytherapy implants: treatment planning and quality assurance.

    PubMed

    Das, Rupak K; Patel, Rakesh; Shah, Hiral; Odau, Heath; Kuske, Robert R

    2004-07-15

    Although accelerated partial breast irradiation (APBI) as the sole radiation modality after lumpectomy has shown promising results for select breast cancer patients, published experiences thus far have provided limited information on treatment planning methodology and quality assurance measures. A novel three-dimensional computed tomography (CT)-based treatment planning method for accurate delineation and geometric coverage of the target volume is presented. A correlation between treatment volume and irradiation time has also been studied for quality assurance purposes. Between May 2002 and January 2003, 50 consecutive patients underwent an image-guided interstitial implant followed by CT-based treatment planning and were subsequently treated with APBI using a high-dose-rate (HDR) brachytherapy remote afterloader. The target volume was defined as the lumpectomy cavity plus a 2-cm margin, modified to keep a ≥5-mm distance from the skin surface. Catheter reconstruction, geometric optimization, and manual adjustment of irradiation time were done to optimally cover the target volume while minimizing hot spots. The dose homogeneity index (DHI) and the percentage of the target volume receiving 100% of the prescription dose (32 Gy in 8 fractions or 34 Gy in 10 fractions) were determined. Additionally, the correlation between treatment volume and irradiation time, source strength, and dose was analyzed for manual verification of the HDR computer calculation. In all cases, the lumpectomy cavity was covered 100%. Target volume coverage was excellent, with a median of 96%, and the DHI had a median value of 0.7. For each plan, source strength times treatment time per unit of prescribed dose agreed to within ±7% with the Manchester volume implant table corrected for modern units. CT-based treatment planning allowed excellent visualization of the lumpectomy cavity and normal structures, thereby improving target volume delineation and optimal coverage relative to conventional orthogonal film-based planning.
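
    Coverage and DHI can be computed directly from a 3D dose grid and a target mask. In the sketch below, DHI is taken as (V100 − V150)/V100, one common interstitial definition; the paper does not spell out its formula, so treat this as an assumption.

```python
import numpy as np

def plan_indices(dose, target_mask, rx_dose):
    """Target coverage (%) and dose homogeneity index from a dose grid (Gy)."""
    d = dose[target_mask]                  # doses inside the target volume
    v100 = np.mean(d >= rx_dose)           # fraction receiving 100% of Rx
    v150 = np.mean(d >= 1.5 * rx_dose)     # fraction receiving 150% (hot spots)
    coverage = 100.0 * v100
    dhi = (v100 - v150) / v100             # assumed definition; ~0.7 reported
    return coverage, dhi
```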

  2. Hip dysplasia, pelvic obliquity, and scoliosis in cerebral palsy: a qualitative analysis using 3D CT reconstruction

    NASA Astrophysics Data System (ADS)

    Russ, Mark D.; Abel, Mark F.

    1998-06-01

    Five patients with cerebral palsy, hip dysplasia, pelvic obliquity, and scoliosis were evaluated retrospectively using three dimensional computed tomography (3DCT) scans of the proximal femur, pelvis, and lumbar spine to qualitatively evaluate their individual deformities by measuring a number of anatomical landmarks. Three dimensional reconstructions of the data were visualized, analyzed, and then manipulated interactively to perform simulated osteotomies of the proximal femur and pelvis to achieve surgical correction of the hip dysplasia. Severe deformity can occur in spastic cerebral palsy, with serious consequences for the quality of life of the affected individuals and their families. Controversy exists regarding the type, timing and efficacy of surgical intervention for correction of hip dysplasia in this population. Other authors have suggested 3DCT studies are required to accurately analyze acetabular deficiency, and that this data allows for more accurate planning of reconstructive surgery. It is suggested here that interactive manipulation of the data to simulate the proposed surgery is a clinically useful extension of the analysis process and should also be considered as an essential part of the pre-operative planning to assure that the appropriate procedure is chosen. The surgical simulation may reduce operative time and improve surgical correction of the deformity.

  3. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos run during robberies.

  4. Hyperspectral image processing methods

    USDA-ARS?s Scientific Manuscript database

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  5. Biomedical image processing.

    PubMed

    Huang, H K

    1981-01-01

    Biomedical image processing is a very broad field; it covers biomedical signal gathering, image forming, picture processing, and image display for medical diagnosis based on features extracted from images. This article reviews the topic in both its fundamentals and its applications. Among the fundamentals, some basic image processing techniques including outlining, deblurring, noise cleaning, filtering, search, classical analysis, and texture analysis are reviewed together with examples. State-of-the-art image processing systems are introduced and discussed in two categories: general-purpose image processing systems and image analyzers. For these systems to be effective for biomedical applications, special biomedical image processing languages have to be developed. The combination of both hardware and software leads to clinical imaging devices. Two different types of clinical imaging devices are discussed. There are radiological imaging devices, which include radiography, thermography, ultrasound, nuclear medicine, and CT. Among these, thermography is the most noninvasive but is limited in application due to the low energy of its source. X-ray CT is excellent for static anatomical images and is moving toward the measurement of dynamic function, whereas nuclear imaging is moving toward organ metabolism and ultrasound toward tissue physical characteristics. Heart imaging is one of the most interesting and challenging research topics in biomedical image processing; current methods, including the invasive technique of cineangiography and the noninvasive ultrasound, nuclear medicine, transmission, and emission CT methodologies, are reviewed. Two current federally funded research projects in heart imaging, the dynamic spatial reconstructor and the dynamic cardiac three-dimensional densitometer, should bring some fruitful results in the near future. Microscopic imaging technique is very different from the radiological imaging technique in the sense that

  6. Image processing mini manual

    NASA Technical Reports Server (NTRS)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  7. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  8. Apple Image Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1981-01-01

    A software system design is proposed and demonstrated with pilot-project software. The system permits the Apple II microcomputer to be used for personalized computer-assisted instruction in the digital image processing of LANDSAT images. The programs provide data input, menu selection, graphic and hard-copy displays, and both general and detailed instructions. The pilot-project results are considered to be successful indicators of the capabilities and limits of microcomputers for digital image processing education.

  10. Image Processing System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.

  11. Image Processing Research

    DTIC Science & Technology

    1976-09-30

    [Scanned report; only the title page is legible:] Image Processing Institute, University of Southern California, University Park, Los Angeles, California 90007. Sponsored by the Advanced Research Projects Agency. Report title: Image Processing Research. UNCLASSIFIED.

  12. Image Processing Software

    NASA Astrophysics Data System (ADS)

    Bosio, M. A.

    1990-11-01

    ABSTRACT: A brief description of astronomical image-processing software is presented. This software was developed on a Digital MicroVAX II computer system. Keywords: DATA ANALYSIS - IMAGE PROCESSING

  13. Methods in Astronomical Image Processing

    NASA Astrophysics Data System (ADS)

    Jörsäter, S.

    Contents: A brief introductory note; history of astronomical imaging; astronomical image data; images in various formats; digitized image data; digital image data; philosophy of astronomical image processing; properties of digital astronomical images; human image processing; astronomical vs. computer science image processing; basic tools of astronomical image processing; display applications; calibration of intensity scales; calibration of length scales; image re-shaping; feature enhancement; noise suppression; noise and error analysis; image processing packages (design of AIPS and MIDAS); reduction of CCD data (bias subtraction, clipping, preflash subtraction, dark subtraction, flat fielding, sky subtraction, extinction correction); deconvolution methods; rebinning/combining; summary and prospects for the future.
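
    The CCD reduction chain listed in the contents follows a standard recipe. A minimal Python sketch, assuming bias, dark, and flat calibration frames and exposure times in consistent units:

```python
import numpy as np

def calibrate_ccd(raw, bias, dark, flat, exptime, dark_exptime):
    """Classic CCD reduction: bias and dark subtraction, then flat fielding."""
    dark_current = (dark - bias) * (exptime / dark_exptime)  # scale to exposure
    science = raw - bias - dark_current
    flat_norm = (flat - bias) / np.median(flat - bias)       # unit-median flat
    return science / flat_norm
```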

  14. Image processing occupancy sensor

    DOEpatents

    Brackney, Larry J.

    2016-09-27

    A system and method of detecting occupants in a building automation system environment using image based occupancy detection and position determinations. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the position and location of the occupants, the system can finely control the elements to optimize conditions for the occupants, optimize energy usage, among other advantages.

  15. Quantum image processing?

    NASA Astrophysics Data System (ADS)

    Mastriani, Mario

    2017-01-01

    This paper presents a number of problems concerning the practical (real) implementation of the techniques known as quantum image processing. The most serious problem is the recovery of the outcomes after the quantum measurement, which this work demonstrates is equivalent to a noise measurement and which is not considered in the literature on the subject. This is due to several factors: (1) a classical algorithm that uses Dirac's notation and is then coded in MATLAB does not constitute a quantum algorithm; (2) the literature emphasizes the internal representation of the image but says nothing about the classical-to-quantum and quantum-to-classical interfaces and how these are affected by decoherence; (3) the literature does not mention how to implement these proposals' internal representations in a practical way (in the laboratory); (4) given that quantum image processing works with generic qubits, it logically requires measurements along all axes of the Bloch sphere; and (5) other factors. In contrast, the technique known as quantum Boolean image processing, which works exclusively with computational basis states (CBS), is presented. This methodology avoids the problem of quantum measurement, which alters the measured results except in the case of CBS. What has been said so far extends to quantum algorithms outside image processing as well.

  16. Processing Of Binary Images

    NASA Astrophysics Data System (ADS)

    Hou, H. S.

    1985-07-01

    An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than descriptions of a particular system. Recent technology advances and research in this field are also mentioned.

  17. Image-Processing Program

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Hull, D. R.

    1994-01-01

    IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are sub-subroutines, also selected via keyboard. The algorithm has possible scientific, industrial, and biomedical applications in the study of flows in materials, the analysis of steels and ores, and pathology, respectively.

  18. Image Processing for Teaching.

    ERIC Educational Resources Information Center

    Greenberg, R.; And Others

    1993-01-01

    The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…

  20. Image processing and reconstruction

    SciTech Connect

    Chartrand, Rick

    2012-06-15

    This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

  2. Image Processing Research

    DTIC Science & Technology

    1975-09-30

    [Scanned report; only fragments are legible:] a citation to a technical journal, Vol. 36, pp. 653-709, May 1957; a section on image restoration and enhancement projects, including an expression in which sigma_n^2 is the noise energy and I is an identity matrix; and a section on color image scanner calibration, giving the statistics of a process N(k) in terms of the statistics of its components.

  3. Image processing techniques for acoustic images

    NASA Astrophysics Data System (ADS)

    Murphy, Brian P.

    1991-06-01

    The primary goal of this research is to test the effectiveness of various image processing techniques applied to acoustic images generated in MATLAB. The simulated acoustic images have the same characteristics as those generated by a computer model of a high-resolution imaging sonar. Edge detection and segmentation are the two image processing techniques discussed in this study. The two methods tested are a modified version of Kalman filtering and median filtering.
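
    As a concrete illustration of this kind of pipeline, the Python sketch below median-filters a simulated acoustic image to suppress speckle-like noise and then computes a gradient-magnitude edge map. It is an illustrative stand-in, not the study's MATLAB code; in particular, a plain Sobel gradient replaces the modified Kalman filter.

```python
import numpy as np
from scipy.ndimage import median_filter, sobel

def edges_after_median(acoustic_img, size=3):
    """Median filtering followed by a Sobel gradient-magnitude edge map."""
    den = median_filter(acoustic_img.astype(np.float32), size=size)
    gx = sobel(den, axis=0)           # vertical intensity changes
    gy = sobel(den, axis=1)           # horizontal intensity changes
    return np.hypot(gx, gy)           # edge strength
```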

  4. Three-dimensional analysis of alveolar bone resorption by image processing of 3-D dental CT images

    NASA Astrophysics Data System (ADS)

    Nagao, Jiro; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Yamada, Shohzoh; Naitoh, Munetaka

    2006-03-01

    We have developed a novel system that provides total support for assessment of alveolar bone resorption, caused by periodontitis, based on three-dimensional (3-D) dental CT images. In spite of the difficulty in perceiving the complex 3-D shape of resorption, dentists assessing resorption location and severity have been relying on two-dimensional radiography and probing, which merely provides one-dimensional information (depth) about resorption shape. However, there has been little work on assisting assessment of the disease by 3-D image processing and visualization techniques. This work provides quantitative evaluation results and figures for our system that measures the three-dimensional shape and spread of resorption. It has the following functions: (1) measures the depth of resorption by virtually simulating probing in the 3-D CT images, taking advantage of image processing of not suffering obstruction by teeth on the inter-proximal sides and much smaller measurement intervals than the conventional examination; (2) visualizes the disposition of the depth by movies and graphs; (3) produces a quantitative index and intuitive visual representation of the spread of resorption in the inter-radicular region in terms of area; and (4) calculates the volume of resorption as another severity index in the inter-radicular region and the region outside it. Experimental results in two cases of 3-D dental CT images and a comparison of the results with the clinical examination results and experts' measurements of the corresponding patients confirmed that the proposed system gives satisfying results, including 0.1 to 0.6mm of resorption measurement (probing) error and fairly intuitive presentation of measurement and calculation results.

  5. Image processing technology

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Balick, L.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The primary objective of this project was to advance image processing and visualization technologies for environmental characterization. This was effected by developing and implementing analyses of remote sensing data from satellite and airborne platforms, and demonstrating their effectiveness in visualization of environmental problems. Many sources of information were integrated as appropriate using geographic information systems.

  6. Introduction to computer image processing

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, mathematical operations on image and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  7. Image processing in planetology

    NASA Astrophysics Data System (ADS)

    Fulchignoni, M.; Picchiotti, A.

    The authors summarize the state of art in the field of planetary image processing in terms of available data, required procedures and possible improvements. More than a technical description of the adopted algorithms, that are considered as the normal background of any research activity dealing with interpretation of planetary data, the authors outline the advances in planetology achieved as a consequence of the availability of better data and more sophisticated hardware. An overview of the available data base and of the organizational efforts to make the data accessible and updated constitutes a valuable reference for those people interested in getting information. A short description of the processing sequence, illustrated by an example which shows the quality of the obtained products and the improvement in each successive step of the processing procedure gives an idea of the possible use of this kind of information.

  8. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

  9. scikit-image: image processing in Python.

    PubMed

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  10. Image Processing Diagnostics: Emphysema

    NASA Astrophysics Data System (ADS)

    McKenzie, Alex

    2009-10-01

    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images show clearly if a patient has emphysema, but is unable by visual scan alone, to quantify the degree of the disease, as it appears merely as subtle, barely distinct, dark spots on the lung. Our goal is to create a software plug-in to interface with existing open source medical imaging software, to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods which involve looking at percentages of radiodensities in air passages of the lung.

  11. Computer image processing and recognition

    NASA Technical Reports Server (NTRS)

    Hall, E. L.

    1979-01-01

    A systematic introduction to the concepts and techniques of computer image processing and recognition is presented. Consideration is given to such topics as image formation and perception; computer representation of images; image enhancement and restoration; reconstruction from projections; digital television, encoding, and data compression; scene understanding; scene matching and recognition; and processing techniques for linear systems.

  12. Computer image processing and recognition

    NASA Technical Reports Server (NTRS)

    Hall, E. L.

    1979-01-01

    A systematic introduction to the concepts and techniques of computer image processing and recognition is presented. Consideration is given to such topics as image formation and perception; computer representation of images; image enhancement and restoration; reconstruction from projections; digital television, encoding, and data compression; scene understanding; scene matching and recognition; and processing techniques for linear systems.

  13. Smart Image Enhancement Process

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

    2012-01-01

    Contrast and lightness measures are used to first classify the image as being one of non-turbid and turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not-sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.

  14. Image processing and recognition for biological images

    PubMed Central

    Uchida, Seiichi

    2013-01-01

    This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area to improve the visibility of an input image and acquire some valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique to classify an input image into one of the predefined classes and also has a large research area. This paper overviews its two main modules, that is, feature extraction module and classification module. Throughout the paper, it will be emphasized that bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noises, deformations, etc. This paper is expected to be one tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target. PMID:23560739

  15. Image processing and recognition for biological images.

    PubMed

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area to improve the visibility of an input image and acquire some valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique to classify an input image into one of the predefined classes and also has a large research area. This paper overviews its two main modules, that is, feature extraction module and classification module. Throughout the paper, it will be emphasized that bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noises, deformations, etc. This paper is expected to be one tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target. © 2013 The Author Development, Growth & Differentiation © 2013 Japanese Society of Developmental Biologists.

  16. IMAGES: An interactive image processing system

    NASA Technical Reports Server (NTRS)

    Jensen, J. R.

    1981-01-01

    The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.

  17. Image Processing Occupancy Sensor

    SciTech Connect

    2016-07-14

    The Image Processing Occupancy Sensor, or IPOS, is a novel sensor technology developed at the National Renewable Energy Laboratory (NREL). The sensor is based on low-cost embedded microprocessors widely used by the smartphone industry and leverages mature open-source computer vision software libraries. Compared to traditional passive infrared and ultrasonic-based motion sensors currently used for occupancy detection, IPOS has shown the potential for improved accuracy and a richer set of feedback signals for occupant-optimized lighting, daylighting, temperature setback, ventilation control, and other occupancy and location-based uses. Unlike traditional passive infrared (PIR) or ultrasonic occupancy sensors, which infer occupancy based only on motion, IPOS uses digital image-based analysis to detect and classify various aspects of occupancy, including the presence of occupants regardless of motion, their number, location, and activity levels of occupants, as well as the illuminance properties of the monitored space. The IPOS software leverages the recent availability of low-cost embedded computing platforms, computer vision software libraries, and camera elements.

  18. Processing Visual Images

    SciTech Connect

    Litke, Alan

    2006-03-27

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  19. Processing Visual Images

    SciTech Connect

    Litke, Alan

    2006-03-27

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  20. Filter for biomedical imaging and image processing.

    PubMed

    Mondal, Partha P; Rajan, K; Ahmad, Imteyaz

    2006-07-01

    Image filtering techniques have numerous potential applications in biomedical imaging and image processing. The design of filters largely depends on the a priori, knowledge about the type of noise corrupting the image. This makes the standard filters application specific. Widely used filters such as average, Gaussian, and Wiener reduce noisy artifacts by smoothing. However, this operation normally results in smoothing of the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image nonsmooth. An integrated general approach to design a finite impulse response filter based on Hebbian learning is proposed for optimal image filtering. This algorithm exploits the interpixel correlation by updating the filter coefficients using Hebbian learning. The algorithm is made iterative for achieving efficient learning from the neighborhood pixels. This algorithm performs optimal smoothing of the noisy image by preserving high-frequency as well as low-frequency features. Evaluation results show that the proposed finite impulse response filter is robust under various noise distributions such as Gaussian noise, salt-and-pepper noise, and speckle noise. Furthermore, the proposed approach does not require any a priori knowledge about the type of noise. The number of unknown parameters is few, and most of these parameters are adaptively obtained from the processed image. The proposed filter is successfully applied for image reconstruction in a positron emission tomography imaging modality. The images reconstructed by the proposed algorithm are found to be superior in quality compared with those reconstructed by existing PET image reconstruction methodologies.

  1. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  2. The APL image processing laboratory

    NASA Technical Reports Server (NTRS)

    Jenkins, J. O.; Randolph, J. P.; Tilley, D. G.; Waters, C. A.

    1984-01-01

    The present and proposed capabilities of the Central Image Processing Laboratory, which provides a powerful resource for the advancement of programs in missile technology, space science, oceanography, and biomedical image analysis, are discussed. The use of image digitizing, digital image processing, and digital image output permits a variety of functional capabilities, including: enhancement, pseudocolor, convolution, computer output microfilm, presentation graphics, animations, transforms, geometric corrections, and feature extractions. The hardware and software of the Image Processing Laboratory, consisting of digitizing and processing equipment, software packages, and display equipment, is described. Attention is given to applications for imaging systems, map geometric correction, raster movie display of Seasat ocean data, Seasat and Skylab scenes of Nantucket Island, Space Shuttle imaging radar, differential radiography, and a computerized tomographic scan of the brain.

  3. The APL image processing laboratory

    NASA Technical Reports Server (NTRS)

    Jenkins, J. O.; Randolph, J. P.; Tilley, D. G.; Waters, C. A.

    1984-01-01

    The present and proposed capabilities of the Central Image Processing Laboratory, which provides a powerful resource for the advancement of programs in missile technology, space science, oceanography, and biomedical image analysis, are discussed. The use of image digitizing, digital image processing, and digital image output permits a variety of functional capabilities, including: enhancement, pseudocolor, convolution, computer output microfilm, presentation graphics, animations, transforms, geometric corrections, and feature extractions. The hardware and software of the Image Processing Laboratory, consisting of digitizing and processing equipment, software packages, and display equipment, is described. Attention is given to applications for imaging systems, map geometric correction, raster movie display of Seasat ocean data, Seasat and Skylab scenes of Nantucket Island, Space Shuttle imaging radar, differential radiography, and a computerized tomographic scan of the brain.

  4. Multiscale Image Processing of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte scaled images or a sequence of byte scaled difference images. The determination of features and structures in the images are done qualitatively by the observer. There is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occured in the past decade. Many of these methods are possibly suited for solar image analysis. Multiscale/Multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales. So these techniques could be used to quantitify the imaging processing done by the observers eyes and brains. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO and TRACE images.

  5. Estimation of three-dimensional knee joint movement using bi-plane x-ray fluoroscopy and 3D-CT

    NASA Astrophysics Data System (ADS)

    Haneishi, Hideaki; Fujita, Satoshi; Kohno, Takahiro; Suzuki, Masahiko; Miyagi, Jin; Moriya, Hideshige

    2005-04-01

    Acquisition of exact information of three-dimensional knee joint movement is desired in plastic surgery. Conventional X-ray fluoroscopy provides dynamic but just two-dimensional projected image. On the other hand, three-dimensional CT provides three-dimensional but just static image. In this paper, a method for acquiring three-dimensional knee joint movement using both bi-plane, dynamic X-ray fluoroscopy and static three-dimensional CT is proposed. Basic idea is use of 2D/3D registration using digitally reconstructed radiograph (DRR) or virtual projection of CT data. Original ideal is not new but the application of bi-plane fluoroscopy to natural bones of knee is reported for the first time. The technique was applied to two volunteers and successful results were obtained. Accuracy evaluation through computer simulation and phantom experiment with a knee joint of a pig were also conducted.

  6. Eye Redness Image Processing Techniques

    NASA Astrophysics Data System (ADS)

    Adnan, M. R. H. Mohd; Zain, Azlan Mohd; Haron, Habibollah; Alwee, Razana; Zulfaezal Che Azemin, Mohd; Osman Ibrahim, Ashraf

    2017-09-01

    The use of photographs for the assessment of ocular conditions has been suggested to further standardize clinical procedures. The selection of the photographs to be used as scale reference images was subjective. Numerous methods have been proposed to assign eye redness scores by computational methods. Image analysis techniques have been investigated over the last 20 years in an attempt to forgo subjective grading scales. Image segmentation is one of the most important and challenging problems in image processing. This paper briefly outlines the comprehensive of image processing and the implementation of image segmentation in eye redness.

  7. Cooperative processes in image segmentation

    NASA Technical Reports Server (NTRS)

    Davis, L. S.

    1982-01-01

    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.

  8. Cooperative processes in image segmentation

    NASA Technical Reports Server (NTRS)

    Davis, L. S.

    1982-01-01

    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.

  9. Overlooked physical diagnoses in chronic pain patients involved in litigation, Part 2. The addition of MRI, nerve blocks, 3-D CT, and qualitative flow meter.

    PubMed

    Hendler, N; Bergson, C; Morrison, C

    1996-01-01

    This study followed 120 chronic pain patients referred to a multidisciplinary pain center. The referral diagnosis for many patients, such as "chronic pain," "psychogenic pain," or "lumbar strain," was frequently found to be incomplete or inaccurate (40%) following a multidisciplinary evaluation that used appropriate diagnostic studies, including magnetic resonance imaging, computed tomography, nerve blocks, and qualitative flowmeter. Significant abnormalities were discovered in 76% of the diagnostic tests. An organic origin for pain was found in 98% of these patients. The patients were discharged with objective verification of diagnoses including facet disease, nerve entrapment, temporomandibular joint disease, thoracic outlet syndrome, and herniated discs.

  10. Photographic image enhancement and processing

    NASA Technical Reports Server (NTRS)

    Lockwood, H. E.

    1977-01-01

    Scientists using aerial imagery frequently desire image processing or enhancement of that imagery to aid them in data analysis. Sophisticated digital image processing techniques are currently employed in many applications where the data is recorded in digital format, where processing hardware and programs are available. Aerial photographic imagery poses a problem in the magnitude of the digitization processing. Photographic image processing analogous to many available digital techniques is being employed by scientific investigators. Those techniques which may be applied in a cost effective manner to processing of aerial photographic imagery are described here.

  11. Cam deformity and the omega angle, a novel quantitative measurement of femoral head-neck morphology: a 3D CT gender analysis in asymptomatic subjects.

    PubMed

    Mascarenhas, Vasco V; Rego, Paulo; Dantas, Pedro; Gaspar, Augusto; Soldado, Francisco; Consciência, José G

    2017-05-01

    Our objectives were to use 3D computed tomography (CT) to define head-neck morphologic gender-specific and normative parameters in asymptomatic individuals and use the omega angle (Ω°) to provide quantification data on the location and radial extension of a cam deformity. We prospectively included 350 individuals and evaluated 188 asymptomatic hips that underwent semiautomated CT analysis. Different thresholds of alpha angle (α°) were considered in order to analyze cam morphology and determine Ω°. We calculated overall and gender-specific parameters for imaging signs of cam morphology (Ω° and circumferential α°). The 95 % reference interval limits were beyond abnormal thresholds found in the literature for cam morphology. Specifically, α° at 3/1 o´clock were 46.9°/60.8° overall, 51.8°/65.4° for men and 45.7°/55.3° for women. Cam prevalence, magnitude, location, and epicenter were significantly gender different. Increasing α° correlated with higher Ω°, meaning that higher angles correspond to larger cam deformities. Hip morphometry measurements in this cohort of asymptomatic individuals extended beyond current thresholds used for the clinical diagnosis of cam deformity, and α° was found to vary both by gender and measurement location. These results suggest that α° measurement is insufficient for the diagnosis of cam deformity. Enhanced morphometric evaluation, including 3D imaging and Ω°, may enable a more accurate diagnosis. • 95% reference interval limits of cam morphotype were beyond currently defined thresholds. • Current morphometric definitions for cam-type morphotype should be applied with care. • Cam prevalence, magnitude, location, and epicenter are significantly gender different. • Cam and alpha angle thresholds should be defined according to sex/location. • Quantitative 3D morphometric assessment allows thorough and reproducible FAI diagnosis and monitoring.

  12. Industrial Applications of Image Processing

    NASA Astrophysics Data System (ADS)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

    The recent advances in sensors quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review the existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial. Then a dissemination of some image processing techniques, feature extraction, object recognition and industrial robotic guidance is presented. Moreover, examples of implementations of such techniques in industry are presented. Such implementations include automated visual inspection, process control, part identification, robots control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation

  13. [Imaging center - optimization of the imaging process].

    PubMed

    Busch, H-P

    2013-04-01

    Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success but also of the costs of treatment. In routine work an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of the capacity without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient. They are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization was exclusively on the quality and efficiency of performed single examinations. In the future critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided, only if in addition to the optimization of single exams (efficiency) there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new structures for organization (Imaging Center) and a new kind of thinking on the part of the medical staff. Motivation has to be changed from gratification of performed exams to gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams.

  14. Image processing with COSMOS

    NASA Astrophysics Data System (ADS)

    Stobie, R. S.; Dodd, R. J.; MacGillivray, H. T.

    1981-12-01

    It is noted that astronomers have for some time been fascinated by the possibility of automatic plate measurement and that measuring engines have been constructed with an ever increasing degree of automation. A description is given of the COSMOS (CoOrdinates, Sizes, Magnitudes, Orientations, and Shapes) system at the Royal Observatory in Edinburgh. An automatic high-speed microdensitometer controlled by a minicomputer is linked to a very fast microcomputer that performs immediate image analysis. The movable carriage, whose position in two coordinates is controlled digitally to an accuracy of 0.5 micron (0.0005 mm) will take plates as large as 356 mm on a side. It is noted that currently the machine operates primarily in the Image Analysis Mode, in which COSMOS must first detect the presence of an image. It does this by scanning and digitizing the photograph in 'raster' fashion and then searching for local enhancements in the density of the exposed emulsion.

  15. Statistical Image Processing.

    DTIC Science & Technology

    1982-11-16

    spectral analysist texture image analysis and classification, __ image software package, automatic spatial clustering.ITWA domenit hi ba apa for...ICOLOR(256),IBW(256) 1502 FORMATO (30( CNO(N): fF12.1)) 1503 FORMAT(o *FMINo DMRGE:0f2E20.8) 1504 FORMAT(/o IMRGE:or15) 1505 FOR14ATV FIRST SUBIMAGE:v...1506 FORMATO ’ JOIN CLUSTER NL:0) 1507 FORMAT( NEW CLUSTER:O) 1508 FORMAT( LLBS.GE.600) 1532 FORMAT(15XoTHETA ,7X, SIGMA-SQUAREr3Xe MERGING-DISTANCE

  16. Trends In Microcomputer Image Processing

    NASA Astrophysics Data System (ADS)

    Strum, William E.

    1988-05-01

    We have seen, in the last four years, the microcomputer become the platform of choice for many image processing applications. By 1991, Frost and Sullivan forecasts that 75% of all image processing will be carried out on microcomputers. Many factors have contributed to this trend and will be discussed in the following paper.

  17. SWNT Imaging Using Multispectral Image Processing

    NASA Astrophysics Data System (ADS)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.

    2012-02-01

    A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built in Bayer filter of the CCD camera was utilized, using OpenCV C++ libraries for image processing, to decompose the image generated in a high magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels, and effectively isolate the SWNT signals from the background.

  18. Industrial applications of process imaging and image processing

    NASA Astrophysics Data System (ADS)

    Scott, David M.; Sunshine, Gregg; Rosen, Lou; Jochen, Ed

    2001-02-01

    Process imaging is the art of visualizing events inside closed industrial processes. Image processing is the art of mathematically manipulating digitized images to extract quantitative information about such processes. Ongoing advances in camera and computer technology have made it feasible to apply these abilities to measurement needs in the chemical industry. To illustrate the point, this paper describes several applications developed at DuPont, where a variety of measurements are based on in-line, at-line, and off-line imaging. Application areas include compounding, melt extrusion, crystallization, granulation, media milling, and particle characterization. Polymer compounded with glass fiber is evaluated by a patented radioscopic (real-time X-ray imaging) technique to measure concentration and dispersion uniformity of the glass. Contamination detection in molten polymer (important for extruder operations) is provided by both proprietary and commercial on-line systems. Crystallization in production reactors is monitored using in-line probes and flow cells. Granulation is controlled by at-line measurements of granule size obtained from image processing. Tomographic imaging provides feedback for improved operation of media mills. Finally, particle characterization is provided by a robotic system that measures individual size and shape for thousands of particles without human supervision. Most of these measurements could not be accomplished with other (non-imaging) techniques.

  19. [Autogenous bone versus beta-tricalcium phosphate graft alone for bilateral sinus elevations (2-3D CT, histologic and histomorphometric evaluations)].

    PubMed

    Németh, Zsolt; Suba, Zsuzsanna; Hrabák, Károly; Barabás, József; Szabó, György

    2002-06-23

    When the maxilla is edentulous and the alveolar process is extensively absorbed, a dental root can be implanted only after the implantation of bone or a bone-substitute. Only in this way can the subjective and objective negative features associated with a removable prosthesis be avoided. Many forms of bone-substitutes are known. Freely taken bone from the patient generally serves as the gold standard for the classification of bone-substitutes. The aim of our work was to compare two materials (the patient's own bone and beta-tricalcium phosphate) in the same patient. Ten patients were selected who for some reason did not want or could not wear a removable prosthesis. The maxilla was so atrophied that bone or bone-substitute implantation was necessary before the dental root could be implanted. The maxilla had to be elevated from inside (sinus elevation) and thickened from outside (onlay-plasty). Bone was taken in the usual manner from the hipbone. For the internal elevation, such autogenous bone was utilized on one side, and beta-tricalcium phosphate granulate on the other. The formation of new bone and the rate of bone formation were followed by clinical methods and by radiological, histological, and histomorphometric examinations. The implantation succeeded clinically in all ten patients: one year later they all received a fixed bridge. The radiological and histological examinations demonstrated good bone formation on both sides. As concerns the rate of formation of new bone, there was practically no difference after the implantation of autogenous bone or beta-tricalcium phosphate. This study has therefore provided further evidence that, when certain bone deficiencies are to be eliminated, the unpleasant phenomena accompanying the removal of the patient's own bone can be avoided through the use of new synthetic materials. Accordingly, when comparing the present results with the findings of other authors, beta-tricalcium phosphate may be considered a good graft

  20. Image processing: some challenging problems.

    PubMed Central

    Huang, T S; Aizawa, K

    1993-01-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing. PMID:8234312

  1. Image Processing: Some Challenging Problems

    NASA Astrophysics Data System (ADS)

    Huang, T. S.; Aizawa, K.

    1993-11-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

  2. Image processing for optical mapping.

    PubMed

    Ravindran, Prabu; Gupta, Aditya

    2015-01-01

    Optical Mapping is an established single-molecule, whole-genome analysis system, which has been used to gain a comprehensive understanding of genomic structure and to study structural variation of complex genomes. A critical component of Optical Mapping system is the image processing module, which extracts single molecule restriction maps from image datasets of immobilized, restriction digested and fluorescently stained large DNA molecules. In this review, we describe robust and efficient image processing techniques to process these massive datasets and extract accurate restriction maps in the presence of noise, ambiguity and confounding artifacts. We also highlight a few applications of the Optical Mapping system.

  3. Image Processing REST Web Services

    DTIC Science & Technology

    2013-03-01

    collections, deblurring, contrast enhancement, and super resolution. 2 1. Original Image with Target Chip to Super Resolve 2. Unenhanced...extracted target chip 3. Super-resolved target chip 4. Super-resolved, deblurred target chip 5. Super-resolved, deblurred and contrast...enhanced target chip Image 1. Chaining the image processing algorithms. 3 2. Resources There are two types of resources associated with these

  4. Morphological image processing techniques in thermographic imaging.

    PubMed

    Schulze, M A; Pearce, J A

    1993-01-01

    Mathematical morphology is a set algebra that defines some important new techniques in image processing. Morphological filters are closely related to order statistic and other nonlinear filters, but they are uniquely sensitive to shape. A morphological filter will preserve shapes similar to its structuring element shape while modifying dissimilar shapes. Most morphological filters are effective at removing both linear and nonlinear noise processes. However, the standard morphological operators introduce a statistical and deterministic bias to images. Fortunately, these operators exist in complementary pairs that are equally and oppositely biased. One way to alleviate the bias is to average the two complementary operators. The filters formed by such averages are the midrange filter (basic operators), the pseudomedian filter (singly compound operators) and the LOCO filter (doubly compound operators). In thermographic imaging, one often wishes to find exact temperatures or accurate isothermal contours. Therefore, techniques used to remove sensor noise and scanning artifact should not introduce bias. The LOCO filter that we have devised provides the shape control and noise suppression of morphological techniques without biasing the image. We will demonstrate the effects of different structuring element shapes on thermographic images of tissue heated by laser irradiation and electrosurgery.

  5. SOFT-1: Imaging Processing Software

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Five levels of image processing software are enumerated and discussed: (1) logging and formatting; (2) radiometric correction; (3) correction for geometric camera distortion; (4) geometric/navigational corrections; and (5) general software tools. Specific concerns about access to and analysis of digital imaging data within the Planetary Data System are listed.

  6. Photographic image enhancement and processing

    NASA Technical Reports Server (NTRS)

    Lockwood, H. E.

    1975-01-01

    Image processing techniques (computer and photographic) are described which are used within the JSC Photographic Technology Division. Two purely photographic techniques used for specific subject isolation are discussed in detail. Sample imagery is included.

  7. Update on three-dimensional image reconstruction for preoperative simulation in thoracic surgery

    PubMed Central

    Chen-Yoshikawa, Toyofumi F.

    2016-01-01

    Background Three-dimensional computed tomography (3D-CT) technologies have been developed and refined over time. Recently, high-speed and high-quality 3D-CT technologies have also been introduced to the field of thoracic surgery. The purpose of this manuscript is to demonstrate several examples of these 3D-CT technologies in various scenarios in thoracic surgery. Methods A newly-developed high-speed and high-quality 3D image analysis software system was used in Kyoto University Hospital. Simulation and/or navigation were performed using this 3D-CT technology in various thoracic surgeries. Results Preoperative 3D-CT simulation was performed in most patients undergoing video-assisted thoracoscopic surgery (VATS). Anatomical variation was frequently detected preoperatively, which was useful in performing VATS procedures when using only a monitor for vision. In sublobar resection, 3D-CT simulation was more helpful. In small lung lesions, which were supposedly neither visible nor palpable, preoperative marking of the lesions was performed using 3D-CT simulation, and wedge resection or segmentectomy was successfully performed with confidence. This technique also enabled virtual-reality endobronchial ultrasonography (EBUS), which made the procedure more safe and reliable. Furthermore, in living-donor lobar lung transplantation (LDLLT), surgical procedures for donor lobectomy were simulated preoperatively by 3D-CT angiography, which also affected surgical procedures for recipient surgery. New surgical techniques such as right and left inverted LDLLT were also established using 3D models created with this technique. Conclusions After the introduction of 3D-CT technology to the field of thoracic surgery, preoperative simulation has been developed for various thoracic procedures. In the near future, this technique will become more common in thoracic surgery, and frequent use by thoracic surgeons will be seen in worldwide daily practice. PMID:27014477

  8. Sgraffito simulation through image processing

    NASA Astrophysics Data System (ADS)

    Guerrero, Roberto A.; Serón Arbeloa, Francisco J.

    2011-10-01

    This paper presents a tool for simulating the traditional Sgraffito technique through digital image processing. The tool is based on a digital image pile and a set of attributes recovered from the image at the bottom of the pile using the Streit and Buchanan multiresolution image pyramid. This technique tries to preserve the principles of artistic composition by means of the attributes of color, luminance and shape recovered from the foundation image. A couple of simulated scratching objects will establish how the recovered attributes have to be painted. Different attributes can be painted by using different scratching primitives. The resulting image will be a colorimetric composition reached from the image on the top of the pile, the color of the images revealed by scratching and the inner characteristics of each scratching primitive. The technique combines elements of image processing, art and computer graphics allowing users to make their own free compositions and providing a means for the development of visual communication skills within the user-observer relationship. The technique enables the application of the given concepts in non artistic fields with specific subject tools.

  9. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper will describe how the fuzzy image processing is implemented in the instrument. Comparison of the Fuzzy image processing and a more conventional image processing algorithm is provided and shows that the Fuzzy image processing yields better accuracy then conventional image processing.

  10. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper will describe how the fuzzy image processing is implemented in the instrument. Comparison of the Fuzzy image processing and a more conventional image processing algorithm is provided and shows that the Fuzzy image processing yields better accuracy then conventional image processing.

  11. Image processing using reconfigurable FPGAs

    NASA Astrophysics Data System (ADS)

    Ferguson, Lee

    1996-10-01

    The use of reconfigurable field-programmable gate arrays (FPGAs) for imaging applications show considerable promise to fill the gap that often occurs when digital signal processor chips fail to meet performance specifications. Single chip DSPs do not have the overall performance to meet the needs of many imaging applications, particularly in real-time designs. Using multiple DSPs to boost performance often presents major design challenges in maintaining data alignment and process synchronization. These challenges can impose serious cost, power consumption and board space penalties. Image processing requires manipulating massive amounts of data at high-speed. Although DSP chips can process data at high-speeds, their architectures can inhibit overall system performance in real-time imaging. The rate of operations can be increased when they are performed in dedicated hardware, such as special-purpose imaging devices and FPGAs, which provides the horsepower necessary to implement real-time image processing products successfully and cost-effectively. For many fixed applications, non-SRAM- based (antifuse or flash-based) FPGAs provide the raw speed to accomplish standard high-speed functions. However, in applications where algorithms are continuously changing and compute operations must be modified, only SRAM-based FPGAs give enough flexibility. The addition of reconfigurable FPGAs as a flexible hardware facility enables DSP chips to perform optimally. The benefits primarily stem from optimizing the hardware for the algorithms or the use of reconfigurable hardware to enhance the product architecture. And with SRAM-based FPGAs that are capable of partial dynamic reconfiguration, such as the Cache-Logic FPGAs from Atmel, continuous modification of data and logic is not only possible, it is practical as well. First we review the particular demands of image processing. Then we present various applications and discuss strategies for exploiting the capabilities of

  12. Integrating image processing in PACS.

    PubMed

    Faggioni, Lorenzo; Neri, Emanuele; Cerri, Francesca; Turini, Francesca; Bartolozzi, Carlo

    2011-05-01

    Integration of RIS and PACS services into a single solution has become a widespread reality in daily radiological practice, allowing substantial acceleration of workflow with greater ease of work compared with older generation film-based radiological activity. In particular, the fast and spectacular recent evolution of digital radiology (with special reference to cross-sectional imaging modalities, such as CT and MRI) has been paralleled by the development of integrated RIS--PACS systems with advanced image processing tools (either two- and/or three-dimensional) that were an exclusive task of costly dedicated workstations until a few years ago. This new scenario is likely to further improve productivity in the radiology department with reduction of the time needed for image interpretation and reporting, as well as to cut costs for the purchase of dedicated standalone image processing workstations. In this paper, a general description of typical integrated RIS--PACS architecture with image processing capabilities will be provided, and the main available image processing tools will be illustrated.

  13. Differential morphology and image processing.

    PubMed

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.

  14. Enhanced imaging process for xeroradiography

    NASA Astrophysics Data System (ADS)

    Fender, William D.; Zanrosso, Eddie M.

    1993-09-01

    An enhanced mammographic imaging process has been developed which is based on the conventional powder-toner selenium technology used in the Xerox 125/126 x-ray imaging system. The process is derived from improvements in the amorphous selenium x-ray photoconductor, the blue powder toner and the aerosol powder dispersion process. Comparisons of image quality and x-ray dose using the Xerox aluminum-wedge breast phantom and the Radiation Measurements Model 152D breast phantom have been made between the new Enhanced Process, the standard Xerox 125/126 System and screen-film at mammographic x-ray exposure parameters typical for each modality. When comparing the Enhanced Xeromammographic Process with the standard 125/126 System, a distinct advantage is seen for the Enhanced equivalent mass detection and superior fiber and speck detection. The broader imaging latitude of enhanced and standard Xeroradiography, in comparison to film, is illustrated in images made using the aluminum-wedge breast phantom.

  15. Learning the Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Jiang, Haomiao; Tian, Qiyuan; Farrell, Joyce; Wandell, Brian A.

    2017-10-01

    Many creative ideas are being proposed for image sensor designs, and these may be useful in applications ranging from consumer photography to computer vision. To understand and evaluate each new design, we must create a corresponding image processing pipeline that transforms the sensor data into a form that is appropriate for the application. The need to design and optimize these pipelines is time-consuming and costly. We explain a method that combines machine learning and image systems simulation that automates the pipeline design. The approach is based on a new way of thinking of the image processing pipeline as a large collection of local linear filters. We illustrate how the method has been used to design pipelines for novel sensor architectures in consumer photography applications.

  16. Computer processing of radiographic images

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.

    1984-01-01

    In the past 20 years, a substantial amount of effort has been expended on the development of computer techniques for enhancement of X-ray images and for automated extraction of quantitative diagnostic information. The historical development of these methods is described. Illustrative examples are presented and factors influencing the relative success or failure of various techniques are discussed. Some examples of current research in radiographic image processing is described.

  17. Digital processing of radiographic images

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

    Techniques and software documentation for the digital enhancement of radiographs are presented. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of format of data from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operation. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
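
    To make the recursive-versus-FFT point concrete, here is a hedged one-dimensional sketch: a first-order recursive smoother costs one multiply-add per sample regardless of its effective support, whereas the FFT route must transform, multiply, and inverse-transform. The filter coefficient and kernel are arbitrary illustrative choices.

        import numpy as np

        def recursive_smooth(row, alpha=0.2):
            """Causal first-order recursion y[n] = a*x[n] + (1-a)*y[n-1]."""
            y = np.empty_like(row, dtype=float)
            y[0] = row[0]
            for n in range(1, len(row)):
                y[n] = alpha * row[n] + (1 - alpha) * y[n - 1]
            return y

        def fft_smooth(row, kernel):
            """Comparison path: circular convolution via the FFT."""
            K = np.fft.rfft(kernel, len(row))
            return np.fft.irfft(np.fft.rfft(row) * K, len(row))

        x = np.random.default_rng(1).normal(size=512)
        y_iir = recursive_smooth(x)
        y_fft = fft_smooth(x, np.ones(9) / 9)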

  18. Image processing of galaxy photographs

    NASA Technical Reports Server (NTRS)

    Arp, H.; Lorre, J.

    1976-01-01

    New computer techniques for analyzing and processing photographic images of galaxies are presented, with interesting scientific findings gleaned from the processed photographic data. Discovery and enhancement of very faint and low-contrast nebulous features, improved resolution of near-limit detail in nebulous and stellar images, and relative colors of a group of nebulosities in the field are attained by the methods. Digital algorithms, nonlinear pattern-recognition filters, linear convolution filters, plate averaging and contrast enhancement techniques, and an atmospheric deconvolution technique are described. New detail is revealed in images of NGC 7331, Stephan's Quintet, Seyfert's Sextet, and the jet in M87, via processes of addition of plates, star removal, contrast enhancement, standard deviation filtering, and computer ratioing to bring out qualitative color differences.

  20. FITS Liberator: Image processing software

    NASA Astrophysics Data System (ADS)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.

  1. Phase in Optical Image Processing

    NASA Astrophysics Data System (ADS)

    Naughton, Thomas J.

    2010-04-01

    The use of phase has a long standing history in optical image processing, with early milestones being in the field of pattern recognition, such as VanderLugt's practical construction technique for matched filters, and (implicitly) Goodman's joint Fourier transform correlator. In recent years, the flexibility afforded by phase-only spatial light modulators and digital holography, for example, has enabled many processing techniques based on the explicit encoding and decoding of phase. One application area concerns efficient numerical computations. Pushing phase measurement to its physical limits, designs employing the physical properties of phase have ranged from the sensible to the wonderful, in some cases making computationally easy problems easier to solve and in other cases addressing mathematics' most challenging computationally hard problems. Another application area is optical image encryption, in which, typically, a phase mask modulates the fractional Fourier transformed coefficients of a perturbed input image, and the phase of the inverse transform is then sensed as the encrypted image. The inherent linearity that makes the system so elegant mitigates against its use as an effective encryption technique, but we show how a combination of optical and digital techniques can restore confidence in that security. We conclude with the concept of digital hologram image processing, and applications of same that are uniquely suited to optical implementation, where the processing, recognition, or encryption step operates on full field information, such as that emanating from a coherently illuminated real-world three-dimensional object.

  2. Fingerprint recognition using image processing

    NASA Astrophysics Data System (ADS)

    Dholay, Surekha; Mishra, Akassh A.

    2011-06-01

    Fingerprint recognition is concerned with the difficult task of efficiently matching the image of a person's fingerprint against the fingerprints present in a database. It is used in forensic science, where it helps identify criminals, and in authenticating individual persons, since a fingerprint is unique to each person. This paper describes fingerprint recognition methods using various edge detection techniques and shows how a correct fingerprint can be detected using camera images. The described method does not require a special device; a simple camera can be used, so the technique can also be applied on a camera-equipped mobile phone. Factors affecting the process include poor illumination, noise, viewpoint dependence, climate factors, and imaging conditions. Because these factors must be accounted for, various image enhancement techniques are applied to increase image quality and remove noise. The paper describes a technique that applies contour tracking to the fingerprint image, then edge detection along the contour, and finally matches the edges inside the contour.

  3. Computer image processing: Geologic applications

    NASA Technical Reports Server (NTRS)

    Abrams, M. J.

    1978-01-01

    Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, (2) normalization of use of ground spectral measurements. Of the two, the first technique proved to be the most successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm can be applied to both frames, and there is no seam where the two images are joined.

  4. Concept Learning through Image Processing.

    ERIC Educational Resources Information Center

    Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

    This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

  5. Linear Algebra and Image Processing

    ERIC Educational Resources Information Center

    Allali, Mohamed

    2010-01-01

    We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)

  6. Linear algebra and image processing

    NASA Astrophysics Data System (ADS)

    Allali, Mohamed

    2010-09-01

    We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty.

  8. Image processing for ink jet

    NASA Astrophysics Data System (ADS)

    Torpey, Peter A.

    1992-05-01

    The ink-jet marking process offers several unique opportunities for producing quality hard-copy images. There are, however, certain limitations and requirements of the technology that must be taken into account when developing image-processing procedures and algorithms for ink-jet printing systems. This paper describes a number of issues that set ink-jet apart from many of the other marking processes. For example, ink-jet can be treated as a truly 'binary' marking process. Thus, single isolated pixels are easily and reproducibly formed on the marking substrate. Halftoning procedures have been developed that take advantage of this attribute to produce more gray levels for a given resolution. Ink coverage on paper, however, must often be limited to < 200%. Also, the perceived color will be dependent on the order in which the colors are delivered to the marking substrate. Examples illustrating these and other concerns are given. Optimal image-processing procedures for the ink-jet marking process can be developed based on an understanding of these and other ink-jet specific issues.

  9. Image processing applications in NDE

    SciTech Connect

    Morris, R.A.

    1980-01-01

    Nondestructive examination (NDE) can be defined as a technique or collection of techniques that permits one to determine some property of a material or object without damaging the object. There are a large number of such techniques and most of them use visual imaging in one form or another. They vary from holographic interferometry, where displacements under stress are measured, to the visual inspection of an object's surface to detect cracks after penetrant has been applied. The use of image processing techniques on the images produced by NDE is relatively new and can be divided into three general categories: classical image enhancement; mensuration techniques; and quantitative sensitometry. An example is discussed of how image processing techniques are used to nondestructively and destructively test the product throughout its life cycle. The product that will be followed is the microballoon target used in the laser fusion program. The laser target is a small (50 to 100 μm diameter) glass sphere with typical wall thickness of 0.5 to 6 μm. The sphere may be used as is or may be given a number of coatings of any number of materials. The beads are mass produced by the millions and the first nondestructive test is to separate the obviously bad beads (broken or incomplete) from the good ones. After this has been done, the good beads must be inspected for sphericity and wall thickness uniformity. The microradiography of the glass, uncoated bead is performed on a specially designed low-energy x-ray machine. The beads are mounted in a special jig and placed on a Kodak high resolution plate in a vacuum chamber that contains the x-ray source. The x-ray image is made with an energy less than 2 keV and the resulting images are then inspected at a magnification of 500 to 1000X. Some typical results are presented.

  10. The magic of image processing

    NASA Astrophysics Data System (ADS)

    Sulentic, J. W.

    1984-05-01

    Digital technology has been used to improve enhancement techniques in astronomical image processing. Continuous tone variations in photographs are assigned density number (DN) values which are arranged in an array. DN locations are processed by computer and turned into pixels which form a reconstruction of the original scene on a television monitor. Digitized data can be manipulated to enhance contrast and filter out gross patterns of light and dark which obscure small scale features. Separate black and white frames exposed at different wavelengths can be digitized and processed individually, then recombined to produce a final image in color. Several examples of the use of the technique are provided, including photographs of spiral galaxy M33; four galaxies in Coma Berenices (NGC 4169, 4173, 4174, and 4175); and Stephan's Quintet.

  11. Image processing in optical astronomy

    NASA Technical Reports Server (NTRS)

    Lorre, Jean J.

    1988-01-01

    Successful efforts to enhance optical-astronomy images through digital processing often exploit such 'weaknesses' of the image as the objects' near-symmetry, their preferred directionality, or a differentiation in spatial frequency between the object or objects and superimposed clutter. Attention is presently given to the calibration of a camera prior to astronomical data-acquisition, methods for the enhancement of faint surface brightness features, automated target detection and extraction techniques, the importance of the geometric transformations of digital imagery, the preparation of two-dimensional histograms, and the application of polarization.

  12. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  13. Turbine Blade Image Processing System

    NASA Astrophysics Data System (ADS)

    Page, Neal S.; Snyder, Wesley E.; Rajala, Sarah A.

    1983-10-01

    A vision system has been developed at North Carolina State University to identify the orientation and three dimensional location of steam turbine blades that are stacked in an industrial A-frame cart. The system uses a controlled light source for structured illumination and a single camera to extract the information required by the image processing software to calculate the position and orientation of a turbine blade in real time.

  14. General logarithmic image processing convolution.

    PubMed

    Palomares, Jose M; González, Jesús; Ros, Eduardo; Prieto, Alberto

    2006-11-01

    The logarithmic image processing model (LIP) is a robust mathematical framework which, among other benefits, behaves invariantly to illumination changes. This paper presents, for the first time, two general formulations of the 2-D convolution of separable kernels under the LIP paradigm. Although both formulations are mathematically equivalent, one of them has been designed to avoid the operations which are computationally expensive in current computers. This fast LIP convolution method therefore yields significant speedups and is better suited to real-time processing. Experimental results supporting these statements are shown in Section V.
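
    For orientation, a small sketch of LIP filtering through the standard LIP isomorphism phi(f) = -M ln(1 - f/M), under which LIP addition and LIP scalar multiplication become ordinary addition and multiplication, so a LIP-linear convolution can be computed as phi^-1(conv(phi(f), k)). This mirrors the fast formulation in spirit only; the value of M, the kernel, and the SciPy call are assumptions.

        import numpy as np
        from scipy.ndimage import convolve

        M = 256.0  # gray-tone upper bound (assumed)

        def phi(f):
            # LIP isomorphism: maps gray tones in [0, M) to the real line.
            return -M * np.log(1.0 - f / M)

        def phi_inv(g):
            return M * (1.0 - np.exp(-g / M))

        def lip_convolve(image, kernel):
            """LIP-linear convolution computed through the isomorphism."""
            return phi_inv(convolve(phi(image), kernel, mode="nearest"))

        img = np.random.default_rng(2).uniform(0, 255, (64, 64))  # values < M
        blur = np.ones((3, 3)) / 9.0
        out = lip_convolve(img, blur)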

  15. Image post-processing in dental practice.

    PubMed

    Gormez, Ozlem; Yilmaz, Hasan Huseyin

    2009-10-01

    Image post-processing of dental digital radiographs, a function commonly used in dental practice, is presented in this article. Digital radiography has been available in dentistry for more than 25 years and its use by dental practitioners is steadily increasing. Digital acquisition of radiographs enables computer-based image post-processing to enhance image quality and increase the accuracy of interpretation. Image post-processing can easily be carried out in the dental office with a computer and image processing programs. In this article, image post-processing operations such as image restoration, image enhancement, image analysis, image synthesis, and image compression, and their diagnostic efficacy, are described. In addition, this article provides general dental practitioners with a broad overview of the benefits of the different image post-processing operations to help them understand the role that the technology can play in their practices.

  16. Review of image processing fundamentals

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1985-01-01

    Image processing through convolution, transform coding, spatial frequency alterations, sampling, and interpolation are considered. It is postulated that convolution in one domain (real or frequency) is equivalent to multiplication in the other (frequency or real), and that the relative amplitudes of the Fourier components must be retained to reproduce any waveshape. It is suggested that all digital systems may be considered equivalent, with a frequency content approximately at the Nyquist limit, and with a Gaussian frequency response. An optimized cubic version of the interpolation continuum image is derived as a set of cubic splines. Pixel replication has been employed to enlarge the visible area of digital samples; however, suitable elimination of the extraneous high frequencies involved in the visible edges, by defocusing, is necessary to allow the underlying object represented by the data values to be seen.
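
    The convolution-multiplication duality postulated above is easy to verify numerically; the following illustrative NumPy check compares a directly computed circular convolution against the product of the two DFTs.

        import numpy as np

        rng = np.random.default_rng(3)
        x = rng.normal(size=64)
        h = rng.normal(size=64)

        # Circular convolution computed directly...
        direct = np.array([np.sum(x * np.roll(h[::-1], n + 1))
                           for n in range(64)])
        # ...and as a pointwise product of Fourier transforms.
        via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

        assert np.allclose(direct, via_fft)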

  17. Image processing software for imaging spectrometry

    NASA Technical Reports Server (NTRS)

    Mazer, Alan S.; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    The paper presents a software system, Spectral Analysis Manager (SPAM), which has been specifically designed and implemented to provide the exploratory analysis tools necessary for imaging spectrometer data, using only modest computational resources. The basic design objectives are described as well as the major algorithms designed or adapted for high-dimensional images. Included in a discussion of system implementation are interactive data display, statistical analysis, image segmentation and spectral matching, and mixture analysis.

  18. Biomedical signal and image processing.

    PubMed

    Cerutti, Sergio; Baselli, Giuseppe; Bianchi, Anna; Caiani, Enrico; Contini, Davide; Cubeddu, Rinaldo; Dercole, Fabio; Rienzo, Luca; Liberati, Diego; Mainardi, Luca; Ravazzani, Paolo; Rinaldi, Sergio; Signorini, Maria; Torricelli, Alessandro

    2011-01-01

    Generally, physiological modeling and biomedical signal processing constitute two important paradigms of biomedical engineering (BME): their fundamental concepts are taught starting from undergraduate studies and are more completely dealt with in the last years of graduate curricula, as well as in Ph.D. courses. Traditionally, these two cultural aspects were separated, with the first one more oriented to physiological issues and how to model them and the second one more dedicated to the development of processing tools or algorithms to enhance useful information from clinical data. A practical consequence was that those who did models did not do signal processing and vice versa. However, in recent years, the need for closer integration between signal processing and modeling of the relevant biological systems emerged very clearly [1], [2]. This is not only true for training purposes (i.e., to properly prepare the new professional members of BME) but also for the development of newly conceived research projects in which the integration between biomedical signal and image processing (BSIP) and modeling plays a crucial role. To give simple examples, topics such as brain-computer interfaces, neuroengineering, nonlinear dynamical analysis of the cardiovascular (CV) system, integration of sensory-motor characteristics aimed at the building of advanced prostheses and rehabilitation tools, and wearable devices for vital sign monitoring do require an intelligent fusion of modeling and signal processing competences that are certainly peculiar to our discipline of BME.

  19. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  20. Framelet lifting in image processing

    NASA Astrophysics Data System (ADS)

    Lu, Da-Yong; Feng, Tie-Yong

    2010-08-01

    To obtain appropriate framelets in image processing, we often need to lift existing framelets. For this purpose the paper presents some methods which allow us to modify existing framelets or filters to construct new ones. The relationships between the matrices used in the lifting schemes and their eigenvalues show that the frame bounds of the lifted wavelet frames are optimal. Moreover, the examples given in Section 4 indicate that the lifted framelets can play the roles of operators such as the weighted average operator, the Sobel operator and the Laplacian operator, which are often used in edge detection and motion estimation applications.

  1. Recent progress in 3-D imaging of sea freight containers

    SciTech Connect

    Fuchs, Theobald Schön, Tobias Sukowski, Frank; Dittmann, Jonas; Hanke, Randolf

    2015-03-31

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea freight container takes several hours, which is far too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without the legal complications, high time consumption and risks for security personnel that a manual inspection entails. Recently, distinct progress has been made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of the 500 to 1000 rotational steps needed by conventional CT reconstruction techniques, this new class of algorithms has the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost, but as computational power becomes steadily cheaper, practical applications of these complex algorithms are foreseeable. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.
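
    As a rough illustration of iterative few-view reconstruction of the kind discussed above, the sketch below runs a SIRT-like additive update built from scikit-image's radon/iradon pair; the angle count, step size, non-negativity clamp, and phantom are assumptions and not the authors' algorithm, and the step may need tuning.

        import numpy as np
        from skimage.transform import radon, iradon

        def sirt(sinogram, theta, n_iter=50, step=0.1):
            """SIRT-like update; unfiltered backprojection plays the adjoint."""
            size = sinogram.shape[0]
            recon = np.zeros((size, size))
            for _ in range(n_iter):
                residual = sinogram - radon(recon, theta=theta)
                recon += step * iradon(residual, theta=theta, filter_name=None)
                recon = np.clip(recon, 0, None)   # simple non-negativity prior
            return recon

        # Toy usage: a block phantom scanned at ~10x fewer views than usual.
        phantom = np.zeros((128, 128))
        phantom[40:80, 50:90] = 1.0
        theta = np.linspace(0.0, 180.0, 60, endpoint=False)
        recon = sirt(radon(phantom, theta=theta), theta)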

  2. Image processing technique for arbitrary image positioning in holographic stereogram

    NASA Astrophysics Data System (ADS)

    Kang, Der-Kuan; Yamaguchi, Masahiro; Honda, Toshio; Ohyama, Nagaaki

    1990-12-01

    In a one-step holographic stereogram, if the series of original images is used just as taken from perspective views, three-dimensional images are usually reconstructed behind the hologram plane. In order to enhance the sense of perspective of the reconstructed images and minimize blur in the portions of interest, we introduce an image processing technique for making a one-step flat-format holographic stereogram in which three-dimensional images can be observed at an arbitrary specified position. Experimental results show the effect of the image processing. Further, we show results of a medical application using this image processing.

  3. Concurrent Image Processing Executive (CIPE)

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1988-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation, are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented.

  4. Processing of medical images using Maple

    NASA Astrophysics Data System (ADS)

    Toro Betancur, V.

    2013-05-01

    Maple's Image Tools package was used to process medical images. The results showed clearer images together with records of their intensities and entropy. The medical images of a rhinocerebral mucormycosis patient, who had not been diagnosed early, were processed and analyzed using Maple's tools, which showed more clearly the affected parts in the perinasal cavities.

  5. Differential operator approach for Fourier image processing.

    PubMed

    Núñez, Ismael; Ferrari, José A

    2007-08-01

    We present a differential operator approach for Fourier image processing. We demonstrate that when the mask in the processor Fourier plane is an analytical function, it can be described by means of a differential operator that acts directly on the input field to give the processed output image. In many cases (e.g., Schlieren imaging) this approach simplifies the calculations, which usually involve the evaluation of convolution integrals, and gives a new insight into the image-processing procedure.
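
    A numeric illustration of the equivalence the abstract describes: an analytic Fourier-plane mask H(u) = i 2 pi u acts on the input field exactly like the spatial derivative d/dx. The sampled field, the grid, and the tolerance below are assumptions chosen for the check.

        import numpy as np

        N, L = 256, 2 * np.pi
        x = np.linspace(0, L, N, endpoint=False)
        f = np.exp(np.sin(x))                    # smooth periodic input field

        u = np.fft.fftfreq(N, d=L / N)           # spatial frequencies (cycles)
        mask = 1j * 2 * np.pi * u                # analytic Fourier-plane mask
        df_mask = np.fft.ifft(mask * np.fft.fft(f)).real

        df_true = np.cos(x) * np.exp(np.sin(x))  # analytic derivative
        assert np.max(np.abs(df_mask - df_true)) < 1e-8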

  6. Information Processing in Medical Imaging Meeting (IPMI)

    DTIC Science & Technology

    1993-09-30

    Final report of the 1993 Information Processing in Medical Imaging (IPMI) meeting, held under grant F49620-93-1-0352 with Professor Harrison H. Barrett as principal investigator. Although the emphasis of the meeting was clearly on medical imaging, the techniques and issues discussed reach beyond it.

  7. Analyses of sexual dimorphism of contemporary Japanese using reconstructed three-dimensional CT images--curvature of the best-fit circle of the greater sciatic notch.

    PubMed

    Biwasaka, Hitoshi; Aoki, Yasuhiro; Tanijiri, Toyohisa; Sato, Kei; Fujita, Sachiko; Yoshioka, Kunihiro; Tomabechi, Makiko

    2009-04-01

    We examined various expression methods of sexual dimorphism of the greater sciatic notch (GSN) of the pelvis in contemporary Japanese residents by analyzing three-dimensional (3D) images reconstructed from multi-slice computed tomography (CT) data, using image-processing and measurement software. The mean error of anthropological measurement values between two skeletonized pelves and their reconstructed 3D-CT images was 1.4%. A spline curve was set along the edge of the GSN of the reconstructed pelvic 3D-CT images. A best-fit circle was then created for subsets of the spline curve, 5-60 mm in length and passing through the deepest point (inflection point) of the GSN, and the radius of the circle (curvature radius) and its ratio to the maximum pelvic height (curvature quotient) were computed. In an analysis of images reconstructed from the CT data of 180 individuals (male: 91, female: 89), sexes were correctly identified in 89.4% of specimens with a spline curve length of 60 mm. Because sexing was possible even in deeper regions of the GSN, which are relatively resistant to postmortem damage, the present method may be useful for practical forensic investigation.
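
    The best-fit-circle step lends itself to a compact sketch: an algebraic (Kasa-style) least-squares circle fit to points sampled along the notch edge, whose radius is the curvature radius used above. The input format is an assumption, and the abstract's spline sampling is not reproduced here.

        import numpy as np

        def fit_circle(points):
            """points: (n, 2) array of (x, y); returns (center, radius)."""
            x, y = points[:, 0], points[:, 1]
            # Linear system for x^2 + y^2 = 2*cx*x + 2*cy*y + c.
            A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
            b = x**2 + y**2
            (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
            r = np.sqrt(c + cx**2 + cy**2)
            return (cx, cy), r

        # The curvature quotient of the abstract would then be
        # r divided by the maximum pelvic height.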

  8. Eliminating "Hotspots" in Digital Image Processing

    NASA Technical Reports Server (NTRS)

    Salomon, P. M.

    1984-01-01

    Signals from defective picture elements rejected. Image processing program for use with charge-coupled device (CCD) or other mosaic imager augmented with algorithm that compensates for common type of electronic defect. Algorithm prevents false interpretation of "hotspots". Used for robotics, image enhancement, image analysis and digital television.

  9. Eliminating "Hotspots" in Digital Image Processing

    NASA Technical Reports Server (NTRS)

    Salomon, P. M.

    1984-01-01

    Signals from defective picture elements rejected. Image processing program for use with charge-coupled device (CCD) or other mosaic imager augmented with algorithm that compensates for common type of electronic defect. Algorithm prevents false interpretation of "hotspots". Used for robotics, image enhancement, image analysis and digital television.

  10. Halftoning and Image Processing Algorithms

    DTIC Science & Technology

    1999-02-01

    screening techniques with the quality advantages of error diffusion in the halftoning of color maps, and on color image enhancement for halftone ...image quality. Our goals in this research were to advance the understanding in image science for our new halftone algorithm and to contribute to...image retrieval and noise theory for such imagery. In the field of color halftone printing, research was conducted on deriving a theoretical model of our

  11. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further progressing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.

  13. Radon transform based automatic metal artefacts generation for 3D threat image projection

    NASA Astrophysics Data System (ADS)

    Megherbi, Najla; Breckon, Toby P.; Flitton, Greg T.; Mouton, Andre

    2013-10-01

    Threat Image Projection (TIP) plays an important role in aviation security. In order to evaluate human security screeners in determining threats, TIP systems project images of realistic threat items into the images of the passenger baggage being scanned. In this proof-of-concept paper, we propose a 3D TIP method which can be integrated within new 3D Computed Tomography (CT) screening systems. In order to make the threat items appear as if they were genuinely located in the scanned bag, appropriate CT metal artefacts are generated in the resulting TIP images according to the scan orientation, the passenger bag content and the material of the inserted threat items. This process is performed in the projection domain using a novel methodology based on the Radon transform. The results obtained using challenging 3D CT baggage images are very promising in terms of plausibility and realism.
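
    For readers unfamiliar with projection-domain insertion, the toy sketch below shows the round trip only: forward-project bag and threat with the Radon transform, combine the sinograms (the stage where a real TIP system would inject its metal-artefact model), and reconstruct. The scikit-image calls and phantom shapes are assumptions, not the paper's method.

        import numpy as np
        from skimage.transform import radon, iradon

        theta = np.linspace(0.0, 180.0, 180, endpoint=False)
        bag = np.zeros((128, 128))
        bag[30:100, 40:90] = 1.0                    # mock bag content
        threat = np.zeros_like(bag)
        threat[60:70, 60:70] = 5.0                  # mock dense (metal) item

        # Combine in the projection (sinogram) domain, then reconstruct.
        sino = radon(bag, theta=theta) + radon(threat, theta=theta)
        # (A realistic TIP system would modify `sino` here to emulate
        #  orientation- and material-dependent metal artefacts.)
        tip_image = iradon(sino, theta=theta)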

  14. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for people with certain visual impairments. The invention also includes means for shifting input pixels and means for scrolling the output matrix.

  15. Amplitude image processing by diffractive optics.

    PubMed

    Cagigal, Manuel P; Valle, Pedro J; Canales, V F

    2016-02-22

    In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, like low-pass or high-pass filtering, is carried out using diffractive optical elements (DOE), since these allow operations on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we present an analysis of amplitude image processing performance. In particular, a DOE Laplacian filter is applied to simulated astronomical images for detecting two stars one Airy ring apart. We also check by numerical simulations that the use of a Laplacian amplitude filter produces less noisy images than standard digital image processing.

  16. Image processing for medical diagnosis using CNN

    NASA Astrophysics Data System (ADS)

    Arena, Paolo; Basile, Adriano; Bucolo, Maide; Fortuna, Luigi

    2003-01-01

    Medical diagnosis is one of the most important areas in which image processing procedures are usefully applied. Image processing is an important phase for improving the accuracy of both the diagnostic procedure and the surgical operation. One of these fields is tumor/cancer detection using microarray analysis. The research studies in the Cancer Genetics Branch are mainly involved in a range of experiments including the identification of inherited mutations predisposing family members to malignant melanoma, prostate and breast cancer. In the biomedical field, real-time processing is very important, but image processing is often a quite time-consuming phase, so techniques able to speed up the elaboration play an important role. From this point of view, a novel approach to image processing has been developed in this work. The new idea is to use Cellular Neural Networks to investigate diagnostic images such as magnetic resonance imaging, computed tomography, and fluorescent cDNA microarray images.

  17. Image Processing Language. Phase 1

    DTIC Science & Technology

    1988-05-01

    A standardized, mathematically rigorous, efficient algebraic system designed specifically for image manipulation does not exist. This report, Image...elements. The imaging algebra structure should support object oriented design. This requires it to be programmably transportable. It should be an easy...tend to be quite simple. In general, the end product of mathematical reasoning can be elaborate and difficult for the non-expert to penetrate

  18. Handbook on COMTAL's Image Processing System

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.

    1983-01-01

    An image processing system is the combination of an image processor with other control and display devices plus the necessary software needed to produce an interactive capability to analyze and enhance image data. Such an image processing system installed at NASA Langley Research Center, Instrument Research Division, Acoustics and Vibration Instrumentation Section (AVIS) is described. Although much of the information contained herein can be found in the other references, it is hoped that this single handbook will give the user better access, in concise form, to pertinent information and usage of the image processing system.

  19. Coordination in serial-parallel image processing

    NASA Astrophysics Data System (ADS)

    Wójcik, Waldemar; Dubovoi, Vladymyr M.; Duda, Marina E.; Romaniuk, Ryszard S.; Yesmakhanova, Laura; Kozbakova, Ainur

    2015-12-01

    Serial-parallel systems are used to transform images, and controlling their operation requires solving a coordination problem. The paper summarizes a model of coordinated resource allocation applied to the task of synchronizing parallel processes; a genetic coordination algorithm is developed and its adequacy is verified on a parallel image processing task.

  20. NASA Regional Planetary Image Facility image retrieval and processing system

    NASA Technical Reports Server (NTRS)

    Slavney, Susan

    1986-01-01

    The general design and analysis functions of the NASA Regional Planetary Image Facility (RPIF) image workstation prototype are described. The main functions of the MicroVAX II based workstation will be database searching, digital image retrieval, and image processing and display. The uses of the Transportable Applications Executive (TAE) in the system are described. File access and image processing programs use TAE tutor screens to receive parameters from the user and TAE subroutines are used to pass parameters to applications programs. Interface menus are also provided by TAE.

  1. Image processing on the IBM personal computer

    NASA Technical Reports Server (NTRS)

    Myers, H. J.; Bernstein, R.

    1985-01-01

    An experimental, personal computer image processing system has been developed which provides a variety of processing functions in an environment that connects programs by means of a 'menu' for both casual and experienced users. The system is implemented by a compiled BASIC program that is coupled to assembly language subroutines. Image processing functions encompass subimage extraction, image coloring, area classification, histogramming, contrast enhancement, filtering, and pixel extraction.

  2. Semi-automated Image Processing for Preclinical Bioluminescent Imaging.

    PubMed

    Slavine, Nikolai V; McColl, Roderick W

    Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy behind automated methods for bioluminescence image processing, from data acquisition to obtaining 3D images. In order to optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the media; after determining an initial-order approximation for the photon fluence, we applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time of volumetric imaging and quantitative assessment. The data obtained from light phantom and lung mouse tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach to the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
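
    The MLEM algorithm the authors mention has a compact generic form for a linear forward model y ~ Poisson(Ax); the sketch below shows the standard multiplicative update on a random, purely illustrative system matrix (the authors' actual forward model is not reproduced here).

        import numpy as np

        def mlem(A, y, n_iter=100, eps=1e-12):
            """Maximum-likelihood EM for y ~ Poisson(A @ x), x >= 0."""
            x = np.ones(A.shape[1])              # strictly positive start
            sens = A.sum(axis=0) + eps           # sensitivity: A^T 1
            for _ in range(n_iter):
                ratio = y / (A @ x + eps)
                x *= (A.T @ ratio) / sens        # multiplicative EM update
            return x

        rng = np.random.default_rng(4)
        A = rng.uniform(0, 1, (200, 50))         # illustrative system matrix
        x_true = rng.uniform(0, 5, 50)
        y = rng.poisson(A @ x_true)
        x_hat = mlem(A, y)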

  3. Semi-automated Image Processing for Preclinical Bioluminescent Imaging

    PubMed Central

    Slavine, Nikolai V; McColl, Roderick W

    2015-01-01

    Objective Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy behind automated methods for bioluminescence image processing, from data acquisition to obtaining 3D images. Methods In order to optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the media; after determining an initial-order approximation for the photon fluence, we applied a novel iterative deconvolution method to obtain the final reconstruction result. Results We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time of volumetric imaging and quantitative assessment. Conclusion The data obtained from light phantom and lung mouse tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach to the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment. PMID:26618187

  4. Metric Aspects of Digital Images and Digital Image Processing.

    DTIC Science & Technology

    1984-09-01

    image files were synthesized aerial images, produced using the program SIM. This program makes use of a digital terrain model containing gray shade...the Arizona test data. This test data was derived from a digitized stereo model formed by two nearly vertical images taken in October 1966 near... digital image processing operations will be investigated in a manner similar to compression. 7) It is hoped that the ability to quantitatively assess

  5. Computers in Public Schools: Changing the Image with Image Processing.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  7. Image Processing in Intravascular OCT

    NASA Astrophysics Data System (ADS)

    Wang, Zhao; Wilson, David L.; Bezerra, Hiram G.; Rollins, Andrew M.

    Coronary artery disease is the leading cause of death in the world. Intravascular optical coherence tomography (IVOCT) is rapidly becoming a promising imaging modality for characterization of atherosclerotic plaques and evaluation of coronary stenting. OCT has several unique advantages over alternative technologies, such as intravascular ultrasound (IVUS), due to its better resolution and contrast. For example, OCT is currently the only imaging modality that can measure the thickness of the fibrous cap of an atherosclerotic plaque in vivo. OCT also has the ability to accurately assess the coverage of individual stent struts by neointimal tissue over time. However, it is extremely time-consuming to analyze IVOCT images manually to derive quantitative diagnostic metrics. In this chapter, we introduce some computer-aided methods to automate the common IVOCT image analysis tasks.

  8. Image Processing in Medical Microscopy

    PubMed Central

    Preston, Kendall

    1986-01-01

    Full automation in medical microscopy has been accomplished in the field of clinical determination of the white blood cell differential count. Manufacture of differential counting microscopes commenced in 1974, and approximately 1,000 of these robots are now in the field. They analyze images of human white blood cells, red blood cells, and platelets at the global rate of approximately 100,000 slides per day. This incredible throughput represents automated image analysis and pattern recognition at the rate of 5 billion images per year and is a major accomplishment in the application of machine vision in medicine. In other areas, such as cytology and cytogenetics, automated computer vision is still in the research phase. This paper discusses the state of the art in blood smear analysis automation and in other related areas, including multi-resolution microscopy, where images are currently being generated over a 64:1 magnification range, containing from one-quarter megapixel to one gigapixel in full color.

  9. Image processing utilizing an APL interface

    NASA Astrophysics Data System (ADS)

    Zmola, Carl; Kapp, Oscar H.

    1991-03-01

    The past few years have seen the growing use of digital techniques in the analysis of electron microscope image data. This trend is driven by the need to maximize the information extracted from the electron micrograph by submitting its digital representation to the broad spectrum of analytical techniques made available by the digital computer. We are developing an image processing system for the analysis of digital images obtained with a scanning transmission electron microscope (STEM) and a scanning electron microscope (SEM). This system, run on an IBM PS/2 model 70/A21, uses menu-based image processing and an interactive APL interface which permits the direct manipulation of image data.

  10. Nuclear imaging of molecular processes in cancer.

    PubMed

    Torres Martin de Rosales, Rafael; Arstad, Erik; Blower, Philip J

    2009-09-01

    Molecular imaging using radionuclides has made it possible to image a wide range of molecular processes using radiotracers injected into the body at very low concentrations that should not perturb the processes being studied. Examples include specific peptide receptor expression, angiogenesis, multidrug resistance, hypoxia, glucose metabolism, and many others. This article presents an overview, aimed at the non-specialist in imaging, of the radionuclide imaging technologies positron emission tomography and single photon radionuclide imaging, and of some of the molecules labeled with gamma- and positron-emitting radioisotopes that have been, or are being, developed for research and clinical applications in cancer.

  11. Combining advanced imaging processing and low cost remote imaging capabilities

    NASA Astrophysics Data System (ADS)

    Rohrer, Matthew J.; McQuiddy, Brian

    2008-04-01

    Target images are very important for evaluating the situation when Unattended Ground Sensors (UGS) are deployed. These images add a significant amount of information for determining the difference between hostile and non-hostile activities, the number of targets in an area, the difference between animals and people, the movement dynamics of targets, and when specific activities of interest are taking place. The imaging capability of a UGS system should deliver only images containing target activity, not images without targets in the field of view. Current UGS remote imaging systems are not optimized for target processing and are not low cost. In this paper McQ describes an architectural and technological approach that significantly improves the processing of images to provide target information while reducing the cost of the intelligent remote imaging capability.

  12. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing broad scope in product advertising. Mixing real-world images with images rendered by virtual-space software shows a more or less visible mismatch between the corresponding image quality. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to image degradation factors such as residual lens aberrations, diffraction, sensor anti-aliasing (low-pass) filters, color-pattern demosaicing, etc. The effect of all these image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images according to the parameters indicated by the real system PSF, attempting to match the virtual and real-world image qualities. The system MTF is determined by the slanted-edge method both in laboratory conditions and in the real picture environment in order to compare the influence of working conditions on device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered with a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different regions of the final image.
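
    A hedged sketch of the matching step: if the measured MTF is approximated as Gaussian, the blur sigma follows from the MTF50 point and the rendered image can be degraded with that PSF. The Gaussian-MTF model, the function names, and the SciPy call are editorial assumptions, not the authors' exact procedure.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def sigma_from_mtf50(mtf50_cyc_per_px):
            # Gaussian MTF: exp(-2 (pi sigma u)^2) = 0.5 at u = MTF50.
            return np.sqrt(np.log(2) / 2) / (np.pi * mtf50_cyc_per_px)

        def match_rendered_to_camera(rendered, mtf50_cyc_per_px):
            sigma = sigma_from_mtf50(mtf50_cyc_per_px)
            # Blur each channel with the Gaussian PSF approximation.
            return gaussian_filter(rendered, sigma=(sigma, sigma, 0))

        rendered = np.random.default_rng(5).uniform(0, 1, (128, 128, 3))
        matched = match_rendered_to_camera(rendered, mtf50_cyc_per_px=0.25)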

  13. Automated segmentation of knee and ankle regions of rats from CT images to quantify bone mineral density for monitoring treatments of rheumatoid arthritis

    NASA Astrophysics Data System (ADS)

    Cruz, Francisco; Sevilla, Raquel; Zhu, Joe; Vanko, Amy; Lee, Jung Hoon; Dogdas, Belma; Zhang, Weisheng

    2014-03-01

    Bone mineral density (BMD) obtained from a CT image is an imaging biomarker used pre-clinically for characterizing the Rheumatoid arthritis (RA) phenotype. We use this biomarker in animal studies for evaluating disease progression and for testing various compounds. In the current setting, BMD measurements are obtained manually by selecting the regions of interest from three-dimensional (3-D) CT images of rat legs, which results in a laborious and low-throughput process. Combining image processing techniques, such as intensity thresholding and skeletonization, with mathematical techniques in curve fitting and curvature calculations, we developed an algorithm for quick, consistent, and automatic detection of joints in large CT data sets. The implemented algorithm has reduced analysis time for a study with 200 CT images from 10 days to 3 days and has improved the robust detection of the obtained regions of interest compared with manual segmentation. This algorithm has been used successfully in over 40 studies.
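
    The front half of that pipeline, intensity thresholding followed by skeletonization, can be sketched in a few lines; this 2-D slice version with an assumed HU-style threshold is illustrative only (the study operates on 3-D volumes, and the curvature-based joint detection would follow on the resulting skeleton).

        import numpy as np
        from scipy.ndimage import label
        from skimage.morphology import skeletonize

        def leg_centerline(ct_slice, bone_threshold=800):
            """ct_slice: 2-D CT slice; the threshold value is an assumption."""
            bone = ct_slice > bone_threshold       # isolate bone by intensity
            labels, n = label(bone)
            if n > 1:                              # keep largest bone component
                sizes = np.bincount(labels.ravel())
                sizes[0] = 0
                bone = labels == sizes.argmax()
            return skeletonize(bone)               # one-pixel-wide centerline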

  14. Parallel processing considerations for image recognition tasks

    NASA Astrophysics Data System (ADS)

    Simske, Steven J.

    2011-01-01

    Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows-as diverse as optical character recognition [OCR], document classification and barcode reading-to parallel pipelines. This can substantially decrease time to completion for the document tasks. For this approach, each parallel pipeline is generally performing a different task. Parallel processing by image region allows a larger imaging task to be sub-divided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
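
    A minimal sketch of category (2), parallel processing by image region: split the image into bands, map one operation over the bands in a process pool, and reduce by stacking. The placeholder operation and worker count are assumptions.

        import numpy as np
        from multiprocessing import Pool

        def process_band(band):
            return 255 - band                    # placeholder per-region task

        def parallel_by_region(image, n_workers=4):
            bands = np.array_split(image, n_workers, axis=0)   # map step
            with Pool(n_workers) as pool:
                results = pool.map(process_band, bands)
            return np.vstack(results)                          # reduce step

        if __name__ == "__main__":
            img = np.random.default_rng(6).integers(0, 256, (1024, 1024))
            out = parallel_by_region(img)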

  15. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  16. Image data processing in the '90s

    NASA Astrophysics Data System (ADS)

    Labudda, Hans-Juergen; Kappel, Helmut

    The new Meteosat proposed for Europe is described. The main characteristics of the new Meteosat are: (1) the image of the earth will be subdivided into 100 million pixels; (2) each image is implemented in seven spectral channels; (3) microwave measuring equipment will probe the atmosphere in different altitude layers; and (4) images will be produced every 15 minutes. The use of distributed intelligence instead of central data processing as the main structure and operating principle for the image and data processing system is examined. The linking of image processing procedures to satellite-based image-forming equipment is being studied. The computer structure on the ground for the new Meteosat will be based on parallel processing. The application of expert systems to the satellite network is discussed.

  17. How Digital Image Processing Became Really Easy

    NASA Astrophysics Data System (ADS)

    Cannon, Michael

    1988-02-01

    In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of or analyzing the contents of images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic and the rapid increase in the number of commercial companies marketing digital image processing software and hardware.

  18. Non-linear Post Processing Image Enhancement

    NASA Technical Reports Server (NTRS)

    Hunt, Shawn; Lopez, Alex; Torres, Angel

    1997-01-01

    A non-linear filter for image post-processing based on the feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean-square non-linear filter, showing examples of the high-frequency recovery, and the statistical properties of the filter are given.

  19. Real-time video image processing

    NASA Astrophysics Data System (ADS)

    Smedley, Kirk G.; Yool, Stephen R.

    1990-11-01

    Lockheed has designed and implemented a prototype real-time Video Enhancement Workbench (VEW) using commercial off-the-shelf hardware and custom software. The hardware components include a Sun workstation, an Aspex PIPE image processor, a time base corrector, a VCR, a video camera, and a real-time disk subsystem. A comprehensive set of image processing functions can be invoked by the analyst at any time during processing, enabling interactive enhancement and exploitation of video sequences. Processed images can be transmitted and stored within the system in digital or video form. VEW also provides image output to a laser printer and to Interleaf technical publishing software.

  20. Quantitative image processing in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  2. [The model of adaptive primary image processing].

    PubMed

    Dudkin, K N; Mironov, S V; Dudkin, A K; Chikhman, V N

    1998-07-01

    A computer model of adaptive segmentation of 2D visual objects was developed. Primary image descriptions are realised via spatial frequency filters and feature detectors performing as self-organised mechanisms. Simulation of the control processes related to attention and to lateral, frequency-selective, and cross-orientation inhibition determines the adaptive image processing.

  3. Water surface capturing by image processing

    USDA-ARS?s Scientific Manuscript database

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  4. Digital image processing in cephalometric analysis.

    PubMed

    Jäger, A; Döler, W; Schormann, T

    1989-01-01

    Digital image processing methods were applied to improve the practicability of cephalometric analysis. The individual X-ray film was digitized with the aid of a high-resolution microscope-photometer. Digital processing was done using a VAX 8600 computer system. An improvement in image quality was achieved by means of various digital enhancement and filtering techniques.

  5. Digital radiography image quality: image processing and display.

    PubMed

    Krupinski, Elizabeth A; Williams, Mark B; Andriole, Katherine; Strauss, Keith J; Applegate, Kimberly; Wyatt, Margaret; Bjork, Sandra; Seibert, J Anthony

    2007-06-01

    This article on digital radiography image processing and display is the second of two articles written as part of an intersociety effort to establish image quality standards for digital and computed radiography. The topic of the other paper is digital radiography image acquisition. The articles were developed collaboratively by the ACR, the American Association of Physicists in Medicine, and the Society for Imaging Informatics in Medicine. Increasingly, medical imaging and patient information are being managed using digital data during acquisition, transmission, storage, display, interpretation, and consultation. The management of data during each of these operations may have an impact on the quality of patient care. These articles describe what is known to improve image quality for digital and computed radiography and make recommendations on optimal acquisition, processing, and display. The practice of digital radiography is a rapidly evolving technology that will require timely revision of any guidelines and standards.

  6. Image processing for cameras with fiber bundle image relay.

    PubMed

    Olivas, Stephen J; Arianpour, Ashkan; Stamenov, Igor; Morrison, Rick; Stack, Ron A; Johnson, Adam R; Agurok, Ilya P; Ford, Joseph E

    2015-02-10

    Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems suffer from artifacts due to image sampling and incoherent light transfer by the fiber bundle as well as resampling by the focal plane, resulting in a fixed obscuration pattern. Here, we describe digital image processing techniques to improve image quality in a compact 126° field-of-view, 30 megapixel panoramic imager, where a 12 mm focal length F/1.35 lens made of concentric glass surfaces forms a spherical image surface, which is fiber-coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image formation onto the 2.5 μm pitch fiber bundle, image transfer by the fiber bundle, and sensing by a 1.75 μm pitch backside illuminated color focal plane. We demonstrate methods to mitigate moiré artifacts and local obscuration, correct for sphere to plane mapping distortion and vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with a 10× larger commercial camera with comparable field-of-view and light collection.

  7. CT Image Processing Using Public Digital Networks

    PubMed Central

    Rhodes, Michael L.; Azzawi, Yu-Ming; Quinn, John F.; Glenn, William V.; Rothman, Stephen L.G.

    1984-01-01

    Nationwide commercial computer communication is now commonplace for those applications where digital dialogues are generally short and widely distributed, and where bandwidth does not exceed that of dial-up telephone lines. Image processing using such networks is prohibitive because of the large volume of data inherent to digital pictures. With a blend of increasing bandwidth and distributed processing, network image processing becomes possible. This paper examines characteristics of a digital image processing service for a nationwide network of CT scanner installations. Issues of image transmission, data compression, distributed processing, software maintenance, and interfacility communication are also discussed. Included are results that show the volume and type of processing experienced by a network of over 50 CT scanners for the last 32 months.

  8. Image processing of digital chest ionograms.

    PubMed

    Yarwood, J R; Moores, B M

    1988-10-01

    A number of image-processing techniques have been applied to a digital ionographic chest image in order to evaluate their possible effects on this type of image. In order to quantify any effect, a simulated lesion was superimposed on the image at a variety of locations representing different types of structural detail. Visualization of these lesions was evaluated by a number of observers both pre- and post-processing operations. The operations employed included grey-scale transformations, histogram operations, edge-enhancement and smoothing functions. The resulting effects of these operations on the visualization of the simulated lesions are discussed.

  9. On some applications of diffusion processes for image processing

    NASA Astrophysics Data System (ADS)

    Morfu, S.

    2009-06-01

    We propose a new algorithm inspired by the properties of diffusion processes for image filtering. We show that purely nonlinear diffusion processes ruled by the Fisher equation allow contrast enhancement and noise filtering, but produce a blurry image. By contrast, anisotropic diffusion, described by the Perona-Malik algorithm, allows noise filtering while preserving edges. We show that combining the properties of anisotropic diffusion with those of nonlinear diffusion provides a better processing tool, one which enables noise filtering, contrast enhancement, and edge preservation.
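
    The combination can be sketched as a single explicit finite-difference iteration in which Perona-Malik conductances gate the diffusion at edges while a bistable reaction term (a thresholded variant of the Fisher nonlinearity, chosen here so gray levels are pushed toward black or white) supplies the contrast enhancement; all parameter values are illustrative:

        import numpy as np

        def reaction_diffusion(u, n_iter=100, dt=0.1, kappa=0.1, alpha=1.0, a=0.5):
            u = u.astype(float)                      # image scaled to [0, 1]
            for _ in range(n_iter):
                # Differences toward the four nearest neighbors.
                dn = np.roll(u, -1, axis=0) - u
                ds = np.roll(u, 1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                # Perona-Malik edge-stopping function: low conductance at edges.
                g = lambda d: np.exp(-(d / kappa) ** 2)
                diffusion = g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw
                # Bistable reaction: drives u below a toward 0, above a toward 1.
                reaction = u * (1.0 - u) * (u - a)
                u = np.clip(u + dt * (diffusion + alpha * reaction), 0.0, 1.0)
            return u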

  10. Applications of Digital Image Processing 11

    NASA Technical Reports Server (NTRS)

    Cho, Y. -C.

    1988-01-01

    A new technique, digital image velocimetry, is proposed for the measurement of instantaneous velocity fields of time-dependent flows. A time sequence of single-exposure images of seed particles is captured with a high-speed camera, and a finite number of the single-exposure images are sampled within a prescribed period in time. The sampled images are then digitized on an image processor, enhanced, and superimposed to construct an image which is equivalent to a multiple-exposure image used in both laser speckle velocimetry and particle image velocimetry. The superimposed image and a single-exposure image are digitally Fourier transformed for extraction of information on the velocity field. A great enhancement of the dynamic range of the velocity measurement is accomplished through the new technique by manipulating the Fourier transform of both the single-exposure image and the superimposed image. Also, the direction of the velocity vector is unequivocally determined. With the use of a high-speed video camera, the whole process from image acquisition to velocity determination can be carried out electronically; thus this technique can be developed into a real-time capability.
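
    A sketch of the superposition and transform steps; normalizing the superimposed image's spectrum by the single-exposure spectrum is one plausible reading of "manipulating the Fourier transform of both" images, not necessarily the paper's exact manipulation:

        import numpy as np

        def superimpose_and_transform(frames):
            # frames: (N, H, W) stack of digitized single-exposure particle images.
            stack = frames.astype(float)
            multi = stack.sum(axis=0)            # multiple-exposure-equivalent image
            F_multi = np.fft.fft2(multi)         # transform of the superposition
            F_single = np.fft.fft2(stack[0])     # transform of one exposure
            # The ratio emphasizes the fringe pattern encoding displacement
            # (hence velocity) while dividing out the particle-image spectrum.
            fringes = np.abs(F_multi) / (np.abs(F_single) + 1e-8)
            return multi, fringes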

  11. Process perspective on image quality evaluation

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. By using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape, and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations of the test image content, but not of the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of an easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  12. On Processing Hexagonally Sampled Images

    DTIC Science & Technology

    2011-07-01

    The record defines Euclidean and "city-block" distances (on the image plane) between two points p1 = (a1, r1, c1) and p2 = (a2, r2, c2) given in hexagonal array coordinates. Neuromorphic Infrared Sensor (NIFS). DISTRIBUTION A. Approved for public release; distribution unlimited (96ABW-2011-0325).

  13. Image processing technology for enhanced situational awareness

    NASA Astrophysics Data System (ADS)

    Page, S. F.; Smith, M. I.; Hickman, D.

    2009-09-01

    This paper discusses the integration of a number of advanced image and data processing technologies in support of the development of next-generation Situational Awareness systems for counter-terrorism and crime fighting applications. In particular, the paper discusses the European Union Framework 7 'SAMURAI' project, which is investigating novel approaches to interactive Situational Awareness using cooperative networks of heterogeneous imaging sensors. Specific focus is given to novel Data Fusion aspects of the research which aim to improve system performance through intelligently fusing both image data and non-image data sources, resolving human-machine conflicts, and refining the Situational Awareness picture. In addition, the paper highlights some recent advances in supporting image processing technologies. Finally, future trends in image-based Situational Awareness are identified, such as Post-Event Analysis (also known as 'Back-Tracking'), and the associated technical challenges are discussed.

  14. Interactive image processing in swallowing research

    NASA Astrophysics Data System (ADS)

    Dengel, Gail A.; Robbins, JoAnne; Rosenbek, John C.

    1991-06-01

    Dynamic radiographic imaging of the mouth, larynx, pharynx, and esophagus during swallowing is used commonly in clinical diagnosis, treatment and research. Images are recorded on videotape and interpreted conventionally by visual perceptual methods, limited to specific measures in the time domain and binary decisions about the presence or absence of events. An image processing system using personal computer hardware and original software has been developed to facilitate measurement of temporal, spatial and temporospatial parameters. Digitized image sequences derived from videotape are manipulated and analyzed interactively. Animation is used to preserve context and increase efficiency of measurement. Filtering and enhancement functions heighten image clarity and contrast, improving visibility of details which are not apparent on videotape. Distortion effects and extraneous head and body motions are removed prior to analysis, and spatial scales are controlled to permit comparison among subjects. Effects of image processing on intra- and interjudge reliability and research applications are discussed.

  15. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.

  17. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  18. Energy preserving QMF for image processing.

    PubMed

    Lian, Jian-ao; Wang, Yonghui

    2014-07-01

    Implementation of new biorthogonal filter banks (BFB) for image compression and denoising is performed, using test images with diversified characteristics. These new BFBs are linear-phase, have odd lengths, and possess a critical feature: the filters preserve signal energy very well. Experimental results show that the proposed filter banks demonstrate promising performance improvement over filter banks widely used in the image processing area, such as the CDF 9/7.

  19. Nonlinear Optical Image Processing with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Deiss, Ron (Technical Monitor)

    1994-01-01

    The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
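
    A digital analog of the optical scheme, sketched under simple assumptions: a logarithm converts multiplicative noise to additive noise, a linear filter is applied in the Fourier plane, and exponentiation returns to the intensity domain; the low-pass mask is only a stand-in for whatever linear filter the noise statistics call for:

        import numpy as np

        def homomorphic_filter(image, cutoff=0.1):
            logim = np.log1p(image.astype(float))   # multiplicative -> additive noise
            F = np.fft.fftshift(np.fft.fft2(logim))
            ny, nx = logim.shape
            y, x = np.ogrid[-ny//2:ny - ny//2, -nx//2:nx - nx//2]
            # Elliptical low-pass mask in the (shifted) Fourier plane.
            mask = (x / (cutoff * nx))**2 + (y / (cutoff * ny))**2 <= 1.0
            filtered = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
            return np.expm1(filtered)               # back to the intensity domain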

  1. Mapping spatial patterns with morphological image processing

    Treesearch

    Peter Vogt; Kurt H. Riitters; Christine Estreguil; Jacek Kozak; Timothy G. Wade; James D. Wickham

    2006-01-01

    We use morphological image processing for classifying spatial patterns at the pixel level on binary land-cover maps. Land-cover pattern is classified as 'perforated,' 'edge,' 'patch,' and 'core' with higher spatial precision and thematic accuracy compared to a previous approach based on image convolution, while retaining the...

  2. Digital Image Processing in Private Industry.

    ERIC Educational Resources Information Center

    Moore, Connie

    1986-01-01

    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  4. Image processing and pattern recognition in textiles

    NASA Astrophysics Data System (ADS)

    Kong, Lingxue; She, F. H.

    2001-09-01

    Image processing and pattern recognition have been successfully applied in many textile-related areas. For example, they have been used in defect detection of cotton fibers and various fabrics. In this work, the application of image processing to animal fiber classification is discussed. Integrated with artificial neural networks, the image processing technique has provided a useful tool for solving complex problems in textile technology. Three different approaches are used in this work for fiber classification and pattern recognition: feature extraction with image processing, pattern recognition and classification with artificial neural networks, and feature recognition and classification with an artificial neural network. All of them yield satisfactory results, giving a high level of accuracy in classification.

  5. Checking Fits With Digital Image Processing

    NASA Technical Reports Server (NTRS)

    Davis, R. M.; Geaslen, W. D.

    1988-01-01

    Computer-aided video inspection of mechanical and electrical connectors feasible. Report discusses work done on digital image processing for computer-aided interface verification (CAIV). Two kinds of components examined: mechanical mating flange and electrical plug.

  6. Command Line Image Processing System (CLIPS)

    NASA Astrophysics Data System (ADS)

    Fleagle, S. R.; Meyers, G. L.; Kulinski, R. G.

    1985-06-01

    An interactive image processing language (CLIPS) has been developed for use in an image processing environment. CLIPS uses a simple syntax with extensive on-line help to allow even the most naive user to perform complex image processing tasks. In addition, CLIPS functions as an interpretive language complete with data structures and program control statements. CLIPS statements fall into one of three categories: command, control, and utility statements. Command statements are expressions comprised of intrinsic functions and/or arithmetic operators which act directly on image or user-defined data. Some examples of CLIPS intrinsic functions are ROTATE, FILTER, and EXPONENT. Control statements allow a structured programming style through the use of statements such as DO WHILE and IF-THEN-ELSE. Utility statements such as DEFINE, READ, and WRITE support I/O and user-defined data structures. Since CLIPS uses a table-driven parser, it is easily adapted to any environment. New commands may be added to CLIPS by writing the procedure in a high-level language such as Pascal or FORTRAN and inserting the syntax for that command into the table. However, CLIPS was designed by incorporating most imaging operations into the language as intrinsic functions. CLIPS allows the user to generate new procedures easily with these powerful functions in an interactive or off-line fashion using a text editor. The fact that CLIPS can be used to generate complex procedures quickly or perform basic image processing functions interactively makes it a valuable tool in any image processing environment.

  7. An Image Handling System For Medical Image Processing

    NASA Astrophysics Data System (ADS)

    Aubry, Florent; Kaplan, Herve; Di Paola, Robert

    1989-10-01

    The processing of medical images requires the handling of complex structured sets of elementary objects (images, curves, ... and their associated parameters). Usually, an elementary object cannot be interpreted without information concerning the structure to which it belongs (e.g., image sequences). It is then necessary to consider the whole structure as an atomic semantic entity, an object of an image database. As specific tools are necessary to manage these objects, an object-oriented handling system (OHS), part of our medical image database project (BDIM), was developed to perform: i) array storage management, ii) the interface between applications and the BDIM to access objects (create, update, delete, ...) and components (navigation inside object structures, access to arrays and parameters). The image handling system (IHS) described here is the user-level part of the OHS. IHS allows the database environment to evolve by the addition or updating of acquisition and/or processing functionalities. To unify data access methods, the concept of a logical file is introduced as a special class of BDIM objects. The logical file does not require a specific declaration for the different kinds of images because it is possible, for a desired processing step, to access only the data concerned.

  8. Color image processing for date quality evaluation

    NASA Astrophysics Data System (ADS)

    Lee, Dah Jye; Archibald, James K.

    2010-01-01

    Many agricultural non-contact visual inspection applications use color image processing techniques because color is often a good indicator of product quality. Color evaluation is an essential step in the processing and inventory control of fruits and vegetables that directly affects profitability. Most color spaces, such as RGB and HSV, represent colors with three-dimensional data, which makes color image processing a challenging task. Since most agricultural applications only require analysis on a predefined set or range of colors, mapping these relevant colors to a small number of indexes allows simple and efficient color image processing for quality evaluation. This paper presents a simple but efficient color mapping and image processing technique that is designed specifically for real-time quality evaluation of Medjool dates. In contrast with more complex color image processing techniques, the proposed color mapping method makes it easy for a human operator to specify and adjust color-preference settings for different color groups representing distinct quality levels. Using this color mapping technique, the color image is first converted to a color map in which a single color index represents the color value of each pixel. Fruit maturity level is evaluated based on these color indices. A skin lamination threshold is then determined based on the fruit surface characteristics. This adaptive threshold is used to detect delaminated fruit skin and hence determine the fruit quality. This robust color grading technique has been used for real-time Medjool date grading.
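
    The color-index idea can be sketched as a nearest-reference-color lookup followed by a vote over the fruit region; the reference colors below are invented placeholders that an operator would tune per quality class:

        import numpy as np

        REFERENCE_COLORS = np.array([    # assumed RGB references, one per class
            [205, 170, 110],             # index 0: light (immature)
            [160, 110,  60],             # index 1: amber
            [110,  60,  30],             # index 2: dark (mature)
        ], dtype=float)

        def color_index_map(rgb_image):
            # Assign each pixel the index of its nearest reference color.
            pixels = rgb_image.reshape(-1, 3).astype(float)
            d = np.linalg.norm(pixels[:, None, :] - REFERENCE_COLORS[None, :, :], axis=2)
            return d.argmin(axis=1).reshape(rgb_image.shape[:2])

        def maturity_level(index_map, fruit_mask):
            # Grade by the dominant color index inside the fruit region.
            counts = np.bincount(index_map[fruit_mask], minlength=len(REFERENCE_COLORS))
            return int(counts.argmax())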

  9. Fingerprint image enhancement by differential hysteresis processing.

    PubMed

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

    A new method to enhance defective fingerprint images through digital image processing tools is presented in this work. When fingerprints have been taken without care, blurred and in some cases mostly illegible, as in the case presented here, their classification and comparison become nearly impossible. A combination of spatial-domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve these kinds of images. This set of filtering methods proved satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from the Policia Federal Argentina and the EAAF have validated these results.

  10. Image processing for HTS SQUID probe microscope

    NASA Astrophysics Data System (ADS)

    Hayashi, T.; Koetitz, R.; Itozaki, H.; Ishikawa, T.; Kawabe, U.

    2005-10-01

    An HTS SQUID probe microscope has been developed using a high-permeability needle to enable high spatial resolution measurement of samples in air even at room temperature. Image processing techniques have also been developed to improve the magnetic field images obtained from the microscope. Artifacts in the data occur due to electromagnetic interference from electric power lines, line drift and flux trapping. The electromagnetic interference could successfully be removed by eliminating the noise peaks from the power spectrum of fast Fourier transforms of line scans of the image. The drift between lines was removed by interpolating the mean field value of each scan line. Artifacts in line scans occurring due to flux trapping or unexpected noise were removed by the detection of a sharp drift and interpolation using the line data of neighboring lines. Highly detailed magnetic field images were obtained from the HTS SQUID probe microscope by the application of these image processing techniques.
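
    The first two corrections lend themselves to a line-by-line sketch: notch isolated peaks out of each scan line's spectrum, then remove line-to-line drift by equalizing the mean field value of every line; the peak threshold is an illustrative heuristic, not the microscope's calibrated setting:

        import numpy as np

        def clean_scan_image(img, peak_factor=5.0):
            img = img.astype(float).copy()
            for i in range(img.shape[0]):
                F = np.fft.rfft(img[i])
                mag = np.abs(F)
                # Zero isolated spectral spikes (e.g. power-line pickup); keep DC.
                spikes = np.nonzero(mag > peak_factor * np.median(mag[1:]))[0]
                F[spikes[spikes > 0]] = 0.0
                img[i] = np.fft.irfft(F, n=img.shape[1])
            # Drift removal: shift every line to the global mean field value.
            img += img.mean() - img.mean(axis=1, keepdims=True)
            return img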

  11. Image-processing with augmented reality (AR)

    NASA Astrophysics Data System (ADS)

    Babaei, Hossein R.; Mohurutshe, Pagiel L.; Habibi Lashkari, Arash

    2013-03-01

    In this project, the aim is to discuss and articulate the intent to create an image-based Android application. The basis of this study is real-time image detection and processing. It is a convenient new measure that allows users to gain information on imagery right on the spot. Past studies have revealed attempts to create image-based applications, but these only went as far as creating image finders that work with images already stored within some form of database. The Android platform is rapidly spreading around the world and provides by far the most interactive and technical platform for smartphones; this is why it was important to base the study and research on it. Augmented Reality allows the user to manipulate the data and can add enhanced features (video, GPS tags) to the image taken.

  12. Image processing via ultrasonics - Status and promise

    NASA Technical Reports Server (NTRS)

    Kornreich, P. G.; Kowel, S. T.; Mahapatra, A.; Nouhi, A.

    1979-01-01

    Acousto-electric devices for electronic imaging of light are discussed. These devices are more versatile than line scan imaging devices in current use. They have the capability of presenting the image information in a variety of modes. The image can be read out in the conventional line scan mode. It can be read out in the form of the Fourier, Hadamard, or other transform. One can take the transform along one direction of the image and line scan in the other direction, or perform other combinations of image processing functions. This is accomplished by applying the appropriate electrical input signals to the device. Since the electrical output signal of these devices can be detected in a synchronous mode, substantial noise reduction is possible.

  13. Image acquisition and image processing for the intraocular vision aid.

    PubMed

    Krisch, I; Hijazi, N; Hosticka, B J

    2002-01-01

    The contribution describes an "intraocular vision aid (IOVA)" system for patients suffering from corneal opacification. In order to gain patients' acceptance, the system has to be miniaturized to such a degree that image acquisition, image processing, and power supply can be integrated into a portable unit. A CMOS camera whose dynamic range covers more than 100 dB takes pictures of the scenery. Its image sensor has a resolution of 380 x 300 pixels. In order to reduce fixed-pattern noise, correlated double sampling is implemented on-chip. In addition, this sensor stands out for its low power consumption, random pixel access, and local brightness adaptation. An analog-digital converter allows direct coupling to an external signal processor or a monolithically integrated unit for image processing to compress data.

  14. Uncooled MEMS IR imagers with optical readout and image processing

    NASA Astrophysics Data System (ADS)

    Lavrik, Nickolay; Archibald, Rick; Grbovic, Dragoslav; Rajic, Slo; Datskos, Panos

    2007-04-01

    MEMS thermal transducers offer a promising technological platform for uncooled IR imaging. We report on the fabrication and performance of a 256x256 MEMS IR FPA based on bimaterial microcantilevers. The FPA readout is performed using a simple and efficient optical readout scheme. The response time of the bimaterial microcantilever was <15 ms and the thermal isolation was calculated to be <4x10^-7 W/K. Using these FPAs we obtained IR images of room-temperature objects. Image quality is improved by automatic post-processing of artifacts arising from noise and non-responsive pixels. An iterative curvelet denoising and inpainting procedure is successfully applied to the image output. We present our results and discuss the factors that determine the ultimate performance of the FPA. One of the unique advantages of the present approach is its scalability to larger imaging arrays.

  15. Image Science with Photon-Processing Detectors

    PubMed Central

    Caucci, Luca; Jha, Abhinav K.; Furenlid, Lars R.; Clarkson, Eric W.; Kupinski, Matthew A.; Barrett, Harrison H.

    2015-01-01

    We introduce and discuss photon-processing detectors and we compare them with photon-counting detectors. By estimating a relatively small number of attributes for each collected photon, photon-processing detectors may help understand and solve a fundamental theoretical problem of any imaging system based on photon-counting detectors, namely null functions. We argue that photon-processing detectors can improve task performance by estimating position, energy, and time of arrival for each collected photon. We consider a continuous-to-continuous linear operator to relate the object being imaged to the collected data, and discuss how this operator can be analyzed to derive properties of the imaging system. Finally, we derive an expression for the characteristic functional of an imaging system that produces list-mode data. PMID:26347396

  16. Overview on METEOSAT geometrical image data processing

    NASA Technical Reports Server (NTRS)

    Diekmann, Frank J.

    1994-01-01

    Digital images acquired from the geostationary METEOSAT satellites are processed and disseminated at ESA's European Space Operations Centre (ESOC) in Darmstadt, Germany. Their scientific value mainly depends on their radiometric quality and geometric stability. This paper gives an overview of the image processing activities performed at ESOC, concentrating on geometrical restoration and quality evaluation. The performance of the rectification process for the various satellites over the past years is presented, and the impacts of external events, for instance the Pinatubo eruption in 1991, are explained. Special developments in both hardware and software, necessary to cope with demanding tasks such as new image resampling or correcting for spacecraft anomalies, are presented as well. The rotating lens of MET-5, causing severe geometrical image distortions, is an example of the latter.

  17. Real-time optical image processing techniques

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1988-01-01

    Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse frequency modulated halftone screens and the modification of micro-channel spatial light modulators (MSLMs). Micro-channel spatial light modulators are modified via the Fabry-Perot method to achieve the high gamma operation required for non-linear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness of the thresholding and also showed the need for higher SBP for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low frequency and low bias mode. Cascading of two LCLVs should also provide enough gamma for nonlinear processing. In this case, the SBP of the LCLV is sufficient but the uniformity of the LCLV needs improvement. Applications include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for dynamic range compression of an input image in real time using GaAs photorefractive crystals is reported. Finally, a new architecture for non-linear multiple-sensory neural processing has been suggested.

  19. Bistatic SAR: Signal Processing and Image Formation.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.

    2014-10-01

    This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture radar (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the processing steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and finally, image formation. Various plots and values are shown at most steps to illustrate the processing for a bistatic COSMO-SkyMed collection gathered on June 10, 2013, at Kirtland Air Force Base, New Mexico.

  20. Palm print image processing with PCNN

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Zhao, Xianhong

    2010-08-01

    Pulse-coupled neural networks (PCNN) are based on Eckhorn's model of the cat visual cortex and imitate mammalian visual processing; the palm print, in turn, has long served as a personal biometric feature. This inspired us to combine the two: a novel method for palm print processing is proposed, which includes pre-processing and feature extraction of the palm print image using PCNN; the features of the palm print image are then used for identification. Our experiment shows that a verification rate of 87.5% can be achieved under ideal conditions. We also find that the verification rate decreases due to rotation or shift of the palm.
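
    A minimal PCNN iteration in its usual discrete form (feeding and linking fields, internal activity, and a dynamic threshold), with illustrative parameters; the per-pixel firing count serves as the extracted feature image:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def pcnn_features(stimulus, n_iter=20, alpha_f=0.1, alpha_l=1.0,
                          alpha_t=0.3, beta=0.2, v_t=20.0):
            S = stimulus.astype(float) / stimulus.max()   # normalized input
            F = np.zeros_like(S)      # feeding field
            L = np.zeros_like(S)      # linking field
            T = np.ones_like(S)       # dynamic threshold
            Y = np.zeros_like(S)      # binary pulse output
            fire_count = np.zeros_like(S)
            for _ in range(n_iter):
                W = uniform_filter(Y, size=3)             # local pulse coupling
                F = np.exp(-alpha_f) * F + S + W
                L = np.exp(-alpha_l) * L + W
                U = F * (1.0 + beta * L)                  # internal activity
                Y = (U > T).astype(float)                 # neurons fire when U > T
                T = np.exp(-alpha_t) * T + v_t * Y        # firing raises threshold
                fire_count += Y
            return fire_count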

  1. JIP: Java image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation, and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or to specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and charged software consumption.

  2. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

    Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required for extracting each of these types of geologic surfaces. I propose methods to automatically extract all the fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods involve only image processing or array processing, achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, which is simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images. I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use these surfaces as constraints to more accurately estimate seismic normal vectors that are discontinuous near the unconformities. Finally, I use the estimated normal vectors and the unconformities as constraints to compute a flattened image, in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space

  3. Image Processing Application for Cognition (IPAC) - Traditional and Emerging Topics in Image Processing in Astronomy (Invited)

    NASA Astrophysics Data System (ADS)

    Pesenson, M.; Roby, W.; Helou, G.; McCollum, B.; Ly, L.; Wu, X.; Laine, S.; Hartley, B.

    2008-08-01

    A new application framework for advanced image processing for astronomy is presented. It implements standard two-dimensional operators, and recent developments in the field of non-astronomical image processing (IP), as well as original algorithms based on nonlinear partial differential equations (PDE). These algorithms are especially well suited for multi-scale astronomical images since they increase signal to noise ratio without smearing localized and diffuse objects. The visualization component is based on the extensive tools that we developed for Spitzer Space Telescope's observation planning tool Spot and archive retrieval tool Leopard. It contains many common features, combines images in new and unique ways and interfaces with many astronomy data archives. Both interactive and batch mode processing are incorporated. In the interactive mode, the user can set up simple processing pipelines, and monitor and visualize the resulting images from each step of the processing stream. The system is platform-independent and has an open architecture that allows extensibility by addition of plug-ins. This presentation addresses astronomical applications of traditional topics of IP (image enhancement, image segmentation) as well as emerging new topics like automated image quality assessment (QA) and feature extraction, which have potential for shaping future developments in the field. Our application framework embodies a novel synergistic approach based on integration of image processing, image visualization and image QA (iQA).

  4. Thermal Imaging Processes of Polymer Nanocomposite Coatings

    NASA Astrophysics Data System (ADS)

    Meth, Jeffrey

    2015-03-01

    Laser induced thermal imaging (LITI) is a process whereby infrared radiation impinging on a coating on a donor film transfers that coating to a receiving film to produce a pattern. This talk describes how LITI patterning can print color filters for liquid crystal displays, and details the physical processes that are responsible for transferring the nanocomposite coating in a coherent manner that does not degrade its optical properties. Unique features of this process involve heating rates of 10^7 K/s and cooling rates of 10^4 K/s, which implies that not all of the relaxation modes of the polymer are accessed during the imaging process. On the microsecond time scale, the polymer flow is forced by devolatilization of solvents, followed by deformation akin to the constrained blister test, and then fracture caused by differential thermal expansion. The unique combination of disparate physical processes demonstrates the gamut of physics that contribute to advanced material processing in an industrial setting.

  5. A Pipeline Tool for CCD Image Processing

    NASA Astrophysics Data System (ADS)

    Bell, Jon F.; Young, Peter J.; Roberts, William H.; Sebo, Kim M.

    MSSSO is part of a collaboration developing a wide-field imaging CCD mosaic (WFI). As part of this project, we have developed a GUI-based pipeline tool that is an integrated part of MSSSO's CICADA data acquisition environment and processes CCD FITS images as they are acquired. The tool is also designed to run as a stand-alone program to process previously acquired data. IRAF tasks are used as the central engine, including the new NOAO mscred package for processing multi-extension FITS files. The STScI OPUS pipeline environment may be used to manage data and process scheduling. The Motif GUI was developed using SUN Visual Workshop. C++ classes were written to facilitate launching of IRAF and OPUS tasks. While this first version implements calibration processing up to and including flat field corrections, there is scope to extend it to other processing.

  6. Fundamental Concepts of Digital Image Processing

    DOE R&D Accomplishments Database

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in the volume of image data in a wide range of applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  7. Image Processing and the Performance Gap

    NASA Astrophysics Data System (ADS)

    Horii, Steven C.; Loew, Murray H.

    Automated image processing and analysis methods have brought new dimensions, literally and figuratively, to medical imaging. A large array of tools for visualization, quantization, classification, and decision-making is available to aid clinicians at all junctures: in real-time diagnosis and therapy, in planning, and in retrospective meta-analyses. Many of those tools, however, are not in regular use by radiologists. This chapter briefly discusses the advances in image acquisition and processing that have been made over the past 30 years and identifies gaps: opportunities offered by new methods, algorithms, and hardware that have not been accepted by (or, in some cases, made available to) radiologists. We associate the gaps with (a) the radiologists (a taxonomy is provided), (b) the methods (sometimes unintuitive or incomplete), and (c) the imaging industry (providing generalized, rather than optimized, solutions).

  8. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  9. Employing image processing techniques for cancer detection using microarray images.

    PubMed

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid

    2017-02-01

    Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes), and extracting raw data from the images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, from the extracted data, cancerous cells are recognized. To evaluate the performance of the proposed system, microarray databases are employed, comprising breast cancer, myeloid leukemia, and lymphoma records from the Stanford Microarray Database. The results indicate that the proposed system is able to identify the type of cancer from the data set with an accuracy of 95.45%, 94.11%, and 100%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
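
    The gridding step can be sketched with intensity projections: after rotation correction, spot rows and columns appear as regularly spaced peaks of the row and column sums of the image (the smoothing width and minimum spacing below are placeholders):

        import numpy as np

        def grid_centers(image, min_separation=8):
            def peaks(profile):
                # Local maxima of a smoothed 1-D projection mark spot centers.
                p = np.convolve(profile, np.ones(5) / 5.0, mode="same")
                idx = [i for i in range(1, len(p) - 1)
                       if p[i] >= p[i - 1] and p[i] >= p[i + 1] and p[i] > p.mean()]
                kept = []                        # enforce a minimum peak spacing
                for i in idx:
                    if not kept or i - kept[-1] >= min_separation:
                        kept.append(i)
                return kept
            rows = peaks(image.sum(axis=1))      # projection onto the vertical axis
            cols = peaks(image.sum(axis=0))      # projection onto the horizontal axis
            return [(r, c) for r in rows for c in cols]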

  10. A brief review of digital image processing

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1975-01-01

    The review is presented with particular reference to Skylab S-192 and Landsat MSS imagery. Attention is given to rectification (calibration) processing, with emphasis on geometric correction of image distortions. Image enhancement techniques (e.g., the use of high-pass digital filters to eliminate gross shading and thereby emphasize fine detail) are described, along with data analysis and system considerations (software philosophy).
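
    The high-pass idea mentioned above can be sketched in a few lines of Python: subtract a heavily smoothed copy of the image so that gross shading drops out and fine detail remains (the sigma and re-centering constant are illustrative choices, not values from the review):

        # Remove gross shading by subtracting a low-pass (heavily blurred) copy.
        import numpy as np
        from scipy.ndimage import gaussian_filter
        from skimage import io

        img = io.imread("scene.png", as_gray=True).astype(float)  # hypothetical input
        background = gaussian_filter(img, sigma=25)  # low-pass estimate of shading
        detail = img - background                    # high-pass residual
        enhanced = np.clip(0.5 + detail, 0.0, 1.0)   # re-center for display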

  11. Radiographic image processing for industrial applications

    NASA Astrophysics Data System (ADS)

    Dowling, Martin J.; Kinsella, Timothy E.; Bartels, Keith A.; Light, Glenn M.

    1998-03-01

    One advantage of working with digital images is the opportunity for enhancement. While it is important to preserve the original image, variations can be generated that yield greater understanding of object properties. It is often possible to effectively increase dynamic range, improve contrast in regions of interest, emphasize subtle features, reduce background noise, and provide more robust detection of faults. This paper describes and illustrates some of these processes using real world examples.
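
    The paper does not name specific algorithms, but one common way to realize the contrast improvements it lists is adaptive histogram equalization, sketched here with scikit-image (the file name and clip limit are assumptions):

        # Lift local contrast in a digital radiograph with CLAHE.
        from skimage import exposure, io

        radiograph = io.imread("weld_radiograph.png", as_gray=True)  # hypothetical
        enhanced = exposure.equalize_adapthist(radiograph, clip_limit=0.02)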

  12. Image processing of angiograms: A pilot study

    NASA Technical Reports Server (NTRS)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  13. PCB Fault Detection Using Image Processing

    NASA Astrophysics Data System (ADS)

    Nayak, Jithendra P. R.; Anitha, K.; Parameshachari, B. D., Dr.; Banu, Reshma, Dr.; Rashmi, P.

    2017-08-01

    The importance of the printed circuit board (PCB) inspection process has been magnified by the requirements of the modern manufacturing environment, where delivery of 100% defect-free PCBs is the expectation. Meeting that expectation begins with identifying the various defects and their types. In this PCB inspection system, the inspection algorithm focuses on defect detection using natural images. Many practical issues, such as tilt of the images, poor lighting conditions, and the height at which images are taken, must be considered to ensure image quality good enough for defect detection. PCB fabrication is a multidisciplinary process in which etching is the most critical step; the objective of etching is to remove the exposed, unwanted copper outside the required circuit pattern. To minimize scrap caused by wrongly etched panels, inspection has to be done at an early stage. In practice, however, inspection is performed after etching, when any defective PCB found is no longer useful and is simply thrown away; because etching accounts for a substantial share of the fabrication cost, discarding defective PCBs is uneconomical. In this paper, a method to identify defects in natural PCB images, together with the associated practical issues, is addressed using software tools; the major types of single-layer PCB defects include pattern cut, pin hole, pattern short, and nick. Defects should therefore be identified before the etching process so that the board can be reprocessed. The present approach is expected to improve the efficiency of the system in detecting defects even in low-quality images.
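
    The paper's own algorithm is not reproduced here, but a standard baseline for this kind of inspection is comparison against a registered "golden" reference image, sketched below in Python; the file names, the pre-registration assumption, and the noise cutoff are illustrative:

        # Flag PCB defects by XOR-ing a binarized test image against a
        # binarized, pre-registered reference image.
        import numpy as np
        from skimage import filters, io, morphology

        ref = io.imread("pcb_reference.png", as_gray=True)  # hypothetical inputs,
        test = io.imread("pcb_test.png", as_gray=True)      # assumed registered

        ref_bin = ref > filters.threshold_otsu(ref)
        test_bin = test > filters.threshold_otsu(test)

        defects = np.logical_xor(ref_bin, test_bin)  # cuts, shorts, pin holes
        defects = morphology.remove_small_objects(defects, min_size=20)  # drop noise
        print("defective pixels:", int(defects.sum()))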

  14. Retinex processing for automatic image enhancement

    NASA Astrophysics Data System (ADS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2002-06-01

    In the last published concept (1986) for a Retinex computation, Edwin Land introduced a center/surround spatial form, which was inspired by the receptive field structures of neurophysiology. With this as our starting point we have over the years developed this concept into a full scale automatic image enhancement algorithm - the Multi-Scale Retinex with Color Restoration (MSRCR) which combines color constancy with local contrast/lightness enhancement to transform digital images into renditions that approach the realism of direct scene observation. The MSRCR algorithm has proven to be quite general purpose, and very resilient to common forms of image pre-processing such as reasonable ranges of gamma and contrast stretch transformations. More recently we have been exploring the fundamental scientific implications of this form of image processing, namely: (i) the visual inadequacy of the linear representation of digital images, (ii) the existence of a canonical or statistical ideal visual image, and (iii) new measures of visual quality based upon these insights derived from our extensive experience with MSRCR enhanced images. The lattermost serves as the basis for future schemes for automating visual assessment - a primitive first step in bringing visual intelligence to computers.
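
    A minimal sketch of the multi-scale retinex core follows; the color-restoration step of the full MSRCR is omitted, and the three surround scales are typical values from the retinex literature rather than necessarily the authors' settings:

        # Multi-scale retinex: log(image) minus log(Gaussian surround),
        # averaged over several surround scales.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def multi_scale_retinex(img, sigmas=(15, 80, 250)):
            img = img.astype(float) + 1.0  # offset avoids log(0)
            out = np.zeros_like(img)
            for sigma in sigmas:
                surround = gaussian_filter(img, sigma=sigma)
                out += np.log(img) - np.log(surround)
            return out / len(sigmas)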

  15. System identification by video image processing

    NASA Astrophysics Data System (ADS)

    Shinozuka, Masanobu; Chung, Hung-Chi; Ichitsubo, Makoto; Liang, Jianwen

    2001-07-01

    Emerging image processing techniques demonstrate their potential applications in earthquake engineering, particularly in the area of system identification. The objectives of this research are to demonstrate the underlying principle that permits system identification, non-intrusively and remotely, with the aid of a video camera and, as a proof of concept, to apply the principle to a system identification problem involving relative motion, on the basis of the images. In structural control, accelerations at different stories of a building are usually measured and fed back for processing and control. As an alternative, this study attempts to identify the relative motion between different stories of a building for the purpose of on-line structural control by digitizing images taken with a video camera. Video images of the vibration of a structure base-isolated by a friction device on a shaking table were used successfully to observe the relative displacement between the isolated structure and the shaking table. This proof-of-concept experiment demonstrates that the proposed identification method based on digital image processing can be used, with appropriate modifications, to identify many other engineering-wise significant quantities remotely. In addition to the system identification study in structural dynamics described above, a preliminary study is reported on video imaging of the state of crack damage in road and highway pavement.
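
    The underlying principle, measuring displacement directly from digitized frames, can be sketched with template matching; the frame files, patch location, and the use of OpenCV's normalized cross-correlation are illustrative assumptions, not the authors' exact pipeline:

        # Track a target patch between two video frames for pixel displacement.
        import cv2

        frame0 = cv2.imread("frame0000.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
        frame1 = cv2.imread("frame0001.png", cv2.IMREAD_GRAYSCALE)

        x, y, w, h = 100, 50, 40, 40          # target patch in the first frame
        template = frame0[y:y + h, x:x + w]

        result = cv2.matchTemplate(frame1, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (best_x, best_y) = cv2.minMaxLoc(result)  # best-match corner
        print("displacement (px):", best_x - x, best_y - y)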

  16. Three-dimensional image signals: processing methods

    NASA Astrophysics Data System (ADS)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature investigation of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present methods for processing digital holograms for Internet transmission, along with results.
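
    For the phase-shift interferometry step mentioned above, the standard four-step reconstruction recovers the wrapped phase of the object wave from four intensity frames taken at phase shifts of 0, pi/2, pi, and 3pi/2; the frame file names below are assumptions:

        # Four-step phase-shifting reconstruction of a digital hologram's phase.
        import numpy as np
        from skimage import io

        I1, I2, I3, I4 = (io.imread(f"frame{k}.png", as_gray=True).astype(float)
                          for k in range(1, 5))
        wrapped_phase = np.arctan2(I4 - I2, I1 - I3)  # wrapped to (-pi, pi]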

  17. Support Routines for In Situ Image Processing

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect of these programs is that they are integrated into the large, in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data from space missions, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: Generates a linearized, epipolar-aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlations, (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground, (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images like long-baseline or those from an articulating arm camera, (4) marscoordtrans: Translates mosaic coordinates from one form into another, (5) marsdispcompare: Checks a left-right stereo disparity image against a right-left disparity image to ensure they are consistent with each other, (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image. For example, a right-eye image could be transformed to look like it was taken from the left eye via this program, (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the

  18. Processing infrared images of aircraft lapjoints

    NASA Technical Reports Server (NTRS)

    Syed, Hazari; Winfree, William P.; Cramer, K. E.

    1992-01-01

    Techniques for processing IR images of aging-aircraft lapjoint data are discussed. Attention is given to a technique for detecting disbonds in aircraft lapjoints that clearly delineates the disbonded region from the bonded regions. The technique is weak on unpainted aircraft skin surfaces, but this limitation can be overcome by using a self-adhering contact sheet. Neural network analysis of raw temperature data has been shown to be an effective tool for visualization of images. Numerical simulation results show the above processing technique to be effective in delineating disbonds.

  19. Image processing applications for geologic mapping

    SciTech Connect

    Abrams, M.; Blusson, A.; Carrere, V.; Nguyen, T.; Rabu, Y.

    1985-03-01

    The use of satellite data, particularly Landsat images, for geologic mapping provides the geologist with a powerful tool. The digital format of these data permits applications of image processing to extract or enhance information useful for mapping purposes. Examples are presented of lithologic classification using texture measures, automatic lineament detection and structural analysis, and use of registered multisource satellite data. In each case, the additional mapping information provided relative to the particular treatment is evaluated. The goal is to provide the geologist with a range of processing techniques adapted to specific mapping problems.

  20. Results of precision processing (scene correction) of ERTS-1 images using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Bernstein, R.

    1973-01-01

    ERTS-1 MSS and RBV data recorded on computer compatible tapes have been analyzed and processed, and preliminary results have been obtained. No degradation of intensity (radiance) information occurred in implementing the geometric correction. The quality and resolution of the digitally processed images are very good, due primarily to the fact that the number of film generations and conversions is reduced to a minimum. Processing times for digitally processed images are about equivalent to those of the NDPF electro-optical processor.

  1. FLIPS: Friendly Lisp Image Processing System

    NASA Astrophysics Data System (ADS)

    Gee, Shirley J.

    1991-08-01

    The Friendly Lisp Image Processing System (FLIPS) is the interface to Advanced Target Detection (ATD), a multi-resolutional image analysis system developed by Hughes in conjunction with the Hughes Research Laboratories. Both menu- and graphics-driven, FLIPS enhances system usability by supporting the interactive nature of research and development. Although much progress has been made, fully automated image understanding technology that is both robust and reliable is not a reality. In situations where highly accurate results are required, skilled human analysts must still verify the findings of these systems. Furthermore, the systems often require processing times several orders of magnitude greater than that needed by veteran personnel to analyze the same image. The purpose of FLIPS is to facilitate the ability of an image analyst to take statistical measurements on digital imagery in a timely fashion, a capability critical in research environments where a large percentage of time is expended in algorithm development. In many cases, this entails minor modifications or code tinkering. Without a well-developed man-machine interface, throughput is unduly constricted. FLIPS provides mechanisms which support rapid prototyping for ATD. This paper examines the ATD/FLIPS system. The philosophy of ATD in addressing image understanding problems is described, and the capabilities of FLIPS are discussed, along with a description of the interaction between ATD and FLIPS. Finally, an overview of current plans for the system is outlined.

  2. Processing Images of Craters for Spacecraft Navigation

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

    2009-01-01

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
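
    Steps 1 and 4 of the pipeline can be sketched with OpenCV as below; the Canny thresholds and the minimum contour length are assumptions, and the grouping, refinement, and quality-evaluation steps are omitted:

        # Detect edges and fit an ellipse to each sufficiently long edge contour.
        import cv2

        img = cv2.imread("crater_field.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
        edges = cv2.Canny(img, 50, 150)

        contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
        ellipses = [cv2.fitEllipse(c) for c in contours if len(c) >= 20]
        for (cx, cy), (major, minor), angle in ellipses:
            print(f"candidate at ({cx:.1f}, {cy:.1f}), axes {major:.1f} x {minor:.1f}")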

  3. [Image processing of early gastric cancer cases].

    PubMed

    Inamoto, K; Umeda, T; Inamura, K

    1992-11-25

    Computer image processing was used to enhance gastric lesions in order to improve the detection of stomach cancer. Digitization was performed in 25 cases of early gastric cancer that had been confirmed surgically and pathologically. The image processing consisted of grey scale transformation, edge enhancement (Sobel operator), and high-pass filtering (unsharp masking). Grey scale transformation improved image quality for the detection of gastric lesions. The Sobel operator enhanced linear and curved margins and suppressed other features. High-pass filtering with unsharp masking was superior for visualizing the texture pattern of the mucosa. Eight of 10 small lesions (less than 2.0 cm) were successfully demonstrated. However, the detection of two lesions in the antrum was difficult even with the aid of image enhancement. In the other 15 lesions (more than 2.0 cm), the tumor surface pattern and the margin between the tumor and non-pathological mucosa were clearly visualized. Image processing was considered to contribute to the detection of small early gastric cancer lesions by enhancing the pathological lesions.
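
    The two enhancement operators named in this record can be sketched in Python as follows; parameter values and the file name are illustrative:

        # Sobel edge enhancement and unsharp masking on a grayscale image.
        from scipy.ndimage import gaussian_filter
        from skimage import filters, io

        img = io.imread("gastric_image.png", as_gray=True).astype(float)  # hypothetical

        edges = filters.sobel(img)             # enhances linear and curved margins
        blurred = gaussian_filter(img, sigma=3)
        unsharp = img + 1.5 * (img - blurred)  # boosts fine mucosal texture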

  4. Onboard Image Processing System for Hyperspectral Sensor.

    PubMed

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-09-25

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost.
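
    The Golomb-Rice building block mentioned above can be sketched in a few lines; this shows only the plain encoder for a non-negative residual, with the FELICS context modeling and adaptive parameter selection omitted:

        # Golomb-Rice code: unary quotient, then k binary remainder bits.
        def golomb_rice_encode(value: int, k: int) -> str:
            q, r = value >> k, value & ((1 << k) - 1)
            remainder = format(r, "b").zfill(k) if k else ""
            return "1" * q + "0" + remainder

        # Small residuals get short codes, e.g. with k = 2:
        for v in (0, 1, 5, 9):
            print(v, golomb_rice_encode(v, 2))  # 0 -> 000, 1 -> 001, 5 -> 1001, ...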

  6. 3D integral imaging with optical processing

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  7. Enhanced neutron imaging detector using optical processing

    SciTech Connect

    Hutchinson, D.P.; McElhaney, S.A.

    1992-01-01

    Existing neutron imaging detectors have limited count rates due to inherent material and electronic limitations. The popular multiwire proportional counter is limited by gas recombination to a count rate of less than 10^5 n/s over the entire array, and the neutron Anger camera, even though improved with new fiber-optic encoding methods, can only achieve 10^6 cps over a limited array. We present a preliminary design for a new type of neutron imaging detector with a resolution of 2--5 mm and a count rate capability of 10^6 cps per pixel element. We propose to combine optical and electronic processing to economically increase the throughput of advanced detector systems while simplifying computing requirements. By placing a scintillator screen ahead of an optical image processor followed by a detector array, a high-throughput imaging detector may be constructed.

  8. Simplified labeling process for medical image segmentation.

    PubMed

    Gao, Mingchen; Huang, Junzhou; Huang, Xiaolei; Zhang, Shaoting; Metaxas, Dimitris N

    2012-01-01

    Image segmentation plays a crucial role in many medical imaging applications by automatically locating the regions of interest. Typically, supervised learning based segmentation methods require a large set of accurately labeled training data. However, the labeling process is tedious, time consuming, and sometimes not necessary. We propose a robust logistic regression algorithm that handles label outliers so that doctors do not need to waste time precisely labeling images for the training set. To validate its effectiveness and efficiency, we conduct carefully designed experiments on cervigram image segmentation in the presence of label outliers. Experimental results show that the proposed robust logistic regression algorithm achieves superior performance compared to previous methods, which validates the benefits of the proposed approach.

  9. MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING

    PubMed Central

    ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN

    2013-01-01

    In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963

  10. Feedback regulation of microscopes by image processing.

    PubMed

    Tsukada, Yuki; Hashimoto, Koichi

    2013-05-01

    Computational microscope systems are becoming a major part of imaging biological phenomena, and the development of such systems requires the design of automated regulation of microscopes. An important aspect of automated regulation is feedback regulation, which is the focus of this review. As modern microscope systems become more complex, often with many independent components that must work together, computer control is inevitable, since the exact orchestration of parameters and timings for these multiple components is critical to acquire proper images. A number of techniques have been developed for biological imaging to accomplish this. Here, we summarize the basics of computational microscopy for the purpose of building automatically regulated microscopes, focusing on feedback regulation by image processing. These techniques allow high-throughput data acquisition while monitoring both short- and long-term dynamic phenomena, which cannot be achieved without an automated system.

  11. Image Processing for Galaxy Ellipticity Analysis

    NASA Astrophysics Data System (ADS)

    Stankus, Paul

    2015-04-01

    Shape analysis of statistically large samples of galaxy images can be used to reveal the imprint of weak gravitational lensing by dark matter distributions. As new, large-scale surveys expand the potential catalog, galaxy shape analysis suffers the (coupled) problems of high noise and uncertainty in the prior morphology. We investigate a new image processing technique to help mitigate these problems, in which repeated auto-correlations and auto-convolutions are employed to push the true shape toward a universal (Gaussian) attractor while relatively suppressing uncorrelated pixel noise. The goal is reliable reconstruction of original image moments, independent of image shape. First test evaluations of the technique on small control samples will be presented, and future applicability discussed. Supported by the US-DOE.
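
    The attractor idea can be demonstrated numerically: repeated self-convolution drives an arbitrary compact profile toward a Gaussian (a central-limit effect) while relatively suppressing uncorrelated noise. A minimal 1-D sketch, with grid size, box width, and iteration count chosen arbitrarily:

        # Repeated self-convolution of a box profile converges toward a Gaussian.
        import numpy as np

        n = 256
        profile = np.zeros(n)
        profile[:8] = profile[-8:] = 1.0  # a box centered (circularly) at index 0

        for _ in range(3):
            spectrum = np.fft.rfft(profile)
            profile = np.fft.irfft(spectrum * spectrum, n=n)  # circular self-convolution
            profile /= profile.sum()      # keep total flux normalized
        # 'profile' is now nearly Gaussian; its second moment still carries
        # width information from the original shape.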

  12. Mariner 9-Image processing and products

    USGS Publications Warehouse

    Levinthal, E.C.; Green, W.B.; Cutts, J.A.; Jahelka, E.D.; Johansen, R.A.; Sander, M.J.; Seidman, J.B.; Young, A.T.; Soderblom, L.A.

    1973-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible, to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis.

  13. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet; the National Library of Medicine's DocView is primarily designed for library patrons. Although libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission, and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  14. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.

  15. Stochastic processes, estimation theory and image enhancement

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1978-01-01

    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed. The appendices discuss the remaining mathematical background.

  16. Progressive band processing for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Schultz, Robert C.

    Hyperspectral imaging has emerged as an image processing technique in many applications. Hyperspectral data is so called mainly because of the massive amount of information provided by the hundreds of spectral bands that can be used for data analysis. However, due to very high band-to-band correlation, much of this information may also be redundant. Consequently, how to effectively and best utilize such rich spectral information becomes very challenging. One general approach is data dimensionality reduction, which can be performed by data compression techniques, such as data transforms, and data reduction techniques, such as band selection. This dissertation presents a new area in hyperspectral imaging, called progressive hyperspectral imaging, which has not been explored in the past. Specifically, it derives a new theory, called Progressive Band Processing (PBP), of hyperspectral data that can significantly reduce computing time and can also be realized in real time. It is particularly suited for application areas such as hyperspectral data communications and transmission, where data can be communicated and transmitted progressively through spectral or satellite channels with limited data storage. Most importantly, PBP allows users to screen preliminary results before deciding to continue with processing the complete data set. These advantages benefit users of hyperspectral data by reducing processing time and increasing the timeliness of crucial decisions made on the data, such as identifying key intelligence information when the required response time is short.

  17. Image Processing Using a Parallel Architecture.

    DTIC Science & Technology

    1987-12-01

    Computer," Byte, 3: 14-25 (December 1978). McGraw-Hill, 1985 24. Trussell, H. Joel . "Processing of X-ray Images," Proceedings of the IEEE, 69: 615-627...Services Electronics Program contract N00014-79-C-0424 (AD-085-846). 107 Therrien , Charles W. et al. "A Multiprocessor System for Simulation of

  18. Selecting optimum algorithms for image processing

    NASA Technical Reports Server (NTRS)

    Jaroe, R. R.; Hodges, J.; Atkinson, R. E.; Gaggini, B.; Callas, L.; Peterson, J.

    1981-01-01

    Collection of registration, compression, and classification algorithms allows users to evaluate approaches and select best one for particular application. Program includes six registration algorithms, six compression algorithms, and two classification algorithms. Package also includes routines for evaluating effects of processing on image data. Collection is written in FORTRAN IV for batch execution.

  19. Hemispheric superiority for processing a mirror image.

    PubMed

    Garren, R B; Gehlsen, G M

    1981-04-01

    39 adult subjects were administered a test using tachistoscopic half-field presentations to determine hemispheric dominance and a mirror-tracing task to determine whether a hemispheric superiority exists for processing a mirror image. The results indicate superiority of the nondominant hemisphere for this task.

  20. Wavelet-aided pavement distress image processing

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Huang, Peisen S.; Chiang, Fu-Pen

    2003-11-01

    A wavelet-based pavement distress detection and evaluation method is proposed. This method consists of two main parts: real-time processing for distress detection and offline processing for distress evaluation. The real-time processing part includes wavelet transform, distress detection and isolation, and image compression and noise reduction. When a pavement image is decomposed into different frequency subbands by the wavelet transform, the distresses, which are usually irregular in shape, appear as high-amplitude wavelet coefficients in the high-frequency detail subbands, while the background appears in the low-frequency approximation subband. Two statistical parameters, high-amplitude wavelet coefficient percentage (HAWCP) and high-frequency energy percentage (HFEP), are established and used as criteria for real-time distress detection and distress image isolation. For compression of isolated distress images, a modified EZW (Embedded Zerotrees of Wavelet coding) is developed, which can simultaneously compress the images and reduce the noise. The compressed data are saved to the hard drive for further analysis and evaluation. The offline processing includes distress classification, distress quantification, and reconstruction of the original image for distress segmentation, distress mapping, and maintenance decision-making. The compressed data are first loaded and decoded to obtain the wavelet coefficients. The Radon transform is then applied, and the parameters related to the peaks in the Radon domain are used for distress classification. For distress quantification, a norm is defined that can be used as an index for evaluating the severity and extent of the distress. Compared to visual or manual inspection, the proposed method has the advantages of being objective, high-speed, safe, automated, and applicable to different types of pavements and distresses.
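
    The two real-time detection statistics can be sketched with PyWavelets; the wavelet choice, the single decomposition level, and the coefficient threshold below are illustrative assumptions:

        # Compute HAWCP and HFEP from a one-level 2-D wavelet decomposition.
        import numpy as np
        import pywt
        from skimage import io

        img = io.imread("pavement.png", as_gray=True).astype(float)  # hypothetical

        cA, (cH, cV, cD) = pywt.dwt2(img, "db2")
        detail = np.concatenate([c.ravel() for c in (cH, cV, cD)])

        hawcp = np.mean(np.abs(detail) > 10.0)  # high-amplitude coefficient share
        hfep = (detail ** 2).sum() / ((detail ** 2).sum() + (cA ** 2).sum())
        print(f"HAWCP = {hawcp:.4f}, HFEP = {hfep:.4f}")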

  1. Limiting liability via high resolution image processing

    SciTech Connect

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as "evidence ready", even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed as usable evidence. Visualization scientists have taken digital photographic image processing and moved crime scene photography into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement addresses one major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.

  2. Pattern Recognition and Image Processing of Infrared Astronomical Satellite Images

    NASA Astrophysics Data System (ADS)

    He, Lun Xiong

    1996-01-01

    The Infrared Astronomical Satellite (IRAS) images with wavelengths of 60 mu m and 100 mu m contain mainly information on both extra-galactic sources and low-temperature interstellar media. The low-temperature interstellar media in the Milky Way impose a "cirrus" screen on IRAS images, especially in images at the 100 mu m wavelength. This dissertation deals with techniques for removing the "cirrus" clouds from the 100 mu m band in order to achieve accurate determinations of point sources and their intensities (fluxes). We employ an image filtering process which utilizes mathematical morphology and wavelet analysis as the key tools in removing the "cirrus" foreground emission. The filtering process consists of extraction and classification of the size information, and then using the classification results in removal of the cirrus component from each pixel of the image. Extraction of size information is the most important step in this process. It is achieved by either mathematical morphology or wavelet analysis. In the mathematical morphological method, extraction of size information is done using the "sieving" process. In the wavelet method, multi-resolution techniques are employed instead. The classification of size information distinguishes extra-galactic sources from cirrus using their averaged size information. The cirrus component for each pixel is then removed by using the averaged cirrus size information. The filtered image contains much less cirrus. Intensity alterations for extra-galactic sources in the filtered image are discussed. It is possible to retain the fluxes of the point sources when we weight the cirrus component differently pixel by pixel. The importance of uni-directional size information extraction is addressed in this dissertation. Such uni-directional extraction is achieved by constraining the structuring elements, or by constraining the sieving process to be sequential. The generalizations of mathematical morphology operations based
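
    The "sieving" extraction of size information can be illustrated with grayscale openings of increasing radius, where the residue between successive openings isolates one size class; this is a generic granulometry sketch, and the dissertation's sequential and uni-directional variants differ in the structuring elements used:

        # Morphological sieving: successive openings separate size classes.
        from skimage import io, morphology

        img = io.imread("iras_100um.png", as_gray=True).astype(float)  # hypothetical

        previous = img
        size_bands = []
        for radius in (1, 2, 4, 8, 16):
            opened = morphology.opening(img, morphology.disk(radius))
            size_bands.append(previous - opened)  # structures in one size class
            previous = opened
        background = previous  # large-scale, cirrus-like residual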

  3. Image processing techniques for passive millimeter-wave imaging

    NASA Astrophysics Data System (ADS)

    Lettington, Alan H.; Gleed, David G.

    1998-08-01

    We present our results on the application of image processing techniques for passive millimeter-wave imaging and discuss possible future trends. Passive millimeter-wave imaging is useful in poor weather such as in fog and cloud. Its spatial resolution, however, can be restricted due to the diffraction limit of the front aperture. Its resolution may be increased using super-resolution techniques but often at the expense of processing time. Linear methods may be implemented in real time but non-linear methods which are required to restore missing spatial frequencies are usually more time consuming. In the present paper we describe fast super-resolution techniques which are potentially capable of being applied in real time. Associated issues such as reducing the influence of noise and improving recognition capability will be discussed. Various techniques have been used to enhance passive millimeter wave images giving excellent results and providing a significant quantifiable increase in spatial resolution. Examples of applying these techniques to imagery will be given.

  4. Visual parameter optimisation for biomedical image processing.

    PubMed

    Pretorius, A J; Zhou, Y; Ruddle, R A

    2015-01-01

    Biomedical image processing methods require users to optimise input parameters to ensure high-quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output. We present a visualisation method that transforms users' ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm. The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches.

  6. Subband/transform functions for image processing

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
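
    A minimal NumPy sketch of the 2x2 Walsh-Hadamard block transform and its rearrangement into four subbands follows (Python is used here for consistency with the other sketches in this collection; the original functions are MATLAB):

        # 2x2 block transform: the four polyphase samples of each block combine
        # into LL (low-resolution), LH, HL, and HH (edge detail) subbands.
        def block_transform_subbands(img):
            """img: 2-D NumPy array with even height and width."""
            a = img[0::2, 0::2].astype(float)  # top-left pixel of each 2x2 block
            b = img[0::2, 1::2].astype(float)  # top-right
            c = img[1::2, 0::2].astype(float)  # bottom-left
            d = img[1::2, 1::2].astype(float)  # bottom-right
            ll = (a + b + c + d) / 2.0         # low-frequency subband
            lh = (a - b + c - d) / 2.0         # horizontal detail
            hl = (a + b - c - d) / 2.0         # vertical detail
            hh = (a - b - c + d) / 2.0         # diagonal detail
            return ll, lh, hl, hh

        # Cascading on LL gives the octave (seven-band) structure; cascading on
        # all four bands gives the uniform sixteen-band structure.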

  7. UV image processing to detect diffuse clouds

    NASA Astrophysics Data System (ADS)

    Armengot, M.; Gómez de Castro, A. I.; López-Santiago, J.; Sánchez-Doreste, N.

    2015-05-01

    The presence of diffuse clouds throughout the Galaxy is under investigation because they are related to stellar formation and their physical properties are not well understood. The signal received from most of these structures in UV images is minimal compared with that of point sources, and the noise in these images makes analysis hard because the signal-to-noise ratio is proportionally much lower in these areas. However, digital processing of the images shows that it is possible to enhance and target these clouds. Typically, this kind of treatment is done ad hoc for specific research areas, and the astrophysicist's work depends on the computer tools and their capabilities for enhancing a particular area based on prior knowledge. Automating this step is the goal of our work, to make the study of these structures in UV images easier. In particular, we have used GALEX survey images with the aim of learning to detect such clouds automatically, enabling unsupervised detection and graphic enhancement to log them. Our experiments show evidence in the UV images that allows systematic computation and opens the possibility of generalizing the algorithm to find these structures in regions of the sky where they have not yet been recorded.

  8. MRI Image Processing Based on Fractal Analysis

    PubMed

    Marusina, Mariya Y; Mochalina, Alexandra P; Frolova, Ekaterina P; Satikov, Valentin I; Barchuk, Anton A; Kuznetcov, Vladimir I; Gaidukov, Vadim S; Tarakanov, Segrey A

    2017-01-01

    Background: Cancer is one of the most common causes of human mortality, with about 14 million new cases and 8.2 million deaths reported in 2012. Early diagnosis of cancer through screening allows interventions to reduce mortality. Fractal analysis of medical images may be useful for this purpose. Materials and Methods: In this study, we examined magnetic resonance (MR) images of healthy livers and livers containing metastases from colorectal cancer. The fractal dimension and the Hurst exponent were chosen as diagnostic features for tomographic imaging, using the ImageJ software package for image processing; the FracLac plugin was applied for fractal analysis with a 120x150 pixel area. Calculations of the fractal dimensions of pathological and healthy tissue samples were performed using the box-counting method. Results: In pathological cases (foci formation), the Hurst exponent was less than 0.5 (the region of unstable statistical characteristics). For healthy tissue, the Hurst exponent was greater than 0.5 (the zone of stable characteristics). Conclusions: The study indicated the possibility of employing rapid fractal analysis for the detection of focal lesions of the liver. The Hurst exponent can be used as an important diagnostic characteristic for the analysis of medical images.
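
    The box-counting method named above can be sketched in NumPy as follows; the box sizes are arbitrary choices and the input is assumed to be a binarized lesion mask:

        # Box-counting fractal dimension: slope of log(count) vs. log(1/size).
        import numpy as np

        def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
            counts = []
            for s in sizes:
                h, w = mask.shape
                # Partition the (cropped) mask into s x s boxes; count non-empty ones.
                boxes = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
                counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
            return slope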

  9. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the LIGIV laboratory concern the capture, processing, archiving, and display of color images, considering the trichromatic nature of the human visual system (HVS). One of these projects addresses digital cinematographic film sequences of high resolution and dynamic range, and aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimize the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimizing consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Region of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, first, the definition of a reference color space and of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera color primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but must additionally consider mesopic viewing conditions.

  10. [Digital thoracic radiology: devices, image processing, limits].

    PubMed

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Since computed radiography with phosphor plates is the most widely commercialized, it receives the most emphasis, but the other detectors are also described: the selenium-coated drum, direct digital radiography with selenium detectors, indirect flat-panel detectors, and a system with four high-resolution CCD cameras. In the second part, the most important image processing methods are discussed: gradation curves, unsharp-mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. In the last part, the advantages and drawbacks of computed thoracic radiography are discussed; the most important advantages are the consistently good image quality and the possibilities for image processing.

  11. Image processing via VLSI: A concept paper

    NASA Technical Reports Server (NTRS)

    Nathan, R.

    1982-01-01

    Implementing specific image processing algorithms via very large scale integrated (VLSI) systems offers a potent solution to the problem of handling high data rates. Two algorithms stand out as being particularly critical: geometric map transformation and filtering or correlation. These two functions form the basis for data calibration, registration, and mosaicking. VLSI presents itself as an inexpensive ancillary function to be added to almost any general purpose computer, and if the geometry and filter algorithms are implemented in VLSI, the processing-rate bottleneck would be significantly relieved. A development effort is outlined that identifies the image processing functions that limit present systems in meeting future throughput needs, translates these functions into algorithms, implements the algorithms via VLSI technology, and interfaces the hardware to a general purpose digital computer.

  12. Gaia astrometric instrument calibration and image processing

    NASA Astrophysics Data System (ADS)

    Castaneda, J.; Fabricius, C.; Portell, J.; Garralda, N.; González-Vidal, J. J.; Clotet, M.; Torra, J.

    2017-03-01

    The astrometric instrument calibration and image processing is an integral and critical part of the Gaia mission. The data processing starts with a preliminary treatment, on a daily basis, of the most recently received data, and continues with the execution of several processing chains included in a cyclic reduction system. The cyclic processing chains reprocess all the accumulated data in each iteration, adding the latest measurements and recomputing the outputs to obtain better quality in the results. This cyclic processing continues until convergence of the results is achieved, and the catalogue is consolidated and published periodically. In this paper we describe the core of the data processing, which has made possible the first catalogue release from the Gaia mission.

  13. Adult with sacral lipomyelomeningocele covered by an anomalous bone articulated with iliac bone: computed tomography and magnetic resonance images.

    PubMed

    Lee, Seung Hwa; Je, Bo-Kyung; Kim, Sung-Bum; Kim, Baek Hyun

    2012-06-01

    The present paper reports and discusses a case of sacral lipomyelomeningocele with an anomalous long bone articulating with the left iliac bone in a 40-year-old female. The patient had a monozygotic twin sister with a normal spine. The findings were incidental, made during an evaluation for a urinary tract infection. The computed tomography (CT) and magnetic resonance (MR) images revealed sacral dysraphism, lipomyelomeningocele, a tethered spinal cord, and profound subcutaneous fat in the sacrococcygeal region. In addition, an anomalous bony strut was demonstrated on the posterior aspect of the sacrum, covering the sacral defect and the associated lipomyelomeningocele. The 3-D CT images of the anomalous bone associated with the sacral lipomyelomeningocele and the putative embryologic process are presented with a review of the literature.

  14. Architecture for processing image algebra operations

    NASA Astrophysics Data System (ADS)

    Coffield, Patrick C.

    1992-06-01

    The proposed architecture is a logical design specifically for image algebra and other matrix-related operations. The design is a fine-grain SIMD concept consisting of three tightly coupled components: a spatial configuration processor, a weighting processor (point-wise), and an accumulation processor (point-wise). The flow of data and image processing operations is directed by a control buffer and pipelined to each of the three processing components. The low-level abstraction of the proposed computational system is founded on the mathematical principle of discrete convolution and its geometrical decomposition. This geometrical decomposition, combined with array processing, requires redefining specific algebraic operations and reorganizing their order of parsing in the abstract syntax. The logical data flow of such an abstraction leads to a division of operations: those defined by point-wise operations and those defined in terms of spatial configuration. The effect of this particular decomposition is that convolution-type operations can be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. The potential utility of this architectural design lies in its ability to provide order-statistic filtering and all the arithmetic and logic operations of the image algebra's generalized matrix product. The generalized matrix product is the most powerful fundamental formulation in the algebra, thus allowing a wide range of applications.

  15. Neural image processing by dendritic networks.

    PubMed

    Cuntz, Hermann; Haag, Jürgen; Borst, Alexander

    2003-09-16

    Convolution is one of the most common operations in image processing. Based on experimental findings on motion-sensitive visual interneurons of the fly, we show by realistic compartmental modeling that a dendritic network can implement this operation. In a first step, dendritic electrical coupling between two cells spatially blurs the original motion input. The blurred motion image is then passed onto a third cell via inhibitory dendritic synapses resulting in a sharpening of the signal. This enhancement of motion contrast may be the central element of figure-ground discrimination based on relative motion in the fly.
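
    The blur-then-sharpen pipeline described in this abstract maps naturally onto a pair of discrete convolutions. The sketch below is a minimal illustration, not the paper's compartmental model: both kernels are assumed, standing in for the electrical coupling (blur) and the inhibitory synapses (sharpen).

```python
import numpy as np

def dendritic_blur_sharpen(signal):
    """Blur a 1-D motion input (electrical coupling between cells),
    then sharpen it (inhibitory dendritic synapses)."""
    blur = np.array([0.25, 0.5, 0.25])      # assumed spatial-blurring kernel
    sharpen = np.array([-0.5, 2.0, -0.5])   # assumed center-surround inhibition
    blurred = np.convolve(signal, blur, mode="same")
    return np.convolve(blurred, sharpen, mode="same")

x = np.zeros(32)
x[16] = 1.0                                 # toy motion input
y = dendritic_blur_sharpen(x)
```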

  16. Advanced communications technologies for image processing

    NASA Technical Reports Server (NTRS)

    Likens, W. C.; Jones, H. W.; Shameson, L.

    1984-01-01

    It is essential for image analysts to have the capability to link to remote facilities as a means of accessing both data bases and high-speed processors. This can increase productivity through enhanced data access and minimization of delays. New technology is emerging to provide the high communication data rates needed in image processing. These developments include multi-user sharing of high bandwidth (60 megabits per second) Time Division Multiple Access (TDMA) satellite links, low-cost satellite ground stations, and high speed adaptive quadrature modems that allow 9600 bit per second communications over voice-grade telephone lines.

  17. Image processing with JPEG2000 coders

    NASA Astrophysics Data System (ADS)

    Śliwiński, Przemysław; Smutnicki, Czesław; Chorażyczewski, Artur

    2008-04-01

    In this note, several wavelet-based image processing algorithms are presented. The denoising algorithm is derived from Donoho's thresholding. The rescaling algorithm reuses the subdivision scheme of Sweldens' lifting, and a sensor linearization procedure exploits system identification algorithms developed for nonlinear dynamic systems. The proposed autofocus algorithm is a passive one: it works in the wavelet domain and relies on properties of the lens transfer function. The common advantage of these algorithms is that they can easily be implemented within a JPEG2000 standard encoder, simplifying the final circuitry (or software package) and reducing the power consumption (or program size) compared to solutions based on separate components.
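
    A minimal sketch of Donoho-style wavelet denoising, assuming the PyWavelets package; the biorthogonal 'bior4.4' wavelet is chosen here only because it matches the JPEG2000 9/7 filter family, and the threshold rule is the standard universal threshold rather than anything specific to this paper.

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="bior4.4", level=3):
    """Soft-threshold detail coefficients with Donoho's universal threshold."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Estimate noise sigma from the finest diagonal detail band (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(img.size))       # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, t, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```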

  18. EOS image data processing system definition study

    NASA Technical Reports Server (NTRS)

    Gilbert, J.; Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

    1973-01-01

    The Image Processing System (IPS) requirements and configuration are defined for the NASA-sponsored advanced technology Earth Observatory System (EOS). The scope included investigation and definition of IPS operational, functional, and product requirements, considering overall system constraints and interfaces (sensor, etc.). The scope also included investigation of the technical feasibility, and definition, of a point design reflecting system requirements. The design phase required a survey of present and projected technology related to general- and special-purpose processors, high-density digital tape recorders, and image recorders.

  19. Translational motion compensation in ISAR image processing.

    PubMed

    Wu, H; Grenier, D; Delisle, G Y; Fang, D G

    1995-01-01

    In inverse synthetic aperture radar (ISAR) imaging, the target's rotational motion with respect to the radar line of sight contributes to the imaging ability, whereas the translational motion must be compensated for. This paper presents a novel two-step approach to translational motion compensation, using an adaptive range tracking method for range-bin alignment and a recursive multiple-scatterer algorithm (RMSA) for signal phase compensation. The initial step of RMSA is equivalent to the dominant-scatterer algorithm (DSA). An error-compensating point source is then recursively synthesized from the selected range bins, each of which contains a prominent scatterer. Since the clutter-induced phase errors are reduced by phase averaging, the image speckle noise can be reduced significantly. Experimental data processing for a commercial aircraft and computer simulations confirm the validity of the approach.
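
    The range-bin alignment step can be sketched with a plain integer-bin circular correlation, a simplification of the paper's adaptive range tracking; the RMSA phase-compensation stage is omitted here.

```python
import numpy as np

def align_range_profiles(profiles):
    """Align each complex range profile to the first one by the integer
    range-bin shift that maximizes the circular cross-correlation of
    profile magnitudes (simplified range-bin alignment)."""
    ref_mag = np.abs(profiles[0])
    aligned = [profiles[0]]
    for p in profiles[1:]:
        xc = np.fft.ifft(np.fft.fft(ref_mag) * np.conj(np.fft.fft(np.abs(p))))
        shift = int(np.argmax(np.abs(xc)))
        aligned.append(np.roll(p, shift))
    return np.array(aligned)
```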

  20. Computer image processing in marine resource exploration

    NASA Technical Reports Server (NTRS)

    Paluzzi, P. R.; Normark, W. R.; Hess, G. R.; Hess, H. D.; Cruickshank, M. J.

    1976-01-01

    Pictographic data, or imagery, is commonly used in marine exploration. Pre-existing image processing techniques (software) similar to those used on imagery obtained from unmanned planetary exploration were used to improve marine photography and side-scan sonar imagery. Features and details not visible by conventional photo processing methods were enhanced by filtering and noise removal on selected deep-sea photographs. Information gained near the periphery of photographs allows improved interpretation and facilitates construction of bottom mosaics where overlapping frames are available. Similar processing techniques were applied to side-scan sonar imagery, including corrections for slant-range distortion and along-track scale changes. The use of digital data processing and storage techniques greatly extends the quantity of information that can be handled, stored, and processed.

  1. IMAGE 100: The interactive multispectral image processing system

    NASA Technical Reports Server (NTRS)

    Schaller, E. S.; Towles, R. W.

    1975-01-01

    The need for rapid, cost-effective extraction of useful information from vast quantities of multispectral imagery available from aircraft or spacecraft has resulted in the design, implementation and application of a state-of-the-art processing system known as IMAGE 100. Operating on the general principle that all objects or materials possess unique spectral characteristics or signatures, the system uses this signature uniqueness to identify similar features in an image by simultaneously analyzing signatures in multiple frequency bands. Pseudo-colors, or themes, are assigned to features having identical spectral characteristics. These themes are displayed on a color CRT, and may be recorded on tape, film, or other media. The system was designed to incorporate key features such as interactive operation, user-oriented displays and controls, and rapid-response machine processing. Owing to these features, the user can readily control and/or modify the analysis process based on his knowledge of the input imagery. Effective use can be made of conventional photographic interpretation skills and state-of-the-art machine analysis techniques in the extraction of useful information from multispectral imagery. This approach results in highly accurate multitheme classification of imagery in seconds or minutes rather than the hours often involved in processing using other means.

  2. Multidimensional energy operator for image processing

    NASA Astrophysics Data System (ADS)

    Maragos, Petros; Bovik, Alan C.; Quatieri, Thomas F.

    1992-11-01

    The 1-D nonlinear differential operator Ψ(f) = (f′)² − f·f″ has recently been introduced to signal processing and has been found very useful for estimating the parameters of sinusoids and the modulating signals of AM-FM signals. It is called an energy operator because it can track the energy of an oscillator source generating a sinusoidal signal. In this paper we introduce the multidimensional extension Φ(f) = ‖∇f‖² − f·∇²f of the 1-D energy operator and briefly outline some of its applications to image processing. We discuss some interesting properties of the multidimensional operator and develop demodulation algorithms to estimate the amplitude envelope and instantaneous frequencies of 2-D spatially varying AM-FM signals, which can model image texture. The attractive features of the multidimensional operator and the related amplitude/frequency demodulation algorithms are their simplicity, efficiency, and ability to track instantaneously varying spatial modulation patterns.
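
    In discrete form the 1-D operator is commonly written Ψ[x](n) = x(n)² − x(n−1)·x(n+1). The sketch below applies that form along each image axis and sums the results, which is one common discretization of the multidimensional operator, not necessarily the exact scheme used in the paper.

```python
import numpy as np

def teager_1d(x):
    """Discrete 1-D energy operator: Psi[x](n) = x(n)^2 - x(n-1)*x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def teager_2d(f):
    """2-D extension: apply the 1-D operator along rows and columns, then sum."""
    f = f.astype(float)
    psi = np.zeros_like(f)
    psi[1:-1, :] += f[1:-1, :] ** 2 - f[:-2, :] * f[2:, :]
    psi[:, 1:-1] += f[:, 1:-1] ** 2 - f[:, :-2] * f[:, 2:]
    return psi
```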

  3. Sorting Olive Batches for the Milling Process Using Image Processing

    PubMed Central

    Puerto, Daniel Aguilera; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan

    2015-01-01

    The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial for reaching the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, that classifies different batches of olives entering the milling process. The classification is based on differentiating between ground and tree olives. For this purpose, three different species have been studied (Picudo, Picual and Hojiblanco). The samples were obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples was obtained from the olive image histograms. Moreover, different image preprocessing steps were employed, and two classification techniques were used: discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results. PMID:26147729
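
    A sketch of the histogram-features-plus-discriminant-analysis pipeline described above, assuming scikit-learn; the bin count, toy data, and labels are illustrative and not taken from the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def histogram_features(image, bins=32):
    """Concatenate per-channel normalized histograms into one feature vector."""
    return np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 255), density=True)[0]
        for c in range(image.shape[-1])
    ])

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64, 3))   # toy olive images
labels = rng.integers(0, 2, size=40)                  # 0 = ground, 1 = tree
X = np.array([histogram_features(im) for im in images])
clf = LinearDiscriminantAnalysis().fit(X, labels)     # discriminant analysis
```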

  4. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Blankenhorn, D. H.; Beckenbach, E. S.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    A computer image processing technique was developed to estimate the degree of atherosclerosis in the human femoral artery. With an angiographic film of the vessel as input, the computer was programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements were combined into an atherosclerosis index, which was found to correlate well with both visual and chemical estimates of atherosclerotic disease.

  5. IPLIB (Image processing library) user's manual

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.; Monteith, J. H.; Miller, K.

    1985-01-01

    IPLIB is a collection of HP FORTRAN 77 subroutines and functions that facilitate the use of a COMTAL image processing system driven by an HP-1000 computer. It is intended for programmers who want to use the HP 1000 to drive the COMTAL Vision One/20 system. It is assumed that the programmer knows HP 1000 FORTRAN 77 or at least one FORTRAN dialect. It is also assumed that the programmer has some familiarity with the COMTAL Vision One/20 system.

  6. Novel image processing approach to detect malaria

    NASA Astrophysics Data System (ADS)

    Mas, David; Ferrer, Belen; Cojoc, Dan; Finaurini, Sara; Mico, Vicente; Garcia, Javier; Zalevsky, Zeev

    2015-09-01

    In this paper we present a novel image processing algorithm providing good preliminary capabilities for in vitro detection of malaria. The proposed concept is based upon analysis of the temporal variation of each pixel. Changes in dark pixels indicate intracellular activity, and hence the presence of the malaria parasite inside the cell. Preliminary experimental results involving the analysis of red blood cells that were either healthy or infected with malaria parasites validated the potential benefit of the proposed numerical approach.
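
    A minimal sketch of the per-pixel temporal analysis idea: compute each pixel's variation across the frame stack and flag unusually active pixels. The threshold heuristic is an assumption for illustration, not the paper's criterion.

```python
import numpy as np

def temporal_activity_map(frames):
    """Per-pixel temporal standard deviation over a stack of frames.
    frames: array of shape (T, H, W); returns the map and a binary mask."""
    activity = frames.astype(float).std(axis=0)
    threshold = activity.mean() + 2.0 * activity.std()  # assumed heuristic
    return activity, activity > threshold
```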

  7. Color Image Processing and Object Tracking System

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high-resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system, including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

  8. Laplacian forests: semantic image segmentation by guided bagging.

    PubMed

    Lombaert, Herve; Zikic, Darko; Criminisi, Antonio; Ayache, Nicholas

    2014-01-01

    This paper presents a new, efficient and accurate technique for the semantic segmentation of medical images. The paper builds upon the successful random decision forests model and improves on it by modifying the way in which randomness is injected into the tree training process. The contribution of this paper is two-fold. First, we replace the conventional bagging procedure (the uniform sampling of training images) with a guided bagging approach, which exploits the inherent structure and organization of the training image set. This allows the creation of decision trees that are specialized to a specific sub-type of images in the training set. Second, the segmentation of a previously unseen image happens via selection and application of only the trees that are relevant to the given test image. Tree selection is done automatically, via a learned image embedding, more precisely a Laplacian eigenmap. We therefore call the proposed approach Laplacian Forests. We validate Laplacian Forests on a dataset of 256 manually segmented 3D CT scans of patients showing high variability in scanning protocols, resolution, body shape and anomalies. Compared with conventional decision forests, Laplacian Forests yield both higher training efficiency, due to the local analysis of the training image space, and higher segmentation accuracy, due to the specialization of the forest to image sub-types.
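
    The tree-selection idea rests on a Laplacian-eigenmap embedding of the training images. A rough sketch assuming scikit-learn and toy per-scan feature vectors (SpectralEmbedding implements Laplacian eigenmaps; the actual forest specialization and test-time embedding are more involved):

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
scan_features = rng.normal(size=(256, 40))   # toy appearance features per scan

# Embed the training scans with a Laplacian eigenmap.
coords = SpectralEmbedding(n_components=2, n_neighbors=10).fit_transform(scan_features)

# For a query (here: the first training scan, since SpectralEmbedding has no
# out-of-sample transform), select trees trained on its embedded neighborhood.
nn = NearestNeighbors(n_neighbors=20).fit(coords)
_, neighbor_idx = nn.kneighbors(coords[:1])  # indices of relevant training scans
```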

  9. Automated synthesis of image processing procedures using AI planning techniques

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.

  10. ESO Image Processing Group: MIDAS Memo

    NASA Astrophysics Data System (ADS)

    1986-09-01

    The ESO scientific computer facilities were moved into new rooms in the extension of the ESO Headquarters in Garching. The machines in the old computer room were disconnected and moved to the new location in the basement of the new wing on July 16. Since there is no large elevator down to the room, all big items like computer racks, disk drives and tape units had to be taken out of the building and lowered by crane, as seen in Figure 1. The new computer room now contains most of the scientific computer equipment at ESO, including the two VAX 8600 computers, the IHAP HP system, the IDM 500 database machine, and peripherals such as disk drives, terminal multiplexers and the DICOMED image recording unit (see Figure 2). The ESO archive will also be placed in this room, which is fully air-conditioned and fire-protected by a halon system. The magnetic tape drives are located in an adjacent room with general access. The user room is now also located in the new wing on the entrance level. All IHAP and MIDAS image processing workstations are placed there, in addition to a number of publicly available terminals connected to the VAXes. Figure 3 shows the half of the user room that is dedicated to MIDAS stations. Furthermore, the main printing and plotting facilities are in a central section of the user room. The Image Processing Group has also moved to new offices just above the user room.

  11. Vector processing enhancements for real-time image analysis.

    SciTech Connect

    Shoaf, S.; APS Engineering Support Division

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  12. The Airborne Ocean Color Imager - System description and image processing

    NASA Technical Reports Server (NTRS)

    Wrigley, Robert C.; Slye, Robert E.; Klooster, Steven A.; Freedman, Richard S.; Carle, Mark; Mcgregor, Lloyd F.

    1992-01-01

    The Airborne Ocean Color Imager was developed as an aircraft instrument to simulate the spectral and radiometric characteristics of the next generation of satellite ocean color instrumentation. Data processing programs have been developed as extensions of the Coastal Zone Color Scanner algorithms for atmospheric correction and bio-optical output products. The latter include several bio-optical algorithms for estimating phytoplankton pigment concentration, as well as one for the diffuse attenuation coefficient of the water. Additional programs have been developed to geolocate these products and remap them into a georeferenced data base, using data from the aircraft's inertial navigation system. Examples illustrate the sequential data products generated by the processing system, using data from flightlines near the mouth of the Mississippi River: from raw data to atmospherically corrected data, to bio-optical data, to geolocated data, and, finally, to georeferenced data.

  13. Development of the SOFIA Image Processing Tool

    NASA Technical Reports Server (NTRS)

    Adams, Alexander N.

    2011-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5 meter infrared telescope capable of operating at altitudes between twelve and fourteen kilometers, above more than 99 percent of the water vapor in the atmosphere. The ability to make observations above most of the water vapor, coupled with the ability to make observations from anywhere at any time, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible-light CCD imagers to assist in pointing the telescope. The data from these imagers are stored in archive files, as are housekeeping data containing information such as boresight and area-of-interest locations. A tool that could both extract and process data from the archive files was developed.

  14. HYMOSS signal processing for pushbroom spectral imaging

    NASA Technical Reports Server (NTRS)

    Ludwig, David E.

    1991-01-01

    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal-plane electronics that compensate for detector array non-uniformities. The approach taken was to implement a simple two-point calibration algorithm on the focal plane, which allows for offset and linear gain correction. The key on-focal-plane features that made this technique feasible were the use of a high quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate-and-dump TIA. Offset correction is performed by storing offsets in a special on-focal-plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program, proving that nonuniformity-compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated these innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future ICs because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, and a reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems, which may be used for security monitoring, manufacturing process monitoring, robotics, and spectral imaging in analytical instrumentation.
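
    The two-point (offset plus linear gain) correction is straightforward to state in software terms. A minimal sketch, assuming a dark (zero-flux) and a flat (uniform-flux) reference exposure per detector; the names and calibration protocol are illustrative:

```python
import numpy as np

def two_point_calibration(raw, dark, flat):
    """Offset/gain non-uniformity correction from two reference exposures.
    `dark` supplies per-detector offsets; `flat` supplies per-detector gains."""
    response = flat - dark                  # per-detector responsivity
    gain = response.mean() / response       # normalize to the mean response
    return (raw - dark) * gain
```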

  15. Survey: interpolation methods in medical image processing.

    PubMed

    Lehmann, T M; Gönner, C; Spitzer, K

    1999-11-01

    Image interpolation techniques often are required in medical imaging for image generation (e.g., discrete back projection for inverse Radon transform) and processing such as compression or resampling. Since the ideal interpolation function spatially is unlimited, several interpolation kernels of finite size have been introduced. This paper compares 1) truncated and windowed sinc; 2) nearest neighbor; 3) linear; 4) quadratic; 5) cubic B-spline; 6) cubic; 7) Lagrange; and 8) Gaussian interpolation and approximation techniques with kernel sizes from 1 x 1 up to 8 x 8. The comparison is done by: 1) spatial and Fourier analyses; 2) computational complexity as well as runtime evaluations; and 3) qualitative and quantitative interpolation error determinations for particular interpolation tasks which were taken from common situations in medical image processing. For local and Fourier analyses, a standardized notation is introduced and fundamental properties of interpolators are derived. Successful methods should be direct current (DC)-constant and interpolators rather than DC-inconstant or approximators. Each method's parameters are tuned with respect to those properties. This results in three novel kernels, which are introduced in this paper and proven to be within the best choices for medical image interpolation: the 6 x 6 Blackman-Harris windowed sinc interpolator, and the C2-continuous cubic kernels with N = 6 and N = 8 supporting points. For quantitative error evaluations, a set of 50 direct digital X-rays was used. They were selected arbitrarily from clinical routine. In general, large kernel sizes were found to be superior to small interpolation masks. Except for truncated sinc interpolators, all kernels with N = 6 or larger sizes perform significantly better than N = 2 or N = 3 point methods (p < 0.005). However, the differences within the group of large-sized kernels were not significant. Summarizing the results, the cubic 6 x 6 interpolator with continuous
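
    The flavor of such kernel comparisons is easy to reproduce with SciPy's spline-based resampling, where the spline order stands in for the kernel family; this is a simplified round-trip error measurement, not the paper's protocol.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
img = rng.random((64, 64))

# Rotate forward and back with different interpolation orders and
# measure the round-trip error each kernel introduces.
for order, name in [(0, "nearest"), (1, "linear"), (3, "cubic B-spline")]:
    fwd = ndimage.rotate(img, 10, reshape=False, order=order)
    back = ndimage.rotate(fwd, -10, reshape=False, order=order)
    err = np.sqrt(np.mean((img[8:-8, 8:-8] - back[8:-8, 8:-8]) ** 2))
    print(f"{name}: round-trip RMSE = {err:.4f}")
```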

  16. A New Image Processing and GIS Package

    NASA Technical Reports Server (NTRS)

    Rickman, D.; Luvall, J. C.; Cheng, T.

    1998-01-01

    The image processing and GIS package ELAS was developed by NASA during the 1980s. It proved to be a popular, influential, and powerful package for the manipulation of digital imagery. Before the advent of PCs it was used by hundreds of institutions, mostly schools. It is the unquestioned, direct progenitor of two commercial GIS remote sensing packages, ERDAS and MapX, and influenced others, such as PCI. Its power was demonstrated by its use far beyond its original purpose, on several different types of medical imagery, photomicrographs of rock, images of turtle flippers, and numerous other esoteric imagery. Although development largely stopped in the early 1990s, the package still offers as much or more power and flexibility than any other roughly comparable package, public or commercial. It is a huge body of code, representing more than a decade of work by full-time, professional programmers. The current versions all have several deficiencies compared to current software standards and usage, notably the strictly command-line interface. In order to support their research needs, the authors are in the process of fundamentally changing ELAS, and in the process greatly increasing its power, utility, and ease of use. The new software is called ELAS II. This paper discusses the design of ELAS II.

  18. 4MOST metrology system image processing

    NASA Astrophysics Data System (ADS)

    Winkler, Roland; Barden, Samuel C.; Saviauk, Allar

    2016-08-01

    The 4-meter Multi-Object Spectroscopic Telescope (4MOST) instrument uses 2400 individually positioned optical fibres to couple the light of targets into its spectrographs. The metrology system determines the position of the back-illuminated fibres on the focal surface of the telescope. It consists of four identical cameras that are mounted on the spider vanes of the secondary mirror of the VISTA telescope and look through the entire optical train, including M1, M2 and the WFC/ADC unit. Here we describe the image and data processing steps of the metrology system and present results from our 1:10-scale laboratory prototype.

  19. Image processing to optimize wave energy converters

    NASA Astrophysics Data System (ADS)

    Bailey, Kyle Marc-Anthony

    The world is turning to renewable energies as a means of ensuring the planet's future and well-being. There have been a few attempts in the past to utilize wave power to generate electricity through Wave Energy Converters (WEC), but only recently have they become a focal point in the renewable energy field, and over the past few years there has been a global drive to advance their efficiency. Wave power is produced by placing a device, either onshore or offshore, that captures the energy of ocean surface waves and uses it to drive a mechanical system. This paper seeks to provide a novel and innovative way to estimate ocean wave frequency through the use of image processing, achieved by applying a complex modulated lapped orthogonal transform filter bank to satellite images of ocean waves. The filter bank provides an equal subband decomposition of the Nyquist-bounded discrete-time Fourier transform spectrum. The maximum-energy subband of the 2D complex modulated lapped transform is used to determine the horizontal and vertical frequencies, from which the wave frequency in the direction of the WEC follows by a simple trigonometric scaling. The robustness of the proposed method is demonstrated by application to simulated and real satellite images where the frequency is known.
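
    The final estimation step, locating the maximum-energy spatial frequency and projecting it onto the WEC heading, can be sketched with a plain 2-D FFT standing in for the lapped transform filter bank; this is a simplification of the paper's method, and all parameter names are illustrative.

```python
import numpy as np

def dominant_wave_frequency(img, dx, heading_rad):
    """Peak of the 2-D power spectrum -> (fx, fy), projected onto the
    WEC heading. `dx`: pixel size in meters; `heading_rad`: WEC direction."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    spec[spec.shape[0] // 2, spec.shape[1] // 2] = 0      # suppress DC
    iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0], d=dx))[iy]
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1], d=dx))[ix]
    # Trigonometric scaling: spatial-frequency component along the heading.
    return fx * np.cos(heading_rad) + fy * np.sin(heading_rad)
```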

  20. Using Image Processing to Determine Emphysema Severity

    NASA Astrophysics Data System (ADS)

    McKenzie, Alexander; Sadun, Alberto

    2010-10-01

    Currently, X-rays and computerized tomography (CT) scans are used to detect emphysema, but other tests are required to accurately quantify the amount of lung that has been affected by the disease. These images clearly show whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, which presents as subtle dark spots on the lung. Our goal is to use these CT scans to accurately diagnose and determine emphysema severity levels in patients. This will be accomplished by performing several different analyses of CT scan images from several patients representing a wide range of disease severity. In addition to analyzing the original CT data, the process converts the data to one- and two-bit images and then examines the deviation from a normal distribution curve by measuring skewness. Our preliminary results show that this method of assessment appears to be more accurate and robust than the currently utilized methods, which involve looking at percentages of radiodensities in the air passages of the lung.
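
    Measuring how far a lung attenuation histogram departs from a normal distribution reduces to a skewness statistic; a minimal sketch assuming SciPy, with the mask convention illustrative rather than taken from the abstract:

```python
import numpy as np
from scipy import stats

def emphysema_skewness(ct_slice, lung_mask):
    """Skewness of the lung-voxel attenuation histogram; emphysema shifts
    the distribution toward low attenuation values."""
    voxels = ct_slice[lung_mask].ravel().astype(float)
    return stats.skew(voxels)
```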

  1. Estimation of age by epidermal image processing.

    PubMed

    Tatsumi, S; Noda, H; Sugiyama, S

    1999-12-01

    Small pieces of human precordial skin were obtained from 266 individuals during autopsies performed in Osaka Prefecture. The area from the stratum corneum to the stratum basale in a unit area of the epidermal cross-section was extracted as a segmented area of a white image by image processing. The number of pixels surrounding this area was measured in individuals of various ages, and the age-associated changes were evaluated. The number of pixels around this binary image of the epidermal cross-section showed a strong correlation with age. The number tended to decrease with age in individuals aged 20 years and above, a trend that could be closely approximated by an exponential function. A formula for estimating age was obtained as an inverse function of the relation between the number of pixels and age, and the accuracy of estimation using this formula was examined by comparing the estimated age with the actual age. Such age-associated changes in the epidermis were considered to be closely related to increased roughening of the stratum basale, flattening of dermal papillae, and a decreased percentage of the stratum granulosum per unit area of epidermis observed by light microscopy or scanning electron microscopy.
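
    Fitting an exponential decay to the pixel counts and inverting it for age estimation is a short exercise; the sketch below uses toy data and assumed coefficients, not the study's fitted formula.

```python
import numpy as np

# Toy data: pixel counts decaying exponentially with age (ages >= 20).
rng = np.random.default_rng(3)
ages = np.linspace(20, 90, 50)
pixels = 5000 * np.exp(-0.01 * ages) * (1 + 0.02 * rng.normal(size=50))

# Fit log(pixels) = log(a) + b*age, then invert: age = (log(n) - log(a)) / b.
b, log_a = np.polyfit(ages, np.log(pixels), 1)

def estimate_age(n_pixels):
    """Invert the fitted exponential to estimate age from a pixel count."""
    return (np.log(n_pixels) - log_a) / b
```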

  2. Automatic draft reading based on image processing

    NASA Astrophysics Data System (ADS)

    Tsujii, Takahiro; Yoshida, Hiromi; Iiguni, Youji

    2016-10-01

    In marine transportation, a draft survey is a means to determine the quantity of bulk cargo. Automatic draft reading based on computer image processing has been proposed before. However, conventional draft mark segmentation may fail when the video sequence contains many regions other than the draft marks and the hull, and the estimated waterline is inherently higher than the true one. To solve these problems, we propose an automatic draft reading method that uses morphological operations to detect draft marks and estimates the waterline in every frame with Canny edge detection and robust estimation. Moreover, we emulate the surveyors' draft reading process so that the result can be understood by both shipper and receiver. In an experiment in a towing tank, the draft reading error of the proposed method was <1 cm, showing the advantage of the proposed method. Accurate draft reading was also achieved in a real-world scene.
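
    Per-frame waterline estimation from Canny edges plus a robust statistic might look like the following, assuming OpenCV; the paper's robust estimator and the morphological draft-mark detection are more elaborate than this sketch.

```python
import cv2
import numpy as np

def estimate_waterline_row(frame_gray):
    """Canny edges, then a robust (median) estimate of the waterline row:
    per column, take the first strong edge from the bottom of the frame."""
    edges = cv2.Canny(frame_gray, 50, 150)
    h = edges.shape[0]
    first_from_bottom = np.argmax(edges[::-1, :] > 0, axis=0)
    rows = h - 1 - first_from_bottom
    valid = rows < h - 1                 # drop columns with no edge at all
    return int(np.median(rows[valid]))
```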

  3. Platform for distributed image processing and image retrieval

    NASA Astrophysics Data System (ADS)

    Gueld, Mark O.; Thies, Christian J.; Fischer, Benedikt; Keysers, Daniel; Wein, Berthold B.; Lehmann, Thomas M.

    2003-06-01

    We describe a platform for the implementation of a system for content-based image retrieval in medical applications (IRMA). To cope with the constantly evolving medical knowledge, the platform offers a flexible feature model to store and uniformly access all feature types required within a multi-step retrieval approach. A structured generation history for each feature allows the automatic identification and re-use of already computed features. The platform uses directed acyclic graphs composed of processing steps and control elements to model arbitrary retrieval algorithms. This visually intuitive, data-flow oriented representation vastly improves the interdisciplinary communication between computer scientists and physicians during the development of new retrieval algorithms. The execution of the graphs is fully automated within the platform. Each processing step is modeled as a feature transformation. Due to a high degree of system transparency, both the implementation and the evaluation of retrieval algorithms are accelerated significantly. The platform uses a client-server architecture consisting of a central database, a central job scheduler, instances of a daemon service, and clients which embed user-implemented feature transformations. Automatically distributed batch processing and distributed feature storage enable the cost-efficient use of an existing workstation cluster.

  4. Imaging fault zones using 3D seismic image processing techniques

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep water structures, salt tectonics and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors, where concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve the signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes

  5. MISR Browse Images: Cold Land Processes Experiment (CLPX)

    Atmospheric Science Data Center

    2013-04-02

    These MISR Browse images provide a … over the region observed during the NASA Cold Land Processes Experiment (CLPX). CLPX involved ground, airborne, and satellite measurements …

  6. Digital image processing of cephalometric radiographs: a preliminary report.

    PubMed

    Jackson, P H; Dickson, G C; Birnie, D J

    1985-07-01

    The principles of image capture, image storage and image processing in digital radiology are described. The enhancement of radiographic images using digital image processing techniques and its application to cephalometry is discussed. The results of a pilot study which compared some common cephalometric measurements made from manual point identification with those made by direct digitization of digital radiographic images from video monitors are presented. Although in an early stage of development, the results from the image processing system were comparable with those obtained by traditional methods.

  7. ATM experiment S-056 image processing requirements definition

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A plan is presented for satisfying the image data processing needs of the S-056 Apollo Telescope Mount experiment. The report is based on information gathered from related technical publications, consultation with numerous image processing experts, and on experience gained while working on related image processing tasks over a two-year period.

  8. Wavelet Transforms in Parallel Image Processing

    DTIC Science & Technology

    1994-01-27

    Keywords: object segmentation, texture segmentation, image compression, image halftoning, neural networks, parallel algorithms, 2D and 3D vector quantization of wavelet transform coefficients. One application has been directed to adaptive image halftoning, in which the gray information at a pixel, including its gray value and gradient, is represented by …

  9. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  10. Effects of image processing on the detective quantum efficiency

    NASA Astrophysics Data System (ADS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as these methodologies have not been standardized, the results of different studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization; the secondary objective was to evaluate how the image processing algorithm affects the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). Image performance parameters were evaluated using the RQA5 radiographic technique defined by the International Electrotechnical Commission (IEC 62220-1). Computed radiography (CR) images of a hand posterior-anterior (PA) view for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a uniform (white) image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameter settings were applied to each of the acquired images. All of the modified images considerably influenced the measured SNR, MTF, NPS, and DQE, and images modified by the post-processing had a higher DQE than the MUSICA=0 image. This suggests that MUSICA settings, as a post-processing step, affect the image quality evaluation. In conclusion, the control parameters of image processing must be taken into account when characterizing image quality in a consistent way. The results of this study can serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring the MTF, NPS, and DQE.
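
    A 2-D NPS estimate from a uniform exposure is a standard computation; the sketch below averages the squared FFT magnitude of mean-subtracted, half-overlapping ROIs, a simplified version of the IEC 62220-1 procedure with illustrative parameters.

```python
import numpy as np

def noise_power_spectrum(flat_image, roi=128, pixel_pitch_mm=0.1):
    """Simplified 2-D NPS: average |FFT|^2 over mean-subtracted,
    half-overlapping ROIs of a uniform-exposure image."""
    h, w = flat_image.shape
    step = roi // 2
    accum, n = np.zeros((roi, roi)), 0
    for y in range(0, h - roi + 1, step):
        for x in range(0, w - roi + 1, step):
            patch = flat_image[y:y + roi, x:x + roi].astype(float)
            patch -= patch.mean()            # remove the DC component
            accum += np.abs(np.fft.fft2(patch)) ** 2
            n += 1
    # Standard normalization: NPS = (dx*dy / (Nx*Ny)) * <|FFT|^2>
    return accum / n * (pixel_pitch_mm ** 2) / (roi * roi)
```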

  11. Methods for processing and imaging marsh foraminifera

    USGS Publications Warehouse

    Dreher, Chandra A.; Flocks, James G.

    2011-01-01

    This study is part of a larger U.S. Geological Survey (USGS) project to characterize the physical conditions of wetlands in southwestern Louisiana. Within these wetlands, groups of benthic foraminifera (shelled amoeboid protists living near or on the sea floor) can be used as agents to measure land subsidence, relative sea-level rise, and storm impact. In the Mississippi River Delta region, intertidal-marsh foraminiferal assemblages and biofacies were established in studies that pre-date the 1970s, with a very limited number of more recent studies. This fact sheet outlines the project's improved processing methods, handling, and modified preparations for Scanning Electron Microscope (SEM) imaging of these foraminifera. The objective is to identify marsh foraminifera to the taxonomic species level by using improved processing methods and SEM imaging for morphological characterization, in order to evaluate changes in distribution and frequency relative to other environmental variables. The majority of benthic marsh foraminifera are agglutinated forms, which can be more delicate than porcelaneous forms. Agglutinated tests (shells) are made of particles such as sand grains or silt and clay material, whereas porcelaneous tests consist of calcite.

  12. Corn plant locating by image processing

    NASA Astrophysics Data System (ADS)

    Jia, Jiancheng; Krutz, Gary W.; Gibson, Harry W.

    1991-02-01

    The feasibility of using machine vision technology to locate corn plants is an important issue for field production automation in the agricultural industry. This paper presents an approach developed to locate the center of a corn plant using image processing techniques. Corn plants were first identified using a main-vein detection algorithm, which detects a local feature of corn leaves (the leaf main veins) based on the spectral difference between main veins and leaf blades; the center of the plant was then located using a center-locating algorithm that traces and extends each detected vein line and estimates the center of the plant from the intersection points of those lines. The experimental results show the usefulness of the algorithm for machine vision applications related to corn plant identification. Such a technique can be used for precise spraying of pesticides or biotech chemicals.

  13. Intelligent elevator management system using image processing

    NASA Astrophysics Data System (ADS)

    Narayanan, H. Sai; Karunamurthy, Vignesh; Kumar, R. Barath

    2015-03-01

    In the modern era, the increase in the number of shopping malls and industrial buildings has led to an exponential increase in the usage of elevator systems, and thus an increased need for an effective control system to manage them. This paper introduces an effective method to control the movement of elevators by locating waiting passengers and dispatching elevators based on conditions such as load and proximity. The method continuously monitors the weight limit of each elevator while also using image processing to determine the number of persons waiting for an elevator on each floor. Canny edge detection is used to count the waiting persons. The algorithm therefore takes many cases into account and selects the correct elevator to serve the persons waiting on different floors.

  14. Image processing and products for the Magellan mission to Venus

    NASA Technical Reports Server (NTRS)

    Clark, Jerry; Alexander, Doug; Andres, Paul; Lewicki, Scott; Mcauley, Myche

    1992-01-01

    The Magellan mission to Venus is providing planetary scientists with massive amounts of new data about the surface geology of Venus. Digital image processing is an integral part of the ground data system that provides data products to the investigators. The mosaicking of synthetic aperture radar (SAR) image data from the spacecraft is being performed at JPL's Multimission Image Processing Laboratory (MIPL). MIPL hosts and supports the Image Data Processing Subsystem (IDPS), which was developed in a VAXcluster environment of hardware and software that includes optical disk jukeboxes and the TAE-VICAR (Transportable Applications Executive-Video Image Communication and Retrieval) system. The IDPS is being used by processing analysts of the Image Data Processing Team to produce the Magellan image data products. Various aspects of the image processing procedure are discussed.

  15. Natural Language Processing in the Molecular Imaging Domain

    PubMed Central

    Tulipano, P. Karina; Tao, Ying; Zanzonico, Pat; Kolbert, Katherine; Lussier, Yves; Friedman, Carol

    2005-01-01

    Molecular imaging represents the intersection between imaging and genomic sciences, and there has been a surge in research literature and information in both. Information contained within the molecular imaging literature could be used to 1) link to genomic and imaging information resources and 2) organize and index images. This research focuses on the adaptation, evaluation, and application of BioMedLEE, a natural language processing (NLP) system, to the automated extraction of information from molecular imaging abstracts. PMID:16779429

  16. Low cost 3D scanning process using digital image processing

    NASA Astrophysics Data System (ADS)

    Aguilar, David; Romero, Carlos; Martínez, Fernando

    2017-02-01

    This paper shows the design and construction of a low-cost 3D scanner able to digitize solid objects through contactless data acquisition, using active object reflection. 3D scanners are used in applications such as science, engineering and entertainment, and are classified into contact and contactless scanners; the latter are the most widely used but are expensive. This low-cost prototype scans the object vertically using a fixed camera and a moving horizontal laser line, which is deformed according to the three-dimensional surface of the solid. Using digital image processing, the deformation detected by the camera is analyzed, allowing the 3D coordinates to be determined by triangulation. The resulting information is processed by a Matlab script, which gives the user a point cloud corresponding to each horizontal scan. The obtained results show acceptable quality and significant detail in the digitized objects, making this prototype (built on a LEGO Mindstorms NXT kit) a versatile and cheap tool that can be used for many applications, mainly by engineering students.
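
    The depth-from-laser-displacement triangulation at the heart of such scanners can be sketched as follows; the geometry is simplified to a known camera/laser baseline and laser angle, and all parameter names are illustrative rather than taken from the paper.

```python
import numpy as np

def laser_triangulate(pixel_offset, focal_px, baseline_m, laser_angle_rad):
    """Depth from the lateral shift of the laser line on the sensor.
    `pixel_offset`: laser-line displacement from the optical axis, in pixels;
    `baseline_m`: camera-to-laser distance; `laser_angle_rad`: laser plane angle."""
    cam_angle = np.arctan2(pixel_offset, focal_px)   # ray angle for this pixel
    # Law of sines in the camera/laser/object triangle:
    return baseline_m * np.sin(laser_angle_rad) / np.sin(laser_angle_rad + cam_angle)
```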

  17. Liver recognition based on statistical shape model in CT images

    NASA Astrophysics Data System (ADS)

    Xiang, Dehui; Jiang, Xueqing; Shi, Fei; Zhu, Weifang; Chen, Xinjian

    2016-03-01

    In this paper, an automatic method is proposed to recognize the liver in clinical 3D CT images. The proposed method makes effective use of a statistical shape model of the liver. Our approach consists of three main parts: (1) model training, in which shape variability is captured using principal component analysis of manual annotations; (2) model localization, in which a fast Euclidean distance transformation based method localizes the liver in CT images; and (3) liver recognition, in which the initial mesh is locally and iteratively adapted to the liver boundary, constrained by the trained shape model. We validate our algorithm on a dataset of 20 3D CT images obtained from different patients. The average ARVD was 8.99%, the average ASSD was 2.69 mm, the average RMSD was 4.92 mm, the average MSD was 28.841 mm, and the average MSD was 13.31%.
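
    The model-training part, PCA over aligned landmark sets, is the standard statistical-shape-model construction; a minimal sketch assuming the landmarks are already flattened and aligned (toy data, not the paper's annotations):

```python
import numpy as np

def train_shape_model(shapes, var_kept=0.95):
    """PCA shape model: shapes is (n_samples, n_landmarks*3), pre-aligned."""
    mean = shapes.mean(axis=0)
    u, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    var = s ** 2 / (len(shapes) - 1)                 # per-mode variance
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_kept)) + 1
    return mean, vt[:k], var[:k]                     # mean, modes, variances

# A plausible new shape: mean + b @ modes, with each |b_i| <= 3*sqrt(var_i).
```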

  18. Spot restoration for GPR image post-processing

    DOEpatents

    Paglieroni, David W; Beer, N. Reginald

    2014-05-20

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.

  19. Vision-sensing image analysis for GTAW process control

    SciTech Connect

    Long, D.D.

    1994-11-01

    Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.

  20. Processing of Image Data by Integrated Circuits

    NASA Technical Reports Server (NTRS)

    Armstrong, R. W.

    1985-01-01

    Sensors combined with logic and memory circuitry. Cross-correlation of two inputs accomplished by transversal filter. Position of image taken as point where image and template data yield maximum value of correlation function. Circuit used for controlling robots, medical-image analysis, automatic vehicle guidance, and precise pointing of scientific cameras.

  1. Spiral computed tomographic scanning of the chest with three dimensional imaging in the diagnosis and management of paediatric intrathoracic airway obstruction.

    PubMed Central

    Sagy, M.; Poustchi-Amin, M.; Nimkoff, L.; Silver, P.; Shikowitz, M.; Leonidas, J. C.

    1996-01-01

    BACKGROUND: The usefulness of spiral computed tomographic (CT) scans of the chest with three dimensional imaging (3D-CT) of intrathoracic structures in the diagnosis and management of paediatric intrathoracic airway obstruction was assessed. METHODS: A retrospective review was made of five consecutive cases (age range six months to four years) admitted to the paediatric intensive care unit and paediatric radiology division of a tertiary care children's hospital with severe respiratory decompensation suspected of being caused by intrathoracic large airway obstruction. Under adequate sedation, the patients underwent high speed spiral CT scanning of the thorax. Non-ionic contrast solution was injected in two patients to demonstrate the anatomical relationship between the airway and the intrathoracic large vessels. Using computer software, three-dimensional images of intrathoracic structures were then reconstructed by the radiologist. RESULTS: In all five patients the imaging results were useful in directing the physician to the correct diagnosis and appropriate management. In one patient, who had undergone repair of tetralogy of Fallot with absent pulmonary valve, the 3D-CT image showed bilateral disruptions in the integrity of the tracheobronchial tree due to compression by a dilated pulmonary artery. This patient underwent pulmonary artery aneurysmorrhaphy and required continued home mechanical ventilation via tracheostomy. In three other patients with symptoms of lower airway obstruction the 3D-CT images showed significant stenosis in segments of the tracheobronchial tree in two of them, and subsequent bronchoscopy established a diagnosis of segmental bronchomalacia. These two patients required mechanical ventilation and distending pressure to relieve their bronchospasm. In another patient who had undergone surgical repair of intrathoracic tracheal stenosis three years prior to admission the 3D-CT scan ruled out restenosis as the reason for her acute respiratory

  2. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs.

    PubMed

    Sensakovic, William F; O'Dell, M Cody; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-10-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA(2) by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed, and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  3. Three-dimensional CT angiography: a new technique for imaging microvascular anatomy.

    PubMed

    Tregaskiss, Ashley P; Goodwin, Adam N; Bright, Linda D; Ziegler, Craig H; Acland, Robert D

    2007-03-01

    To date there has been no satisfactory research method for imaging microvascular anatomy in three dimensions (3D). In this article we present a new technique that allows both qualitative and quantitative examination of the microvasculature in 3D. In 10 fresh cadavers (7 females, 3 males, mean age 68 years), selected arteries supplying the abdominal wall and back were injected with a lead oxide/gelatin contrast mixture. From these regions, 30 specimens were dissected free and imaged with a 16-slice spiral computed tomographic (CT) scanner. Using three-dimensional CT (3D-CT) angiography, reconstructions of the microvasculature of each specimen were produced and examined for their qualitative content. Two calibration tools were constructed to determine (1) the accuracy of linear measurements made with CT software tools, and (2) the smallest caliber blood vessel that is reliably represented on 3D-CT reconstructions. Three-dimensional CT angiography produced versatile, high quality angiograms of the microvasculature. Correlation between measurements made with electronic calipers and CT software tools was very high (Lin's concordance coefficient, 0.99 (95% CI 0.99-0.99)). The finest caliber of vessel reliably represented on the 3D-CT reconstructions was 0.4 mm internal diameter. In summary, 3D-CT angiography is a simple, accurate, and reproducible method that imparts a much improved perception of anatomy when compared with existing research methods. Measurement tools provide accurate quantitative data to aid vessel mapping and preoperative planning. Further work will be needed to explore the full utility of 3D-CT angiography in a clinical setting.

  4. Multiscale image processing and antiscatter grids in digital radiography.

    PubMed

    Lo, Winnie Y; Hornof, William J; Zwingenberger, Allison L; Robertson, Ian D

    2009-01-01

    Scatter radiation is a source of noise and results in a decreased signal-to-noise ratio and thus decreased image quality in digital radiography. We determined subjectively whether a digitally processed image made without a grid would be of similar quality to an image made with a grid but without image processing. Additionally, the effects of exposure dose and of using a grid with digital radiography on overall image quality were studied. Thoracic and abdominal radiographs of five dogs of various sizes were made. Four acquisition techniques were included: (1) with a grid, standard exposure dose, digital image processing; (2) without a grid, standard exposure dose, digital image processing; (3) without a grid, half the exposure dose, digital image processing; and (4) with a grid, standard exposure dose, no digital image processing (to mimic a film-screen radiograph). Full-size radiographs as well as magnified images of specific anatomic regions were generated. Nine reviewers rated the overall image quality subjectively using a five-point scale. All digitally processed radiographs had higher overall scores than nondigitally processed radiographs regardless of patient size, exposure dose, or use of a grid. The images made at half the exposure dose had a slightly lower quality than those made at full dose, but this was only statistically significant in magnified images. Using a grid with digital image processing led to a slight but statistically significant increase in overall quality when compared with digitally processed images made without a grid, but whether this increase in quality is clinically significant is unknown.

  5. Precision processing of earth image data

    NASA Technical Reports Server (NTRS)

    Bernstein, R.; Stierhoff, G. C.

    1976-01-01

    Precise corrections of Landsat data are useful for generating land-use maps, detecting various crops and determining their acreage, and detecting changes. The paper discusses computer processing and visualization techniques for Landsat data so that users can get more information from the imagery. The elementary unit of data in each band of each scene is the integrated value of intensity of reflected light detected in the field of view by each sensor. To develop the basic mathematical approach for precision correction of the data, differences between positions of ground control points on the reference map and the observed control points in the scene are used to evaluate the coefficients of cubic time functions of roll, pitch, and yaw, and a linear time function of altitude deviation from normal height above local earth's surface. The resultant equation, termed a mapping function, corrects the warped data image into one that approximates the reference map. Applications are discussed relative to shade prints, extraction of road features, and atlas of cities.

  6. A program for medical visualization and image processing.

    PubMed

    Zaffari, Carlos A; Zaffari, Paulo; de Azevedo, Dario F G; Russomano, Thais; Helegda, Sergio; Figueira, Marcio V

    2006-01-01

    This article presents a software program for the visualization and processing of medical images. It provides an expansible set of techniques to help extract visual information from medical images, to be used in diagnosis support and in advanced scientific investigations.

  7. Viewpoints on Medical Image Processing: From Science to Application

    PubMed Central

    Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-01-01

    Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to applications, analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing are seen as a field of rapid development with clear trends toward integrated applications in diagnostics, treatment planning, and treatment. PMID:24078804

  8. Image processing software for imaging spectrometry data analysis

    NASA Technical Reports Server (NTRS)

    Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

  10. Image-processing pipelines: applications in magnetic resonance histology

    NASA Astrophysics Data System (ADS)

    Johnson, G. Allan; Anderson, Robert J.; Cook, James J.; Long, Christopher; Badea, Alexandra

    2016-03-01

    Image processing has become ubiquitous in imaging research—so ubiquitous that it is easy to lose track of how diverse this processing has become. The Duke Center for In Vivo Microscopy has pioneered the development of Magnetic Resonance Histology (MRH), which generates large multidimensional data sets that can easily reach into the tens of gigabytes. A series of dedicated image-processing workstations and associated software have been assembled to optimize each step of acquisition, reconstruction, post-processing, registration, visualization, and dissemination. This talk will describe the image-processing pipelines from acquisition to dissemination that have become critical to our everyday work.

  11. Bessel filters applied in biomedical image processing

    NASA Astrophysics Data System (ADS)

    Mesa Lopez, Juan Pablo; Castañeda Saldarriaga, Diego Leon

    2014-06-01

    A magnetic resonance image is obtained by means of an imaging test that uses magnets and radio waves to create images of the body; however, in some images it is difficult to recognize organs or foreign agents present in the body. The objective of these Bessel filters is to significantly increase the resolution of magnetic resonance images so as to make them much clearer, in order to detect anomalies and diagnose the illness. Bessel functions arise in solving the Schrödinger equation for a particle enclosed in a cylinder, and the filters act on the image by reshaping its colors and contours; therein lies the effectiveness of these filters, since the clearer outline makes abnormalities inside the body more defined and easier to recognize.

  12. Image processing techniques for digital orthophotoquad production

    USGS Publications Warehouse

    Hood, Joy J.; Ladner, L. J.; Champion, Richard A.

    1989-01-01

    Orthophotographs have long been recognized for their value as supplements or alternatives to standard maps. Recent trends towards digital cartography have resulted in efforts by the US Geological Survey to develop a digital orthophotoquad production system. Digital image files were created by scanning color infrared photographs on a microdensitometer. Rectification techniques were applied to remove tilt and relief displacement, thereby creating digital orthophotos. Image mosaicking software was then used to join the rectified images, producing digital orthophotos in quadrangle format.

  13. An integral design strategy combining optical system and image processing to obtain high resolution images

    NASA Astrophysics Data System (ADS)

    Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun

    2016-05-01

    In this paper, an integral design that combines the optical system with image processing is introduced to obtain high resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in failures of efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function during optical design with the constraint conditions of image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. A Wiener filter algorithm is then adopted to process the simulated images, and mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit for the optical imaging system are not the best, it can provide image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages in simplifying structure and reducing cost, as well as gaining high resolution images, giving it a promising prospect for industrial application.

  14. An image processing system for digital chest X-ray images.

    PubMed

    Cocklin, M; Gourlay, A; Jackson, P; Kaye, G; Miessler, M; Kerr, I; Lams, P

    1984-01-01

    This paper investigates the requirements for image processing of digital chest X-ray images. These images are conventionally recorded on film and are characterised by large size, wide dynamic range and high resolution. X-ray detection systems are now becoming available for capturing these images directly in photoelectronic-digital form. In this report, the hardware and software facilities required for handling these images are described. These facilities include high resolution digital image displays, programmable video look up tables, image stores for image capture and processing and a full range of software tools for image manipulation. Examples are given of the application of digital image processing techniques to this class of image.

  15. An Image Processing Algorithm Based On FMAT

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Pal, Sankar K.

    1995-01-01

    Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT), proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of a priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing exactly original image.

  16. Improving night sky star image processing algorithm for star sensors.

    PubMed

    Arbabmir, Mohammad Vali; Mohammadi, Seyyed Mohammad; Salahshour, Sadegh; Somayehee, Farshad

    2014-04-01

    In this paper, the night sky star image processing algorithm, consisting of image preprocessing, star pattern recognition, and centroiding steps, is improved. It is shown that the proposed noise reduction approach can preserve more of the necessary information than other frequently used approaches. It is also shown that the proposed thresholding method, unlike commonly used techniques, can properly perform image binarization, especially in images with uneven illumination. Moreover, the higher performance rate and the lower average centroiding estimation error of about 0.045 for 400 simulated images, compared with other algorithms, show the high capability of the proposed night sky star image processing algorithm.
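
    The centroiding step referred to above is typically an intensity-weighted mean over a small window around each candidate star; the paper's exact formulation is not reproduced here, so the following numpy sketch shows only the standard sub-pixel centroid computation (the window `win` and its background subtraction are assumed to come from the preceding preprocessing and binarization steps):

    ```python
    import numpy as np

    def centroid(win):
        """Intensity-weighted centroid of a background-subtracted star window.

        `win` is a small 2-D array cut out around one detected star; the
        return value is a sub-pixel (row, col) estimate of the star center.
        """
        win = np.clip(win, 0.0, None)      # negative noise residuals would bias the sums
        total = win.sum()
        rows, cols = np.indices(win.shape)
        return (rows * win).sum() / total, (cols * win).sum() / total
    ```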

  17. DTV color and image processing: past, present, and future

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Yeong; Lee, SeongDeok; Park, Du-Sik; Kwak, Youngshin

    2006-01-01

    The image processor in digital TV has started to play an important role due to customers' growing desire for higher image quality. Customers want more vivid and natural images without any visual artifacts. Image processing techniques aim to meet customers' needs in spite of the physical limitations of the panel. In this paper, developments in image processing techniques for DTV in conjunction with developments in display technologies at Samsung R and D are reviewed. The introduced algorithms cover techniques required to solve the problems caused by the characteristics of the panel itself and techniques for enhancing the image quality of input signals, optimized for the panel and human visual characteristics.

  18. Cardiovascular Imaging and Image Processing: Theory and Practice - 1975

    NASA Technical Reports Server (NTRS)

    Harrison, Donald C. (Editor); Sandler, Harold (Editor); Miller, Harry A. (Editor); Hood, Manley J. (Editor); Purser, Paul E. (Editor); Schmidt, Gene (Editor)

    1975-01-01

    Ultrasonography was examined in regard to the developmental highlights and present applications of cardiac ultrasound. Doppler ultrasonic techniques and the technology of miniature acoustic element arrays were reported. X-ray angiography was discussed with special considerations on quantitative three-dimensional dynamic imaging of structure and function of the cardiopulmonary and circulatory systems in all regions of the body. Nuclear cardiography and scintigraphy, three-dimensional imaging of the myocardium with isotopes, and the commercialization of the echocardioscope were studied.

  20. Survey on Neural Networks Used for Medical Image Processing.

    PubMed

    Shi, Zhenghao; He, Lifeng; Suzuki, Kenji; Nakamura, Tsuyoshi; Itoh, Hidenori

    2009-02-01

    This paper aims to present a review of neural networks used in medical image processing. We classify neural networks by their processing goals and the nature of the medical images. Main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of neural network application to medical image processing and an outlook for future research are also discussed. With this survey, we try to answer the following two important questions: (1) What are the major applications of neural networks in medical image processing now and in the near future? (2) What are the major strengths and weaknesses of applying neural networks to medical image processing tasks? We believe that this would be very helpful to researchers who are involved in medical image processing with neural network techniques.

  1. Medical image processing on the GPU - past, present and future.

    PubMed

    Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M

    2013-12-01

    Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges.
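
    As a concrete illustration of the basic-operation speedups this review covers, a 2-D convolution can be offloaded to the GPU in a few lines; this generic PyTorch sketch is an illustration of ours, not an implementation from the review:

    ```python
    import torch
    import torch.nn.functional as F

    device = "cuda" if torch.cuda.is_available() else "cpu"
    img = torch.rand(1, 1, 512, 512, device=device)        # one grayscale image, NCHW layout
    kernel = torch.ones(1, 1, 5, 5, device=device) / 25.0  # 5x5 mean (smoothing) filter
    smoothed = F.conv2d(img, kernel, padding=2)            # executed on the GPU when available
    ```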

  2. [Embedded system design of color-blind image processing].

    PubMed

    Wang, Eric; Ma, Yu; Wang, Yuanyuan

    2011-01-01

    An ARM-based embedded system design scheme is proposed for a color-blind image processing system. The hardware and software of the embedded color-blind image processing system are designed using an ARM core processor. In addition, a simple and convenient interface is implemented. This system supplies a general hardware platform for applications of color-blind image processing algorithms, bringing convenience to the testing and rectification of color blindness.

  3. Design of a distributed CORBA based image processing server.

    PubMed

    Giess, C; Evers, H; Heid, V; Meinzer, H P

    2000-01-01

    This paper presents the design and implementation of a distributed image processing server based on CORBA. Existing image processing tools were encapsulated in a common way with this server. Data exchange and conversion is done automatically inside the server, hiding these tasks from the user. The different image processing tools are visible as one large collection of algorithms and due to the use of CORBA are accessible via intra-/internet.

  4. Principles of cryo-EM single-particle image processing

    PubMed Central

    Sigworth, Fred J.

    2016-01-01

    Single-particle reconstruction is the process by which 3D density maps are obtained from a set of low-dose cryo-EM images of individual macromolecules. This review considers the fundamental principles of this process and the steps in the overall workflow for single-particle image processing. Also considered are the limits that image signal-to-noise ratio places on resolution and the distinguishing of heterogeneous particle populations. PMID:26705325

  5. On digital image processing technology and application in geometric measure

    NASA Astrophysics Data System (ADS)

    Yuan, Jiugen; Xing, Ruonan; Liao, Na

    2014-04-01

    Digital image processing is an emerging science that has developed along with semiconductor integrated circuit technology and computer science since the 1960s. The article introduces the technique and principles of digital image processing in measurement, compared with the traditional optical measurement method. It takes geometric measurement as an example and introduces the development tendency of digital image processing technology from the perspective of technology application.

  6. Optimizing signal and image processing applications using Intel libraries

    NASA Astrophysics Data System (ADS)

    Landré, Jérôme; Truchetet, Frédéric

    2007-01-01

    This paper presents optimized signal and image processing libraries from Intel Corporation. Intel Performance Primitives (IPP) is a low-level signal and image processing library developed by Intel Corporation to optimize code on Intel processors. Open Computer Vision library (OpenCV) is a high-level library dedicated to computer vision tasks. This article describes the use of both libraries to build flexible and efficient signal and image processing applications.

  7. Applications of digital image processing IX

    SciTech Connect

    Tescher, A.G.

    1986-01-01

    This book contains the proceedings of SPIE - The International Society for Optical Engineering. The first session covers image compression and includes papers such as ''Knowledge-based image bandwidth compression.'' Session two is about instrumentation such as ''Real-time inspection of currency'' and ''Experimental digital image processor.'' Session three discusses theoretical concepts such as ''Study of texture segmentation.'' Session four is about algorithms. One such topic is ''Dynamic ordered dither algorithm.'' Session five covers registration and modeling. For example, one paper is ''3D-motion estimation from projections.'' Session six is about restoration and enhancement. Papers include ''Wobble error correction for laser scanners'' and ''Robotics with computer Vision.''

  8. Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Kalinovsky, A.; Liauchuk, V.; Tarasau, A.

    2017-05-01

    In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of Deep Learning. For training and testing of the algorithms, a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. The algorithms, which are based on Deep Convolutional Networks, were implemented and applied in three different ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding window technique, and straightforward detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.
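
    The paper does not spell out its network architecture, so the following PyTorch sketch only illustrates the slice-wise semantic-segmentation idea with an assumed, deliberately tiny fully convolutional model (layer counts and sizes are placeholders, not the authors' settings):

    ```python
    import torch
    import torch.nn as nn

    class SliceSegNet(nn.Module):
        """Toy fully convolutional net: one CT slice in, one lesion-probability map out."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 1),           # per-pixel lesion logit
            )

        def forward(self, x):
            return torch.sigmoid(self.body(x))

    net = SliceSegNet()
    slices = torch.randn(4, 1, 128, 128)   # a batch of 4 CT slices (illustrative size)
    prob_maps = net(slices)                # same spatial size, values in (0, 1)
    ```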

  9. Quantitative high spatiotemporal imaging of biological processes

    NASA Astrophysics Data System (ADS)

    Borbely, Joe; Otterstrom, Jason; Mohan, Nitin; Manzo, Carlo; Lakadamyali, Melike

    2015-08-01

    Super-resolution microscopy has revolutionized fluorescence imaging providing access to length scales that are much below the diffraction limit. The super-resolution methods have the potential for novel discoveries in biology. However, certain technical limitations must be overcome for this potential to be fulfilled. One of the main challenges is the use of super-resolution to study dynamic events in living cells. In addition, the ability to extract quantitative information from the super-resolution images is confounded by the complex photophysics that the fluorescent probes exhibit during the imaging. Here, we will review recent developments we have been implementing to overcome these challenges and introduce new steps in automated data acquisition towards high-throughput imaging.

  10. Experiments with recursive estimation in astronomical image processing

    NASA Technical Reports Server (NTRS)

    Busko, I.

    1992-01-01

    Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historic reasons for the application of these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even in modern days, when big computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or position in a 2-D image). Many image processing methods make underlying stationarity assumptions, either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image by a processor with properties tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio, and the autocorrelation function. Software was developed under IRAF, and as such will be made available to interested users.
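
    The abstract names no single estimator, so as a minimal sketch of the recursive idea, the scalar Kalman-style filter below processes one scan line pixel by pixel; making the variances q and r functions of the local statistics mentioned above (intensity, signal-to-noise, autocorrelation) would turn it into the adaptive processor described (parameter values here are illustrative):

    ```python
    import numpy as np

    def recursive_denoise(line, q=1.0, r=25.0):
        """Scalar recursive (Kalman-style) estimate along one scan line.

        q: process variance, how fast the underlying signal may drift
        r: measurement (noise) variance; larger r means heavier smoothing
        """
        x, p = float(line[0]), r            # state estimate and its variance
        out = np.empty(len(line))
        out[0] = x
        for i in range(1, len(line)):
            p += q                          # predict: uncertainty grows between pixels
            k = p / (p + r)                 # Kalman gain
            x += k * (line[i] - x)          # update toward the new measurement
            p *= 1.0 - k
            out[i] = x
        return out

    # Row-by-row application to an image:
    # denoised = np.apply_along_axis(recursive_denoise, 1, image)
    ```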

  11. Experiences with digital processing of images at INPE

    NASA Technical Reports Server (NTRS)

    Mascarenhas, N. D. A. (Principal Investigator)

    1984-01-01

    Four different research experiments with digital image processing at INPE will be described: (1) edge detection by hypothesis testing; (2) image interpolation by finite impulse response filters; (3) spatial feature extraction methods in multispectral classification; and (4) translational image registration by sequential tests of hypotheses.

  12. Sliding mean edge estimation. [in digital image processing

    NASA Technical Reports Server (NTRS)

    Ford, G. E.

    1978-01-01

    A method for determining the locations of the major edges of objects in digital images is presented. The method is based on an algorithm utilizing maximum likelihood concepts. An image line-scan interval is processed to determine if an edge exists within the interval and its location. The proposed algorithm has demonstrated good results even in noisy images.

  13. A color image processing pipeline for digital microscope

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Liu, Peng; Zhuang, Zhefeng; Chen, Enguo; Yu, Feihong

    2012-10-01

    The digital microscope has found wide application in fields such as biology and medicine. A digital microscope differs from a traditional optical microscope in that there is no need to observe the sample through an eyepiece directly, because the optical image is projected directly onto the CCD/CMOS camera. However, because of the imaging differences between the human eye and a sensor, a color image processing pipeline is needed for the digital microscope's electronic eyepiece to obtain a fine image. The color image pipeline for a digital microscope, comprising the procedures that convert the RAW image data captured by the sensor into a real color image, is of great concern for the quality of the microscopic image. The color pipeline for a digital microscope differs from that of digital still cameras and video cameras because of the specific requirements of microscopic images, which should have high dynamic range, keep the same color as the objects observed, and support a variety of image post-processing. In this paper, a new color image processing pipeline is proposed to satisfy the requirements of digital microscope images. The algorithm for each step in the color image processing pipeline is designed and optimized with the purpose of obtaining high quality images and accommodating diverse user preferences. With the proposed pipeline implemented on the digital microscope platform, the output color images meet the various analysis requirements of images in the medical and biological fields very well. The major steps of the proposed color imaging pipeline are: black level adjustment, defect pixel removal, noise reduction, linearization, white balance, RGB color correction, tone scale correction, and gamma correction.
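
    The step ordering listed above can be made concrete with a skeletal numpy sketch; every operation below is a generic textbook version with made-up constants, standing in for the optimized algorithms the paper designs (the input is assumed to be an already-demosaiced H x W x 3 RGB array):

    ```python
    import numpy as np

    def process_raw(rgb, black=64.0, gains=(1.8, 1.0, 1.5), gamma=2.2):
        """Toy pipeline in the paper's order: black level -> (defect/noise steps)
        -> linearization -> white balance -> color correction -> tone/gamma."""
        img = np.clip(rgb.astype(float) - black, 0.0, None)   # black level adjustment
        # defect-pixel removal and noise reduction would go here (e.g. median filtering)
        img /= img.max()                                      # linearize to [0, 1]
        img *= np.array(gains)                                # per-channel white balance
        ccm = np.array([[ 1.6, -0.4, -0.2],                   # illustrative RGB color-
                        [-0.3,  1.5, -0.2],                   # correction matrix
                        [-0.1, -0.5,  1.6]])
        img = np.clip(img @ ccm.T, 0.0, 1.0)
        return (255.0 * img ** (1.0 / gamma)).astype(np.uint8)  # tone scale and gamma
    ```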

  14. APPLEPIPS /Apple Personal Image Processing System/ - An interactive digital image processing system for the Apple II microcomputer

    NASA Technical Reports Server (NTRS)

    Masuoka, E.; Rose, J.; Quattromani, M.

    1981-01-01

    Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.

  15. Post-processing for statistical image analysis in light microscopy.

    PubMed

    Cardullo, Richard A; Hinchcliffe, Edward H

    2013-01-01

    Image processing serves a number of important functions, including noise reduction, contrast enhancement, and feature extraction. Whatever the final goal, an understanding of the nature of image acquisition and digitization, and of the subsequent mathematical manipulations of that digitized image, is essential. Here we discuss the basic mathematical and statistical processes that microscopists routinely use to produce high quality digital images and to extract key features of interest using a variety of extraction and thresholding tools. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Breast image pre-processing for mammographic tissue segmentation.

    PubMed

    He, Wenda; Hogg, Peter; Juette, Arne; Denton, Erika R E; Zwiggelaar, Reyer

    2015-12-01

    During mammographic image acquisition, a compression paddle is used to even the breast thickness in order to obtain optimal image quality. Clinical observation has indicated that some mammograms may exhibit abrupt intensity change and low visibility of tissue structures in the breast peripheral areas. Such appearance discrepancies can affect image interpretation and may not be desirable for computer aided mammography, leading to incorrect diagnosis and/or detection which can have a negative impact on sensitivity and specificity of screening mammography. This paper describes a novel mammographic image pre-processing method to improve image quality for analysis. An image selection process is incorporated to better target problematic images. The processed images show improved mammographic appearances not only in the breast periphery but also across the mammograms. Mammographic segmentation and risk/density classification were performed to facilitate a quantitative and qualitative evaluation. When using the processed images, the results indicated more anatomically correct segmentation in tissue specific areas, and subsequently better classification accuracies were achieved. Visual assessments were conducted in a clinical environment to determine the quality of the processed images and the resultant segmentation. The developed method has shown promising results. It is expected to be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Visualization and processing of images in nano-resolution

    NASA Astrophysics Data System (ADS)

    Vozenilek, Vit; Pour, Tomas

    2017-02-01

    The paper aims to apply image processing methods that are widely used in Earth remote sensing to the processing and visualization of images in nano-resolution, because most of these images are currently analyzed only by an expert researcher without a proper statistical background. The nano-resolution level may range from a resolution in picometres to the resolution of a light microscope, which may be up to about 200 nanometers. Images in nano-resolution play an essential role in physics, medicine, and chemistry. Three case studies demonstrate different image visualization and image analysis approaches for different scales at the nano-resolution level. The results of the case studies prove the suitability and applicability of Earth remote sensing methods to image visualization and processing at the nano-resolution level. It even opens new dimensions for spatial analysis at such an extreme spatial detail.

  18. High performance image processing and laser beam recording system

    NASA Astrophysics Data System (ADS)

    Fanelli, A. R.

    1981-06-01

    A high-performance image processing system which includes a laser image recorder has been developed to cover a full range of digital image processing techniques and capabilities. The Digital Interactive Image Processing System (DIIPS) consists of an HP3000 series II computer and subsystems consisting of a high-speed array processor, a high-speed tape drive, a series display system, a stereo optics viewing position, a printer/plotter and a CPU link which provides the capacity for the mensuration and exploitation of digital imagery with both mono and stereo digital images. Software employed includes the Hewlett-Packard standard system software composed of operating system, utilities, compilers and standard function library packages, the standard IDIMS software, and specially developed software relating to photographic and stereo mensuration. The Ultra High Resolution Image Recorder is a modification of a standard laser beam recorder with a capability of recording in excess of 18 K pixels per image line.

  19. The Development of Sun-Tracking System Using Image Processing

    PubMed Central

    Lee, Cheng-Dar; Huang, Hong-Cheng; Yeh, Hong-Yih

    2013-01-01

    This article presents the development of an image-based sun position sensor and the algorithm for aiming at the Sun precisely by using image processing. In past years, four-quadrant light sensors and bar-shadow photo sensors were used to detect the Sun's position. Nevertheless, neither of them can maintain high accuracy under low irradiation conditions. Using an image-based Sun position sensor with image processing can address this drawback. To verify the performance of the Sun-tracking system, consisting of an image-based Sun position sensor and a tracking controller with an embedded image processing algorithm, we established a Sun image tracking platform and performed testing in the laboratory; the results show that the proposed Sun-tracking system had the capability to overcome the problem of unstable tracking in cloudy weather and achieve a tracking accuracy of 0.04°. PMID:23615582
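
    The core image-processing step of such a sensor is locating the Sun's image coordinates in each frame; the sketch below (threshold, keep the largest bright blob, take its centroid) is a generic reconstruction, with the threshold and the conversion from pixel offset to pointing angle left as assumptions:

    ```python
    import numpy as np
    from scipy import ndimage

    def sun_position(frame, thresh=200):
        """Return the (row, col) centroid of the largest bright blob (the Sun)."""
        mask = frame >= thresh
        labels, n = ndimage.label(mask)
        if n == 0:
            return None                                # Sun not visible (heavy cloud)
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        sun = labels == 1 + int(np.argmax(sizes))      # keep only the largest blob
        rows, cols = np.nonzero(sun)
        return rows.mean(), cols.mean()

    # The offset of this centroid from the image center, multiplied by the
    # calibrated degrees-per-pixel scale, is the tracking error fed to the controller.
    ```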

  20. Using quantum filters to process images of diffuse axonal injury

    NASA Astrophysics Data System (ADS)

    Pineda Osorio, Mateo

    2014-06-01

    Some images corresponding to diffuse axonal injury (DAI) are processed using several quantum filters such as Hermite, Weibull, and Morse. Diffuse axonal injury is a particular, common, and severe case of traumatic brain injury (TBI). DAI involves global damage to brain tissue on a microscopic scale and causes serious neurologic abnormalities. New imaging techniques provide excellent images showing cellular damage related to DAI. Such images can be processed with quantum filters, which achieve high resolution of dendritic and axonal structures in both normal and pathological states. Using the Laplacian operators from the new quantum filters, excellent edge detectors for neurofiber resolution are obtained. Quantum processing of DAI images is carried out using computer algebra, specifically Maple. The construction of quantum filter plugins, which could be incorporated into the ImageJ software package to make its use simpler for medical personnel, is proposed as a future research line.

  1. Wavelet-Based Signal and Image Processing for Target Recognition

    DTIC Science & Technology

    2002-01-01

    in target recognition applications. Classical spatial and frequency domain image processing algorithms were generalized to process discrete wavelet transform (DWT) data. Results include adaptation of classical filtering, smoothing and interpolation techniques to DWT. From 2003 the research

  2. Mathematical Morphology Techniques For Image Processing Applications In Biomedical Imaging

    NASA Astrophysics Data System (ADS)

    Bartoo, Grace T.; Kim, Yongmin; Haralick, Robert M.; Nochlin, David; Sumi, Shuzo M.

    1988-06-01

    Mathematical morphology operations allow object identification based on shape and are useful for grouping a cluster of small objects into one object. Because of these capabilities, we have implemented and evaluated this technique for our study of Alzheimer's disease. The microscopic hallmark of Alzheimer's disease is the presence of brain lesions known as neurofibrillary tangles and senile plaques. These lesions have distinct shapes compared to normal brain tissue. Neurofibrillary tangles appear as flame-shaped structures, whereas senile plaques appear as circular clusters of small objects. In order to quantitatively analyze the distribution of these lesions, we have developed and applied the tools of mathematical morphology on the Pixar Image Computer. As a preliminary test of the accuracy of the automatic detection algorithm, a study comparing computer and human detection of senile plaques was performed by evaluating 50 images from 5 different patients. The results of this comparison demonstrate that the computer counts correlate very well with the human counts (correlation coefficient = .81). Now that the basic algorithm has been shown to work, optimization of the software will be performed to improve its speed. Also, future improvements such as local adaptive thresholding will be made to the image analysis routine to further improve the system's accuracy.
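
    The grouping capability mentioned above (merging a circular cluster of small objects into one plaque-like object) corresponds to morphological closing followed by labelling; the scipy sketch below is a generic reconstruction, with the structuring-element radius as an assumption rather than the study's Pixar implementation:

    ```python
    import numpy as np
    from scipy import ndimage

    def group_plaque_candidates(mask, radius=5):
        """Merge nearby binary fragments into single objects, then count them."""
        y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
        disk = x * x + y * y <= radius * radius                    # disk structuring element
        closed = ndimage.binary_closing(mask, structure=disk)      # bridge gaps in a cluster
        cleaned = ndimage.binary_opening(closed, np.ones((2, 2)))  # drop isolated specks
        labels, count = ndimage.label(cleaned)
        return labels, count    # `count` approximates the number of plaque-like objects
    ```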

  3. Image data processing of earth resources management. [technology transfer

    NASA Technical Reports Server (NTRS)

    Desio, A. W.

    1974-01-01

    Various image processing and information extraction systems are described along with the design and operation of an interactive multispectral information system, IMAGE 100. Analyses of ERTS data over a number of U.S. sites, using IMAGE 100, are presented. The following analyses are included: (1) investigations of crop inventory and management using remote sensing; and (2) land cover classification for environmental impact assessments. Results show that useful information is provided by IMAGE 100 analyses of ERTS data in digital form.

  4. Pyramidal Image-Processing Code For Hexagonal Grid

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1990-01-01

    Algorithm based on processing of information on intensities of picture elements arranged in regular hexagonal grid. Called "image pyramid" because image information at each processing level arranged in hexagonal grid having one-seventh number of picture elements of next lower processing level, each picture element derived from hexagonal set of seven nearest-neighbor picture elements in next lower level. At lowest level, fine-resolution of elements of original image. Designed to have some properties of image-coding scheme of primate visual cortex.

  5. High resolution image processing on low-cost microcomputers

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1993-01-01

    Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide-range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.

  7. IPL Processing of the Viking Orbiter Images of Mars

    NASA Technical Reports Server (NTRS)

    Ruiz, R. M.; Elliott, D. A.; Yagi, G. M.; Pomphrey, R. B.; Power, M. A.; Farrell, W., Jr.; Lorre, J. J.; Benton, W. D.; Dewar, R. E.; Cullen, L. E.

    1977-01-01

    The Viking orbiter cameras returned over 9000 images of Mars during the 6-month nominal mission. Digital image processing was required to produce products suitable for quantitative and qualitative scientific interpretation. Processing included the production of surface elevation data using computer stereophotogrammetric techniques, crater classification based on geomorphological characteristics, and the generation of color products using multiple black-and-white images recorded through spectral filters. The Image Processing Laboratory of the Jet Propulsion Laboratory was responsible for the design, development, and application of the software required to produce these 'second-order' products.

  9. Monitoring Car Drivers' Condition Using Image Processing

    NASA Astrophysics Data System (ADS)

    Adachi, Kazumasa; Yamamto, Nozomi; Yamamoto, Osami; Nakano, Tomoaki; Yamamoto, Shin

    We have developed a car driver monitoring system for measuring drivers' consciousness, with which we aim to reduce car accidents caused by driver drowsiness. The system consists of the following three subsystems: an image capturing system with a pulsed infrared CCD camera; a system for detecting the blinking waveform from the images, using a neural network with which we can extract images of the face and eye areas; and a system for measuring the driver's consciousness by analyzing the waveform with a fuzzy inference technique, among others. The third subsystem first extracts three factors from the waveform and then analyzes them with a statistical method, while our previous system used only one factor. Our experiments showed that the three-factor method used this time was more effective for measuring drivers' consciousness than the one-factor method described in the previous paper. Moreover, the method is more suitable for fitting the parameters of the system to each individual driver.

  10. Protocols for Image Processing based Underwater Inspection of Infrastructure Elements

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; Pakrashi, Vikram

    2015-07-01

    Image processing can be an important tool for inspecting underwater infrastructure elements like bridge piers and pile wharves. Underwater inspection often relies on visual descriptions by divers who are not necessarily trained in the specifics of structural degradation, and the information may often be vague, prone to error, or open to significant variation of interpretation. Underwater vehicles, on the other hand, can be quite expensive to deploy for such inspections. Additionally, there is now significant encouragement globally towards the deployment of more offshore renewable wind turbines and wave devices, and the requirement for underwater inspection can be expected to increase significantly in the coming years. While the merit of image processing based assessment of the condition of underwater structures is understood to a certain degree, there is no existing protocol for such image based methods. This paper discusses and describes an image processing protocol for the underwater inspection of structures. A stereo imaging based image processing method is considered in this regard, and protocols are suggested for image storage, imaging, diving, and inspection. A combined underwater imaging protocol is finally presented which can be used for a variety of situations within a range of image scenes and environmental conditions affecting the imaging conditions. An example of detecting marine growth on a structure in Cork Harbour, Ireland, is presented.

  11. Functional minimization problems in image processing

    NASA Astrophysics Data System (ADS)

    Kim, Yunho; Vese, Luminita A.

    2008-02-01

    In this work we wish to recover an unknown image from a blurry version. We solve this inverse problem by energy minimization and regularization. We seek a solution of the form u + v, where u is a function of bounded variation (cartoon component), while v is an oscillatory component (texture), modeled by a Sobolev function with negative degree of differentiability. Experimental results show that this cartoon + texture model better recovers textured details in natural images, by comparison with the more standard models where the unknown is restricted only to the space of functions of bounded variation.
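
    In the notation suggested by the abstract, with f the observed blurry image and k the blur kernel, the energy being minimized has the following general form; the weights, the negative Sobolev order, and the fidelity exponent are assumptions, since the abstract does not state them:

    $$
    \inf_{u,v}\; E(u,v) \;=\; |u|_{BV} \;+\; \mu\,\|v\|_{H^{-s}} \;+\; \lambda\,\|f - k * (u+v)\|_{L^2}^{2}, \qquad s > 0,
    $$

    where $|u|_{BV}$ is the total variation of the cartoon component u, the $H^{-s}$ norm keeps v oscillatory (texture), and the last term enforces fidelity to the blurry observation.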

  12. From Image to Text: Using Images in the Writing Process

    ERIC Educational Resources Information Center

    Andrzejczak, Nancy; Trainin, Guy; Poldberg, Monique

    2005-01-01

    This study looks at the benefits of integrating visual art creation and the writing process. The qualitative inquiry uses student, parent, and teacher interviews coupled with field observation, and artifact analysis. Emergent coding based on grounded theory clearly shows that visual art creation enhances the writing process. Students used more…

  13. Digital image processing software system using an array processor

    SciTech Connect

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-03-10

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its application to a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table.

  14. Image Processing In Laser-Beam-Steering Subsystem

    NASA Technical Reports Server (NTRS)

    Lesh, James R.; Ansari, Homayoon; Chen, Chien-Chung; Russell, Donald W.

    1996-01-01

    Conceptual design of image-processing circuitry developed for proposed tracking apparatus described in "Beam-Steering Subsystem For Laser Communication" (NPO-19069). In proposed system, desired frame rate achieved by "windowed" readout scheme in which only pixels containing and surrounding two spots read out and others skipped without being read. Image data processed rapidly and efficiently to achieve high frequency response.

  15. Restoration Of Faded Color Photographs By Digital Image Processing

    NASA Astrophysics Data System (ADS)

    Gschwind, Rudolf

    1989-10-01

    Color photographs possess poor stability towards light, chemicals, heat, and humidity. As a consequence, the colors of photographs deteriorate with time. Because of the complexity of the processes that cause the dyes to fade, it is impossible to restore the images by chemical means. It is therefore attempted to restore faded color films by means of digital image processing.

  17. Digital processing of radiographic images from PACS to publishing.

    PubMed

    Christian, M E; Davidson, H C; Wiggins, R H; Berges, G; Cannon, G; Jackson, G; Chapman, B; Harnsberger, H R

    2001-03-01

    Several studies have addressed the implications of filmless radiologic imaging for telemedicine, diagnostic ability, and electronic teaching files. However, many publishers still require authors to submit hard-copy images for the publication of articles and textbooks. This study compares the quality of digital images directly exported from picture archiving and communication systems (PACS) to that of images digitized from radiographic film. The authors evaluated the quality of publication-grade glossy photographs produced from digital radiographic images using 3 different methods: (1) film images digitized using a desktop scanner and then printed, (2) digital images obtained directly from PACS and then printed, and (3) digital images obtained from PACS and processed to improve sharpness prior to printing. Twenty images were printed using each of the 3 different methods and rated for quality by 7 radiologists. The results were analyzed for statistically significant differences among the image sets. Subjective evaluations of the filmless images found them to be of equal or better quality than the digitized images. Direct electronic transfer of PACS images reduces the number of steps involved in creating publication-quality images as well as providing the means to produce high-quality radiographic images in a digital environment.
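
    The study does not say which sharpening filter was applied before printing; unsharp masking is the usual choice in this setting, so the sketch below shows that generic operation with illustrative parameters:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(img, sigma=2.0, amount=1.0):
        """Sharpen an 8-bit image by adding back its high-frequency residual."""
        img = img.astype(float)
        blurred = gaussian_filter(img, sigma)                 # low-pass copy
        sharp = img + amount * (img - blurred)                # boost the detail residual
        return np.clip(sharp, 0, 255).astype(np.uint8)
    ```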

  18. High-performance image processing on the desktop

    NASA Astrophysics Data System (ADS)

    Jordan, Stephen D.

    1996-04-01

    The suitability of computers to the task of medical image visualization for the purposes of primary diagnosis and treatment planning depends on three factors: speed, image quality, and price. To be widely accepted the technology must increase the efficiency of the diagnostic and planning processes. This requires processing and displaying medical images of various modalities in real-time, with accuracy and clarity, on an affordable system. Our approach to meeting this challenge began with market research to understand customer image processing needs. These needs were translated into system-level requirements, which in turn were used to determine which image processing functions should be implemented in hardware. The result is a computer architecture for 2D image processing that is both high-speed and cost-effective. The architectural solution is based on the high-performance PA-RISC workstation with an HCRX graphics accelerator. The image processing enhancements are incorporated into the image visualization accelerator (IVX) which attaches to the HCRX graphics subsystem. The IVX includes a custom VLSI chip which has a programmable convolver, a window/level mapper, and an interpolator supporting nearest-neighbor, bi-linear, and bi-cubic modes. This combination of features can be used to enable simultaneous convolution, pan, zoom, rotate, and window/level control into 1 k by 1 k by 16-bit medical images at 40 frames/second.
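
    Of the hardware-accelerated functions named above, window/level mapping is the simplest to state: a linear ramp across a chosen window of raw values, clipped at both ends. A software sketch for 16-bit medical pixels follows (the window and level values are illustrative):

    ```python
    import numpy as np

    def window_level(img16, window=400.0, level=50.0):
        """Map raw values in [level - window/2, level + window/2] onto 0..255."""
        lo = level - window / 2.0
        ramp = (img16.astype(float) - lo) / window     # linear ramp across the window
        return (np.clip(ramp, 0.0, 1.0) * 255.0).astype(np.uint8)
    ```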

  19. AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves

    NASA Astrophysics Data System (ADS)

    Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.

    2017-02-01

    ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
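
    Per frame, the time-series differential photometry AIJ performs reduces to ratios of background-subtracted aperture sums; the numpy sketch below is only schematic (square cutouts stand in for circular apertures, and the star coordinates are assumed known):

    ```python
    import numpy as np

    def aperture_flux(frame, y, x, r=5, sky_pad=3):
        """Background-subtracted flux in a square aperture centered on (y, x)."""
        box = frame[y - r:y + r + 1, x - r:x + r + 1].astype(float)
        sky = frame[y - r - sky_pad:y + r + sky_pad + 1,
                    x - r - sky_pad:x + r + sky_pad + 1].astype(float)
        sky_level = np.median(sky)            # crude local background estimate
        return box.sum() - sky_level * box.size

    # One differential light-curve point: target flux over the comparison ensemble.
    # rel = aperture_flux(frame, *target_yx) / sum(
    #     aperture_flux(frame, *c) for c in comparison_yx_list)
    ```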

  20. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  1. MOPEX: a software package for astronomical image processing and visualization

    NASA Astrophysics Data System (ADS)

    Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley

    2006-06-01

    We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, and point source extraction, as well as a number of minor image processing tasks. The combination of the image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows the control over image processing and display to be closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback on the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though the package was originally designed for the Spitzer Space Telescope mission, much of its functionality is of general usefulness and can be used for working with existing astronomical data and for new missions. The software in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. The software

  2. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    PubMed

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a residual convolutional autoencoder (rCAE) and a residual convolutional neural network (rCNN). We varied the convolutional kernel size and the number of convolutional layers for both networks, and the number of pooling and upsampling layers for the rCAE. Ground-truth images were generated by applying contrast-limited adaptive histogram equalization (CLAHE) to the input images. The network models were trained to produce, from the unprocessed input image, output whose quality is close to that of the ground-truth image. For the image-denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. The suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
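
    The paper's exact architectures are not reproduced here; the PyTorch sketch below only illustrates the general shape of a residual convolutional autoencoder with a single pooling/upsampling pair, as described above:

        import torch
        import torch.nn as nn

        class TinyDenoiser(nn.Module):
            """Minimal residual convolutional autoencoder in the spirit of the
            paper's rCAE: encode, pool, decode, and add the result back to the
            noisy input as a learned residual correction."""
            def __init__(self):
                super().__init__()
                self.encode = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),                       # downsample by 2
                    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                )
                self.decode = nn.Sequential(
                    nn.Upsample(scale_factor=2, mode="nearest"),
                    nn.Conv2d(16, 1, 3, padding=1),
                )

            def forward(self, x):
                return x + self.decode(self.encode(x))     # residual connection

        net = TinyDenoiser()
        noisy = torch.randn(1, 1, 256, 256)         # stand-in fluoroscopy frame
        denoised = net(noisy)                       # training against CLAHE targets omitted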

  3. TULIPS: the Uppsala-Linkoping Image Processing System.

    PubMed

    Holmquist, J; Antonsson, D; Bengtsson, E; Danielsson, P E; Eriksson, O; Hedblom, T; Mårtensson, A; Nordin, B; Olsson, T; Stenkvist, B

    1981-09-01

    The Uppsala-Linkoping Image Processing System, TULIPS, is described. TULIPS, a hardware-software system designed for cell image processing, was developed at Uppsala University Hospital in cooperation with the Department of Electrical Engineering at Linkoping University. The hardware part of the image processing system is built around a high-speed data bus with a capacity of about 40 M byte/sec connected to a PDP-11/55 host computer. An image memory, an LSI-11 microcomputer and a video interface for displaying the image memory content on a TV monitor are also connected to the high-speed bus. An automated microscope and a "Poulsen processor" for low resolution segmentation, both to be attached to the high-speed bus, are being developed. A monitor and an interpreter for an image processing language have been implemented on the host computer. This software system allows interactive, as well as batch, processing. The degree of user interaction is easily adapted to the user's needs. The image processing language is command oriented, and it is easily expanded by adding new commands. The system has been used both for studies in the field of quantitative microscopy and as a platform for development and testing of new image processing algorithms.

  4. Assessment of vessel diameters for MR brain angiography processed images

    NASA Astrophysics Data System (ADS)

    Moraru, Luminita; Obreja, Cristian-Dragos; Moldovanu, Simona

    2015-12-01

    The motivation was to develop an assessment method for measuring (in)visible differences between original and processed images in MR brain angiography, as a way of evaluating the status of vessel segments (i.e., the existence of occlusions or damaged intracerebral vessels such as aneurysms). Because image quality is generally limited, we improve the performance of the evaluation through digital image processing. The goal is to determine the processing method that best allows an accurate assessment of patients with cerebrovascular diseases. A total of 10 MR brain angiography images were processed by the following techniques: histogram equalization, Wiener filtering, linear contrast adjustment, contrast-limited adaptive histogram equalization, bias correction, and the Marr-Hildreth filter. Each original image and its processed versions were analyzed in a stacking procedure so that the same vessel and its corresponding diameter were measured. Original and processed images were evaluated by measuring the vessel diameter (in pixels) along an established direction and at a precise anatomic location, using an ImageJ plugin. Mean diameter measurements differed significantly across the same segment for the different processing techniques. The best results were provided by the Wiener filter and linear contrast adjustment, and the worst by the Marr-Hildreth filter.
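
    Most of the listed techniques are available off the shelf; a minimal sketch with SciPy and scikit-image (bias correction and the Marr-Hildreth filter omitted; the input slice is a random placeholder) might look like:

        import numpy as np
        from scipy.signal import wiener
        from skimage import exposure

        img = np.random.rand(256, 256)   # stand-in MR angiography slice in [0, 1]

        processed = {
            "histogram_equalization": exposure.equalize_hist(img),
            "wiener": wiener(img, mysize=5),
            "linear_contrast": exposure.rescale_intensity(img, in_range=(0.05, 0.95)),
            "clahe": exposure.equalize_adapthist(img, clip_limit=0.02),
        }
        # Each variant would then be stacked with the original so the same vessel
        # segment can be measured (in pixels) across all processing methods.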

  5. Stature estimation from skull measurements using multidetector computed tomographic images: A Japanese forensic sample.

    PubMed

    Torimitsu, Suguru; Makino, Yohsuke; Saitoh, Hisako; Sakuma, Ayaka; Ishii, Namiko; Yajima, Daisuke; Inokuchi, Go; Motomura, Ayumi; Chiba, Fumiko; Yamaguchi, Rutsuko; Hashimoto, Mari; Hoshioka, Yumi; Iwase, Hirotaro

    2016-01-01

    The aim of this study was to assess the correlation between stature and cranial measurements in a contemporary Japanese population, using three-dimensional (3D) computed tomographic (CT) images. A total of 228 cadavers (123 males, 105 females) underwent postmortem CT scanning and subsequent forensic autopsy between May 2011 and April 2015. Five cranial measurements were taken from 3D CT reconstructed images that extracted only cranial data. The correlations between stature and each of the cranial measurements were assessed with Pearson product-moment correlation coefficients. Simple and multiple regression analyses showed significant correlations between stature and cranial measurements. In conclusion, cranial measurements obtained from 3D CT images may be useful for forensic estimation of the stature of Japanese individuals, particularly in cases where better predictors, such as long bones, are not available. Copyright © 2015. Published by Elsevier Ireland Ltd.
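
    The paper's regression coefficients are not reproduced here; the sketch below only illustrates the kind of simple regression described, with made-up measurement pairs:

        import numpy as np

        # Hypothetical data: maximum cranial length (mm) vs. stature (cm).
        length = np.array([176.0, 181.5, 185.2, 178.8, 190.1, 183.4])
        stature = np.array([158.0, 165.5, 171.0, 161.2, 176.8, 168.3])

        slope, intercept = np.polyfit(length, stature, 1)  # simple regression
        r = np.corrcoef(length, stature)[0, 1]             # Pearson correlation

        estimate = slope * 184.0 + intercept   # stature estimate for a new skull
        print(f"stature ~ {slope:.2f} * length + {intercept:.1f} (r = {r:.2f})")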

  6. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    PubMed

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in stereolithography (STL) file format, and the 3dMD model was exported in Virtual Reality Modeling Language (VRML) file format. Image registration and fusion were performed in Mimics software, with a genetic algorithm used for precise fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where they exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allow the 3D virtual head to serve as an accurate, realistic, and widely applicable tool, of great benefit to virtual face modeling.

  7. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real-time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  8. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
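
    The variance-stabilizing effect of the logarithm on multiplicative noise is easy to verify numerically; a small NumPy demonstration with synthetic gamma-distributed speckle (the film performs the log step optically, of course):

        import numpy as np

        rng = np.random.default_rng(0)
        signal = rng.uniform(10.0, 200.0, size=100_000)   # scene intensities
        speckle = rng.gamma(shape=4.0, scale=0.25, size=signal.size)  # mean ~1
        noisy = signal * speckle                          # multiplicative model

        # Before the log: noise magnitude grows with the signal level.
        resid = noisy - signal
        lo, hi = signal < 50, signal > 150
        print(resid[lo].std(), resid[hi].std())           # very different

        # After the log: noise is additive with signal-independent variance.
        log_resid = np.log(noisy) - np.log(signal)
        print(log_resid[lo].std(), log_resid[hi].std())   # nearly equal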

  10. Method of detecting meter base on image-processing

    NASA Astrophysics Data System (ADS)

    Wang, Hong-ping; Wang, Peng; Yu, Zheng-lin

    2008-03-01

    This paper proposes a new method of meter verification based on image arithmetic-logic operations and a high-precision raster sensor. The method treats the data measured by the precision raster as the true value and the data obtained by digital image processing as the measured value, and verifies the meter by comparing the two. It exploits the dynamic movement of the meter pointer to perform image subtraction and segmentation, and so determines the displacement of the imaged pointer at the boundary instants. This image-segmentation technique replaces the traditional approach of manual operation and visual reading, which suffers from low accuracy and poor repeatability. The achieved precision meets the technical requirements of the national verification regulations, and experiments show the method to be reliable and highly accurate. The paper presents the overall verification scheme, the method for capturing the pointer image, and an analysis of the indication error.
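
    The core of the approach, differencing two frames so that the static dial cancels and only the moving pointer remains, can be sketched as follows (synthetic frames, arbitrary threshold):

        import numpy as np

        def pointer_mask(frame_a, frame_b, thresh=30):
            """Segment the moving pointer by thresholding the frame difference."""
            diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
            return diff > thresh

        # Hypothetical 8-bit frames grabbed before and after the pointer moves:
        a = np.random.randint(0, 40, (240, 320), dtype=np.uint8)
        b = a.copy()
        b[100:105, 50:200] = 255          # simulated pointer in the second frame
        mask = pointer_mask(a, b)
        print(mask.sum(), "pixels changed")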

  11. Dynamic feature analysis for Voyager at the Image Processing Laboratory

    NASA Technical Reports Server (NTRS)

    Yagi, G. M.; Lorre, J. J.; Jepsen, P. L.

    1978-01-01

    Voyager 1 and 2 were launched from Cape Kennedy to Jupiter, Saturn, and beyond on September 5, 1977 and August 20, 1977, respectively. The role of the Image Processing Laboratory is to provide the Voyager Imaging Team with the support needed to identify atmospheric features (tiepoints) in Jupiter and Saturn data, and to analyze and display them in a suitable form. This support includes the software needed to acquire and store tiepoints, the hardware needed to interactively display images and tiepoints, and the general image processing environment necessary for decalibration and enhancement of the input images. The objective is an understanding of global circulation in the atmospheres of Jupiter and Saturn. Attention is given to the Voyager imaging subsystem, the Voyager imaging science objectives, hardware, software, display monitors, a dynamic feature study, decalibration, navigation, and the data base.

  12. [Generation and processing of digital images in radiodiagnosis].

    PubMed

    Bajla, I; Belan, V

    1993-05-01

    The paper describes universal principles of diagnostic imaging. The attention is focused particularly on digital image generation in medicine. The methodology of display visualization of measured data is discussed. The problems of spatial relation representation and visual perception of image brightness are mentioned. The methodological issues of digital image processing (DIP) are discussed, particularly the relation of DIP to the other related disciplines, fundamental tasks in DIP and classification of DIP operations from the computational viewpoint. The following examples of applying DIP operations in diagnostic radiology are overviewed: local contrast enhancement in digital image, spatial filtering, quantitative texture analysis, synthesis of the 3D pseudospatial image based on the 2D tomogram set, multimodal processing of medical images. New trends of application of DIP methods in diagnostic radiology are outlined: evaluation of the diagnostic efficiency of DIP operations by means of ROC analysis, construction of knowledge-based systems of DIP in medicine. (Fig. 12, Ref. 26.)

  13. Acquisition and Post-Processing of Immunohistochemical Images.

    PubMed

    Sedgewick, Jerry

    2017-01-01

    Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps reported, scientists not only follow good laboratory practices, but avoid ethical issues associated with post-processing, and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
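
    As an illustration of one of the listed acquisition-side corrections, a standard flat-field correction takes only a few lines of NumPy (the array names are placeholders):

        import numpy as np

        def flatfield_correct(raw, flat, dark):
            """Standard flat-field correction: subtract the dark frame, divide
            by the dark-subtracted flat, and rescale to preserve mean intensity."""
            num = raw.astype(np.float64) - dark
            gain = flat.astype(np.float64) - dark
            return num * gain.mean() / np.maximum(gain, 1e-6)

        # raw: specimen image; flat: empty-field image under the same
        # illumination; dark: image taken with the shutter closed.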

  14. IR camera system with an advanced image processing technologies

    NASA Astrophysics Data System (ADS)

    Ohkubo, Syuichi; Tamura, Tetsuo

    2016-05-01

    We have developed image processing technologies that resolve issues caused by the inherent characteristics of UFPA (uncooled focal plane array) sensors, in order to broaden their applications. For example, the large time constant of an uncooled IR (infrared) sensor limits its field of application, because motion blur occurs when monitoring an object moving at high speed. The developed image processing can eliminate this blur and retrieve nearly the image that would be observed with the object at rest. The processing is based on the idea that the output of the IR sensor can be construed as the convolution of the IR energy radiated from the object with the impulse response of the sensor. With knowledge of the impulse response and the speed of the moving object, the IR energy from the object can be deconvolved from the observed images. We have successfully retrieved blur-free images using an IR sensor with a 15 ms time constant, under conditions in which the object moves at a speed of about 10 pixels/60 Hz. Image processing for reducing FPN (fixed pattern noise) has also been developed. A UFPA responsive in a narrow wavelength region, e.g., around 8 μm, is appropriate for measuring glass surfaces; however, it suffers from severe FPN due to its lower sensitivity compared with the 8-13 μm band. The developed image processing exploits images of the shutter itself and can reduce FPN significantly.
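
    A rough Python sketch of the deblurring idea: model the sensor lag as a known horizontal PSF, then invert it with Wiener deconvolution from scikit-image (the exponential PSF is an assumption for illustration, not the authors' measured impulse response):

        import numpy as np
        from scipy.signal import fftconvolve
        from skimage.restoration import wiener

        # Assumed PSF: exponential decay along the motion direction.
        psf = np.zeros((1, 11))
        psf[0, :] = np.exp(-np.arange(11) / 5.0)
        psf /= psf.sum()

        sharp = np.random.rand(128, 128)                # stand-in IR frame
        blurred = fftconvolve(sharp, psf, mode="same")  # sensor output model

        # Deconvolve the known impulse response to retrieve the unblurred frame.
        restored = wiener(blurred, psf, balance=0.01)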

  15. Processing of polarametric SAR images. Final report

    SciTech Connect

    Warrick, A.L.; Delaney, P.A.

    1995-09-01

    The objective of this work was to develop a systematic method of combining multifrequency polarized SAR images. It is shown that the traditional methods of correlation, hard targets, and template matching fail to produce acceptable results. Hence, a new algorithm was developed and tested. The new approach combines the three traditional methods with an interpolation method. An example is shown that demonstrates the new algorithm's performance. The results are summarized, and suggestions for future research are presented.

  16. Image processing methods to obtain symmetrical distribution from projection image.

    PubMed

    Asano, H; Takenaka, N; Fujii, T; Nakamatsu, E; Tagami, Y; Takeshima, K

    2004-10-01

    Flow visualization and measurement of cross-sectional liquid distribution is very effective to clarify the effects of obstacles in a conduit on heat transfer and flow characteristics of gas-liquid two-phase flow. In this study, two methods to obtain cross-sectional distribution of void fraction are applied to vertical upward air-water two-phase flow. These methods need projection image only from one direction. Radial distributions of void fraction in a circular tube and a circular-tube annuli with a spacer were calculated by Abel transform based on the assumption of axial symmetry. On the other hand, cross-sectional distributions of void fraction in a circular tube with a wire coil whose conduit configuration rotates about the tube central axis periodically were measured by CT method based on the assumption that the relative distributions of liquid phase against the wire were kept along the flow direction.
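
    A minimal numerical sketch of the axisymmetric inversion step, using onion peeling over concentric rings rather than the analytic Abel transform (synthetic profile, uniform ring discretization assumed):

        import numpy as np

        def ring_matrix(n, dr=1.0):
            """A[i, j] is the chord length, at height r_i, through ring j of an
            axisymmetric cross section split into n concentric rings."""
            r = np.arange(n + 1) * dr
            A = np.zeros((n, n))
            for i in range(n):
                for j in range(i, n):
                    A[i, j] = 2.0 * (np.sqrt(r[j + 1]**2 - r[i]**2)
                                     - np.sqrt(r[j]**2 - r[i]**2))
            return A

        # Forward-project a known radial void-fraction profile, then invert.
        n = 50
        A = ring_matrix(n)
        true_profile = np.linspace(0.8, 0.1, n)   # void fraction, center to wall
        projection = A @ true_profile             # what one projection would give
        recovered = np.linalg.solve(A, projection)  # matches true_profile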

  17. Processing ISS Images of Titan's Surface

    NASA Technical Reports Server (NTRS)

    Perry, Jason; McEwen, Alfred; Fussner, Stephanie; Turtle, Elizabeth; West, Robert; Porco, Carolyn; Knowles, Ben; Dawson, Doug

    2005-01-01

    One of the primary goals of the Cassini-Huygens mission, in orbit around Saturn since July 2004, is to understand the surface and atmosphere of Titan. Surface investigations are primarily accomplished with RADAR, the Visual and Infrared Mapping Spectrometer (VIMS), and the Imaging Science Subsystem (ISS) [1]. The latter two use methane "windows", regions in Titan's reflectance spectrum where its atmosphere is most transparent, to observe the surface. For VIMS, this produces clear views of the surface near 2 and 5 microns [2]. ISS uses a narrow continuum band filter (CB3) at 938 nanometers. While these methane windows provide our best views of the surface, the images produced are not as crisp as ISS images of satellites like Dione and Iapetus [3] due to the atmosphere. Given a reasonable estimate of contrast (approx.30%), the apparent resolution of features is approximately 5 pixels due to the effects of the atmosphere and the Modulation Transfer Function of the camera [1,4]. The atmospheric haze also reduces contrast, especially with increasing emission angles [5].

  18. Image processing of underwater multispectral imagery

    USGS Publications Warehouse

    Zawada, D. G.

    2003-01-01

    Capturing in situ fluorescence images of marine organisms presents many technical challenges. The effects of the medium, as well as the particles and organisms within it, are intermixed with the desired signal. Methods for extracting and preparing the imagery for analysis are discussed in reference to a novel underwater imaging system called the low-light-level underwater multispectral imaging system (LUMIS). The instrument supports both uni- and multispectral collections, each of which is discussed in the context of an experimental application. In unispectral mode, LUMIS was used to investigate the spatial distribution of phytoplankton. A thin sheet of laser light (532 nm) induced chlorophyll fluorescence in the phytoplankton, which was recorded by LUMIS. Inhomogeneities in the light sheet led to the development of a beam-pattern-correction algorithm. Separating individual phytoplankton cells from a weak background fluorescence field required a two-step procedure consisting of edge detection followed by a series of binary morphological operations. In multispectral mode, LUMIS was used to investigate the bio-assay potential of fluorescent pigments in corals. Problems with the commercial optical-splitting device produced nonlinear distortions in the imagery. A tessellation algorithm, including an automated tie-point-selection procedure, was developed to correct the distortions. Only pixels corresponding to coral polyps were of interest for further analysis. Extraction of these pixels was performed by a dynamic global-thresholding algorithm.

  19. Image processing system to analyze droplet distributions in sprays

    NASA Technical Reports Server (NTRS)

    Bertollini, Gary P.; Oberdier, Larry M.; Lee, Yong H.

    1987-01-01

    An image processing system was developed which automatically analyzes the size distributions in fuel spray video images. Images are generated by using pulsed laser light to freeze droplet motion in the spray sample volume under study. This coherent illumination source produces images which contain droplet diffraction patterns representing the droplets degree of focus. The analysis is performed by extracting feature data describing droplet diffraction patterns in the images. This allows the system to select droplets from image anomalies and measure only those droplets considered in focus. Unique features of the system are the totally automated analysis and droplet feature measurement from the grayscale image. The feature extraction and image restoration algorithms used in the system are described. Preliminary performance data is also given for two experiments. One experiment gives a comparison between a synthesized distribution measured manually and automatically. The second experiment compares a real spray distribution measured using current methods against the automatic system.

  20. Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry

    NASA Technical Reports Server (NTRS)

    Hong, Yie-Ming

    1973-01-01

    Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.

  1. A model for simulation and processing of radar images

    NASA Technical Reports Server (NTRS)

    Stiles, J. A.; Frost, V. S.; Shanmugam, K. S.; Holtzman, J. C.

    1981-01-01

    A model for recording, processing, presentation, and analysis of radar images in digital form is presented. The observed image is represented as having two random components. One models the variation due to the coherent addition of electromagnetic energy scattered from different objects in the illuminated areas; this component is referred to as fading. The other represents the terrain variation, which can be described as the actual signal the radar is attempting to measure. Combining these two components describes radar images as the output of a linear space-variant filter operating on the product of the fading and terrain random processes. In addition, the model is applied to a digital image processing problem through the design and implementation of an enhancement scheme. Finally, parallel approaches are being employed as possible means of solving other processing problems such as SAR image map-matching, data compression, and pattern recognition.

  2. Computer vision applications for coronagraphic optical alignment and image processing.

    PubMed

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.

  3. Data management in pattern recognition and image processing systems

    NASA Technical Reports Server (NTRS)

    Zobrist, A. L.; Bryant, N. A.

    1976-01-01

    Data management considerations are important to any system which handles large volumes of data or where the manipulation of data is technically sophisticated. A particular problem is the introduction of image-formatted files into the mainstream of data processing application. This report describes a comprehensive system for the manipulation of image, tabular, and graphical data sets which involve conversions between the various data types. A key characteristic is the use of image processing technology to accomplish data management tasks. Because of this, the term 'image-based information system' has been adopted.

  5. Colorimetric Topography of Atherosclerotic Lesions by Television Image Processing

    DTIC Science & Technology

    1979-06-15

    ...thesis requires exposure of a grey scale. These exposures are bracketed to ±3 f-stops centered at the meter-indicated exposure. The processed ... in atherogenesis. For this thesis, five specimens of similar age, sex, and epidemiology were simulated and processed by the algorithms ... 6.1. Conclusion: Employing the standard digitized image derived from the existing theory of image processing, this thesis documents the development ...

  6. Performance evaluation of image processing algorithms in digital mammography

    NASA Astrophysics Data System (ADS)

    Zanca, Federica; Van Ongeval, Chantal; Jacobs, Jurgen; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2008-03-01

    The purpose of the study is to evaluate the performance of different image processing algorithms in terms of the representation of microcalcification clusters in digital mammograms. Clusters were simulated in clinical raw ("for processing") images. The entire dataset consisted of 200 normal mammograms, selected from our clinical routine cases and acquired with a Siemens Novation DR system. In 100 of the normal images a total of 142 clusters were simulated; the remaining 100 normal mammograms served as true-negative input cases. Both abnormal and normal images were processed with 5 commercially available processing algorithms: Siemens OpView1 and Siemens OpView2, Agfa Musica1, Sectra Mamea AB Sigmoid, and IMS Raffaello Mammo 1.2. Five observers were asked to locate and score the cluster(s) in each image by means of a dedicated software tool. Observer performance was assessed using the JAFROC figure of merit. FROC curves, fitted using the IDCA method, were also calculated. JAFROC analysis revealed significant differences among the image processing algorithms in the detection of microcalcification clusters (p = 0.0000369). The calculated average figures of merit are: 0.758 for Siemens OpView2, 0.747 for IMS Processing 1.2, 0.736 for Agfa Musica1, 0.706 for Sectra Mamea AB Sigmoid, and 0.703 for Siemens OpView1. This study is a first step towards a quantitative assessment of image processing in terms of cluster detection in clinical mammograms. Although we showed a significant difference among the image processing algorithms, this method does not on its own allow for a global performance ranking of the investigated algorithms.

  7. Fast image processing on chain board of inverted tooth chain

    NASA Astrophysics Data System (ADS)

    Liu, Qing-min; Li, Guo-fa

    2007-12-01

    This paper discusses common image processing techniques for the chain plates of inverted tooth chains, including noise reduction, image segmentation, edge detection, and contour extraction, and proposes a new sub-pixel algorithm for locating circular edges. The algorithm first applies Canny edge detection to improve the initial localization precision of the edges, then computes the gradient direction, interpolates the gradient image (obtained with the Sobel operator) along the gradient direction, and finally obtains the sub-pixel edge position. Two least-squares fitting methods were applied to straight edges to obtain their sub-pixel locations; analysis and experiments show that the localization error of the improved least-squares line fit is one quarter of that of the ordinary least-squares fit at comparable computation time. The sub-pixel localization of circles increases the effective resolution of the CCD by a factor of 42, greatly improving the localization precision of image edges. To support fast on-line inspection, the complete environment was integrated, comprising image preprocessing, the Hough transform for lines, setting of image position and orientation, sub-pixel localization of lines and circles, and output of the results. The whole processing pipeline runs without operator intervention, and the processing time for a single part is less than 0.3 s. The proposed sub-pixel method is suitable for precise image localization, and the integrated computation meets the requirements of fast inspection, laying the foundation for on-line precision visual measurement.
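
    The sub-pixel refinement step, sampling the gradient magnitude one step either side of an edge pixel along the gradient direction and taking the peak of a parabola through the three samples, can be sketched as follows (the parabola fit stands in for the interpolation, whose exact form the paper does not specify; the edge mask could come from skimage.feature.canny):

        import numpy as np
        from scipy.ndimage import sobel, map_coordinates

        def subpixel_edges(img, edge_mask):
            """Refine integer edge pixels to sub-pixel positions along the
            local gradient direction via a 3-point parabola fit."""
            img = np.asarray(img, dtype=float)
            gx, gy = sobel(img, axis=1), sobel(img, axis=0)
            mag = np.hypot(gx, gy)
            ys, xs = np.nonzero(edge_mask)
            norm = mag[ys, xs] + 1e-9
            nx, ny = gx[ys, xs] / norm, gy[ys, xs] / norm   # unit gradient

            m0 = mag[ys, xs]
            m_plus = map_coordinates(mag, [ys + ny, xs + nx], order=1)
            m_minus = map_coordinates(mag, [ys - ny, xs - nx], order=1)

            # Vertex of the parabola through (-1, m_minus), (0, m0), (+1, m_plus)
            denom = m_minus - 2.0 * m0 + m_plus
            offset = np.where(np.abs(denom) > 1e-9,
                              0.5 * (m_minus - m_plus) / denom, 0.0)
            return xs + offset * nx, ys + offset * ny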

  8. Principles of image processing in digital chest radiography.

    PubMed

    Prokop, Mathias; Neitzel, Ulrich; Schaefer-Prokop, Cornelia

    2003-07-01

    Image processing has a major impact on image quality and diagnostic performance of digital chest radiographs. Goals of processing are to reduce the dynamic range of the image data to capture the full range of attenuation differences between lungs and mediastinum, to improve the modulation transfer function to optimize spatial resolution, to enhance structural contrast, and to suppress image noise. Image processing comprises look-up table operations and spatial filtering. Look-up table operations allow for automated signal normalization and arbitrary choice of image gradation. The most simple and still widely applied spatial filtering algorithms are based on unsharp masking. Various modifications were introduced for dynamic range reduction and MTF restoration. More elaborate and more effective are multi-scale frequency processing algorithms. They are based on the subdivision of an image in multiple frequency bands according to its structural composition. This allows for a wide range of image manipulations including a size-independent enhancement of low-contrast structures. Principles of the various algorithms will be explained and their impact on image appearance will be illustrated by clinical examples. Optimum and sub-optimum parameter settings are discussed and pitfalls will be explained.
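
    A compact sketch of unsharp masking, plus a crude two-band version of multi-scale frequency processing (the sigmas and gains below are illustrative, not clinically tuned values):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def unsharp_mask(img, sigma=25.0, gain=0.7):
            """Boost structures smaller than the blur kernel by adding back a
            weighted difference between the image and its blurred copy."""
            img = img.astype(np.float64)
            low = gaussian_filter(img, sigma)
            return img + gain * (img - low)

        def two_band(img, sigma_coarse=50.0, sigma_fine=3.0, g_coarse=0.3, g_fine=1.2):
            """Toy multi-scale processing: split into coarse/mid/fine bands and
            recombine them with different gains."""
            img = img.astype(np.float64)
            base = gaussian_filter(img, sigma_coarse)
            mid = gaussian_filter(img, sigma_fine)
            return base + (1 + g_coarse) * (mid - base) + (1 + g_fine) * (img - mid)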

  9. An Image Processing Approach to Linguistic Translation

    NASA Astrophysics Data System (ADS)

    Kubatur, Shruthi; Sreehari, Suhas; Hegde, Rajeshwari

    2011-12-01

    The art of translation is as old as written literature. Developments since the Industrial Revolution have influenced the practice of translation, nurturing schools, professional associations, and standards. In this paper, we propose a method for translating typed Kannada text (taken as an image) into its equivalent English text. The National Instruments (NI) Vision Assistant (version 8.5) has been used for Optical Character Recognition (OCR). We developed a new way of transliteration (which we call NIV transliteration) to simplify the training of characters. Also, we build a special type of dictionary for the purpose of translation.

  10. Detecting jaundice by using digital image processing

    NASA Astrophysics Data System (ADS)

    Castro-Ramos, J.; Toxqui-Quitl, C.; Villa Manriquez, F.; Orozco-Guillen, E.; Padilla-Vivanco, A.; Sánchez-Escobar, JJ.

    2014-03-01

    When strong jaundice is present, newborns or adults must undergo clinical tests such as serum bilirubin measurement, which can be traumatic for patients. Jaundice often accompanies liver diseases such as hepatitis or liver cancer. In order to avoid additional trauma, we propose a painless method to detect jaundice (icterus) in newborns or adults. By acquiring digital color images of the palms, soles, and forehead, we analyze RGB attributes and diffuse reflectance spectra as parameters to characterize patients with or without jaundice, and we correlate these parameters with the bilirubin level. By applying a support vector machine, we distinguish between healthy and sick patients.
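
    A minimal sketch of the classification step with scikit-learn; the RGB feature values and labels below are fabricated placeholders, not the authors' clinical data:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Hypothetical features: mean R, G, B of a skin patch per patient.
        X = np.array([[210, 190, 130], [205, 195, 140],    # yellowish skin
                      [200, 170, 170], [195, 175, 175]])   # normal skin
        y = np.array([1, 1, 0, 0])                         # 1 = jaundice

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        clf.fit(X, y)
        print(clf.predict([[208, 192, 135]]))              # new patient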

  11. High performance image processing of SPRINT

    SciTech Connect

    DeGroot, T.

    1994-11-15

    This talk will describe computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray-Y/MP. SPRINT-3 will be 10 times faster. Described will be the parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers.
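
    Filtered back-projection itself is standard; a single-CPU sketch with scikit-image is shown below (SPRINT's contribution was distributing exactly this computation across many nodes, which is not reproduced here):

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon

        phantom = shepp_logan_phantom()                 # 400 x 400 test slice
        angles = np.linspace(0.0, 180.0, 180, endpoint=False)
        sinogram = radon(phantom, theta=angles)         # simulated projections
        reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")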

  12. Automatic construction of image inspection algorithm by using image processing network programming

    NASA Astrophysics Data System (ADS)

    Yoshimura, Yuichiro; Aoki, Kimiya

    2017-03-01

    In this paper, we discuss a method for the automatic programming of inspection image processing. In the industrial field, automatic program generators or expert systems are expected to shorten the period required for developing a new appearance inspection system. So-called "image processing expert systems" have been studied for nearly 30 years, and we are convinced of the need for a new idea. Recently, a novel type of evolutionary algorithm, called genetic network programming (GNP), has been proposed. In this study, we use GNP as a method to create inspection image processing logic. GNP evolves many directed-graph structures and shows an excellent ability to formulate complex problems. We have converted this network program model to Image Processing Network Programming (IPNP). IPNP selects an appropriate image processing command based on characteristics of the input image data and the processing log, and generates visual inspection software consisting of a series of image processing commands. Experiments verify that the proposed method is able to create inspection image processing programs. In a basic experiment with 200 test images, the success rate for detecting the target region was 93.5%.

  13. A novel stereoscopic projection display system for CT images of fractures.

    PubMed

    Liu, Xiujuan; Jiang, Hong; Lang, Yuedong; Wang, Hongbo; Sun, Na

    2013-06-01

    The present study proposed a novel projection display system based on a virtual reality enhancement environment. The proposed system displays stereoscopic images of fractures and enhances the computed tomography (CT) images. The diagnosis and treatment of fractures primarily depend on the post-processing of CT images. However, two-dimensional (2D) images do not show overlapping structures in fractures since they are displayed without visual depth and these structures are too small to be simultaneously observed by a group of clinicians. Stereoscopic displays may solve this problem and allow clinicians to obtain more information from CT images. Hardware with which to generate stereoscopic images was designed. This system utilized the conventional equipment found in meeting rooms. The off-axis algorithm was adopted to convert the CT images into stereo image pairs, which were used as the input for a stereo generator. The final stereoscopic images were displayed using a projection system. Several CT fracture images were imported into the system for comparison with traditional 2D CT images. The results showed that the proposed system aids clinicians in group discussions by producing large stereoscopic images. The results demonstrated that the enhanced stereoscopic CT images generated by the system appear clearer and smoother, such that the sizes, displacement and shapes of bone fragments are easier to assess. Certain fractures that were previously not visible on 2D CT images due to vision overlap became vividly evident in the stereo images. The proposed projection display system efficiently, economically and accurately displayed three-dimensional (3D) CT images. The system may help clinicians improve the diagnosis and treatment of fractures.

  14. A novel stereoscopic projection display system for CT images of fractures

    PubMed Central

    LIU, XIUJUAN; JIANG, HONG; LANG, YUEDONG; WANG, HONGBO; SUN, NA

    2013-01-01

    The present study proposed a novel projection display system based on a virtual reality enhancement environment. The proposed system displays stereoscopic images of fractures and enhances the computed tomography (CT) images. The diagnosis and treatment of fractures primarily depend on the post-processing of CT images. However, two-dimensional (2D) images do not show overlapping structures in fractures since they are displayed without visual depth and these structures are too small to be simultaneously observed by a group of clinicians. Stereoscopic displays may solve this problem and allow clinicians to obtain more information from CT images. Hardware with which to generate stereoscopic images was designed. This system utilized the conventional equipment found in meeting rooms. The off-axis algorithm was adopted to convert the CT images into stereo image pairs, which were used as the input for a stereo generator. The final stereoscopic images were displayed using a projection system. Several CT fracture images were imported into the system for comparison with traditional 2D CT images. The results showed that the proposed system aids clinicians in group discussions by producing large stereoscopic images. The results demonstrated that the enhanced stereoscopic CT images generated by the system appear clearer and smoother, such that the sizes, displacement and shapes of bone fragments are easier to assess. Certain fractures that were previously not visible on 2D CT images due to vision overlap became vividly evident in the stereo images. The proposed projection display system efficiently, economically and accurately displayed three-dimensional (3D) CT images. The system may help clinicians improve the diagnosis and treatment of fractures. PMID:23837053

  15. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  16. [Studies on digital watermark embedding intensity against image processing and image deterioration].

    PubMed

    Nishio, Masato; Ando, Yutaka; Tsukamoto, Nobuhiro; Kawashima, Hironao

    2004-04-01

    In order to apply digital watermarking to medical imaging, it is necessary to find a trade-off between the strength of watermark embedding and the deterioration of image quality. In this study, watermarks were embedded in 4 types of modality images to determine the correlation among watermarking strength, robustness against image processing, and image deterioration due to embedding. The results demonstrated that watermarks embedded by the least-significant-bit insertion method could no longer be detected and recognized after image processing, even when the watermarks were embedded with such strength as to cause image deterioration. On the other hand, watermarks embedded by the discrete cosine transform were clearly detected and recognized even after image processing, regardless of the embedding strength. The maximum level of embedding strength that will not affect diagnosis differed depending on the type of modality. It is expected that embedding the patient information together with the facility information as watermarks will help maintain the patient information, prevent mix-ups of the images, and identify the test-performing facilities. The concurrent use of watermarking less resistant to image processing makes it possible to detect whether any image processing has been performed.
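
    For contrast with the robust DCT approach, the fragility of least-significant-bit embedding is easy to demonstrate in NumPy (toy image and watermark):

        import numpy as np

        def embed_lsb(img, bits):
            """Write a bit string into the least significant bits of the first
            len(bits) pixels: invisible, but destroyed by almost any processing."""
            flat = img.copy().ravel()
            flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
            return flat.reshape(img.shape)

        def extract_lsb(img, n):
            return img.ravel()[:n] & 1

        img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        mark = np.random.randint(0, 2, 128, dtype=np.uint8)
        stego = embed_lsb(img, mark)
        print((extract_lsb(stego, 128) == mark).all())   # True: intact

        # Even a mild vertical blur scrambles the LSB plane:
        blurred = ((stego.astype(np.uint16) + np.roll(stego, 1, axis=0)) // 2).astype(np.uint8)
        survived = (extract_lsb(blurred, 128) == mark).mean()
        print(f"bits surviving the blur: {survived:.0%}")  # ~50%, i.e. chance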

  17. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  18. Evaluation of clinical image processing algorithms used in digital mammography.

    PubMed

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

    Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processing have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processing algorithms (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing algorithms, but at lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the choice of image processing algorithm significantly influences the detection of microcalcification clusters.

  19. Photo-reconnaissance applications of computer processing of images.

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1972-01-01

    Discussion of image processing techniques for enhancement and calibration of Jet Propulsion Laboratory imaging experiment pictures returned from NASA space vehicles such as Ranger, Mariner, and Surveyor. Particular attention is given to data transmission, resolution versus recognition, and color aspects of digital data processing. The effectiveness of these techniques in applications to images from a wide variety of sources is noted. It is anticipated that the use of computer processing for enhancement of imagery will increase as these techniques improve and their cost declines.

  20. Ground control requirements for precision processing of ERTS images

    USGS Publications Warehouse

    Burger, Thomas C.

    1972-01-01

    When the first Earth Resources Technology Satellite (ERTS-A) flies in 1972, NASA expects to receive and bulk-process 9,000 images a week. From this deluge of images, a few will be selected for precision processing; that is, about 5 percent will be further treated to improve the geometry of the scene, both in the relative and absolute sense. Control points are required for this processing. This paper describes the control requirements for relating ERTS images to a reference surface of the earth. Enough background on the ERTS-A satellite is included to make the requirements meaningful to the user.

  1. Land image data processing requirements for the EOS era

    NASA Technical Reports Server (NTRS)

    Wharton, Stephen W.; Newcomer, Jeffrey A.

    1989-01-01

    Requirements are proposed for a hybrid approach to image analysis that combines the functionality of a general-purpose image processing system with the knowledge representation and manipulation capabilities associated with expert systems to improve the productivity of scientists in extracting information from remotely sensed image data. The overall functional objectives of the proposed system are to: (1) reduce the level of human interaction required on a scene-by-scene basis to perform repetitive image processing tasks; (2) allow the user to experiment with ad hoc rules and procedures for the extraction, description, and identification of the features of interest; and (3) facilitate the derivation, application, and dissemination of expert knowledge for target recognition whose scope of application is not necessarily limited to the image(s) from which it was derived.

  2. Application of three-dimensional computerised tomography reconstruction and image processing technology in individual operation design of developmental dysplasia of the hip patients.

    PubMed

    Xuyi, Wang; Jianping, Peng; Junfeng, Zhu; Chao, Shen; Yimin, Cui; Xiaodong, Chen

    2016-02-01

    significantly (p < 0.01). There were no statistically significant differences in LCEA, ACEA, and AAVA between hips after virtual Bernese PAO and normal hips (p = 0.06, p = 0.23, and p = 0.06, respectively). AASA improved significantly (p = 0.002) post-operatively, at the cost of reducing the posterior coverage represented by PASA, which was significantly smaller than in normal hips and in the pre-operative hips of DDH patients (p < 0.01). The geometric features of the pelvis in patients with DDH can be assessed comprehensively by using 3D-CT reconstruction and image processing technology. Based on this method, surgeons can design individualised treatment schemes and improve the effect of PAO.

  3. Optimization of super-resolution processing using incomplete image sets in PET imaging.

    PubMed

    Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R

    2008-12-01

    Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N x N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve similar performance in contrast and signal-to-noise ratio (SNR) as the SR image generated from a complete set of low-resolution images (CSR) using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purpose, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. Line profiles of

  4. Imaging Implicit Morphological Processing: Evidence from Hebrew

    PubMed Central

    Bick, Atira S; Frost, Ram; Goelman, Gadi

    2013-01-01

    Is morphology a discrete and independent element of lexical structure or does it simply reflect a fine-tuning of the system to the statistical correlation that exists among orthographic and semantic properties of words? Hebrew provides a unique opportunity to examine morphological processing in the brain because of its rich morphological system. In an fMRI masked priming experiment we investigated the neural networks involved in implicit morphological processing in Hebrew. In the lMFG and lIFG, activation was found to be significantly reduced when the primes were morphologically related to the targets. This effect was not influenced by the semantic transparency of the morphological prime, and was not found in the semantic or orthographic condition. Additional morphologically related decrease in activation was found in the lIPL although there, activation was significantly modulated by semantic transparency. Our findings regarding implicit morphological processing suggest that morphology is an automatic and distinct aspect of visually processing words. These results also coincide with the behavioral data previously obtained demonstrating the central role of morphological processing in reading Hebrew. PMID:19803693

  5. Subband/Transform MATLAB Functions For Processing Images

    NASA Technical Reports Server (NTRS)

    Glover, D.

    1995-01-01

    SUBTRANS software is package of routines implementing image-data-processing functions for use with MATLAB*(TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems. For example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.

  6. Subband/Transform MATLAB Functions For Processing Images

    NASA Technical Reports Server (NTRS)

    Glover, D.

    1995-01-01

    SUBTRANS software is package of routines implementing image-data-processing functions for use with MATLAB*(TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems. For example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.

  7. Optical image processing by using a photorefractive spatial soliton waveguide

    NASA Astrophysics Data System (ADS)

    Liang, Bao-Lai; Wang, Ying; Zhang, Su-Heng; Guo, Qing-Lin; Wang, Shu-Fang; Fu, Guang-Sheng; Simmonds, Paul J.; Wang, Zhao-Qi

    2017-04-01

    By combining the photorefractive spatial soliton waveguide of a Ce:SBN crystal with a coherent 4-f system we are able to manipulate the spatial frequencies of an input optical image to perform edge-enhancement and direct component enhancement operations. Theoretical analysis of this optical image processor is presented to interpret the experimental observations. This work provides an approach for optical image processing by using photorefractive spatial solitons.

  8. Computer tomography imaging of fast plasmachemical processes

    SciTech Connect

    Denisova, N. V.; Katsnelson, S. S.; Pozdnyakov, G. A.

    2007-11-15

    Results are presented from experimental studies of the interaction of a high-enthalpy methane plasma bunch with gaseous methane in a plasmachemical reactor. The interaction of the plasma flow with the rest gas was visualized by using streak imaging and computer tomography. Tomography was applied for the first time to reconstruct the spatial structure and dynamics of the reagent zones in the microsecond range by the maximum entropy method. The reagent zones were identified from the emission of atomic hydrogen (the H{sub {alpha}} line) and molecular carbon (the Swan bands). The spatiotemporal behavior of the reagent zones was determined, and their relation to the shock-wave structure of the plasma flow was examined.

  9. Recent advances in imaging subcellular processes

    PubMed Central

    Myers, Kenneth A.; Janetopoulos, Christopher

    2016-01-01

    Cell biology came about with the ability to first visualize cells. As microscopy techniques advanced, the early microscopists became the first cell biologists to observe the inner workings and subcellular structures that control life. This ability to see organelles within a cell provided scientists with the first understanding of how cells function. The visualization of the dynamic architecture of subcellular structures now often drives questions as researchers seek to understand the intricacies of the cell. With the advent of fluorescent labeling techniques, better and new optical techniques, and more sensitive and faster cameras, a whole array of questions can now be asked. There has been an explosion of new light microscopic techniques, and the race is on to build better and more powerful imaging systems so that we can further our understanding of the spatial and temporal mechanisms controlling molecular cell biology. PMID:27408708

  10. Recent advances in imaging subcellular processes.

    PubMed

    Myers, Kenneth A; Janetopoulos, Christopher

    2016-01-01

    Cell biology came about with the ability to first visualize cells. As microscopy techniques advanced, the early microscopists became the first cell biologists to observe the inner workings and subcellular structures that control life. This ability to see organelles within a cell provided scientists with the first understanding of how cells function. The visualization of the dynamic architecture of subcellular structures now often drives questions as researchers seek to understand the intricacies of the cell. With the advent of fluorescent labeling techniques, better and new optical techniques, and more sensitive and faster cameras, a whole array of questions can now be asked. There has been an explosion of new light microscopic techniques, and the race is on to build better and more powerful imaging systems so that we can further our understanding of the spatial and temporal mechanisms controlling molecular cell biology.

  11. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    PubMed

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of images. Moreover, new methods are discussed which provide the source code in Matlab that can be used in practice without any licensing restrictions. The proposed application and sample result of hyperspectral image analysis. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Recent developments in neutron imaging with applications for porous media research

    NASA Astrophysics Data System (ADS)

    Kaestner, Anders P.; Trtik, Pavel; Zarebanadkouki, Mohsen; Kazantsev, Daniil; Snehota, Michal; Dobson, Katherine J.; Lehmann, Eberhard H.

    2016-09-01

    Computed tomography has become a routine method for probing processes in porous media, and the use of neutron imaging is especially suited to the study of the dynamics of hydrogenous fluids, and of fluids in a high-density matrix. In this paper we give an overview of recent developments in both instrumentation and methodology at the neutron imaging facilities NEUTRA and ICON at the Paul Scherrer Institut. Increased acquisition rates coupled to new reconstruction techniques improve the information output for fewer projection data, which leads to higher volume acquisition rates. Together, these developments yield significantly higher spatial and temporal resolutions, making it possible to capture finer details in the spatial distribution of the fluid, and to increase the acquisition rate of 3-D CT volumes. The ability to add a second imaging modality, e.g., X-ray tomography, further enhances the feature and process information that can be collected, and these features are ideal for dynamic experiments of fluid distribution in porous media. We demonstrate the performance for a selection of experiments carried out at our neutron imaging instruments.

  13. Predictive images of postoperative levator resection outcome using image processing software

    PubMed Central

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    Purpose This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller’s muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Conclusion Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. PMID:27757008

  14. Predictive images of postoperative levator resection outcome using image processing software.

    PubMed

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop(®)). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery.

  15. Application of image processing for terahertz time domain spectroscopy imaging quantitative detection

    NASA Astrophysics Data System (ADS)

    Li, Li-juan; Wang, Sheng; Ren, Jiao-jiao; Zhou, Ming-xing; Zhao, Duo

    2015-03-01

    According to nondestructive testing principle for the terahertz time domain spectroscopy Imaging, using digital image processing techniques, through Terahertz time-domain spectroscopy system collected images and two-dimensional datas and using a range of processing methods, including selecting regions of interest, contrast enhancement, edge detection, and defects being detected. In the paper, Matlab programming is been use to defect recognition of Terahertz, by figuring out the pixels to determine defects defect area and border length, roundness, diameter size. Through the experiment of the qualitative analysis and quantitative calculation of Matlab image processing, this method of detection of defects of geometric dimension of the sample to get a better result.

  16. Digital image processing and analysis for activated sludge wastewater treatment.

    PubMed

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    Activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally the activated sludge wastewater treatment is monitored by measuring physico-chemical parameters like total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD) etc. For the measurement, tests are conducted in the laboratory, which take many hours to give the final measurement. Digital image processing and analysis offers a better alternative not only to monitor and characterize the current state of activated sludge but also to predict the future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of floc and filaments with the physico-chemical parameters. This chapter briefly reviews the activated sludge wastewater treatment; and, procedures of image acquisition, preprocessing, segmentation and analysis in the specific context of activated sludge wastewater treatment. In the latter part additional procedures like z-stacking, image stitching are introduced for wastewater image preprocessing, which are not previously used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with the survey of imaging procedures reported in the literature. Finally the image analysis based morphological parameters and correlation of the parameters with regard to monitoring and prediction of activated sludge are discussed. Hence it is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.

  17. A new method of SC image processing for confluence estimation.

    PubMed

    Soleimani, Sajjad; Mirzaei, Mohsen; Toncu, Dana-Cristina

    2017-10-01

    Stem cells images are a strong instrument in the estimation of confluency during their culturing for therapeutic processes. Various laboratory conditions, such as lighting, cell container support and image acquisition equipment, effect on the image quality, subsequently on the estimation efficiency. This paper describes an efficient image processing method for cell pattern recognition and morphological analysis of images that were affected by uneven background. The proposed algorithm for enhancing the image is based on coupling a novel image denoising method through BM3D filter with an adaptive thresholding technique for improving the uneven background. This algorithm works well to provide a faster, easier, and more reliable method than manual measurement for the confluency assessment of stem cell cultures. The present scheme proves to be valid for the prediction of the confluency and growth of stem cells at early stages for tissue engineering in reparatory clinical surgery. The method used in this paper is capable of processing the image of the cells, which have already contained various defects due to either personnel mishandling or microscope limitations. Therefore, it provides proper information even out of the worst original images available. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    CT image reconstruction technique was applied to the data processing of the penumbral imaging. Compared with other traditional processing techniques for penumbral coded pinhole image such as Wiener, Lucy-Richardson and blind technique, this approach is brand new. In this method, the coded aperture processing method was used for the first time independent to the point spread function of the image diagnostic system. In this way, the technical obstacles was overcome in the traditional coded pinhole image processing caused by the uncertainty of point spread function of the image diagnostic system. Then based on the theoretical study, the simulation of penumbral imaging and image reconstruction was carried out to provide fairly good results. While in the visible light experiment, the point source of light was used to irradiate a 5mm×5mm object after diffuse scattering and volume scattering. The penumbral imaging was made with aperture size of ~20mm. Finally, the CT image reconstruction technique was used for image reconstruction to provide a fairly good reconstruction result.

  19. Qualitative optimization of image processing systems using random set modeling

    NASA Astrophysics Data System (ADS)

    Kelly, Patrick A.; Derin, Haluk; Vaidya, Priya G.

    2000-08-01

    Many decision-making systems involve image processing that converts input sensor data into output images having desirable features. Typically, the system user selects some processing parameters. The processor together with the input image can then be viewed as a system that maps the processing parameters into output features. However, the most significant output features often are not numerical quantities, but instead are subjective measures of image quality. It can be a difficult task for a user to find the processing parameters that give the 'best' output. We wish to automate this qualitative optimization task. The key to this is incorporation linguistic operating rules and qualitative output parameters in a numerical optimization scheme. In this paper, we use the test system of input parameter selection for 2D Wiener filtering to restore noisy and blurred images. Operating rules represented with random sets are used to generate a nominal input-output system model, which is then used to select initial Wiener filter input parameters. Whenthe nominally optimal Wiener filter is applied to an observed image, the operator's assessment of output image quality is used in an adaptive filtering algorithm to adjust the model and select new input parameters. Test on several images have confirmed that with a few such iterations, a significant improvement in output quality is achieved.

  20. Multivariate image analysis for process monitoring and control

    NASA Astrophysics Data System (ADS)

    MacGregor, John F.; Bharati, Manish H.; Yu, Honglu

    2001-02-01

    Information from on-line imaging sensors has great potential for the monitoring and control of quality in spatially distributed systems. The major difficulty lies in the efficient extraction of information from the images, information such as the frequencies of occurrence of specific and often subtle features, and their locations in the product or process space. This paper presents an overview of multivariate image analysis methods based on Principal Component Analysis and Partial Least Squares for decomposing the highly correlated data present in multi-spectral images. The frequencies of occurrence of certain features in the image, regardless of their spatial locations, can be easily monitored in the space of the principal components. The spatial locations of these features can then be obtained by transposing highlighted pixels from the PC score space into the original image space. In this manner it is possible to easily detect and locate even very subtle features from on-line imaging sensors for the purpose of statistical process control or feedback control of spatial processes. The concepts and potential of the approach are illustrated using a sequence of LANDSAT satellite multispectral images, depicting a pass over a certain region of the earth's surface. Potential applications in industrial process monitoring using these methods will be discussed from a variety of areas such as pulp and paper sheet products, lumber and polymer films.