Science.gov

Sample records for 3d-ct imaging processing

  1. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    The number of emphysema patients is increasing due to aging and smoking. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe an algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identifies lung anatomy and extracts low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to thoracic 3-D CT images, including follow-up 3-D CT images, we demonstrate its potential to assist radiologists and physicians in quantitatively evaluating the distribution of emphysematous lesions and their evolution over time.
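    As a rough illustration of the LAA extraction step (not the authors' implementation), low attenuation areas can be obtained by thresholding CT values inside a lung mask; the -950 HU cutoff below is an assumed, commonly used emphysema threshold:

```python
# Illustrative sketch: extract low attenuation area (LAA) candidates by
# thresholding CT values (in Hounsfield units) within a precomputed lung mask.
# The -950 HU threshold is an assumption, not taken from the paper.

def extract_laa(slice_hu, lung_mask, threshold=-950):
    """Return a binary mask of LAA candidate voxels inside the lung."""
    laa = []
    for row_hu, row_mask in zip(slice_hu, lung_mask):
        laa.append([1 if inside and hu < threshold else 0
                    for hu, inside in zip(row_hu, row_mask)])
    return laa

# Toy 2x3 "slice": one emphysematous voxel (-980 HU) inside the lung.
slice_hu  = [[-980, -850, -700], [-400, -900, -300]]
lung_mask = [[1, 1, 1], [0, 1, 0]]
print(extract_laa(slice_hu, lung_mask))  # [[1, 0, 0], [0, 0, 0]]
```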

  2. Computation of tooth axes of existent and missing teeth from 3D CT images.

    PubMed

    Wang, Yang; Wu, Lin; Guo, Huayan; Qiu, Tiantian; Huang, Yuanliang; Lin, Bin; Wang, Lisheng

    2015-12-01

    Orientations of tooth axes are important quantitative information used in dental diagnosis and surgery planning. However, their computation is a complex problem, and existing methods have their respective limitations. This paper proposes new methods to compute 3D tooth axes from 3D CT images for existent teeth with single or multiple roots, and to estimate 3D tooth axes from 3D CT images for missing teeth. The axis of a single-rooted tooth is determined by segmenting the pulp cavity of the tooth and computing its principal direction, while the estimation of the axes of missing teeth is modeled as an interpolation problem of quaternions along a 3D curve. The proposed methods either avoid the difficult tooth segmentation problem or overcome the limitations of existing methods. Their effectiveness and practicality are demonstrated by experimental results on different clinical 3D CT images.
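    The quaternion interpolation the authors describe can be sketched with spherical linear interpolation (slerp) between unit quaternions; this is an assumed building block, not their exact interpolation scheme along the dental arch:

```python
import math

# Hedged sketch: spherical linear interpolation (slerp) between two unit
# quaternions (w, x, y, z). Interpolating tooth-axis orientations along a
# curve could be built from repeated slerp steps; this is an assumption.

def slerp(q0, q1, t):
    """Interpolate between unit quaternions q0 and q1 at parameter t."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0:                       # take the shorter arc
        q1, dot = [-c for c in q1], -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)
    if theta < 1e-9:                  # nearly identical quaternions
        return q0
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(q0, q1)]

# Halfway between identity and a 90-degree rotation about z -> 45 degrees.
q_id = [1.0, 0.0, 0.0, 0.0]
q_90 = [math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4)]
print(slerp(q_id, q_90, 0.5))
```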

  3. Method for extracting the aorta from 3D CT images

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2007-03-01

    Bronchoscopic biopsy of the central-chest lymph nodes is vital in the staging of lung cancer. Three-dimensional multi-detector CT (MDCT) images provide vivid anatomical detail for planning bronchoscopy. Unfortunately, many lymph nodes are situated close to the aorta, and an inadvertent needle biopsy could puncture the aorta, causing serious harm. As an eventual aid for more complete planning of lymph-node biopsy, it is important to define the aorta. This paper proposes a method for extracting the aorta from a 3D MDCT chest image. The method has two main phases: (1) Off-Line Model Construction, which provides a set of training cases for fitting new images, and (2) On-Line Aorta Construction, which is used for new incoming 3D MDCT images. Off-Line Model Construction is done once using several representative human MDCT images and consists of the following steps: construct a likelihood image, select control points of the medial axis of the aortic arch, and recompute the control points to obtain a constant-interval medial-axis model. On-Line Aorta Construction consists of the following operations: construct a likelihood image, perform global fitting of the precomputed models to the current case's likelihood image to find the best-fitting model, perform local fitting to adjust the medial axis to local data variations, and employ a region recovery method to arrive at the complete constructed 3D aorta. The region recovery method consists of a model-based step and a region-growing step; the region-growing step can recover regions outside the model coverage as well as non-circular tube structures. In our experiments, we used three models and achieved satisfactory results on twelve of thirteen test cases.

  4. Geometry-based vs. intensity-based medical image registration: A comparative study on 3D CT data.

    PubMed

    Savva, Antonis D; Economopoulos, Theodore L; Matsopoulos, George K

    2016-02-01

    Spatial alignment of Computed Tomography (CT) data sets is often required in numerous medical applications and it is usually achieved by applying conventional exhaustive registration techniques, which are mainly based on the intensity of the subject data sets. Those techniques consider the full range of data points composing the data, thus negatively affecting the required processing time. Alternatively, alignment can be performed using the correspondence of extracted data points from both sets. Moreover, various geometrical characteristics of those data points can be used, instead of their chromatic properties, for uniquely characterizing each point, by forming a specific geometrical descriptor. This paper presents a comparative study reviewing variations of geometry-based, descriptor-oriented registration techniques, as well as conventional, exhaustive, intensity-based methods for aligning three-dimensional (3D) CT data pairs. In this context, three general image registration frameworks were examined: a geometry-based methodology featuring three distinct geometrical descriptors, an intensity-based methodology using three different similarity metrics, as well as the commonly used Iterative Closest Point algorithm. All techniques were applied on a total of thirty 3D CT data pairs with both known and unknown initial spatial differences. After an extensive qualitative and quantitative assessment, it was concluded that the proposed geometry-based registration framework performed similarly to the examined exhaustive registration techniques. In addition, geometry-based methods dramatically improved processing time over conventional exhaustive registration.
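    As an illustration of an intensity-based similarity metric of the kind compared in this study (the abstract does not name the paper's three metrics, so this is an assumed representative), normalized cross-correlation can be sketched as:

```python
import math

# Hedged sketch: normalized cross-correlation (NCC), a common similarity
# metric for intensity-based registration. Flattened intensity lists stand
# in for image volumes; constant images (zero variance) are not handled.

def ncc(a, b):
    """Normalized cross-correlation of two equally sized intensity lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

img = [10.0, 20.0, 30.0, 40.0]
print(ncc(img, [2 * x + 5 for x in img]))  # 1.0: linear intensity match
```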

  5. Segmentation of bone structures in 3D CT images based on continuous max-flow optimization

    NASA Astrophysics Data System (ADS)

    Pérez-Carrasco, J. A.; Acha-Piñero, B.; Serrano, C.

    2015-03-01

    In this paper, an algorithm for the automatic segmentation of bone structures in 3D CT images is presented. Automatic segmentation of bone structures is of special interest for radiologists and surgeons when analyzing bone diseases or planning surgical interventions. The task is complicated because bones usually present intensities overlapping with those of surrounding tissues. This overlapping is mainly due to the composition of bones and to the presence of diseases such as osteoarthritis and osteoporosis. Moreover, segmentation of bone structures is very time-consuming due to the 3D nature of the bones. Usually, this segmentation is performed manually or with algorithms based on simple techniques such as thresholding, which provide poor results. In this paper, gray-level information and 3D statistical information are combined as input to a continuous max-flow algorithm. Twenty CT images were tested, and several coefficients were computed to assess the performance of our implementation: Dice and sensitivity values above 0.91 and 0.97, respectively, were obtained. A comparison with level-set and thresholding techniques showed that our results outperformed them in terms of accuracy.
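    The Dice and sensitivity figures quoted above are standard overlap measures; a minimal sketch of their computation on flattened binary masks:

```python
# Sketch of the Dice coefficient and sensitivity used to evaluate the
# segmentation: Dice = 2*TP / (2*TP + FP + FN), sensitivity = TP / (TP + FN).

def dice_and_sensitivity(pred, truth):
    """Overlap measures for two equal-length binary masks (flat lists)."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    return 2 * tp / (2 * tp + fp + fn), tp / (tp + fn)

pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0]
print(dice_and_sensitivity(pred, truth))  # both 2/3 for this toy example
```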

  6. Clinical applications of 2D and 3D CT imaging of the airways--a review.

    PubMed

    Salvolini, L; Bichi Secchi, E; Costarelli, L; De Nicola, M

    2000-04-01

    Hardware and software evolution has broadened the possibilities of 2D and 3D reformatting of spiral CT and MR data sets. In the study of the thorax, the intrinsic benefits of volumetric CT scanning and the better quality of reconstructed images make it possible to apply additional rendering techniques in everyday clinical practice. Considering the large number and redundancy of post-processing imaging techniques that can be applied to raw CT section data, it is necessary to precisely define the clinical applications of each of them, by careful evaluation of their benefits and possible pitfalls in each clinical setting. In the diagnostic evaluation of pathological processes affecting the airways, a huge number of thin sections is necessary for detailed appraisal and has to be evaluated, and the information must then be transferred to referring clinicians. Additional rendering can make image evaluation and data transfer easier, faster, and more effective. In the study of the central airways, additional rendering can be of interest for precise evaluation of the length, morphology, and degree of stenoses. It may help in depicting exactly the locoregional extent of central tumours by better display of their relations with bronchovascular interfaces and can increase CT/bronchoscopy synergy. It may allow closer radiotherapy planning and better depiction of air collections and, finally, could ease panoramic evaluation of the results of dynamic or functional studies made possible by the increased speed of spiral scanning.
When applied to the evaluation of the peripheral airways, as a complement to conventional HRCT scans, high-resolution volumetric CT, with projection slabs applied to target areas of interest, can better depict the profusion and extent of affected bronchial segments in bronchiectasis, influence the choice among different approaches for tissue sampling through better evaluation of the relations of lung nodules with the airways, or help

  7. Dynamic 2D ultrasound and 3D CT image registration of the beating heart.

    PubMed

    Huang, Xishi; Moore, John; Guiraudon, Gerard; Jones, Douglas L; Bainbridge, Daniel; Ren, Jing; Peters, Terry M

    2009-08-01

    Two-dimensional ultrasound (US) is widely used in minimally invasive cardiac procedures due to its convenience of use and noninvasive nature. However, the low quality of US images often limits their utility as a means for guiding procedures, since it is often difficult to relate the images to their anatomical context. To improve the interpretability of the US images while maintaining US as a flexible anatomical and functional real-time imaging modality, we describe a multimodality image navigation system that integrates 2D US images with their 3D context by registering them to high-quality preoperative models based on magnetic resonance imaging (MRI) or computed tomography (CT) images. The mapping from such a model to the patient is completed using spatial and temporal registrations. Spatial registration is performed by a two-step rapid registration method that first approximately aligns the two images as a starting point for an automatic registration procedure. Temporal alignment is performed with the aid of electrocardiograph (ECG) signals and a latency compensation method. Registration accuracy is measured by calculating the target registration error (TRE). Results show that the error between the US and preoperative images of a beating heart phantom is 1.7 ± 0.4 mm, with similar performance observed in in vivo animal experiments.
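    The target registration error (TRE) reported above is, in its common form, the mean distance between corresponding landmarks after registration; the sketch below assumes that form (the paper's exact landmark protocol is not given in the abstract):

```python
import math

# Hedged sketch: TRE as the mean Euclidean distance between corresponding
# landmark pairs in the fixed image and the registered moving image.

def target_registration_error(landmarks_fixed, landmarks_registered):
    """Mean distance (e.g. in mm) between corresponding 3D landmark pairs."""
    dists = [math.dist(p, q)
             for p, q in zip(landmarks_fixed, landmarks_registered)]
    return sum(dists) / len(dists)

# Two landmarks, each off by 1 mm along z after registration.
fixed = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
moved = [(0.0, 0.0, 1.0), (10.0, 0.0, 1.0)]
print(target_registration_error(fixed, moved))  # 1.0
```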

  8. Computer-aided diagnosis for osteoporosis using chest 3D CT images

    NASA Astrophysics Data System (ADS)

    Yoneda, K.; Matsuhiro, M.; Suzuki, H.; Kawata, Y.; Niki, N.; Nakano, Y.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.

    2016-03-01

    Osteoporosis affects about 13 million people in Japan and is one of the problems of an aging society. To prevent osteoporosis, early detection and treatment are necessary. Multi-slice CT technology has improved three-dimensional (3-D) image analysis, with higher body-axis resolution and shorter scan times. 3-D image analysis using multi-slice CT images of the thoracic vertebrae can support the diagnosis of osteoporosis and, at the same time, can be used for lung cancer diagnosis, which may lead to early detection. We developed an automatic extraction and partitioning algorithm for the spinal column based on analysis of vertebral body structure, and an analysis algorithm for the vertebral body using shape analysis and bone density measurement for the diagnosis of osteoporosis. The osteoporosis diagnosis support system achieved a high extraction rate for the thoracic vertebrae at both normal and low doses.

  9. Parametric modelling and segmentation of vertebral bodies in 3D CT and MR spine images

    NASA Astrophysics Data System (ADS)

    Štern, Darko; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2011-12-01

    Accurate and objective evaluation of vertebral deformations is of significant importance in clinical diagnostics and therapy of pathological conditions affecting the spine. Although modern clinical practice is focused on three-dimensional (3D) computed tomography (CT) and magnetic resonance (MR) imaging techniques, the established methods for evaluation of vertebral deformations are limited to measuring deformations in two-dimensional (2D) x-ray images. In this paper, we propose a method for quantitative description of vertebral body deformations by efficient modelling and segmentation of vertebral bodies in 3D. The deformations are evaluated from the parameters of a 3D superquadric model, which is initialized as an elliptical cylinder and then gradually deformed by introducing transformations that yield a more detailed representation of the vertebral body shape. After modelling the vertebral body shape with 25 clinically meaningful parameters and the vertebral body pose with six rigid body parameters, the 3D model is aligned to the observed vertebral body in the 3D image. The performance of the method was evaluated on 75 vertebrae from CT and 75 vertebrae from T2-weighted MR spine images, extracted from the thoracolumbar part of normal and pathological spines. The results show that the proposed method can be used for 3D segmentation of vertebral bodies in CT and MR images, as the proposed 3D model is able to describe both normal and pathological vertebral body deformations. The method may therefore be used for initialization of whole vertebra segmentation or for quantitative measurement of vertebral body deformations.
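    The superquadric model described above has a standard inside-outside function; a minimal sketch with semi-axes a1-a3 and squareness exponents e1, e2 (the paper's 25 clinically meaningful shape parameters extend this basic form, which is an assumption here):

```python
# Hedged sketch of the superquadric inside-outside function used as the
# initial vertebral body model: < 1 inside, = 1 on the surface, > 1 outside.
# a1..a3 are semi-axes; e1, e2 control "squareness" (e1 = e2 = 1 is an
# ellipsoid; small e1 approaches the elliptical cylinder mentioned above).

def superquadric_f(x, y, z, a1, a2, a3, e1, e2):
    """Evaluate the superquadric implicit function at point (x, y, z)."""
    r = (abs(x / a1) ** (2 / e2) + abs(y / a2) ** (2 / e2)) ** (e2 / e1)
    return r + abs(z / a3) ** (2 / e1)

# A point on the surface of the unit sphere case (e1 = e2 = 1, unit axes):
print(superquadric_f(1.0, 0.0, 0.0, 1, 1, 1, 1, 1))  # 1.0
print(superquadric_f(0.5, 0.0, 0.0, 1, 1, 1, 1, 1))  # 0.25 (inside)
```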

  10. Algorithm of pulmonary emphysema extraction using thoracic 3-D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2008-03-01

    The number of emphysema patients tends to increase due to aging and smoking. Emphysema destroys the alveoli, which cannot be repaired, so early detection is essential. The CT value of lung tissue decreases as lung structure is destroyed, falling below that of normal lung; such low-density absorption regions are referred to as low attenuation areas (LAA). Conventionally, extraction of LAA by simple thresholding has been proposed. However, CT values fluctuate with measurement conditions and contain various bias components, such as inspiration, expiration, and congestion, which must be considered when extracting LAA. We propose an LAA extraction algorithm that removes these bias components. The algorithm was first applied to a phantom image. Then, using low-dose CT (normal: 30 cases; obstructive lung disease: 26 cases), we extracted early-stage LAA and quantitatively analyzed the lung lobes using lung structure.

  11. Geodesic Distance Algorithm for Extracting the Ascending Aorta from 3D CT Images.

    PubMed

    Jang, Yeonggul; Jung, Ho Yub; Hong, Youngtaek; Cho, Iksung; Shim, Hackjoon; Chang, Hyuk-Jae

    2016-01-01

    This paper presents a method for the automatic 3D segmentation of the ascending aorta from coronary computed tomography angiography (CCTA). The segmentation is performed in three steps. First, the initial seed points are selected by minimizing a newly proposed energy function across the Hough circles. Second, the ascending aorta is segmented by geodesic distance transformation. Third, the seed points are effectively transferred to the next axial slice by a novel transfer function. Experiments are performed using a database composed of 10 patients' CCTA images. For the experiment, the ground truths are annotated manually on the axial image slices by a medical expert. A comparative evaluation with state-of-the-art commercial aorta segmentation algorithms shows that our approach is computationally more efficient and more accurate in terms of the Dice similarity coefficient (DSC). PMID:26904151
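    A geodesic distance transform of the kind used in the second step can be sketched with Dijkstra's algorithm on a masked grid; this is an assumed simplification (the paper's intensity-weighted variant is not reproduced here):

```python
import heapq

# Hedged sketch: geodesic distance transform on a 2D grid, restricted to a
# binary mask, via Dijkstra with unit edge costs and 4-connectivity.
# Cells outside the mask are unreachable and keep distance infinity.

def geodesic_distance(mask, seeds):
    """Shortest in-mask path length from any seed to each cell."""
    rows, cols = len(mask), len(mask[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    heap = [(0, r, c) for r, c in seeds]
    for _, r, c in heap:
        dist[r][c] = 0
    heapq.heapify(heap)
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and mask[nr][nc]:
                if d + 1 < dist[nr][nc]:
                    dist[nr][nc] = d + 1
                    heapq.heappush(heap, (d + 1, nr, nc))
    return dist

mask = [[1, 1, 0],
        [0, 1, 1]]
print(geodesic_distance(mask, [(0, 0)]))  # masked-out cells stay inf
```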

  12. Combining population and patient-specific characteristics for prostate segmentation on 3D CT images

    NASA Astrophysics Data System (ADS)

    Ma, Ling; Guo, Rongrong; Tian, Zhiqiang; Venkataraman, Rajesh; Sarkar, Saradwata; Liu, Xiabi; Tade, Funmilayo; Schuster, David M.; Fei, Baowei

    2016-03-01

    Prostate segmentation on CT images is a challenging task. In this paper, we explore population and patient-specific characteristics for the segmentation of the prostate on CT images. Because population learning does not consider inter-patient variations and because patient-specific learning may not perform well across different patients, we combine population and patient-specific information to improve segmentation performance. Specifically, we train a population model based on the population data and a patient-specific model based on the manual segmentation of three slices of the new patient. We compute the similarity between the two models to estimate the influence of applicable population knowledge on the specific patient. By combining the patient-specific knowledge with this influence, we can capture both population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density values of the pixels in the distance transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients. Manual segmentation results from a radiologist served as the gold standard for the evaluation. Experimental results show that our method achieved an average DSC of 85.1% compared with the manual segmentation gold standard. This method outperformed both the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy.
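    The combination of population and patient-specific probabilities might take the form of a similarity-weighted blend; the convex combination below is an assumed illustrative form, not the paper's exact rule:

```python
# Hedged sketch (assumed form): blend a population probability map and a
# patient-specific probability map by a model-similarity weight w in [0, 1].
# w = 1 trusts the population model fully; w = 0 trusts only the patient model.

def combine_probability(p_population, p_patient, w):
    """Per-voxel convex combination of two flattened probability maps."""
    return [w * pp + (1 - w) * ps for pp, ps in zip(p_population, p_patient)]

p = combine_probability([0.8, 0.2], [0.6, 0.4], 0.5)
print(p)  # [0.7, 0.3] up to float rounding
```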

  13. Combining Population and Patient-Specific Characteristics for Prostate Segmentation on 3D CT Images

    PubMed Central

    Ma, Ling; Guo, Rongrong; Tian, Zhiqiang; Venkataraman, Rajesh; Sarkar, Saradwata; Liu, Xiabi; Tade, Funmilayo; Schuster, David M.; Fei, Baowei

    2016-01-01

    Prostate segmentation on CT images is a challenging task. In this paper, we explore population and patient-specific characteristics for the segmentation of the prostate on CT images. Because population learning does not consider inter-patient variations and because patient-specific learning may not perform well across different patients, we combine population and patient-specific information to improve segmentation performance. Specifically, we train a population model based on the population data and a patient-specific model based on the manual segmentation of three slices of the new patient. We compute the similarity between the two models to estimate the influence of applicable population knowledge on the specific patient. By combining the patient-specific knowledge with this influence, we can capture both population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density values of the pixels in the distance transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients. Manual segmentation results from a radiologist served as the gold standard for the evaluation. Experimental results show that our method achieved an average DSC of 85.1% compared with the manual segmentation gold standard. This method outperformed both the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy. PMID:27660382

  14. Automated torso organ segmentation from 3D CT images using conditional random field

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Misawa, Kazunari; Mori, Kensaku

    2016-03-01

    This paper presents a segmentation method for torso organs in medical images using a conditional random field (CRF). Many methods have been proposed to enable automated extraction of organ regions from volumetric medical images; however, their empirical parameters must be adjusted to obtain precise organ regions. In this paper, we propose an organ segmentation method using structured output learning based on a probabilistic graphical model. The proposed method utilizes a CRF on three-dimensional grids as the probabilistic graphical model, with binary features that represent the relationship between voxel intensities and organ labels. We also optimize the weight parameters of the CRF using a stochastic gradient descent algorithm and estimate organ labels for a given image by maximum a posteriori (MAP) estimation. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 6.6%. The Dice coefficients of the right lung, left lung, heart, liver, spleen, right kidney, and left kidney were 0.94, 0.92, 0.65, 0.67, 0.36, 0.38, and 0.37, respectively.

  15. Automated torso organ segmentation from 3D CT images using structured perceptron and dual decomposition

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Mori, Kensaku

    2015-03-01

    This paper presents a method for torso organ segmentation from abdominal CT images using a structured perceptron and dual decomposition. Many methods have been proposed to enable automated extraction of organ regions from volumetric medical images; however, their empirical parameters must be adjusted to obtain precise organ regions. This paper proposes an organ segmentation method using structured output learning. Our method utilizes a graphical model with binary features that represent the relationship between voxel intensities and organ labels. We also optimize the weights of the graphical model by a structured perceptron and estimate the best organ label for a given image by dynamic programming and dual decomposition. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 4.4%. The Dice coefficients of the left lung, right lung, heart, liver, spleen, pancreas, left kidney, right kidney, and gallbladder were 0.91, 0.95, 0.77, 0.81, 0.74, 0.08, 0.83, 0.84, and 0.03, respectively.
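    The structured perceptron weight update at the core of the learning step can be sketched in its standard simplified form (an assumption; the paper's feature definitions are not reproduced): when the predicted labeling is wrong, weights move toward the features of the true labeling and away from those of the prediction.

```python
# Hedged sketch of a structured perceptron update. feat_truth and feat_pred
# are the (summed) feature vectors of the gold labeling and the current
# best-scoring predicted labeling; lr is the learning rate.

def perceptron_update(weights, feat_truth, feat_pred, lr=1.0):
    """One structured perceptron step: w += lr * (phi(truth) - phi(pred))."""
    return [w + lr * (ft - fp)
            for w, ft, fp in zip(weights, feat_truth, feat_pred)]

w = [0.0, 0.0, 0.0]
w = perceptron_update(w, feat_truth=[1.0, 0.0, 2.0], feat_pred=[0.0, 1.0, 1.0])
print(w)  # [1.0, -1.0, 1.0]
```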

  16. Estimation of aortic valve leaflets from 3D CT images using local shape dictionaries and linear coding

    NASA Astrophysics Data System (ADS)

    Liang, Liang; Martin, Caitlin; Wang, Qian; Sun, Wei; Duncan, James

    2016-03-01

    Aortic valve (AV) disease is a significant cause of morbidity and mortality. The preferred treatment modality for severe AV disease is surgical resection and replacement of the native valve with either a mechanical or tissue prosthetic. In order to develop effective and long-lasting treatment methods, computational analyses, e.g., structural finite element (FE) and computational fluid dynamic simulations, are very effective for studying valve biomechanics. These computational analyses are based on mesh models of the aortic valve, which are usually constructed from 3D CT images through many hours of manual annotation; an automatic valve shape reconstruction method is therefore desired. In this paper, we present a method for estimating the aortic valve shape, represented by triangle meshes, from 3D cardiac CT images. We propose a pipeline for aortic valve shape estimation that includes novel algorithms for building local shape dictionaries and for building landmark detectors and curve detectors using those dictionaries. The method is evaluated on a real patient image dataset using a leave-one-out approach and achieves an average accuracy of 0.69 mm. The work will facilitate automatic patient-specific computational modeling of the aortic valve.

  17. 3D CT Imaging for Craniofacial Analysis Based on Anatomical Regions.

    PubMed

    Wan Harun, W A R; Ahmad Rajion, Zainul; Abdul Aziz, Izhar; Rani Samsudin, Abdul

    2005-01-01

    The development of a craniofacial database is a multidisciplinary initiative that will provide an important reference for community, security, social, and medical applications. A method of landmark identification and measurement in 3D on craniofacial patients is described. Anatomical regions such as the mandible, orbits, zygoma, and maxilla are located, created, and stored as templates in 3D CAD files for subsequent analysis. Data from these images were tested for accuracy and repeatability by comparison with direct measurements using a caliper and a coordinate measuring machine (CMM). The landmark points are reproducible in the CAD system for further analysis. It was found that the approach provides a fast, accurate, and efficient method for landmark identification of craniofacial areas in database development. PMID:17282309

  18. Estimation of vocal fold plane in 3D CT images for diagnosis of vocal fold abnormalities.

    PubMed

    Hewavitharanage, Sajini; Gubbi, Jayavardhana; Thyagarajan, Dominic; Lau, Ken; Palaniswami, Marimuthu

    2015-01-01

    Vocal folds are the key body structures responsible for phonation and for regulating air movement into and out of the lungs. Various vocal fold disorders may seriously impact quality of life. When diagnosing vocal fold disorders, CT of the neck is the commonly used imaging method. However, the vocal folds do not align with the normal axial plane of the neck, and the plane containing the vocal cords and arytenoids varies during phonation. It is therefore important to develop an algorithm for detecting the actual plane containing the vocal folds. In this paper, we propose a method to automatically estimate the vocal fold plane using vertebral column and anterior commissure localization. Gray-level thresholding, connected component analysis, rule-based segmentation, and unsupervised k-means clustering were used in the proposed algorithm. The anterior commissure segmentation method achieved an accuracy of 85%, in good agreement with expert assessment. PMID:26736949

  19. Three-dimensional electronic unpacking of packed bags using 3-D CT images

    NASA Astrophysics Data System (ADS)

    Song, Samuel M.; Crawford, Carl R.; Boyd, Douglas P.

    2009-02-01

    We present a 3-D electronic unpacking technique for airport security images based on volume rendering techniques developed for medical applications. Two electronic unpacking techniques are presented: (1) object-based unpacking and (2) unpacking by bag-slicing. Both techniques provide photo-realistic 3-D views of the contents of a packed bag with clearly marked threats. For object-based unpacking, 3-D objects within packed bags are unpacked (or isolated) through object-selection tools that cut away undesired regions to isolate the 3-D object from the background clutter. With this selection tool, the operator is able to electronically unpack various 3-D objects and manipulate (rotate and zoom) the 3-D photo-realistic views for immediate classification of the suspect object. The unpacking-by-bag-slicing technique places arbitrary cut planes to show the content beyond the cut plane, which can be stepped forward or backward electronically. The methods may be used to reduce the need for manual unpacking of suitcases.

  20. 3D stereophotogrammetric image superimposition onto 3D CT scan images: the future of orthognathic surgery. A pilot study.

    PubMed

    Khambay, Balvinder; Nebel, Jean-Christophe; Bowman, Janet; Walker, Fraser; Hadley, Donald M; Ayoub, Ashraf

    2002-01-01

    The aim of this study was to register and assess the accuracy of the superimposition method of a 3-dimensional (3D) soft tissue stereophotogrammetric image (C3D image) and a 3D image of the underlying skeletal tissue acquired by 3D spiral computerized tomography (CT). The study was conducted on a model head, in which an intact human skull was embedded with an overlying latex mask that reproduced anatomic features of a human face. Ten artificial radiopaque landmarks were secured to the surface of the latex mask. A stereophotogrammetric image of the mask and a 3D spiral CT image of the model head were captured. The C3D image and the CT images were registered for superimposition by 3 different methods: Procrustes superimposition using artificial landmarks, Procrustes analysis using anatomic landmarks, and partial Procrustes analysis using anatomic landmarks followed by registration completion with HICP (a modified Iterative Closest Point algorithm) using a specified region of both images. The results showed that Procrustes superimposition using the artificial landmarks produced a superimposition error on the order of 10 mm. Procrustes analysis using anatomic landmarks produced an error on the order of 2 mm. Partial Procrustes analysis using anatomic landmarks followed by HICP produced a superimposition accuracy of between 1.25 and 1.5 mm. It was concluded that a stereophotogrammetric image and a 3D spiral CT scan image can be superimposed with an accuracy of between 1.25 and 1.5 mm using partial Procrustes analysis based on anatomic landmarks followed by registration completion with HICP.
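    Procrustes superimposition solves for an optimal rigid alignment of corresponding landmarks. As a hedged 2D sketch (the 3D case used in the study requires an SVD), after centering both landmark sets the optimal rotation angle has the closed form below:

```python
import math

# Hedged 2D sketch of the Procrustes rotation step: given centered landmark
# sets src and dst, the rotation angle maximizing their alignment is
# atan2(sum(x*v - y*u), sum(x*u + y*v)) for src = (x, y), dst = (u, v).

def procrustes_rotation_2d(src, dst):
    """Optimal rotation angle (radians) aligning centered src onto dst."""
    num = sum(x * v - y * u for (x, y), (u, v) in zip(src, dst))
    den = sum(x * u + y * v for (x, y), (u, v) in zip(src, dst))
    return math.atan2(num, den)

# dst is src rotated by +90 degrees about the origin: (x, y) -> (-y, x).
src = [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)]   # centroid at the origin
dst = [(-y, x) for x, y in src]
print(math.degrees(procrustes_rotation_2d(src, dst)))  # 90.0
```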

  1. Analysis of the advantage of individual PTVs defined on axial 3D CT and 4D CT images for liver cancer.

    PubMed

    Li, Fengxiang; Li, Jianbin; Xing, Jun; Zhang, Yingjie; Fan, Tingyong; Xu, Min; Shang, Dongping; Liu, Tonghai; Song, Jinlong

    2012-11-08

    The purpose of this study was to compare positional and volumetric differences of planning target volumes (PTVs) defined on axial three-dimensional CT (3D CT) and four-dimensional CT (4D CT) for liver cancer. Fourteen patients with liver cancer underwent 3D CT and 4D CT simulation scans during free breathing. Tumor motion was measured by 4D CT. Three internal target volumes (ITVs) were produced based on the clinical target volume from 3D CT (CTV3D): i) a conventional ITV (ITVconv), produced by adding 10 mm in the cranial-caudal (CC) direction and 5 mm in the left-right (LR) and anterior-posterior (AP) directions to CTV3D; ii) a specific ITV (ITVspec), created using a specific margin in the transaxial direction; iii) ITVvector, produced by adding an isotropic margin derived from the individual tumor motion vector. ITV4D was defined on the fusion of the CTVs from all phases of 4D CT. PTVs were generated by adding a 5 mm setup margin to the ITVs. The average centroid shifts between the PTVs derived from 3D CT and PTV4D in the LR, AP, and CC directions were close to zero. Comparing PTV4D to PTVconv, PTVspec, and PTVvector showed a decrease in volume size of 33.18% ± 12.39%, 24.95% ± 13.01%, and 48.08% ± 15.32%, respectively. The mean degrees of inclusion (DI) of PTV4D in PTVconv, PTVspec, and PTVvector were 0.98, 0.97, and 0.99, showing no significant correlation with the tumor motion vector (r = -0.470, 0.259, and 0.244; p = 0.090, 0.371, and 0.401). The mean DIs of PTVconv, PTVspec, and PTVvector in PTV4D were 0.66, 0.73, and 0.52. The size of the individual PTV from 4D CT is significantly smaller than that of the PTVs from 3D CT. The positions of targets derived from axial 3D CT images scatter randomly around the center of the 4D targets. Compared to the conventional PTV, the use of 3D CT-based PTVs with individual margins cannot significantly reduce normal tissue being unnecessarily irradiated, but may contribute to reducing the risk of missing targets for
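    The degree of inclusion (DI) used above is the fraction of one volume's voxels that also lie inside another; a minimal sketch on flattened binary masks:

```python
# Sketch of the degree of inclusion: DI of A in B = |A intersect B| / |A|.
# Flattened binary masks stand in for the 3D target volumes.

def degree_of_inclusion(mask_a, mask_b):
    """Fraction of A's voxels that are also inside B."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    return inter / sum(mask_a)

ptv_4d   = [1, 1, 1, 0, 0]
ptv_conv = [1, 1, 1, 1, 1]
print(degree_of_inclusion(ptv_4d, ptv_conv))   # 1.0: PTV4D fully inside
print(degree_of_inclusion(ptv_conv, ptv_4d))   # 0.6
```

Note the asymmetry: DI(PTV4D in PTVconv) near 1 with DI(PTVconv in PTV4D) well below 1 is exactly the pattern the study reports, indicating that the conventional PTV covers the 4D target but also includes substantial extra volume.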

  2. Development of a 3D CT scanner using cone beam

    NASA Astrophysics Data System (ADS)

    Endo, Masahiro; Kamagata, Nozomu; Sato, Kazumasa; Hattori, Yuichi; Kobayashi, Shigeo; Mizuno, Shinichi; Jimbo, Masao; Kusakabe, Masahiro

    1995-05-01

    In order to acquire 3D data of high-contrast objects such as bone, lung, and vessels enhanced by contrast media for use in 3D image processing, we have developed a 3D CT scanner using a cone-beam x ray. The 3D CT scanner consists of a gantry and a patient couch. The gantry consists of an x-ray tube designed for cone-beam CT and a large-area two-dimensional detector, mounted on a single frame that rotates around the object in 12 seconds. The large-area detector consists of a fluorescent plate and a charge-coupled-device video camera. The detection area is 600 mm X 450 mm, large enough to cover the entire chest. While the x-ray tube rotated around the object, pulsed x rays were emitted 30 times per second, and 360 projection images were collected in a 12-second scan. A 256 X 256 X 256 matrix image (1.25 mm X 1.25 mm X 1.25 mm voxels) was reconstructed by a high-speed reconstruction engine; reconstruction time was approximately 6 minutes. Cylindrical water phantoms, anesthetized rabbits with or without contrast media, and a Japanese macaque were scanned with the 3D CT scanner. The results appear promising, showing high spatial resolution in all three directions, though several points remain to be improved. Possible improvements are discussed.

  3. A novel method of removing artifacts because of metallic dental restorations in 3-D CT images of jaw bone.

    PubMed

    Sohmura, Taiji; Hojoh, Hirokazu; Kusumoto, Naoki; Nishida, Masahiko; Wakabayashi, Kazumichi; Takahashi, Junzo

    2005-12-01

    CT images, especially in a three-dimensional (3-D) mode, provide valuable information for oral implant surgery. However, image quality is often severely compromised by artifacts originating from metallic dental restorations, and an effective solution is being sought. This study substitutes the damaged areas of the jaw bone images with dental cast model images obtained by CT. The position of the dental cast images was registered to that of the jaw bone images using a purpose-built interface composed of an occlusal bite made of self-curing acrylic resin and a marker plate made of gypsum. The patient wore this interface while CT images of the stomatognathic system were acquired; the same interface was then placed between the upper and lower cast models and scanned by CT together with the casts. The position of the marker plate imaged with the dental casts was registered to that of the plate worn by the patient. The registration error was measured to be 0.25 mm, which is satisfactory for clinical application. The region of the cranial bone images damaged by artifacts, an obstacle for implant surgery, was removed and substituted with the trimmed images of the dental cast. With the method developed here, the images around the metallic restorations that were severely damaged by artifacts were successfully reconstructed, and the resulting clear images of the stomatognathic system are useful for implant surgery.
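
    Registering the marker plate imaged with the casts to the plate worn by the patient is a point-based rigid registration problem. The abstract does not give the authors' algorithm; a standard least-squares (Kabsch/SVD) sketch would be:

    ```python
    import numpy as np

    def rigid_register(src, dst):
        """Least-squares rigid transform (R, t) with dst ~ R @ src + t (Kabsch/SVD)."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)                          # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard reflections
        R = Vt.T @ D @ U.T
        t = dst_c - R @ src_c
        return R, t

    # Recover a known pose from synthetic marker coordinates (30 deg about z)
    theta = np.deg2rad(30.0)
    R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    t0 = np.array([1.0, -2.0, 0.5])
    src = np.random.default_rng(0).random((6, 3))
    R, t = rigid_register(src, src @ R0.T + t0)
    ```

    With noise-free synthetic markers the known rotation and translation are recovered to machine precision; with real marker localizations the residual corresponds to the registration error quoted above.
    
    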

  4. Efficient and robust 3D CT image reconstruction based on total generalized variation regularization using the alternating direction method.

    PubMed

    Chen, Jianlin; Wang, Linyuan; Yan, Bin; Zhang, Hanming; Cheng, Genyang

    2015-01-01

    Iterative reconstruction algorithms for computed tomography (CT) based on total variation regularization, which assumes piecewise-constant images, can produce accurate, robust, and stable results. Nonetheless, this approach is often subject to staircase artefacts and the loss of fine details. To overcome these shortcomings, we introduce a family of image regularization penalties called total generalized variation (TGV) for the effective production of high-quality 3D reconstructions from incomplete or noisy projection data. We propose a new, fast alternating direction minimization algorithm to solve CT image reconstruction problems with TGV regularization. Based on the theory of sparse-view image reconstruction and the framework of the augmented Lagrangian method, the TGV regularization term is introduced into the CT reconstruction problem, which is then split into three independent subproblems by introducing auxiliary variables. The new algorithm applies a local linearization and proximity technique to make FFT-based calculation of the analytical solutions in the frequency domain feasible, thereby significantly reducing the complexity of the algorithm. Experiments with various 3D datasets corresponding to incomplete projection data demonstrate the advantage of the proposed algorithm in preserving fine details and overcoming the staircase effect. The computation cost also suggests that the proposed algorithm is applicable to and effective for CBCT imaging. Both the computational efficiency and the achievable resolution of the algorithm warrant careful investigation in application-oriented research.
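
    The full TGV splitting is involved, but the flavor of the alternating-direction scheme can be seen in a much-simplified first-order (plain TV) analogue in 1D, solving min_x 0.5||x - y||^2 + lam*||Dx||_1 by ADMM. This is a sketch of the general technique, not the authors' algorithm:

    ```python
    import numpy as np

    def tv_denoise_admm(y, lam=0.1, rho=1.0, n_iter=200):
        """1D TV denoising by ADMM: split z = Dx, alternate x, z, and dual updates."""
        n = len(y)
        D = np.diff(np.eye(n), axis=0)          # (n-1) x n finite-difference matrix
        z, u = np.zeros(n - 1), np.zeros(n - 1)
        A = np.eye(n) + rho * D.T @ D           # x-update normal matrix (fixed)
        for _ in range(n_iter):
            x = np.linalg.solve(A, y + rho * D.T @ (z - u))          # x-subproblem
            w = D @ x + u
            z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft threshold
            u += D @ x - z                                           # dual ascent
        return x

    # Noisy piecewise-constant signal: TV recovers the flat pieces
    rng = np.random.default_rng(0)
    clean = np.concatenate([np.zeros(50), np.ones(50)])
    noisy = clean + 0.1 * rng.standard_normal(100)
    denoised = tv_denoise_admm(noisy, lam=0.1)
    ```

    In the paper's setting the dense solve is replaced by FFT-based inversion in the frequency domain, and the single TV penalty by the two-term TGV penalty.
    
    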

  5. New 3D Bolton standards: coregistration of biplane x rays and 3D CT

    NASA Astrophysics Data System (ADS)

    Dean, David; Subramanyan, Krishna; Kim, Eun-Kyung

    1997-04-01

    The Bolton Standards 'normative' cohort (16 males, 16 females) has been invited back to the Bolton-Brush Growth Study Center for new biorthogonal plain-film head x rays and 3D (three-dimensional) head CT scans. A set of 29 3D landmarks was identified on both the biplane head films and the 3D CT images, and the current 3D CT image is superimposed onto the landmarks collected from the current biplane head films. Three post-doctoral fellows have collected 37 3D landmarks from the Bolton Standards' 40-70-year-old biplane head films, which were captured annually during the participants' growing period (ages 3-18). Using 29 of these landmarks, the current 3D CT image is then warped (via thin-plate spline) to the landmarks taken from each participant's 18th-year biplane head films, a process that is successively reiterated back to age 3. This process is demonstrated here for one of the Bolton Standards. The outer skull surfaces will be extracted from each warped 3D CT image and an average will be generated for each age/sex group. The resulting longitudinal series of average 'normative' bony skull surface images may be useful for craniofacial patient diagnosis, treatment planning, stereotactic procedures, and outcomes assessment.
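
    The thin-plate-spline warp used above interpolates a smooth deformation that exactly maps one landmark set onto another. A minimal 2D sketch (the study warps 3D volumes; the 2D case shows the same machinery):

    ```python
    import numpy as np

    def tps_fit(src, dst):
        """Fit 2D thin-plate-spline coefficients mapping src landmarks onto dst."""
        n = len(src)
        d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        K = 0.5 * d2 * np.log(np.where(d2 > 0, d2, 1.0))   # U(r) = r^2 log r
        P = np.hstack([np.ones((n, 1)), src])              # affine part
        L = np.zeros((n + 3, n + 3))
        L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
        rhs = np.zeros((n + 3, 2)); rhs[:n] = dst
        return np.linalg.solve(L, rhs)                     # (n+3) x 2 coefficients

    def tps_apply(coef, src, pts):
        """Evaluate the fitted TPS at query points."""
        d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        U = 0.5 * d2 * np.log(np.where(d2 > 0, d2, 1.0))
        P = np.hstack([np.ones((len(pts), 1)), pts])
        return U @ coef[:len(src)] + P @ coef[len(src):]

    # Landmarks and a smoothly perturbed target configuration
    src = np.random.default_rng(1).random((8, 2))
    dst = src + 0.1 * np.sin(2 * np.pi * src)
    coef = tps_fit(src, dst)
    mapped = tps_apply(coef, src, src)   # reproduces dst exactly at the landmarks
    ```

    Because the TPS is an exact interpolant, each warped CT lands precisely on the film-derived landmarks while deforming smoothly in between.
    
    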

  6. The European Society of Therapeutic Radiology and Oncology-European Institute of Radiotherapy (ESTRO-EIR) report on 3D CT-based in-room image guidance systems: a practical and technical review and guide.

    PubMed

    Korreman, Stine; Rasch, Coen; McNair, Helen; Verellen, Dirk; Oelfke, Uwe; Maingon, Philippe; Mijnheer, Ben; Khoo, Vincent

    2010-02-01

    The past decade has provided many technological advances in radiotherapy. The European Institute of Radiotherapy (EIR) was established by the European Society of Therapeutic Radiology and Oncology (ESTRO) to provide current consensus statements with evidence-based and pragmatic guidelines on topics of practical relevance for radiation oncology. This report focuses primarily on 3D CT-based in-room image guidance (3DCT-IGRT) systems. It provides an overview and the current standing of 3DCT-IGRT systems, addressing the rationale, objectives, principles, applications, and process pathways, both clinical and technical, for treatment delivery and quality assurance. These are reviewed for four categories of solutions: kV CT and kV CBCT (cone-beam CT), as well as MV CT and MV CBCT. The report also provides a framework and checklist for considering the capability and functionality of these systems, as well as the resources needed for implementation. Two different but typical clinical cases (tonsillar and prostate cancer) using 3DCT-IGRT are illustrated with workflow processes via feedback questionnaires from several large clinical centres currently utilizing these systems. The feedback from these centres demonstrates a wide variability based on local practices. This report, whilst comprehensive, is not exhaustive, as this area remains a very active field for research and development. However, it should serve as a practical guide and framework for all professional groups within the field, focused on clinicians, physicists and radiation therapy technologists interested in IGRT.

  7. Integrated bronchoscopic video tracking and 3D CT registration for virtual bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Helferty, James P.; Padfield, Dirk R.

    2003-05-01

    Lung cancer assessment involves an initial evaluation of 3D CT image data followed by interventional bronchoscopy. The physician, with only a mental image inferred from the 3D CT data, must guide the bronchoscope through the bronchial tree to sites of interest. Unfortunately, this procedure depends heavily on the physician's ability to mentally reconstruct the 3D position of the bronchoscope within the airways. To assist physicians in performing biopsies at sites of interest, we have developed a method that integrates live bronchoscopic video tracking and 3D CT registration. The proposed method is integrated into a system we have been devising for virtual-bronchoscopic analysis and guidance in lung-cancer assessment. Previously, the system relied on a method that only used registration of the live bronchoscopic video to corresponding virtual endoluminal views derived from the 3D CT data. That procedure performs the registration only at manually selected sites, does not draw upon the motion information inherent in the bronchoscopic video, and is slow. The proposed method has the following advantages: (1) it tracks the 3D motion of the bronchoscope using the bronchoscopic video; (2) it uses the tracked 3D trajectory of the bronchoscope to assist in locating sites in the 3D CT "virtual world" at which to perform the registration. In addition, the method incorporates techniques to: (1) detect and exclude corrupted video frames (to help make the video tracking more robust); (2) accelerate the computation of the many 3D virtual endoluminal renderings (thus speeding up the registration process). We have tested the integrated tracking-registration method on a human airway-tree phantom and on real human data.

  8. Registration of 2D C-Arm and 3D CT Images for a C-Arm Image-Assisted Navigation System for Spinal Surgery

    PubMed Central

    Chang, Chih-Ju; Lin, Geng-Li; Tse, Alex; Chu, Hong-Yu; Tseng, Ching-Shiow

    2015-01-01

    C-Arm image-assisted surgical navigation systems have been broadly applied to spinal surgery. However, accurate path planning on the C-Arm AP-view image is difficult. This research studies 2D-3D image registration methods to obtain the optimal transformation matrix between the C-Arm and CT image frames. Through the transformation matrix, the surgical path planned on preoperative CT images can be transformed and displayed on the C-Arm images for surgical guidance. The positions of surgical instruments are also displayed on both the CT and C-Arm images in real time. Five similarity measures for 2D-3D image registration (Normalized Cross-Correlation, Gradient Correlation, Pattern Intensity, Gradient Difference Correlation, and Mutual Information), combined with three optimization methods (Powell's method, the Downhill simplex algorithm, and a genetic algorithm), are evaluated for their performance in convergence range, efficiency, and accuracy. Experimental results show that the combination of the Normalized Cross-Correlation measure with the Downhill simplex algorithm obtains the maximum correlation and similarity between the C-Arm and Digitally Reconstructed Radiograph (DRR) images. Spine sawbones are used in the experiment to evaluate the 2D-3D image registration accuracy. The average displacement error is 0.22 mm, the success rate is approximately 90%, and the average registration time is 16 seconds. PMID:27018859
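
    The winning similarity measure, Normalized Cross-Correlation, is simple to implement. A sketch restricted to integer in-plane translations, with an exhaustive search standing in for the Downhill simplex optimization over the full transform:

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation between two equal-size images."""
        a = a - a.mean(); b = b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

    def best_shift(fixed, moving, max_shift=5):
        """Exhaustive integer-translation search maximizing NCC (circular shifts)."""
        best_score, best_dydx = -2.0, (0, 0)
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                s = ncc(fixed, np.roll(moving, (dy, dx), axis=(0, 1)))
                if s > best_score:
                    best_score, best_dydx = s, (dy, dx)
        return best_score, best_dydx

    # Synthetic "C-Arm" image vs a DRR with a known misalignment of (2, 3) pixels
    rng = np.random.default_rng(0)
    fixed = rng.random((32, 32))
    moving = np.roll(fixed, (2, 3), axis=(0, 1))
    score, shift = best_shift(fixed, moving)
    ```

    The search recovers the shift (-2, -3) that undoes the misalignment, with NCC approaching 1. In the paper the DRR is regenerated per pose and a simplex search explores all six rigid parameters.
    
    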

  9. Registration of 2D C-Arm and 3D CT Images for a C-Arm Image-Assisted Navigation System for Spinal Surgery.

    PubMed

    Chang, Chih-Ju; Lin, Geng-Li; Tse, Alex; Chu, Hong-Yu; Tseng, Ching-Shiow

    2015-01-01

    C-Arm image-assisted surgical navigation systems have been broadly applied to spinal surgery. However, accurate path planning on the C-Arm AP-view image is difficult. This research studies 2D-3D image registration methods to obtain the optimal transformation matrix between the C-Arm and CT image frames. Through the transformation matrix, the surgical path planned on preoperative CT images can be transformed and displayed on the C-Arm images for surgical guidance. The positions of surgical instruments are also displayed on both the CT and C-Arm images in real time. Five similarity measures for 2D-3D image registration (Normalized Cross-Correlation, Gradient Correlation, Pattern Intensity, Gradient Difference Correlation, and Mutual Information), combined with three optimization methods (Powell's method, the Downhill simplex algorithm, and a genetic algorithm), are evaluated for their performance in convergence range, efficiency, and accuracy. Experimental results show that the combination of the Normalized Cross-Correlation measure with the Downhill simplex algorithm obtains the maximum correlation and similarity between the C-Arm and Digitally Reconstructed Radiograph (DRR) images. Spine sawbones are used in the experiment to evaluate the 2D-3D image registration accuracy. The average displacement error is 0.22 mm, the success rate is approximately 90%, and the average registration time is 16 seconds.

  11. Method and phantom to study combined effects of in-plane (x,y) and z-axis resolution for 3D CT imaging.

    PubMed

    Goodenough, David; Levy, Josh; Kristinsson, Smari; Fredriksson, Jesper; Olafsdottir, Hildur; Healy, Austin

    2016-09-08

    Increasingly, the advent of multislice CT scanners, volume CT scanners, and total-body spiral acquisition modes has led to the use of Multi-Planar Reconstruction and 3D datasets. In considering the 3D resolution properties of a CT system, it is important to note that both the in-plane (x,y) resolution and the z-axis resolution (slice thickness) influence the visualization and detection of objects within the scanned volume. This study investigates ways to consider both the in-plane resolution and the z-axis resolution in a single phantom from which analytic or visual analysis can yield information on their combined effects. A new phantom, called the "Wave Phantom," is developed that can be used to sample the 3D resolution properties of a CT image, including in-plane (x,y) and z-axis information. The key development in the Wave Phantom is the incorporation of a z-axis aspect into a more traditional step (bar) resolution gauge phantom. The phantom can be examined visually, where a cutoff level may be seen; and/or characteristics of the waveform profile, including amplitude, frequency, and slope (rate of climb) of the peaks, can be extracted from the wave pattern using mathematical analysis such as the Fourier transform. The combined effect of changes in in-plane resolution and z-axis (thickness) is shown, as well as the effect of changes in either in-plane resolution or z-axis thickness alone. Examples are shown of visual images of the wave pattern, as well as the analytic characteristics of the various harmonics of a periodic wave pattern resulting from changes in resolution filter and/or slice thickness and position in the field of view. The Wave Phantom offers a promising way to investigate 3D resolution resulting from the combined effect of in-plane (x,y) and z-axis resolution, in contrast to simple 2D resolution gauges that must be used with separate measures of z-axis dependency, such as angled ramps. It offers both a visual pattern as well as a
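
    The Fourier analysis of a periodic bar-pattern profile mentioned above can be sketched as follows; resolution loss is simulated with a circular moving-average blur (illustrative, not the authors' code):

    ```python
    import numpy as np

    def harmonic_amplitudes(profile, periods, n_harmonics=3):
        """Amplitudes of the first harmonics of a profile with `periods` bar cycles."""
        spec = np.abs(np.fft.rfft(profile - profile.mean())) / (len(profile) / 2)
        return spec[[periods * k for k in range(1, n_harmonics + 1)]]

    # Square-wave gauge profile: 8 bar periods over 512 samples, amplitude +/-1
    n, periods = 512, 8
    t = np.arange(n)
    sharp = np.where(np.sin(2 * np.pi * periods * t / n) >= 0, 1.0, -1.0)
    blurred = sum(np.roll(sharp, s) for s in range(-4, 5)) / 9.0  # 9-sample blur

    amp_sharp = harmonic_amplitudes(sharp, periods)
    amp_blur = harmonic_amplitudes(blurred, periods)
    ```

    The sharp profile's fundamental amplitude is close to the ideal square-wave value 4/pi, and blurring (lower resolution) measurably attenuates it; tracking this attenuation across harmonics is the analytic readout the phantom provides.
    
    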

  13. Clinical evaluation of 3D-CT cholangiography for preoperative examination in laparoscopic cholecystectomy.

    PubMed

    Kinami, S; Yao, T; Kurachi, M; Ishizaki, Y

    1999-02-01

    Three-dimensional computed tomography (3D-CT) cholangiography is a 3D shaded-surface-display image of the biliary tract obtained using helical CT after intravenous cholangiography or cholangiography via a percutaneous transhepatic cholangio-drainage tube. We investigated whether 3D-CT cholangiography could provide a useful image for preoperative examination in laparoscopic cholecystectomy. Sixty-five patients with biliary diseases were examined by 3D-CT cholangiography. Helical scanning was performed on a Proceed Accell (GE Medical Systems, Waukesha, WI, USA), and three-dimensional images were created on an independent workstation. A clear image of the common bile duct was obtained for all patients (100%). The gallbladder was well visualized in 54 (93%) and the cystic duct was opacified in 55 (95%) of the 58 patients with a gallbladder. Thirty-one patients were diagnosed as having gallstones by 3D-CT cholangiography (sensitivity, 72.1%; specificity, 100%; accuracy, 79.3%), while 43 were diagnosed as having cholecystolithiasis by ultrasonography. The advantages of 3D-CT cholangiography were its low invasiveness, ease of image acquisition compared with endoscopic retrograde cholangiography (ERC), good opacification, and provision of a three-dimensional understanding of the biliary system, especially of the cystic duct. When combined with ultrasonography and routine liver function tests, 3D-CT cholangiography was considered very useful for obtaining information before laparoscopic cholecystectomy, and it allowed ERC to be omitted in many patients considered to have no common bile duct stones.

  14. Virtual bronchoscopic approach for combining 3D CT and endoscopic video

    NASA Astrophysics Data System (ADS)

    Sherbondy, Anthony J.; Kiraly, Atilla P.; Austin, Allen L.; Helferty, James P.; Wan, Shu-Yen; Turlington, Janice Z.; Yang, Tao; Zhang, Chao; Hoffman, Eric A.; McLennan, Geoffrey; Higgins, William E.

    2000-04-01

    To improve the care of lung-cancer patients, we are devising a diagnostic paradigm that ties together three-dimensional (3D) high-resolution computed-tomographic (CT) imaging and bronchoscopy. The system expands upon the new concept of virtual endoscopy, which has seen recent application to the chest, colon, and other anatomical regions. Our approach applies computer-graphics and image-processing tools to the analysis of 3D CT chest images and complementary bronchoscopic video. It assumes a two-stage assessment of a lung-cancer patient. During Stage 1 (CT assessment), the physician interacts with a number of visual and quantitative tools to evaluate the patient's 'virtual anatomy' (3D CT scan). Automatic analysis gives navigation paths through major airways and to pre-selected suspect sites; these paths provide useful guidance during Stage-1 CT assessment. While interacting with these paths and other software tools, the user builds a multimedia Case Study, capturing telling snapshot views, movies, and quantitative data. The Case Study contains a report on the CT scan and also provides planning information for subsequent bronchoscopic evaluation. During Stage 2 (bronchoscopy), the physician uses (1) the original CT data, (2) software graphical tools, (3) the Case Study, and (4) a standard bronchoscopy suite to obtain augmented vision for bronchoscopic assessment and treatment. To use the two data sources (CT and bronchoscopic video) simultaneously, they must be registered; we perform this registration using both manual interaction and an automated matching approach based on mutual information. We demonstrate our overall progress to date using human CT cases and CT-video from a bronchoscopy-training device.
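
    The mutual-information matching used for the automated CT-to-video registration can be sketched from a joint intensity histogram (a generic implementation, not the authors' code):

    ```python
    import numpy as np

    def mutual_information(a, b, bins=32):
        """MI between two images, estimated from their joint intensity histogram."""
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = hist / hist.sum()                        # joint probability
        px = pxy.sum(axis=1, keepdims=True)            # marginals
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

    # MI is high for aligned (correlated) images, low for scrambled ones
    rng = np.random.default_rng(0)
    a = rng.random((64, 64))
    b = a + 0.05 * rng.standard_normal((64, 64))       # "aligned" counterpart
    c = rng.permutation(a.ravel()).reshape(64, 64)     # "misaligned" counterpart
    ```

    Registration then amounts to searching over candidate poses for the virtual endoluminal rendering that maximizes MI against the video frame.
    
    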

  15. Role of 3D-CT for orthodontic and ENT evaluation in Goldenhar syndrome.

    PubMed

    Saccomanno, S; Greco, F; D'Alatri, L; De Corso, E; Pandolfini, M; Sergi, B; Pirronti, T; Deli, R

    2014-08-01

    Goldenhar syndrome is a congenital condition that includes anomalies of the derivatives of the first and second branchial arches, vertebral defects, and ocular abnormalities. It is also known as oculo-auriculo-vertebral syndrome (OAVS), hemifacial microsomia, or first and second branchial arch syndrome. It was first described by Van Duyse in 1882 and studied in more detail by M. Goldenhar in 1952. Its treatment requires a multidisciplinary approach. Herein, we describe the value of 3D-CT evaluation in a patient with Goldenhar syndrome, with particular regard to planning the diagnostic and therapeutic approach. A 7-year-old boy with Goldenhar syndrome, with a definite post-natal genetic diagnosis, was referred to our Department of Radiology for neuroimaging of the temporal bone. By 3D-CT evaluation of this young patient we observed asymmetry of the condyles, with the right one dysmorphic, short, and wide; the auricle of the right ear was replaced by a dysmorphic rough; the right middle ear had a hypoplastic tympanic cavity; and the internal auditory canal of the right ear was atretic. In our experience, 3D-CT is a powerful diagnostic instrument that offers many advantages: volumetric reproduction of the cranium and soft tissues, no overlap of anatomic parts that limits the visibility of various structures, high precision and reliability of images, and a constant and easily reproducible reference system. In our case, 3D-CT offered a very complete evaluation of all the malformations of the mandibular and temporal bone that characterize this syndrome, representing an important step toward ENT and orthodontic therapeutic approaches.

  16. Precision of cortical bone reconstruction based on 3D CT scans.

    PubMed

    Wang, Jianping; Ye, Ming; Liu, Zhongtang; Wang, Chengtao

    2009-04-01

    The precision and accuracy of human cortical bone reconstruction using 3D CT scans were evaluated using machined bone segments. Both linear and angular errors were measured. Cadaver adult femoral and tibial cortical bone segments were obtained and machined in six orthogonal planes with a precision milling machine. CT scans were then obtained and the bone segments were reconstructed as digital replicas. Dimensional and angular measurement errors were evaluated for the machined bone segments, and the results were compared with known dimensions based on milling machine settings to calculate errors due to scanning and model reconstruction. The model dimensional error in the coronal, sagittal and axial directions had a mean of 0.21 mm, with a standard deviation of 0.12 mm and a maximum error of 0.47 mm. The mean percent error was 0.74% and the maximum percent error was 1.9%. The angular error of the models in the coronal, sagittal and axial directions was calculated, yielding a mean of 0.47 degrees with a standard deviation of 0.37 degrees and a maximum of 1.33 degrees. The error in the cross-sectional axial direction had a mean of 0.54 mm with a maximum error of 0.83 mm, depending on the slice interval. The main error source was image processing, which accounted for about 70% of the total error. We found that machining cortical bone segments prior to CT scanning is an effective method for accuracy evaluation of CT-based bone reconstruction. This method can provide a reference for assessing the sensitivity, reliability and accuracy of CT-based applications in the study of movement, finite element modeling, and prosthesis construction.
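
    The summary statistics reported above (mean, standard deviation, maximum, and percent errors) follow directly from paired measured and nominal dimensions; a trivial sketch with made-up numbers:

    ```python
    import numpy as np

    def reconstruction_errors(measured, nominal):
        """Absolute (mm) and percent errors of reconstructed vs machined dimensions."""
        measured = np.asarray(measured, dtype=float)
        nominal = np.asarray(nominal, dtype=float)
        err = np.abs(measured - nominal)
        pct = 100.0 * err / nominal
        return {"mean": err.mean(), "std": err.std(ddof=1), "max": err.max(),
                "mean_pct": pct.mean(), "max_pct": pct.max()}

    # Hypothetical model dimensions (mm) vs milling-machine settings
    stats = reconstruction_errors([10.2, 20.1, 29.7], [10.0, 20.0, 30.0])
    ```

    Here the errors are 0.2, 0.1, and 0.3 mm, giving a mean of 0.2 mm, a sample standard deviation of 0.1 mm, and a maximum percent error of 2%.
    
    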

  17. Test of 3D CT reconstructions by EM + TV algorithm from undersampled data

    NASA Astrophysics Data System (ADS)

    Evseev, Ivan; Ahmann, Francielle; da Silva, Hamilton P.; Schelin, Hugo R.; Yevseyeva, Olga; Klock, Márgio C. L.

    2013-05-01

    Computerized tomography (CT) plays an important role in medical imaging for diagnosis and therapy. However, CT imaging involves ionizing radiation exposure of patients; therefore, dose reduction is an essential issue in CT. In 2011, the Expectation Maximization and Total Variation based model for CT reconstruction (EM+TV) was proposed. This method can reconstruct a better image using fewer CT projections than the usual filtered back projection (FBP) technique, and thus could significantly reduce the overall radiation dose in CT. This work reports the results of an independent numerical simulation for cone-beam CT geometry with alternative virtual phantoms. As in the original report, 3D CT images of 128×128×128 virtual phantoms were reconstructed. It was not possible to use phantoms with larger dimensions because of the slowness of code execution, even on a Core i7 CPU.
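
    The EM half of the EM+TV scheme is the classical MLEM multiplicative update. A tiny dense-matrix sketch (real CT implementations use matrix-free cone-beam projectors, and EM+TV interleaves a TV step):

    ```python
    import numpy as np

    def mlem(A, b, n_iter=200):
        """MLEM iterations for nonnegative x with A @ x ~ b (A, b nonnegative)."""
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])                  # sensitivity (column sums)
        for _ in range(n_iter):
            ratio = b / np.maximum(A @ x, 1e-12)          # measured / forward-projected
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
        return x

    # Toy consistent system: recover a nonnegative image from noiseless projections
    rng = np.random.default_rng(0)
    A = rng.random((40, 20))     # stand-in system matrix
    x_true = rng.random(20)
    b = A @ x_true
    x_rec = mlem(A, b)
    ```

    The multiplicative form keeps the iterate nonnegative automatically, which is one reason EM-type updates pair well with a TV regularization step for sparse-view data.
    
    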

  18. Test of 3D CT reconstructions by EM + TV algorithm from undersampled data

    SciTech Connect

    Evseev, Ivan; Ahmann, Francielle; Silva, Hamilton P. da

    2013-05-06

    Computerized tomography (CT) plays an important role in medical imaging for diagnosis and therapy. However, CT imaging involves ionizing radiation exposure of patients; therefore, dose reduction is an essential issue in CT. In 2011, the Expectation Maximization and Total Variation based model for CT reconstruction (EM+TV) was proposed. This method can reconstruct a better image using fewer CT projections than the usual filtered back projection (FBP) technique, and thus could significantly reduce the overall radiation dose in CT. This work reports the results of an independent numerical simulation for cone-beam CT geometry with alternative virtual phantoms. As in the original report, 3D CT images of 128 × 128 × 128 virtual phantoms were reconstructed. It was not possible to use phantoms with larger dimensions because of the slowness of code execution, even on a Core i7 CPU.

  19. TIPS Placement in Swine, Guided by Electromagnetic Real-Time Needle Tip Localization Displayed on Previously Acquired 3-D CT

    SciTech Connect

    Solomon, Stephen B.; Magee, Carolyn; Acker, David E.; Venbrux, Anthony C.

    1999-09-15

    Purpose: To determine the feasibility of guiding a transjugular intrahepatic portosystemic shunt (TIPS) procedure with an electromagnetic real-time needle-tip position sensor coupled to previously acquired 3-dimensional (3-D) computed tomography (CT) images. Methods: An electromagnetic position sensor was placed at the tip of a Colapinto needle. The real-time position and orientation of the needle tip were then displayed on previously acquired 3-D CT images that had been registered to each of the five swine. Portal vein puncture was then attempted in all animals. Results: The computer-calculated accuracy of the position sensor was on average 3 mm. Four of five portal vein punctures were successful. In the successful cases, only one or two attempts were necessary and success was achieved within minutes. Conclusion: A real-time position sensor attached to the tip of a Colapinto needle and coupled to previously acquired 3-D CT images may aid in entering the portal vein during the TIPS procedure.

  20. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    SciTech Connect

    Ravindran, V. R.; Sreelakshmi, C.; Vibin

    2008-09-26

    The application of Digital Radiography (DR) to the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A real-time DR system based on an amorphous silicon Flat Panel Detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years back, and the technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The real-time DR system is capable of generating sufficient digital x-ray image data, with object rotation, for CT image reconstruction. In this paper, the indigenous development of CT imaging based on the real-time DR system for solid rocket motors is presented. Studies were also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability to reveal the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  1. Bayesian maximal paths for coronary artery segmentation from 3D CT angiograms.

    PubMed

    Lesage, David; Angelini, Elsa D; Bloch, Isabelle; Funka-Lea, Gareth

    2009-01-01

    We propose a recursive Bayesian model for the delineation of coronary arteries from 3D CT angiograms (cardiac CTA) and discuss the use of discrete minimal path techniques as an efficient optimization scheme for the propagation of model realizations on a discrete graph. Design issues such as the definition of a suitable accumulative metric are analyzed in the context of our probabilistic formulation. Our approach jointly optimizes the vascular centerline and associated radius on a 4D space+scale graph. It employs a simple heuristic scheme to dynamically limit scale-space exploration for increased computational performance. It incorporates prior knowledge on radius variations and derives the local data likelihood from a multiscale, oriented gradient flux-based feature. From minimal cost path techniques, it inherits practical properties such as computational efficiency and workflow versatility. We quantitatively evaluated a two-point interactive implementation on a large and varied cardiac CTA database. Additionally, results from the Rotterdam Coronary Artery Algorithm Evaluation Framework are provided for comparison with existing techniques. The scores obtained are excellent (97.5% average overlap with ground truth delineated by experts) and demonstrate the high potential of the method in terms of robustness to anomalies and poor image quality.
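    The discrete minimal-path machinery underlying such delineation can be illustrated with a plain Dijkstra search on a 2D grid that accumulates a per-node cost (a stand-in for the paper's oriented gradient-flux vesselness; the 4D space+scale graph and the Bayesian model are not reproduced here):

```python
import heapq
import numpy as np

def minimal_path(cost, start, end):
    """Dijkstra on a 4-connected grid, accumulating the node cost along the
    path; low cost plays the role of high 'vesselness'."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    dist[start] = cost[start]
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == end:
            break
        if d > dist[y, x]:
            continue                        # stale queue entry
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w and d + cost[ny, nx] < dist[ny, nx]:
                dist[ny, nx] = d + cost[ny, nx]
                prev[(ny, nx)] = (y, x)
                heapq.heappush(pq, (dist[ny, nx], (ny, nx)))
    path = [end]                            # walk predecessors back to start
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

    Run on an image where a low-cost channel mimics a vessel, the returned path follows the channel between the two seed points, which is the two-point interactive workflow described above.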

  2. Use of 3D CT-based navigation in minimally invasive lateral lumbar interbody fusion.

    PubMed

    Joseph, Jacob R; Smith, Brandon W; Patel, Rakesh D; Park, Paul

    2016-09-01

    OBJECTIVE Lateral lumbar interbody fusion (LLIF) is an increasingly popular technique used to treat degenerative lumbar disease. The technique of using an intraoperative cone-beam CT (iCBCT) and an image-guided navigation system (IGNS) for LLIF cage placement has been previously described. However, other than a small feasibility study, there has been no clinical study evaluating its accuracy or safety. Therefore, the purpose of this study was to evaluate the accuracy and safety of image-guided spinal navigation in LLIF. METHODS An analysis of a prospectively acquired database was performed. Thirty-one consecutive patients were identified. Accuracy was initially determined by comparison of the planned trajectory of the IGNS with post-cage placement intraoperative fluoroscopy. Accuracy was subsequently confirmed by postprocedural CT and/or radiography. Cage placement was graded based on a previously described system separating the disc space into quarters. RESULTS The mean patient age was 63.9 years. A total of 66 spinal levels were treated, with a mean of 2.1 levels (range 1-4) treated per patient. Cage placement was noted to be accurate using IGNS in each case, as confirmed with intraoperative fluoroscopy and postoperative imaging. Sixty-four (97%) cages were placed within Quarters 1 to 2 or 2 to 3, indicating placement of the cage in the anterior or middle portions of the disc space. There were no instances of misguidance by IGNS. There was 1 significant approach-related complication (psoas muscle abscess) that required intervention, and 8 patients with transient, mild thigh paresthesias or weakness. CONCLUSIONS LLIF can be safely and accurately performed utilizing iCBCT and IGNS. Accuracy is acceptable for multilevel procedures. PMID:27104283

  3. Image Processing

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Center for use on the space shuttle Orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them, and downlink images to ground-based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs and special effects for movies. As of 1/28/98, the company could not be located; the contact/product information is therefore no longer valid.

  4. Swarm Intelligence Integrated Graph-Cut for Liver Segmentation from 3D-CT Volumes.

    PubMed

    Eapen, Maya; Korah, Reeba; Geetha, G

    2015-01-01

    The segmentation of organs in CT volumes is a prerequisite for diagnosis and treatment planning. In this paper, we focus on liver segmentation from contrast-enhanced abdominal CT volumes, a challenging task due to intensity overlapping, blurred edges, large variability in liver shape, and complex background with cluttered features. The algorithm integrates multidiscriminative cues (i.e., prior domain information, intensity model, and regional characteristics of the liver) in a graph-cut image segmentation framework. The paper proposes a swarm intelligence inspired edge-adaptive weight function for regulating the energy minimization of the traditional graph-cut model. The model is validated both qualitatively (by clinicians and radiologists) and quantitatively on publicly available computed tomography (CT) datasets (MICCAI 2007 liver segmentation challenge, 3D-IRCAD). Quantitative evaluation of segmentation results is performed using liver volume calculations, and mean scores of 80.8% and 82.5% are obtained on the MICCAI and IRCAD datasets, respectively. The experimental results illustrate the efficiency and effectiveness of the proposed method. PMID:26689833
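    A minimal s-t graph cut with intensity t-links and edge-adaptive n-links (the basic framework the paper builds on; the swarm-optimized weight function itself is not reproduced, and the 1D profile, weights, and integer scaling below are illustrative assumptions) can be written with SciPy's max-flow solver:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

def graphcut_1d(intensity, fg_mean, bg_mean, sigma=10.0, n_weight=50):
    """Tiny s-t graph cut on a 1D pixel row: t-links encode an intensity
    model; n-links get edge-adaptive weights that vanish across strong
    edges, so the min cut prefers to pass through discontinuities."""
    n = len(intensity)
    s, t = n, n + 1                               # source and sink node ids
    cap = np.zeros((n + 2, n + 2), dtype=np.int64)
    for i, v in enumerate(intensity):
        cap[s, i] = int(100 * np.exp(-((v - fg_mean) ** 2) / (2 * sigma ** 2)))
        cap[i, t] = int(100 * np.exp(-((v - bg_mean) ** 2) / (2 * sigma ** 2)))
    for i in range(n - 1):                        # edge-adaptive n-links
        w = int(n_weight * np.exp(-((intensity[i] - intensity[i + 1]) ** 2)
                                  / (2 * sigma ** 2)))
        cap[i, i + 1] = cap[i + 1, i] = w
    res = maximum_flow(csr_matrix(cap), s, t)
    # Min cut: pixels still reachable from s in the residual graph = foreground.
    residual = cap - res.flow.toarray()
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in range(n + 2):
            if residual[u, v] > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return np.array([i in seen for i in range(n)])
```

    Cutting the graph separates pixels matching the foreground intensity model from the background, with the n-link weights discouraging cuts inside homogeneous runs.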

  5. Acceleration of EM-Based 3D CT Reconstruction Using FPGA.

    PubMed

    Choi, Young-Kyu; Cong, Jason

    2016-06-01

    Reducing radiation doses is one of the key concerns in computed tomography (CT) based 3D reconstruction. Although iterative methods such as the expectation maximization (EM) algorithm can be used to address this issue, applying this algorithm to practice is difficult due to the long execution time. Our goal is to decrease this long execution time to an order of a few minutes, so that low-dose 3D reconstruction can be performed even in time-critical events. In this paper we introduce a novel parallel scheme that takes advantage of numerous block RAMs on field-programmable gate arrays (FPGAs). Also, an external memory bandwidth reduction strategy is presented to reuse both the sinogram and the voxel intensity. Moreover, a customized processing engine based on the FPGA is presented to increase overall throughput while reducing the logic consumption. Finally, a hardware and software flow is proposed to quickly construct a design for various CT machines. The complete reconstruction system is implemented on an FPGA-based server-class node. Experiments on actual patient data show that a 26.9 × speedup can be achieved over a 16-thread multicore CPU implementation.
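    The EM iteration being accelerated is, at its core, a multiplicative update; a dense-matrix MLEM sketch (the FPGA design parallelizes the forward and back-projections, which here are plain matrix products on a small synthetic system matrix, not a real CT geometry):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Basic MLEM update: x <- x * A^T(y / Ax) / A^T 1.
    A is the system matrix, y the measured sinogram; all entries >= 0."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                            # forward projection
        ratio = y / np.maximum(proj, 1e-12)     # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # back-project, scale
    return x
```

    The update preserves nonnegativity automatically, which is one reason EM is attractive for low-dose data despite its slow convergence.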

  7. Comparison of the Reliability of Anatomic Landmarks based on PA Cephalometric Radiographs and 3D CT Scans in Patients with Facial Asymmetry

    PubMed Central

    Rathee, Pooja; Jain, Pradeep; Panwar, Vasim Raja

    2011-01-01

    Introduction Conventional cephalometry is an inexpensive and well-established method for evaluating patients with dentofacial deformities. However, patients with major deformities, and in particular asymmetric cases, are difficult to evaluate by conventional cephalometry. Reliable and accurate evaluation in the orbital and midfacial region in craniofacial syndrome patients is difficult due to inherent geometric magnification, distortion and the superpositioning of the craniofacial structures on cephalograms. Both two- and three-dimensional computed tomography (CT) have been proposed to alleviate some of these difficulties. Aims and objectives The aim of our study was to compare the reliability of anatomic cephalometric points obtained from two modalities, conventional posteroanterior cephalograms and 3D CT, in patients with facial asymmetry, by comparing the intra- and interobserver variation of points recorded from frontal X-rays with those recorded from 3D CT. Materials and methods The sample included nine patients (5 males and 4 females) with an age range of 14 to 21 years and a mean age of 17.11 years, whose treatment plan called for correction of facial asymmetry. All CT scans were measured twice by two investigators, 2 weeks apart, for determination of intraobserver and interobserver variability. Similarly, all measurement points on the frontal cephalograms were traced twice, 2 weeks apart. The tracings were superimposed and the average distance between replicate point readings was used as a measure of intra- and interobserver reliability. Intra- and interobserver variations were calculated for each method and the data were imported directly into the statistical program SPSS 10.0.1 for Windows. Results Intraobserver variations of points defined on 3D CT were small compared with frontal cephalograms. The intraobserver variations ranged from 0 (A1, B1) to 0.6 mm, with variations of less than 0.5 mm for most of the points. 
Interobserver variations

  8. Mapping motion from 4D-MRI to 3D-CT for use in 4D dose calculations: A technical feasibility study

    SciTech Connect

    Boye, Dirk; Lomax, Tony; Knopf, Antje

    2013-06-15

    Purpose: Target sites affected by organ motion require a time resolved (4D) dose calculation. Typical 4D dose calculations use 4D-CT as a basis. Unfortunately, 4D-CT images have the disadvantage of being a 'snap-shot' of the motion during acquisition and of assuming regularity of breathing. In addition, 4D-CT acquisitions involve a substantial additional dose burden to the patient making many, repeated 4D-CT acquisitions undesirable. Here the authors test the feasibility of an alternative approach to generate patient specific 4D-CT data sets. Methods: In this approach motion information is extracted from 4D-MRI. Simulated 4D-CT data sets [which the authors call 4D-CT(MRI)] are created by warping extracted deformation fields to a static 3D-CT data set. The employment of 4D-MRI sequences for this has the advantage that no assumptions on breathing regularity are made, irregularities in breathing can be studied and, if necessary, many repeat imaging studies (and consequently simulated 4D-CT data sets) can be performed on patients and/or volunteers. The accuracy of 4D-CT(MRI)s has been validated by 4D proton dose calculations. Our 4D dose algorithm takes into account displacements as well as deformations on the originating 4D-CT/4D-CT(MRI) by calculating the dose of each pencil beam based on an individual time stamp of when that pencil beam is applied. According to corresponding displacement and density-variation-maps the position and the water equivalent range of the dose grid points is adjusted at each time instance. Results: 4D dose distributions, using 4D-CT(MRI) data sets as input were compared to results based on a reference conventional 4D-CT data set capturing similar motion characteristics. Almost identical 4D dose distributions could be achieved, even though scanned proton beams are very sensitive to small differences in the patient geometry. In addition, 4D dose calculations have been performed on the same patient, but using 4D-CT(MRI) data sets based on
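    The central operation, applying deformation fields extracted from 4D-MRI to a static 3D-CT, is a backward warp of the volume; a minimal SciPy sketch (the displacement-field layout, one component per axis on the voxel grid, is an illustrative assumption):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(vol, disp):
    """Backward warp with trilinear interpolation: output(p) = vol(p + disp(p)).
    disp has shape (3,) + vol.shape (one displacement component per axis)."""
    coords = np.indices(vol.shape).astype(float) + disp
    return map_coordinates(vol, coords, order=1, mode='nearest')
```

    Applying one such warp per breathing phase of the 4D-MRI yields the simulated 4D-CT(MRI) phases from a single static CT.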

  9. [3D interactive clipping technology in medical image processing].

    PubMed

    Sun, Shaoping; Yang, Kaitai; Li, Bin; Li, Yuanjun; Liang, Jing

    2013-09-01

    The aim of this paper is to study methods for 3D visualization and 3D interactive clipping of CT/MRI image sequences in arbitrary orientations based on the Visualization Toolkit (VTK). A new method for clipping 3D CT/MRI reconstructed images is presented, which can clip the 3D object and the 3D space of a medical image sequence to observe the inner structure, using a 3D widget for manipulating an infinite plane. Experimental results show that the proposed method can implement 3D interactive clipping of medical images effectively and obtain satisfactory, good-quality results in a short time.
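    The core of clipping against an infinite plane is a signed-distance test; a library-agnostic sketch on raw vertex coordinates (in VTK the analogous mesh clipping is performed by its clipping filters driven by the plane widget):

```python
import numpy as np

def clip_points(points, origin, normal):
    """Keep the points on the positive side of an infinite plane defined
    by a point on the plane (origin) and its normal vector."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    signed = (points - origin) @ n      # signed distance to the plane
    return points[signed >= 0]
```

    Interactively dragging the widget simply updates origin and normal and re-runs the clip.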

  10. Stress Analysis of a Class II MO-Restored Tooth Using a 3D CT-Based Finite Element Model

    PubMed Central

    Chan, Yiu Pong; Tang, Chak Yin; Gao, Bo

    2012-01-01

    A computational method has been developed for stress analysis of a restored tooth so that experimental effort can be minimized. The objectives of this study include (i) developing a method to create a 3D FE assembly model for a restored tooth based on CT images and (ii) conducting stress analysis of the restored tooth using the 3D FE model established. To build up a solid computational model of a tooth, a method has been proposed to construct a 3D model from 2D CT-scanned images. Facilitated with CAD tools, the 3D tooth model has been virtually incorporated with a Class II MO restoration. The tooth model is triphasic, including the enamel, dentin, and pulp phases. To mimic the natural constraint on the movement of the tooth model, its corresponding mandible model has also been generated. Relatively high maximum principal stress values were computed at the surface under loading and in the marginal region of the interface between the restoration and the tooth phases. PMID:22844287

  11. Image-Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1986-01-01

    Apple Image-Processing Educator (AIPE) explores ability of microcomputers to provide personalized computer-assisted instruction (CAI) in digital image processing of remotely sensed images. AIPE is "proof-of-concept" system, not polished production system. User-friendly prompts provide access to explanations of common features of digital image processing and of sample programs that implement these features.

  12. Multispectral imaging and image processing

    NASA Astrophysics Data System (ADS)

    Klein, Julie

    2014-02-01

    The color accuracy of conventional RGB cameras is not sufficient for many color-critical applications. One of these applications, namely the measurement of color defects in yarns, is why Prof. Til Aach and the Institute of Image Processing and Computer Vision (RWTH Aachen University, Germany) started off with multispectral imaging. The first acquisition device was a camera using a monochrome sensor and seven bandpass color filters positioned sequentially in front of it. The camera allowed sampling the visible wavelength range more accurately and reconstructing the spectra for each acquired image position. An overview will be given over several optical and imaging aspects of the multispectral camera that have been investigated. For instance, optical aberrations caused by filters and camera lens deteriorate the quality of captured multispectral images. The different aberrations were analyzed thoroughly and compensated based on models for the optical elements and the imaging chain by utilizing image processing. With this compensation, geometrical distortions disappear and sharpness is enhanced, without reducing the color accuracy of multispectral images. Strong foundations in multispectral imaging were laid and a fruitful cooperation was initiated with Prof. Bernhard Hill. Current research topics like stereo multispectral imaging and goniometric multispectral measurements that are further explored with his expertise will also be presented in this work.
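    Reconstructing a spectrum per pixel from a handful of filtered measurements is an underdetermined inverse problem, commonly handled with regularized least squares; in this sketch the Gaussian filter bank is a stand-in for the institute's actual filter sensitivities, and the regularization weight is an illustrative choice:

```python
import numpy as np

def reconstruct_spectrum(S, m, lam=1e-6):
    """Tikhonov-regularized least squares:
    r = argmin ||S r - m||^2 + lam ||r||^2 = (S^T S + lam I)^{-1} S^T m,
    where S holds one filter sensitivity curve per row and m the camera
    measurements through those filters."""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ m)
```

    With only seven filters over dozens of wavelength samples the true spectrum is not uniquely determined, but the regularized solution reproduces the measurements and, for smooth spectra, a close approximation of the spectrum itself.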

  13. Hip dysplasia, pelvic obliquity, and scoliosis in cerebral palsy: a qualitative analysis using 3D CT reconstruction

    NASA Astrophysics Data System (ADS)

    Russ, Mark D.; Abel, Mark F.

    1998-06-01

    Five patients with cerebral palsy, hip dysplasia, pelvic obliquity, and scoliosis were evaluated retrospectively using three dimensional computed tomography (3DCT) scans of the proximal femur, pelvis, and lumbar spine to qualitatively evaluate their individual deformities by measuring a number of anatomical landmarks. Three dimensional reconstructions of the data were visualized, analyzed, and then manipulated interactively to perform simulated osteotomies of the proximal femur and pelvis to achieve surgical correction of the hip dysplasia. Severe deformity can occur in spastic cerebral palsy, with serious consequences for the quality of life of the affected individuals and their families. Controversy exists regarding the type, timing and efficacy of surgical intervention for correction of hip dysplasia in this population. Other authors have suggested 3DCT studies are required to accurately analyze acetabular deficiency, and that this data allows for more accurate planning of reconstructive surgery. It is suggested here that interactive manipulation of the data to simulate the proposed surgery is a clinically useful extension of the analysis process and should also be considered as an essential part of the pre-operative planning to assure that the appropriate procedure is chosen. The surgical simulation may reduce operative time and improve surgical correction of the deformity.

  14. Biomedical image processing

    SciTech Connect

    Huang, H.K.

    1981-01-01

    Biomedical image processing is a very broad field; it covers biomedical signal gathering, image forming, picture processing, and image display to medical diagnosis based on features extracted from images. This article reviews this topic in both its fundamentals and applications. In its fundamentals, some basic image processing techniques including outlining, deblurring, noise cleaning, filtering, search, classical analysis and texture analysis have been reviewed together with examples. The state-of-the-art image processing systems have been introduced and discussed in two categories: general purpose image processing systems and image analyzers. In order for these systems to be effective for biomedical applications, special biomedical image processing languages have to be developed. The combination of both hardware and software leads to clinical imaging devices. Two different types of clinical imaging devices have been discussed. These include radiological imaging: radiography, thermography, ultrasound, nuclear medicine and CT. Among these, thermography is the most noninvasive but is limited in application due to the low energy of its source. X-ray CT is excellent for static anatomical images and is moving toward the measurement of dynamic function, whereas nuclear imaging is moving toward organ metabolism and ultrasound is toward tissue physical characteristics. Heart imaging is one of the most interesting and challenging research topics in biomedical image processing; current methods including the invasive-technique cineangiography, and noninvasive ultrasound, nuclear medicine, transmission, and emission CT methodologies have been reviewed.
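    Of the basic techniques listed, noise cleaning is the easiest to show concretely; a median filter removes impulse noise while preserving edges (a generic example, not tied to any system reviewed in the article):

```python
import numpy as np
from scipy.ndimage import median_filter

def clean_impulse_noise(img, size=3):
    """Noise cleaning: replace each pixel by the median of its size x size
    neighborhood, which discards isolated impulses ('salt and pepper')."""
    return median_filter(img, size=size)
```

    Unlike a mean filter, the median does not blur a step edge, which matters for the outlining step mentioned above.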

  15. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos run during robberies.

  16. Hyperspectral image processing methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  17. Subroutines For Image Processing

    NASA Technical Reports Server (NTRS)

    Faulcon, Nettie D.; Monteith, James H.; Miller, Keith W.

    1988-01-01

    Image Processing Library computer program, IPLIB, is collection of subroutines facilitating use of COMTAL image-processing system driven by HP 1000 computer. Functions include addition or subtraction of two images with or without scaling, display of color or monochrome images, digitization of image from television camera, display of test pattern, manipulation of bits, and clearing of screen. Provides capability to read or write points, lines, and pixels from image; read or write at location of cursor; and read or write array of integers into COMTAL memory. Written in FORTRAN 77.

  18. Medical image processing system

    NASA Astrophysics Data System (ADS)

    Wang, Dezong; Wang, Jinxiang

    1994-12-01

    In this paper a medical image processing system, the NAI200 Medical Image Processing System, which has been appraised by the Chinese Government, is described; principles and cases are provided. Many kinds of pictures are used in modern medical diagnosis, for example B-mode ultrasound, X-ray, CT and MRI. Sometimes the pictures are not good enough for diagnosis: noise obscures the real situation in these pictures, which means image processing is needed. There are four functions in the system. The first is image processing, involving more than thirty-four programs. The second is calculation: the areas or volumes of single or multiple tissues are calculated. Three-dimensional reconstruction is the third: the stereo images of organs or tumors are reconstructed from cross-sections. The last is image storage: all pictures can be transformed to digital images, then stored on hard disk or floppy disk. In this paper not only are all functions of the system introduced, the basic principles of these functions are also explained in detail. The system has been applied in hospitals, and the images of hundreds of cases have been processed; the functions are described in combination with real cases, of which a few examples are introduced here.

  19. Image processing in medicine

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans

    2001-12-01

    This article is divided into two parts: the first is an opinion, the second is a description. The opinion is that diagnostic medical imaging is not a detection problem. The description is of a specific medical image-processing program. Why the opinion? If medical imaging were a detection problem, then image processing would be unimportant. However, image processing is crucial. We illustrate this fact using three examples: ultrasound, magnetic resonance imaging and, most poignantly, computed radiography. Although the examples are anecdotal, they are illustrative. The description is of the image-processing program ImprocRAD, written by one of the authors (Dallas). First we discuss the motivation for creating yet another image-processing program, including system characterization, which is an area of expertise of one of the authors (Roehrig). We then look at the structure of the program and finally, to the point, the specific application: mammographic diagnostic reading. We mention rapid display of mammogram image sets and then discuss processing. In that context, we describe a real-time image-processing tool we term the MammoGlass.

  20. Apple Image Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1981-01-01

    A software system design is proposed and demonstrated with pilot-project software. The system permits the Apple II microcomputer to be used for personalized computer-assisted instruction in the digital image processing of LANDSAT images. The programs provide data input, menu selection, graphic and hard-copy displays, and both general and detailed instructions. The pilot-project results are considered to be successful indicators of the capabilities and limits of microcomputers for digital image processing education.

  1. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  2. Image processing mini manual

    NASA Technical Reports Server (NTRS)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  3. Image Processing System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.

  4. BAOlab: Image processing program

    NASA Astrophysics Data System (ADS)

    Larsen, Søren S.

    2014-03-01

    BAOlab is an image processing package written in C that should run on nearly any UNIX system with just the standard C libraries. It reads and writes images in standard FITS format; 16- and 32-bit integer as well as 32-bit floating-point formats are supported. Multi-extension FITS files are currently not supported. Among its tools are ishape for size measurements of compact sources, mksynth for generating synthetic images consisting of a background signal including Poisson noise and a number of pointlike sources, imconvol for convolving two images (a “source” and a “kernel”) with each other using fast Fourier transforms (FFTs) and storing the output as a new image, and kfit2d for fitting a two-dimensional King model to an image.
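    The imconvol tool rests on the convolution theorem: multiplication in the Fourier domain is circular convolution in the image domain. A minimal Python sketch of that core operation (BAOlab itself is written in C and additionally handles FITS I/O and padding):

```python
import numpy as np

def fft_convolve(image, kernel):
    """Circular convolution of two equal-sized 2D arrays via the FFT:
    convolution in image space = pointwise product in frequency space."""
    F = np.fft.fft2(image) * np.fft.fft2(kernel)
    return np.real(np.fft.ifft2(F))
```

    For a PSF kernel smaller than the image, one would zero-pad it to the image size first; without padding the convolution wraps around the edges.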

  5. Methods in Astronomical Image Processing

    NASA Astrophysics Data System (ADS)

    Jörsäter, S.

    Contents: A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future

  6. Image processing occupancy sensor

    DOEpatents

    Brackney, Larry J.

    2016-09-27

    A system and method of detecting occupants in a building automation system environment using image based occupancy detection and position determinations. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the position and location of the occupants, the system can finely control the elements to optimize conditions for the occupants, optimize energy usage, among other advantages.

  7. Three-dimensional analysis of alveolar bone resorption by image processing of 3-D dental CT images

    NASA Astrophysics Data System (ADS)

    Nagao, Jiro; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Yamada, Shohzoh; Naitoh, Munetaka

    2006-03-01

    We have developed a novel system that provides total support for assessment of alveolar bone resorption, caused by periodontitis, based on three-dimensional (3-D) dental CT images. In spite of the difficulty in perceiving the complex 3-D shape of resorption, dentists assessing resorption location and severity have been relying on two-dimensional radiography and probing, which merely provides one-dimensional information (depth) about resorption shape. However, there has been little work on assisting assessment of the disease by 3-D image processing and visualization techniques. This work provides quantitative evaluation results and figures for our system that measures the three-dimensional shape and spread of resorption. It has the following functions: (1) measures the depth of resorption by virtually simulating probing in the 3-D CT images, taking advantage of image processing to avoid obstruction by teeth on the inter-proximal sides and to use much smaller measurement intervals than the conventional examination; (2) visualizes the disposition of the depth by movies and graphs; (3) produces a quantitative index and intuitive visual representation of the spread of resorption in the inter-radicular region in terms of area; and (4) calculates the volume of resorption as another severity index in the inter-radicular region and the region outside it. Experimental results in two cases of 3-D dental CT images and a comparison of the results with the clinical examination results and experts' measurements of the corresponding patients confirmed that the proposed system gives satisfactory results, including 0.1 to 0.6 mm of resorption measurement (probing) error and fairly intuitive presentation of measurement and calculation results.

  8. Programmable Image Processing Element

    NASA Astrophysics Data System (ADS)

    Eversole, W. L.; Salzman, J. F.; Taylor, F. V.; Harland, W. L.

    1982-07-01

    The algorithmic solution to many image-processing problems frequently uses sums of products where each multiplicand is an input sample (pixel) and each multiplier is a stored coefficient. This paper presents a large-scale integrated circuit (LSIC) implementation that provides accumulation of nine products and discusses its evolution from design through application. A read-only memory (ROM) accumulate algorithm is used to perform the multiplications and is the key to one-chip implementation. The ROM function is actually implemented with erasable programmable ROM (EPROM) to allow reprogramming of the circuit to a variety of different functions. A real-time brassboard is being constructed to demonstrate four different image-processing operations on TV images.
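    In software terms, the ROM-accumulate idea replaces each multiplier with a lookup table indexed by the 8-bit pixel value, one table per stored coefficient; the nine table outputs are then summed. A software analogy of the trick (not the actual circuit design):

```python
import numpy as np

def rom_accumulate(window, coeffs):
    """Sum of nine products computed without run-time multiplication:
    tables[i][p] pre-stores coeffs[i] * p for every 8-bit value p, so the
    per-pixel work reduces to nine lookups and an accumulation."""
    tables = [np.arange(256) * c for c in coeffs]   # one 'ROM' per coefficient
    return sum(int(tables[i][p]) for i, p in enumerate(window))
```

    Reprogramming the EPROM corresponds to rebuilding the tables with new coefficients, which is how one chip serves several image-processing operations.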

  9. Image-Processing Program

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Hull, D. R.

    1994-01-01

    IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines, within which sub-subroutines are also selected via keyboard. The algorithm has possible scientific, industrial, and biomedical applications in the study of flows in materials, the analysis of steels and ores, and pathology, respectively.

  10. Image processing and reconstruction

    SciTech Connect

    Chartrand, Rick

    2012-06-15

    This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

  11. Image Processing for Teaching.

    ERIC Educational Resources Information Center

    Greenberg, R.; And Others

    1993-01-01

    The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…

  12. Digital image processing.

    PubMed

    Lo, Winnie Y; Puchalski, Sarah M

    2008-01-01

    Image processing or digital image manipulation is one of the greatest advantages of digital radiography (DR). Preprocessing depends on the modality and corrects for system irregularities such as differential light detection efficiency, dead pixels, or dark noise. Processing is manipulation of the raw data just after acquisition. It is generally proprietary and specific to the DR vendor but encompasses manipulations such as unsharp mask filtering within two or more spatial frequency bands, histogram sliding and stretching, and gray scale rendition or lookup table application. These processing steps have a profound effect on the final appearance of the radiograph, but they can also lead to artifacts unique to digital systems. Postprocessing refers to manipulation of the final appearance of the radiograph by the end-user and does not involve alteration of the raw data.
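
    Unsharp mask filtering, one of the processing steps named above, can be sketched as follows (a single-band toy; vendor pipelines apply it within two or more spatial frequency bands, and all parameters here are assumed):

```python
# Minimal single-band unsharp mask on a grayscale image stored as a list of
# rows. A 3x3 box blur stands in for the vendor's band-splitting filter.

def box_blur3(img):
    """3x3 mean blur with edge clamping."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out

def unsharp_mask(img, amount=1.5):
    """Sharpened = original + amount * (original - blurred)."""
    blurred = box_blur3(img)
    return [[img[y][x] + amount * (img[y][x] - blurred[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]
```

A flat image passes through unchanged, while edges (where the original differs from its blur) are amplified by `amount`.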

  13. Image processing technology

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Balick, L.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The primary objective of this project was to advance image processing and visualization technologies for environmental characterization. This was effected by developing and implementing analyses of remote sensing data from satellite and airborne platforms, and demonstrating their effectiveness in visualization of environmental problems. Many sources of information were integrated as appropriate using geographic information systems.

  14. Introduction to computer image processing

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  15. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

  16. scikit-image: image processing in Python.

    PubMed

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  17. Image Processing Diagnostics: Emphysema

    NASA Astrophysics Data System (ADS)

    McKenzie, Alex

    2009-10-01

    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images show clearly whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, as it appears merely as subtle, barely distinct dark spots on the lung. Our goal is to create a software plug-in that interfaces with existing open source medical imaging software to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods, which involve looking at percentages of radiodensities in air passages of the lung.
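
    The skewness measure described above amounts to the third standardized moment of the attenuation histogram; a minimal sketch, with invented Hounsfield-style sample values:

```python
# Skewness (third standardized moment) of a set of CT attenuation values.
# The sample values below are invented for illustration only.
import math

def skewness(values):
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return sum(((v - mean) / sd) ** 3 for v in values) / n

# Emphysematous lung adds a tail of very low attenuation (more negative HU),
# which shows up as negative skew in the distribution.
normal_lung = [-850, -840, -860, -845, -855, -850]   # symmetric: skew ~ 0
emphysema   = [-850, -840, -860, -845, -950, -980]   # low-HU tail: skew < 0
```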

  18. Computer image processing and recognition

    NASA Technical Reports Server (NTRS)

    Hall, E. L.

    1979-01-01

    A systematic introduction to the concepts and techniques of computer image processing and recognition is presented. Consideration is given to such topics as image formation and perception; computer representation of images; image enhancement and restoration; reconstruction from projections; digital television, encoding, and data compression; scene understanding; scene matching and recognition; and processing techniques for linear systems.

  19. Image processing and recognition for biological images

    PubMed Central

    Uchida, Seiichi

    2013-01-01

    This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and the typical tools used to handle them. Image processing is a large research area aimed at improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition, the technique of classifying an input image into one of a set of predefined classes, is also a large research area. This paper overviews its two main modules, that is, the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are a very difficult target even for state-of-the-art image processing and pattern recognition techniques, due to noise, deformation, etc. This paper is intended as a tutorial guide to bridge biology and image processing researchers for further collaboration in tackling such a difficult target. PMID:23560739

  20. Smart Image Enhancement Process

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

    2012-01-01

    Contrast and lightness measures are used to first classify the image as being one of non-turbid and turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not-sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
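
    The selection cascade in this abstract can be paraphrased as a small decision function; the predicates below are stand-ins for the patent's contrast, lightness, and sharpness measures, which are not specified here:

```python
# Hedged paraphrase of the patent's selection flow. The caller supplies the
# measures as callables; all of them are hypothetical stand-ins.

def select_final(image, is_turbid, score, enhance, is_sharp, sharpen):
    """Walk the turbid / contrast-lightness / sharpness cascade."""
    if is_turbid(image):
        candidate = enhance(image)              # first enhanced image
    else:
        candidate = image                       # non-turbid image
        if score(candidate) == "poor":
            candidate = enhance(candidate)      # second enhanced image
            if score(candidate) == "poor":
                candidate = enhance(candidate)  # third enhanced image
    if not is_sharp(candidate):
        candidate = sharpen(candidate)          # sharpened image
    return candidate
```

For example, a turbid image that is already sharp is enhanced once and returned; a non-turbid image scoring "poor" twice is enhanced twice before the sharpness check.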

  1. ASPIC: STARLINK image processing package

    NASA Astrophysics Data System (ADS)

    Davenhall, A. C.; Hartley, Ken F.; Penny, Alan J.; Kelly, B. D.; King, Dave J.; Lupton, W. F.; Tudhope, D.; Pike, C. D.; Cooke, J. A.; Pence, W. D.; Wallace, Patrick T.; Brownrigg, D. R. K.; Baines, Dave W. T.; Warren-Smith, Rodney F.; McNally, B. V.; Bell, L. L.; Jones, T. A.; Terrett, Dave L.; Pearce, D. J.; Carey, J. V.; Currie, Malcolm J.; Benn, Chris; Beard, S. M.; Giddings, Jack R.; Balona, Luis A.; Harrison, B.; Wood, Roger; Sparkes, Bill; Allan, Peter M.; Berry, David S.; Shirt, J. V.

    2015-10-01

    ASPIC handled basic astronomical image processing. Early releases concentrated on image arithmetic, standard filters, expansion/contraction/selection/combination of images, and displaying and manipulating images on the ARGS and other devices. Later releases added new astronomy-specific applications to this sound framework. The ASPIC collection of about 400 image-processing programs was written using the Starlink "interim" environment in the 1980s; the software is now obsolete.

  2. Processing Visual Images

    SciTech Connect

    Litke, Alan

    2006-03-27

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  3. Filter for biomedical imaging and image processing

    NASA Astrophysics Data System (ADS)

    Mondal, Partha P.; Rajan, K.; Ahmad, Imteyaz

    2006-07-01

    Image filtering techniques have numerous potential applications in biomedical imaging and image processing. The design of filters largely depends on a priori knowledge about the type of noise corrupting the image. This makes the standard filters application specific. Widely used filters such as average, Gaussian, and Wiener reduce noisy artifacts by smoothing. However, this operation normally results in smoothing of the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image nonsmooth. An integrated general approach to design a finite impulse response filter based on Hebbian learning is proposed for optimal image filtering. This algorithm exploits the interpixel correlation by updating the filter coefficients using Hebbian learning. The algorithm is made iterative for achieving efficient learning from the neighborhood pixels. This algorithm performs optimal smoothing of the noisy image by preserving high-frequency as well as low-frequency features. Evaluation results show that the proposed finite impulse response filter is robust under various noise distributions such as Gaussian noise, salt-and-pepper noise, and speckle noise. Furthermore, the proposed approach does not require any a priori knowledge about the type of noise. The number of unknown parameters is few, and most of these parameters are adaptively obtained from the processed image. The proposed filter is successfully applied for image reconstruction in a positron emission tomography imaging modality. The images reconstructed by the proposed algorithm are found to be superior in quality compared with those reconstructed by existing PET image reconstruction methodologies.

  4. Image processing in digital radiography.

    PubMed

    Freedman, M T; Artz, D S

    1997-01-01

    Image processing is a critical part of obtaining high-quality digital radiographs. Fortunately, the user of these systems does not need to understand image processing in detail, because the manufacturers provide good starting values. Because radiologists may have different preferences in image appearance, it is helpful to know that many aspects of image appearance can be changed by image processing, and a new preferred setting can be loaded into the computer and saved so that it can become the new standard processing method. Image processing allows one to change the overall optical density of an image and to change its contrast. Spatial frequency processing allows an image to be sharpened, improving its appearance. It also allows noise to be blurred so that it is less visible. Care is necessary to avoid the introduction of artifacts or the hiding of mediastinal tubes.

  5. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  6. Estimation of three-dimensional knee joint movement using bi-plane x-ray fluoroscopy and 3D-CT

    NASA Astrophysics Data System (ADS)

    Haneishi, Hideaki; Fujita, Satoshi; Kohno, Takahiro; Suzuki, Masahiko; Miyagi, Jin; Moriya, Hideshige

    2005-04-01

    Acquisition of exact information about three-dimensional knee joint movement is desired in plastic surgery. Conventional X-ray fluoroscopy provides dynamic but only two-dimensional projected images. On the other hand, three-dimensional CT provides three-dimensional but only static images. In this paper, a method for acquiring three-dimensional knee joint movement using both bi-plane dynamic X-ray fluoroscopy and static three-dimensional CT is proposed. The basic idea is the use of 2D/3D registration using digitally reconstructed radiographs (DRR), or virtual projections of the CT data. The original idea is not new, but the application of bi-plane fluoroscopy to the natural bones of the knee is reported for the first time. The technique was applied to two volunteers and successful results were obtained. Accuracy evaluations through computer simulation and a phantom experiment with the knee joint of a pig were also conducted.

  7. Multiscale Image Processing of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun, yet we have not improved our analysis techniques. The standard techniques used for the analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer; little quantitative and objective analysis is done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods are possibly suited to solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales, so these techniques could be used to quantify the image processing done by the observer's eyes and brain. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO and TRACE images.
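
    A one-level Haar transform is the simplest instance of the multiscale decompositions discussed here (a generic illustration, not the authors' code):

```python
# One level of the Haar wavelet transform: split a signal into a coarse
# half (pairwise averages) and a detail half (pairwise differences), then
# reconstruct exactly from the two.

def haar_step(signal):
    """Return (coarse, detail) for an even-length signal."""
    assert len(signal) % 2 == 0
    coarse = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return coarse, detail

def haar_inverse(coarse, detail):
    """Perfect reconstruction: each pair is (c + d, c - d)."""
    out = []
    for c, d in zip(coarse, detail):
        out += [c + d, c - d]
    return out
```

Recursing `haar_step` on the coarse half yields the multiresolution pyramid; thresholding the detail coefficients gives the "multiresolution support" the abstract mentions.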

  8. The APL image processing laboratory

    NASA Technical Reports Server (NTRS)

    Jenkins, J. O.; Randolph, J. P.; Tilley, D. G.; Waters, C. A.

    1984-01-01

    The present and proposed capabilities of the Central Image Processing Laboratory, which provides a powerful resource for the advancement of programs in missile technology, space science, oceanography, and biomedical image analysis, are discussed. The use of image digitizing, digital image processing, and digital image output permits a variety of functional capabilities, including: enhancement, pseudocolor, convolution, computer output microfilm, presentation graphics, animations, transforms, geometric corrections, and feature extractions. The hardware and software of the Image Processing Laboratory, consisting of digitizing and processing equipment, software packages, and display equipment, is described. Attention is given to applications for imaging systems, map geometric correction, raster movie display of Seasat ocean data, Seasat and Skylab scenes of Nantucket Island, Space Shuttle imaging radar, differential radiography, and a computerized tomographic scan of the brain.

  9. Cooperative processes in image segmentation

    NASA Technical Reports Server (NTRS)

    Davis, L. S.

    1982-01-01

    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.

  10. Voyager image processing at the Image Processing Laboratory

    NASA Technical Reports Server (NTRS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-01-01

    This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is exhibited that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

  11. Update on three-dimensional image reconstruction for preoperative simulation in thoracic surgery

    PubMed Central

    Chen-Yoshikawa, Toyofumi F.

    2016-01-01

    Background Three-dimensional computed tomography (3D-CT) technologies have been developed and refined over time. Recently, high-speed and high-quality 3D-CT technologies have also been introduced to the field of thoracic surgery. The purpose of this manuscript is to demonstrate several examples of these 3D-CT technologies in various scenarios in thoracic surgery. Methods A newly-developed high-speed and high-quality 3D image analysis software system was used in Kyoto University Hospital. Simulation and/or navigation were performed using this 3D-CT technology in various thoracic surgeries. Results Preoperative 3D-CT simulation was performed in most patients undergoing video-assisted thoracoscopic surgery (VATS). Anatomical variation was frequently detected preoperatively, which was useful in performing VATS procedures when using only a monitor for vision. In sublobar resection, 3D-CT simulation was even more helpful. For small lung lesions that were expected to be neither visible nor palpable, preoperative marking of the lesions was performed using 3D-CT simulation, and wedge resection or segmentectomy was successfully performed with confidence. This technique also enabled virtual-reality endobronchial ultrasonography (EBUS), which made the procedure safer and more reliable. Furthermore, in living-donor lobar lung transplantation (LDLLT), surgical procedures for donor lobectomy were simulated preoperatively by 3D-CT angiography, which also informed the surgical procedures for recipient surgery. New surgical techniques such as right and left inverted LDLLT were also established using 3D models created with this technique. Conclusions Since the introduction of 3D-CT technology to the field of thoracic surgery, preoperative simulation has been developed for various thoracic procedures. In the near future, this technique will become more common in thoracic surgery and will see frequent use by thoracic surgeons in daily practice worldwide. PMID:27014477

  12. SWNT Imaging Using Multispectral Image Processing

    NASA Astrophysics Data System (ADS)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.

    2012-02-01

    A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built-in Bayer filter of the CCD camera was utilized, using OpenCV C++ libraries for image processing, to decompose the image generated in a high-magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels and effectively isolate the SWNT signals from the background.
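
    Splitting a Bayer mosaic into its three channels, the decomposition step described above, might look like this (an RGGB layout is assumed; actual camcorder sensors vary, and the real system works on OpenCV images rather than lists):

```python
# Separate an RGGB Bayer mosaic (list of rows) into its R, G, and B samples.
# In an RGGB pattern, red sits at even-row/even-column sites, blue at
# odd-row/odd-column sites, and green at the remaining two sites per quad.

def split_bayer(mosaic):
    h, w = len(mosaic), len(mosaic[0])
    r = [mosaic[y][x] for y in range(0, h, 2) for x in range(0, w, 2)]
    g = [mosaic[y][x] for y in range(h) for x in range(w) if (x + y) % 2 == 1]
    b = [mosaic[y][x] for y in range(1, h, 2) for x in range(1, w, 2)]
    return r, g, b
```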

  13. An image processing algorithm for PPCR imaging

    NASA Astrophysics Data System (ADS)

    Cowen, Arnold R.; Giles, Anthony; Davies, Andrew G.; Workman, A.

    1993-09-01

    During 1990 the UK Department of Health installed two Photostimulable Phosphor Computed Radiography (PPCR) systems in the General Infirmary at Leeds with a view to evaluating the clinical and physical performance of the technology prior to its introduction into the NHS. An issue that came to light from the outset of the project was the radiologists' reservations about the influence of the standard PPCR computerized image processing on image quality and diagnostic performance. An investigation was set up by FAXIL to develop an algorithm to produce single-format, high-quality PPCR images that would be easy to implement and allay the concerns of radiologists.

  14. Astronomical Image Processing with Hadoop

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-07-01

    In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. We will end by briefly describing the longer term goals of our work, namely detection and classification
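
    The coaddition dataflow described above can be modeled in the map-reduce idiom with plain Python; Hadoop specifics, registration, and FITS handling are omitted, so this is only a sketch of the per-pixel averaging step:

```python
# Toy map-reduce coaddition: the map phase emits (pixel_index, value) pairs
# from each already-registered image, and the reduce phase averages per pixel.
from collections import defaultdict

def map_phase(images):
    """Emit (pixel_index, value) for every pixel of every image."""
    for img in images:
        for idx, value in enumerate(img):
            yield idx, value

def reduce_phase(pairs):
    """Average all values that share a pixel index."""
    sums, counts = defaultdict(float), defaultdict(int)
    for idx, value in pairs:
        sums[idx] += value
        counts[idx] += 1
    return [sums[i] / counts[i] for i in sorted(sums)]

# Three "registered" 4-pixel images averaged into one coadded image.
images = [[1, 2, 3, 4], [3, 2, 1, 4], [2, 2, 2, 4]]
coadd = reduce_phase(map_phase(images))  # -> [2.0, 2.0, 2.0, 4.0]
```

In real Hadoop the shuffle between the two phases groups pairs by key across machines; here `defaultdict` plays that role in-process.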

  15. Acoustic image-processing software

    NASA Astrophysics Data System (ADS)

    Several algorithms that display, enhance and analyze side-scan sonar images of the seafloor have been developed by the University of Washington, Seattle, as part of an Office of Naval Research funded program in acoustic image analysis. One of these programs, PORTAL, is a small (less than 100K) image display and enhancement program that can run on MS-DOS computers with VGA boards. This program is now available in the public domain for general use in acoustic image processing. PORTAL is designed to display side-scan sonar data that is stored in most standard formats, including SeaMARC I, II, 150 and GLORIA data. (See image.) In addition to the “standard” formats, PORTAL has a module “front end” that allows the user to modify the program to accept other image formats. In addition to side-scan sonar data, the program can also display digital optical images from scanners and “framegrabbers,” gridded bathymetry data from Sea Beam and other sources, and potential field (magnetics/gravity) data. While limited in image analysis capability, the program allows image enhancement by histogram manipulation and basic filtering operations, including multistage filtering. PORTAL can print reasonably high-quality images on Postscript laser printers and lower-quality images on non-Postscript printers with HP Laserjet emulation. Images suitable only for index sheets are also possible on dot matrix printers.
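
    PORTAL's histogram manipulation presumably includes a contrast stretch of roughly this form (a generic illustration; PORTAL's actual controls are not documented here):

```python
# Linear contrast stretch: map the observed pixel range onto the full
# display range [lo, hi], the basic histogram manipulation for enhancement.

def contrast_stretch(pixels, lo=0, hi=255):
    pmin, pmax = min(pixels), max(pixels)
    if pmax == pmin:
        return [lo] * len(pixels)  # flat image: nothing to stretch
    scale = (hi - lo) / (pmax - pmin)
    return [round(lo + (p - pmin) * scale) for p in pixels]

assert contrast_stretch([100, 110, 120]) == [0, 128, 255]
```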

  16. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper will describe how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing and a more conventional image processing algorithm is provided and shows that the fuzzy image processing yields better accuracy than conventional image processing.

  17. Signal and Image Processing Operations

    1995-05-10

    VIEW is a software system for processing arbitrary multidimensional signals. It provides facilities for numerical operations, signal displays, and signal databasing. The major emphasis of the system is on the processing of time-sequences and multidimensional images. The system is designed to be both portable and extensible. It runs currently on UNIX systems, primarily SUN workstations.

  18. Differential morphology and image processing.

    PubMed

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
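
    A discrete distance transform is computed with exactly the kind of min-sum difference equation the paper analyzes: two sequential raster passes, each taking a minimum over shifted-plus-weighted neighbors. A city-block (4-connected) sketch:

```python
# Two-pass city-block distance transform as a min-sum difference equation:
# each pixel's value is the minimum of its own value and neighbor + weight,
# propagated forward (top-left to bottom-right) and then backward.

INF = 10 ** 9

def distance_transform(binary):
    """binary[y][x] == 1 marks feature pixels; returns city-block distances."""
    h, w = len(binary), len(binary[0])
    d = [[0 if binary[y][x] else INF for x in range(w)] for y in range(h)]
    # Forward pass: min-sum over top and left neighbors.
    for y in range(h):
        for x in range(w):
            if y > 0: d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0: d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    # Backward pass: min-sum over bottom and right neighbors.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1: d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1: d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d
```

With chamfer weights instead of 1 the same two passes approximate Euclidean distance, which is the weighted-distance solution of the eikonal equation discussed above.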

  19. Associative architecture for image processing

    NASA Astrophysics Data System (ADS)

    Adar, Rutie; Akerib, Avidan

    1997-09-01

    This article presents a new generation in parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the XiumTM-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can be programmed to implement a wide range of color image processing, computer vision and media processing functions in real time. The associative part of the chip is based on the patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. At only 33 MHz and a 0.6-micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string search operations per second. The fully programmable nature of the XiumTM-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.

  20. Digital processing of radiographic images

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

Techniques and software documentation for the digital enhancement of radiographs are presented. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both frequency domain and spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFTs) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
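The speed advantage of recursive filtering comes from its fixed cost per sample: a first-order recursive (IIR) filter needs one multiply-accumulate per output value, regardless of the equivalent kernel length, whereas FFT-based convolution pays for the transform at every pass. A minimal sketch (not the paper's matched filter):

```python
# First-order recursive smoothing filter: y[n] = a*x[n] + (1 - a)*y[n-1].
# One multiply-accumulate per sample, independent of the effective kernel
# length, versus FFT-based convolution. Illustrative sketch only.
def recursive_smooth(x, a=0.5):
    y = []
    prev = 0.0
    for v in x:
        prev = a * v + (1 - a) * prev
        y.append(prev)
    return y

print(recursive_smooth([1.0, 1.0, 1.0, 1.0], a=0.5))
# [0.5, 0.75, 0.875, 0.9375] -- exponential approach to the input level
```
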

  1. FITS Liberator: Image processing software

    NASA Astrophysics Data System (ADS)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.

  2. Seismic Imaging Processing and Migration

    2000-06-26

Salvo is a 3D, finite difference, prestack, depth migration code for parallel computers. It is also capable of processing 2D and poststack data. The code requires as input a seismic dataset, a velocity model, and a file of parameters that allows the user to select various options. The code uses this information to produce a seismic image. Some of the options available to the user include the application of various filters and imaging conditions. The code also incorporates phase encoding (patent applied for) to process multiple shots simultaneously.

  3. Fingerprint recognition using image processing

    NASA Astrophysics Data System (ADS)

    Dholay, Surekha; Mishra, Akassh A.

    2011-06-01

Fingerprint recognition addresses the difficult task of efficiently matching the image of a person's fingerprint against the fingerprints stored in a database. It is used in forensic science to help identify criminals and in the authentication of individuals, since a fingerprint is unique to each person. The present paper describes fingerprint recognition methods based on various edge detection techniques and shows how to recognize a fingerprint correctly from camera images. The described method requires no special device; a simple camera suffices, so the technique can also be used with a camera mobile phone. Factors affecting the process include poor illumination, noise disturbance, viewpoint dependence, climate factors, and imaging conditions; to account for them, various image enhancement techniques are applied to increase image quality and remove noise. The paper describes a technique of tracking the contour of the fingerprint image, applying edge detection within the contour, and then matching the edges found inside the contour.
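The edge detection step at the core of such a pipeline can be sketched with the classic 3x3 Sobel operator; the fragment below (an illustrative sketch, not the paper's method) computes the gradient magnitude at one pixel of a tiny image:

```python
# 3x3 Sobel gradient magnitude, a classic edge-detection step used in
# fingerprint ridge extraction pipelines. Pure-Python sketch on a tiny image.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_mag(img, y, x):
    gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    return (gx * gx + gy * gy) ** 0.5

# vertical step edge: strong horizontal gradient at the boundary column
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
print(sobel_mag(img, 1, 1))  # 36.0 -- large response on the edge
```
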

  4. Linear Algebra and Image Processing

    ERIC Educational Resources Information Center

    Allali, Mohamed

    2010-01-01

We use digital image processing (DIP) to enhance the teaching of linear algebra and make the course more visual and interesting. This visual approach of using technology to link linear algebra to DIP proves engaging and unexpected for students and faculty alike. (Contains 2 tables and 11 figures.)

  5. Linear algebra and image processing

    NASA Astrophysics Data System (ADS)

    Allali, Mohamed

    2010-09-01

We use digital image processing (DIP) to enhance the teaching of linear algebra and make the course more visual and interesting. This visual approach of using technology to link linear algebra to DIP proves engaging and unexpected for students and faculty alike.

  6. Concept Learning through Image Processing.

    ERIC Educational Resources Information Center

    Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

    This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

  7. Image processing applications in NDE

    SciTech Connect

    Morris, R.A.

    1980-01-01

Nondestructive examination (NDE) can be defined as a technique or collection of techniques that permits one to determine some property of a material or object without damaging the object. There are a large number of such techniques and most of them use visual imaging in one form or another. They vary from holographic interferometry, where displacements under stress are measured, to the visual inspection of an object's surface to detect cracks after penetrant has been applied. The use of image processing techniques on the images produced by NDE is relatively new and can be divided into three general categories: classical image enhancement, mensuration techniques, and quantitative sensitometry. An example is discussed of how image processing techniques are used to nondestructively and destructively test a product throughout its life cycle. The product that will be followed is the microballoon target used in the laser fusion program. The laser target is a small (50- to 100-μm dia) glass sphere with a typical wall thickness of 0.5 to 6 μm. The sphere may be used as is or may be given a number of coatings of any number of materials. The beads are mass produced by the millions and the first nondestructive test is to separate the obviously bad beads (broken or incomplete) from the good ones. After this has been done, the good beads must be inspected for sphericity and wall thickness uniformity. The microradiography of the uncoated glass bead is performed on a specially designed low-energy x-ray machine. The beads are mounted in a special jig and placed on a Kodak high resolution plate in a vacuum chamber that contains the x-ray source. The x-ray image is made with an energy less than 2 keV and the resulting images are then inspected at a magnification of 500 to 1000X. Some typical results are presented.

  8. Recent progress in 3-D imaging of sea freight containers

    SciTech Connect

    Fuchs, Theobald Schön, Tobias Sukowski, Frank; Dittmann, Jonas; Hanke, Randolf

    2015-03-31

The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea freight container takes several hours, which is of course too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without the legal complications, high time consumption, and risks for security personnel involved in a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms offers the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost, but as computational power steadily becomes cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.
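The abstract does not name a specific algorithm, but the prototype of this class of iterative few-view reconstruction methods is the Kaczmarz/ART update, in which each projection equation a·x = b nudges the current image estimate toward consistency. A minimal sketch on a toy system:

```python
# One sweep of the Kaczmarz / ART update, the prototype of iterative CT
# reconstruction: project the current estimate x onto the hyperplane of
# each measurement equation in turn. Illustrative sketch only.
def art_sweep(A, b, x, relax=1.0):
    for a_row, b_i in zip(A, b):
        dot = sum(a * xv for a, xv in zip(a_row, x))
        norm = sum(a * a for a in a_row)
        c = relax * (b_i - dot) / norm
        x = [xv + c * a for xv, a in zip(x, a_row)]
    return x

# tiny 2-pixel "image" with two projection rays (identity geometry)
A = [[1.0, 0.0], [0.0, 1.0]]
b = [3.0, 4.0]
x = [0.0, 0.0]
x = art_sweep(A, b, x)
print(x)  # [3.0, 4.0]
```

With realistic, overlapping rays the sweep is repeated until convergence; the per-iteration cost over millions of voxels is exactly the "high computational cost" the abstract notes.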

  9. Recent progress in 3-D imaging of sea freight containers

    NASA Astrophysics Data System (ADS)

    Fuchs, Theobald; Schön, Tobias; Dittmann, Jonas; Sukowski, Frank; Hanke, Randolf

    2015-03-01

The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea freight container takes several hours, which is of course too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without the legal complications, high time consumption, and risks for security personnel involved in a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms offers the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost, but as computational power steadily becomes cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.

  10. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  11. Concurrent Image Processing Executive (CIPE)

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1988-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3, Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules; (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube a data management method which distributes, redistributes, and tracks data set information was implemented.

  12. Image enhancement based on gamma map processing

    NASA Astrophysics Data System (ADS)

    Tseng, Chen-Yu; Wang, Sheng-Jyh; Chen, Yi-An

    2010-05-01

This paper proposes a novel image enhancement technique based on Gamma Map Processing (GMP). In this approach, a base gamma map is generated directly from the intensity image. A sequence of gamma map processing operations is then performed to generate a channel-wise gamma map. By mapping each pixel through its estimated gamma, the details, colorfulness, and sharpness of the original image are automatically improved. In addition, the dynamic range of the images can be virtually expanded.
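The core mapping step can be sketched as follows (an illustrative sketch, not the paper's GMP pipeline): each pixel, normalized to [0, 1], is raised to a spatially varying exponent taken from the gamma map, so gamma < 1 brightens shadows and gamma > 1 deepens highlights.

```python
# Per-pixel gamma mapping: each normalized pixel is raised to the exponent
# given by the corresponding entry of the gamma map. Sketch of the mapping
# step only; the gamma-map estimation itself is the paper's contribution.
def apply_gamma_map(img, gmap):
    return [[px ** g for px, g in zip(row_i, row_g)]
            for row_i, row_g in zip(img, gmap)]

img  = [[0.25, 0.25]]
gmap = [[0.5, 2.0]]   # brighten the left pixel, darken the right pixel
print(apply_gamma_map(img, gmap))  # [[0.5, 0.0625]]
```
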

  13. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.

  14. Automated segmentation of knee and ankle regions of rats from CT images to quantify bone mineral density for monitoring treatments of rheumatoid arthritis

    NASA Astrophysics Data System (ADS)

    Cruz, Francisco; Sevilla, Raquel; Zhu, Joe; Vanko, Amy; Lee, Jung Hoon; Dogdas, Belma; Zhang, Weisheng

    2014-03-01

    Bone mineral density (BMD) obtained from a CT image is an imaging biomarker used pre-clinically for characterizing the Rheumatoid arthritis (RA) phenotype. We use this biomarker in animal studies for evaluating disease progression and for testing various compounds. In the current setting, BMD measurements are obtained manually by selecting the regions of interest from three-dimensional (3-D) CT images of rat legs, which results in a laborious and low-throughput process. Combining image processing techniques, such as intensity thresholding and skeletonization, with mathematical techniques in curve fitting and curvature calculations, we developed an algorithm for quick, consistent, and automatic detection of joints in large CT data sets. The implemented algorithm has reduced analysis time for a study with 200 CT images from 10 days to 3 days and has improved the robust detection of the obtained regions of interest compared with manual segmentation. This algorithm has been used successfully in over 40 studies.
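The curvature calculation used for joint detection can be illustrated on a discretized centerline: the turning angle at each interior vertex is a discrete curvature measure, and local maxima are candidate joint locations. A sketch under that assumption (not the study's actual implementation):

```python
import math

# Turning angle at each interior vertex of a polyline: the discrete
# curvature measure used to flag candidate joints along a skeletonized
# bone centerline. Illustrative sketch only.
def turning_angle(p0, p1, p2):
    a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    a2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    d = a2 - a1
    while d > math.pi:          # wrap into (-pi, pi]
        d -= 2 * math.pi
    while d < -math.pi:
        d += 2 * math.pi
    return abs(d)

# a straight run followed by a sharp bend
path = [(0, 0), (1, 0), (2, 0), (2, 1)]
angles = [turning_angle(*path[i:i + 3]) for i in range(len(path) - 2)]
print(angles)  # zero on the straight run; pi/2 at the bend
```
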

  15. Applications Of Image Processing In Criminalistics

    NASA Astrophysics Data System (ADS)

    Krile, Thomas F.; Walkup, John F.; Barsallo, Adonis; Olimb, Hal; Tarng, Jaw-Horng

    1987-01-01

    A review of some basic image processing techniques for enhancement and restoration of images is given. Both digital and optical approaches are discussed. Fingerprint images are used as examples to illustrate the various processing techniques and their potential applications in criminalistics.

  16. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.

  17. Handbook on COMTAL's Image Processing System

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.

    1983-01-01

    An image processing system is the combination of an image processor with other control and display devices plus the necessary software needed to produce an interactive capability to analyze and enhance image data. Such an image processing system installed at NASA Langley Research Center, Instrument Research Division, Acoustics and Vibration Instrumentation Section (AVIS) is described. Although much of the information contained herein can be found in the other references, it is hoped that this single handbook will give the user better access, in concise form, to pertinent information and usage of the image processing system.

  18. Computers in Public Schools: Changing the Image with Image Processing.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  19. Image processing applied to laser cladding process

    SciTech Connect

    Meriaudeau, F.; Truchetet, F.

    1996-12-31

The laser cladding process, which consists of adding a melted powder to a substrate in order to improve or change the behavior of the material against corrosion, fatigue, and so on, involves many parameters. In order to produce good tracks, some parameters need to be controlled during the process. The authors present here a low-cost system using two CCD matrix cameras. One camera provides surface temperature measurements while the other gives information on the powder distribution or the geometric characteristics of the tracks. The surface temperature (via the Beer-Lambert law) enables one to detect variations in the mass feed rate; using such a system the authors are able to detect fluctuations of 2 to 3 g/min in the mass flow rate. The other camera gives information related to the powder distribution: a simple algorithm applied to the data acquired from the CCD matrix camera allows them to see very weak fluctuations in both gas flows (carrier or shielding gas). During the process, this camera is also used to perform geometric measurements. The height and the width of the track are obtained in real time and enable the operator to recover information related to process parameters such as the processing speed and the mass flow rate. The authors display the results provided by their system in order to enhance the efficiency of the laser cladding process. The conclusion is dedicated to a summary of the presented work and expectations for the future.

  20. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

Recent advances in computer-generated images (CGI) have been used in commercial and industrial photography, providing broad scope in product advertising. Mixing real world images with images rendered from virtual space software shows a more or less visible mismatch in image quality between the two. Rendered images are produced by software whose quality is limited only by the output resolution. Real world images are taken with cameras subject to image degradation factors such as residual lens aberrations, diffraction, the sensor's low-pass anti-aliasing filter, color pattern demosaicing, etc. The effect of all these image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real world image qualities. The system MTF is determined by the slanted-edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different regions of the final image.
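The degradation step reduces to building a normalized Gaussian kernel from the measured PSF width and convolving the rendered image with it. A 1-D sketch with an assumed sigma (the real work derives it from the slanted-edge MTF measurement):

```python
import math

# Degrade an ideal rendered edge with a Gaussian approximation of the
# taking system's PSF: build a normalized 1-D kernel and convolve.
# Illustrative sketch; sigma is an assumed value, not a measured one.
def gaussian_kernel(sigma, radius):
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]          # normalize to unit sum

def convolve(signal, kernel):
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)  # clamp at borders
            acc += kv * signal[idx]
        out.append(acc)
    return out

edge = [0.0] * 5 + [1.0] * 5           # ideal rendered step edge
blurred = convolve(edge, gaussian_kernel(sigma=1.0, radius=2))
print(blurred)                          # edge softened to match the real PSF
```

In 2-D the same kernel is applied separably along rows and columns.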

  1. Image Processing in Intravascular OCT

    NASA Astrophysics Data System (ADS)

    Wang, Zhao; Wilson, David L.; Bezerra, Hiram G.; Rollins, Andrew M.

    Coronary artery disease is the leading cause of death in the world. Intravascular optical coherence tomography (IVOCT) is rapidly becoming a promising imaging modality for characterization of atherosclerotic plaques and evaluation of coronary stenting. OCT has several unique advantages over alternative technologies, such as intravascular ultrasound (IVUS), due to its better resolution and contrast. For example, OCT is currently the only imaging modality that can measure the thickness of the fibrous cap of an atherosclerotic plaque in vivo. OCT also has the ability to accurately assess the coverage of individual stent struts by neointimal tissue over time. However, it is extremely time-consuming to analyze IVOCT images manually to derive quantitative diagnostic metrics. In this chapter, we introduce some computer-aided methods to automate the common IVOCT image analysis tasks.

  2. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  3. Non-linear Post Processing Image Enhancement

    NASA Technical Reports Server (NTRS)

    Hunt, Shawn; Lopez, Alex; Torres, Angel

    1997-01-01

A non-linear filter for image post-processing based on the feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean square non-linear filter are given, along with examples of the high-frequency recovery and the statistical properties of the filter.

  4. Quantitative image processing in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  5. Water surface capturing by image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  6. Image processing for drawing recognition

    NASA Astrophysics Data System (ADS)

    Feyzkhanov, Rustem; Zhelavskaya, Irina

    2014-03-01

The task of recognizing the edges of rectangular structures is well known. Still, almost all existing approaches work with static images and place no limit on processing time. We propose applying homography estimation to the video stream obtained from a webcam, and we present an algorithm that can be used successfully for this kind of application. One of its main use cases is the recognition of drawings made by a person on a piece of paper held in front of the webcam.
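Once the four corners of the paper are found, a 3x3 homography maps points between the camera view and the rectified sheet. The fragment below (an illustrative sketch with an assumed matrix, not the authors' estimator) shows the projective mapping of a single point in homogeneous coordinates:

```python
# Apply a 3x3 homography H to an image point (x, y) using homogeneous
# coordinates: the projective map used to rectify a sheet of paper seen
# by a webcam. Sketch only; H here is an assumed example matrix.
def apply_homography(H, x, y):
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w              # divide out the projective scale

# pure scaling homography: doubles both coordinates
H = [[2.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, 3.0, 4.0))  # (6.0, 8.0)
```
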

  7. CT Image Processing Using Public Digital Networks

    PubMed Central

    Rhodes, Michael L.; Azzawi, Yu-Ming; Quinn, John F.; Glenn, William V.; Rothman, Stephen L.G.

    1984-01-01

    Nationwide commercial computer communication is now commonplace for those applications where digital dialogues are generally short and widely distributed, and where bandwidth does not exceed that of dial-up telephone lines. Image processing using such networks is prohibitive because of the large volume of data inherent to digital pictures. With a blend of increasing bandwidth and distributed processing, network image processing becomes possible. This paper examines characteristics of a digital image processing service for a nationwide network of CT scanner installations. Issues of image transmission, data compression, distributed processing, software maintenance, and interfacility communication are also discussed. Included are results that show the volume and type of processing experienced by a network of over 50 CT scanners for the last 32 months.

  8. Parallel digital signal processing architectures for image processing

    NASA Astrophysics Data System (ADS)

    Kshirsagar, Shirish P.; Hartley, David A.; Harvey, David M.; Hobson, Clifford A.

    1994-10-01

This paper describes research into a high-speed image processing system using parallel digital signal processors for the processing of electro-optic images. The objective of the system is to reduce the processing time of non-contact inspection problems, including industrial and medical applications. A single processor cannot deliver the processing power these applications require; hence, a MIMD system was designed and constructed to enable fast processing of electro-optic images. The Texas Instruments TMS320C40 digital signal processor is used due to its high-speed floating point CPU and its support for the parallel processing environment. A custom-designed VISION bus is provided to transfer images between processors. The system is being applied to solder joint inspection of high-technology printed circuit boards.

  9. Interactive image processing in swallowing research

    NASA Astrophysics Data System (ADS)

    Dengel, Gail A.; Robbins, JoAnne; Rosenbek, John C.

    1991-06-01

    Dynamic radiographic imaging of the mouth, larynx, pharynx, and esophagus during swallowing is used commonly in clinical diagnosis, treatment and research. Images are recorded on videotape and interpreted conventionally by visual perceptual methods, limited to specific measures in the time domain and binary decisions about the presence or absence of events. An image processing system using personal computer hardware and original software has been developed to facilitate measurement of temporal, spatial and temporospatial parameters. Digitized image sequences derived from videotape are manipulated and analyzed interactively. Animation is used to preserve context and increase efficiency of measurement. Filtering and enhancement functions heighten image clarity and contrast, improving visibility of details which are not apparent on videotape. Distortion effects and extraneous head and body motions are removed prior to analysis, and spatial scales are controlled to permit comparison among subjects. Effects of image processing on intra- and interjudge reliability and research applications are discussed.

  10. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  11. Nonlinear Optical Image Processing with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Deiss, Ron (Technical Monitor)

    1994-01-01

    The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
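The principle rests on a simple identity: a logarithmic response turns multiplicative corruption into additive corruption, log(s·n) = log(s) + log(n), which a linear filter in the Fourier plane can then remove. A numerical sketch of the identity (not a model of the film itself):

```python
import math

# The logarithmic amplitude response converts multiplicative (speckle-like)
# noise into additive noise: log(s * n) = log(s) + log(n). After this
# conversion, linear Fourier-plane filtering applies. Numerical sketch only.
signal, noise = 4.0, 1.5
observed = signal * noise                      # multiplicative corruption
additive_form = math.log(signal) + math.log(noise)
print(math.isclose(math.log(observed), additive_form))  # True
```

This is the same homomorphic-filtering idea used digitally; here the film performs the logarithm optically.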

  12. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in lighting levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
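The band-pass, edge-enhancing response described above resembles a center-surround (difference-of-Gaussians) filter. A minimal sketch, with illustrative sigmas that are not taken from the paper:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    g = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def convolve_same(img, kernel):
    # Naive zero-padded 'same' convolution, kept dependency-free.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

def dog_bandpass(img, sigma_center=1.0, sigma_surround=2.0, size=9):
    # Excitatory center minus inhibitory surround: a band-pass response.
    center = convolve_same(img, gaussian_kernel(size, sigma_center))
    surround = convolve_same(img, gaussian_kernel(size, sigma_surround))
    return center - surround

img = np.zeros((32, 32))
img[:, 16:] = 1.0                 # vertical step edge
response = dog_bandpass(img)
print(int(np.argmax(np.abs(response[16]))))  # peak magnitude lies near column 16
```

Flat regions produce near-zero response while the step edge produces a biphasic peak, which is the edge-enhancement plus dynamic-range-compression behavior the abstract attributes to image-plane processing.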

  13. Digital Image Processing in Private Industry.

    ERIC Educational Resources Information Center

    Moore, Connie

    1986-01-01

    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  14. Command Line Image Processing System (CLIPS)

    NASA Astrophysics Data System (ADS)

    Fleagle, S. R.; Meyers, G. L.; Kulinski, R. G.

    1985-06-01

An interactive image processing language (CLIPS) has been developed for use in an image processing environment. CLIPS uses a simple syntax with extensive on-line help to allow even the most naive user to perform complex image processing tasks. In addition, CLIPS functions as an interpretive language complete with data structures and program control statements. CLIPS statements fall into one of three categories: command, control, and utility statements. Command statements are expressions comprised of intrinsic functions and/or arithmetic operators which act directly on image or user-defined data. Some examples of CLIPS intrinsic functions are ROTATE, FILTER, and EXPONENT. Control statements allow a structured programming style through the use of statements such as DO-WHILE and IF-THEN-ELSE. Utility statements such as DEFINE, READ, and WRITE support I/O and user-defined data structures. Since CLIPS uses a table-driven parser, it is easily adapted to any environment. New commands may be added to CLIPS by writing the procedure in a high-level language such as Pascal or FORTRAN and inserting the syntax for that command into the table. However, CLIPS was designed by incorporating most imaging operations into the language as intrinsic functions. CLIPS allows the user to generate new procedures easily with these powerful functions in an interactive or off-line fashion using a text editor. The fact that CLIPS can be used to generate complex procedures quickly or perform basic image processing functions interactively makes it a valuable tool in any image processing environment.
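The table-driven dispatch CLIPS describes can be sketched in a few lines: adding a command means adding a table entry, not changing the parser. The handler names and behavior below are hypothetical placeholders, not CLIPS internals.

```python
import shlex

# Handlers are hypothetical stand-ins for intrinsic functions like ROTATE.
def rotate(img, deg):
    return f"rotate({img},{deg})"

def filt(img, kind):
    return f"filter({img},{kind})"

COMMAND_TABLE = {
    "ROTATE": (rotate, 2),   # (handler, expected argument count)
    "FILTER": (filt, 2),
}

def execute(line):
    """Parse one command line by table lookup; the parser itself never
    changes when a new command is registered in COMMAND_TABLE."""
    tokens = shlex.split(line)
    name, args = tokens[0].upper(), tokens[1:]
    handler, arity = COMMAND_TABLE[name]
    if len(args) != arity:
        raise ValueError(f"{name} expects {arity} arguments")
    return handler(*args)

print(execute("ROTATE img1 90"))   # rotate(img1,90)
```

Because the syntax lives in the table, a new intrinsic written in another language only needs its name and arity registered, which is the portability property the abstract emphasizes.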

  15. Image processing technique based on image understanding architecture

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2000-12-01

The effectiveness of image applications depends directly on their ability to resolve ambiguity and uncertainty in real images. That requires tight integration of low-level image processing with high-level knowledge-based reasoning, which is the core of the image understanding problem. This article presents a generic computational framework for the solution of the image understanding problem: the Spatial Turing Machine. Instead of a tape of symbols, it works with hierarchical networks dually represented as discrete and continuous structures. The dual representation provides a natural transformation of continuous image information into discrete structures, making it available for analysis. Such structures are data and algorithms at the same time and can perform the graph and diagrammatic operations that are the basis of intelligence. They can create derivative structures that play the role of context, or 'measurement device,' giving the ability to analyze and to run top-down algorithms. Symbols emerge naturally there, and symbolic operations work in combination with new simplified methods of computational intelligence. This makes images and scenes self-describing and provides flexible ways of resolving uncertainty. Classification of images truly invariant to any transformation can be done by matching their derivative structures. The proposed architecture does not require supercomputers, opening the way to new image technologies.

  16. Fingerprint image enhancement by differential hysteresis processing.

    PubMed

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

A new method to enhance defective fingerprint images through digital image processing tools is presented in this work. When the fingerprints have been taken without any care, blurred and in some cases mostly illegible, as in the case presented here, their classification and comparison become nearly impossible. A combination of spatial domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve this kind of image. This set of filtering methods proved to be satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from Policia Federal Argentina and the EAAF have validated these results. PMID:15062948

  18. Image-processing with augmented reality (AR)

    NASA Astrophysics Data System (ADS)

    Babaei, Hossein R.; Mohurutshe, Pagiel L.; Habibi Lashkari, Arash

    2013-03-01

In this project, the aim is to discuss and articulate the intent to create an image-based Android application. The basis of this study is real-time image detection and processing, a convenient measure that allows users to gain information on imagery right on the spot. Past studies have revealed attempts to create image-based applications, but these have only gone as far as creating image finders that work with images already stored in some form of database. The Android platform is rapidly spreading around the world and provides by far the most interactive and technical platform for smart-phones, which is why it was important to base the study and research on it. Augmented reality allows the user to manipulate the data and add enhanced features (video, GPS tags) to the image taken.

  19. Corn tassel detection based on image processing

    NASA Astrophysics Data System (ADS)

    Tang, Wenbing; Zhang, Yane; Zhang, Dongxing; Yang, Wei; Li, Minzan

    2012-01-01

Machine vision has been widely applied in facility agriculture and plays an important role in obtaining environment information. In this paper, the application of image processing to recognize and locate corn tassels for a corn detasseling machine is studied, providing automated guidance information for the actual production of corn emasculation operations. According to the color characteristics of corn tassels, image processing techniques were applied to identify tassels in the HSI color space, image segmentation was applied to extract the tassel regions, and the tassel features were analyzed and extracted. First, a series of preprocessing procedures were performed. Then, an image segmentation algorithm based on the HSI color space was developed to extract corn tassels from the background, and a region growing method was proposed to recognize the tassels. The results show that this method is effective for extracting corn tassel parts from the collected pictures and can be used to obtain tassel location information; this result provides a theoretical basis for an intelligent corn detasseling machine.
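The HSI thresholding step can be sketched as follows. The conversion is the standard RGB-to-HSI formula, while the hue and intensity bands are illustrative guesses, not the paper's calibrated values.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an (H, W, 3) float RGB image in [0, 1] to hue, saturation,
    intensity (hue normalized to [0, 1])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - rgb.min(axis=-1) / np.maximum(i, 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2 * np.pi - theta) / (2 * np.pi)
    return h, s, i

def segment_tassel(rgb, h_lo=0.08, h_hi=0.20, i_min=0.5):
    # Threshold bands are hypothetical, chosen for a yellowish target.
    h, s, i = rgb_to_hsi(rgb)
    return (h >= h_lo) & (h <= h_hi) & (i >= i_min)

img = np.zeros((4, 4, 3))
img[1:3, 1:3] = [0.9, 0.8, 0.2]   # a yellowish "tassel" patch
mask = segment_tassel(img)
print(mask.sum())  # 4
```

A region growing pass, as the paper proposes, would then expand from seeds inside this mask to recover the full tassel region.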

  20. Overview on METEOSAT geometrical image data processing

    NASA Technical Reports Server (NTRS)

    Diekmann, Frank J.

    1994-01-01

Digital images acquired from the geostationary METEOSAT satellites are processed and disseminated at ESA's European Space Operations Centre in Darmstadt, Germany. Their scientific value depends mainly on their radiometric quality and geometric stability. This paper gives an overview of the image processing activities performed at ESOC, concentrating on geometrical restoration and quality evaluation. The performance of the rectification process for the various satellites over the past years is presented, and the impacts of external events, for instance the Pinatubo eruption in 1991, are explained. Special developments in both hardware and software, necessary to cope with demanding tasks such as new image resampling or correcting for spacecraft anomalies, are presented as well. The rotating lens of MET-5, causing severe geometrical image distortions, is an example of the latter.

  1. Real-time optical image processing techniques

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1988-01-01

Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse-frequency-modulated halftone screens and the modification of micro-channel spatial light modulators (MSLMs). The MSLMs are modified via the Fabry-Perot method to achieve the high gamma required for nonlinear operation. Real-time nonlinear processing was performed using the halftone screen and the MSLM. The experiments showed the effectiveness of the thresholding and also showed the need for a higher space-bandwidth product (SBP) for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low-frequency, low-bias mode. Cascading two LCLVs should also provide enough gamma for nonlinear processing; in this case, the SBP of the LCLV is sufficient, but the uniformity of the LCLV needs improvement. Further applications include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for real-time dynamic range compression of an input image using GaAs photorefractive crystals is reported. Finally, a new architecture for nonlinear multiple-sensory neural processing has been suggested.

  2. Bistatic SAR: Signal Processing and Image Formation.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.

    2014-10-01

This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture radar (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the process steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and finally, image formation. Various plots and values are shown at most steps to illustrate the processing for a bistatic COSMO SkyMed collection gathered on June 10, 2013 on Kirtland Air Force Base, New Mexico.

  3. Palm print image processing with PCNN

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Zhao, Xianhong

    2010-08-01

Pulse coupled neural networks (PCNN) are based on Eckhorn's model of the cat visual cortex and imitate mammalian visual processing, while the palm print has long served as a personal biometric feature. This inspired the combination of the two: a novel method for palm print processing is proposed, which includes pre-processing and feature extraction of the palm print image using PCNN; the extracted features are then used for identification. Our experiment shows that a verification rate of 87.5% can be achieved under ideal conditions. We also find that the verification rate decreases due to rotation or shift of the palm.
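The PCNN dynamics can be sketched with a common unit-linked simplification: internal activity is the stimulus modulated by neighbors' pulses, firing occurs where activity exceeds a threshold, and the threshold decays between firings and jumps after one. Parameters below are illustrative, not Eckhorn's or the paper's.

```python
import numpy as np

def pcnn_fire_maps(img, steps=5, beta=0.2, v_theta=20.0, decay=0.7):
    """Simplified (unit-linked) PCNN: returns one binary pulse map per step."""
    theta = np.full(img.shape, 1.0)   # dynamic thresholds
    y = np.zeros(img.shape)           # pulse outputs
    kernel = np.array([[0.5, 1, 0.5], [1, 0, 1], [0.5, 1, 0.5]])
    maps = []
    for _ in range(steps):
        # linking input: weighted sum of the neighbours' pulses
        padded = np.pad(y, 1)
        link = sum(kernel[a, b] * padded[a:a + img.shape[0], b:b + img.shape[1]]
                   for a in range(3) for b in range(3))
        u = img * (1.0 + beta * link)         # internal activity
        y = (u > theta).astype(float)         # fire where activity beats theta
        theta = decay * theta + v_theta * y   # decay, plus refractory jump
        maps.append(y.copy())
    return maps

img = np.array([[0.9, 0.9, 0.1],
                [0.9, 0.9, 0.1],
                [0.1, 0.1, 0.1]])
maps = pcnn_fire_maps(img)
print(maps[1].sum())  # 4.0 (the four bright pixels fire on step 2)
```

The sequence of pulse maps groups pixels of similar intensity into synchronized firings, which is what makes PCNN output usable as a texture or segmentation feature for palm prints.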

  4. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

Ultrasound (US) medical imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold is effective at reducing speckle but also induces blurring of the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal to noise ratio (PSNR), normalized cross correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by AMMA hospital radiology labs at Vijayawada, India. PMID:26697285
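The first-fold idea of block-based thresholding of wavelet coefficients can be sketched with a hand-rolled one-level Haar transform. The per-block threshold below uses a common robust noise estimate (median absolute coefficient / 0.6745), not the paper's adaptive NDF rule.

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = (x[0::2] + x[1::2]) / 2.0
    d = (x[0::2] - x[1::2]) / 2.0
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.zeros((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def block_soft_threshold(coeffs, block=8, factor=1.0):
    """Soft-threshold each block with a threshold from its own statistics."""
    out = np.empty_like(coeffs)
    for i in range(0, coeffs.shape[0], block):
        for j in range(0, coeffs.shape[1], block):
            c = coeffs[i:i + block, j:j + block]
            t = factor * np.median(np.abs(c)) / 0.6745  # robust noise estimate
            out[i:i + block, j:j + block] = np.sign(c) * np.maximum(np.abs(c) - t, 0)
    return out

rng = np.random.default_rng(1)
noisy = np.ones((32, 32)) + 0.1 * rng.standard_normal((32, 32))
ll, lh, hl, hh = haar2d(noisy)
denoised = ihaar2d(ll, *(block_soft_threshold(c) for c in (lh, hl, hh)))
print(denoised.shape)  # (32, 32)
```

The paper's second fold would then fuse coefficients of the original and thresholded images around object boundaries to undo the blurring this step introduces.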

  5. Transaction recording in medical image processing

    NASA Astrophysics Data System (ADS)

    Riedel, Christian H.; Ploeger, Andreas; Onnasch, Dietrich G. W.; Mehdorn, Hubertus M.

    1999-07-01

In medical image processing, original image data on archive servers must not be modified directly. On the other hand, images from read-only devices like CD-ROM cannot be changed and saved on the same storage medium. In both cases the modified data have to be stored as a second version, and large amounts of storage volume are needed. We avoid these problems by using a program which records only the transactions applied to images. Each transaction is stored and used for further utilization and for renewed submission of the modified data. Conventionally, every time an image is viewed or printed, the modified version has to be saved in addition to the recorded data, either automatically or by the user. Compared to these approaches, which not only squander storage space but are also time consuming, our program has the following advantages: First, the original image data, which may not be modified, are protected against manipulation. Second, only small amounts of storage volume and network bandwidth are needed. Third, approved image operations can be automated by macros derived from transaction recordings. Finally, operations on the original data can always be controlled and traced back. As the handling of images gets easier with this concept, the security of the original image data is guaranteed.
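The transaction-recording idea can be sketched as an operation log that is replayed against the untouched original; what gets saved is the tiny log, not a second copy of the pixels. Operation names and the list-of-pixels "image" below are hypothetical.

```python
import json

# Illustrative image operations; a real system would register many more.
OPS = {
    "invert": lambda img, maxval=255: [maxval - p for p in img],
    "window": lambda img, lo, hi: [min(max(p, lo), hi) for p in img],
}

class TransactionRecorder:
    """Record operations instead of saving modified image copies.

    The original pixels are never touched; a saved session is just the
    ordered list of operations, replayed on demand (or as a macro).
    """
    def __init__(self):
        self.log = []

    def apply(self, image, op, **params):
        self.log.append({"op": op, "params": params})
        return OPS[op](image, **params)

    def replay(self, image):
        for entry in self.log:
            image = OPS[entry["op"]](image, **entry["params"])
        return image

    def save(self):
        return json.dumps(self.log)   # tiny compared with pixel data

rec = TransactionRecorder()
shown = rec.apply([0, 100, 255], "invert")
shown = rec.apply(shown, "window", lo=10, hi=200)
print(rec.replay([0, 100, 255]))  # [200, 155, 10]
```

Replaying the saved log against the read-only original reproduces the viewed result exactly, which is also what makes every operation auditable.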

  6. Image Processing Application for Cognition (IPAC) - Traditional and Emerging Topics in Image Processing in Astronomy (Invited)

    NASA Astrophysics Data System (ADS)

    Pesenson, M.; Roby, W.; Helou, G.; McCollum, B.; Ly, L.; Wu, X.; Laine, S.; Hartley, B.

    2008-08-01

    A new application framework for advanced image processing for astronomy is presented. It implements standard two-dimensional operators, and recent developments in the field of non-astronomical image processing (IP), as well as original algorithms based on nonlinear partial differential equations (PDE). These algorithms are especially well suited for multi-scale astronomical images since they increase signal to noise ratio without smearing localized and diffuse objects. The visualization component is based on the extensive tools that we developed for Spitzer Space Telescope's observation planning tool Spot and archive retrieval tool Leopard. It contains many common features, combines images in new and unique ways and interfaces with many astronomy data archives. Both interactive and batch mode processing are incorporated. In the interactive mode, the user can set up simple processing pipelines, and monitor and visualize the resulting images from each step of the processing stream. The system is platform-independent and has an open architecture that allows extensibility by addition of plug-ins. This presentation addresses astronomical applications of traditional topics of IP (image enhancement, image segmentation) as well as emerging new topics like automated image quality assessment (QA) and feature extraction, which have potential for shaping future developments in the field. Our application framework embodies a novel synergistic approach based on integration of image processing, image visualization and image QA (iQA).
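A representative nonlinear-PDE smoother of the kind described, i.e. one that raises signal-to-noise ratio without smearing localized objects, is edge-preserving diffusion in the Perona-Malik family; the abstract does not specify the framework's own equations, so this is a sketch of the general technique.

```python
import numpy as np

def perona_malik(img, steps=20, kappa=0.1, dt=0.2):
    """Edge-preserving nonlinear diffusion (a Perona-Malik sketch)."""
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conductance: small near edges
    u = img.astype(float).copy()
    for _ in range(steps):
        # differences toward each neighbour, with zero flux at the borders
        dn = np.vstack([u[1:], u[-1:]]) - u
        ds = np.vstack([u[:1], u[:-1]]) - u
        de = np.hstack([u[:, 1:], u[:, -1:]]) - u
        dw = np.hstack([u[:, :1], u[:, :-1]]) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(2)
step = np.zeros((32, 32))
step[:, 16:] = 1.0                       # a sharp boundary to preserve
noisy = step + 0.05 * rng.standard_normal(step.shape)
smoothed = perona_malik(noisy)
print(smoothed.shape)  # (32, 32)
```

Because the conductance collapses where gradients are large, noise in the flat halves is smoothed away while the step itself diffuses almost not at all.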

  7. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended, but the analysis can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  8. Fundamental concepts of digital image processing

    SciTech Connect

    Twogood, R.E.

    1983-03-01

The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  9. Fundamental Concepts of Digital Image Processing

    DOE R&D Accomplishments Database

    Twogood, R. E.

    1983-03-01

The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  10. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

A new hardware approach to the implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after the raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  11. A Pipeline Tool for CCD Image Processing

    NASA Astrophysics Data System (ADS)

    Bell, Jon F.; Young, Peter J.; Roberts, William H.; Sebo, Kim M.

MSSSO is part of a collaboration developing a wide field imaging CCD mosaic (WFI). As part of this project, we have developed a GUI-based pipeline tool that is an integrated part of MSSSO's CICADA data acquisition environment and processes CCD FITS images as they are acquired. The tool is also designed to run as a stand-alone program to process previously acquired data. IRAF tasks are used as the central engine, including the new NOAO mscred package for processing multi-extension FITS files. The STScI OPUS pipeline environment may be used to manage data and process scheduling. The Motif GUI was developed using SUN Visual Workshop. C++ classes were written to facilitate launching of IRAF and OPUS tasks. While this first version implements calibration processing up to and including flat field corrections, there is scope to extend it to other processing.

  12. Image processing of angiograms: A pilot study

    NASA Technical Reports Server (NTRS)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  13. Image gathering and processing - Information and fidelity

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Halyo, N.; Samms, R. W.; Stacy, K.

    1985-01-01

In this paper we formulate and use information and fidelity criteria to assess image gathering and processing, combining optical design with image-forming and edge-detection algorithms. The optical design of the image-gathering system revolves around the relationship among sampling passband, spatial response, and signal-to-noise ratio (SNR). Our formulations of information, fidelity, and optimal (Wiener) restoration account for the insufficient sampling (i.e., aliasing) common in image gathering as well as for the blurring and noise that conventional formulations account for. Performance analyses and simulations for ordinary optical-design constraints and random scenes indicate that (1) different image-forming algorithms prefer different optical designs; (2) informationally optimized designs maximize the robustness of optimal image restorations and lead to the highest-spatial-frequency channel (relative to the sampling passband) for which edge detection is reliable (if the SNR is sufficiently high); and (3) combining the informationally optimized design with a 3 by 3 lateral-inhibitory image-plane-processing algorithm leads to a spatial-response shape that approximates the optimal edge-detection response of (Marr's model of) human vision and thus reduces the data preprocessing and transmission required for machine vision.
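The optimal (Wiener) restoration mentioned above can be sketched in the frequency domain; the noise-to-signal power ratio is assumed known here, and the paper's aliasing treatment is ignored for simplicity.

```python
import numpy as np

def wiener_restore(blurred, psf, noise_power, signal_power=1.0):
    """Frequency-domain Wiener restoration: W = H* / (|H|^2 + N/S).

    The N/S term tempers the inverse filter where |H| is small,
    limiting noise amplification at poorly transferred frequencies.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    nsr = noise_power / signal_power
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.fft.ifft2(W * G).real

# Toy example: blur a bright point with a 3x3 box PSF, then restore it.
psf = np.ones((3, 3)) / 9.0
img = np.zeros((16, 16))
img[8, 8] = 9.0
blurred = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)).real
restored = wiener_restore(blurred, psf, noise_power=1e-3)
print(restored.shape)  # (16, 16)
```

With N/S set to zero this degenerates to a plain inverse filter, which is why the paper's informationally optimized designs matter: they keep |H| away from zero over the sampling passband, making the restoration robust.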

  14. Image processing for the Arcetri Solar Archive

    NASA Astrophysics Data System (ADS)

    Centrone, M.; Ermolli, I.; Giorgi, F.

The modelling recently developed to reconstruct with high accuracy the measured Total Solar Irradiance (TSI) variations, based on semi-empirical atmosphere models and the observed distribution of solar magnetic regions, can also be applied to construct the TSI variations back in time, making use of observations stored in several historic photographic archives. However, the analysis of images obtained from these archives is not a straightforward task, because the images suffer from several defects originating in the acquisition techniques and the data storage. In this paper we summarize the processing applied to identify solar features in the images obtained by digitization of the Arcetri solar archive.

  15. CCD architecture for spacecraft SAR image processing

    NASA Technical Reports Server (NTRS)

    Arens, W. E.

    1977-01-01

    A real-time synthetic aperture radar (SAR) image processing architecture amenable to future on-board spacecraft applications is currently under development. Using state-of-the-art charge-coupled device (CCD) technology, low cost and power are inherent features. Other characteristics include the ability to reprogram correlation reference functions, correct for range migration, and compensate for antenna beam pointing errors on the spacecraft in real time. The first spaceborne demonstration is scheduled to be flown as an experiment on a 1982 Shuttle imaging radar mission (SIR-B). This paper describes the architecture and implementation characteristics of this initial spaceborne CCD SAR image processor.

  16. Industrial Holography Combined With Image Processing

    NASA Astrophysics Data System (ADS)

    Schorner, J.; Rottenkolber, H.; Roid, W.; Hinsch, K.

    1988-01-01

Holographic test methods have become a valuable tool for the engineer in research and development. In the field of non-destructive quality control, holographic test equipment is now also accepted for tests within the production line. Producers of aircraft tyres, for example, use holographic tests to back the guarantee of their tyres; together with image processing, the whole test cycle is automated, and defects within the tyre are found automatically and listed in a printed report. The power engine industry uses holographic vibration tests to optimize its constructions. In the plastics industry, tanks, wheels, seats and fans are tested holographically to find the optimum shape. The automotive industry makes holography a tool for noise reduction. Instant holography and image processing techniques for quantitative analysis have led to economic application of holographic test methods. New developments of holographic units in combination with image processing are presented.

  17. Support Routines for In Situ Image Processing

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect of these programs is that they are integrated into the large in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: Generates a linearized, epipolar-aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlations. (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images like long-baseline pairs or those from an articulating arm camera. (4) marscoordtrans: Translates mosaic coordinates from one form into another. (5) marsdispcompare: Checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other. (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image. For example, a right-eye image could be transformed to look like it was taken from the left eye via this program. (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the

  18. Processing infrared images of aircraft lapjoints

    NASA Technical Reports Server (NTRS)

    Syed, Hazari; Winfree, William P.; Cramer, K. E.

    1992-01-01

    Techniques for processing IR images of aging aircraft lapjoint data are discussed. Attention is given to a technique for detecting disbonds in aircraft lapjoints which clearly delineates the disbonded region from the bonded regions. The technique performs poorly on unpainted aircraft skin surfaces, but this limitation can be overcome by using a self-adhering contact sheet. Neural network analysis on raw temperature data has been shown to be an effective tool for visualization of images. Numerical simulation results show the above processing technique to be an effective tool in delineating the disbonds.

  19. FLIPS: Friendly Lisp Image Processing System

    NASA Astrophysics Data System (ADS)

    Gee, Shirley J.

    1991-08-01

    The Friendly Lisp Image Processing System (FLIPS) is the interface to Advanced Target Detection (ATD), a multi-resolutional image analysis system developed by Hughes in conjunction with the Hughes Research Laboratories. Both menu- and graphics-driven, FLIPS enhances system usability by supporting the interactive nature of research and development. Although much progress has been made, fully automated image understanding technology that is both robust and reliable is not a reality. In situations where highly accurate results are required, skilled human analysts must still verify the findings of these systems. Furthermore, the systems often require processing times several orders of magnitude greater than that needed by veteran personnel to analyze the same image. The purpose of FLIPS is to facilitate the ability of an image analyst to take statistical measurements on digital imagery in a timely fashion, a capability critical in research environments where a large percentage of time is expended in algorithm development. In many cases, this entails minor modifications or code tinkering. Without a well-developed man-machine interface, throughput is unduly constricted. FLIPS provides mechanisms which support rapid prototyping for ATD. This paper examines the ATD/FLIPS system. The philosophy of ATD in addressing image understanding problems is described, and the capabilities of FLIPS are discussed, along with a description of the interaction between ATD and FLIPS. Finally, an overview of current plans for the system is outlined.

  20. Onboard Image Processing System for Hyperspectral Sensor.

    PubMed

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-09-25

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance in terms of speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to a reduction in onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost.
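    The Golomb-Rice entropy coder mentioned above can be illustrated with a minimal sketch. The bit-string representation, the fixed parameter k, and the zigzag residual mapping are illustrative assumptions; the flight implementation is adaptive and hardware-specific.

    ```python
    def rice_encode(n: int, k: int) -> str:
        """Rice code for a non-negative integer: unary quotient, then k remainder bits."""
        q, r = n >> k, n & ((1 << k) - 1)
        return "1" * q + "0" + (format(r, "b").zfill(k) if k else "")

    def rice_decode(bits: str, k: int) -> int:
        """Invert rice_encode: count the unary run, then read k remainder bits."""
        q = bits.index("0")                        # length of the unary run
        r = int(bits[q + 1 : q + 1 + k] or "0", 2)
        return (q << k) | r

    def zigzag(e: int) -> int:
        """Map a signed prediction residual to a non-negative integer for coding."""
        return 2 * e if e >= 0 else -2 * e - 1
    ```

    In a predictive coder like FELICS, each pixel's prediction residual would be zigzag-mapped and then Rice-coded; the parameter k is chosen adaptively from local statistics.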

  3. Processing Images of Craters for Spacecraft Navigation

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

    2009-01-01

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
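    Step 4 (ellipse fitting) can be sketched as an ordinary least-squares conic fit to the grouped edge points. This is a simplified stand-in, not the algorithm's actual fitting routine.

    ```python
    import numpy as np

    def fit_conic(x, y):
        """Least-squares fit of a*x^2 + b*xy + c*y^2 + d*x + e*y = 1
        to a group of crater edge points; returns (a, b, c, d, e)."""
        A = np.column_stack([x**2, x * y, y**2, x, y])
        coeffs, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
        return coeffs
    ```

    A production fitter would use a constrained method (e.g., a direct ellipse fit) that guarantees the recovered conic is an ellipse even for noisy, partial rim arcs.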

  4. Enhanced neutron imaging detector using optical processing

    SciTech Connect

    Hutchinson, D.P.; McElhaney, S.A.

    1992-08-01

    Existing neutron imaging detectors have limited count rates due to inherent property and electronic limitations. The popular multiwire proportional counter is limited by gas recombination to a count rate of less than 10{sup 5} n/s over the entire array, and the neutron Anger camera, even though improved with new fiber optic encoding methods, can only achieve 10{sup 6} cps over a limited array. We present a preliminary design for a new type of neutron imaging detector with a resolution of 2--5 mm and a count rate capability of 10{sup 6} cps per pixel element. We propose to combine optical and electronic processing to economically increase the throughput of advanced detector systems while simplifying computing requirements. By placing a scintillator screen ahead of an optical image processor followed by a detector array, a high throughput imaging detector may be constructed.

  5. Simplified labeling process for medical image segmentation.

    PubMed

    Gao, Mingchen; Huang, Junzhou; Huang, Xiaolei; Zhang, Shaoting; Metaxas, Dimitris N

    2012-01-01

    Image segmentation plays a crucial role in many medical imaging applications by automatically locating the regions of interest. Typically supervised learning based segmentation methods require a large set of accurately labeled training data. However, the labeling process is tedious, time consuming and sometimes not necessary. We propose a robust logistic regression algorithm to handle label outliers such that doctors do not need to waste time on precisely labeling images for the training set. To validate its effectiveness and efficiency, we conduct carefully designed experiments on cervigram image segmentation while there exist label outliers. Experimental results show that the proposed robust logistic regression algorithms achieve superior performance compared to previous methods, which validates the benefits of the proposed algorithms. PMID:23286072
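    The idea of tolerating mislabeled training pixels can be sketched with a logistic regression that assumes a symmetric label-flip rate gamma. The model, parameter names, and optimizer here are illustrative assumptions, not the paper's exact estimator.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def robust_logreg(X, y, gamma=0.1, lr=0.5, iters=2000):
        """Fit w assuming each observed label was flipped with probability gamma,
        so P(y_obs = 1 | x) = gamma + (1 - 2*gamma) * sigmoid(w . x)."""
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            s = sigmoid(X @ w)
            p = gamma + (1 - 2 * gamma) * s            # flip-aware probability
            dL_dp = (p - y) / (p * (1 - p))            # cross-entropy derivative
            grad = X.T @ (dL_dp * (1 - 2 * gamma) * s * (1 - s)) / len(y)
            w -= lr * grad
        return w
    ```

    Because p is bounded away from 0 and 1 by gamma, a flipped label cannot drive the loss to infinity, so a few outliers have limited influence on the fitted weights.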

  6. MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING

    PubMed Central

    ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN

    2013-01-01

    In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963

  7. Feedback regulation of microscopes by image processing.

    PubMed

    Tsukada, Yuki; Hashimoto, Koichi

    2013-05-01

    Computational microscope systems are becoming a major part of imaging biological phenomena, and the development of such systems requires the design of automated regulation of microscopes. An important aspect of automated regulation is feedback regulation, which is the focus of this review. As modern microscope systems become more complex, often with many independent components that must work together, computer control is inevitable since the exact orchestration of parameters and timings for these multiple components is critical to acquire proper images. A number of techniques have been developed for biological imaging to accomplish this. Here, we summarize the basics of computational microscopy for the purpose of building automatically regulated microscopes, focusing on feedback regulation by image processing. These techniques allow high throughput data acquisition while monitoring both short- and long-term dynamic phenomena, which cannot be achieved without an automated system.

  8. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images, and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. While libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission and document usage. The DocMorph Server Web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  9. Mariner 9-Image processing and products

    USGS Publications Warehouse

    Levinthal, E.C.; Green, W.B.; Cutts, J.A.; Jahelka, E.D.; Johansen, R.A.; Sander, M.J.; Seidman, J.B.; Young, A.T.; Soderblom, L.A.

    1973-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis.

  10. Improving Synthetic Aperture Image by Image Compounding in Beamforming Process

    NASA Astrophysics Data System (ADS)

    Martínez-Graullera, Oscar; Higuti, Ricardo T.; Martín, Carlos J.; Ullate, Luis. G.; Romero, David; Parrilla, Montserrat

    2011-06-01

    In this work, signal processing techniques are used to improve the quality of images based on multi-element synthetic aperture techniques. Using several apodization functions to obtain different side lobe distributions, a polarity function and a threshold criterion are used to develop an image compounding technique. The spatial diversity is increased using an additional array, which generates complementary information about the defects, improving the results of the proposed algorithm and producing high resolution and contrast images. The inspection of isotropic plate-like structures using linear arrays and Lamb waves is presented. Experimental results are shown for a 1-mm-thick isotropic aluminum plate with artificial defects using linear arrays formed by 30 piezoelectric elements, with the low dispersion symmetric mode S0 at the frequency of 330 kHz.
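    A polarity-and-threshold compounding step of the kind described above might be sketched pixel-wise as follows. The exact rule (keep the minimum magnitude where all apodized images agree in sign, suppress otherwise) and the function names are assumptions for illustration, not the authors' precise formulation.

    ```python
    import numpy as np

    def compound(images, threshold=0.0):
        """Combine images beamformed with different apodizations:
        a pixel survives only if every image agrees on its polarity
        and its minimum magnitude exceeds the threshold."""
        stack = np.stack(images)
        same_sign = np.all(np.sign(stack) == np.sign(stack[0]), axis=0)
        out = np.sign(stack[0]) * np.min(np.abs(stack), axis=0)
        return np.where(same_sign & (np.abs(out) > threshold), out, 0.0)
    ```

    Taking the minimum magnitude suppresses side lobes, which vary between apodizations, while true reflectors persist across all images.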

  11. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.

  12. Stochastic processes, estimation theory and image enhancement

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1978-01-01

    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed. The appendices discuss the remaining mathematical background.

  13. Limiting liability via high resolution image processing

    SciTech Connect

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready', even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed as usable evidence. Visualization scientists have taken digital photographic image processing and moved crime scene photography into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement addresses a major problem with crime scene photos: images taken with standard equipment and without the benefit of enhancement software would often be inconclusive, allowing guilty parties to go free for lack of evidence.

  14. Visual parameter optimisation for biomedical image processing

    PubMed Central

    2015-01-01

    Background Biomedical image processing methods require users to optimise input parameters to ensure high-quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output. Results We present a visualisation method that transforms users' ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm. Conclusions The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches. PMID:26329538

  15. Subband/transform functions for image processing

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
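    The block-transform-to-subband reordering described above can be sketched in NumPy for the 2x2 Walsh-Hadamard case, giving a four-band decomposition. The function name and dictionary layout are illustrative, not the MATLAB package's actual interface.

    ```python
    import numpy as np

    H2 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)  # orthonormal 2-pt Hadamard

    def block_transform_2x2(img):
        """Transform each 2x2 block and regroup the four coefficient types
        into subbands: LL (low-res image), LH/HL (edges), HH (diagonal detail)."""
        h, w = img.shape
        blocks = img.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
        coef = H2 @ blocks @ H2.T                              # per-block transform
        return {"LL": coef[..., 0, 0], "LH": coef[..., 0, 1],
                "HL": coef[..., 1, 0], "HH": coef[..., 1, 1]}
    ```

    Cascading the same function on the LL band yields the octave (seven-band) structure mentioned in the abstract; applying it to all four bands yields the uniform sixteen-band structure.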

  16. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the laboratory LIGIV concern capture, processing, archiving and display of color images considering the trichromatic nature of the Human Visual System (HVS). Among these projects one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for the post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Regions of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display media changes. This requires firstly the definition of a reference color space and the definition of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the aimed appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from digital graphics arts. To control image pre-processing and image post-processing, these specifications should be contained in the film's metadata. The specifications are related to the ICC profiles but need to additionally consider mesopic viewing conditions.

  17. Bitplane Image Coding With Parallel Coefficient Processing.

    PubMed

    Auli-Llinas, Francesc; Enfedaque, Pablo; Moure, Juan C; Sanchez, Victor

    2016-01-01

    Image coding systems have been traditionally tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image in codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in the codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data. Most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to the inherently sequential nature of the coding task. This paper presents bitplane image coding with parallel coefficient (BPC-PaCo) processing, a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been re-formulated. The experimental results suggest that the penalization in coding performance of BPC-PaCo with respect to the traditional strategies is almost negligible.
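    The lockstep, coefficient-parallel model can be illustrated with a vectorized bitplane decomposition, where every coefficient of a plane is processed by the same operation simultaneously. This shows only the data layout, as a NumPy stand-in for the SIMD execution model; it is not the BPC-PaCo context modeling or arithmetic coder.

    ```python
    import numpy as np

    def bitplanes(block, nbits=8):
        """Split a codeblock of non-negative integer coefficients into bitplanes,
        most significant plane first; each plane is computed in lockstep."""
        return [(block >> p) & 1 for p in range(nbits - 1, -1, -1)]
    ```

    A SIMD coder scans these planes from most to least significant, which is what makes progressive (quality-scalable) decoding possible.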

  18. [Digital thoracic radiology: devices, image processing, limits].

    PubMed

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Since computed radiography with phosphor plates is the most widely commercialized technique, it receives the most emphasis. The other detectors are also described, such as the selenium-coated drum and direct digital radiography with selenium detectors, as well as indirect flat-panel detectors and the system with four high-resolution CCD cameras. In a second step the most important image processing techniques are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. In the last part the advantages and the drawbacks of computed thoracic radiography are emphasized. The most important are the almost constantly good quality of the pictures and the possibilities of image processing.

  20. EOS image data processing system definition study

    NASA Technical Reports Server (NTRS)

    Gilbert, J.; Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

    1973-01-01

    The Image Processing System (IPS) requirements and configuration are defined for the NASA-sponsored advanced technology Earth Observatory System (EOS). The scope included investigation and definition of IPS operational, functional, and product requirements considering overall system constraints and interfaces (sensor, etc.). The scope also included investigation of the technical feasibility and definition of a point design reflecting system requirements. The design phase required a survey of present and projected technology related to general and special-purpose processors, high-density digital tape recorders, and image recorders.

  1. Translational motion compensation in ISAR image processing.

    PubMed

    Wu, H; Grenier, D; Delisle, G Y; Fang, D G

    1995-01-01

    In inverse synthetic aperture radar (ISAR) imaging, the target rotational motion with respect to the radar line of sight contributes to the imaging ability, whereas the translational motion must be compensated out. This paper presents a novel two-step approach to translational motion compensation using an adaptive range tracking method for range bin alignment and a recursive multiple-scatterer algorithm (RMSA) for signal phase compensation. The initial step of RMSA is equivalent to the dominant-scatterer algorithm (DSA). An error-compensating point source is then recursively synthesized from the selected range bins, where each contains a prominent scatterer. Since the clutter-induced phase errors are reduced by phase averaging, the image speckle noise can be reduced significantly. Experimental data processing for a commercial aircraft and computer simulations confirm the validity of the approach.
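    The dominant-scatterer idea that seeds the RMSA can be sketched as follows: pick the range bin whose amplitude is steadiest across pulses, and remove its phase history from every bin. The bin-selection metric and function names here are illustrative assumptions, not the paper's exact recursive algorithm.

    ```python
    import numpy as np

    def dsa_compensate(data):
        """Dominant-scatterer phase compensation.
        `data` is a (range_bins, pulses) complex matrix after range alignment.
        The bin with the lowest normalized amplitude variance is taken as the
        reference scatterer; its phase history is conjugated out of all bins."""
        amp = np.abs(data)
        ref = np.argmin(np.var(amp, axis=1) / (np.mean(amp, axis=1) ** 2 + 1e-12))
        correction = np.exp(-1j * np.angle(data[ref]))   # conjugate reference phase
        return data * correction[None, :]
    ```

    The recursive multiple-scatterer refinement then averages the phases of several prominent bins, which is what reduces the clutter-induced phase error and the speckle noise.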

  2. Architecture for web-based image processing

    NASA Astrophysics Data System (ADS)

    Srini, Vason P.; Pini, David; Armstrong, Matt D.; Alalusi, Sayf H.; Thendean, John; Ueng, Sain-Zee; Bushong, David P.; Borowski, Erek S.; Chao, Elaine; Rabaey, Jan M.

    1997-09-01

    A computer systems architecture for processing medical images and other data coming over the Web is proposed. The architecture comprises a Java engine for communicating images over the Internet, storing data in local memory, doing floating point calculations, and a coprocessor MIMD parallel DSP for doing fine-grained operations found in video, graphics, and image processing applications. The local memory is shared between the Java engine and the parallel DSP. Data coming from the Web is stored in the local memory. This approach avoids the frequent movement of image data between a host processor's memory and an image processor's memory, found in many image processing systems. A low-power and high performance parallel DSP architecture containing lots of processors interconnected by a segmented hierarchical network has been developed. The instruction set of the 16-bit processor supports video, graphics, and image processing calculations. Two's complement arithmetic, saturation arithmetic, and packed instructions are supported. Higher data precision such as 32-bit and 64-bit can be achieved by cascading processors. A VLSI chip implementation of the architecture containing 64 processors organized in 16 clusters and interconnected by a statically programmable hierarchical bus is in progress. The buses are segmentable by programming switches on the bus. The instruction memory of each processor has sixteen 40-bit words. Data streaming through the processor is manipulated by the instructions. Multiple operations can be performed in a single cycle in a processor. A low-power handshake protocol is used for synchronization between the sender and the receiver of data. Temporary storage for data and filter coefficients is provided in each chip. A 256 by 16 memory unit is included in each of the 16 clusters. The memory unit can be used as a delay line, FIFO, lookup table or random access memory. The architecture is scalable with technology. Portable multimedia terminals like U

  3. Computer image processing in marine resource exploration

    NASA Technical Reports Server (NTRS)

    Paluzzi, P. R.; Normark, W. R.; Hess, G. R.; Hess, H. D.; Cruickshank, M. J.

    1976-01-01

    Pictographic data or imagery is commonly used in marine exploration. Pre-existing image processing techniques (software) similar to those used on imagery obtained from unmanned planetary exploration were used to improve marine photography and side-scan sonar imagery. Features and details not visible by conventional photo processing methods were enhanced by filtering and noise removal on selected deep-sea photographs. Information gained near the periphery of photographs allows improved interpretation and facilitates construction of bottom mosaics where overlapping frames are available. Similar processing techniques were applied to side-scan sonar imagery, including corrections for slant range distortion, and along-track scale changes. The use of digital data processing and storage techniques greatly extends the quantity of information that can be handled, stored, and processed.
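The slant-range correction applied to the side-scan sonar imagery can be sketched in a few lines. This is a generic illustration of the geometry, not the authors' software; the function name and the flat-seabed assumption are ours.

```python
import numpy as np

def slant_to_ground(slant_ranges, altitude):
    """Convert side-scan slant ranges to horizontal ground ranges,
    assuming a flat seabed below the towfish (Pythagoras)."""
    s = np.asarray(slant_ranges, dtype=float)
    return np.sqrt(np.maximum(s ** 2 - altitude ** 2, 0.0))

# Towfish 30 m above the seabed: a return at 50 m slant range
# actually lies 40 m to the side of the track.
print(slant_to_ground([30.0, 50.0, 130.0], altitude=30.0))
```

Resampling the corrected pixels onto a uniform ground-range grid (e.g. with `np.interp`) then removes the across-track scale distortion.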

  4. Analysis of physical processes via imaging vectors

    NASA Astrophysics Data System (ADS)

    Volovodenko, V.; Efremova, N.; Efremov, V.

    2016-06-01

    Practically all modeled processes are to some degree random. The most developed theoretical foundation for such processes is the theory of Markov processes, which can be represented in several forms. A Markov process is a random process that undergoes transitions from one state to another on a state space, where the probability distribution of the next state depends only on the current state and not on the sequence of events that preceded it; augmenting the model with additional information about earlier times therefore does not change its prediction of the future. Modeling physical fields generally involves processes that change in time, i.e. non-stationary processes. In this case the Laplace transformation introduces unjustified complications into the description, while a transition to other representations yields a marked simplification. The method of imaging vectors provides constructive mathematical models and the transitions needed in the modeling process and in the analysis itself. The flexibility of a model built on a polynomial basis allows rapid switching of the mathematical model and accelerates the subsequent analysis. It should be noted that the mathematical description admits an operator representation; in turn, an operator representation of the structures, algorithms and data-processing procedures significantly improves the flexibility of the modeling process.
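The Markov property described above, that the next state depends only on the current state, can be illustrated with a minimal two-state chain (an example of ours, not taken from the paper):

```python
import random

# Two-state weather chain; for each current state, the list gives
# (next state, transition probability) pairs.
P = {"sunny": [("sunny", 0.9), ("rainy", 0.1)],
     "rainy": [("sunny", 0.5), ("rainy", 0.5)]}

def step(state, rng):
    """Sample the next state using only the current state (Markov property)."""
    r, acc = rng.random(), 0.0
    for nxt, p in P[state]:
        acc += p
        if r < acc:
            return nxt
    return nxt

rng = random.Random(0)
state, counts = "sunny", {"sunny": 0, "rainy": 0}
for _ in range(10000):
    state = step(state, rng)
    counts[state] += 1
print(counts)  # long-run fractions approach the stationary distribution (5/6, 1/6)
```

No matter how much history is recorded, the prediction from any state is fixed by `P` alone, which is exactly the property the abstract describes.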

  5. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Blankenhorn, D. H.; Beckenbach, E. S.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    A computer image processing technique was developed to estimate the degree of atherosclerosis in the human femoral artery. With an angiographic film of the vessel as input, the computer was programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements were combined into an atherosclerosis index, which was found to correlate well with both visual and chemical estimates of atherosclerotic disease.

  6. Novel image processing approach to detect malaria

    NASA Astrophysics Data System (ADS)

    Mas, David; Ferrer, Belen; Cojoc, Dan; Finaurini, Sara; Mico, Vicente; Garcia, Javier; Zalevsky, Zeev

    2015-09-01

    In this paper we present a novel image processing algorithm providing good preliminary capabilities for in vitro detection of malaria. The proposed concept is based upon analysis of the temporal variation of each pixel. Temporal changes in dark pixels indicate intracellular activity, and hence the presence of the malaria parasite inside the cell. Preliminary experimental results involving analysis of red blood cells, either healthy or infected with malaria parasites, validated the potential benefit of the proposed numerical approach.
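A minimal sketch of the per-pixel temporal analysis described above, using synthetic data; the function name, statistic (temporal standard deviation) and threshold are our illustration, as the paper's exact algorithm and parameters are not given here:

```python
import numpy as np

def activity_map(frames, threshold):
    """Per-pixel temporal standard deviation over a stack of frames.

    frames: array of shape (T, H, W); pixels whose intensity fluctuates
    strongly over time are flagged as candidate sites of parasite activity.
    """
    stack = np.asarray(frames, dtype=float)
    return stack.std(axis=0) > threshold

# Toy stack: 20 frames of a 4x4 cell region; one pixel flickers over time.
rng = np.random.default_rng(1)
frames = np.full((20, 4, 4), 100.0) + rng.normal(0, 0.5, (20, 4, 4))
frames[:, 2, 2] += 20 * np.sin(np.arange(20))   # simulated membrane fluctuation
mask = activity_map(frames, threshold=5.0)
print(mask[2, 2], mask[0, 0])  # True False
```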

  7. IPLIB (Image processing library) user's manual

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.; Monteith, J. H.; Miller, K.

    1985-01-01

    IPLIB is a collection of HP FORTRAN 77 subroutines and functions that facilitate the use of a COMTAL image processing system driven by an HP-1000 computer. It is intended for programmers who want to use the HP 1000 to drive the COMTAL Vision One/20 system. It is assumed that the programmer knows HP 1000 FORTRAN 77 or at least one FORTRAN dialect. It is also assumed that the programmer has some familiarity with the COMTAL Vision One/20 system.

  8. Sorting Olive Batches for the Milling Process Using Image Processing

    PubMed Central

    Puerto, Daniel Aguilera; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan

    2015-01-01

    The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, and to classify automatically different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different species have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained on the basis of the olive image histograms. Moreover, different image preprocessing has been employed, and two classification techniques have been used: these are discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results. PMID:26147729

  9. Sorting Olive Batches for the Milling Process Using Image Processing.

    PubMed

    Aguilera Puerto, Daniel; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan

    2015-01-01

    The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, and to classify automatically different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different species have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained on the basis of the olive image histograms. Moreover, different image preprocessing has been employed, and two classification techniques have been used: these are discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results. PMID:26147729
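The histogram-feature pipeline described in the abstract can be sketched as follows. For brevity this uses a nearest-centroid rule rather than the discriminant analysis or neural networks the authors evaluated; all data, names and the brightness difference between classes are illustrative:

```python
import numpy as np

def hist_feature(img, bins=16):
    """Normalized gray-level histogram used as the feature vector."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def nearest_centroid(train_feats, train_labels, feat):
    """Assign the label of the closest class-mean feature vector."""
    labels = sorted(set(train_labels))
    centroids = {c: np.mean([f for f, l in zip(train_feats, train_labels) if l == c], axis=0)
                 for c in labels}
    return min(labels, key=lambda c: np.linalg.norm(feat - centroids[c]))

# Toy data: "tree" olive images brighter than "ground" olive images.
rng = np.random.default_rng(0)
tree = [rng.normal(180, 10, (32, 32)).clip(0, 255) for _ in range(5)]
ground = [rng.normal(80, 10, (32, 32)).clip(0, 255) for _ in range(5)]
feats = [hist_feature(i) for i in tree + ground]
labels = ["tree"] * 5 + ["ground"] * 5
probe = hist_feature(rng.normal(175, 10, (32, 32)).clip(0, 255))
print(nearest_centroid(feats, labels, probe))  # tree
```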

  10. Color Image Processing and Object Tracking System

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high resolution digital camera mounted on a x-y-z micro-positioning stage, an S-VHS tapedeck, an Hi8 tapedeck, video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

  11. Automated synthesis of image processing procedures using AI planning techniques

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.

  12. FITSH- a software package for image processing

    NASA Astrophysics Data System (ADS)

    Pál, András.

    2012-04-01

    In this paper we describe the main features of the software package named FITSH, intended to provide a standalone environment for analysis of data acquired by imaging astronomical detectors. The package both provides utilities for the full pipeline of subsequent related data-processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple-image combinations, spatial transformations and interpolations) and aids the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The set of utilities found in this package is built on top of the commonly used UNIX/POSIX shells (hence the name of the package); therefore, both frequently used and well-documented tools for such environments can be exploited and managing a massive amount of data is rather convenient.

  13. Vector processing enhancements for real-time image analysis.

    SciTech Connect

    Shoaf, S.; APS Engineering Support Division

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  14. Portable EDITOR (PEDITOR): A portable image processing system. [satellite images

    NASA Technical Reports Server (NTRS)

    Angelici, G.; Slye, R.; Ozga, M.; Ritter, P.

    1986-01-01

    The PEDITOR image processing system was created to be readily transferable from one type of computer system to another. While nearly identical in function and operation to its predecessor, EDITOR, PEDITOR employs additional techniques which greatly enhance its portability. These cover system structure and processing. In order to confirm the portability of the software system, two different types of computer systems running greatly differing operating systems were used as target machines. A DEC-20 computer running the TOPS-20 operating system and using a Pascal Compiler was utilized for initial code development. The remaining programmers used a Motorola Corporation 68000-based Forward Technology FT-3000 supermicrocomputer running the UNIX-based XENIX operating system and using the Silicon Valley Software Pascal compiler and the XENIX C compiler for their initial code development.

  15. The Airborne Ocean Color Imager - System description and image processing

    NASA Technical Reports Server (NTRS)

    Wrigley, Robert C.; Slye, Robert E.; Klooster, Steven A.; Freedman, Richard S.; Carle, Mark; Mcgregor, Lloyd F.

    1992-01-01

    The Airborne Ocean Color Imager was developed as an aircraft instrument to simulate the spectral and radiometric characteristics of the next generation of satellite ocean color instrumentation. Data processing programs have been developed as extensions of the Coastal Zone Color Scanner algorithms for atmospheric correction and bio-optical output products. The latter include several bio-optical algorithms for estimating phytoplankton pigment concentration, as well as one for the diffuse attenuation coefficient of the water. Additional programs have been developed to geolocate these products and remap them into a georeferenced data base, using data from the aircraft's inertial navigation system. Examples illustrate the sequential data products generated by the processing system, using data from flightlines near the mouth of the Mississippi River: from raw data to atmospherically corrected data, to bio-optical data, to geolocated data, and, finally, to georeferenced data.

  16. Image processing on MPP-like arrays

    SciTech Connect

    Coletti, N.B.

    1983-01-01

    The desirability and suitability of using very large arrays of processors such as the Massively Parallel Processor (MPP) for processing remotely sensed images is investigated. The dissertation can be broken into two areas. The first is a mathematical analysis of emulating the bitonic sorting network on an array of processors. This sort is useful in histogramming images that have a very large number of pixel values (or gray levels). The optimal number of routing steps required to emulate an N = 2^k x 2^k element network on a 2^n x 2^n array (k <= n <= 7), provided each processor contains one element before and after every merge sequence, is proved to be 14*sqrt(N) - 4*log2(N) - 14. Several existing emulations achieve this lower bound. The number of elements sorted dictates a particular sorting network, and hence the number of routing steps. It is established that the cardinality N = (3/4) x 2^(2n) elements uses the absolute minimum of routing steps, 8*sqrt(3)*sqrt(N) - 4*log2(N) - (20 - 4*log2(3)). An algorithm achieving this bound is presented. The second area covers implementations of the image processing tasks, in particular the histogramming of large numbers of gray levels, geometric distortion determination and its efficient correction, fast Fourier transforms, and statistical clustering.
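The two closed-form bounds quoted in the abstract can be checked numerically. On a 2^n x 2^n array both expressions reduce to integers (14*2^n - 8n - 14 for the full-cardinality network and 12*2^n - 8n - 12 for the 3/4-cardinality one), so the smaller network always needs fewer routing steps:

```python
import math

def steps_full(n):
    """Routing steps to emulate a bitonic network of N = 2**(2n) elements
    on a 2**n x 2**n array (formula quoted in the abstract, k = n case)."""
    N = 2 ** (2 * n)
    return 14 * math.sqrt(N) - 4 * math.log2(N) - 14

def steps_three_quarter(n):
    """Routing steps for the N = (3/4) * 2**(2n) case (formula from the abstract)."""
    N = 0.75 * 2 ** (2 * n)
    return 8 * math.sqrt(3) * math.sqrt(N) - 4 * math.log2(N) - (20 - 4 * math.log2(3))

for n in range(1, 8):
    print(n, round(steps_full(n)), round(steps_three_quarter(n)))
# e.g. n = 7: 1722 steps for the full network vs 1468 for the 3/4 case.
```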

  17. Development of the SOFIA Image Processing Tool

    NASA Technical Reports Server (NTRS)

    Adams, Alexander N.

    2011-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5 meter infrared telescope capable of operating at altitudes between twelve and fourteen kilometers, above more than 99 percent of the water vapor in the atmosphere. The ability to make observations above most of the atmospheric water vapor, coupled with the ability to make observations from anywhere at any time, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible-light CCD imagers to assist in pointing the telescope. The data from these imagers are stored in archive files, as is housekeeping data containing information such as boresight and area-of-interest locations. A tool was developed that can both extract and process data from the archive files.

  18. Image processing and the Arithmetic Fourier Transform

    SciTech Connect

    Tufts, D.W.; Fan, Z.; Cao, Z.

    1989-01-01

    A new Fourier technique, the Arithmetic Fourier Transform (AFT), was recently developed for signal processing. This approach is based on the number-theoretic method of Mobius inversion. The AFT needs only additions, except for a small number of multiplications by prescribed scale factors, and the algorithm is well suited to parallel processing. There is no accumulation of rounding errors in the AFT algorithm. In this reprint, the AFT is used to compute the discrete cosine transform and is also extended to 2-D cases for image processing. A 2-D Mobius inversion formula is proved and then applied to the computation of the Fourier coefficients of a periodic 2-D function. It is shown that the output of an array of delay-line (or transversal) filters is the Mobius transform of the input harmonic terms. The 2-D Fourier coefficients can therefore be obtained through Mobius inversion of the output of the filter array.
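The number-theoretic core of the AFT is Mobius inversion: if F(n) is the sum of f(d) over the divisors d of n, then f can be recovered from F using the Mobius function. A small self-contained demonstration (ours, independent of the reprint's filter-array formulation):

```python
def mobius(n):
    """Mobius function mu(n): 0 if n has a squared prime factor,
    otherwise (-1) raised to the number of distinct prime factors."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared factor
            result = -result
        p += 1
    if n > 1:                     # one remaining prime factor
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# Mobius inversion: if F(n) = sum_{d | n} f(d),
# then f(n) = sum_{d | n} mu(d) * F(n // d).
f = {n: n * n for n in range(1, 13)}                 # arbitrary test function
F = {n: sum(f[d] for d in divisors(n)) for n in f}   # summatory transform
f_rec = {n: sum(mobius(d) * F[n // d] for d in divisors(n)) for n in f}
print(f_rec == f)  # True
```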

  19. HYMOSS signal processing for pushbroom spectral imaging

    NASA Technical Reports Server (NTRS)

    Ludwig, David E.

    1991-01-01

    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two point calibration algorithm on focal plane which allows for offset and linear gain correction. The key on focal plane features which made this technique feasible was the use of a high quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate and dump TIA. Offset correction is performed by storing offsets in a special on focal plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program which proved that nonuniformity compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated the following innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future IC's because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems. These imaging systems may be used for: security monitoring systems, manufacturing process monitoring, robotics, and for spectral imaging when used in analytical instrumentation.

  20. HYMOSS signal processing for pushbroom spectral imaging

    NASA Astrophysics Data System (ADS)

    Ludwig, David E.

    1991-06-01

    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two point calibration algorithm on focal plane which allows for offset and linear gain correction. The key on focal plane features which made this technique feasible was the use of a high quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate and dump TIA. Offset correction is performed by storing offsets in a special on focal plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program which proved that nonuniformity compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated the following innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future IC's because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems. These imaging systems may be used for: security monitoring systems, manufacturing process monitoring, robotics, and for spectral imaging when used in analytical instrumentation.
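The two-point (offset plus linear gain) correction described above amounts to `corrected = gain * (raw - offset)` per detector channel. A software sketch with simulated non-uniformities; the function names and the dark/flat reference-frame scheme are our illustration, not ISC's on-focal-plane circuit:

```python
import numpy as np

def two_point_calibration(dark, flat, flat_level=1.0):
    """Derive per-detector offset and gain from two reference exposures:
    a dark frame (zero input) and a flat-field frame (uniform input)."""
    offset = dark.astype(float)
    gain = flat_level / (flat.astype(float) - offset)
    return offset, gain

def correct(raw, offset, gain):
    """Apply offset subtraction and linear gain correction per channel."""
    return gain * (raw.astype(float) - offset)

# Toy focal plane: each detector has its own offset and responsivity.
rng = np.random.default_rng(2)
true_offset = rng.uniform(5, 15, (4, 4))
true_gain = rng.uniform(0.8, 1.2, (4, 4))
scene = 100.0
raw = true_offset + scene * true_gain
dark = true_offset.copy()                # response to zero input
flat = true_offset + 1.0 * true_gain     # response to uniform unit input
off, g = two_point_calibration(dark, flat)
print(np.allclose(correct(raw, off, g), scene))  # True
```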

  1. A New Image Processing and GIS Package

    NASA Technical Reports Server (NTRS)

    Rickman, D.; Luvall, J. C.; Cheng, T.

    1998-01-01

    The image processing and GIS package ELAS was developed by NASA during the 1980s. It proved to be a popular, influential and powerful package for the manipulation of digital imagery. Before the advent of PCs it was used by hundreds of institutions, mostly schools. It is the unquestioned, direct progenitor of two commercial GIS remote sensing packages, ERDAS and MapX, and influenced others, such as PCI. Its power was demonstrated by its use for work far beyond its original purpose: it has been applied to several different types of medical imagery, photomicrographs of rock, images of turtle flippers and numerous other esoteric imagery. Although development largely stopped in the early 1990s, the package still offers as much or more power and flexibility than any other roughly comparable package, public or commercial. It is a huge body of code, representing more than a decade of work by full-time professional programmers. The current versions have several deficiencies compared to current software standards and usage, notably a strictly command-line interface. In order to support their research needs the authors are in the process of fundamentally changing ELAS, and in the process greatly increasing its power, utility, and ease of use. The new software is called ELAS II. This paper discusses the design of ELAS II.

  2. Liver recognition based on statistical shape model in CT images

    NASA Astrophysics Data System (ADS)

    Xiang, Dehui; Jiang, Xueqing; Shi, Fei; Zhu, Weifang; Chen, Xinjian

    2016-03-01

    In this paper, an automatic method is proposed to recognize the liver in clinical 3D CT images. The proposed method makes effective use of a statistical shape model of the liver. Our approach consists of three main parts: (1) model training, in which shape variability is captured by principal component analysis applied to manual annotations; (2) model localization, in which a fast Euclidean-distance-transformation-based method localizes the liver in CT images; (3) liver recognition, in which the initial mesh is locally and iteratively adapted to the liver boundary under the constraint of the trained shape model. We validate our algorithm on a dataset of 20 3D CT images obtained from different patients. The average ARVD was 8.99%, the average ASSD was 2.69 mm, the average RMSD was 4.92 mm, the average MSD was 28.841 mm, and the average MSD was 13.31%.
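Steps (1) and (3) of the approach, training a PCA shape model and constraining a candidate shape to plausible mode weights, can be sketched on toy 2D contours. This illustrates the general statistical-shape-model idea under assumptions of ours (aligned landmarks, a +/-3-sigma clip), not the authors' implementation:

```python
import numpy as np

def train_shape_model(shapes, n_modes=2):
    """PCA on aligned landmark shapes: mean shape plus main variation modes.

    shapes: (num_samples, num_landmarks * dims) matrix of stacked coordinates.
    """
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    _, s, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_modes], s[:n_modes] ** 2 / (len(X) - 1)

def constrain(shape, mean, modes, variances, k=3.0):
    """Project a candidate shape onto the model and clip each mode weight
    to +/- k standard deviations, keeping the result model-plausible."""
    b = modes @ (shape - mean)
    b = np.clip(b, -k * np.sqrt(variances), k * np.sqrt(variances))
    return mean + modes.T @ b

# Toy training set: circular contours of varying radius, 8 landmarks each.
t = np.linspace(0, 2 * np.pi, 8, endpoint=False)
shapes = [np.ravel(np.column_stack((r * np.cos(t), r * np.sin(t))))
          for r in (0.9, 1.0, 1.1, 1.2)]
mean, modes, var = train_shape_model(shapes, n_modes=1)
wild = np.asarray(shapes[0]) * 5.0          # implausibly large candidate
print(np.abs(constrain(wild, mean, modes, var)).max() <= np.abs(wild).max())  # True
```

Plausible shapes pass through the constraint almost unchanged, while implausible ones are pulled back toward the training distribution.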

  3. Using Image Processing to Determine Emphysema Severity

    NASA Astrophysics Data System (ADS)

    McKenzie, Alexander; Sadun, Alberto

    2010-10-01

    Currently X-rays and computerized tomography (CT) scans are used to detect emphysema, but other tests are required to accurately quantify the amount of lung that has been affected by the disease. These images clearly show whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, which presents as subtle dark spots on the lung. Our goal is to use these CT scans to accurately diagnose and determine emphysema severity levels in patients. This will be accomplished by performing several different analyses of CT scan images of several patients representing a wide range of disease severity. In addition to analyzing the original CT data, this process will convert the data to one- and two-bit images and will then examine the deviation from a normal distribution curve to determine skewness. Our preliminary results show that this method of assessment appears to be more accurate and robust than the currently utilized methods, which involve looking at percentages of radiodensities in the air passages of the lung.
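The skewness measure discussed above can be computed directly from the voxel-density distribution. A sketch on synthetic attenuation values (the Hounsfield-unit means, mixture fractions and sample sizes are illustrative only): emphysematous regions add an excess of very low attenuation values, pulling the distribution away from a symmetric normal curve.

```python
import numpy as np

def histogram_skewness(values):
    """Sample skewness of a voxel-density distribution: the third central
    moment normalized by the cubed standard deviation."""
    x = np.asarray(values, dtype=float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 3).mean() / s ** 3

rng = np.random.default_rng(3)
healthy = rng.normal(-860, 30, 10000)                      # roughly symmetric
emphysema = np.concatenate([rng.normal(-860, 30, 8000),
                            rng.normal(-980, 15, 2000)])   # low-density tail
print(histogram_skewness(healthy), histogram_skewness(emphysema))
# healthy lung: skewness near 0; emphysematous lung: clearly negative
```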

  4. Image processing to optimize wave energy converters

    NASA Astrophysics Data System (ADS)

    Bailey, Kyle Marc-Anthony

    The world is turning to renewable energies as a means of ensuring the planet's future and well-being. There have been a few attempts in the past to utilize wave power as a means of generating electricity through the use of Wave Energy Converters (WEC), but only recently have they become a focal point in the renewable energy field, and over the past few years there has been a global drive to advance their efficiency. Wave power is produced by placing a mechanical device, either onshore or offshore, that captures the energy within ocean surface waves. This paper seeks to provide a novel and innovative way to estimate ocean wave frequency through the use of image processing, achieved by applying a complex modulated lapped orthogonal transform filter bank to satellite images of ocean waves. The filter bank provides an equal subband decomposition of the Nyquist-bounded discrete-time Fourier transform spectrum. The maximum energy of the 2D complex modulated lapped transform subband is used to determine the horizontal and vertical frequency, from which the wave frequency in the direction of the WEC follows by a simple trigonometric scaling. The robustness of the proposed method is demonstrated by applications to simulated and real satellite images where the frequency is known.
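The frequency-estimation idea can be sketched in simplified form. For brevity this substitutes a plain 2D FFT for the complex modulated lapped orthogonal transform filter bank used in the paper, but the spectral-peak search and the trigonometric scaling onto the WEC heading are analogous:

```python
import numpy as np

def dominant_wave_frequency(image, dx, wec_heading):
    """Locate the 2D spectral peak of a sea-surface image and project the
    (fx, fy) spatial-frequency pair onto the WEC's heading (radians)."""
    spec = np.abs(np.fft.fft2(image - image.mean()))
    kx = np.fft.fftfreq(image.shape[1], d=dx)
    ky = np.fft.fftfreq(image.shape[0], d=dx)
    iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
    fx, fy = kx[ix], ky[iy]
    # trigonometric scaling: spatial frequency seen along the device heading
    return abs(fx * np.cos(wec_heading) + fy * np.sin(wec_heading))

# Synthetic swell: wavelength 64 m propagating along x, 1 m pixels.
x = np.arange(256) * 1.0
sea = np.cos(2 * np.pi * x[None, :] / 64.0) * np.ones((256, 1))
print(round(1.0 / dominant_wave_frequency(sea, dx=1.0, wec_heading=0.0)))  # 64
```

A device heading of pi/2 (along the wave crests) would see essentially zero frequency, which is exactly what the projection produces.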

  5. Platform for distributed image processing and image retrieval

    NASA Astrophysics Data System (ADS)

    Gueld, Mark O.; Thies, Christian J.; Fischer, Benedikt; Keysers, Daniel; Wein, Berthold B.; Lehmann, Thomas M.

    2003-06-01

    We describe a platform for the implementation of a system for content-based image retrieval in medical applications (IRMA). To cope with the constantly evolving medical knowledge, the platform offers a flexible feature model to store and uniformly access all feature types required within a multi-step retrieval approach. A structured generation history for each feature allows the automatic identification and re-use of already computed features. The platform uses directed acyclic graphs composed of processing steps and control elements to model arbitrary retrieval algorithms. This visually intuitive, data-flow oriented representation vastly improves the interdisciplinary communication between computer scientists and physicians during the development of new retrieval algorithms. The execution of the graphs is fully automated within the platform. Each processing step is modeled as a feature transformation. Due to a high degree of system transparency, both the implementation and the evaluation of retrieval algorithms are accelerated significantly. The platform uses a client-server architecture consisting of a central database, a central job scheduler, instances of a daemon service, and clients which embed user-implemented feature transformations. Automatically distributed batch processing and distributed feature storage enable the cost-efficient use of an existing workstation cluster.

  6. Imaging fault zones using 3D seismic image processing techniques

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, the trajectory of thrust faults is in most cases highly conjectural, and significant uncertainty remains about the patterns of deformation that develop between the main fault segments, and even about the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors in which concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet; the attributes are calculated over the entire 3D seismic dataset and improve the interpretation of the signal. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude and phase properties of the seismic signal. This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes.
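The amplitude and phase attributes mentioned above derive from the analytic signal of each seismic trace. A minimal numpy-only sketch of this generic attribute computation (the envelope is the instantaneous amplitude); this illustrates the idea, not the authors' specific workflow:

```python
import numpy as np

def instantaneous_attributes(trace):
    """Amplitude envelope and instantaneous phase of a trace,
    via the analytic signal (FFT-based Hilbert transform)."""
    n = len(trace)
    spec = np.fft.fft(trace)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0          # keep positive frequencies, doubled
    if n % 2 == 0:
        h[n // 2] = 1.0              # Nyquist bin
    analytic = np.fft.ifft(spec * h)
    return np.abs(analytic), np.angle(analytic)

# Synthetic trace: 40 Hz wavelet centred at t = 0.5 s, 512 samples.
t = np.linspace(0, 1, 512, endpoint=False)
trace = np.exp(-((t - 0.5) ** 2) / 0.005) * np.cos(2 * np.pi * 40 * t)
env, phase = instantaneous_attributes(trace)
print(np.argmax(env))  # envelope peaks at the wavelet centre (index ~256)
```

Mapping such envelope anomalies laterally across reflectors is one way short-range amplitude variations around faults can be delineated.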

  7. Multispectral image processing: the nature factor

    NASA Astrophysics Data System (ADS)

    Watkins, Wendell R.

    1998-09-01

    The images processed by our brain represent our window into the world. For some animals this window is derived from a single eye; for others, including humans, two eyes provide stereo imagery; for others like the black widow spider several eyes are used (8 eyes); and some insects like the common housefly utilize thousands of eyes (ommatidia). Still other animals like the bat and dolphin have eyes for regular vision but employ acoustic sonar for seeing where their eyes do not work, such as in pitch-black caves or turbid water. Of course, other animals have adapted to dark environments by bringing along their own lighting, such as the firefly and several creatures from the depths of the ocean floor. Animal vision is truly varied and has developed over millennia in many remarkable ways. We have learned a lot about vision processes by studying these animal systems and can still learn even more.

  8. Bone feature analysis using image processing techniques.

    PubMed

    Liu, Z Q; Austin, T; Thomas, C D; Clement, J G

    1996-01-01

    In order to establish the correlation between bone structure and age, and to obtain information about age-related bone changes, it is necessary to study microstructural features of human bone. Traditionally, in bone biology and forensic science, the analysis of bone cross-sections has been carried out manually. Such a process is known to be slow, inefficient and prone to human error, and consequently the results obtained so far have been unreliable. In this paper we present a new approach to quantitative analysis of cross-sections of human bones using digital image processing techniques. We demonstrate that such a system is able to extract various bone features consistently and is capable of providing more reliable data and statistics for bones. Consequently, we will be able to correlate features of bone microstructure with age, and possibly also with age-related bone diseases such as osteoporosis. The development of knowledge-based computer vision systems for automated bone image analysis can now be considered feasible.

  9. Signal processing for imaging and mapping ladar

    NASA Astrophysics Data System (ADS)

    Grönwall, Christina; Tolt, Gustav

    2011-11-01

    The new generation of laser-based FLASH 3D imaging sensors enables data collection at video rate. This opens up real-time data analysis but also sets demands on the signal processing. In this paper the possibilities and challenges of this new data type are discussed. The commonly used focal-plane-array detectors produce range estimates that vary with the target's surface reflectance and range, and our experience is that the built-in signal processing may not compensate fully for that. We propose a simple adjustment that can be used even if some sensor parameters are not known. The cost of instantaneous image collection, compared to scanning laser radar systems, is lower range accuracy. By gathering range information from several frames, the geometrical information of the target can be obtained. We also present an approach for using range data to remove foreground clutter in front of a target. Further, we illustrate how range data enable target classification in near real time and how the results can be improved if several frames are co-registered. Examples using data from forest and maritime scenes are shown.

  10. MISR Browse Images: Cold Land Processes Experiment (CLPX)

    Atmospheric Science Data Center

    2013-04-02

    MISR Browse Images: Cold Land Processes Experiment (CLPX) These MISR Browse ... series of images over the region observed during the NASA Cold Land Processes Experiment (CLPX). CLPX involved ground, airborne, and ...

  11. Research on pavement crack recognition methods based on image processing

    NASA Astrophysics Data System (ADS)

    Cai, Yingchun; Zhang, Yamin

    2011-06-01

    In order to briefly review and analyze pavement crack recognition methods and to identify the current problems in pavement crack image processing, the popular methods of crack image processing, such as the neural network method, the morphology method, the fuzzy logic method, and traditional image processing, are discussed, and some effective solutions to those problems are presented.

  12. ATM experiment S-056 image processing requirements definition

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A plan is presented for satisfying the image data processing needs of the S-056 Apollo Telescope Mount experiment. The report is based on information gathered from related technical publications, consultation with numerous image processing experts, and experience gained in working on related image processing tasks over a two-year period.

  13. Effects of image processing on the detective quantum efficiency

    NASA Astrophysics Data System (ADS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate factors affecting the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) according to the image processing algorithm. Image performance parameters such as MTF, NPS, and DQE were evaluated using the international electro-technical commission (IEC 62220-1)-defined RQA5 radiographic technique. Computed radiography (CR) images of a hand posterior-anterior (PA) view for measuring the signal-to-noise ratio (SNR), a slit image for measuring MTF, and a white image for measuring NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. The results show that all of the modified images considerably influenced the evaluation of SNR, MTF, NPS, and DQE. Images modified by the post-processing had a higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, have an effect on the image when it is evaluated for image quality. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in the same way. The results of this study could serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
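The three quantities tie together through the IEC-style relation DQE(f) = MTF(f)^2 / (q · NNPS(f)), where NNPS is the NPS normalized by the squared mean signal and q is the incident photon fluence. A minimal numerical sketch (all curve shapes and constants below are invented for illustration, not measured values from the study):

```python
import numpy as np

def dqe(mtf, nps, mean_signal, fluence):
    """IEC-style DQE estimate: DQE(f) = MTF(f)^2 / (q * NNPS(f)),
    with NNPS = NPS / mean_signal^2 and q the photon fluence."""
    nnps = nps / mean_signal**2
    return mtf**2 / (fluence * nnps)

f = np.linspace(0.0, 3.0, 7)        # spatial frequency, cycles/mm
mtf = np.exp(-0.5 * f)              # illustrative MTF curve
nps = np.full_like(f, 52.0)         # flat (white) noise power, arbitrary units
q = 30000.0                         # assumed photons per mm^2
d = dqe(mtf, nps, mean_signal=1000.0, fluence=q)
```

With a flat NPS, the DQE simply falls off as MTF squared, which is why post-processing that reshapes either curve (as MUSICA does) shows up directly in the measured DQE.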

  14. Spatiotemporal computed tomography of dynamic processes

    NASA Astrophysics Data System (ADS)

    Kaestner, Anders; Münch, Beat; Trtik, Pavel; Butler, Les

    2011-12-01

    Modern computed tomography (CT) equipment allowing fast 3-D imaging also makes it possible to monitor dynamic processes by 4-D imaging. Because the acquisition time of various 3-D CT systems still ranges from milliseconds to hours, depending on the detector system and the source, the balance between the desired temporal and spatial resolution must be adjusted. Furthermore, motion artifacts will occur, especially at high spatial resolution and longer measuring times. We propose two approaches based on nonsequential projection angle sequences allowing a convenient postacquisition balance of temporal and spatial resolution. Both strategies are compatible with existing instruments, needing only a simple reprogramming of the angle list used for projection acquisition and care with the projection order list. Both approaches will reduce the impact of artifacts due to motion. The strategies are applied and validated with cold neutron imaging of water desorption from originally saturated particles during natural air-drying experiments, and with x-ray tomography of a polymer blend heated during imaging.
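A widely used nonsequential ordering with exactly this postacquisition property is the golden-ratio angle increment: any contiguous run of projections covers the half-circle nearly uniformly, so the temporal window can be chosen after the scan. The sketch below shows that ordering as an illustration (an assumption here, not necessarily the authors' exact sequence):

```python
import math

def golden_angle_sequence(n_proj):
    """Projection angles in [0, 180) ordered by the golden-ratio
    increment (~111.246 degrees). Any contiguous subset of the list
    samples the half-circle nearly uniformly, so spatial and temporal
    resolution can be traded off after acquisition by picking how many
    consecutive projections to reconstruct from."""
    step = 180.0 * (math.sqrt(5.0) - 1.0) / 2.0
    return [(i * step) % 180.0 for i in range(n_proj)]

angles = golden_angle_sequence(8)
```

Reprogramming an instrument's angle list to this sequence is the only change such a strategy needs, which is why it works on existing scanners.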

  15. Methods for processing and imaging marsh foraminifera

    USGS Publications Warehouse

    Dreher, Chandra A.; Flocks, James G.

    2011-01-01

    This study is part of a larger U.S. Geological Survey (USGS) project to characterize the physical conditions of wetlands in southwestern Louisiana. Within these wetlands, groups of benthic foraminifera (shelled amoeboid protists living near or on the sea floor) can be used as agents to measure land subsidence, relative sea-level rise, and storm impact. In the Mississippi River Delta region, intertidal-marsh foraminiferal assemblages and biofacies were established in studies that pre-date the 1970s, with a very limited number of more recent studies. This fact sheet outlines this project's improved methods, handling, and modified preparations for the use of Scanning Electron Microscope (SEM) imaging of these foraminifera. The objective is to identify marsh foraminifera to the taxonomic species level by using improved processing methods and SEM imaging for morphological characterization in order to evaluate changes in distribution and frequency relative to other environmental variables. The majority of benthic marsh foraminifera consists of agglutinated forms, which can be more delicate than porcelaneous forms. Agglutinated tests (shells) are made of particles such as sand grains or silt and clay material, whereas porcelaneous tests consist of calcite.

  16. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  17. Intelligent elevator management system using image processing

    NASA Astrophysics Data System (ADS)

    Narayanan, H. Sai; Karunamurthy, Vignesh; Kumar, R. Barath

    2015-03-01

    In the modern era, the increase in the number of shopping malls and industrial buildings has led to an exponential increase in the usage of elevator systems, and thus an increased need for an effective control system to manage them. This paper introduces an effective method to control the movement of elevators by considering various cases wherein the location of each person is found and the elevators are controlled based on conditions such as load and proximity. The method continuously monitors the weight limit of each elevator while also making use of image processing to determine the number of persons waiting for an elevator on the respective floors. The Canny edge detection technique is used to find the number of persons waiting for an elevator. Hence the algorithm takes many cases into account and locates the correct elevator to service the persons waiting on different floors.

  18. Image processing and products for the Magellan mission to Venus

    NASA Technical Reports Server (NTRS)

    Clark, Jerry; Alexander, Doug; Andres, Paul; Lewicki, Scott; Mcauley, Myche

    1992-01-01

    The Magellan mission to Venus is providing planetary scientists with massive amounts of new data about the surface geology of Venus. Digital image processing is an integral part of the ground data system that provides data products to the investigators. The mosaicking of synthetic aperture radar (SAR) image data from the spacecraft is being performed at JPL's Multimission Image Processing Laboratory (MIPL). MIPL hosts and supports the Image Data Processing Subsystem (IDPS), which was developed in a VAXcluster environment of hardware and software that includes optical disk jukeboxes and the TAE-VICAR (Transportable Applications Executive-Video Image Communication and Retrieval) system. The IDPS is being used by processing analysts of the Image Data Processing Team to produce the Magellan image data products. Various aspects of the image processing procedure are discussed.

  19. Spot restoration for GPR image post-processing

    DOEpatents

    Paglieroni, David W; Beer, N. Reginald

    2014-05-20

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and travels along the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. It then generates synthetic aperture radar images from real aperture radar images formed from the pre-processed return signal, and post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicate the presence of a subsurface object.
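The final peak-identification stage can be illustrated with a minimal local-maximum detector. This is a hedged stand-in for the patented method, not its actual algorithm: it flags pixels of an energy image that exceed a threshold and dominate their eight neighbors; the threshold and the synthetic frame are assumptions.

```python
import numpy as np

def energy_peaks(frame, threshold):
    """Return coordinates of pixels that exceed `threshold` and are
    strictly greater than all 8 neighbors -- a minimal sketch of peak
    identification on a post-processed energy image."""
    padded = np.pad(frame, 1, mode="constant", constant_values=-np.inf)
    center = padded[1:-1, 1:-1]
    is_peak = center > threshold
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            shifted = padded[1 + di:padded.shape[0] - 1 + di,
                             1 + dj:padded.shape[1] - 1 + dj]
            is_peak &= center > shifted
    return np.argwhere(is_peak)

energy = np.zeros((16, 16))
energy[5, 7] = 9.0       # strong buried-object response
energy[12, 3] = 4.0      # weaker clutter return, below threshold
peaks = energy_peaks(energy, threshold=5.0)
```

A production detector would additionally suppress clutter and track peaks across frames, as the patent's pre- and post-processing stages describe.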

  20. Image and Signal Processing LISP Environment (ISLE)

    SciTech Connect

    Azevedo, S.G.; Fitch, J.P.; Johnson, R.R.; Lager, D.L.; Searfus, R.M.

    1987-10-02

    We have developed a multidimensional signal processing software system called the Image and Signal LISP Environment (ISLE). It is a hybrid software system, in that it consists of a LISP interpreter (used as the command processor) combined with FORTRAN, C, or LISP functions (used as the processing and display routines). Learning the syntax for ISLE is relatively simple and has the additional benefit of introducing a subset of commands from the general-purpose programming language, Common LISP. Because Common LISP is a well-documented and complete language, users do not need to depend exclusively on system developers for a description of the features of the command language, nor do the developers need to generate a command parser that exhaustively satisfies all the user requirements. Perhaps the major reason for selecting the LISP environment is that user-written code can be added to the environment through a ''foreign function'' interface without recompiling the entire system. The ability to perform fast prototyping of new algorithms is an important feature of this environment. As currently implemented, ISLE requires a Sun color or monochrome workstation and a license to run Franz Extended Common LISP. 16 refs., 4 figs.

  1. Image Processing of Vega-Tv Observations

    NASA Astrophysics Data System (ADS)

    Möhlmann, D.; Danz, M.; Elter, G.; Mangold, T.; Rubbert, B.; Weidlich, U.; Lorenz, H.; Richter, G.

    1986-12-01

    Different algorithms used to identify real structures in the near-nucleus TV images of the VEGA spacecraft are described. They refer mainly to image restoration, noise reduction, and different methods of texture analysis. The resulting images, showing the first indications of structure on the surface of P/Halley, are discussed briefly.

  2. Post-digital image processing based on microlens array

    NASA Astrophysics Data System (ADS)

    Shi, Chaiyuan; Xu, Feng

    2014-10-01

    Benefiting from attractive features such as compact volume and light weight, imaging systems based on microlens arrays have become an active area of research. However, current systems based on microlens arrays have insufficient imaging quality to meet the practical requirements of most applications. As a result, post-digital image processing, which reconstructs a high-resolution image from the low-resolution sub-image sequence, becomes particularly important. In general, post-digital image processing includes two parts: accurate estimation of the motion parameters between the sub-images, and reconstruction of the high-resolution image. In this paper, given that preprocessing of the unit images can make the edges of the reconstructed high-resolution image clearer, the low-resolution images are preprocessed before the post-digital image processing. Then, after processing with the pixel rearrange method, a high-resolution image is obtained. From the result, we find that the edges of the reconstructed high-resolution image are clearer than without preprocessing.
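The pixel rearrange step itself is simple once motion estimation is done: sub-images that are offset from one another by sub-pixel shifts are interleaved onto a denser grid. A minimal sketch (the grid size and test values are invented; real sub-images would first need registration):

```python
import numpy as np

def pixel_rearrange(subimages):
    """Interleave an r x r grid of low-resolution sub-images, each
    assumed shifted by one sub-pixel step relative to its neighbors,
    into a single high-resolution image."""
    r = len(subimages)
    h, w = subimages[0][0].shape
    hi = np.empty((h * r, w * r))
    for i in range(r):
        for j in range(r):
            hi[i::r, j::r] = subimages[i][j]   # place each sub-image's pixels
    return hi

# Four 2x2 sub-images filled with distinct values for visibility.
lo = [[np.full((2, 2), 10 * i + j) for j in range(2)] for i in range(2)]
hr = pixel_rearrange(lo)
```

This illustrates why pre-sharpening the unit images helps: every output pixel is copied directly from one sub-image, so any edge blur in the inputs lands unchanged in the reconstruction.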

  3. An integral design strategy combining optical system and image processing to obtain high resolution images

    NASA Astrophysics Data System (ADS)

    Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun

    2016-05-01

    In this paper, an integral design that combines the optical system with image processing is introduced to obtain high-resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in a failure of efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function of optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high-resolution images from the final system. To optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. Then a Wiener filter algorithm is adopted to process the simulated images, with mean squared error (MSE) taken as the evaluation criterion. The results show that, although the optical figures of merit for the optical imaging system are not the best, it can provide image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages in simplifying structure and reducing cost while obtaining high-resolution images, and it has a promising perspective for industrial application.
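The Wiener restoration step referenced above has a standard frequency-domain form. The sketch below is a generic Wiener deconvolution, not the paper's tuned pipeline: the point-spread function, noise-to-signal constant `k`, and test scene are all assumptions for illustration.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener restoration: F_hat = G * conj(H) / (|H|^2 + k),
    where H is the system transfer function (FFT of the centered PSF)
    and k approximates the noise-to-signal power ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = G * np.conj(H) / (np.abs(H)**2 + k)
    return np.real(np.fft.ifft2(F_hat))

# Simulate a low-resolution capture: blur a point scene with a 5x5 box PSF.
scene = np.zeros((32, 32)); scene[16, 16] = 1.0
psf = np.zeros((32, 32)); psf[14:19, 14:19] = 1.0 / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf, k=1e-3)
```

In the integral-design setting, the optical merit function would be co-optimized so that H stays well conditioned for exactly this inversion.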

  4. VIP: Vortex Image Processing pipeline for high-contrast direct imaging of exoplanets

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Christiaens, Valentin; Absil, Olivier; Mawet, Dimitri

    2016-03-01

    VIP (Vortex Image Processing pipeline) provides pre- and post-processing algorithms for high-contrast direct imaging of exoplanets. Written in Python, VIP provides a very flexible framework for data exploration and image processing and supports high-contrast imaging observational techniques, including angular, reference-star and multi-spectral differential imaging. Several post-processing algorithms for PSF subtraction based on principal component analysis are available as well as the LLSG (Local Low-rank plus Sparse plus Gaussian-noise decomposition) algorithm for angular differential imaging. VIP also implements the negative fake companion technique coupled with MCMC sampling for rigorous estimation of the flux and position of potential companions.

  5. Human skin surface evaluation by image processing

    NASA Astrophysics Data System (ADS)

    Zhu, Liangen; Zhan, Xuemin; Xie, Fengying

    2003-12-01

    Human skin gradually loses its tension and becomes dry with age. Use of cosmetics is effective in preventing skin aging, and recently there are many cosmetic products to choose from. To show their effects, it is desirable to develop a way to quantitatively evaluate skin surface condition. In this paper, an automatic skin evaluation method is proposed. The skin surface has a pattern called grid texture, composed of valleys that spread vertically, horizontally, and obliquely, and of the hills separated by them. Changes of the grid are closely linked to the skin surface condition and can serve as a good indicator of it. By measuring the skin grid using digital image processing technologies, we can evaluate the skin surface with respect to aging, health, and alimentary status. In this method, the skin grid is first detected to form a closed net. Then, skin parameters such as roughness, tension, scale, and gloss are calculated from statistical measurements of the net. Through analyzing these parameters, the condition of the skin can be monitored.

  6. Precision processing of earth image data

    NASA Technical Reports Server (NTRS)

    Bernstein, R.; Stierhoff, G. C.

    1976-01-01

    Precise corrections of Landsat data are useful for generating land-use maps, detecting various crops and determining their acreage, and detecting changes. The paper discusses computer processing and visualization techniques for Landsat data so that users can get more information from the imagery. The elementary unit of data in each band of each scene is the integrated value of intensity of reflected light detected in the field of view by each sensor. To develop the basic mathematical approach for precision correction of the data, differences between positions of ground control points on the reference map and the observed control points in the scene are used to evaluate the coefficients of cubic time functions of roll, pitch, and yaw, and a linear time function of altitude deviation from normal height above local earth's surface. The resultant equation, termed a mapping function, corrects the warped data image into one that approximates the reference map. Applications are discussed relative to shade prints, extraction of road features, and atlas of cities.
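The cubic time functions of roll, pitch, and yaw can be recovered by least squares from ground-control-point differences. The sketch below fits one such function; the residual values and timing are invented for illustration and are not from the original Landsat processing.

```python
import numpy as np

# Ground-control-point residuals (observed minus reference position,
# converted to an attitude error) sampled at several times in the scene.
# These numbers are hypothetical.
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])            # seconds
roll_err = np.array([0.10, 0.14, 0.21, 0.26, 0.28, 0.27])   # milliradians

# Cubic time function of roll, as in the precision-correction model;
# the altitude deviation would use a linear fit (degree 1) instead.
coeffs = np.polyfit(t, roll_err, 3)
roll_model = np.poly1d(coeffs)
residual = roll_err - roll_model(t)
```

Evaluating the fitted roll, pitch, yaw, and altitude polynomials at each scan time yields the mapping function that warps the image onto the reference map.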

  7. Image-processing pipelines: applications in magnetic resonance histology

    NASA Astrophysics Data System (ADS)

    Johnson, G. Allan; Anderson, Robert J.; Cook, James J.; Long, Christopher; Badea, Alexandra

    2016-03-01

    Image processing has become ubiquitous in imaging research—so ubiquitous that it is easy to lose track of how diverse this processing has become. The Duke Center for In Vivo Microscopy has pioneered the development of Magnetic Resonance Histology (MRH), which generates large multidimensional data sets that can easily reach into the tens of gigabytes. A series of dedicated image-processing workstations and associated software have been assembled to optimize each step of acquisition, reconstruction, post-processing, registration, visualization, and dissemination. This talk will describe the image-processing pipelines from acquisition to dissemination that have become critical to our everyday work.

  8. DTV color and image processing: past, present, and future

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Yeong; Lee, SeongDeok; Park, Du-Sik; Kwak, Youngshin

    2006-01-01

    The image processor in digital TV has started to play an important role due to customers' growing desire for higher image quality. Customers want more vivid and natural images without any visual artifact. Image processing techniques aim to meet customers' needs in spite of the physical limitations of the panel. In this paper, developments in image processing techniques for DTV, in conjunction with developments in display technologies at Samsung R and D, are reviewed. The introduced algorithms cover techniques required to solve the problems caused by the characteristics of the panel itself and techniques for enhancing the image quality of input signals, optimized for the panel and human visual characteristics.

  9. Cardiovascular Imaging and Image Processing: Theory and Practice - 1975

    NASA Technical Reports Server (NTRS)

    Harrison, Donald C. (Editor); Sandler, Harold (Editor); Miller, Harry A. (Editor); Hood, Manley J. (Editor); Purser, Paul E. (Editor); Schmidt, Gene (Editor)

    1975-01-01

    Ultrasonography was examined in regard to the developmental highlights and present applications of cardiac ultrasound. Doppler ultrasonic techniques and the technology of miniature acoustic element arrays were reported. X-ray angiography was discussed with special consideration of quantitative three-dimensional dynamic imaging of the structure and function of the cardiopulmonary and circulatory systems in all regions of the body. Nuclear cardiography and scintigraphy, three-dimensional imaging of the myocardium with isotopes, and the commercialization of the echocardioscope were studied.

  10. Image processing techniques for digital orthophotoquad production

    USGS Publications Warehouse

    Hood, Joy J.; Ladner, L. J.; Champion, Richard A.

    1989-01-01

    Orthophotographs have long been recognized for their value as supplements or alternatives to standard maps. Recent trends toward digital cartography have resulted in efforts by the US Geological Survey to develop a digital orthophotoquad production system. Digital image files were created by scanning color infrared photographs on a microdensitometer. Rectification techniques were applied to remove tilt and relief displacement, thereby creating digital orthophotos. Image mosaicking software was then used to join the rectified images, producing digital orthophotos in quadrangle format.

  11. An Image Processing Algorithm Based On FMAT

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Pal, Sankar K.

    1995-01-01

    Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT), proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not require image to have any kind of a priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing original image exactly.

  12. Viking image processing. [digital stereo imagery and computer mosaicking

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1977-01-01

    The paper discusses the camera systems capable of recording black and white and color imagery developed for the Viking Lander imaging experiment. Each Viking Lander image consisted of a matrix of numbers with 512 rows and an arbitrary number of columns up to a maximum of about 9,000. Various techniques were used in the processing of the Viking Lander images, including: (1) digital geometric transformation, (2) the processing of stereo imagery to produce three-dimensional terrain maps, and (3) computer mosaicking of distinct processed images. A series of Viking Lander images is included.

  13. Stature estimation from skull measurements using multidetector computed tomographic images: A Japanese forensic sample.

    PubMed

    Torimitsu, Suguru; Makino, Yohsuke; Saitoh, Hisako; Sakuma, Ayaka; Ishii, Namiko; Yajima, Daisuke; Inokuchi, Go; Motomura, Ayumi; Chiba, Fumiko; Yamaguchi, Rutsuko; Hashimoto, Mari; Hoshioka, Yumi; Iwase, Hirotaro

    2016-01-01

    The aim of this study was to assess the correlation between stature and cranial measurements in a contemporary Japanese population, using three-dimensional (3D) computed tomographic (CT) images. A total of 228 cadavers (123 males, 105 females) underwent postmortem CT scanning and subsequent forensic autopsy between May 2011 and April 2015. Five cranial measurements were taken from 3D CT reconstructed images that extracted only cranial data. The correlations between stature and each of the cranial measurements were assessed with Pearson product-moment correlation coefficients. Simple and multiple regression analyses showed significant correlations between stature and cranial measurements. In conclusion, cranial measurements obtained from 3D CT images may be useful for forensic estimation of the stature of Japanese individuals, particularly in cases where better predictors, such as long bones, are not available.
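Forensic stature estimation of this kind reduces to fitting regression equations between skeletal measurements and known stature. The sketch below shows a simple linear regression of stature on one cranial measurement; the training numbers are invented for illustration and are not the study's Japanese sample data.

```python
import numpy as np

# Hypothetical training pairs: maximum cranial length (mm) vs stature (cm).
length = np.array([172.0, 180.0, 176.0, 185.0, 178.0, 190.0])
stature = np.array([155.0, 164.0, 160.0, 169.0, 162.0, 175.0])

# Simple linear regression, stature = a * length + b, by least squares.
A = np.vstack([length, np.ones_like(length)]).T
(a, b), *_ = np.linalg.lstsq(A, stature, rcond=None)
predict = lambda x: a * x + b
```

A multiple regression, as in the paper, would simply add the other cranial measurements as extra columns of `A`; the standard error of the estimate then indicates how much worse cranial predictors are than long bones.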

  14. Survey on Neural Networks Used for Medical Image Processing

    PubMed Central

    Shi, Zhenghao; He, Lifeng; Suzuki, Kenji; Nakamura, Tsuyoshi; Itoh, Hidenori

    2010-01-01

    This paper aims to present a review of neural networks used in medical image processing. We classify neural networks by their processing goals and the nature of the medical images. The main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of neural network application to medical image processing and an outlook for future research are also discussed. With this survey, we try to answer the following two important questions: (1) What are the major applications of neural networks in medical image processing now and in the near future? (2) What are the major strengths and weaknesses of applying neural networks to medical image processing tasks? We believe this will be very helpful to researchers who are involved in medical image processing with neural network techniques. PMID:26740861

  15. Evaluating 3D registration of CT-scan images using crest lines

    NASA Astrophysics Data System (ADS)

    Ayache, Nicholas; Gueziec, Andre P.; Thirion, Jean-Philippe; Gourdon, A.; Knoplioch, Jerome

    1993-06-01

    We consider the issue of matching 3D objects extracted from medical images. We show that crest lines computed on object surfaces correspond to meaningful anatomical features, and that they are stable with respect to rigid transformations. We present the current chain of algorithmic modules which automatically extract the major crest lines in 3D CT-scan images, and then use differential invariants on these lines to register the 3D images together with high precision. The extraction of the crest lines is done by computing up to third-order derivatives of the image intensity function with appropriate 3D filtering of the volumetric images, and by the 'marching lines' algorithm. The recovered lines are then approximated by spline curves to compute a number of differential invariants at each point. Matching is finally performed by a new geometric hashing method. The whole chain is now completely automatic and provides extremely robust and accurate results, even in the presence of severe occlusions. In this paper, we briefly describe the whole chain of processes and evaluate the accuracy of the approach on a pair of CT-scan images of a skull containing external markers.

  16. Medical image processing on the GPU - past, present and future.

    PubMed

    Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M

    2013-12-01

    Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges.

  17. Image processing methods for visual prostheses based on DSP

    NASA Astrophysics Data System (ADS)

    Liu, Huwei; Zhao, Ying; Tian, Yukun; Ren, Qiushi; Chai, Xinyu

    2008-12-01

    Visual prostheses for extreme vision impairment have come closer to reality during the past few years. The task of this research has been to design external devices and to study image processing algorithms and methods for images of different complexity. We have developed a real-time system, based on a DSP (digital signal processor), capable of image capture and processing to obtain the most useful and important image features for recognition and simulation experiments. Beyond developing the hardware system, we introduce algorithms such as resolution reduction, information extraction, dilation and erosion, square (circular) pixelization, and Gaussian pixelization. We also classify images into stages of different complexity, such as simple images, medium-complexity images, and complex images. As a result, this paper obtains the signal needed for transmission to the electrode array and images for the simulation experiment.

  18. Design of a distributed CORBA based image processing server.

    PubMed

    Giess, C; Evers, H; Heid, V; Meinzer, H P

    2000-01-01

    This paper presents the design and implementation of a distributed image processing server based on CORBA. Existing image processing tools were encapsulated in a common way within this server. Data exchange and conversion are done automatically inside the server, hiding these tasks from the user. The different image processing tools are visible as one large collection of algorithms and, thanks to CORBA, are accessible via intranet/Internet.

  19. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    The Concurrent Image Processing Executive (CIPE) is a software system intended for developing and using image-processing application programs in a concurrent computing environment. Designed to shield the programmer from the complexities of concurrent-system architecture, it provides an interactive image-processing environment for the end user. CIPE exploits the architectural characteristics of a particular concurrent system to maximize efficiency while preserving architectural independence for the user and programmer. CIPE runs on a Mark-IIIfp 8-node hypercube computer and an associated SUN-4 host computer.

  20. Automated Image Processing : An Efficient Pipeline Data-Flow Architecture

    NASA Astrophysics Data System (ADS)

    Barreault, G.; Rivoire, A.; Jourlin, M.; Laboure, M. J.; Ramon, S.; Zeboudj, R.; Pinoli, J. C.

    1987-10-01

    In the context of expert systems there is a pressing need for efficient image processing algorithms to fit the various applications. This paper presents a new electronic card that performs image acquisition, processing and display, with an IBM PC/XT or AT as the host computer. The card features a pipeline data-flow architecture, an efficient and cost-effective solution to most image processing problems.

  1. Optimizing signal and image processing applications using Intel libraries

    NASA Astrophysics Data System (ADS)

    Landré, Jérôme; Truchetet, Frédéric

    2007-01-01

    This paper presents optimized signal and image processing libraries from Intel Corporation. Intel Performance Primitives (IPP) is a low-level signal and image processing library developed by Intel Corporation to optimize code on Intel processors. Open Computer Vision library (OpenCV) is a high-level library dedicated to computer vision tasks. This article describes the use of both libraries to build flexible and efficient signal and image processing applications.

  2. A color image processing pipeline for digital microscope

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Liu, Peng; Zhuang, Zhefeng; Chen, Enguo; Yu, Feihong

    2012-10-01

    The digital microscope has found wide application in fields such as biology and medicine. A digital microscope differs from a traditional optical microscope in that there is no need to observe the sample through an eyepiece directly, because the optical image is projected directly onto the CCD/CMOS camera. However, because of the difference in response between the human eye and the sensor, a color image processing pipeline is needed in the digital microscope's electronic eyepiece to obtain a fine image. The color image pipeline for a digital microscope, comprising the procedures that convert the RAW image data captured by the sensor into a true-color image, largely determines the quality of the microscopic image. This pipeline differs from that of digital still cameras and video cameras because of the specific requirements of microscopic images, which must offer high dynamic range, color fidelity to the observed objects and a variety of post-processing options. In this paper, a new color image processing pipeline is proposed to satisfy the requirements of digital microscope images. The algorithm for each step of the pipeline is designed and optimized with the purpose of obtaining high-quality images while accommodating diverse user preferences. With the proposed pipeline implemented on the digital microscope platform, the output color images meet the varied analysis requirements of images in the fields of medicine and biology very well. The major steps of the proposed pipeline are: black level adjustment, defective pixel removal, noise reduction, linearization, white balance, RGB color correction, tone scale correction and gamma correction.
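
    Three of the listed steps (black level adjustment, white balance and gamma correction) can be sketched in a few lines; the pedestal value, gains and gamma below are illustrative assumptions, not the paper's tuned parameters:

```python
import numpy as np

def simple_pipeline(raw, black=64, wb=(1.8, 1.0, 1.5), gamma=2.2):
    """Toy RAW-to-display pipeline: black level adjustment, per-channel
    white-balance gains, then gamma correction for display."""
    x = np.clip(raw.astype(float) - black, 0, None)    # subtract sensor pedestal
    peak = x.max()
    x = x / peak if peak > 0 else x                    # normalize to [0, 1]
    x = np.clip(x * np.asarray(wb), 0, 1)              # white-balance gains
    return x ** (1.0 / gamma)                          # gamma correction

raw = np.random.default_rng(0).integers(0, 1024, (4, 4, 3))  # fake RAW data
out = simple_pipeline(raw)
```

    A real pipeline would interleave the remaining steps (defective pixel removal, noise reduction, color correction) between these stages.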

  3. Experiments with recursive estimation in astronomical image processing

    NASA Technical Reports Server (NTRS)

    Busko, I.

    1992-01-01

    Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historic reasons for applying these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even today, when large computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or position in a 2-D image). Many image processing methods make underlying stationarity assumptions, either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image with a processor whose properties are tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by phenomena such as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio and the autocorrelation function. Software was developed under IRAF, and as such will be made available to interested users.
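
    The one-dimensional Kalman recursion the abstract refers to can be sketched for the simplest case, estimating a constant signal in noise; this toy estimator is our own illustration, not the paper's processor:

```python
def recursive_estimate(samples, meas_var=1.0):
    """Scalar Kalman-style recursion: each new sample refines the running
    estimate, and the estimate variance shrinks monotonically."""
    est, var = float(samples[0]), meas_var
    for z in samples[1:]:
        k = var / (var + meas_var)        # gain: how much to trust the new sample
        est += k * (float(z) - est)       # innovation update
        var *= (1 - k)                    # posterior variance after the update
    return est, var

est, var = recursive_estimate([2.0, 2.0, 2.0, 2.0])
```

    Making the gain depend on local image statistics instead of a fixed measurement variance is what turns this into the adaptive processing described above.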

  4. Experiences with digital processing of images at INPE

    NASA Technical Reports Server (NTRS)

    Mascarenhas, N. D. A. (Principal Investigator)

    1984-01-01

    Four different research experiments with digital image processing at INPE will be described: (1) edge detection by hypothesis testing; (2) image interpolation by finite impulse response filters; (3) spatial feature extraction methods in multispectral classification; and (4) translational image registration by sequential tests of hypotheses.

  5. APPLEPIPS /Apple Personal Image Processing System/ - An interactive digital image processing system for the Apple II microcomputer

    NASA Technical Reports Server (NTRS)

    Masuoka, E.; Rose, J.; Quattromani, M.

    1981-01-01

    Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.

  6. Feasibility studies of optical processing of image bandwidth compression schemes

    NASA Astrophysics Data System (ADS)

    Hunt, B. R.; Strickland, R. N.; Schowengerdt, R. A.

    1983-05-01

    This research focuses on these three areas: (1) formulation of alternative architectural concepts for image bandwidth compression, i.e., the formulation of components and schematic diagrams which differ from conventional digital bandwidth compression schemes by being implemented by various optical computation methods; (2) simulation of optical processing concepts for image bandwidth compression, so as to gain insight into typical performance parameters and elements of system performance sensitivity; and (3) maturation of optical processing for image bandwidth compression until the overall state of optical methods in image compression becomes equal to that of digital image compression.

  7. Breast image pre-processing for mammographic tissue segmentation.

    PubMed

    He, Wenda; Hogg, Peter; Juette, Arne; Denton, Erika R E; Zwiggelaar, Reyer

    2015-12-01

    During mammographic image acquisition, a compression paddle is used to even the breast thickness in order to obtain optimal image quality. Clinical observation has indicated that some mammograms may exhibit abrupt intensity change and low visibility of tissue structures in the breast peripheral areas. Such appearance discrepancies can affect image interpretation and may not be desirable for computer aided mammography, leading to incorrect diagnosis and/or detection which can have a negative impact on sensitivity and specificity of screening mammography. This paper describes a novel mammographic image pre-processing method to improve image quality for analysis. An image selection process is incorporated to better target problematic images. The processed images show improved mammographic appearances not only in the breast periphery but also across the mammograms. Mammographic segmentation and risk/density classification were performed to facilitate a quantitative and qualitative evaluation. When using the processed images, the results indicated more anatomically correct segmentation in tissue specific areas, and subsequently better classification accuracies were achieved. Visual assessments were conducted in a clinical environment to determine the quality of the processed images and the resultant segmentation. The developed method has shown promising results. It is expected to be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment.

  8. Using quantum filters to process images of diffuse axonal injury

    NASA Astrophysics Data System (ADS)

    Pineda Osorio, Mateo

    2014-06-01

    Images corresponding to diffuse axonal injury (DAI) are processed using several quantum filters, such as Hermite, Weibull and Morse filters. Diffuse axonal injury is a particular, common and severe case of traumatic brain injury (TBI). DAI involves global damage to brain tissue on a microscopic scale and causes serious neurologic abnormalities. New imaging techniques provide excellent images showing the cellular damage associated with DAI. These images can be processed with quantum filters, which achieve high resolution of dendritic and axonal structures in both normal and pathological states. Using the Laplacian operators derived from the new quantum filters, excellent edge detectors for neurofiber resolution are obtained. Quantum processing of DAI images is carried out using computer algebra, specifically Maple. The construction of quantum filter plugins, which could be incorporated into the ImageJ software package to simplify its use for medical personnel, is proposed as a line of future research.
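
    As a generic stand-in for the Laplacian edge detection described (the ordinary discrete 5-point Laplacian, not the paper's quantum-filter Laplacians):

```python
import numpy as np
from scipy import ndimage

# Discrete 5-point Laplacian: zero response on flat regions, strong
# bipolar response at intensity steps, which makes it an edge detector.
lap = np.array([[0,  1, 0],
                [1, -4, 1],
                [0,  1, 0]], dtype=float)

img = np.zeros((16, 16))
img[:, 8:] = 1.0                                   # vertical step edge
edges = ndimage.convolve(img, lap, mode='nearest')
```

    The response is +1/-1 on the two columns flanking the step and zero elsewhere, so thresholding |edges| isolates the edge.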

  9. The Development of Sun-Tracking System Using Image Processing

    PubMed Central

    Lee, Cheng-Dar; Huang, Hong-Cheng; Yeh, Hong-Yih

    2013-01-01

    This article presents the development of an image-based Sun position sensor and the algorithm for aiming at the Sun precisely by using image processing. Four-quadrant light sensors and bar-shadow photo sensors have been used to detect the Sun's position in past years. Nevertheless, neither can maintain high accuracy under low-irradiation conditions. Using an image-based Sun position sensor with image processing can address this drawback. To verify the performance of the Sun-tracking system, comprising an image-based Sun position sensor and a tracking controller with an embedded image processing algorithm, we established a Sun image tracking platform and performed testing in the laboratory; the results show that the proposed Sun-tracking system can overcome the problem of unstable tracking in cloudy weather and achieve a tracking accuracy of 0.04°. PMID:23615582
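
    In its simplest form, locating the Sun in a frame is an intensity-weighted centroid of the brightest pixels; a sketch on a synthetic frame (the threshold fraction and test image are our assumptions, not the paper's algorithm):

```python
import numpy as np

def sun_centroid(img, frac=0.8):
    """Estimate the Sun-spot position as the intensity-weighted centroid
    of pixels brighter than frac * max."""
    mask = img >= frac * img.max()
    ys, xs = np.nonzero(mask)
    w = img[ys, xs].astype(float)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

# synthetic frame: Gaussian Sun spot centered at column 40, row 25
yy, xx = np.mgrid[0:64, 0:64]
frame = np.exp(-((xx - 40) ** 2 + (yy - 25) ** 2) / 18.0)
cx, cy = sun_centroid(frame)
```

    Weighting by intensity gives sub-pixel precision, which is what makes accuracies of a few hundredths of a degree plausible.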

  10. Airy-Kaup-Kupershmidt filters applied to digital image processing

    NASA Astrophysics Data System (ADS)

    Hoyos Yepes, Laura Cristina

    2015-09-01

    The Kaup-Kupershmidt operator is applied to the two-dimensional solution of the Airy-diffusion equation and the resulting filter is applied via convolution to image processing. The full procedure is implemented using Maple code with the package ImageTools. Some experiments were performed using a wide category of images including biomedical images generated by magnetic resonance, computarized axial tomography, positron emission tomography, infrared and photon diffusion. The Airy-Kaup-Kupershmidt filter can be used as a powerful edge detector and as powerful enhancement tool in image processing. It is expected that the Airy-Kaup-Kupershmidt could be incorporated in standard programs for image processing such as ImageJ.

  11. Mathematical Morphology Techniques For Image Processing Applications In Biomedical Imaging

    NASA Astrophysics Data System (ADS)

    Bartoo, Grace T.; Kim, Yongmin; Haralick, Robert M.; Nochlin, David; Sumi, Shuzo M.

    1988-06-01

    Mathematical morphology operations allow object identification based on shape and are useful for grouping a cluster of small objects into one object. Because of these capabilities, we have implemented and evaluated this technique for our study of Alzheimer's disease. The microscopic hallmark of Alzheimer's disease is the presence of brain lesions known as neurofibrillary tangles and senile plaques. These lesions have distinct shapes compared to normal brain tissue. Neurofibrillary tangles appear as flame-shaped structures, whereas senile plaques appear as circular clusters of small objects. In order to quantitatively analyze the distribution of these lesions, we have developed and applied the tools of mathematical morphology on the Pixar Image Computer. As a preliminary test of the accuracy of the automatic detection algorithm, a study comparing computer and human detection of senile plaques was performed by evaluating 50 images from 5 different patients. The results of this comparison demonstrate that the computer counts correlate very well with the human counts (correlation coefficient = .81). Now that the basic algorithm has been shown to work, the software will be optimized to improve its speed. Future improvements, such as local adaptive thresholding, will also be made to the image analysis routine to further improve the system's accuracy.
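
    The "grouping a cluster of small objects into one object" step can be sketched as a binary dilation followed by connected-component labeling (SciPy stand-ins for the Pixar implementation; the toy dot coordinates are our own):

```python
import numpy as np
from scipy import ndimage

# Two clusters of small dots, mimicking the punctate components of
# senile plaques; dilation merges each cluster into a single object.
img = np.zeros((24, 24), dtype=bool)
img[2, 2] = img[2, 5] = img[5, 3] = True           # cluster A
img[16, 16] = img[16, 19] = img[19, 17] = True     # cluster B

struct = ndimage.generate_binary_structure(2, 2)   # 8-connected 3x3 element
merged = ndimage.binary_dilation(img, structure=struct, iterations=2)
labels, n_objects = ndimage.label(merged, structure=struct)
```

    Labeling the dilated image counts two objects rather than six dots, which is exactly the cluster-to-object grouping the abstract describes.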

  12. Image data processing of earth resources management. [technology transfer

    NASA Technical Reports Server (NTRS)

    Desio, A. W.

    1974-01-01

    Various image processing and information extraction systems are described along with the design and operation of an interactive multispectral information system, IMAGE 100. Analyses of ERTS data over a number of U.S. sites, using IMAGE 100, are presented. The following analyses are included: (1) investigations of crop inventory and management using remote sensing; and (2) land cover classification for environmental impact assessments. Results show that useful information is provided by IMAGE 100 analyses of ERTS data in digital form.

  13. High resolution image processing on low-cost microcomputers

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1993-01-01

    Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide-range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.

  14. Protocols for Image Processing based Underwater Inspection of Infrastructure Elements

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; Pakrashi, Vikram

    2015-07-01

    Image processing can be an important tool for inspecting underwater infrastructure elements like bridge piers and pile wharves. Underwater inspection often relies on visual descriptions by divers who are not necessarily trained in the specifics of structural degradation, and the information may be vague, prone to error or open to significant variation of interpretation. Underwater vehicles, on the other hand, can be quite expensive to deploy for such inspections. Additionally, there is now significant encouragement globally towards the deployment of more offshore renewables such as wind turbines and wave devices, and the requirement for underwater inspection can be expected to increase significantly in the coming years. While the merit of image processing based assessment of the condition of underwater structures is understood to a certain degree, there is no existing protocol for such image based methods. This paper discusses and describes an image processing protocol for underwater inspection of structures. A stereo imaging method is considered in this regard, and protocols are suggested for image storage, imaging, diving and inspection. A combined underwater imaging protocol is finally presented, which can be used for a variety of situations within a range of image scenes and environmental conditions affecting the imaging. An example of detecting marine growth on a structure in Cork Harbour, Ireland, is presented.

  15. Recent developments in neutron imaging with applications for porous media research

    NASA Astrophysics Data System (ADS)

    Kaestner, Anders P.; Trtik, Pavel; Zarebanadkouki, Mohsen; Kazantsev, Daniil; Snehota, Michal; Dobson, Katherine J.; Lehmann, Eberhard H.

    2016-09-01

    Computed tomography has become a routine method for probing processes in porous media, and the use of neutron imaging is especially suited to the study of the dynamics of hydrogenous fluids, and of fluids in a high-density matrix. In this paper we give an overview of recent developments in both instrumentation and methodology at the neutron imaging facilities NEUTRA and ICON at the Paul Scherrer Institut. Increased acquisition rates coupled to new reconstruction techniques improve the information output for fewer projection data, which leads to higher volume acquisition rates. Together, these developments yield significantly higher spatial and temporal resolutions, making it possible to capture finer details in the spatial distribution of the fluid, and to increase the acquisition rate of 3-D CT volumes. The ability to add a second imaging modality, e.g., X-ray tomography, further enhances the feature and process information that can be collected, and these features are ideal for dynamic experiments of fluid distribution in porous media. We demonstrate the performance for a selection of experiments carried out at our neutron imaging instruments.

  16. Monitoring Car Drivers' Condition Using Image Processing

    NASA Astrophysics Data System (ADS)

    Adachi, Kazumasa; Yamamto, Nozomi; Yamamoto, Osami; Nakano, Tomoaki; Yamamoto, Shin

    We have developed a car driver monitoring system for measuring drivers' consciousness, with which we aim to reduce car accidents caused by driver drowsiness. The system consists of three subsystems: an image capturing system with a pulsed infrared CCD camera; a system for detecting the blinking waveform from the images using a neural network, with which we can extract images of the face and eye areas; and a system for measuring drivers' consciousness by analyzing the waveform with a fuzzy inference technique and other methods. The third subsystem first extracts three factors from the waveform and analyzes them with a statistical method, while our previous system used only one factor. Our experiments showed that the three-factor method used this time was more effective for measuring drivers' consciousness than the one-factor method described in the previous paper. Moreover, the method is more suitable for fitting the parameters of the system to each individual driver.

  17. Future trends in image processing software and hardware

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1979-01-01

    JPL image processing applications are examined, considering future trends in fields such as planetary exploration, electronics, astronomy, computers, and Landsat. Attention is given to adaptive search and interrogation of large image data bases, the display of multispectral imagery recorded in many spectral channels, merging data acquired by a variety of sensors, and developing custom large scale integrated chips for high speed intelligent image processing user stations and future pipeline production processors.

  18. Image Processing In Laser-Beam-Steering Subsystem

    NASA Technical Reports Server (NTRS)

    Lesh, James R.; Ansari, Homayoon; Chen, Chien-Chung; Russell, Donald W.

    1996-01-01

    Conceptual design of image-processing circuitry developed for proposed tracking apparatus described in "Beam-Steering Subsystem For Laser Communication" (NPO-19069). In proposed system, desired frame rate achieved by "windowed" readout scheme in which only pixels containing and surrounding two spots read out and others skipped without being read. Image data processed rapidly and efficiently to achieve high frequency response.

  19. Mimos: a description framework for exchanging medical image processing results.

    PubMed

    Aubry, F; Todd-Pokropek, A

    2001-01-01

    Image processing plays an increasingly important role in the use of medical images, both for routine and for research purposes, due to the growing interest in functional studies (PET, MR, etc.). Unfortunately, there exist nearly as many formats for coding data and results as there are image processing procedures. While DICOM presently supports a kind of structured reporting of image studies, it does not take into account the semantics of the image handling domain, which can impede the exchange and interpretation of processing results. In order to facilitate the use of image processing results, we have designed a framework for representing them. This framework, whose principle is called an "ontology" in the literature, extends the formalism we used in our previous work on image databases. It permits a systematic representation of the entities and information involved in the processing: not only input data, command parameters and output data, but also software and hardware descriptions, and the relationships between these different parameters. Consequently, this framework allows the building of standardized documents that can be exchanged among various users. As the framework is based on a formal grammar, documents can be encoded using XML; they are thus compatible with Internet/intranet technology. In this paper, the main characteristics of the framework are presented and illustrated. We also discuss implementation issues involved in integrating these documents, and the correlated images, for handling with a standard Web browser.

  20. Assessment of vessel diameters for MR brain angiography processed images

    NASA Astrophysics Data System (ADS)

    Moraru, Luminita; Obreja, Cristian-Dragos; Moldovanu, Simona

    2015-12-01

    The motivation was to develop an assessment method to measure (in)visible differences between the original and the processed images in MR brain angiography, as a way of evaluating the status of vessel segments (i.e. the existence of occlusions or of intracerebral vessels damaged by aneurysms). Generally, the image quality is limited, so we improve the performance of the evaluation through digital image processing. The goal is to determine the processing method that allows the most accurate assessment of patients with cerebrovascular diseases. A total of 10 MR brain angiography images were processed with the following techniques: histogram equalization, Wiener filtering, linear contrast adjustment, contrast-limited adaptive histogram equalization, bias correction and the Marr-Hildreth filter. Each original image and its processed versions were analyzed in a stacking procedure so that the same vessel and its corresponding diameter were measured in each. Original and processed images were evaluated by measuring the vessel diameter (in pixels) along an established direction and at a precise anatomic location. The vessel diameter was calculated using an ImageJ plugin. Mean diameter measurements differ significantly across the same segment and across processing techniques. The best results were provided by the Wiener filter and linear contrast adjustment, and the worst by the Marr-Hildreth filter.
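
    Of the techniques compared, histogram equalization is the simplest to sketch: map each intensity through the image's own cumulative distribution (a generic implementation, not the one used in the study):

```python
import numpy as np

def hist_equalize(img, bins=256):
    """Map intensities through the empirical CDF so that the output
    histogram is approximately uniform on [0, 1]."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                                  # normalize CDF to [0, 1]
    out = np.interp(img.ravel(), edges[:-1], cdf)   # intensity -> CDF value
    return out.reshape(img.shape)

img = np.random.default_rng(2).random((32, 32)) ** 3   # skewed intensities
eq = hist_equalize(img)
```

    Because the mapping is monotone, vessel boundaries keep their ordering; only the contrast between them is redistributed.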

  1. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  2. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
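
    The transformation the film implements optically can be checked numerically: a pointwise log turns multiplicative speckle into additive, signal-independent noise. The gamma speckle model below is a common assumption for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.linspace(1.0, 100.0, 10_000)           # noise-free intensity ramp
speckle = rng.gamma(shape=4.0, scale=0.25,
                    size=signal.size)              # unit-mean multiplicative noise
observed = signal * speckle                        # multiplicative noise model

log_obs = np.log(observed)                         # = log(signal) + log(speckle)
residual = log_obs - np.log(signal)                # the additive noise term
```

    The residual equals log(speckle) exactly, independent of the signal level, so its variance is the same in dim and bright regions: the constant-variance property the abstract exploits for correlation.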

  3. IR camera system with an advanced image processing technologies

    NASA Astrophysics Data System (ADS)

    Ohkubo, Syuichi; Tamura, Tetsuo

    2016-05-01

    We have developed image processing technologies that resolve issues caused by the inherent characteristics of UFPA (uncooled focal plane array) sensors, in order to broaden their applications. For example, the large time constant of an uncooled IR (infrared) sensor limits its field of application, because motion blur arises when monitoring an object moving at high speed. The developed image processing technologies can eliminate the blur and retrieve an image almost equivalent to one observed with the object at rest. This processing is based on the idea that the output of the IR sensor can be construed as the convolution of the IR energy radiated by the object with the impulse response of the IR sensor. With knowledge of the impulse response and the moving speed of the object, the IR energy from the object can be deconvolved from the observed images. We have successfully retrieved blur-free images using an IR sensor with a 15 ms time constant under conditions in which the object was moving at a speed of about 10 pixels/60 Hz. Image processing for reducing FPN (fixed-pattern noise) has also been developed. A UFPA responsive in a narrow wavelength region, e.g., around 8 μm, is appropriate for measuring the surface of glass. However, it suffers from severe FPN due to its lower sensitivity compared with the 8-13 μm band. The developed image processing exploits images of the shutter itself, and can reduce FPN significantly.
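
    The deconvolution idea can be sketched in one dimension with a regularized inverse filter; the exponential impulse response and all constants below are illustrative assumptions, not the paper's sensor model:

```python
import numpy as np

def deconvolve(blurred, kernel, eps=1e-6):
    """Frequency-domain inverse filter with a small regularizer eps to
    avoid amplifying frequencies where the sensor response is weak."""
    n = len(blurred)
    H = np.fft.fft(kernel, n)
    B = np.fft.fft(blurred)
    X = B * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft(X))

# simulate a 1-D scan blurred by a 15-sample exponential impulse response
n = 64
h = np.exp(-np.arange(n) / 15.0)
h /= h.sum()                                       # unit-gain sensor response
x = np.zeros(n)
x[20:25] = 1.0                                     # sharp object profile
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))  # circular blur
x_hat = deconvolve(y, h)
```

    The recovered profile is close to the original, whereas the raw output is smeared by the sensor's slow decay.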

  4. Image processing system to analyze droplet distributions in sprays

    NASA Technical Reports Server (NTRS)

    Bertollini, Gary P.; Oberdier, Larry M.; Lee, Yong H.

    1987-01-01

    An image processing system was developed which automatically analyzes the size distributions in fuel spray video images. Images are generated by using pulsed laser light to freeze droplet motion in the spray sample volume under study. This coherent illumination source produces images which contain droplet diffraction patterns representing the droplets' degree of focus. The analysis is performed by extracting feature data describing the droplet diffraction patterns in the images. This allows the system to distinguish droplets from image anomalies and measure only those droplets considered in focus. Unique features of the system are the totally automated analysis and droplet feature measurement from the grayscale image. The feature extraction and image restoration algorithms used in the system are described. Preliminary performance data are also given for two experiments. One experiment compares a synthesized distribution measured manually and automatically. The second compares a real spray distribution measured using current methods against the automatic system.

  5. Processing of polarimetric SAR images. Final report

    SciTech Connect

    Warrick, A.L.; Delaney, P.A.

    1995-09-01

    The objective of this work was to develop a systematic method of combining multifrequency polarized SAR images. It is shown that the traditional methods of correlation, hard targets, and template matching fail to produce acceptable results. Hence, a new algorithm was developed and tested. The new approach combines the three traditional methods with an interpolation method. An example is shown that demonstrates the new algorithm's performance. The results are summarized and suggestions for future research are presented.

  6. Processing ISS Images of Titan's Surface

    NASA Technical Reports Server (NTRS)

    Perry, Jason; McEwen, Alfred; Fussner, Stephanie; Turtle, Elizabeth; West, Robert; Porco, Carolyn; Knowles, Ben; Dawson, Doug

    2005-01-01

    One of the primary goals of the Cassini-Huygens mission, in orbit around Saturn since July 2004, is to understand the surface and atmosphere of Titan. Surface investigations are primarily accomplished with RADAR, the Visual and Infrared Mapping Spectrometer (VIMS), and the Imaging Science Subsystem (ISS) [1]. The latter two use methane "windows", regions in Titan's reflectance spectrum where its atmosphere is most transparent, to observe the surface. For VIMS, this produces clear views of the surface near 2 and 5 microns [2]. ISS uses a narrow continuum-band filter (CB3) at 938 nanometers. While these methane windows provide our best views of the surface, the images produced are not as crisp as ISS images of satellites like Dione and Iapetus [3] because of the atmosphere. Given a reasonable estimate of contrast (approx. 30%), the apparent resolution of features is approximately 5 pixels due to the effects of the atmosphere and the Modulation Transfer Function of the camera [1,4]. The atmospheric haze also reduces contrast, especially with increasing emission angles [5].

  7. Image processing of underwater multispectral imagery

    USGS Publications Warehouse

    Zawada, D.G.

    2003-01-01

    Capturing in situ fluorescence images of marine organisms presents many technical challenges. The effects of the medium, as well as the particles and organisms within it, are intermixed with the desired signal. Methods for extracting and preparing the imagery for analysis are discussed in reference to a novel underwater imaging system called the low-light-level underwater multispectral imaging system (LUMIS). The instrument supports both uni- and multispectral collections, each of which is discussed in the context of an experimental application. In unispectral mode, LUMIS was used to investigate the spatial distribution of phytoplankton. A thin sheet of laser light (532 nm) induced chlorophyll fluorescence in the phytoplankton, which was recorded by LUMIS. Inhomogeneities in the light sheet led to the development of a beam-pattern-correction algorithm. Separating individual phytoplankton cells from a weak background fluorescence field required a two-step procedure consisting of edge detection followed by a series of binary morphological operations. In multispectral mode, LUMIS was used to investigate the bio-assay potential of fluorescent pigments in corals. Problems with the commercial optical-splitting device produced nonlinear distortions in the imagery. A tessellation algorithm, including an automated tie-point-selection procedure, was developed to correct the distortions. Only pixels corresponding to coral polyps were of interest for further analysis. Extraction of these pixels was performed by a dynamic global-thresholding algorithm.
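
The two-step cell-extraction procedure described above (edge detection followed by binary morphological operations) can be sketched as follows. The synthetic frame, threshold value, and choice of Sobel edges are illustrative assumptions, not details of the actual LUMIS pipeline.

```python
import numpy as np
from scipy import ndimage as ndi

def segment_cells(img, edge_thresh=0.1):
    """Step 1: edge detection; step 2: binary morphological cleanup."""
    gx = ndi.sobel(img, axis=1)
    gy = ndi.sobel(img, axis=0)
    edges = np.hypot(gx, gy) > edge_thresh           # cell boundaries
    mask = ndi.binary_closing(edges, structure=np.ones((3, 3)))
    mask = ndi.binary_fill_holes(mask)               # solid cell bodies
    mask = ndi.binary_opening(mask, structure=np.ones((3, 3)))
    return mask

# Weak background fluorescence with one bright "cell".
frame = np.full((64, 64), 0.05)
frame[20:30, 25:35] = 1.0
mask = segment_cells(frame)
```

The opening at the end removes isolated noise pixels, so only contiguous cell bodies survive into the final mask.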

  8. Need for image processing in infrared camera design

    NASA Astrophysics Data System (ADS)

    Allred, Lloyd G.; Jones, Martin H.

    2000-03-01

    While the value of image processing has long been recognized, it is usually applied during post-processing. For scientific applications, the presence of large noise errors, data dropout, and dead sensors would invalidate any conclusion drawn from the data until noise removal and sensor calibration have been accomplished. With the growing need for ruggedized, real-time image acquisition systems, including applications in automotive and aerospace, post-processing may not be an option. With post-processing, the operator does not have the opportunity to view the cleaned-up image. Focal plane arrays are plagued by bad sensors, high manufacturing costs, and low yields, often forcing a six-figure price tag. Perhaps infrared camera design is too serious an issue to leave to the camera manufacturers. Alternative camera designs using a single spinning mirror can yield perfect infrared images at rates up to 12,000 frames per second using a fraction of the hardware in current focal-plane arrays. Using a 768 x 5 sensor array, redundant 2048 x 768 images are produced by each row of the sensor array. Sensor arrays with flawed sensors would no longer need to be discarded because data from dead sensors can be dropped, thus increasing manufacturing yields and reducing manufacturing costs. Furthermore, very fast image processing chips are available, allowing for real-time morphological image processing (including real-time sensor calibration), thus significantly increasing thermal precision and making thermal imaging amenable to a greater variety of applications.

  9. Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry

    NASA Technical Reports Server (NTRS)

    Hong, Yie-Ming

    1973-01-01

    Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.
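
For restoration of images degraded by signal-dependent Poisson noise, the classical maximum-likelihood approach is Richardson-Lucy iteration. The sketch below is a generic illustration of that family of methods, not the specific procedure of this report; the point-source image and PSF are invented for the demonstration.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Iterative ML deconvolution for Poisson-distributed image data."""
    estimate = np.full(observed.shape, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# A point source blurred by a small normalized PSF, then restored.
truth = np.zeros((32, 32))
truth[16, 16] = 100.0
w = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
psf = np.outer(w, w) / 256.0
observed = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
```

Each iteration multiplies the current estimate by a back-projected ratio of observed to predicted counts, which keeps the estimate non-negative, a property matched to photon-counting data.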

  10. Omega: An Object-Oriented Image/Symbol Processing Environment

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark J.; Fong, Jennifer B.

    1989-01-01

    A Common Lisp software system to support integrated image and symbolic processing applications is described. The system, termed Omega, is implemented on a Symbolics Lisp Machine and is organized into modules to facilitate the development of user applications and software transportability. An object-oriented programming language similar to Symbolics Zetalisp/Flavors is implemented in Common Lisp and is used for creating symbolic objects known as tokens. Tokens are used to represent images, significant areas in images, and regions that define the spatial extent of the significant areas. The extent of point, line, and areal features is represented by polygons, label maps, boundary points, row- and column-oriented run-length encoded rasters, and bounding rectangles. Macros provide a common means for image processing functions and spatial operators to access spatial representations. The implementation of image processing, segmentation, and symbolic processing functions within Omega is described.

  11. Data management in pattern recognition and image processing systems

    NASA Technical Reports Server (NTRS)

    Zobrist, A. L.; Bryant, N. A.

    1976-01-01

    Data management considerations are important to any system which handles large volumes of data or where the manipulation of data is technically sophisticated. A particular problem is the introduction of image-formatted files into the mainstream of data processing applications. This report describes a comprehensive system for the manipulation of image, tabular, and graphical data sets which involves conversions between the various data types. A key characteristic is the use of image processing technology to accomplish data management tasks. Because of this, the term 'image-based information system' has been adopted.

  12. Theoretical Analysis of Radiographic Images by Nonstationary Poisson Processes

    NASA Astrophysics Data System (ADS)

    Tanaka, Kazuo; Yamada, Isao; Uchida, Suguru

    1980-12-01

    This paper deals with the noise analysis of radiographic images obtained in the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of the radiographic images containing the object information. The ensemble averages, the autocorrelation functions, and the Wiener spectrum densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples of the one-dimensional image are shown and the results are compared with those obtained under the assumption that the object image is related to the background noise by the additive process.
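
The defining property of the Poisson noise model used in this analysis is that the variance of the count at each point equals its mean, so the noise power tracks the local illuminance rather than adding a constant background. This is easy to verify numerically; the rates and sample count below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
illuminance = np.linspace(5.0, 50.0, 6)   # spatially varying mean rate
counts = rng.poisson(illuminance, size=(100_000, 6))
sample_mean = counts.mean(axis=0)
sample_var = counts.var(axis=0)
```

The per-position sample variance matches the sample mean to within sampling error, which is the signal-dependent behavior distinguishing this model from the additive-noise assumption discussed at the end of the abstract.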

  13. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  14. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets.

  16. Processing, analysis, recognition, and automatic understanding of medical images

    NASA Astrophysics Data System (ADS)

    Tadeusiewicz, Ryszard; Ogiela, Marek R.

    2004-07-01

    This paper presents new ideas for the automatic understanding of the semantic content of medical images. The idea under consideration can be viewed as the next step along a path that starts with capturing images in digital form as two-dimensional data structures, proceeds through image processing as a tool for enhancing image visibility and readability, applies image analysis algorithms for extracting selected features of images (or parts of images, e.g., objects), and ends with algorithms devoted to image classification and recognition. We explain why the procedures mentioned above cannot give full satisfaction in many important medical problems, where we need to understand the semantic sense of an image rather than merely describe it in terms of selected features and/or classes. The general idea of automatic image understanding is presented, together with remarks on successful applications of such ideas for increasing the potential and performance of computer vision systems dedicated to advanced medical image analysis. This is achieved by applying a linguistic description of the picture's merit content. We then use new AI methods to undertake the task of automatically understanding image semantics in intelligent medical information systems. Successfully obtaining the crucial semantic content of a medical image may contribute considerably to the creation of new intelligent multimedia cognitive medical systems. Thanks to the new idea of cognitive resonance between the stream of data extracted from the image using linguistic methods and expectations taken from the representation of medical knowledge, it is possible to understand the merit content of an image even if its form is very different from any known pattern.

  17. An Image Processing Approach to Linguistic Translation

    NASA Astrophysics Data System (ADS)

    Kubatur, Shruthi; Sreehari, Suhas; Hegde, Rajeshwari

    2011-12-01

    The art of translation is as old as written literature. Developments since the Industrial Revolution have influenced the practice of translation, nurturing schools, professional associations, and standards. In this paper, we propose a method for translating typed Kannada text (taken as an image) into its equivalent English text. The National Instruments (NI) Vision Assistant (version 8.5) has been used for Optical Character Recognition (OCR). We developed a new way of transliteration (which we call NIV transliteration) to simplify the training of characters. We also build a special type of dictionary for the purpose of translation.

  18. Detecting jaundice by using digital image processing

    NASA Astrophysics Data System (ADS)

    Castro-Ramos, J.; Toxqui-Quitl, C.; Villa Manriquez, F.; Orozco-Guillen, E.; Padilla-Vivanco, A.; Sánchez-Escobar, JJ.

    2014-03-01

    When strong jaundice is present, babies or adults must undergo clinical exams such as serum bilirubin tests, which can be traumatic for patients. Jaundice often accompanies liver diseases such as hepatitis or liver cancer. In order to avoid additional trauma, we propose to detect jaundice (icterus) in newborns or adults using a painless method. By acquiring digital color images of the palms, soles, and forehead, we analyze RGB attributes and diffuse reflectance spectra as parameters to characterize patients with or without jaundice, and we correlate those parameters with the bilirubin level. By applying a support vector machine, we distinguish between healthy and sick patients.

  19. High performance image processing of SPRINT

    SciTech Connect

    DeGroot, T.

    1994-11-15

    This talk will describe computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray Y-MP. SPRINT-3 will be 10 times faster. The parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers will be described.
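
The filtered back-projection kernel that such machines parallelize is compact enough to sketch. This is a generic serial reference in NumPy, not the SPRINT implementation; on a parallel machine the per-angle back-projection loop (or blocks of image rows) would be distributed across nodes. The disc phantom and geometry are illustrative.

```python
import numpy as np

def fbp(sinogram, angles):
    """Parallel-beam filtered back-projection with a ramp filter."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))                 # ramp filter |nu|
    filtered = np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1).real
    grid = np.arange(n) - n / 2
    xx, yy = np.meshgrid(grid, grid)
    recon = np.zeros((n, n))
    for proj, theta in zip(filtered, angles):
        # Detector coordinate of every pixel for this view, then smear.
        t = xx * np.cos(theta) + yy * np.sin(theta) + n / 2
        recon += np.interp(t, np.arange(n), proj)
    return recon * np.pi / len(angles)

# Analytic sinogram of a centered disc: projections are angle-independent.
n, r = 64, 10.0
s = np.arange(n) - n / 2
p = 2.0 * np.sqrt(np.maximum(r**2 - s**2, 0.0))
angles = np.linspace(0.0, np.pi, 90, endpoint=False)
recon = fbp(np.tile(p, (len(angles), 1)), angles)
```

The reconstruction recovers a value near the disc's unit density inside and near zero outside, up to discretization artifacts.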

  20. 3D/2D image registration using weighted histogram of gradient directions

    NASA Astrophysics Data System (ADS)

    Ghafurian, Soheil; Hacihaliloglu, Ilker; Metaxas, Dimitris N.; Tan, Virak; Li, Kang

    2015-03-01

    Three-dimensional (3D) to two-dimensional (2D) image registration is crucial in many medical applications such as image-guided evaluation of musculoskeletal disorders. One of the key problems is to estimate the 3D CT-reconstructed bone model positions (translation and rotation) which maximize the similarity between the digitally reconstructed radiographs (DRRs) and the 2D fluoroscopic images using a registration method. This problem is computationally intensive due to a large search space and the complicated DRR generation process. Also, finding a similarity measure which converges to the global optimum instead of local optima adds to the challenge. To circumvent these issues, most existing registration methods need a manual initialization, which requires user interaction and is prone to human error. In this paper, we introduce a novel feature-based registration method using the weighted histogram of gradient directions of images. This method simplifies the computation by searching the parameter space (rotation and translation) sequentially rather than simultaneously. In our numerical simulation experiments, the proposed registration algorithm was able to achieve sub-millimeter and sub-degree accuracies. Moreover, our method is robust to the initial guess. It can tolerate up to ±90° rotation offset from the globally optimal solution, which minimizes the need for human interaction to initialize the algorithm.
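
The core feature of the method, a histogram of gradient directions weighted by gradient magnitude, can be sketched as follows. The bin count and the synthetic ramp image are illustrative choices; the paper's exact weighting and binning scheme may differ.

```python
import numpy as np

def gradient_direction_histogram(img, n_bins=36):
    """Histogram of gradient directions, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                       # direction in [-pi, pi]
    hist, _ = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi),
                           weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

# A horizontal intensity ramp: every gradient points along +x (angle 0).
ramp = np.tile(np.arange(32.0), (32, 1))
hist = gradient_direction_histogram(ramp)
```

Because strong edges dominate the weighted histogram, the descriptor characterizes the dominant bone-contour orientations in a DRR or fluoroscopic image, which is what makes a sequential search over rotation feasible.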

  1. Evaluation of clinical image processing algorithms used in digital mammography.

    PubMed

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

    Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processing have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processing algorithms (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing algorithms, but at a lower significance level (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the
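
For context, the ROC figure of merit used in such observer studies, the area under the curve, is equivalent to the Mann-Whitney statistic on the rating data: the probability that a randomly chosen signal-present image is rated higher than a signal-absent one, counting ties as one half. A minimal sketch (the ratings are invented, and JAFROC itself involves additional localization bookkeeping not shown here):

```python
import numpy as np

def auc_from_ratings(signal_ratings, noise_ratings):
    """ROC area as the Mann-Whitney statistic over all rating pairs."""
    s = np.asarray(signal_ratings, float)[:, None]
    n = np.asarray(noise_ratings, float)[None, :]
    return float(np.mean((s > n) + 0.5 * (s == n)))

# Five-point ratings for images with and without a simulated cluster.
auc = auc_from_ratings([5, 4, 4, 3], [1, 2, 2, 3])
```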

  3. Real-time image processing architecture for robot vision

    NASA Astrophysics Data System (ADS)

    Persa, Stelian; Jonker, Pieter P.

    2000-10-01

    This paper presents a study of the impact of MMX technology and PIII Streaming SIMD (Single Instruction stream, Multiple Data stream) Extensions on image processing and machine vision applications, which, because of their hard real-time constraints, are an undoubtedly challenging task. A comparison with traditional scalar code and with another parallel SIMD architecture (the IMAP-VISION board) is discussed, with emphasis on the particular programming strategies for speed optimization. More precisely, we discuss the low-level and intermediate-level image processing algorithms which are best suited for parallel SIMD implementation. High-level image processing algorithms are more suitable for parallel implementation on MIMD architectures. While the IMAP-VISION system performs better because of its large number of processing elements, the MMX processor and PIII (with Streaming SIMD Extensions) remain good candidates for low-level image processing.

  4. Land image data processing requirements for the EOS era

    NASA Technical Reports Server (NTRS)

    Wharton, Stephen W.; Newcomer, Jeffrey A.

    1989-01-01

    Requirements are proposed for a hybrid approach to image analysis that combines the functionality of a general-purpose image processing system with the knowledge representation and manipulation capabilities associated with expert systems to improve the productivity of scientists in extracting information from remotely sensed image data. The overall functional objectives of the proposed system are to: (1) reduce the level of human interaction required on a scene-by-scene basis to perform repetitive image processing tasks; (2) allow the user to experiment with ad hoc rules and procedures for the extraction, description, and identification of the features of interest; and (3) facilitate the derivation, application, and dissemination of expert knowledge for target recognition whose scope of application is not necessarily limited to the image(s) from which it was derived.

  5. Ground control requirements for precision processing of ERTS images

    USGS Publications Warehouse

    Burger, Thomas C.

    1972-01-01

    When the first Earth Resources Technology Satellite (ERTS-A) flies in 1972, NASA expects to receive and bulk-process 9,000 images a week. From this deluge of images, a few will be selected for precision processing; that is, about 5 percent will be further treated to improve the geometry of the scene, both in the relative and absolute sense. Control points are required for this processing. This paper describes the control requirements for relating ERTS images to a reference surface of the earth. Enough background on the ERTS-A satellite is included to make the requirements meaningful to the user.

  6. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    PubMed

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. The proposed application and a sample result of hyperspectral image analysis are presented.

  8. Image pre-processing for optimizing automated photogrammetry performances

    NASA Astrophysics Data System (ADS)

    Guidi, G.; Gonizzi, S.; Micoli, L. L.

    2014-05-01

    The purpose of this paper is to analyze how optical pre-processing with polarizing filters and digital pre-processing with HDR imaging may improve the automated 3D modeling pipeline based on SFM and image matching, with special emphasis on optically non-cooperative surfaces of shiny or dark materials. Because of the automatic detection of homologous points, the presence of highlights due to shiny materials, or of nearly uniform dark patches produced by low-reflectance materials, may produce erroneous matching involving wrong 3D point estimations and, consequently, holes and topological errors in the mesh originated by the associated dense 3D cloud. This is due to the limited dynamic range of the 8-bit digital images that are matched with each other to generate 3D data. The same 256 levels can be employed more usefully if the actual dynamic range is compressed, avoiding luminance clipping in the darker and lighter image areas. Such an approach is considered here using both optical filtering and HDR processing with tone mapping, with experimental evaluation on different Cultural Heritage objects characterized by non-cooperative optical behavior. Three test images of each object were captured from different positions, changing the shooting conditions (filter/no filter) and the image processing (no processing/HDR processing), in order to have the same three camera orientations with different optical and digital pre-processing, and applying the same automated process to each photo set.
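
The dynamic-range compression step can be illustrated with the simplest global operator, a log tone map that squeezes an HDR radiance map into the 8-bit range without clipping either end. The operator and test values below are illustrative; the paper evaluates dedicated HDR tone-mapping processing, which typically uses more sophisticated local operators.

```python
import numpy as np

def tonemap_log(hdr, eps=1e-6):
    """Global log tone mapping: HDR radiance -> 8-bit image, no clipping."""
    log_img = np.log(hdr + eps)
    lo, hi = log_img.min(), log_img.max()
    scaled = (log_img - lo) / (hi - lo)            # normalize to [0, 1]
    return np.round(scaled * 255).astype(np.uint8)

# Radiances spanning five orders of magnitude still map into 0..255.
hdr = np.array([[0.01, 1.0], [10.0, 1000.0]])
ldr = tonemap_log(hdr)
```

Because the mapping is monotonic, relative ordering of intensities survives, while detail in both shadow and highlight regions remains within the 256 available levels instead of saturating.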

  9. Predictive images of postoperative levator resection outcome using image processing software

    PubMed Central

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    Purpose This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller’s muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Conclusion Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. PMID:27757008

  10. Subband/Transform MATLAB Functions For Processing Images

    NASA Technical Reports Server (NTRS)

    Glover, D.

    1995-01-01

    SUBTRANS software is package of routines implementing image-data-processing functions for use with MATLAB*(TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems. For example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.
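    The subband decomposition that SUBTRANS performs can be illustrated with a one-level 2-D Haar transform, the simplest block transform that splits an image into spatial-frequency subbands. SUBTRANS's actual transforms are not specified here; this is an independent sketch.

```python
import numpy as np

def haar_subbands(img):
    """One level of a 2-D Haar transform: split an even-sized image into
    four spatial-frequency subbands (LL approximation, LH/HL/HH details)."""
    a = img[0::2, :] + img[1::2, :]   # row sums
    d = img[0::2, :] - img[1::2, :]   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0   # low-low: 2x2 block averages
    hl = (a[:, 0::2] - a[:, 1::2]) / 4.0   # horizontal detail
    lh = (d[:, 0::2] + d[:, 1::2]) / 4.0   # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0   # diagonal detail
    return ll, lh, hl, hh

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar_subbands(img)
```

    Cascading the same function on the LL band yields the further decomposition into more subbands that the abstract mentions; the detail bands are what lossy compressors quantize most aggressively.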

  11. Quantum imaging as an ancilla-assisted process tomography

    NASA Astrophysics Data System (ADS)

    Ghalaii, M.; Afsary, M.; Alipour, S.; Rezakhani, A. T.

    2016-10-01

    We show how a recent experiment of quantum imaging with undetected photons can basically be described as a (partial) ancilla-assisted process tomography, in which the object is described by an amplitude-damping quantum channel. We propose a simplified quantum-circuit version of this scenario, which also enables one to recast quantum imaging in quantum computation language. Our analogy and analysis may help us to better understand the role of classical and/or quantum correlations in imaging experiments.

  12. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    CT image reconstruction techniques were applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson, and blind deconvolution, this approach is brand new. In this method, the coded-aperture processing is, for the first time, independent of the point spread function of the imaging diagnostic system. In this way, the technical obstacles caused in traditional coded-pinhole image processing by the uncertainty of the point spread function are overcome. Based on the theoretical study, simulations of penumbral imaging and image reconstruction were carried out and provided fairly good results. In the visible-light experiment, a point light source was used to irradiate a 5 mm × 5 mm object after diffuse and volume scattering, and the penumbral image was acquired with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good result.
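    For contrast, one of the traditional methods the abstract compares against, Richardson-Lucy deconvolution, can be sketched in a few lines. It needs a known point-spread function, which is exactly the dependence the CT-style approach removes; the Gaussian PSF and point-source scene below are synthetic.

```python
import numpy as np

def fft_convolve(img, psf):
    """Circular same-size convolution; the PSF's peak sits at index (0, 0)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

def richardson_lucy(blurred, psf, n_iter=50):
    """Classical Richardson-Lucy deconvolution. The PSF here is symmetric,
    so the adjoint (correlation) step equals convolution and no kernel
    flip is needed."""
    est = np.full_like(blurred, max(blurred.mean(), 1e-12))
    for _ in range(n_iter):
        ratio = blurred / np.maximum(fft_convolve(est, psf), 1e-12)
        est = est * fft_convolve(ratio, psf)
    return est

# synthetic scene: three point sources blurred by a Gaussian "penumbra"
n = 32
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 1.5 ** 2))
psf = np.fft.ifftshift(psf / psf.sum())      # move the normalized peak to the origin
truth = np.zeros((n, n))
truth[10, 12] = truth[16, 20] = truth[22, 8] = 1.0
blurred = fft_convolve(truth, psf)
restored = richardson_lucy(blurred, psf)
```

    Richardson-Lucy conserves total flux and sharpens the sources, but its quality degrades when the assumed PSF is wrong, which motivates the PSF-independent reconstruction described above.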

  13. Recent advances in imaging subcellular processes

    PubMed Central

    Myers, Kenneth A.; Janetopoulos, Christopher

    2016-01-01

    Cell biology came about with the ability to first visualize cells. As microscopy techniques advanced, the early microscopists became the first cell biologists to observe the inner workings and subcellular structures that control life. This ability to see organelles within a cell provided scientists with the first understanding of how cells function. The visualization of the dynamic architecture of subcellular structures now often drives questions as researchers seek to understand the intricacies of the cell. With the advent of fluorescent labeling techniques, better and new optical techniques, and more sensitive and faster cameras, a whole array of questions can now be asked. There has been an explosion of new light microscopic techniques, and the race is on to build better and more powerful imaging systems so that we can further our understanding of the spatial and temporal mechanisms controlling molecular cell biology. PMID:27408708

  14. Computer tomography imaging of fast plasmachemical processes

    SciTech Connect

    Denisova, N. V.; Katsnelson, S. S.; Pozdnyakov, G. A.

    2007-11-15

    Results are presented from experimental studies of the interaction of a high-enthalpy methane plasma bunch with gaseous methane in a plasmachemical reactor. The interaction of the plasma flow with the residual gas was visualized using streak imaging and computer tomography. Tomography was applied for the first time to reconstruct the spatial structure and dynamics of the reagent zones in the microsecond range by the maximum entropy method. The reagent zones were identified from the emission of atomic hydrogen (the Hα line) and molecular carbon (the Swan bands). The spatiotemporal behavior of the reagent zones was determined, and their relation to the shock-wave structure of the plasma flow was examined.

  15. Recent advances in imaging subcellular processes.

    PubMed

    Myers, Kenneth A; Janetopoulos, Christopher

    2016-01-01

    Cell biology came about with the ability to first visualize cells. As microscopy techniques advanced, the early microscopists became the first cell biologists to observe the inner workings and subcellular structures that control life. This ability to see organelles within a cell provided scientists with the first understanding of how cells function. The visualization of the dynamic architecture of subcellular structures now often drives questions as researchers seek to understand the intricacies of the cell. With the advent of fluorescent labeling techniques, better and new optical techniques, and more sensitive and faster cameras, a whole array of questions can now be asked. There has been an explosion of new light microscopic techniques, and the race is on to build better and more powerful imaging systems so that we can further our understanding of the spatial and temporal mechanisms controlling molecular cell biology. PMID:27408708

  16. An Image Database on a Parallel Processing Network.

    ERIC Educational Resources Information Center

    Philip, G.; And Others

    1991-01-01

    Describes the design and development of an image database for photographs in the Ulster Museum (Northern Ireland) that used parallelism from a transputer network. Topics addressed include image processing techniques; documentation needed for the photographs, including indexing, classifying, and cataloging; problems; hardware and software aspects;…

  17. Fingerprint pattern restoration by digital image processing techniques.

    PubMed

    Wen, Che-Yen; Yu, Chiu-Chung

    2003-09-01

    Fingerprint evidence plays an important role in solving criminal cases. However, defective (lacking information needed for completeness) or contaminated (including undesirable information) fingerprint patterns make the identifying and recognizing processes difficult. Unfortunately, this is the usual case. In the recognizing process (enhancement of patterns, or elimination of "false alarms", so that a fingerprint pattern can be searched in the Automated Fingerprint Identification System (AFIS)), chemical and physical techniques have been proposed to improve pattern legibility. In the identifying process, a fingerprint examiner can enhance contaminated (but not defective) fingerprint patterns under guidelines provided by the Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST), the Scientific Working Group on Imaging Technology (SWGIT), and an AFIS working group within the National Institute of Justice. Recently, image processing techniques have been successfully applied in forensic science. For example, we have applied image enhancement methods to improve the legibility of digital images such as fingerprints and vehicle plate numbers. In this paper, we propose a novel digital image restoration technique based on the AM (amplitude modulation)-FM (frequency modulation) reaction-diffusion method to restore defective or contaminated fingerprint patterns. This method shows potential for fingerprint pattern enhancement in the recognizing process (but not the identifying process). Synthetic and real images are used to show the capability of the proposed method. The results of enhancing fingerprint patterns by the manual process and by our method are evaluated and compared. PMID:14535661

  18. New Windows based Color Morphological Operators for Biomedical Image Processing

    NASA Astrophysics Data System (ADS)

    Pastore, Juan; Bouchet, Agustina; Brun, Marcel; Ballarin, Virginia

    2016-04-01

    Morphological image processing is well known as an efficient methodology for image processing and computer vision. With the wide use of color in many areas, interest in color perception and processing has been growing rapidly. Many models have been proposed to extend morphological operators to the field of color images, dealing with new problems not present in the binary and gray-level contexts. These solutions usually deal with the lattice structure of the color space, or provide it with total orders, in order to define basic operators with the required properties. In this work we propose a new locally defined ordering, in the context of window-based morphological operators, for the definition of erosion-like and dilation-like operators, which provides the desired properties expected from color morphology while avoiding some of the drawbacks of prior approaches. Experimental results show that the proposed color operators can be efficiently used for color image processing.
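    A window-based color erosion can be sketched with a simple lexicographic total order on (R, G, B). This is an illustrative ordering choice, not the locally defined ordering the paper proposes; like any vector ordering, it guarantees that every output pixel is an actual input color (no "false colors").

```python
import numpy as np

def color_erosion(img, radius=1):
    """Window-based color erosion: in each (2r+1)^2 window, pick the pixel
    that is minimal under a lexicographic order on (R, G, B)."""
    h, w, _ = img.shape
    pad = np.pad(img, ((radius, radius), (radius, radius), (0, 0)), mode='edge')
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1].reshape(-1, 3)
            # lexicographic argmin: last lexsort key is primary, so order is R, then G, then B
            idx = np.lexsort((win[:, 2], win[:, 1], win[:, 0]))[0]
            out[i, j] = win[idx]
    return out

img = np.full((5, 5, 3), 100.0)
img[2, 2] = [200.0, 150.0, 150.0]   # a bright speck with dominant R
out = color_erosion(img)
```

    The erosion removes the bright speck because every window containing it also contains a lexicographically smaller neighbor.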

  19. Image processing for flight crew enhanced situation awareness

    NASA Technical Reports Server (NTRS)

    Roberts, Barry

    1993-01-01

    This presentation describes the image processing work that is being performed for the Enhanced Situational Awareness System (ESAS) application. Specifically, the presented work supports the Enhanced Vision System (EVS) component of ESAS.

  20. Digital image processing for the earth resources technology satellite data.

    NASA Technical Reports Server (NTRS)

    Will, P. M.; Bakis, R.; Wesley, M. A.

    1972-01-01

    This paper discusses the problems of digital processing of the large volumes of multispectral image data that are expected to be received from the ERTS program. Correction of geometric and radiometric distortions is discussed, and a byte-oriented implementation is proposed. CPU timing estimates are given for a System/360 Model 67 and show that a processing throughput of 1000 image sets per week is feasible.

  1. ELAS: A powerful, general purpose image processing package

    NASA Technical Reports Server (NTRS)

    Walters, David; Rickman, Douglas

    1991-01-01

    ELAS is a software package which has been utilized as an image processing tool for more than a decade. It has been the source of several commercial packages. Now available on UNIX workstations it is a very powerful, flexible set of software. Applications at Stennis Space Center have included a very wide range of areas including medicine, forestry, geology, ecological modeling, and sonar imagery. It remains one of the most powerful image processing packages available, either commercially or in the public domain.

  2. The design of a distributed image processing and dissemination system

    SciTech Connect

    Rafferty, P.; Hower, L.

    1990-01-01

    The design and implementation of a distributed image processing and dissemination system were undertaken and accomplished as part of a prototype communication and intelligence (CI) system, the contingency support system (CSS), which is intended to support contingency operations of the Tactical Air Command. The system consists of six (6) Sun 3/180C workstations with integrated ITEX image processors and three (3) 3/50 diskless workstations located at four (4) system nodes (INEL, base, and mobiles). All 3/180C workstations are capable of image system server functions, whereas the 3/50s are image system clients only. Distribution is accomplished via both local and wide area networks using standard Defense Data Network (DDN) protocols (i.e., TCP/IP, et al.) and Defense Satellite Communication Systems (DSCS) compatible SHF Transportable Satellite Earth Terminals (TSET). Image applications utilize Sun's Remote Procedure Call (RPC) to facilitate the image system client and server relationships. The system provides functions to acquire, display, annotate, process, transfer, and manage images via an icon, panel, and menu oriented SunView-based user interface. Image spatial resolution is 512 × 480 with 8 bits/pixel black and white and 12/24 bits/pixel color depending on system configuration. Compression is used during various image display and transmission functions to reduce the dynamic range of image data to 12/6/3/2 bits/pixel depending on the application. Image acquisition is accomplished in real-time or near-real-time by special-purpose ITEX image hardware. As a result all image displays are highly interactive with attention given to subsecond response time. 3 refs., 7 figs.

  3. Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images

    NASA Astrophysics Data System (ADS)

    Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; English, Jayanne; Pu'uohau-Pummill, Kirk

    2007-02-01

    The quality of modern astronomical data and the agility of current image-processing software enable the visualization of data in a way that exceeds the traditional definition of an astronomical image. Two developments in particular have led to a fundamental change in how astronomical images can be assembled. First, the availability of high-quality multiwavelength and narrowband data allow for images that do not correspond to the wavelength sensitivity of the human eye, thereby introducing ambiguity in the usage and interpretation of color. Second, many image-processing software packages now use a layering metaphor that allows for any number of astronomical data sets to be combined into a color image. With this technique, images with as many as eight data sets have been produced. Each data set is intensity-scaled and colorized independently, creating an immense parameter space that can be used to assemble the image. Since such images are intended for data visualization, scaling and color schemes must be chosen that best illustrate the science. A practical guide is presented on how to use the layering metaphor to generate publication-ready astronomical images from as many data sets as desired. A methodology is also given on how to use intensity scaling, color, and composition to create contrasts in an image that highlight the scientific detail. Examples of image creation are discussed.
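    The layering metaphor described above can be sketched as an additive composite: each data set is intensity-scaled independently, tinted with its own color, and the layers are summed. The arcsinh stretch and the example colors are illustrative choices, not the authors' prescribed parameters.

```python
import numpy as np

def asinh_scale(data, soften=10.0):
    """Intensity-scale one data set with an arcsinh stretch, a common
    astronomical choice that keeps faint structure without clipping cores."""
    d = data - data.min()
    d = d / max(d.max(), 1e-12)
    return np.arcsinh(soften * d) / np.arcsinh(soften)

def layer_composite(datasets, colors):
    """Additive layering: each scaled data set is tinted with its own RGB
    color, the layers are summed, and the result is clipped to [0, 1]."""
    h, w = datasets[0].shape
    rgb = np.zeros((h, w, 3))
    for data, color in zip(datasets, colors):
        rgb += asinh_scale(data)[:, :, None] * np.asarray(color)[None, None, :]
    return np.clip(rgb, 0.0, 1.0)

# two synthetic narrowband "exposures" with sources at different positions
yy, xx = np.mgrid[0:32, 0:32]
band1 = np.exp(-((xx - 10) ** 2 + (yy - 10) ** 2) / 30.0)
band2 = np.exp(-((xx - 22) ** 2 + (yy - 22) ** 2) / 30.0)
rgb = layer_composite([band1, band2], [(1.0, 0.2, 0.2), (0.2, 0.6, 1.0)])
```

    Because each layer is scaled and colorized independently, any number of data sets can be stacked this way, which is the parameter space the article describes.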

  4. Digital interactive image analysis by array processing

    NASA Technical Reports Server (NTRS)

    Sabels, B. E.; Jennings, J. D.

    1973-01-01

    An attempt is made to draw a parallel between the existing geophysical data processing service industries and the emerging earth resources data support requirements. The relationship of seismic data analysis to ERTS data analysis is natural because in either case data is digitally recorded in the same format, resulting from remotely sensed energy which has been reflected, attenuated, shifted and degraded on its path from the source to the receiver. In the seismic case the energy is acoustic, ranging in frequencies from 10 to 75 cps, for which the lithosphere appears semi-transparent. In earth survey remote sensing through the atmosphere, visible and infrared frequency bands are being used. Yet the hardware and software required to process the magnetically recorded data from the two realms of inquiry are identical and similar, respectively. The resulting data products are similar.

  5. Multimission image processing and science data visualization

    NASA Technical Reports Server (NTRS)

    Green, William B.

    1993-01-01

    The Operational Science Analysis (OSA) functional area supports science instrument data display, analysis, visualization, and photo processing in support of flight operations of planetary spacecraft managed by the Jet Propulsion Laboratory (JPL). This paper describes the data products generated by the OSA functional area and the current computer system used to generate these data products. The objectives of a system upgrade now in progress are described. The design approach to the development of the new system is reviewed, including the use of the Unix operating system and X-Window display standards to provide platform independence, portability, and modularity within the new system. The new system should provide a modular and scalable capability supporting a variety of future missions at JPL.

  6. High Dynamic Range Processing for Magnetic Resonance Imaging

    PubMed Central

    Sukerkar, Preeti A.; Meade, Thomas J.

    2013-01-01

    Purpose To minimize feature loss in T1- and T2-weighted MRI by merging multiple MR images acquired at different TR and TE to generate an image with increased dynamic range. Materials and Methods High Dynamic Range (HDR) processing techniques from the field of photography were applied to a series of acquired MR images. Specifically, a method to parameterize the algorithm for MRI data was developed and tested. T1- and T2-weighted images of a number of contrast agent phantoms and a live mouse were acquired with varying TR and TE parameters. The images were computationally merged to produce HDR-MR images. All acquisitions were performed on a 7.05 T Bruker PharmaScan with a multi-echo spin echo pulse sequence. Results HDR-MRI delineated bright and dark features that were either saturated or indistinguishable from background in standard T1- and T2-weighted MRI. The increased dynamic range preserved intensity gradation over a larger range of T1 and T2 in phantoms and revealed more anatomical features in vivo. Conclusions We have developed and tested a method to apply HDR processing to MR images. The increased dynamic range of HDR-MR images as compared to standard T1- and T2-weighted images minimizes feature loss caused by magnetization recovery or low SNR. PMID:24250788
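    The merging step can be sketched as a per-pixel weighted average of the acquisitions, using the "hat" weighting common in photographic HDR that de-emphasizes saturated and near-noise-floor values. The weighting function is a stand-in assumption; the paper develops its own parameterization for MRI data.

```python
import numpy as np

def hdr_merge(images, eps=1e-6):
    """Merge several acquisitions of the same slice into one image of
    increased dynamic range, weighting each pixel by a 'hat' function
    that peaks at mid-intensity."""
    stack = np.stack([im.astype(float) for im in images])
    norm = stack / stack.max()
    weights = 1.0 - (2.0 * norm - 1.0) ** 2      # 0 at the extremes, 1 at mid-grey
    merged = (weights * stack).sum(axis=0) / (weights.sum(axis=0) + eps)
    return merged

# synthetic pair: a low-signal acquisition and a partially saturated one
base = np.linspace(10.0, 240.0, 64).reshape(8, 8)
short = np.clip(base * 0.5, 0, 255)
long_ = np.clip(base * 2.0, 0, 255)
merged = hdr_merge([short, long_])
```

    Where one acquisition is saturated the other dominates the average, which is how features lost to magnetization recovery or low SNR in a single weighting survive in the merged image.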

  7. Image processing in an enhanced and synthetic vision system

    NASA Astrophysics Data System (ADS)

    Mueller, Rupert M.; Palubinskas, Gintautas; Gemperlein, Hans

    2002-07-01

    'Synthetic Vision' and 'Sensor Vision' complement each other to form an ideal system for the pilot's situation awareness. To fuse the two data sets, the sensor images are first segmented by a k-means algorithm and features are then extracted by blob analysis. These image features are compared with features of the projected airport data using fuzzy logic, in order to identify the runway in the sensor image and to improve the aircraft navigation data. This process is necessary because of inaccurate input data, i.e., the position and attitude of the aircraft. After the runway is identified, obstacles can be detected using the sensor image. The extracted information is presented to the pilot's display system and combined with the corresponding information from the MMW radar sensor in a subsequent fusion processor. A real-time image processing procedure is discussed and demonstrated with IR measurements from a FLIR system during landing approaches.
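    The first stage of that pipeline, k-means segmentation of the sensor image, can be sketched on pixel intensities alone. The feature set and initialization used in the actual system are not specified here; this is a generic intensity-only sketch.

```python
import numpy as np

def kmeans_segment(img, k=3, n_iter=20, seed=0):
    """Segment a grey-level image by k-means clustering of pixel
    intensities, returning a label map and the cluster centers."""
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 1).astype(float)
    centers = rng.choice(pixels[:, 0], size=k, replace=False)
    for _ in range(n_iter):
        # assign each pixel to its nearest center, then update the centers
        labels = np.argmin(np.abs(pixels - centers[None, :]), axis=1)
        for c in range(k):
            members = pixels[labels == c, 0]
            if members.size:
                centers[c] = members.mean()
    labels = np.argmin(np.abs(pixels - centers[None, :]), axis=1)
    return labels.reshape(img.shape), centers

# synthetic frame with three intensity populations (e.g. sky, runway, grass)
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(m, 2.0, 100) for m in (20.0, 120.0, 220.0)]).reshape(10, 30)
labels, centers = kmeans_segment(img, k=3)
```

    Blob analysis would then run on the connected components of the label map to extract the features that are matched against the projected airport data.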

  8. Processing infrared images for target detection: A literature study

    NASA Astrophysics Data System (ADS)

    Alblas, B. P.

    1988-07-01

    Methods of image processing applied to IR images to obtain better detection and/or recognition of military targets, particularly vehicles, are reviewed. The following subjects are dealt with: histogram specification, scanline degradation, correlation, clutter and noise. Only a few studies deal with the effects of image processing on human performance. Most of the literature concerns computer vision. Local adaptive and image dependent techniques appear to be the most promising methods of obtaining higher observation performance. In particular the size-contrast box filter and histogram specification methods seem to be suitable. There is a need for a generally applicable definition of image quality and clutter level to evaluate the utility of a specified algorithm. Proposals for further research are given.

  9. Particle sizing in rocket motor studies utilizing hologram image processing

    NASA Technical Reports Server (NTRS)

    Netzer, David; Powers, John

    1987-01-01

    A technique of obtaining particle size information from holograms of combustion products is described. The holograms are obtained with a pulsed ruby laser through windows in a combustion chamber. The reconstruction is done with a krypton laser with the real image being viewed through a microscope. The particle size information is measured with a Quantimet 720 image processing system which can discriminate various features and perform measurements of the portions of interest in the image. Various problems that arise in the technique are discussed, especially those that are a consequence of the speckle due to the diffuse illumination used in the recording process.

  10. Using image processing techniques on proximity probe signals in rotordynamics

    NASA Astrophysics Data System (ADS)

    Diamond, Dawie; Heyns, Stephan; Oberholster, Abrie

    2016-06-01

    This paper proposes a new approach to processing proximity probe signals in rotordynamic applications. It is argued that the signal can be interpreted as a one-dimensional image, so that existing image processing techniques can be used to gain information about the object being measured. Results from one application are presented: rotor blade tip deflections can be calculated by localizing phase information in this one-dimensional image. It is experimentally shown that the newly proposed method performs more accurately than standard techniques, especially where the sampling rate of the data acquisition system is inadequate by conventional standards.
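    The idea of localizing a feature in the 1-D "image" to better than one sample can be illustrated by cross-correlating against a reference pulse and refining the peak with parabolic interpolation. This is an illustrative stand-in for the paper's phase-localization method, on a synthetic Gaussian pulse.

```python
import numpy as np

def subsample_delay(sig, ref):
    """Locate a pulse to sub-sample precision: cross-correlate against a
    reference, then refine the integer peak with a three-point parabola."""
    corr = np.correlate(sig, ref, mode='full')
    k = int(np.argmax(corr))
    y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
    shift = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)   # parabolic vertex offset
    return k - (len(ref) - 1) + shift              # lag of the refined peak

t = np.arange(200.0)
ref = np.exp(-((t - 60.0) ** 2) / (2 * 4.0 ** 2))   # reference blade-passage pulse
sig = np.exp(-((t - 65.3) ** 2) / (2 * 4.0 ** 2))   # same pulse delayed by 5.3 samples
delay = subsample_delay(sig, ref)
```

    Sub-sample localization like this is precisely what lets the approach outperform standard techniques when the acquisition sampling rate is nominally inadequate.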

  11. Quantum processing of images by continuous wave optical parametric amplification.

    PubMed

    Lopez, L; Treps, N; Chalopin, B; Fabre, C; Maître, A

    2008-01-11

    We have experimentally shown that a degenerate optical parametric oscillator pumped by a cw laser, inserted in a cavity having degenerate transverse modes such as a hemiconfocal or confocal cavity, and operating below the oscillation threshold in the regime of phase sensitive amplification, is able to process input images of various shapes in the quantum regime. More precisely, when deamplified, the image is amplitude squeezed; when amplified, its two polarization components are intensity correlated at the quantum level. In addition, the amplification process of the images is shown to take place in the noiseless regime.

  12. Digital image processing of bone - Problems and potentials

    NASA Technical Reports Server (NTRS)

    Morey, E. R.; Wronski, T. J.

    1980-01-01

    The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.

  13. Anomalous diffusion process applied to magnetic resonance image enhancement.

    PubMed

    Senra Filho, A C da S; Salmon, C E Garrido; Murta Junior, L O

    2015-03-21

    Diffusion processes are widely applied to digital image enhancement, both directly, by introducing the diffusion equation as in the anisotropic diffusion (AD) filter, and indirectly, by convolution as in the Gaussian filter. The anomalous diffusion process (ADP), given by a nonlinear relationship in the diffusion equation and characterized by an anomalous parameter q, is supposed to be consistent with inhomogeneous media. Although the classic diffusion process is widely studied and effective in various image settings, the effectiveness of ADP for image enhancement is still unknown. In this paper we propose anomalous diffusion filters in both isotropic (IAD) and anisotropic (AAD) forms for magnetic resonance imaging (MRI) enhancement. Filters based on a discrete implementation of anomalous diffusion were applied to noisy T2-weighted MR images (brain, chest and abdominal) in order to quantify SNR gains, estimating the performance of the proposed anomalous filters when realistic noise is added to those images. Results show that for images containing complex structures, e.g. brain structures, anomalous diffusion presents the highest enhancement when compared to the classical diffusion approach. Furthermore, ADP presented a more effective enhancement for images containing Rayleigh and Gaussian noise. The anomalous filters showed an ability to preserve anatomic edges and an SNR improvement of 26% for brain images, compared to the classical filter. In addition, the AAD and IAD filters showed optimum results for noise distributions that appear in extreme situations in MRI, i.e. in low-SNR images with an approximately Rayleigh noise distribution, and in high-SNR images with Gaussian or noncentral χ noise distributions. The AAD and IAD filters showed the best results in the parametric range 1.2 < q < 1.6, suggesting that the anomalous diffusion regime is more suitable for MRI. This study indicates the proposed anomalous filters as promising approaches for qualitative and quantitative MRI enhancement.
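    An anomalous-flavored anisotropic filter can be sketched as a Perona-Malik scheme whose edge-stopping function is a Tsallis q-exponential, recovering the classical exponential diffusivity as q → 1. The exact discretization and diffusivity used in the paper may differ; this is a sketch under that assumption.

```python
import numpy as np

def q_exponential(u, q):
    """Tsallis q-exponential; reduces to exp(u) as q -> 1."""
    if abs(q - 1.0) < 1e-9:
        return np.exp(u)
    base = np.maximum(1.0 + (1.0 - q) * u, 0.0)
    return base ** (1.0 / (1.0 - q))

def anomalous_diffusion(img, q=1.4, kappa=20.0, dt=0.15, n_iter=20):
    """Perona-Malik-style anisotropic diffusion with a q-exponential
    edge-stopping function (explicit scheme, 4-neighbor stencil)."""
    u = img.astype(float).copy()

    def g(d):
        # diffusivity in (0, 1]: near 1 in flat regions, small across edges
        return q_exponential(-(d / kappa) ** 2, q)

    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(0)
noisy = 100.0 + rng.normal(0.0, 10.0, (32, 32))   # flat phantom plus noise
smooth = anomalous_diffusion(noisy)
```

    With q in the 1.2–1.6 range the diffusivity falls off more slowly than the classical exponential, which is one way to interpret the parametric range the study found optimal.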

  14. Dielectric barrier discharge image processing by Photoshop

    NASA Astrophysics Data System (ADS)

    Dong, Lifang; Li, Xuechen; Yin, Zengqian; Zhang, Qingli

    2001-09-01

    In this paper, the filamentary pattern of a dielectric barrier discharge has been processed using Photoshop, and the coordinates of each filament can be obtained. Using Photoshop, two different methods have been used to analyze the spatial order of pattern formation in the dielectric barrier discharge. The results show that the distance between neighboring filaments at U = 14 kV and d = 0.9 mm is about 1.8 mm. Within the experimental error, the results from the two methods are similar.

  15. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
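    One off-line refinement of the kind described, sub-pixel localization of the pupil, can be sketched as an intensity-weighted centroid of the thresholded dark-pupil region. This is an illustrative sketch, not the authors' exact algorithm; the synthetic frame and threshold are assumptions.

```python
import numpy as np

def pupil_center(frame, threshold):
    """Sub-pixel pupil localization: threshold the dark pupil, then take
    the intensity-weighted centroid of the mask (darker pixels count more)."""
    weights = np.where(frame < threshold, threshold - frame, 0.0)
    ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    total = weights.sum()
    return (xs * weights).sum() / total, (ys * weights).sum() / total

# synthetic eye image: bright background with a dark Gaussian "pupil"
yy, xx = np.mgrid[0:40, 0:40]
cx_true, cy_true = 20.3, 14.7
frame = 200.0 - 150.0 * np.exp(
    -((xx - cx_true) ** 2 + (yy - cy_true) ** 2) / (2 * 3.0 ** 2))
cx, cy = pupil_center(frame, threshold=120.0)
```

    Because the centroid pools over many pixels, it recovers the pupil position to a small fraction of a pixel, which is how off-line analysis can beat the resolution of naive real-time trackers.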

  16. Optimizing the processing and presentation of PPCR imaging

    NASA Astrophysics Data System (ADS)

    Davies, Andrew G.; Cowen, Arnold R.; Parkin, Geoff J. S.; Bury, Robert F.

    1996-03-01

    Photostimulable phosphor computed radiography (CR) is becoming an increasingly popular image acquisition system. The acceptability of this technique, diagnostically, ergonomically, and economically, is highly influenced by the method by which the image data are presented to the user. Traditional CR systems utilize an 11" × 14" film hardcopy format and can place two images per exposure onto this film, which does not correspond to the sizes and presentations provided by conventional techniques. It is also the authors' experience that the image enhancement algorithms provided by traditional CR systems do not provide optimal image presentation. An alternative image enhancement algorithm was therefore developed, along with a number of hardcopy formats, designed to match the requirements of the image reporting process. The new image enhancement algorithm, called dynamic range reduction (DRR), is designed to provide a single presentation per exposure, maintaining the appearance of a conventional radiograph while optimizing the rendition of diagnostically relevant features within the image. The algorithm was developed on a Sun SPARCstation, but later ported to a Philips EasyVisionRAD workstation. Print formats were developed on the EasyVision to improve the acceptability of the CR hardcopy. For example, for mammographic examinations, four mammograms (a cranio-caudal and a medio-lateral view of each breast) are taken for each patient, with all images placed onto a single sheet of 14" × 17" film. The new composite format provides a more suitable image presentation for reporting and is more economical to produce. It is the use of enhanced image processing and presentation that has enabled all mammography undertaken within the general infirmary to be performed using the CR/EasyVisionRAD DRR/3M 969 combination, without recourse to conventional film/screen mammography.
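    The general shape of a DRR-style algorithm, compressing the low-frequency base of the image while keeping full-contrast detail, can be sketched as follows. The published DRR algorithm's exact form is not reproduced here; the base/detail split and the gain are assumptions.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Gaussian low-pass via the FFT (periodic boundaries), used to
    estimate the low-frequency 'base' of the image."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    H = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def dynamic_range_reduction(img, sigma=16.0, gain=0.5):
    """DRR-style sketch: compress the low-frequency base toward its mean
    while adding back the unattenuated detail, so dense and lucent regions
    both stay on the useful latitude of the hardcopy."""
    base = gaussian_blur(img, sigma)
    detail = img - base
    return gain * (base - base.mean()) + base.mean() + detail

# smooth large-scale variation, as across a chest or breast image
yy = np.arange(64)[:, None]
img = 100.0 + 50.0 * np.sin(2 * np.pi * yy / 64) * np.ones((1, 64))
out = dynamic_range_reduction(img)
```

    The overall range shrinks while mean brightness and local detail contrast are preserved, which is the qualitative behavior the abstract attributes to DRR.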

  17. Parallel processing of ADS40 images on PC network

    NASA Astrophysics Data System (ADS)

    Qiu, Feng; Duan, Yansong; Zhang, Jianqing

    2009-10-01

    In this paper, we aim to design a parallel processing system based on an economical hardware environment to optimize the photogrammetric processing of Leica ADS40 images, drawing on the ideas and methods of parallel computing. We adopt the PCAM principle of parallel computing to design and implement a test system for parallel processing of ADS40 images. The test system consists of common personal computers and a local gigabit network. It can make full use of network computing and storage resources, at an economical and practical cost, to deal with ADS40 images. Experiments show that it achieves a significant improvement in processing efficiency. Furthermore, the robustness and compatibility of this system are much higher than those of a stand-alone computer system because of the system's network-based redundancy. In conclusion, a parallel processing system based on a PC network offers a much more efficient solution for ADS40 photogrammetric production.
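    The PCAM decomposition can be sketched as: partition the image into strips, map a per-strip task over a pool of workers, and reassemble. A thread pool on one machine stands in for the networked PCs of the paper, and the per-strip contrast stretch is a placeholder for a real photogrammetric step.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def enhance_strip(strip):
    """Per-strip work: a simple min-max contrast stretch stands in for a
    real ADS40 processing step."""
    lo, hi = strip.min(), strip.max()
    return (strip - lo) / (hi - lo + 1e-9)

def process_parallel(image, n_workers=4):
    """PCAM-style pipeline: Partition into row strips, map the task over a
    pool of workers (Communicate/Agglomerate are trivial here), and Map
    the results back together."""
    strips = np.array_split(image, n_workers, axis=0)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        done = list(pool.map(enhance_strip, strips))
    return np.vstack(done)

rng = np.random.default_rng(0)
image = rng.uniform(0.0, 4095.0, size=(64, 48))   # synthetic 12-bit strip data
result = process_parallel(image)
```

    Swapping the thread pool for processes on networked machines changes only the mapping layer, which is the portability argument the paper makes for a PC-network design.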

  18. Negative tone imaging process and materials for EUV lithography

    NASA Astrophysics Data System (ADS)

    Tarutani, Shinji; Nihashi, Wataru; Hirano, Shuuji; Yokokawa, Natsumi; Takizawa, Hiroo

    2013-03-01

    The advantages of the NTI process in EUV are demonstrated by an optical simulation method for 0.25NA and 0.33NA illumination systems, from the viewpoint of optical aerial image quality and photon density. The extendability of NTI to higher-NA systems is considered for further tight-pitch and small contact-hole imaging capability. Process and material design strategies for NTI are discussed in comparison with the ArF NTI process and materials, and challenges in EUV materials dedicated to the NTI process are discussed as well. A new polymer was designed for the EUV-NTD process, and resists formulated with the new polymer demonstrated clear advantages in resolution and sensitivity for isolated trench imaging, and 24 nm half-pitch resolution at dense C/H, on a 0.3NA MET tool.

  19. Applications of nuclear magnetic resonance imaging in process engineering

    NASA Astrophysics Data System (ADS)

    Gladden, Lynn F.; Alexander, Paul

    1996-03-01

    During the past decade, the application of nuclear magnetic resonance (NMR) imaging techniques to problems of relevance to the process industries has been identified. The particular strengths of NMR techniques are their ability to distinguish between different chemical species and to yield information simultaneously on the structure, concentration distribution and flow processes occurring within a given process unit. In this paper, examples of specific applications in the areas of materials and food processing, transport in reactors and two-phase flow are discussed. One specific study, that of the internal structure of a packed column, is considered in detail. This example is reported to illustrate the extent of new, quantitative information of generic importance to many processing operations that can be obtained using NMR imaging in combination with image analysis.

  20. Color image processing and object tracking workstation

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Paulick, Michael J.

    1992-01-01

    A system is described for automatic and semiautomatic tracking of objects on film or videotape, developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include a 16 mm film projector, a lens system, a video camera, an S-VHS tape deck, a frame grabber, and some storage and output devices. Both the projector and tape deck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the projector or tape-deck frame incrementation, grabs a frame, processes it, locates the edges of the objects being tracked, and stores the coordinates in a file. This process is repeated until the last frame is reached. Three representative applications are described. These represent typical uses and include tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.

  1. Digital processing of side-scan sonar data with the Woods Hole image processing system software

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing and digitally mosaicking high- and low-resolution side-scan sonar data. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for processing side-scan sonar data. This report describes the steps required to process the collected data and to produce an image that has equal along- and across-track resolution.

  2. 3D image fusion and guidance for computer-assisted bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, W. E.; Rai, L.; Merritt, S. A.; Lu, K.; Linger, N. T.; Yu, K. C.

    2005-11-01

    The standard procedure for diagnosing lung cancer involves two stages. First, the physician evaluates a high-resolution three-dimensional (3D) computed-tomography (CT) chest image to produce a procedure plan. Next, the physician performs bronchoscopy on the patient, which involves navigating the bronchoscope through the airways to planned biopsy sites. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. In addition, these data sources differ greatly in what they physically convey, and no true 3D tools exist for planning and guiding procedures. This makes it difficult for the physician to translate a CT-based procedure plan to the video domain of the bronchoscope. Thus, the physician must essentially perform biopsy blindly, and skill levels differ greatly between physicians. We describe a system that enables direct 3D CT-based procedure planning and provides direct 3D guidance during bronchoscopy. 3D CT-based information on biopsy sites is provided interactively as the physician moves the bronchoscope. Moreover, graphical information through a live fusion of the 3D CT data and bronchoscopic video is provided during the procedure. This information is coupled with a series of computer-graphics tools to give the physician an augmented-reality view of the patient's interior anatomy during a procedure. Through a series of controlled tests and studies with human lung-cancer patients, we have found that the system not only reduces the variation in skill level between different physicians, but also increases the biopsy success rate.

  3. Integrating digital topology in image-processing libraries.

    PubMed

    Lamy, Julien

    2007-01-01

    This paper describes a method to integrate digital topology information in image-processing libraries. This additional information allows a library user to write algorithms respecting topological constraints, for example, a seed fill or a skeletonization algorithm. As digital topology is absent from most image-processing libraries, such constraints cannot otherwise be fulfilled. We describe and give code samples for all the structures necessary for this integration, and show a use case in the form of a homotopic thinning filter inside ITK. The obtained filter can be up to a hundred times as fast as ITK's thinning filter and works for any image dimension. This paper mainly deals with integration into ITK, but the method can be adapted with only minor modifications to other image-processing libraries.

  4. Statistical image processing in the Virtual Observatory context

    NASA Astrophysics Data System (ADS)

    Louys, M.; Bonnarel, F.; Schaaff, A.; Pestel, C.

    2009-07-01

    In an inter-disciplinary collaborative project, we have designed a framework to execute statistical image analysis techniques for multiwavelength astronomical images. This paper describes an interactive tool, AIDA_WF, which helps the astronomer design and describe image processing workflows. This tool allows designing and executing processing steps arranged in a workflow. Blocks can be either local or remote distributed computations via web services built according to the UWS (Universal Worker Service) pattern currently defined in the VO domain. Processing blocks are modelled with input and output parameters. Validation of input image content and parameters is included and performed using the VO Characterisation Data Model. This allows a first check of inputs prior to sending the job to remote computing nodes in a distributed or grid context. The workflows can be saved, documented, and collected for further re-use.

  5. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    PubMed

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key biomarker in the diagnosis of muscular dystrophy. In nuclei segmentation, one primary challenge is to correctly separate clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background by using a local Otsu threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all the images. A two-step refined watershed algorithm is then applied to segment the clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images.
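
    The local Otsu thresholding used in the pipeline's first step builds on the classic global Otsu criterion of maximizing between-class variance over the image histogram. A minimal NumPy sketch of the global criterion (illustrative only, not the authors' code; applying it within local windows yields the local variant):

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Gray level that maximizes the between-class variance (Otsu)."""
    counts, edges = np.histogram(np.asarray(image).ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    p = counts / counts.sum()          # bin probabilities
    w0 = np.cumsum(p)                  # class-0 weight at each candidate split
    w1 = 1.0 - w0
    m0 = np.cumsum(p * centers)        # unnormalized class-0 mean
    mu = m0[-1]                        # global mean
    valid = (w0 > 1e-12) & (w1 > 1e-12)
    sigma_b = np.zeros(nbins)
    # Between-class variance: (mu*w0 - m0)^2 / (w0*w1)
    sigma_b[valid] = (mu * w0[valid] - m0[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]
```

    Pixels above the returned threshold form the foreground (nucleus) candidates.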

  6. Remote sensing and image processing for exploration in frontier basins

    SciTech Connect

    Sabins, F.F. )

    1993-02-01

    A variety of remote sensing systems are available to explore the wide range of terrain in Central and South America and Mexico. The remote sensing data are recorded in digital form and must be computer-processed to produce images suitable for exploration. Landsat and SPOT images are available for most of the earth, but are restricted by cloud cover. The broad terrain coverage recorded by the Landsat thematic mapper (TM) is well suited for regional exploration. Color images are composited from various combinations of the six spectral bands to selectively enhance geologic features in different types of terrain. SPOT images may be acquired as stereo pairs, which are valuable for structural interpretations. Radar is an active form of remote sensing that provides its own source of energy at wavelengths of centimeters, which penetrate cloud cover. Radar images are acquired at low depression angles to create shadows and highlights that enhance subtle geologic features. Satellite radar images of the earth were recorded from two U.S. space shuttle missions in the 1980s and are currently recorded by the European Remote Sensing satellite and the Japanese Earth Resources Satellite. Mosaics of radar images acquired from aircraft are widely used in oil exploration, especially in cloud-covered regions. Typical images and computer processing methods are illustrated with examples from various frontier basins.

  7. Fractal-based image processing for mine detection

    NASA Astrophysics Data System (ADS)

    Nelson, Susan R.; Tuovila, Susan M.

    1995-06-01

    A fractal-based analysis algorithm has been developed to perform the task of automated recognition of minelike targets in side-scan sonar images. Because naturally occurring surfaces, such as the sea bottom, are characterized by irregular textures, they are well suited to modeling as fractal surfaces. Manmade structures, including mines, are composed of Euclidean shapes, which makes fractal-based analysis highly appropriate for discriminating mines from a natural background. To that end, a set of fractal features, including fractal dimension, was developed to classify image areas as minelike targets, nonmine areas, or clutter. Four different methods of fractal dimension calculation were compared, and the Weierstrass function was used to study the effect of various signal processing procedures on the fractal qualities of an image. The difference in fractal dimension between images depends not only on the physical features present in the images, but also on the statistical characteristics of the processing procedures applied to the images and on the mathematical assumptions of the fractal dimension calculation methods. For the image set studied, fractal-based analysis achieved a classification rate similar to human operators, and was very successful in identifying areas of clutter. The analysis technique presented here is applicable to any type of signal that may be configured as an image, making it suitable for multisensor systems.
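
    Of the fractal-dimension estimators such work compares, box counting is the most common. The sketch below (generic NumPy code, not the authors' implementation) estimates the dimension of a binary image by counting occupied s-by-s boxes N(s) at several scales and fitting log N(s) against log s; the negated slope is the dimension estimate:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary mask."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        # Trim so the image tiles exactly into s-by-s boxes.
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        trimmed = mask[:h, :w]
        # Count boxes containing at least one foreground pixel.
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # Fit log N(s) = -D log s + c; D is the dimension estimate.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

    A filled region should come out near dimension 2 and a thin curve near 1, with textured natural surfaces falling in between.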

  8. Standardizing PhenoCam Image Processing and Data Products

    NASA Astrophysics Data System (ADS)

    Milliman, T. E.; Richardson, A. D.; Klosterman, S.; Gray, J. M.; Hufkens, K.; Aubrecht, D.; Chen, M.; Friedl, M. A.

    2014-12-01

    The PhenoCam Network (http://phenocam.unh.edu) contains an archive of imagery from digital webcams to be used for scientific studies of phenological processes of vegetation. The image archive continues to grow and currently has over 4.8 million images representing 850 site-years of data. Time series of broadband reflectance (e.g., red, green, blue, infrared bands) and derivative vegetation indices (e.g., green chromatic coordinate or GCC) are calculated for regions of interest (ROIs) within each image series. These time series form the basis for subsequent analysis, such as spring and autumn transition date extraction (using curvature analysis techniques) and modeling the climate-phenology relationship. Processing is relatively straightforward but time consuming, with some sites having more than 100,000 images available. While the PhenoCam Network distributes the original image data, it is our goal to provide higher-level vegetation phenology products, generated in a standardized way, to encourage use of the data without the need to download and analyze individual images. We describe here the details of the standard image processing procedures, and also provide a description of the products that will be available for download. Products currently in development include an "all-image" file, which contains a statistical summary of the red, green and blue bands over the pixels in predefined ROIs for each image from a site. This product is used to generate 1-day and 3-day temporal aggregates with 90th-percentile values of GCC for the specified time period, with standard image selection/filtering criteria applied. Sample software (in Python, R, and MATLAB) that can be used to read in and plot these products will also be described.
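
    The green chromatic coordinate mentioned above is defined per pixel as GCC = G / (R + G + B). A minimal NumPy sketch of computing GCC over an ROI and forming 3-day 90th-percentile aggregates (function names are illustrative, not PhenoCam's actual software):

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """GCC = G / (R + G + B), computed per pixel; rgb has shape (..., 3)."""
    rgb = np.asarray(rgb, dtype=float)
    denom = rgb.sum(axis=-1)
    safe = np.where(denom == 0, 1.0, denom)  # avoid division by zero
    return rgb[..., 1] / safe

def aggregate_gcc(daily_gcc, window=3):
    """90th-percentile GCC over non-overlapping `window`-day periods."""
    daily_gcc = np.asarray(daily_gcc, dtype=float)
    n = len(daily_gcc) - len(daily_gcc) % window  # drop the ragged tail
    blocks = daily_gcc[:n].reshape(-1, window)
    return np.percentile(blocks, 90, axis=1)
```

    The 90th percentile is favored over the mean here because it suppresses dark outliers from transient weather and illumination changes.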

  9. Radon-Based Image Processing In A Parallel Pipeline Architecture

    NASA Astrophysics Data System (ADS)

    Hinkle, Eric B.; Sanz, Jorge L. C.; Jain, Anil K.

    1986-04-01

    This paper deals with a novel architecture that makes real-time projection-based algorithms a reality. The design is founded on raster-mode processing, which is exploited in a powerful and flexible pipeline. This architecture, dubbed "P3E" (Parallel Pipeline Projection Engine), supports a large variety of image processing and image analysis applications. The image processing applications include: discrete approximations of the Radon and inverse Radon transform, among other projection operators; CT reconstructions; 2-D convolutions; rotations and translations; discrete Fourier transform computations in polar coordinates; autocorrelations; etc. There is also an extensive list of key image analysis algorithms supported by P3E, making it a versatile tool for projection-based computer vision. These include: projections of gray-level images along linear patterns (the Radon transform) and other curved contours; generation of multi-color digital masks; convex hull approximations; Hough transform approximations for line and curve detection; diameter computations; calculations of moments and other principal components; etc. The effectiveness of our approach and the feasibility of the proposed architecture have been demonstrated by running some of these image analysis algorithms in conventional short pipelines to solve some important automated inspection problems. In the present paper, we concern ourselves with reconstructing images from their linear projections and performing convolutions via the Radon transform.

  10. Image processing system performance prediction and product quality evaluation

    NASA Technical Reports Server (NTRS)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  11. Computer image processing - The Viking experience. [digital enhancement techniques

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1977-01-01

    Computer processing of digital imagery from the Viking mission to Mars is discussed, with attention given to subjective enhancement and quantitative processing. Contrast stretching and high-pass filtering techniques of subjective enhancement are described; algorithms developed to determine optimal stretch and filtering parameters are also mentioned. In addition, geometric transformations to rectify the distortion of shapes in the field of view and to alter the apparent viewpoint of the image are considered. Perhaps the most difficult problem in quantitative processing of Viking imagery was the production of accurate color representations of Orbiter and Lander camera images.

  12. Small Interactive Image Processing System (SMIPS) system description

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as interactive graphic device. The input language in the form of character strings or attentions from keys and light pen is interpreted and causes processing of built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given and characteristics, structure and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.

  13. Image Processing Techniques and Feature Recognition in Solar Physics

    NASA Astrophysics Data System (ADS)

    Aschwanden, Markus J.

    2010-04-01

    This review presents a comprehensive and systematic overview of image-processing techniques that are used in automated feature-detection algorithms applied to solar data: i) image pre-processing procedures, ii) automated detection of spatial features, iii) automated detection and tracking of temporal features (events), and iv) post-processing tasks, such as visualization of solar imagery, cataloguing, statistics, theoretical modeling, prediction, and forecasting. For each aspect the most recent developments and science results are highlighted. We conclude with an outlook on future trends.

  14. IDP: Image and data processing (software) in C++

    SciTech Connect

    Lehman, S.

    1994-11-15

    IDP++ (Image and Data Processing in C++) is a compiled, multidimensional, multi-data-type signal processing environment written in C++. It is being developed within the Radar Ocean Imaging group and is intended as a partial replacement for View. IDP++ takes advantage of the latest object-oriented compiler technology to provide "information hiding." Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is designed for real-time environments, where interpreted signal processing packages are less efficient.

  15. Models of formation and some algorithms of hyperspectral image processing

    NASA Astrophysics Data System (ADS)

    Achmetov, R. N.; Stratilatov, N. R.; Yudakov, A. A.; Vezenov, V. I.; Eremeev, V. V.

    2014-12-01

    Algorithms and information technologies for processing Earth hyperspectral imagery are presented. Several new approaches are discussed. Peculiar properties of processing the hyperspectral imagery, such as multifold signal-to-noise reduction, atmospheric distortions, access to spectral characteristics of every image point, and high dimensionality of data, were studied. Different measures of similarity between individual hyperspectral image points and the effect of additive uncorrelated noise on these measures were analyzed. It was shown that these measures are substantially affected by noise, and a new measure free of this disadvantage was proposed. The problem of detecting the observed scene object boundaries, based on comparing the spectral characteristics of image points, is considered. It was shown that contours are processed much better when spectral characteristics are used instead of energy brightness. A statistical approach to the correction of atmospheric distortions, which makes it possible to solve the stated problem based on analysis of a distorted image in contrast to analytical multiparametric models, was proposed. Several algorithms used to integrate spectral zonal images with data from other survey systems, which make it possible to image observed scene objects with a higher quality, are considered. Quality characteristics of hyperspectral data processing were proposed and studied.
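
    One widely used measure of similarity between hyperspectral image points is the spectral angle, which is insensitive to multiplicative brightness changes but, as the abstract notes for such measures, still affected by additive noise. (The paper's own noise-robust measure is not specified here; this is a standard example for illustration.)

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; 0 means identical shape.

    Scaling either spectrum by a positive constant leaves the angle
    unchanged, which makes it robust to illumination differences.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

    Thresholding the angle between adjacent pixels' spectra is one way to realize the boundary detection from spectral characteristics discussed above.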

  16. On-demand server-side image processing for web-based DICOM image display

    NASA Astrophysics Data System (ADS)

    Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo

    2000-04-01

    Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of clients, the cost of client systems is a big problem. Naturally, a Web-based system is the most effective solution. But a plain Web browser cannot display medical images with certain image processing, such as a lookup-table transformation. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server and delivered on demand. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination gives the look-and-feel of an imaging workstation, in both functionality and speed. Real-time update of images while tracing mouse motion is achieved in the Web browser without any client-side image processing, which would otherwise require plug-in technology such as Java Applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients on a fast network, and a large number of clients on a normal-speed network. The results show that the communication overhead is very slight and that the system scales well with the number of clients.
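
    A typical server-side lookup-table transformation of the kind described is window/level mapping of stored CT values to 8-bit display values. A simplified NumPy sketch (the exact DICOM linear VOI LUT formula differs slightly in its edge handling, and the names here are illustrative):

```python
import numpy as np

def window_lut(center, width, bits_stored=12):
    """Precompute an LUT mapping stored pixel values to 8-bit display values."""
    x = np.arange(2 ** bits_stored, dtype=float)
    lo = center - width / 2.0
    hi = center + width / 2.0
    y = (x - lo) / (hi - lo) * 255.0        # linear ramp across the window
    return np.clip(y, 0, 255).astype(np.uint8)

def apply_window(pixels, center, width):
    """Apply the window by indexing the LUT with the raw pixel values."""
    return window_lut(center, width)[pixels]
```

    Precomputing the table means each mouse-driven window change costs one small array build plus a fast indexed lookup, which is what makes interactive server-side rendering feasible.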

  17. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE PAGESBeta

    Collette, R.; King, J.; Keiser, Jr., D.; Miller, B.; Madden, J.; Schulthess, J.

    2016-06-08

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium-molybdenum (U–Mo) monolithic-type plate fuels. Frequency-domain filtration, employed as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.
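
    The Sauvola threshold used in the study is defined per pixel as T = m · (1 + k · (s/R − 1)), with local mean m, local standard deviation s, dynamic range R, and sensitivity k. A NumPy sketch using integral images for the local statistics (illustrative only, not the authors' MATLAB code):

```python
import numpy as np

def _window_sums(img, w):
    """Sum over (2w+1)x(2w+1) neighborhoods via an integral image."""
    p = np.pad(np.asarray(img, dtype=float), w, mode='edge')
    ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    ii[1:, 1:] = p.cumsum(axis=0).cumsum(axis=1)
    s = 2 * w + 1
    # Four-corner differences give every window sum at once.
    return ii[s:, s:] - ii[:-s, s:] - ii[s:, :-s] + ii[:-s, :-s]

def sauvola_threshold(img, w=7, k=0.2, R=128.0):
    """Per-pixel Sauvola threshold T = m * (1 + k*(s/R - 1))."""
    img = np.asarray(img, dtype=float)
    n = (2 * w + 1) ** 2
    mean = _window_sums(img, w) / n
    var = np.maximum(_window_sums(img ** 2, w) / n - mean ** 2, 0.0)
    return mean * (1.0 + k * (np.sqrt(var) / R - 1.0))

def sauvola_binarize(img, **kw):
    """Foreground mask: pixels brighter than their local threshold."""
    return np.asarray(img, dtype=float) > sauvola_threshold(img, **kw)
```

    In flat regions the threshold drops below the local mean (since s ≈ 0), while in high-contrast regions it rises toward the mean, which is what makes the method stable across variable-contrast microstructures.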

  18. An ImageJ plugin for ion beam imaging and data processing at AIFIRA facility

    NASA Astrophysics Data System (ADS)

    Devès, G.; Daudin, L.; Bessy, A.; Buga, F.; Ghanty, J.; Naar, A.; Sommar, V.; Michelet, C.; Seznec, H.; Barberet, P.

    2015-04-01

    Quantification and imaging of chemical elements at the cellular level requires the use of a combination of techniques such as micro-PIXE, micro-RBS, STIM, and secondary electron imaging, associated with optical and fluorescence microscopy techniques employed prior to irradiation. Such a numerous set of methods generates an important amount of data per experiment. Typically, for each acquisition the following data have to be processed: a chemical map for each element present with a concentration above the detection limit, density and backscattered maps, and mean and local spectra corresponding to relevant regions of interest such as a whole cell, an intracellular compartment, or nanoparticles. These operations are time consuming, repetitive and as such could be a source of errors in data manipulation. In order to optimize data processing, we have developed a new tool for batch data processing and imaging. This tool has been developed as a plugin for ImageJ, a versatile software for image processing that is suitable for the treatment of basic IBA data operations. Because ImageJ is written in Java, the plugin can be used under Linux, Mac OS X and Windows in both 32-bit and 64-bit modes, which may interest developers working on open-access ion beam facilities like AIFIRA. The main features of this plugin are presented here: list-file processing, spectroscopic imaging, local information extraction, quantitative density maps and database management using OMERO.

  19. Optimization and application of Retinex algorithm in aerial image processing

    NASA Astrophysics Data System (ADS)

    Sun, Bo; He, Jun; Li, Hongyu

    2008-04-01

    In this paper, we provide a segmentation-based Retinex for improving the visual quality of aerial images obtained under complex weather conditions. With this method, an aerial image is first segmented into different regions, and an adaptive Gaussian based on the segmentation is then used to process it. The method addresses problems of previously developed Retinex algorithms, such as halo artifacts and graying-out artifacts. Experimental results also show its improved effect.
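
    For reference, the single-scale Retinex underlying such methods computes R = log I − log(G_σ ∗ I), the log ratio of the image to a Gaussian-blurred estimate of the illumination. A plain NumPy sketch (the paper's segmentation-adaptive variant chooses the Gaussian per region; this fixed-σ version is illustrative only):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur using 1-D convolutions along each axis."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    pad = np.pad(np.asarray(img, dtype=float), radius, mode='edge')
    out = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode='valid'), 1, pad)
    out = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode='valid'), 0, out)
    return out

def single_scale_retinex(img, sigma=30.0):
    """R = log(I) - log(G_sigma * I); +1 guards against log(0)."""
    img = np.asarray(img, dtype=float) + 1.0
    return np.log(img) - np.log(gaussian_blur(img, sigma))
```

    Halo artifacts in this basic form arise near strong edges, where the fixed-σ blur mixes bright and dark regions; segmenting first and adapting the Gaussian per region is one way to suppress them.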

  20. Positron emission tomography provides molecular imaging of biological processes

    PubMed Central

    Phelps, Michael E.

    2000-01-01

    Diseases are biological processes, and molecular imaging with positron emission tomography (PET) is sensitive to and informative of these processes. This is illustrated by detection of biological abnormalities in neurological disorders with no computed tomography or MRI anatomic changes, as well as even before symptoms are expressed. PET whole body imaging in cancer provides the means to (i) identify early disease, (ii) differentiate benign from malignant lesions, (iii) examine all organs for metastases, and (iv) determine therapeutic effectiveness. Diagnostic accuracy of PET is 8–43% higher than conventional procedures and changes treatment in 20–40% of the patients, depending on the clinical question, in lung and colorectal cancers, melanoma, and lymphoma, with similar findings in breast, ovarian, head and neck, and renal cancers. A microPET scanner for mice, in concert with human PET systems, provides a novel technology for molecular imaging assays of metabolism and signal transduction to gene expression, from mice to patients: e.g., PET reporter gene assays are used to trace the location and temporal level of expression of therapeutic and endogenous genes. PET probes and drugs are being developed together—in low mass amounts, as molecular imaging probes to image the function of targets without disturbing them, and in mass amounts to modify the target's function as a drug. Molecular imaging by PET, optical technologies, magnetic resonance imaging, single photon emission tomography, and other technologies are assisting in moving research findings from in vitro biology to in vivo integrative mammalian biology of disease. PMID:10922074

  1. High Performance Image Processing And Laser Beam Recording System

    NASA Astrophysics Data System (ADS)

    Fanelli, Anthony R.

    1980-09-01

    The article is meant to provide the digital image recording community with an overview of digital image processing and recording. The Digital Interactive Image Processing System (DIIPS) was assembled by ESL for Air Force Systems Command under Rome Air Development Center's guidance. The system provides the capability of mensuration and exploitation of digital imagery, with both mono and stereo digital images as inputs. This development provided the system design, basic hardware, software, and operational procedures to enable Air Force Systems Command photo analysts to perform digital mensuration and exploitation of stereo digital images. The engineering model was based on state-of-the-art technology and, to the extent possible, off-the-shelf hardware and software. A laser recorder was also developed for the DIIPS system, known as the Ultra High Resolution Image Recorder (UHRIR). The UHRIR is a prototype model that will enable Air Force Systems Command to record computer-enhanced digital image data on photographic film at high resolution, with geometric and radiometric distortion minimized.

  2. Object silhouettes and surface directions through stereo matching image processing

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Kumagai, Hideo

    2015-09-01

    We have studied object silhouettes and surface directions through stereo-matching image processing, in order to recognize the position, size, and surface direction of an object. For this study we construct the pixel-number change distribution of the HSI color component level, the binary component-level image by a standard-deviation threshold, a 4-directional pixel-connectivity filter, the surface-element correspondence by stereo matching, and the projection rule relation. We note that the HSI color component level of the object image changes more stably near the focus position than over the unfocused range, so we use the HSI color component level images near the fine focus position to extract the object silhouette, and the silhouette is extracted properly. We find the surface direction of the object from the pixel numbers of the corresponding surface areas and the projection cosine rule, after stereo-matching image processing based on the characteristic areas and synthesized colors. Epipolar geometry is used in this study because the pair of imagers is arranged on the same epipolar plane. The surface-direction detection yields a proper angle calculation. The construction of object silhouettes and the detection of object surface directions are thus realized.

  3. Imaging of Cortical and White Matter Language Processing.

    PubMed

    Klein, Andrew P; Sabsevitz, David S; Ulmer, John L; Mark, Leighton P

    2015-06-01

    Although investigations into the functional and anatomical organization of language within the human brain began centuries ago, it is recent advanced imaging techniques including functional magnetic resonance imaging and diffusion tensor imaging that have helped propel our understanding forward at an unprecedented rate. Important cortical brain regions and white matter tracts in language processing subsystems including semantic, phonological, and orthographic functions have been identified. An understanding of functional and dysfunctional language anatomy is critical for practicing radiologists. This knowledge can be applied to routine neuroimaging examinations as well as to more advanced examinations such as presurgical brain mapping.

  4. Pre-analytic process control: projecting a quality image.

    PubMed

    Serafin, Mark D

    2006-01-01

    Within the health-care system, the term "ancillary department" often describes the laboratory. Thus, laboratories may find it difficult to define their image and with it, customer perception of department quality. Regulatory requirements give laboratories who so desire an elegant way to address image and perception issues--a comprehensive pre-analytic system solution. Since large laboratories use such systems--laboratory service manuals--I describe and illustrate the process for the benefit of smaller facilities. There exist resources to help even small laboratories produce a professional service manual--an elegant solution to image and customer perception of quality. PMID:17005095

  5. Crystallographic phase retrieval through image processing under constraints

    NASA Astrophysics Data System (ADS)

    Zhang, Kam Y.

    1993-11-01

    The crystallographic image processing techniques of Sayre's equation, molecular averaging, solvent flattening and histogram matching are combined in an integrated procedure for macromolecular phase retrieval. It employs the constraints of the local shape of electron density, equal molecules, solvent flatness and correct electron density distribution. These constraints on the electron density image are satisfied simultaneously by solving a system of non-linear equations using the fast Fourier transform. The electron density image is further filtered under the constraint of observed diffraction amplitudes. The effect of each constraint on phase retrieval is examined. The constraints are found to work synergistically in phase retrieval. Test results on 2Zn insulin are presented.
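
    The alternation between real-space constraints and the observed-amplitude constraint can be sketched as an iterated-projection loop. This is a hedged toy sketch, not the paper's integrated Sayre/averaging/histogram procedure; the synthetic density, envelope mask, grid size and iteration count are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "crystal": positive density confined to a molecular region.
n = 64
true_rho = np.zeros((n, n))
true_rho[24:40, 24:40] = rng.random((16, 16))
f_obs = np.abs(np.fft.fft2(true_rho))        # observed amplitudes only

mask = np.zeros((n, n), dtype=bool)          # known molecular envelope
mask[24:40, 24:40] = True

# Density-modification loop: enforce observed amplitudes in Fourier
# space; enforce solvent flatness and positivity in real space.
phases = np.exp(2j * np.pi * rng.random((n, n)))
f = f_obs * phases
for _ in range(200):
    rho = np.fft.ifft2(f).real
    rho[~mask] = 0.0                         # solvent flattening
    rho[rho < 0] = 0.0                       # positivity
    f_calc = np.fft.fft2(rho)
    f = f_obs * np.exp(1j * np.angle(f_calc))  # amplitude constraint

corr = np.corrcoef(rho[mask], true_rho[mask])[0, 1]
print(f"map correlation after retrieval: {corr:.3f}")
```

    Each pass projects the image onto one constraint set at a time; the synergy noted in the abstract corresponds to the intersection of these sets shrinking as more constraints are imposed.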

  6. Rapid prototyping in the development of image processing systems

    NASA Astrophysics Data System (ADS)

    von der Fecht, Arno; Kelm, Claus Thomas

    2004-08-01

    This contribution presents a rapid prototyping approach for the real-time demonstration of image processing algorithms. As an example, EADS/LFK has developed a basic IR target tracking system implementing this approach. Traditionally, in research and industry, image processing algorithms are simulated time-independently on a host computer. This method is good for demonstrating the algorithms' capabilities. Rarely is a time-dependent simulation, or even a real-time demonstration on a target platform, performed to prove the real-time capabilities. In 1D signal processing applications, time-dependent simulation and real-time demonstration have already been used for quite a while. For time-dependent simulation, Simulink from The MathWorks has established itself as an industry standard. Combined with The MathWorks' Real-Time Workshop, the simulation model can be transferred to a real-time target processor. The executable is generated automatically by the Real-Time Workshop directly out of the simulation model. In 2D signal processing applications like image processing, The MathWorks' Matlab is commonly used for time-independent simulation. To achieve time-dependent simulation and real-time demonstration capabilities, the algorithms can be transferred to Simulink, which in fact runs on top of Matlab. Additionally, to increase performance, Simulink models or parts of them can be transferred to Xilinx FPGAs using Xilinx' System Generator. With a single model and the automatic workflow, both time-dependent simulation and real-time demonstration are covered, leading to an easy and flexible rapid prototyping approach. EADS/LFK is going to use this approach for a wider spectrum of IR image processing applications such as automatic target recognition, image-based navigation, and imaging laser radar target recognition.

  7. SENTINEL-2 Level 1 Products and Image Processing Performances

    NASA Astrophysics Data System (ADS)

    Baillarin, S. J.; Meygret, A.; Dechoz, C.; Petrucci, B.; Lacherade, S.; Tremas, T.; Isola, C.; Martimort, P.; Spoto, F.

    2012-07-01

    In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a constellation of satellites deployed in polar sun-synchronous orbit. While ensuring data continuity with the former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer wide improvements such as a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in the visible and shortwave infra-red domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA to define the system image products and to prototype the relevant image processing techniques. This paper offers, first, an overview of the Sentinel-2 system and then introduces the image products delivered by the ground processing: the Level-0 and Level-1A are system products which correspond to raw compressed and uncompressed data, respectively (limited to internal calibration purposes); the Level-1B is the first public product: it comprises radiometric corrections (dark signal, pixel response non-uniformity, crosstalk, defective pixels, restoration, and binning for the 60 m bands) and an enhanced physical geometric model appended to the product but not applied; the Level-1C provides ortho-rectified top-of-atmosphere reflectance with sub-pixel multi-spectral and multi-date registration; a cloud and land/water mask is associated with the product. Note that the cloud mask also provides an indication of cirrus. The ground sampling distance of the Level-1C product will be 10 m, 20 m or 60 m according to the band. The final Level-1C product is tiled following a pre-defined grid of 100×100 km², based on the UTM/WGS84 reference frame.

  8. Image data processing system requirements study. Volume 1: Analysis. [for Earth Resources Survey Program

    NASA Technical Reports Server (NTRS)

    Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

    1973-01-01

    Digital image processing, image recorders, high-density digital data recorders, and data system element processing for use in an Earth Resources Survey image data processing system are studied. Loading to various ERS systems is also estimated by simulation.

  9. Real-time microstructural and functional imaging and image processing in optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Westphal, Volker

    Optical Coherence Tomography (OCT) is a noninvasive optical imaging technique that allows high-resolution cross-sectional imaging of tissue microstructure, achieving a spatial resolution of about 10 μm. OCT is similar to B-mode ultrasound (US) except that it uses infrared light instead of ultrasound. In contrast to US, no coupling gel is needed, simplifying the image acquisition. Furthermore, the fiber optic implementation of OCT is compatible with endoscopes. In recent years, the transition from slow, bench-top imaging systems to real-time clinical systems has been under way. This has led to a variety of applications, namely in ophthalmology, gastroenterology, dermatology and cardiology. First, this dissertation will demonstrate that OCT is capable of imaging and differentiating clinically relevant tissue structures in the gastrointestinal tract. A careful in vitro correlation study between endoscopic OCT images and corresponding histological slides was performed. Besides structural imaging, OCT systems were further developed for functional imaging, for example to visualize blood flow. Previously, imaging flow in small vessels in real time was not possible. For this research, a new processing scheme similar to real-time Doppler in US was introduced. It was implemented in dedicated hardware to allow real-time acquisition and overlaid display of blood flow in vivo. A sensitivity of 0.5 mm/s was achieved. Optical coherence microscopy (OCM) is a variation of OCT, improving the resolution even further to a few micrometers. Advances made in the OCT scan engine for the Doppler setup enabled real-time imaging in vivo with OCM. In order to generate geometrically correct images for all the previous applications in real time, extensive image processing algorithms were developed. Algorithms for correction of distortions due to non-telecentric scanning, nonlinear scan mirror movements, and refraction were developed and demonstrated.
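
    The real-time Doppler scheme estimates flow from the phase shift between successive A-scans. A hedged sketch of such a phase-difference (Kasai-style) estimator follows; the wavelength, scan interval and refractive index are invented illustrative values, not the dissertation's system parameters:

```python
import numpy as np

def doppler_velocity(a1, a2, wavelength, dt, n_medium=1.38):
    """Estimate axial flow velocity from two successive complex A-scans.

    Kasai-style estimator: the mean phase shift between adjacent A-scans
    maps to axial velocity via v = lambda * dphi / (4 * pi * n * dt).
    """
    dphi = np.angle(np.sum(np.conj(a1) * a2))
    return wavelength * dphi / (4.0 * np.pi * n_medium * dt)

# Synthetic test: impose a phase shift corresponding to 0.5 mm/s,
# matching the sensitivity quoted in the abstract.
wavelength, dt = 1.3e-6, 1e-4
v_true = 0.5e-3
dphi_true = 4 * np.pi * 1.38 * dt * v_true / wavelength
rng = np.random.default_rng(1)
a1 = rng.normal(size=256) + 1j * rng.normal(size=256)
a2 = a1 * np.exp(1j * dphi_true)
print(f"estimated velocity: {doppler_velocity(a1, a2, wavelength, dt):.2e} m/s")
```

    Because the estimator works on a running sum of conjugate products, it lends itself to the dedicated-hardware, per-A-scan implementation the abstract describes.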

  10. Digital image processing: a primer for JVIR authors and readers: Part 3: Digital image editing.

    PubMed

    LaBerge, Jeanne M; Andriole, Katherine P

    2003-12-01

    This is the final installment of a three-part series on digital image processing intended to prepare authors for online submission of manuscripts. In the first two articles of the series, the fundamentals of digital image architecture were reviewed and methods of importing images to the computer desktop were described. In this article, techniques are presented for editing images in preparation for online submission. A step-by-step guide to basic editing with use of Adobe Photoshop is provided and the ethical implications of this activity are explored.

  11. Automated Processing of Zebrafish Imaging Data: A Survey

    PubMed Central

    Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A.; Kausler, Bernhard X.; Ledesma-Carbayo, María J.; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

    2013-01-01

    Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines. PMID:23758125

  12. Image Algebra Matlab language version 2.3 for image processing and compression research

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric

    2010-08-01

    Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at the University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with development of the languages FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of iac++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation.

  13. The Multimission Image Processing Laboratory's virtual frame buffer interface

    NASA Technical Reports Server (NTRS)

    Wolfe, T.

    1984-01-01

    Large image processing systems use multiple frame buffers with differing architectures and vendor-supplied interfaces. This variety of architectures and interfaces creates software development, maintenance and portability problems for application programs. Several machine-independent graphics standards such as ANSI Core and GKS are available, but none of them are adequate for image processing. Therefore, the Multimission Image Processing Laboratory project has implemented a programmer-level virtual frame buffer interface. This interface makes all frame buffers appear as a generic frame buffer with a specified set of characteristics. This document defines the virtual frame buffer interface and provides information such as FORTRAN subroutine definitions, frame buffer characteristics, sample programs, etc. It is intended to be used by application programmers and system programmers who are adding new frame buffers to a system.
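
    The generic frame buffer idea can be sketched as an abstract interface with interchangeable backends. The Python analogue below is hypothetical (the actual MIPL interface is FORTRAN-level, and the class and method names here are invented):

```python
from abc import ABC, abstractmethod

class VirtualFrameBuffer(ABC):
    """Generic frame-buffer interface: applications program against this
    class, and each vendor device is wrapped in a concrete subclass."""

    @abstractmethod
    def write_pixel(self, x, y, value): ...

    @abstractmethod
    def read_pixel(self, x, y): ...

class MemoryFrameBuffer(VirtualFrameBuffer):
    """A trivial in-memory backend standing in for vendor hardware."""

    def __init__(self, width, height):
        self.pixels = [[0] * width for _ in range(height)]

    def write_pixel(self, x, y, value):
        self.pixels[y][x] = value

    def read_pixel(self, x, y):
        return self.pixels[y][x]

fb = MemoryFrameBuffer(512, 512)
fb.write_pixel(10, 20, 255)
print(fb.read_pixel(10, 20))   # -> 255
```

    Adding a new frame buffer then means writing one subclass, while every application written against the virtual interface keeps working unchanged.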

  14. A spatial planetary image database in the context of processing

    NASA Astrophysics Data System (ADS)

    Willner, K.; Tasdelen, E.

    2015-10-01

    Planetary image data is collected and archived by, e.g., the European Planetary Science Archive (PSA) or its US counterpart, the Planetary Data System (PDS). These archives usually organize the data according to missions and their respective instruments. Search queries can be posted to retrieve data of interest for a specific instrument data set. In the context of processing data from a number of sensors and missions this is not practical. In the scope of the EU FP7 project PRoViDE, meta-data from imaging sensors were collected from the PSA as well as the PDS and were rearranged and restructured according to the processing needs. Exemplary image data gathered from rover and lander missions operated on the Martian surface was organized into a new unique database. The database is a core component of the PRoViDE processing and visualization system, as it enables multi-mission and multi-sensor searches to fully exploit the collected data.

  15. Acousto-optic image processing in coherent light

    SciTech Connect

    Balakshy, V I; Voloshinov, V B

    2005-01-31

    The results of recent studies on coherent acousto-optic image processing performed at the chair of physics of oscillations at the Department of Physics of Moscow State University are reported. It is shown that this processing method is based on the filtration of the spatial spectrum of an optical signal in an acousto-optic cell. The main attention is paid to the analysis of the dependence of the transfer function of the cell on the crystal cut, geometry of acousto-optic interaction, and acoustic-wave parameters. It is shown that an acousto-optic cell allows the image differentiation and integration as well as the visualisation of phase objects. The results of experiments and computer simulation are presented which illustrate the possibilities of acousto-optic image processing. (laser applications and other topics in quantum electronics)

  16. Special Software for Planetary Image Processing and Research

    NASA Astrophysics Data System (ADS)

    Zubarev, A. E.; Nadezhdina, I. E.; Kozlova, N. A.; Brusnikin, E. S.; Karachevtseva, I. P.

    2016-06-01

    Special modules for photogrammetric processing of remote sensing data were developed that make it possible to effectively organize and optimize planetary studies. As the basic application, the commercial software package PHOTOMOD™ is used. Special modules were created to perform various types of data processing: calculation of preliminary navigation parameters, calculation of shape parameters of a celestial body, global-view image orthorectification, and estimation of Sun illumination and Earth visibility from the planetary surface. For photogrammetric processing, different types of data have been used, including images of the Moon, Mars, Mercury, Phobos, the Galilean satellites and Enceladus obtained by frame or push-broom cameras. We used modern planetary data and images taken over the years, acquired from orbit under various illumination and resolution conditions as well as obtained by planetary rovers from the surface. Planetary image processing is a complex task that usually takes from a few months to years. We present our efficient pipeline procedure, which makes it possible to obtain different data products and supports the long path from planetary images to celestial-body maps. The obtained data - new three-dimensional control point networks, elevation models, orthomosaics - enabled accurate map production: a new Phobos atlas (Karachevtseva et al., 2015) and various thematic maps derived from studies of the planetary surface (Karachevtseva et al., 2016a).

  17. An image processing approach to analyze morphological features of microscopic images of muscle fibers

    PubMed Central

    Comin, Cesar Henrique; Xu, Xiaoyin; Wang, Yaming; da Fontoura Costa, Luciano; Yang, Zhong

    2016-01-01

    We present an image processing approach to automatically analyze duo-channel microscopic images of muscular fiber nuclei and cytoplasm. Nuclei and cytoplasm play a critical role in determining the health and functioning of muscular fibers as changes of nuclei and cytoplasm manifest in many diseases such as muscular dystrophy and hypertrophy. Quantitative evaluation of muscle fiber nuclei and cytoplasm thus is of great importance to researchers in musculoskeletal studies. The proposed computational approach consists of steps of image processing to segment and delineate cytoplasm and identify nuclei in two-channel images. Morphological operations like skeletonization are applied to extract the length of cytoplasm for quantification. We tested the approach on real images and found that it can achieve high accuracy, objectivity, and robustness. PMID:25124286
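
    The skeleton-based length measurement can be illustrated as follows, assuming the cytoplasm mask has already been thinned to a one-pixel-wide skeleton (e.g. by a standard thinning algorithm, which is not reproduced here). The link-counting rule below is a common convention, not necessarily the authors':

```python
import numpy as np

def skeleton_length(mask, pixel_size=1.0):
    """Estimate the length of a one-pixel-wide binary skeleton.

    Counts links between 8-connected skeleton pixels: axial neighbours
    contribute 1 pixel of length, diagonal neighbours sqrt(2) pixels.
    """
    ys, xs = np.nonzero(mask)
    pts = set(zip(ys.tolist(), xs.tolist()))
    length = 0.0
    for y, x in pts:
        # Only look "forward" so each link is counted exactly once.
        for dy, dx in ((0, 1), (1, 0), (1, 1), (1, -1)):
            if (y + dy, x + dx) in pts:
                length += np.hypot(dy, dx)
    return length * pixel_size

# A diagonal skeleton of 5 pixels has 4 diagonal links.
mask = np.eye(5, dtype=bool)
L = skeleton_length(mask)
print(L)   # -> 4*sqrt(2) ≈ 5.657
```

    Multiplying by the physical pixel size then converts the result to micrometers for quantification.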

  18. An image processing approach to analyze morphological features of microscopic images of muscle fibers.

    PubMed

    Comin, Cesar Henrique; Xu, Xiaoyin; Wang, Yaming; Costa, Luciano da Fontoura; Yang, Zhong

    2014-12-01

    We present an image processing approach to automatically analyze duo-channel microscopic images of muscular fiber nuclei and cytoplasm. Nuclei and cytoplasm play a critical role in determining the health and functioning of muscular fibers as changes of nuclei and cytoplasm manifest in many diseases such as muscular dystrophy and hypertrophy. Quantitative evaluation of muscle fiber nuclei and cytoplasm thus is of great importance to researchers in musculoskeletal studies. The proposed computational approach consists of steps of image processing to segment and delineate cytoplasm and identify nuclei in two-channel images. Morphological operations like skeletonization are applied to extract the length of cytoplasm for quantification. We tested the approach on real images and found that it can achieve high accuracy, objectivity, and robustness.

  19. Optical Fourier techniques for medical image processing and phase contrast imaging.

    PubMed

    Yelleswarapu, Chandra S; Kothapalli, Sri-Rajasekhar; Rao, D V G L N

    2008-04-01

    This paper briefly reviews the basics of optical Fourier techniques (OFT) and applications for medical image processing as well as phase contrast imaging of live biological specimens. Enhancement of microcalcifications in a mammogram for early diagnosis of breast cancer is the main focus. Various spatial filtering techniques such as conventional 4f filtering using a spatial mask, photoinduced polarization rotation in photosensitive materials, Fourier holography, and nonlinear transmission characteristics of optical materials are discussed for processing mammograms. We also review how the intensity-dependent refractive index can be exploited as a phase filter for phase contrast imaging with a coherent source. This novel approach represents a significant advance in phase contrast microscopy.
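
    A 4f spatial-filtering step can be simulated numerically: the mask in the Fourier plane becomes a multiplicative filter on the image spectrum. The sketch below uses an invented circular high-pass stop to make a small bright feature stand out against a smooth background, in the spirit of microcalcification enhancement; the test image and cutoff are illustrative, not from the paper:

```python
import numpy as np

def fourier_highpass(img, cutoff):
    """Simulate 4f spatial filtering: block low spatial frequencies with
    a circular opaque stop in the Fourier plane, keeping fine detail."""
    f = np.fft.fftshift(np.fft.fft2(img))
    n, m = img.shape
    y, x = np.ogrid[:n, :m]
    r = np.hypot(y - n // 2, x - m // 2)
    f[r < cutoff] = 0.0                      # opaque stop = high-pass mask
    return np.abs(np.fft.ifft2(np.fft.ifftshift(f)))

# Smooth background plus one small bright "microcalcification".
n = 128
y, x = np.ogrid[:n, :n]
background = np.exp(-((y - 64) ** 2 + (x - 64) ** 2) / 2000.0)
img = background.copy()
img[40, 40] += 0.3
out = fourier_highpass(img, cutoff=8)
print(out[40, 40] > out.mean())
```

    The slowly varying background lives at low spatial frequencies and is blocked, while the point-like feature, whose spectrum is broad, survives the filter.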

  20. Grid Computing Application for Brain Magnetic Resonance Image Processing

    NASA Astrophysics Data System (ADS)

    Valdivia, F.; Crépeault, B.; Duchesne, S.

    2012-02-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance if using the external cluster. However, the latter's performance does not scale linearly as queue waiting times and execution overhead increase with the number of tasks to be executed.
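
    The process/pipeline architecture described above (named stages with options, chained through data ports) can be sketched minimally. The stage names and toy operations below are invented for illustration; the real system wires external executables, not Python functions:

```python
class Process:
    """One pipeline stage: a named task with fixed options, applied to
    whatever arrives on its input port."""

    def __init__(self, name, func, **options):
        self.name, self.func, self.options = name, func, options

    def run(self, data):
        return self.func(data, **self.options)

class Pipeline:
    """Connects each process's output port to the next one's input port."""

    def __init__(self, *stages):
        self.stages = stages

    def run(self, data):
        for stage in self.stages:
            data = stage.run(data)
        return data

# Toy stand-ins for MRI steps: intensity standardization, then a shift
# standing in for registration.
normalize = Process("standardize",
                    lambda v, lo, hi: [(x - lo) / (hi - lo) for x in v],
                    lo=0, hi=10)
shift = Process("register", lambda v, dx: [x + dx for x in v], dx=0.5)
pipe = Pipeline(normalize, shift)
print(pipe.run([0, 5, 10]))   # -> [0.5, 1.0, 1.5]
```

    Serializing such stage descriptions (name, options, order) is what lets the web application store pipelines and re-instantiate them on a local or remote cluster.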

  1. ISLE (Image and Signal Processing LISP Environment) reference manual

    SciTech Connect

    Sherwood, R.J.; Searfus, R.M.

    1990-01-01

    ISLE is a rapid prototyping system for performing image and signal processing. It is designed to meet the needs of a person doing development of image and signal processing algorithms in a research environment. The image and signal processing modules in ISLE form a very capable package in themselves. They also provide a rich environment for quickly and easily integrating user-written software modules into the package. ISLE is well suited to applications in which there is a need to develop a processing algorithm in an interactive manner. It is straightforward to develop an algorithm, load it into ISLE, apply the algorithm to an image or signal, display the results, then modify the algorithm and repeat the develop-load-apply-display cycle. ISLE consists of a collection of image and signal processing modules integrated into a cohesive package through a standard command interpreter. The ISLE developers elected to concentrate their effort on developing image and signal processing software rather than developing a command interpreter. A COMMON LISP interpreter was selected for the command interpreter because it already has the features desired in a command interpreter, it supports dynamic loading of modules for customization purposes, it supports run-time parameter and argument type checking, it is very well documented, and it is a commercially supported product. This manual is intended to be a reference manual for the ISLE functions. The functions are grouped into a number of categories and briefly discussed in the Function Summary chapter. The full descriptions of the functions and all their arguments are given in the Function Descriptions chapter. 6 refs.

  2. 360 degree realistic 3D image display and image processing from real objects

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-09-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using the holographic screen as the beam deflector, a full 360-degree horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS camera based image acquisition platform was built to feed the display engine, which can capture full 360-degree continuous images of the sample at the center. Customized image processing techniques such as scaling, rotation, and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  3. A synoptic description of coal basins via image processing

    NASA Technical Reports Server (NTRS)

    Farrell, K. W., Jr.; Wherry, D. B.

    1978-01-01

    An existing image processing system is adapted to describe the geologic attributes of a regional coal basin. This scheme handles a map as if it were a matrix, in contrast to more conventional approaches which represent map information in terms of linked polygons. The utility of the image processing approach is demonstrated by a multiattribute analysis of the Herrin No. 6 coal seam in Illinois. Findings include the location of a resource and estimation of tonnage corresponding to constraints on seam thickness, overburden, and Btu value, which are illustrative of the need for new mining technology.
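
    Treating each map as a matrix makes a multiattribute query an elementwise Boolean operation over co-registered layers. A hedged sketch with invented layer values, constraints, and conversion factors:

```python
import numpy as np

# Each map layer is a matrix over the same grid (hypothetical values):
# seam thickness (m), overburden depth (m), heating value (Btu/lb).
rng = np.random.default_rng(2)
thickness = rng.uniform(0.0, 3.0, size=(100, 100))
overburden = rng.uniform(10.0, 300.0, size=(100, 100))
btu = rng.uniform(9000.0, 13000.0, size=(100, 100))

# Multiattribute query as elementwise matrix logic: cells meeting all
# three constraints at once.
minable = (thickness > 1.5) & (overburden < 150.0) & (btu > 11000.0)

# Tonnage: cell area (m^2) * thickness (m) * coal density (t/m^3).
cell_area, density = 1.0e6, 1.3
tonnage = float(np.sum(thickness[minable]) * cell_area * density)
print(f"minable cells: {minable.sum()}, tonnage: {tonnage:.3e} t")
```

    This is the matrix analogue of the polygon-overlay query: no polygon intersection code is needed, only aligned grids and Boolean masks.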

  4. 157km BOTDA with pulse coding and image processing

    NASA Astrophysics Data System (ADS)

    Qian, Xianyang; Wang, Zinan; Wang, Song; Xue, Naitian; Sun, Wei; Zhang, Li; Zhang, Bin; Rao, Yunjiang

    2016-05-01

    A repeater-less Brillouin optical time-domain analyzer (BOTDA) with a 157.68 km sensing range is demonstrated, using the combination of random fiber laser Raman pumping and low-noise laser-diode Raman pumping. With optical pulse coding (OPC) and non-local means (NLM) image processing, temperature sensing with ±0.70 °C uncertainty and 8 m spatial resolution is experimentally demonstrated. The image processing approach has been proved to be compatible with OPC, and it further increases the figure-of-merit (FoM) of the system by 57%.
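
    A minimal sketch of non-local means denoising applied to a 2-D trace: each pixel is replaced by a weighted average of pixels whose surrounding patches look similar. The patch size, search window, filtering parameter and synthetic data below are toy choices, not the paper's BOTDA settings:

```python
import numpy as np

def nlm_denoise(img, patch=1, search=3, h=0.4):
    """Minimal non-local means: weights decay with the mean squared
    difference between patches, so averaging respects edges."""
    pad = patch + search
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    n, m = img.shape
    for i in range(n):
        for j in range(m):
            ci, cj = i + pad, j + pad
            ref = p[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            weights, values = [], []
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    cand = p[ci + di - patch:ci + di + patch + 1,
                             cj + dj - patch:cj + dj + patch + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    weights.append(np.exp(-d2 / h ** 2))
                    values.append(p[ci + di, cj + dj])
            out[i, j] = np.dot(weights, values) / np.sum(weights)
    return out

rng = np.random.default_rng(3)
clean = np.zeros((24, 24))
clean[:, 12:] = 1.0                  # a step, like a hot-spot boundary
noisy = clean + rng.normal(scale=0.2, size=clean.shape)
den = nlm_denoise(noisy)
print(f"noise std before/after: "
      f"{np.std(noisy - clean):.3f} / {np.std(den - clean):.3f}")
```

    Because patches on opposite sides of the step differ strongly, their mutual weights are tiny, which is why NLM can lower noise without blurring the sharp transition that carries the spatial resolution.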

  5. Image processing method for multicore fiber geometric parameters

    NASA Astrophysics Data System (ADS)

    Zhang, Chuanbiao; Ning, Tigang; Li, Jing; Li, Chao; Ma, Shaoshuo

    2016-05-01

    An image processing method has been developed to obtain multicore fiber geometric parameters. According to the characteristics of multicore fiber, we use MATLAB to process the sectional view of the multicore fiber (MCF). The algorithm mainly comprises the following steps: filtering out image noise, edge detection, boundary extraction with an appropriate threshold, and an improved curve-fitting algorithm for reconstructing the cross section; we then obtain the relevant geometric parameters of the MCF in pixels. We also compare different edge detection operators and analyze each detection result, which can provide a meaningful reference for edge detection.
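
    A minimal sketch of the edge-detection and thresholding steps, written as a plain Sobel operator in Python rather than MATLAB; the threshold value and synthetic cross-section are invented for illustration:

```python
import numpy as np

def sobel_edges(img, thresh):
    """Sobel gradient magnitude followed by a fixed threshold - the
    boundary-extraction step described above."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    n, m = img.shape
    for i in range(n):
        for j in range(m):
            win = p[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return np.hypot(gx, gy) > thresh

# Synthetic fiber cross-section: one bright "core" disc.
n = 64
y, x = np.ogrid[:n, :n]
img = (np.hypot(y - 32, x - 32) < 10).astype(float)
edges = sobel_edges(img, thresh=1.0)
print(f"edge pixels found: {edges.sum()}")
```

    The resulting boundary pixels are what a subsequent circle- or curve-fitting step would consume to recover core diameters and positions in pixel units.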

  6. Influence of chemical processing on the imaging properties of microlenses

    NASA Astrophysics Data System (ADS)

    Vasiljević, Darko; Murić, Branka; Pantelić, Dejan; Panić, Bratimir

    2009-07-01

    Microlenses are produced by irradiation of a layer of tot'hema and eosin sensitized gelatin (TESG) by using a laser beam (Nd:YAG 2nd harmonic; 532 nm). All the microlenses obtained are concave with a parabolic profile. After the production, the microlenses are chemically processed with various concentrations of alum. The following imaging properties of microlenses were calculated and analyzed: the root mean square (rms) wavefront aberration, the geometric encircled energy and the spot diagram. The microlenses with higher concentrations of alum in solution had a greater effective focal length and better image quality. The microlenses chemically processed with 10% alum solution had near-diffraction-limited performance.

  7. Instructional image processing on a university mainframe: The Kansas system

    NASA Technical Reports Server (NTRS)

    Williams, T. H. L.; Siebert, J.; Gunn, C.

    1981-01-01

    An interactive digital image processing program package was developed that runs on the University of Kansas central computer, a Honeywell Level 66 multi-processor system. The module form of the package allows easy and rapid upgrades and extensions of the system and is used in remote sensing courses in the Department of Geography, in regional five-day short courses for academics and professionals, and also in remote sensing projects and research. The package comprises three self-contained modules of processing functions: Subimage extraction and rectification; image enhancement, preprocessing and data reduction; and classification. Its use in a typical course setting is described. Availability and costs are considered.

  8. A new programming metaphor for image processing procedures

    NASA Technical Reports Server (NTRS)

    Smirnov, O. M.; Piskunov, N. E.

    1992-01-01

    Most image processing systems, besides an Application Program Interface (API) which lets users write their own image processing programs, also feature a higher level of programmability. Traditionally, this is a command or macro language, which can be used to build large procedures (scripts) out of simple programs or commands. This approach, a legacy of the teletypewriter, has serious drawbacks. A command language is clumsy when (and if!) it attempts to utilize the capabilities of a multitasking or multiprocessor environment, it is barely adequate for real-time data acquisition and processing, it has a fairly steep learning curve, and its user interface is very inefficient, especially when compared to a graphical user interface (GUI) that systems running under X11 or Windows should otherwise be able to provide. All these difficulties stem from one basic problem: a command language is not a natural metaphor for an image processing procedure. A more natural metaphor, an image processing factory, is described in detail. A factory is a set of programs (applications) that execute separate operations on images, connected by pipes that carry data (images and parameters) between them. The programs function concurrently, processing images as they arrive along pipes, and querying the user for whatever other input they need. From the user's point of view, programming (constructing) factories is a lot like playing with LEGO blocks - much more intuitive than writing scripts. Focus is on some of the difficulties of implementing factory support, most notably the design of an appropriate API. It is also shown that factories retain all the functionality of a command language (including loops and conditional branches), while suffering from none of the drawbacks outlined above. Other benefits of factory programming include self-tuning factories and the process of encapsulation, which lets a factory take the shape of a standard application from both the system's and the user's point of view.
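The pipes-and-concurrent-programs idea can be sketched with threads and queues standing in for processes and pipes (a toy illustration under assumed stage functions, not the authors' actual API):

```python
import threading
import queue

def stage(func, inbox, outbox):
    # One factory "program": consume items from the input pipe, apply a
    # single operation, and pass results downstream. None is a sentinel
    # that shuts the stage down and is forwarded to the next stage.
    while True:
        item = inbox.get()
        if item is None:
            outbox.put(None)
            break
        outbox.put(func(item))

def build_factory(funcs):
    # Connect the stages with queues ("pipes"); all stages run concurrently.
    queues = [queue.Queue() for _ in range(len(funcs) + 1)]
    for f, q_in, q_out in zip(funcs, queues, queues[1:]):
        threading.Thread(target=stage, args=(f, q_in, q_out), daemon=True).start()
    return queues[0], queues[-1]

# Example: normalize then threshold a stream of tiny "images" (flat lists).
source, sink = build_factory([
    lambda img: [p / 255.0 for p in img],            # normalize to [0, 1]
    lambda img: [1 if p > 0.5 else 0 for p in img],  # binarize
])
for img in ([0, 100, 255], [255, 255, 0]):
    source.put(img)
source.put(None)

results = []
while (item := sink.get()) is not None:
    results.append(item)
print(results)  # [[0, 0, 1], [1, 1, 0]]
```

Because each stage blocks on its input queue, images flow through the factory as they arrive, much as the abstract describes.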

  9. High Throughput Multispectral Image Processing with Applications in Food Science.

    PubMed

    Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John

    2015-01-01

    Recently, machine vision is gaining attention in food science as well as in food industry concerning food quality assessment and monitoring. Into the framework of implementation of Process Analytical Technology (PAT) in the food industry, image processing can be used not only in estimation and even prediction of food quality but also in detection of adulteration. Towards these applications on food science, we present here a novel methodology for automated image analysis of several kinds of food products e.g. meat, vanilla crème and table olives, so as to increase objectivity, data reproducibility, low cost information extraction and faster quality assessment, without human intervention. Image processing's outcome will be propagated to the downstream analysis. The developed multispectral image processing method is based on unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we prove its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high throughput approach appropriate for massive data extraction from food samples. PMID:26466349
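The unsupervised GMM step can be illustrated with a deliberately minimal 1-D, two-component EM fit on synthetic pixel intensities (the paper's method operates on multispectral vectors and adds band selection, which this sketch omits):

```python
import math
import random

def em_gmm_1d(data, iters=40):
    # Minimal 2-component 1-D Gaussian mixture fitted by EM: a sketch of
    # unsupervised GMM segmentation of pixel intensities.
    mu = [min(data), max(data)]
    var = [100.0, 100.0]   # broad starting variances avoid underflow
    w = [0.5, 0.5]
    for _ in range(iters):
        resp = []          # E-step: per-pixel component responsibilities
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        for k in (0, 1):   # M-step: update means, variances, weights
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
            w[k] = nk / len(data)
    return mu, var, w

# Synthetic intensities: dark background pixels vs. bright foreground pixels.
random.seed(0)
pixels = [random.gauss(50, 5) for _ in range(200)] + \
         [random.gauss(200, 10) for _ in range(200)]
mu, var, w = em_gmm_1d(pixels)
labels = [int(abs(x - mu[1]) < abs(x - mu[0])) for x in pixels]
```

Assigning each pixel to its nearest fitted mean yields the segmentation; the recovered means land close to the generating values of 50 and 200.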

  11. Eclipse: ESO C Library for an Image Processing Software Environment

    NASA Astrophysics Data System (ADS)

    Devillard, Nicolas

    2011-12-01

    Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry, image-space resampling, image combination, and cube stacking. It also contains support for mathematical tools like random number generation, FFT, curve fitting, matrices, fast median computation, and point-pattern matching. The main feature of this library is its ability to handle large amounts of input data (up to 2GB in the current version) regardless of the amount of memory and swap available on the local machine. Another feature is the very high speed allowed by optimized C, making it an ideal base tool for programming efficient number-crunching applications, e.g., on parallel (Beowulf) systems.

  12. Remotely sensed image processing service composition based on heuristic search

    NASA Astrophysics Data System (ADS)

    Yang, Xiaoxia; Zhu, Qing; Li, Hai-feng; Zhao, Wen-hao

    2008-12-01

    As remote sensing technology becomes ever more powerful with multiple platforms and sensors, it has been widely recognized for contributing to geospatial information efforts. Because remotely sensed image processing demands large-scale, collaborative processing and massive storage capabilities to satisfy the increasing demands of various applications, the effectiveness and efficiency of remotely sensed image processing fall far short of users' expectations. The emergence of Service Oriented Architecture (SOA) may make this challenge manageable: it encapsulates all processing functions into services and recombines them into service chains. Service composition on demand has become a hot topic. Aiming at the success rate, quality, and efficiency of processing service composition for remote sensing applications, a remotely sensed image processing service composition method is proposed in this paper. It composes services for a user requirement in two steps: 1) dynamically constructing a complete service dependency graph for the user requirement on-line; 2) performing an AO*-based heuristic search for the optimal valid path in the service dependency graph. The services within the service dependency graph are considered relevant to the specific request, instead of all registered services. The second step, heuristic search, is a promising approach for automated planning: starting with the initial state, AO* uses a heuristic function to select states until the user requirement is reached. Experimental results show that this method performs well even when the repository contains a large number of processing services.

  13. Advanced 3-D analysis, client-server systems, and cloud computing—Integration of cardiovascular imaging data into clinical workflows of transcatheter aortic valve replacement

    PubMed Central

    Zimmermann, Mathis; Falkner, Juergen

    2013-01-01

    Degenerative aortic stenosis is highly prevalent in the aging populations of industrialized countries and is associated with poor prognosis. Surgical valve replacement has been the only established treatment with documented improvement of long-term outcome. However, many of the older patients with aortic stenosis (AS) are high-risk or ineligible for surgery. For these patients, transcatheter aortic valve replacement (TAVR) has emerged as a treatment alternative. The TAVR procedure is characterized by a lack of visualization of the operative field. Therefore, pre- and intra-procedural imaging is critical for patient selection, pre-procedural planning, and intra-operative decision-making. Incremental to conventional angiography and 2-D echocardiography, multidetector computed tomography (CT) has assumed an important role before TAVR. The analysis of 3-D CT data requires extensive post-processing during direct interaction with the dataset, using advanced analysis software. Organization and storage of the data according to complex clinical workflows and sharing of image information have become a critical part of these novel treatment approaches. Optimally, the data are integrated into a comprehensive image data file accessible to multiple groups of practitioners across the hospital. This creates new challenges for data management requiring a complex IT infrastructure, spanning across multiple locations, but is increasingly achieved with client-server solutions and private cloud technology. This article describes the challenges and opportunities created by the increased amount of patient-specific imaging data in the context of TAVR. PMID:24282750

  14. Hyperconnections and hierarchical representations for grayscale and multiband image processing.

    PubMed

    Perret, Benjamin; Lefevre, Sébastien; Collet, Christophe; Slezak, Eric

    2012-01-01

    Connections in image processing are an important notion that describes how pixels can be grouped together according to their spatial relationships and/or their gray-level values. In recent years, several works were devoted to the development of new theories of connections among which hyperconnection (h-connection) is a very promising notion. This paper addresses two major issues of this theory. First, we propose a new axiomatic that ensures that every h-connection generates decompositions that are consistent for image processing and, more precisely, for the design of h-connected filters. Second, we develop a general framework to represent the decomposition of an image into h-connections as a tree that corresponds to the generalization of the connected component tree. Such trees are indeed an efficient and intuitive way to design attribute filters or to perform detection tasks based on qualitative or quantitative attributes. These theoretical developments are applied to a particular fuzzy h-connection, and we test this new framework on several classical applications in image processing, i.e., segmentation, connected filtering, and document image binarization. The experiments confirm the suitability of the proposed approach: It is robust to noise, and it provides an efficient framework to design selective filters.

  15. Parallel-Processing Software for Creating Mosaic Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric

    2008-01-01

    A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
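The slice-per-CPU scheme can be sketched as follows, with threads standing in for the program's CPUs and a stand-in warp function (the real warping uses camera models and 3-D pixel correlation):

```python
from concurrent.futures import ThreadPoolExecutor

def warp_slice(job):
    # Stand-in for warping source pixels into the mosaic reference frame:
    # each output pixel here is just a deterministic function of its coords.
    start, stop, width = job
    return [(r, [(r * width + c) % 256 for c in range(width)])
            for r in range(start, stop)]

def build_mosaic(height, width, n_workers=4):
    # Divide the mosaic into horizontal slices, one per worker, process the
    # slices concurrently, then gather the finished rows into the mosaic.
    step = -(-height // n_workers)   # ceiling division
    jobs = [(i, min(i + step, height), width) for i in range(0, height, step)]
    mosaic = [None] * height
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for chunk in pool.map(warp_slice, jobs):
            for r, row in chunk:
                mosaic[r] = row
    return mosaic

m = build_mosaic(8, 16)
print(len(m), len(m[0]))  # 8 16
```

As the abstract notes, wall-clock time then depends on the worker count, per-worker speed, and how the data are staged.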

  16. Automatic Evaluation of Welded Joints Using Image Processing on Radiographs

    NASA Astrophysics Data System (ADS)

    Schwartz, Ch.

    2003-03-01

    Radiography is frequently used to detect discontinuities in welded joints (porosity, cracks, lack of penetration). Perfect knowledge of the geometry of these defects is an important step which is essential to appreciate the quality of the weld. Because of this, an action improving the interpretation of radiographs by image processing has been undertaken. The principle consists in making a radiograph of the welded joint and of a depth step wedge penetrameter in the material. The radiograph is then finely digitized and an automatic processing of the radiograph of the penetrameter image allows the establishment of a correspondence between grey levels and material thickness. An algorithm based on image processing is used to localize defects in the welded joints and to isolate them from the original image. First, defects detected by this method are characterized in terms of dimension and equivalent thickness. Then, from the image of the healthy welded joint (that is to say without the detected defects), characteristic values of the weld are evaluated (thickness reduction, width).
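The correspondence between grey levels and material thickness established from the step wedge can be sketched as piecewise-linear interpolation; the wedge measurements below are hypothetical:

```python
def calibration(wedge):
    # wedge: (grey_level, thickness_mm) pairs measured on the digitized
    # image of the step-wedge penetrameter.
    wedge = sorted(wedge)

    def grey_to_thickness(g):
        # Piecewise-linear interpolation between adjacent wedge steps;
        # clamp outside the calibrated range.
        if g <= wedge[0][0]:
            return wedge[0][1]
        for (g0, t0), (g1, t1) in zip(wedge, wedge[1:]):
            if g <= g1:
                return t0 + (t1 - t0) * (g - g0) / (g1 - g0)
        return wedge[-1][1]

    return grey_to_thickness

# Hypothetical calibration: darker film corresponds to thicker material.
to_thickness = calibration([(40, 20.0), (90, 15.0), (150, 10.0), (210, 5.0)])
print(to_thickness(120))  # 12.5
```

A defect's equivalent thickness is then read off by applying this mapping to the grey levels inside the detected defect region.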

  17. An image-processing program for automated counting

    USGS Publications Warehouse

    Cunningham, D.J.; Anderson, W.H.; Anthony, R.M.

    1996-01-01

    An image-processing program developed by the National Institutes of Health, IMAGE, was modified in a cooperative project between remote sensing specialists at the Ohio State University Center for Mapping and scientists at the Alaska Science Center to facilitate estimating numbers of black brant (Branta bernicla nigricans) in flocks at Izembek National Wildlife Refuge. The modified program, DUCK HUNT, runs on Apple computers. Modifications provide users with a pull down menu that optimizes image quality; identifies objects of interest (e.g., brant) by spectral, morphometric, and spatial parameters defined interactively by users; counts and labels objects of interest; and produces summary tables. Images from digitized photography, videography, and high-resolution digital photography have been used with this program to count various species of waterfowl.
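The core counting operation (threshold the image, then count connected groups of bright pixels) can be sketched as below; DUCK HUNT's spectral and morphometric filters are not modeled:

```python
def count_objects(img, threshold):
    # Threshold the image, then count 4-connected components by flood fill:
    # the essence of counting birds in a digitized aerial photograph.
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if img[i][j] > threshold and not seen[i][j]:
                count += 1
                stack = [(i, j)]   # flood-fill one object
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

img = [[0, 200, 0, 0],
       [0, 200, 0, 180],
       [0, 0, 0, 180],
       [190, 0, 0, 0]]
print(count_objects(img, 128))  # 3
```

In a production tool the per-component pixel lists would also feed size and shape filters before the final count.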

  18. Image-plane processing for improved computer vision

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    The proper combination of optical design with image-plane processing, as in the mechanism of human vision, was examined as a way to improve the performance of sensor-array imaging systems for edge detection and location. Two-dimensional bandpass filtering during image formation optimizes edge enhancement and minimizes data transmission. It also permits control of the spatial imaging system response to trade off edge enhancement against sensitivity at low light levels. It is shown that most of the information, up to about 94%, is contained in the signal intensity transitions from which the location of edges is determined for raw primal sketches. Shading the lens transmittance to increase depth of field and using a hexagonal instead of a square sensor-array lattice to decrease sensitivity to edge orientation improves edge information by about 10%.

  19. Infective endocarditis detection through SPECT/CT images digital processing

    NASA Astrophysics Data System (ADS)

    Moreno, Albino; Valdés, Raquel; Jiménez, Luis; Vallejo, Enrique; Hernández, Salvador; Soto, Gabriel

    2014-03-01

    Infective endocarditis (IE) is a difficult-to-diagnose pathology, since its manifestation in patients is highly variable. In this work, a semiautomatic algorithm based on digital processing of SPECT images was proposed for the detection of IE, using a CT image volume as a spatial reference. The heart/lung ratio was calculated using the SPECT image information. There were no statistically significant differences between the heart/lung ratios of a group of patients diagnosed with IE (2.62+/-0.47) and a group of healthy or control subjects (2.84+/-0.68). However, it is necessary to increase the study sample of both the individuals diagnosed with IE and the control group subjects, as well as to improve the image quality.

  20. Image-based RSA: Roentgen stereophotogrammetric analysis based on 2D-3D image registration.

    PubMed

    de Bruin, P W; Kaptein, B L; Stoel, B C; Reiber, J H C; Rozing, P M; Valstar, E R

    2008-01-01

    Image-based Roentgen stereophotogrammetric analysis (IBRSA) integrates 2D-3D image registration and conventional RSA. Instead of radiopaque RSA bone markers, IBRSA uses 3D CT data, from which digitally reconstructed radiographs (DRRs) are generated. Using 2D-3D image registration, the 3D pose of the CT is iteratively adjusted such that the generated DRRs resemble the 2D RSA images as closely as possible, according to an image matching metric. Effectively, by registering all 2D follow-up moments to the same 3D CT, the CT volume functions as common ground. In two experiments, using RSA and using a micromanipulator as gold standard, IBRSA has been validated on cadaveric and sawbone scapula radiographs, and good matching results have been achieved. The accuracy was |μ| < 0.083 mm for translations and |μ| < 0.023° for rotations. The precision σ in the x-, y-, and z-directions was 0.090, 0.077, and 0.220 mm for translations and 0.155°, 0.243°, and 0.074° for rotations. Our results show that the accuracy and precision of in vitro IBRSA, performed under ideal laboratory conditions, are lower than in vitro standard RSA but higher than in vivo standard RSA. Because IBRSA does not require radiopaque markers, it adds functionality to the RSA method by opening new directions and possibilities for research, such as dynamic analyses using fluoroscopy on subjects without markers and computer navigation applications.
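The image matching metric that the registration iteratively optimizes can be illustrated with plain normalized cross-correlation on flattened scan lines (only an example metric; the one actually used in IBRSA may differ):

```python
def ncc(a, b):
    # Normalized cross-correlation of two equal-length pixel sequences:
    # 1.0 for a perfect linear match, lower for misregistered images.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

drr = [10, 40, 90, 40, 10]       # simulated DRR scan line (hypothetical)
xray = [12, 43, 88, 41, 9]       # measured RSA line: well-aligned pose
shifted = [40, 90, 40, 10, 10]   # same line under a misregistered pose
print(ncc(drr, xray) > ncc(drr, shifted))  # True
```

The pose optimizer adjusts the CT's six pose parameters until such a metric over the whole DRR/RSA image pair is maximized.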

  1. Woods Hole Image Processing System Software implementation; using NetCDF as a software interface for image processing

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    The Branch of Atlantic Marine Geology has been involved in the collection, processing and digital mosaicking of high, medium and low-resolution side-scan sonar data during the past 6 years. In the past, processing and digital mosaicking has been accomplished with a dedicated, shore-based computer system. With the need to process sidescan data in the field with increased power and reduced cost of major workstations, a need to have an image processing package on a UNIX based computer system which could be utilized in the field as well as be more generally available to Branch personnel was identified. This report describes the initial development of that package referred to as the Woods Hole Image Processing System (WHIPS). The software was developed using the Unidata NetCDF software interface to allow data to be more readily portable between different computer operating systems.

  2. Quantification of chromatin condensation level by image processing.

    PubMed

    Irianto, Jerome; Lee, David A; Knight, Martin M

    2014-03-01

    The level of chromatin condensation is related to the silencing/activation of chromosomal territories and therefore impacts on gene expression. Chromatin condensation changes during cell cycle progression and differentiation, and is influenced by various physicochemical and epigenetic factors. This study describes a validated experimental technique to quantify chromatin condensation. A novel image processing procedure is developed using Sobel edge detection to quantify the level of chromatin condensation from nuclei images taken by confocal microscopy. The algorithm was developed in MATLAB and used to quantify different levels of chromatin condensation in chondrocyte nuclei achieved through alteration in osmotic pressure. The resulting chromatin condensation parameter (CCP) is in good agreement with independent multi-observer qualitative visual assessment. This image processing technique thereby provides a validated unbiased parameter for rapid and highly reproducible quantification of the level of chromatin condensation.
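The Sobel-based quantification can be sketched as a mean gradient magnitude over the nucleus image; the published CCP (a MATLAB implementation) uses its own normalization, so this is only illustrative:

```python
def sobel_ccp(img):
    # Condensation sketch: mean Sobel edge magnitude over interior pixels.
    # Condensed chromatin is speckled and high-contrast, giving more edges.
    h, w = len(img), len(img[0])
    edge_sum = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            edge_sum += (gx * gx + gy * gy) ** 0.5
    return edge_sum / ((h - 2) * (w - 2))

flat = [[100] * 5 for _ in range(5)]   # homogeneous (decondensed) nucleus
speckled = [[200 if (x // 2) % 2 else 100 for x in range(5)]
            for y in range(5)]         # striped, high-contrast pattern
print(sobel_ccp(flat) < sobel_ccp(speckled))  # True
```

A homogeneous nucleus scores zero, while internal intensity structure raises the parameter, which is the direction of the effect the CCP captures.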

  3. Personal Computer (PC) based image processing applied to fluid mechanics

    NASA Technical Reports Server (NTRS)

    Cho, Y.-C.; Mclachlan, B. G.

    1987-01-01

    A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.

  4. High performance medical image processing in client/server-environments.

    PubMed

    Mayer, A; Meinzer, H P

    1999-03-01

    As 3D scanning devices like computer tomography (CT) or magnetic resonance imaging (MRI) become more widespread, there is also an increasing need for powerful computers that can handle the enormous amounts of data with acceptable response times. We describe an approach to parallelize some of the more frequently used image processing operators on distributed memory architectures. It is desirable to make such specialized machines accessible on a network, in order to save costs by sharing resources. We present a client/server approach that is specifically tailored to the interactive work with volume data. Our image processing server implements a volume visualization method that allows the user to assess the segmentation of anatomical structures. We can enhance the presentation by combining the volume visualizations on a viewing station with additional graphical elements, which can be manipulated in real-time. The methods presented were verified on two applications for different domains. PMID:10094225

  5. Towards a Platform for Image Acquisition and Processing on RASTA

    NASA Astrophysics Data System (ADS)

    Furano, Gianluca; Guettache, Farid; Magistrati, Giorgio; Tiotto, Gabriele

    2013-08-01

    This paper presents the architecture of a platform for image acquisition and processing based on commercial hardware and space qualified hardware. The aim is to extend the Reference Architecture Test-bed for Avionics (RASTA) system in order to obtain a Test-bed that allows testing different hardware and software solutions in the field of image acquisition and processing. The platform will allow the integration of space qualified hardware and Commercial Off The Shelf (COTS) hardware in order to test different architectural configurations. The first implementation is being performed on a low cost commercial board and on the GR712RC board based on the Dual Core Leon3 fault tolerant processor. The platform will include an actuation module with the aim of implementing a complete pipeline from image acquisition to actuation, making possible the simulation of a real case scenario involving acquisition and actuation.

  6. Processing techniques for digital sonar images from GLORIA.

    USGS Publications Warehouse

    Chavez, P.S.

    1986-01-01

    Image processing techniques have been developed to handle data from one of the newest members of the remote sensing family of digital imaging systems. This paper discusses software to process data collected by the GLORIA (Geological Long Range Inclined Asdic) sonar imaging system, designed and built by the Institute of Oceanographic Sciences (IOS) in England, to correct for both geometric and radiometric distortions that exist in the original 'raw' data. Preprocessing algorithms that are GLORIA-specific include corrections for slant-range geometry, water column offset, aspect ratio distortion, changes in the ship's velocity, speckle noise, and shading problems caused by the power drop-off which occurs as a function of range.-from Author
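The slant-range correction can be illustrated with hypothetical numbers: a sample recorded at slant range R from a towfish flying H metres above the seafloor maps to ground range sqrt(R^2 - H^2), and samples inside the water column carry no seafloor return:

```python
import math

def slant_to_ground(scan, fish_height, sample_spacing):
    # Remove the water-column offset and slant-range geometry from one
    # across-track scan line of sonar samples.
    out = []
    for i, value in enumerate(scan):
        slant = i * sample_spacing
        if slant <= fish_height:
            continue   # inside the water column: no seafloor return yet
        ground = math.sqrt(slant ** 2 - fish_height ** 2)
        out.append((round(ground, 2), value))
    return out

# Hypothetical scan line: 8 samples, 10 m apart, towfish 30 m off the bottom.
scan = [0, 0, 0, 5, 9, 12, 14, 15]
corrected = slant_to_ground(scan, fish_height=30.0, sample_spacing=10.0)
print(corrected[0])
```

A full implementation would then resample these irregularly spaced ground ranges onto a uniform pixel grid; the other GLORIA corrections (aspect ratio, ship velocity, speckle, power drop-off) are separate steps.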

  7. IPL processing of the Mariner 10 images of Mercury

    NASA Technical Reports Server (NTRS)

    Soha, J. M.; Lynn, D. J.; Lorre, J. J.; Mosher, J. A.; Thayer, N. N.; Elliott, D. A.; Benton, W. D.; Dewar, R. E.

    1975-01-01

    This paper describes the digital processing performed on the images of Mercury returned to earth from Mariner 10. Each image contains considerably more information than can be displayed in a single picture. Several specialized processing techniques and procedures are utilized to display the particular information desired for specific scientific analyses: radiometric decalibration for photometric investigations, high-pass filtering to characterize morphology, modulation transfer function restoration to provide the highest possible resolution, scene-dependent filtering of the terminator images to provide maximum feature discriminability in the regions of low illumination, and rectification to cartographic projections to provide known geometric relationships between features. A principal task was the construction of full disk mosaics as an aid to the understanding of surface structure on a global scale.

  8. Image processing in the BRITE nano-satellite mission

    NASA Astrophysics Data System (ADS)

    Popowicz, Adam

    2016-07-01

    The BRITE nano-satellite mission is an international Austrian-Canadian-Polish project of six small space telescopes measuring photometric variability of the brightest stars in the sky. Due to the limited space onboard and the weight constraints, the CCD detectors are poorly shielded and suffer from proton impact. Shortly after the launch, various CCD defects emerged, producing various sources of impulsive noise in the images. In this paper, the methods of BRITE data-processing are described and their efficiency evaluated. The proposed algorithm, developed by the BRITE photometric team, consists of three main parts: (1) image classification, (2) image processing with aperture photometry and (3) tunable optimization of parameters. The presented pipeline allows one to achieve milli-magnitude precision in photometry. Some first scientific results of the mission have just been published.
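Part (2), aperture photometry, can be sketched as an aperture sum minus a median-annulus sky estimate; the frame and geometry below are invented for illustration:

```python
def aperture_photometry(img, cx, cy, r_ap, r_in, r_out):
    # Sum pixel flux inside a circular aperture around the star, then
    # subtract the sky background estimated as the median of an annulus.
    flux, sky, n_ap = 0.0, [], 0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            if d2 <= r_ap ** 2:
                flux += v
                n_ap += 1
            elif r_in ** 2 <= d2 <= r_out ** 2:
                sky.append(v)
    sky.sort()
    return flux - sky[len(sky) // 2] * n_ap

# 11x11 frame: flat sky level 10 with a star of amplitude 100 at (5, 5).
frame = [[10] * 11 for _ in range(11)]
frame[5][5] += 100
print(aperture_photometry(frame, cx=5, cy=5, r_ap=1, r_in=3, r_out=5))  # 100.0
```

A median sky estimate is robust to the impulsive proton-hit noise the abstract mentions, which is one reason annulus medians are common in CCD photometry.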

  9. Parallel-Processing Software for Correlating Stereo Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric

    2007-01-01

    A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.

  10. Collection and processing data for high quality CCD images.

    SciTech Connect

    Doerry, Armin Walter

    2007-03-01

    Coherent Change Detection (CCD) with Synthetic Aperture Radar (SAR) images is a technique whereby very subtle temporal changes can be discerned in a target scene. However, optimal performance requires carefully matching data collection geometries and adjusting the processing to compensate for imprecision in the collection geometries. Tolerances in the precision of the data collection are discussed, and anecdotal advice is presented for optimum CCD performance. Processing considerations are also discussed.

  11. Image processing for a high-resolution optoelectronic retinal prosthesis.

    PubMed

    Asher, Alon; Segal, William A; Baccus, Stephen A; Yaroslavsky, Leonid P; Palanker, Daniel V

    2007-06-01

    In an effort to restore visual perception in retinal diseases such as age-related macular degeneration or retinitis pigmentosa, a design was recently presented for a high-resolution optoelectronic retinal prosthesis having thousands of electrodes. This system requires real-time image processing fast enough to convert a video stream of images into electrical stimulus patterns that can be properly interpreted by the brain. Here, we present image-processing and tracking algorithms for a subretinal implant designed to stimulate the second neuron in the visual pathway, bypassing the degenerated first synaptic layer. For this task, we have developed and implemented: 1) A tracking algorithm that determines the implant's position in each frame. 2) Image cropping outside of the implant boundaries. 3) A geometrical transformation that distorts the image appropriate to the geometry of the fovea. 4) Spatio-temporal image filtering to reproduce the visual processing normally occurring in photoreceptors and at the photoreceptor-bipolar cell synapse. 5) Conversion of the filtered visual information into a pattern of electrical current. Methods to accelerate real-time transformations include the exploitation of data redundancy in the time domain, and the use of precomputed lookup tables that are adjustable to retinal physiology and allow flexible control of stimulation parameters. A software implementation of these algorithms processes natural visual scenes with sufficient speed for real-time operation. This computationally efficient algorithm resembles, in some aspects, biological strategies of efficient coding in the retina and could provide a refresh rate higher than fifty frames per second on our system.
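The precomputed lookup-table idea in step 5 can be sketched as follows; the transfer curve and the 60 µA ceiling are invented for illustration, not taken from the paper:

```python
def make_lut(max_current_ua=60.0, gamma=0.5):
    # Precomputed table mapping 8-bit filtered brightness to a stimulation
    # current amplitude (hypothetical power-law transfer curve; a real
    # table would be tuned to retinal physiology and safety limits).
    return [max_current_ua * (v / 255.0) ** gamma for v in range(256)]

lut = make_lut()
frame = [0, 64, 255]                # one row of filtered pixel values
currents = [lut[v] for v in frame]  # O(1) table lookup per pixel
print(currents[0], currents[2])  # 0.0 60.0
```

Because the per-pixel conversion is a single indexed read, the table can be rebuilt whenever stimulation parameters change without touching the per-frame hot path.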

  12. Parallel asynchronous hardware implementation of image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, Darryl D.; Perera, A. G. U.

    1990-01-01

    Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing state. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging up to seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high performance detector arrays. The aforementioned hardware development effort is aimed at systems which would integrate image acquisition and image processing.

  13. Problem analysis of image processing in two-axis autocollimator

    NASA Astrophysics Data System (ADS)

    Nogin, A.; Konyakhin, I.

    2016-08-01

    The article deals with image processing algorithms in the analysis plane of an angle measuring two-axis autocollimator, which uses a reflector in the form of a quadrangular pyramid. This algorithm uses Hough transform and the method of weighted summation. The proposed algorithm can reduce the area of nonoperability and open up new possibilities for this class of devices.

  15. Model control of image processing for telerobotics and biomedical instrumentation

    NASA Astrophysics Data System (ADS)

    Nguyen, An Huu

    1993-06-01

    This thesis has model control of image processing (MCIP) as its major theme. By this it is meant that there is a top-down model approach which already knows the structure of the image to be processed. This top-down image processing under model control is used further as visual feedback to control robots and as feedforward information for biomedical instrumentation. The software engineering of the bioengineering instrumentation image processing is defined in terms of the task and the tools available. Early bottom-up image processing such as thresholding occurs only within the top-down control regions of interest (ROI's) or operating windows. Moment computation is an important bottom-up procedure, as is pyramiding to attain rapid computation, among other considerations in attaining programming efficiencies. A distinction is made between initialization procedures and stripped-down run-time operations. Even more detailed engineering design considerations are addressed with respect to the ellipsoidal modeling of objects. Here the major axis orientation is an important additional piece of information, beyond the centroid moments. Careful analysis of various sources of errors and considerable benchmarking characterized the detailed considerations of the software engineering of the image processing procedures. Image processing for robotic control involves a great deal of 3D calibration of the robot working environment (RWE). Of special interest is the idea of adapting the machine scanpath to the current task. It was important to pay careful attention to the hardware aspects of the control of the toy robots that were used to demonstrate the general methodology. It was necessary to precalibrate the open-loop gains for all motors so that after initialization the visual feedback, which depends on MCIP, would be able to supply enough information quickly enough to the control algorithms to govern the robots under a variety of control configurations and task operations.

  16. Instant super-resolution imaging in live cells and embryos via analog image processing

    PubMed Central

    York, Andrew G.; Chandris, Panagiotis; Nogare, Damian Dalle; Head, Jeffrey; Wawrzusin, Peter; Fischer, Robert S.; Chitnis, Ajay; Shroff, Hari

    2013-01-01

    Existing super-resolution fluorescence microscopes compromise acquisition speed to provide subdiffractive sample information. We report an analog implementation of structured illumination microscopy that enables 3D super-resolution imaging with 145 nm lateral and 350 nm axial resolution, at acquisition speeds up to 100 Hz. By performing image processing operations optically instead of digitally, we removed the need to capture, store, and combine multiple camera exposures, increasing data acquisition rates 10–100x over other super-resolution microscopes and acquiring and displaying super-resolution images in real-time. Low excitation intensities allow imaging over hundreds of 2D sections, and combined physical and computational sectioning allow similar depth penetration to confocal microscopy. We demonstrate the capability of our system by imaging fine, rapidly moving structures including motor-driven organelles in human lung fibroblasts and the cytoskeleton of flowing blood cells within developing zebrafish embryos. PMID:24097271

  17. Recent developments at JPL in the application of digital image processing techniques to astronomical images

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.; Lynn, D. J.; Benton, W. D.

    1976-01-01

    Several techniques of a digital image-processing nature are illustrated which have proved useful in visual analysis of astronomical pictorial data. Processed digital scans of photographic plates of Stephan's Quintet and NGC 4151 are used as examples to show how faint nebulosity is enhanced by high-pass filtering, how foreground stars are suppressed by linear interpolation, and how relative color differences between two images recorded on plates with different spectral sensitivities can be revealed by generating ratio images. Analyses are outlined which are intended to compensate partially for the blurring effects of the atmosphere on images of Stephan's Quintet and to obtain more detailed information about Saturn's ring structure from low- and high-resolution scans of the planet and its ring system. The employment of a correlation picture to determine the tilt angle of an average spectral line in a low-quality spectrum is demonstrated for a section of the spectrum of Uranus.
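
    The high-pass filtering used to enhance faint nebulosity can be sketched as follows. This is an illustrative Fourier-domain version on synthetic data, not the JPL processing chain; the cutoff frequency is an arbitrary assumption:

```python
import numpy as np

# Toy "plate scan": smooth nebulosity (low spatial frequency) plus faint ripple detail.
y, x = np.mgrid[0:64, 0:64].astype(float)
background = np.exp(-((x - 32)**2 + (y - 32)**2) / 800.0)  # slowly varying glow
detail = 0.01 * np.sin(1.5 * x)                             # faint structure
image = background + detail

# High-pass filter in the Fourier domain: subtract the low-frequency component.
F = np.fft.fft2(image)
fy = np.fft.fftfreq(64)[:, None]
fx = np.fft.fftfreq(64)[None, :]
lowpass = (fx**2 + fy**2) < 0.05**2        # keep only slow spatial variations
smooth = np.fft.ifft2(F * lowpass).real
enhanced = image - smooth                   # faint structure now dominates
```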

  18. Orthogonal rotation-invariant moments for digital image processing.

    PubMed

    Lin, Huibao; Si, Jennie; Abousleman, Glen P

    2008-03-01

    Orthogonal rotation-invariant moments (ORIMs), such as Zernike moments, are introduced and defined on a continuous unit disk and have been proven powerful tools in optics applications. These moments have also been digitized for applications in digital image processing. Unfortunately, digitization compromises the orthogonality of the moments and, therefore, digital ORIMs are incapable of representing subtle details in images and cannot accurately reconstruct images. Typical approaches to alleviate the digitization artifact can be divided into two categories: 1) careful selection of a set of pixels as close approximation to the unit disk and using numerical integration to determine the ORIM values, and 2) representing pixels using circular shapes such that they resemble that of the unit disk and then calculating ORIMs in polar space. These improvements still fall short of preserving the orthogonality of the ORIMs. In this paper, in contrast to the previous methods, we propose a different approach of using numerical optimization techniques to improve the orthogonality. We prove that with the improved orthogonality, image reconstruction becomes more accurate. Our simulation results also show that the optimized digital ORIMs can accurately reconstruct images and can represent subtle image details. PMID:18270118
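
    The digitization problem the paper addresses can be observed directly: two Zernike polynomials that are exactly orthogonal on the continuous unit disk acquire a nonzero inner product when sampled on a pixel grid. A small numerical check (illustrative only, not the authors' optimization method):

```python
import numpy as np

N = 32  # grid resolution; the digitization artifact shrinks as N grows
c = (np.arange(N) + 0.5) / N * 2 - 1     # pixel centres spanning [-1, 1]
x, y = np.meshgrid(c, c)
r2 = x**2 + y**2
inside = r2 <= 1.0                        # pixel set approximating the unit disk

# Azimuthally symmetric Zernike polynomials, orthogonal on the *continuous* disk.
Z2 = 2 * r2 - 1                           # radial polynomial of Z_2^0
Z4 = 6 * r2**2 - 6 * r2 + 1               # radial polynomial of Z_4^0

cross = (Z2 * Z4)[inside].mean()          # continuous-disk value: exactly 0
norm2 = (Z2 ** 2)[inside].mean()          # continuous-disk value: 1/3
```

    `cross` comes out small but nonzero, which is exactly the loss of orthogonality the paper sets out to repair.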

  19. IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING

    NASA Technical Reports Server (NTRS)

    Roth, D. J.

    1994-01-01

    IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are further routines, also selected via the keyboard. Some of the functions performed by IMAGEP include digitization, storage and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of cursor; display of grey level histogram of image; and display of the variation of grey level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.

  20. Computed tomography perfusion imaging denoising using Gaussian process regression

    NASA Astrophysics Data System (ADS)

    Zhu, Fan; Carpenter, Trevor; Rodriguez Gonzalez, David; Atkinson, Malcolm; Wardlaw, Joanna

    2012-06-01

    Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of the limit on the patient's radiation exposure. Methods for improving the CNR are therefore valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data is 4D, as it also contains temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing a better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us to identify key parameters from tissue time-concentration curves and reduces the oscillations in the curve. GPR is superior to the comparable techniques used in this study.
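
    A minimal sketch of GP-regression denoising of a single voxel's time-concentration curve, using the closed-form posterior mean with hand-picked hyperparameters (the kernel, length scale and noise level here are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 40, 60)                          # acquisition times (s), hypothetical
true_curve = 5.0 * (t / 8.0) * np.exp(1 - t / 8.0)  # gamma-variate-like bolus shape
y = true_curve + rng.normal(0, 0.6, t.size)         # noisy voxel time-concentration curve

def rbf(a, b, length=4.0, amp=3.0):
    """Squared-exponential covariance between time points a and b."""
    return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

noise_var = 0.6**2
K = rbf(t, t) + noise_var * np.eye(t.size)
# GP posterior mean at the observed times: K_* (K + sigma^2 I)^{-1} y
denoised = rbf(t, t) @ np.linalg.solve(K, y)
```

    The posterior mean smooths along the temporal axis only, which is the key difference from the spatial filters mentioned above.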

  1. Quantitative evaluation of phase processing approaches in susceptibility weighted imaging

    NASA Astrophysics Data System (ADS)

    Li, Ningzhi; Wang, Wen-Tung; Sati, Pascal; Pham, Dzung L.; Butman, John A.

    2012-03-01

    Susceptibility weighted imaging (SWI) takes advantage of the local variation in susceptibility between different tissues to enable highly detailed visualization of the cerebral venous system and sensitive detection of intracranial hemorrhages. Thus, it has been increasingly used in magnetic resonance imaging studies of traumatic brain injury as well as other intracranial pathologies. In SWI, magnitude information is combined with phase information to enhance the susceptibility induced image contrast. Because of global susceptibility variations across the image, the rate of phase accumulation varies widely across the image resulting in phase wrapping artifacts that interfere with the local assessment of phase variation. Homodyne filtering is a common approach to eliminate this global phase variation. However, filter size requires careful selection in order to preserve image contrast and avoid errors resulting from residual phase wraps. An alternative approach is to apply phase unwrapping prior to high pass filtering. A suitable phase unwrapping algorithm guarantees no residual phase wraps but additional computational steps are required. In this work, we quantitatively evaluate these two phase processing approaches on both simulated and real data using different filters and cutoff frequencies. Our analysis leads to an improved understanding of the relationship between phase wraps, susceptibility effects, and acquisition parameters. Although homodyne filtering approaches are faster and more straightforward, phase unwrapping approaches perform more accurately in a wider variety of acquisition scenarios.
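
    The homodyne filtering approach can be sketched in a few lines: divide the complex image by a low-pass-filtered copy of itself, so the slowly varying (wrapped) global phase cancels and the local susceptibility-induced phase remains. The data here are synthetic, and the k-space window half-width `k` stands in for the filter-size parameter discussed above:

```python
import numpy as np

# Hypothetical complex MR image: slow global phase ramp plus a small local phase dip.
N = 128
y, x = np.mgrid[0:N, 0:N].astype(float)
global_phase = 0.15 * x + 0.1 * y                            # wraps across the image
local = -1.0 * np.exp(-((x - 64)**2 + (y - 64)**2) / 4.0)    # "vein-like" phase feature
img = np.exp(1j * (global_phase + local))

# Homodyne filter: divide by a low-pass-filtered copy to remove the slow phase.
F = np.fft.fftshift(np.fft.fft2(img))
mask = np.zeros_like(F)
k = 8                                        # central k-space window (filter size)
mask[N//2-k:N//2+k, N//2-k:N//2+k] = 1
low = np.fft.ifft2(np.fft.ifftshift(F * mask))
hp_phase = np.angle(img * np.conj(low))      # high-pass phase, global wraps removed
```

    Too large a window would leave residual wraps; too small a window would also suppress the local feature, which is the trade-off the paper quantifies.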

  2. Study of optical techniques for the Ames unitary wind tunnel: Digital image processing, part 6

    NASA Technical Reports Server (NTRS)

    Lee, George

    1993-01-01

    A survey of digital image processing techniques and processing systems for aerodynamic images has been conducted. These images covered many types of flows and were generated by many types of flow diagnostics. These include laser vapor screens, infrared cameras, laser holographic interferometry, Schlieren, and luminescent paints. Some general digital image processing systems, imaging networks, optical sensors, and image computing chips were briefly reviewed. Possible digital imaging network systems for the Ames Unitary Wind Tunnel were explored.

  3. Qualitative and quantitative interpretation of SEM image using digital image processing.

    PubMed

    Saladra, Dawid; Kopernik, Magdalena

    2016-10-01

    The aim of this study is the improvement of qualitative and quantitative analysis of scanning electron microscope micrographs through the development of a computer program which enables automatic crack analysis of scanning electron microscopy (SEM) micrographs. Micromechanical tests of pneumatic ventricular assist devices result in a large number of micrographs; therefore, the analysis must be automatic. Tests for athrombogenic titanium nitride/gold coatings deposited on polymeric substrates (Bionate II) are performed. These tests include microshear, microtension and fatigue analysis. Anisotropic surface defects observed in the SEM micrographs require support for qualitative and quantitative interpretation. Improvement of qualitative analysis of scanning electron microscope images was achieved by a set of computational tools that includes binarization, simplified expanding, expanding, simple image statistic thresholding, the Laplacian 1 and Laplacian 2 filters, Otsu thresholding and reverse binarization. Several modifications of the known image processing techniques and combinations of the selected image processing techniques were applied. The introduced quantitative analysis of digital scanning electron microscope images enables computation of stereological parameters such as area, crack angle, crack length, and total crack length per unit area. This study also compares the functionality of the developed computer program of digital image processing with existing applications. The described pre- and postprocessing may be helpful in scanning electron microscopy and transmission electron microscopy surface investigations.
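
    Among the tools listed, Otsu thresholding is a self-contained algorithm: it picks the threshold that maximizes the between-class variance of the intensity histogram. An illustrative implementation on synthetic crack-like intensity data (not the authors' program):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(img, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w = hist.cumsum()                        # pixel count at or below each bin
    total = w[-1]
    m = (hist * centers).cumsum()            # cumulative intensity mass
    w0 = w / total                           # class probabilities for each split
    w1 = 1 - w0
    mu0 = np.where(w > 0, m / np.maximum(w, 1), 0)
    mu1 = np.where(total - w > 0, (m[-1] - m) / np.maximum(total - w, 1), 0)
    sigma_b = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance per split
    return centers[np.argmax(sigma_b)]

rng = np.random.default_rng(2)
# Synthetic SEM-like intensities: dark coating pixels and brighter crack pixels.
img = np.concatenate([rng.normal(60, 10, 8000), rng.normal(180, 15, 2000)])
t = otsu_threshold(img)
```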

  4. Image processing of correlated data by experimental design techniques

    SciTech Connect

    Stern, D.

    1987-01-01

    New classes of algorithms are developed for processing of two-dimensional image data embedded in correlated noise. The algorithms are based on modifications of standard analysis of variance (ANOVA) techniques ensuring their proper operation in dependent noise. The approach taken in the development of procedures is deductive. First, the theory of modified ANOVA (MANOVA) techniques involving one- and two-way layouts are considered for noise models with autocorrelation matrix (ACM) formed by direct multiplication of rows and columns or tensored correlation matrices (TCM), stressing the special case of the first-order Markov process. Next, the techniques are generalized to include arbitrary, wide-sense stationary (WSS) processes. This permits dealing with diagonal masks which have ACM of a general form even for TCM. As further extension, the theory of Latin square (LS) masks is generalized to include dependent noise with TCM. This permits dealing with three different effects of m levels using only m² observations rather than m³. Since in many image-processing problems, replication of data is possible, the masking techniques are generalized to replicated data for which the replication is TCM dependent. For all procedures developed, algorithms are implemented which ensure real-time processing of images.

  5. RegiStax: Alignment, stacking and processing of images

    NASA Astrophysics Data System (ADS)

    Berrevoets, Cor; DeClerq, Bart; George, Tony; Makolkin, Dmitry; Maxson, Paul; Pilz, Bob; Presnyakov, Pavel; Roel, Eric; Weiller, Sylvain

    2012-06-01

    RegiStax is software for alignment/stacking/processing of images; it was released over 10 years ago and continues to be developed and improved. The current version is RegiStax 6, which supports the following formats: AVI, SER, RFL (RegiStax Framelist), BMP, JPG, TIF, and FIT. This version has a shorter and simpler processing sequence than its predecessor, and manual optimization is no longer necessary because the new image alignment method optimizes directly. The interface of RegiStax 6 has been simplified for a more uniform appearance and functionality, and RegiStax 6 now supports multi-core processing, allowing the user to have multiple cores (using at most four is recommended) working simultaneously during alignment/stacking.

  6. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computation- and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to settle the problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) with a two-level architecture. These enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, enabling real-time image editing, low memory consumption, and a wide range of applications.
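
    The core numerical task, solving the Poisson equation on the pixel grid, can be sketched with a plain Jacobi iteration: each sweep updates every interior pixel independently from its neighbours, which is exactly the data-parallel structure a GPU solver exploits. This toy CPU version is not MDGS itself; the grid and right-hand side are synthetic:

```python
import numpy as np

# Toy Poisson problem: recover interior values from a guidance Laplacian with a
# fixed boundary, as in seamless cloning. A known u_true lets us check convergence.
N = 32
y, x = np.mgrid[0:N, 0:N] / (N - 1)
u_true = np.sin(np.pi * x) * np.sin(np.pi * y) + x  # smooth target field

# 5-point discrete Laplacian of the target serves as the guidance divergence.
lap = np.zeros_like(u_true)
lap[1:-1, 1:-1] = (u_true[:-2, 1:-1] + u_true[2:, 1:-1] +
                   u_true[1:-1, :-2] + u_true[1:-1, 2:] - 4 * u_true[1:-1, 1:-1])

u = np.zeros_like(u_true)
u[0, :], u[-1, :], u[:, 0], u[:, -1] = u_true[0, :], u_true[-1, :], u_true[:, 0], u_true[:, -1]
for _ in range(2000):  # plain Jacobi; a GPU solver parallelizes exactly this update
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:] - lap[1:-1, 1:-1])
```

    Jacobi converges slowly, which is precisely why hybrid direct/iterative schemes like the one described above are worthwhile.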

  7. Design of multichannel image processing on the Space Solar Telescope

    NASA Astrophysics Data System (ADS)

    Zhang, Bin

    2000-07-01

    The multi-channel image processing system on the Space Solar Telescope (SST) is described in this paper. This system is the main part of the science data unit (SDU), which is designed to handle the science data from every payload on the SST. First, every payload on the SST and its scientific objective are introduced: the main optical telescope, four soft X-ray telescopes, an H-alpha and white-light (full disc) telescope, a coronagraph, a wide-band X-ray and gamma-ray spectrometer, and a solar and interplanetary radio spectrometer. Then the structure of the SDU is presented. In this part, we discuss the hardware and software structure of the SDU, which is designed for multiple payloads. The science data stream of every payload is summarized, too. Solar magnetic and velocity field processing, which occupies more than 90% of the data processing in the SDU, is discussed, including the polarizing unit, image receiver and image adding unit. Finally, the plans for image data compression and the mass memory designed for science data storage are presented.

  8. Dynamic phase imaging and processing of moving biological organisms

    NASA Astrophysics Data System (ADS)

    Creath, Katherine; Goldstein, Goldie

    2012-03-01

    This paper describes recent advances in developing a new, novel interference Linnik microscope system and presents images and data of live biological samples. The specially designed optical system enables instantaneous 4-dimensional video measurements of dynamic motions within and among live cells without the need for contrast agents. "Label-free" measurements of biological objects in reflection using harmless light levels are possible without the need for scanning and vibration isolation. This instrument utilizes a pixelated phase mask that enables simultaneous measurement of multiple interference patterns by taking advantage of the polarization properties of light, producing phase-image movies in real time at video rates to track dynamic motions and volumetric changes. Optical thickness data are derived from phase images after processing to remove the background surface shape, to quantify changes in cell position and volume. Data from a number of different pond organisms will be presented, as will measurements of human breast cancer cells with the addition of various agents that break down the cells. These data highlight examples of the image processing involved and the monitoring of different biological processes.

  9. Hyperspectral image representation and processing with binary partition trees.

    PubMed

    Valero, Silvia; Salembier, Philippe; Chanussot, Jocelyn

    2013-04-01

    The optimal exploitation of the information provided by hyperspectral images requires the development of advanced image-processing tools. This paper proposes the construction and the processing of a new region-based hierarchical hyperspectral image representation relying on the binary partition tree (BPT). This hierarchical region-based representation can be interpreted as a set of hierarchical regions stored in a tree structure. Hence, the BPT succeeds in presenting: 1) the decomposition of the image in terms of coherent regions, and 2) the inclusion relations of the regions in the scene. Based on region-merging techniques, the BPT construction is investigated by studying the hyperspectral region models and the associated similarity metrics. Once the BPT is constructed, the fixed tree structure allows implementing efficient and advanced application-dependent techniques on it. The application-dependent processing of the BPT is generally implemented through a specific pruning of the tree. In this paper, a pruning strategy is proposed and discussed in a classification context. Experimental results on various hyperspectral data sets demonstrate the value and good performance of the BPT representation.
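
    A toy version of BPT construction by iterative region merging, on a one-dimensional strip of six "hyperspectral" pixels, with Euclidean distance between region mean spectra as the similarity metric (the paper studies much richer region models and metrics):

```python
import numpy as np

# Toy "hyperspectral" line of 6 pixels with 3 bands: two clear spectral groups.
pixels = np.array([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [1.0, 0.1, 0.1],
                   [0.0, 1.0, 1.0], [0.1, 0.9, 1.0], [0.0, 1.0, 0.9]])

# Each region: (member pixel indices, mean spectrum). Start one region per pixel.
regions = [([i], pixels[i].copy()) for i in range(len(pixels))]
merges = []  # records the binary partition tree, bottom-up

while len(regions) > 1:
    # Most similar pair of *adjacent* regions (Euclidean distance of mean spectra).
    dists = [np.linalg.norm(regions[i][1] - regions[i + 1][1])
             for i in range(len(regions) - 1)]
    i = int(np.argmin(dists))
    a, b = regions[i], regions[i + 1]
    members = a[0] + b[0]
    mean = (a[1] * len(a[0]) + b[1] * len(b[0])) / len(members)
    merges.append((a[0], b[0]))              # tree node: children of the merge
    regions[i:i + 2] = [(members, mean)]
```

    The last entries of `merges` are the top of the tree; pruning it at different depths yields segmentations at different granularities, which is the mechanism the paper exploits for classification.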

  10. Image processing for safety assessment in civil engineering.

    PubMed

    Ferrer, Belen; Pomares, Juan C; Irles, Ramon; Espinosa, Julian; Mas, David

    2013-06-20

    Behavior analysis of construction safety systems is of fundamental importance to avoid accidental injuries. Traditionally, measurements of dynamic actions in civil engineering have been done through accelerometers, but high-speed cameras and image processing techniques can play an important role in this area. Here, we propose using morphological image filtering and Hough transform on high-speed video sequence as tools for dynamic measurements on that field. The presented method is applied to obtain the trajectory and acceleration of a cylindrical ballast falling from a building and trapped by a thread net. Results show that safety recommendations given in construction codes can be potentially dangerous for workers.
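
    The Hough transform used for trajectory measurement can be sketched with a standard rho-theta accumulator; here synthetic edge points on a vertical line stand in for the detected ballast silhouette (this is an illustration of the technique, not the authors' pipeline):

```python
import numpy as np

# Synthetic binary frame: edge points of a falling object lie on the line x = 20.
pts = np.array([(20, y) for y in range(5, 60, 3)])  # (x, y) pixel coordinates

# Standard rho-theta Hough accumulator: rho = x*cos(theta) + y*sin(theta).
thetas = np.deg2rad(np.arange(0, 180))     # one column per degree
diag = 100                                  # max |rho| for this image size
acc = np.zeros((2 * diag, len(thetas)), dtype=int)
for x, y in pts:
    rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
    acc[rhos, np.arange(len(thetas))] += 1  # each point votes in every column

rho_idx, th_idx = np.unravel_index(acc.argmax(), acc.shape)
rho, theta_deg = rho_idx - diag, th_idx     # strongest line: theta = 0, rho = 20
```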

  11. Lunar and Planetary Science XXXV: Image Processing and Earth Observations

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The titles in this section include: 1) Expansion in Geographic Information Services for PIGWAD; 2) Modernization of the Integrated Software for Imagers and Spectrometers; 3) Science-based Region-of-Interest Image Compression; 4) Topographic Analysis with a Stereo Matching Tool Kit; 5) Central Avra Valley Storage and Recovery Project (CAVSARP) Site, Tucson, Arizona: Floodwater and Soil Moisture Investigations with Extraterrestrial Applications; 6) ASE Floodwater Classifier Development for EO-1 HYPERION Imagery; 7) Autonomous Sciencecraft Experiment (ASE) Operations on EO-1 in 2004; 8) Autonomous Vegetation Cover Scene Classification of EO-1 Hyperion Hyperspectral Data; 9) Long-Term Continental Areal Reduction Produced by Tectonic Processes.

  12. Application of digital image processing techniques to astronomical imagery, 1979

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.

    1979-01-01

    Several areas of applications of image processing to astronomy were identified and discussed. These areas include: (1) deconvolution for atmospheric seeing compensation; a comparison between maximum entropy and conventional Wiener algorithms; (2) polarization in galaxies from photographic plates; (3) time changes in M87 and methods of displaying these changes; (4) comparing emission line images in planetary nebulae; and (5) log intensity, hue saturation intensity, and principal component color enhancements of M82. Examples are presented of these techniques applied to a variety of objects.

  13. Slide Set: reproducible image analysis and batch processing with ImageJ

    PubMed Central

    Nanes, Benjamin A.

    2015-01-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets that are common in biology. This paper introduces Slide Set, a framework for reproducible image analysis and batch processing with ImageJ. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution. PMID:26554504

  14. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    PubMed

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.

  15. Amateur Image Pipeline Processing using Python plus PyRAF

    NASA Astrophysics Data System (ADS)

    Green, Wayne

    2012-05-01

    A template pipeline spanning observation planning to publication is offered as a basis for establishing a long-term observing program. The data reduction pipeline encapsulates all policy and procedures, providing an accountable framework for data analysis and a teaching framework for IRAF. This paper introduces the technical details of a complete pipeline processing environment using Python, PyRAF and a few other languages. The pipeline encapsulates all processing decisions within an auditable framework. The framework quickly handles the heavy lifting of image processing. It also serves as an excellent teaching environment for astronomical data management and IRAF reduction decisions.

  16. The Accurate Signal Model and Imaging Processing in Geosynchronous SAR

    NASA Astrophysics Data System (ADS)

    Hu, Cheng

    With the development of synthetic aperture radar (SAR) applications, the disadvantages of low earth orbit (LEO) SAR have become more and more apparent. Increasing the orbit altitude can shorten the revisit time and enlarge the coverage area of a single look, and thereby satisfy application requirements. The concept of a geosynchronous earth orbit (GEO) SAR system was first presented and discussed in depth by K. Tomiyasu and other researchers. A GEO SAR, with its fine temporal resolution, would overcome the limitations of current imaging systems, allowing dense interpretation of transient phenomena, as in GPS time-series analysis, with a spatial density several orders of magnitude finer. Until now, the literature on GEO SAR has mainly focused on system parameter design and application requirements; signal characteristics, resolution calculation and imaging algorithms are nearly absent from it. In LEO SAR, signal model analysis generally adopts the 'Stop-and-Go' assumption, and this assumption satisfies the imaging requirements of present advanced SAR systems such as TerraSAR, Radarsat-2 and so on. However, because of the long propagation distance and non-negligible earth rotation, the 'Stop-and-Go' assumption no longer holds in GEO SAR and would cause large propagation-distance errors, which affect image formation. Furthermore, the long propagation distance results in a long synthetic aperture time, up to hundreds of seconds, so the linear-trajectory model of LEO SAR imaging fails in GEO imaging, and a new imaging model needs to be proposed for GEO SAR imaging processing. In this paper, considering the relative motion between satellite and earth during the signal propagation time, an accurate analysis method for the propagation slant range is first presented. Furthermore, the difference between the accurate analysis method and the 'Stop-and-Go' assumption is analytically obtained. Meanwhile, based on the derived

  17. Fast Implementation of Matched Filter Based Automatic Alignment Image Processing

    SciTech Connect

    Awwal, A S; Rice, K; Taha, T

    2008-04-02

    Video images of laser beams imprinted with distinguishable features are used for alignment of 192 laser beams at the National Ignition Facility (NIF). Algorithms designed to determine the position of these beams enable the control system to perform the task of alignment. Centroiding is a common approach used for determining the position of beams. However, real-world beam images suffer from intensity fluctuations or other distortions which make such an approach susceptible to higher position-measurement variability. Matched filtering used for identifying the beam position results in greater stability of position measurement compared to that obtained using the centroiding technique. However, this gain is achieved at the expense of extra processing time required for each beam image. In this work we explore the possibility of using a field programmable gate array (FPGA) to speed up these computations. The results indicate a performance improvement by a factor of 20 using the FPGA relative to a 3 GHz Pentium 4 processor.
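
    The matched-filter position estimate amounts to cross-correlating each frame with a beam template and locating the correlation peak, which is cheap to do in the Fourier domain. This NumPy sketch illustrates the principle on synthetic data; it is not the NIF or FPGA implementation, and the beam shape and noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64

def beam(cx, cy):
    """Synthetic Gaussian beam profile centred at (cx, cy)."""
    y, x = np.mgrid[0:N, 0:N]
    return np.exp(-((x - cx)**2 + (y - cy)**2) / 18.0)

template = beam(N // 2, N // 2)                        # reference beam shape
frame = beam(41, 23) + rng.normal(0, 0.2, (N, N))      # noisy frame, beam at (41, 23)

# Matched filtering = circular cross-correlation with the template via the FFT.
corr = np.fft.ifft2(np.fft.fft2(frame) * np.conj(np.fft.fft2(template))).real
dy, dx = np.unravel_index(corr.argmax(), corr.shape)
# The peak gives the shift relative to the template centre (modulo the image size).
est_x = (N // 2 + dx) % N
est_y = (N // 2 + dy) % N
```

    Because the correlation integrates over the whole beam footprint, the peak location is far less sensitive to intensity fluctuations than a centroid, at the cost of the FFTs — the computation the FPGA accelerates.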

  18. Image processing and analysis using neural networks for optometry area

    NASA Astrophysics Data System (ADS)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack (HS) technique, in order to extract information for formulating a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is to be carried out using an Artificial Intelligence system based on Neural Nets, Fuzzy Logic and Classifier Combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors that is based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under exam from the same image used to detect refraction errors.

  19. Fast CT-CT fluoroscopy registration with respiratory motion compensation for image-guided lung intervention

    NASA Astrophysics Data System (ADS)

    Su, Po; Xue, Zhong; Lu, Kongkuo; Yang, Jianhua; Wong, Stephen T.

    2012-02-01

    CT-fluoroscopy (CTF) is an efficient imaging method for guiding percutaneous lung interventions such as biopsy. During a CTF-guided biopsy procedure, four to ten axial sectional images are captured in a very short time to provide nearly real-time feedback to physicians, so that they can adjust the needle as it is advanced toward the target lesion. Although popular in clinics, this traditional CTF-guided intervention procedure may require frequent scans and cause unnecessary radiation exposure to clinicians and patients. In addition, CTF generates only a limited number of image slices and thus provides limited anatomical information; it also responds poorly to respiratory motion and captures only narrow local anatomical dynamics. To better utilize CTF guidance, we propose a fast CT-CTF registration algorithm with respiratory motion estimation for image-guided lung intervention under electromagnetic (EM) guidance. From the pre-procedural exhale and inhale CT scans, a series of CT images of the same patient at different respiratory phases can be estimated. Then, once a CTF image is captured during the intervention, our algorithm picks the best respiratory phase-matched 3D CT image and performs a fast deformable registration to warp the 3D CT toward the CTF. The new 3D CT image can be used to guide the intervention by superimposing the EM-guided needle location on it. Compared to traditional repetitive CTF guidance, the registered CT integrates both 3D volumetric patient data and nearly real-time local anatomy for more effective and efficient guidance. In this new system, CTF serves as a nearly real-time sensor to overcome the discrepancies between the static pre-procedural CT and the patient's anatomy, providing global guidance that may be supplemented with EM tracking while reducing the number of CTF scans needed.
In the experiments, the comparative results showed that our fast CT-CTF algorithm can achieve better registration
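The phase-selection step of such a pipeline can be sketched with a simple similarity search over candidate phase volumes. This is a minimal illustration, not the authors' method: normalized cross-correlation (NCC) is assumed here as the similarity metric, and the subsequent deformable registration is omitted:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-shape images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def best_phase_match(ctf_slice, phase_volumes, slice_index):
    """Index of the respiratory-phase CT whose slice best matches the CTF image."""
    scores = [ncc(ctf_slice, vol[slice_index]) for vol in phase_volumes]
    return int(np.argmax(scores))

# Toy "phase volumes": one random volume shifted to mimic respiratory motion.
rng = np.random.default_rng(0)
base = rng.random((5, 32, 32))
phases = [np.roll(base, s, axis=1) for s in (0, 3, 6)]
ctf = phases[1][2] + 0.05 * rng.standard_normal((32, 32))  # noisy CTF slice at phase 1
best = best_phase_match(ctf, phases, 2)
```

The selected volume would then be warped toward the CTF slice by the fast deformable registration the abstract describes.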

  20. Tracker: Image-Processing and Object-Tracking System Developed

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Theodore W.

    1999-01-01

    Tracker is an object-tracking and image-processing program designed and developed at the NASA Lewis Research Center to help with the analysis of images generated by microgravity combustion and fluid physics experiments. Experiments are often recorded on film or videotape for later analysis. Tracker automates the process of examining each frame of the recorded experiment, performing image-processing operations to bring out the desired detail, and recording the positions of the objects of interest. It can load sequences of images from disk files or acquire images (via a frame grabber) from film transports, videotape, laser disks, or a live camera. Tracker controls the image source to automatically advance to the next frame. It can employ a large array of image-processing operations to enhance the detail of the acquired images and can analyze an arbitrarily large number of objects simultaneously. Several different tracking algorithms are available, including conventional threshold- and correlation-based techniques, and more esoteric procedures such as "snake" tracking and automated recognition of character data in the image. The Tracker software was written to be operated by researchers, so every attempt was made to make it as user-friendly and self-explanatory as possible. Tracker is used by most of the microgravity combustion and fluid physics experiments performed by Lewis, and by visiting researchers. This includes experiments performed on the space shuttles, Mir, sounding rockets, zero-g research airplanes, drop towers, and ground-based laboratories. The software automates the analysis of the flame's or liquid's physical parameters such as position, velocity, acceleration, size, shape, intensity characteristics, color, and centroid, as well as a number of other measurements. It can perform these operations on multiple objects simultaneously. Another key feature of Tracker is that it performs optical character recognition (OCR).
This feature is useful in

  1. Digital Image Processing Applied To Quality Assurance In Mineral Industry

    NASA Astrophysics Data System (ADS)

    Hamrouni, Zouheir; Ayache, Alain; Krey, Charlie J.

    1989-03-01

    In this paper, we present an application of vision in the domain of quality assurance in the talc mineral industry. Using image processing and computer vision means, the proposed real-time whiteness sensor system is intended: - to inspect the whiteness of the ground product, - to manage the mixing of primary talcs before grinding, in order to obtain a final product with a predetermined whiteness. The system uses the robotic CCD microcamera MICAM (designed by our laboratory and presently manufactured), a microcomputer system based on the Motorola 68020, and real-time image processing boards. It has the following industrial specifications: - high reliability; - whiteness determined with 0.3% precision on a scale of 25 levels. Because of the expected precision, we had to study carefully the lighting system, the type of image sensor and the associated electronics. The first software developed measures the whiteness of talcum powder; we then devised original algorithms to assess the whiteness of rough talc, taking texture and shadows into account. The processing times of these algorithms are fully compatible with industrial rates. This system can be applied to other domains where a high-precision reflectance sensor is needed: industry of paper, paints, ...

  2. [Color processing of ultrasonographic images in extracorporeal lithotripsy].

    PubMed

    Lardennois, B; Ziade, A; Walter, K

    1991-02-01

    A number of technical difficulties are encountered in the ultrasonographic detection of renal stones which unfortunately limit its performance. The margin of firing error in extracorporeal shock-wave lithotripsy (ESWL) must be reduced to a minimum. The role of ultrasonographic monitoring during lithotripsy is also essential: continuous control of the focusing of the shock-wave beam and assessment of the quality of fragmentation. The authors propose to improve ultrasonographic imaging in ESWL by means of intraoperative colour processing of the stone. Each shot must be directed to its target with an economy of vision, avoiding excessive fatigue. The principle of the technique consists of digitization of the ultrasound video images using a Macintosh Mac 2 computer. The Graphis Paint II program is interfaced directly with the Quick Capture card and recovers the images on its work surface in real time. The program is then able to attribute to each of the 256 shades of grey any one of the 16.6 million colours of the Macintosh universe, with specific intensity and saturation. During fragmentation, using the principle of a palette, the stone changes colour from green to red, indicating complete fragmentation. A Color Space card converts the resulting digital image into an analogue video source which is visualized on the monitor. It can be superimposed and/or juxtaposed with the source image by means of a multi-standard mixing table. Colour processing of ultrasonographic images in extracorporeal shock-wave lithotripsy allows better visualization of the stones, allows better follow-up of fragmentation, and allows the shock-wave treatment to be stopped earlier. It increases the stone-free rate at 6 months. This configuration could eventually be integrated into the ultrasound apparatus itself.
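The palette idea described above amounts to passing each grey level through a colour lookup table (LUT). The sketch below builds a hypothetical green-to-red LUT and applies it to an 8-bit image; the actual colour assignments used in the paper are not specified, so this ramp is an assumption:

```python
import numpy as np

def fragmentation_palette():
    """Illustrative 256-entry LUT ramping from green (low) to red (high).
    The paper's real palette is not published; this ramp is an assumption."""
    t = np.linspace(0.0, 1.0, 256)
    lut = np.zeros((256, 3))
    lut[:, 0] = t        # red channel rises with echo intensity
    lut[:, 1] = 1.0 - t  # green channel falls
    return lut

def colorize(gray8, lut):
    """Map each 8-bit grey level through the LUT, yielding an RGB image."""
    return lut[gray8]

frame = np.array([[0, 128], [255, 64]], dtype=np.uint8)
rgb = colorize(frame, fragmentation_palette())
```

Because the LUT is a plain array lookup, the recolouring runs at video rate, which is what makes the real-time superimposition described above feasible.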

  3. Assessment of banana fruit maturity by image processing technique.

    PubMed

    Surya Prabha, D; Satheesh Kumar, J

    2015-03-01

    Maturity stage of fresh banana fruit is an important factor that affects fruit quality during ripening and marketability after ripening. The ability to identify the maturity of fresh banana fruit is a great support for farmers in optimizing the harvesting phase, helping to avoid harvesting either under-matured or over-matured bananas. This study attempted to use an image processing technique to detect the maturity stage of fresh banana fruit precisely from the color and size values of its images. A total of 120 images, 40 from each stage (under-mature, mature and over-mature), were used for algorithm development and accuracy prediction. The mean color intensity from the histogram, and the area, perimeter, major axis length and minor axis length from the size values, were extracted from the calibration images. Analysis of variance between maturity stages on these features indicated that the mean color intensity and area features were the more significant predictors of banana fruit maturity. Hence, two classifier algorithms, a mean color intensity algorithm and an area algorithm, were developed and their accuracy in maturity detection assessed. The mean color intensity algorithm showed 99.1% accuracy in classifying banana fruit maturity. The area algorithm classified under-mature fruit with 85% accuracy. The maturity assessment technique proposed in this paper could therefore be used commercially to develop a field-based, fully automatic detection system enabling banana growers to decide on the right time of harvest. PMID:25745200
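A mean-color-intensity classifier of the kind described reduces to two thresholds on the image mean. The threshold values below are assumptions for illustration only; the paper does not publish its trained cut-offs:

```python
import numpy as np

# Assumed cut-offs for the sketch -- NOT the paper's calibrated values.
UNDER_T, OVER_T = 90.0, 160.0

def maturity_stage(image):
    """Classify a banana image by its mean color intensity
    (the structure of the paper's first classifier)."""
    m = float(np.mean(image))
    if m < UNDER_T:
        return "under-mature"
    if m > OVER_T:
        return "over-mature"
    return "mature"
```

In practice the thresholds would be fitted from the 120 calibration images, and the area feature would supply a second, independent vote.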

  4. Concurrent Image Processing Executive (CIPE). Volume 1: Design overview

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1990-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), intended to become the support system software for a prototype high-performance science analysis workstation, are described. The target machine for this software is a JPL/Caltech Mark 3fp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.

  5. Recent Advances in Techniques for Hyperspectral Image Processing

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; Benediktsson, Jon Atli; Boardman, Joseph W.; Brazile, Jason; Bruzzone, Lorenzo; Camps-Valls, Gustavo; Chanussot, Jocelyn; Fauvel, Mathieu; Gamba, Paolo; Gualtieri, Anthony; Marconcini, Mattia; Tilton, James C.; Trianni, Giovanna

    2009-01-01

    Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in less than 30 years from a sparse research tool into a commodity product available to a broad user community. Currently, there is a need for standardized data processing techniques able to take into account the special properties of hyperspectral data. In this paper, we provide a seminal view on recent advances in techniques for hyperspectral image processing. Our main focus is on the design of techniques able to deal with the high-dimensional nature of the data and to integrate the spatial and spectral information. Performance of the discussed techniques is evaluated in different analysis scenarios. To satisfy time-critical constraints in specific applications, we also develop efficient parallel implementations of some of the discussed algorithms. Combined, these parts provide an excellent snapshot of the state of the art in those areas, and offer a thoughtful perspective on future potentials and emerging challenges in the design of robust hyperspectral imaging algorithms.

  6. Application of ultrasound processed images in space: assessing diffuse affectations

    NASA Astrophysics Data System (ADS)

    Pérez-Poch, A.; Bru, C.; Nicolau, C.

    The purpose of this study was to evaluate diffuse affectations in the liver using texture image processing techniques. Ultrasound diagnostic equipment is the modality of choice for use in space environments, as it is free from hazardous effects on health. However, because highly trained radiologists are needed to assess the images, this imaging method is mainly applied to focal lesions rather than non-focal ones. We conducted a clinical study on 72 patients with different degrees of chronic hepatopathies and a control group of 18 individuals. All subjects' clinical reports and biopsy results were compared to the degree of affectation calculated by our computer system, thus validating the method. Full statistical results are given in the present paper, showing a good correlation (r=0.61) between the pathologist's report and the analysis of the heterogeneity of the processed liver images. This computer system for analyzing diffuse affectations may be used in situ or via telemedicine links to the ground.

  7. High Speed Data Processing for Imaging MS-Based Molecular Histology Using Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Jones, Emrys A.; van Zeijl, René J. M.; Andrén, Per E.; Deelder, André M.; Wolters, Lex; McDonnell, Liam A.

    2012-04-01

    Imaging MS enables the distributions of hundreds of biomolecular ions to be determined directly from tissue samples. The application of multivariate methods to identify pixels possessing correlated MS profiles is referred to as molecular histology, as tissues can be annotated on the basis of their MS profiles. Applying imaging MS-based molecular histology to larger tissue series, for clinical applications, requires significantly increased computational capacity in order to efficiently analyze the very large, highly dimensional datasets. Such datasets are well suited to processing on graphics processing units (GPUs), a very cost-effective solution for high-speed processing. Here we demonstrate up to 13× speed improvements for imaging MS-based molecular histology using off-the-shelf components, and demonstrate equivalence with CPU-based calculations. We then discuss how imaging MS investigations may be designed to fully exploit the high speed of GPUs.
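Grouping pixels with correlated MS profiles is, at its core, a clustering problem over per-pixel spectra. The sketch below uses a minimal k-means with deterministic farthest-point initialization; this is a stand-in for the multivariate methods the paper discusses, not their specific algorithm, and the toy "spectra" are synthetic:

```python
import numpy as np

def init_centers(X, k):
    """Greedy farthest-point initialization (deterministic)."""
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    return np.array(centers)

def kmeans(X, k, iters=20):
    """Minimal k-means: groups pixels whose MS profiles are similar."""
    centers = init_centers(X, k)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two synthetic "tissue types": 20 pixels each, 5 m/z channels.
rng = np.random.default_rng(1)
tissue_a = rng.normal(0.0, 0.1, (20, 5))
tissue_b = rng.normal(10.0, 0.1, (20, 5))
labels = kmeans(np.vstack([tissue_a, tissue_b]), 2)
```

The pairwise-distance step dominates the cost and is embarrassingly parallel over pixels, which is why this workload maps so well onto GPUs.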

  8. Seam tracking with texture based image processing for laser materials processing

    NASA Astrophysics Data System (ADS)

    Krämer, S.; Fiedler, W.; Drenker, A.; Abels, P.

    2014-02-01

    This presentation deals with a camera-based seam tracking system for laser materials processing. A digital high-speed camera records the interaction point and the illuminated workpiece surface; the camera system is coaxially integrated into the laser beam path. The aim is to observe the interaction point and the joint gap in one image for closed-loop control of the welding process. For the joint gap observation in particular, a new image processing method is used. The basic idea is to detect a difference between the textures of the surfaces of the two workpieces to be welded together, instead of looking for the nearly invisible narrow line imaged by the joint gap. The texture-based analysis of the workpiece surface is more robust and less affected by varying illumination conditions than conventional grey-scale image processing. In some cases this image processing technique makes true zero-gap seam tracking possible. In short, the economic benefit is simultaneous laser processing and seam tracking for self-calibrating laser welding applications, without special seam preparation for tracking.
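The texture-difference idea can be illustrated with a deliberately crude statistic: if the two workpieces have different surface roughness, the per-column standard deviation of a horizontal image strip jumps at the joint. This is a simplified sketch, not the authors' texture analysis:

```python
import numpy as np

def seam_column(strip):
    """Locate the joint as the column where a simple texture statistic
    (per-column standard deviation) changes most."""
    tex = strip.std(axis=0)        # texture proxy for each column
    jump = np.abs(np.diff(tex))    # the largest change marks the texture boundary
    return int(np.argmax(jump)) + 1

# Two synthetic work pieces with different surface textures, butted at column 16.
rng = np.random.default_rng(1)
strip = np.hstack([0.05 * rng.standard_normal((64, 16)),
                   0.50 * rng.standard_normal((64, 16))])
```

Because the statistic aggregates over many pixels per column, it tolerates illumination changes better than searching for the single dark line of the gap itself.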

  9. Automatic framework for extraction and characterization of wetting front propagation using tomographic image sequences of water infiltrated soils.

    PubMed

    Vasquez, Dionicio; Scharcanski, Jacob; Wong, Alexander

    2015-01-01

    This paper presents a new automatic framework for extracting and characterizing the dynamic shape of the 3D wetting front and its propagation, based on a sequence of tomographic images acquired as water (moisture) infiltrates unsaturated soils. To the best of the authors' knowledge, the shape of the 3D wetting front and its propagation over time has not previously been produced as a whole by methods in the existing literature. The proposed automatic framework is composed of two integrated modules: i) extraction of the 3D wetting front, and ii) characterization and description of the 3D wetting front to obtain important information about the infiltration process. The 3D wetting front surface is segmented from the input 3D CT imagery via a 3D stochastic region merging strategy using quadric-regressed bilateral space-scale representations. Based on the 3D segmentation results, the normal directions at local curvature maxima of the wetting front surface are computed for 3D images of soil moisture; its propagation is analyzed along the local directions at sites of maximal water adsorption, and described using histograms of curvature changes over time in response to sample saturation. These curvature-change descriptors provide indirect measurements of moisture infiltration and soil saturation. Results obtained with field tomography equipment designed for soil studies are encouraging, and suggest that the proposed automatic framework can be applied to estimate the infiltration of water in soils in 3D and over time. PMID:25602498

  10. Two-dimensional signal processing with application to image restoration

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1974-01-01

    A recursive technique for modeling and estimating a two-dimensional signal contaminated by noise is presented. A two-dimensional signal is assumed to be an undistorted picture, where the noise introduces the distortion. Both the signal and the noise are assumed to be wide-sense stationary processes with known statistics. Thus, to estimate the two-dimensional signal is to enhance the picture. The picture representing the two-dimensional signal is converted to one dimension by scanning the image horizontally one line at a time. The scanner output becomes a nonstationary random process due to the periodic nature of the scanner operation. Procedures to obtain a dynamical model corresponding to the autocorrelation function of the scanner output are derived. Utilizing the model, a discrete Kalman estimator is designed to enhance the image.
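The recursive estimator described above can be illustrated in its simplest scalar form: a Kalman filter running along one scan line, assuming a first-order autoregressive signal model with known statistics. This is a sketch of the idea, not the paper's full two-dimensional derivation:

```python
import numpy as np

def kalman_scanline(z, phi=0.95, q=0.1, r=1.0):
    """Scalar Kalman filter along one scan line.
    Model: x[k] = phi*x[k-1] + w_k, w ~ N(0, q);  z[k] = x[k] + v_k, v ~ N(0, r)."""
    xhat, p = 0.0, q / (1.0 - phi ** 2)   # start at the stationary prior
    out = np.empty(len(z))
    for k, zk in enumerate(z):
        xhat, p = phi * xhat, phi ** 2 * p + q          # predict
        g = p / (p + r)                                 # Kalman gain
        xhat, p = xhat + g * (zk - xhat), (1 - g) * p   # update
        out[k] = xhat
    return out

# Simulate a stationary AR(1) "image line" and noisy scanner measurements.
rng = np.random.default_rng(0)
n, phi, q, r = 2000, 0.95, 0.1, 1.0
x = np.zeros(n)
for k in range(1, n):
    x[k] = phi * x[k - 1] + rng.normal(0, np.sqrt(q))
z = x + rng.normal(0, np.sqrt(r), n)
est = kalman_scanline(z, phi, q, r)
```

Because the filter matches the signal model, the estimate's mean squared error falls well below the raw measurement noise, which is the enhancement effect the abstract describes.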

  11. SHORT COMMUNICATION: An image processing approach to calibration of hydrometers

    NASA Astrophysics Data System (ADS)

    Lorefice, S.; Malengo, A.

    2004-06-01

    The usual method adopted for multipoint calibration of glass hydrometers is based on measuring the buoyancy by hydrostatic weighing when the hydrometer is plunged in a reference liquid up to the scale mark to be calibrated. The authors propose an image processing approach to align the relevant scale mark with the reference liquid surface level. The method uses image analysis with a data processing technique and takes the perspective error into account. For this purpose a CCD camera with a pixel matrix of 604H × 576V and a lens of 16 mm focal length were used. High accuracy in the hydrometer reading was obtained, as the resulting reading uncertainty was lower than 0.02 mm, about a fifth of the usual figure for a visual reading made by an operator.

  12. Ground control requirements for precision processing of ERTS images

    USGS Publications Warehouse

    Burger, Thomas C.

    1973-01-01

    With the successful flight of the ERTS-1 satellite, orbital height images are available for precision processing into products such as 1:1,000,000-scale photomaps and enlargements up to 1:250,000 scale. In order to maintain positional error below 100 meters, control points for the precision processing must be carefully selected, clearly definitive on photos in both X and Y. Coordinates of selected control points measured on existing 7½- and 15-minute standard maps provide sufficient accuracy for any space imaging system thus far defined. This procedure references the points to accepted horizontal and vertical datums. Maps as small as 1:250,000 scale can be used as source material for coordinates, but to maintain the desired accuracy, maps of 1:100,000 and larger scale should be used when available.

  13. Reconstruction of mechanically recorded sound by image processing

    SciTech Connect

    Fadeyev, Vitaliy; Haber, Carl

    2003-03-26

    Audio information stored in the undulations of grooves in a medium such as a phonograph record may be reconstructed, with no or minimal contact, by measuring the groove shape using precision metrology methods and digital image processing. The effects of damage, wear, and contamination may be compensated, in many cases, through image processing and analysis methods. The speed and data handling capacity of available computing hardware make this approach practical. Various aspects of this approach are discussed. A feasibility test is reported which used a general purpose optical metrology system to study a 50 year old 78 r.p.m. phonograph record. Comparisons are presented with stylus playback of the record and with a digitally re-mastered version of the original magnetic recording. A more extensive implementation of this approach, with dedicated hardware and software, is considered.

  14. Crystallographic Image Processing Software for Scanning Probe Microscopists

    NASA Astrophysics Data System (ADS)

    Plachinda, Pavel; Moon, Bill; Moeck, Peter

    2010-03-01

    Following the common practice of structural electron crystallography, scanning probe microscopy (SPM) images can be processed "crystallographically" [1,2]. An estimate of the point spread function of the SPM can be obtained and subsequently its influence removed from the images. Also a difference Fourier synthesis can be calculated in order to enhance the visibility of structural defects. We are currently in the process of developing dedicated PC-based software for the wider SPM community. [1] P. Moeck, B. Moon Jr., M. Abdel-Hafiez, and M. Hietschold, Proc. NSTI 2009, Houston, May 3-7, 2009, Vol. I (2009) 314-317 (ISBN: 978-1-4398-1782-7). [2] P. Moeck, M. Toader, M. Abdel-Hafiez, and M. Hietschold, Proc. 2009 International Conference on Frontiers of Characterization and Metrology for Nanoelectronics, May 11-14, 2009, Albany, New York, Best Paper Award.

  15. Interpretation of Medical Imaging Data with a Mobile Application: A Mobile Digital Imaging Processing Environment

    PubMed Central

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J.; Ullmann, Jeremy F. P.; Janke, Andrew L.

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data, in combination with a mobile visualization tool, can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objective of the system is to (1) automate direct data tiling, conversion and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display multi-level 3D images in real-world coordinates. In addition, M-DIP works on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic middle level realizing user interpretation for direct querying and communication. This imaging software can display biological imaging data at multiple zoom levels and increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository accessible from any network environment, such as a portable mobile or tablet device. In addition, this system, in combination with mobile applications, establishes a visualization tool in the neuroinformatics field to speed interpretation services. PMID:23847587

  16. Automatic Denoising and Unmixing in Hyperspectral Image Processing

    NASA Astrophysics Data System (ADS)

    Peng, Honghong

    This thesis addresses two important aspects of hyperspectral image processing: automatic hyperspectral image denoising and unmixing. The first part of this thesis is devoted to a novel automatic optimized vector bilateral filter denoising algorithm, while the remainder concerns nonnegative matrix factorization with deterministic annealing for unsupervised unmixing of remote sensing hyperspectral images. The need for automatic hyperspectral image processing has been driven by the development of potent hyperspectral systems, with hundreds of narrow contiguous bands spanning the visible to the long-wave infrared range of the electromagnetic spectrum. Due to the large volume of raw data generated by such sensors, automatic processing in the hyperspectral image processing chain is preferred to minimize human workload and achieve optimal results. Two of the most researched steps in such automation are hyperspectral image denoising, an important preprocessing step for almost all remote sensing tasks, and unsupervised unmixing, which decomposes the pixel spectra into a collection of endmember spectral signatures and their corresponding abundance fractions. Two new methodologies are introduced in this thesis to tackle the automatic processing problems described above. Vector bilateral filtering has been shown to provide a good tradeoff between noise removal and edge degradation when applied to multispectral/hyperspectral image denoising. It has also been demonstrated to provide dynamic range enhancement of bands that have impaired signal-to-noise ratios. Typical vector bilateral filtering usage does not employ parameters that have been determined to satisfy optimality criteria. This thesis therefore introduces an approach for selecting the parameters of a vector bilateral filter through an optimization procedure rather than by ad hoc means. The approach is based on posing the filtering problem as one of nonlinear estimation and minimizing the Stein
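For orientation, a bilateral filter weights each neighbour by both spatial distance and intensity distance, so it smooths noise while preserving edges. The sketch below is the scalar (grayscale) version; the thesis' vector variant extends the range term to full band vectors, and the parameters here are arbitrary, not the optimized values the thesis derives:

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=1.5, sigma_r=0.2):
    """Grayscale bilateral filter: each output pixel is a weighted mean of its
    neighbours, with weights combining spatial and intensity distance."""
    H, W = img.shape
    pad = np.pad(img, radius, mode="reflect")
    acc = np.zeros((H, W))
    norm = np.zeros((H, W))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nb = pad[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                       - (nb - img) ** 2 / (2 * sigma_r ** 2))
            acc += w * nb
            norm += w
    return acc / norm

# Noisy step edge: the noise is smoothed while the edge survives.
rng = np.random.default_rng(0)
step = np.hstack([np.zeros((32, 16)), np.ones((32, 16))])
noisy = step + 0.05 * rng.standard_normal((32, 32))
smooth = bilateral(noisy)
```

The choice of `sigma_s` and `sigma_r` governs the noise-removal/edge-degradation tradeoff, which is exactly the pair of parameters the thesis proposes to select by optimization rather than by hand.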

  17. PRONET services for distance learning in mammographic image processing.

    PubMed

    Costaridou, L; Panayiotakis, G; Efstratiou, C; Sakellaropoulos, P; Cavouras, D; Kalogeropoulou, C; Varaki, K; Giannakou, L; Dimopoulos, J

    1997-01-01

    The potential of telematics services is investigated with respect to learning needs of medical physicists and biomedical engineers. Telematics services are integrated into a system, the PRONET, which evolves around multimedia computer based courses and distance tutoring support. In addition, information database access and special interest group support are offered. System architecture is based on a component integration approach. The services are delivered in three modes: LAN, ISDN and Internet. Mammographic image processing is selected as an example content area. PMID:10179585

  18. Ultra high speed image processing techniques. [electronic packaging techniques

    NASA Technical Reports Server (NTRS)

    Anthony, T.; Hoeschele, D. F.; Connery, R.; Ehland, J.; Billings, J.

    1981-01-01

    Packaging techniques for ultra high speed image processing were developed. These techniques involve a signal feedthrough through LSI/VLSI sapphire substrates, which allows LSI/VLSI circuit substrates to be stacked in a 3-dimensional package with greatly reduced interconnect lengths between the LSI/VLSI circuits. The reduced parasitic capacitances result in higher LSI/VLSI computational speeds at significantly reduced power consumption levels.

  19. Analysis of a multiple reception model for processing images from the solid-state imaging camera

    NASA Technical Reports Server (NTRS)

    Yan, T.-Y.

    1991-01-01

    A detection model to identify the presence of the Galileo optical communications from an Earth-based Transmitter (GOPEX) signal by processing multiple signal receptions extracted from the camera images is described. The model decomposes a multi-signal reception camera image into a set of images so that the location of the pixel being illuminated is known a priori and the laser can illuminate only one pixel at each reception instance. Numerical results show that if effects on the pointing error due to atmospheric refraction can be controlled to between 20 and 30 microrad, the beam divergence of the GOPEX laser should be adjusted to between 30 and 40 microrad when the spacecraft is 30 million km from Earth. Furthermore, increasing the number of receptions for processing beyond 5 will not produce a significant detection probability advantage.

  20. Optical Fourier techniques for medical image processing and phase contrast imaging

    PubMed Central

    Yelleswarapu, Chandra S.; Kothapalli, Sri-Rajasekhar; Rao, D.V.G.L.N.

    2008-01-01

    This paper briefly reviews the basics of optical Fourier techniques (OFT) and applications for medical image processing as well as phase contrast imaging of live biological specimens. Enhancement of microcalcifications in a mammogram for early diagnosis of breast cancer is the main focus. Various spatial filtering techniques such as conventional 4f filtering using a spatial mask, photoinduced polarization rotation in photosensitive materials, Fourier holography, and nonlinear transmission characteristics of optical materials are discussed for processing mammograms. We also reviewed how the intensity dependent refractive index can be exploited as a phase filter for phase contrast imaging with a coherent source. This novel approach represents a significant advance in phase contrast microscopy. PMID:18458764
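The 4f spatial filtering idea has a direct digital analogue: transform the image, mask the Fourier plane, and transform back. The sketch below applies a high-pass mask that suppresses a smooth background while preserving a small bright detail (a crude stand-in for a microcalcification); the image, cutoff and spot location are all illustrative assumptions:

```python
import numpy as np

def fourier_highpass(img, cutoff):
    """Digital analogue of a 4f high-pass mask: zero out low spatial
    frequencies in the Fourier plane, enhancing small bright details."""
    F = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    y, x = np.indices(F.shape)
    mask = (y - cy) ** 2 + (x - cx) ** 2 > cutoff ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# Smooth low-frequency background plus one tiny bright spot: the filter
# removes the background and keeps the spot.
y, x = np.indices((64, 64))
img = 5.0 * np.cos(2 * np.pi * y / 64)
img[40, 17] += 1.0
out = fourier_highpass(img, cutoff=4)
```

In the optical implementations reviewed above, the mask is realized physically in the Fourier plane of the 4f system rather than as an array multiplication.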

  1. Algorithms for lineaments detection in processing of multispectral images

    NASA Astrophysics Data System (ADS)

    Borisova, D.; Jelev, G.; Atanassov, V.; Koprinkova-Hristova, Petia; Alexiev, K.

    2014-10-01

    Satellite remote sensing is a universal tool for investigating the different areas of the Earth and environmental sciences. Advances in the capabilities of optoelectronic devices, long tested in the laboratory and the field and mounted on board remote sensing platforms, further improve the ability of instruments to acquire information about the Earth and its resources at global, regional and local scales. With the advent of new high-spatial- and spectral-resolution satellite and aircraft imagery, new applications for large-scale mapping and monitoring become possible. Integration with Geographic Information Systems (GIS) allows synergistic processing of multi-source spatial and spectral data. Here we present the results of a joint project, DFNI I01/8, funded by the Bulgarian Science Fund, focused on algorithms for preprocessing and processing spectral data using correction methods and visual and automatic interpretation. The objects of this study are lineaments: linear features on the Earth's surface that are a sign of underlying geological structures. Geological lineaments usually appear in multispectral images as lines, edges or linear shapes resulting from color variations of the surface structures. The basic geometry of a line comprises orientation, length and curvature. The detection of geological lineaments is an important operation in the exploration for mineral deposits, the investigation of active fault patterns, the prospecting of water resources, the protection of people, etc. In this study an integrated approach to detecting lineaments is applied. It combines the visual interpretation of various geological and geographical indications in the multispectral satellite images, spatial analysis in GIS, and automatic processing of the multispectral images by Canny
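The automatic stage of lineament detection typically begins with an edge map. The sketch below computes a Sobel gradient-magnitude edge map, the first stage of a Canny-style detector (the non-maximum suppression and hysteresis stages of full Canny are omitted, and the test scene is synthetic):

```python
import numpy as np

def sobel_edges(img, thresh):
    """Gradient-magnitude edge map from Sobel kernels -- the first stage
    of a Canny-style lineament detector."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    H, W = img.shape
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    for dy in range(3):
        for dx in range(3):
            patch = pad[dy:dy + H, dx:dx + W]
            gx += kx[dy, dx] * patch
            gy += ky[dy, dx] * patch
    return np.hypot(gx, gy) > thresh

# A vertical brightness step -- a crude stand-in for a lineament --
# between columns 15 and 16 of a 32x32 scene.
scene = np.hstack([np.zeros((32, 16)), np.ones((32, 16))])
edges = sobel_edges(scene, thresh=2.0)
```

On real multispectral data, the resulting edge pixels would then be linked into line segments and characterized by the orientation, length and curvature attributes mentioned above.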

  2. Enhancing the Teaching of Digital Processing of Remote Sensing Image Course through Geospatial Web Processing Services

    NASA Astrophysics Data System (ADS)

    di, L.; Deng, M.

    2010-12-01

    Remote sensing (RS) is an essential method of collecting data for Earth science research. Huge amounts of remote sensing data, most in image form, have been acquired. Almost all geography departments in the world offer courses in digital processing of remote sensing images. Such courses emphasize how to digitally process large amounts of multi-source imagery to solve real-world problems. However, due to the diversity and complexity of RS images and the shortcomings of current data and processing infrastructure, obstacles to effectively teaching such courses remain. The major obstacles include 1) difficulties in finding, accessing, integrating and using massive RS images by students and educators, and 2) inadequate processing functions and computing facilities for students to freely explore the massive data. Recent developments in geospatial Web processing service systems, which make massive data, computing power, and processing capabilities available to average Internet users anywhere in the world, promise the removal of these obstacles. The GeoBrain system developed by CSISS is an example of such a system. All functions available in the GRASS open source GIS have been implemented as Web services in GeoBrain. Petabytes of remote sensing images in NASA data centers, the USGS Landsat data archive, and NOAA CLASS are transparently accessible and processable through GeoBrain. The GeoBrain system is operated on a high-performance cluster server with large disk storage and a fast Internet connection. All GeoBrain capabilities can be accessed from any Internet-connected Web browser. Dozens of universities have used GeoBrain as an ideal platform to support data-intensive remote sensing education. This presentation gives a specific example of using GeoBrain geoprocessing services to enhance the teaching of GGS 588, Digital Remote Sensing, taught at the Department of Geography and Geoinformation Science, George Mason University.
The course uses the textbook "Introductory

  3. Quantitative assessment of susceptibility weighted imaging processing methods

    PubMed Central

    Li, Ningzhi; Wang, Wen-Tung; Sati, Pascal; Pham, Dzung L.; Butman, John A.

    2013-01-01

    Purpose To evaluate different susceptibility weighted imaging (SWI) phase processing methods and parameter selections, thereby improving understanding of potential artifacts and facilitating the choice of methodology in clinical settings. Materials and Methods Two major phase processing methods, homodyne filtering and phase unwrapping followed by high-pass (HP) filtering, were investigated with various phase unwrapping approaches, filter sizes, and filter types. Magnitude and phase images were acquired from a healthy subject and brain injury patients on a 3T clinical Siemens MRI system. Results were evaluated based on image contrast-to-noise ratio and the presence of processing artifacts. Results When using a relatively small filter size (32 pixels for a 512 × 512 pixel matrix), all homodyne filtering methods were subject to phase errors, leading to 2% to 3% of the brain area being masked in lower and middle axial slices. All phase unwrapping-filtering/smoothing approaches demonstrated fewer phase errors and artifacts than the homodyne filtering approaches. For performing phase unwrapping, Fourier-based methods, although less accurate, were 2-4 orders of magnitude faster than the PRELUDE, Goldstein and quality-guided methods. Conclusion Although homodyne filtering approaches are faster and more straightforward, phase unwrapping followed by HP filtering performs more accurately in a wider variety of acquisition scenarios. PMID:24923594
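    The homodyne filtering the study compares can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a centered Hann window of width `filt_size` in k-space (the paper evaluates several filter sizes and types), and the function name is invented:

```python
import numpy as np

def homodyne_phase(complex_img, filt_size=32):
    """High-pass phase image via homodyne filtering: reference the
    complex image against a low-pass-filtered copy of itself so that
    only rapid (local) phase variations remain."""
    ny, nx = complex_img.shape
    k = np.fft.fftshift(np.fft.fft2(complex_img))
    win = np.zeros((ny, nx))
    cy, cx = ny // 2, nx // 2
    half = filt_size // 2
    win[cy - half:cy + half, cx - half:cx + half] = np.outer(
        np.hanning(filt_size), np.hanning(filt_size))
    lowpass = np.fft.ifft2(np.fft.ifftshift(k * win))
    # Multiplying by the conjugate subtracts the low-pass phase.
    return np.angle(complex_img * np.conj(lowpass))
```

A constant-phase input returns a (near-)zero high-pass phase map, which is the intended behavior: only phase structure finer than the filter window survives.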

  4. Laser doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing.

    PubMed

    He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P

    2013-09-18

    The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for laser Doppler blood flow (LDBF) imaging has been designed and tested. Achieving a space-efficient design across 64 × 64 pixels means that the standard processing electronics used off-chip cannot be implemented, so the analog signal processing at each pixel is a tailored design for LDBF signals, balancing signal-to-noise ratio against silicon area. This custom-made sensor offers key advantages over conventional sensors: the analog signal processing at the pixel level carries out signal normalization; AC amplification combined with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; a low-resource implementation of the digital processor enables on-chip processing; and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. Its measured optical performance is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.
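    For context, laser Doppler perfusion estimates are conventionally derived from the first moment of the AC power spectrum, normalized by the DC level (the normalization the pixel electronics perform in analog). The sketch below is a generic software version of that standard estimate, not the chip's algorithm; the function name is invented:

```python
import numpy as np

def flow_index(signal, fs):
    """First-moment laser Doppler flow estimate: perfusion is taken
    proportional to sum(f * P(f)) over the AC power spectrum,
    normalized by the squared DC (mean) intensity."""
    ac = signal - signal.mean()
    spec = np.abs(np.fft.rfft(ac)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    return float((freqs * spec).sum() / signal.mean() ** 2)
```

A modulated test signal at a higher Doppler frequency yields a proportionally larger index, which mirrors the modulated-light bench tests described in the abstract.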

  5. Laser Doppler Blood Flow Imaging Using a CMOS Imaging Sensor with On-Chip Signal Processing

    PubMed Central

    He, Diwei; Nguyen, Hoang C.; Hayes-Gill, Barrie R.; Zhu, Yiqun; Crowe, John A.; Gill, Cally; Clough, Geraldine F.; Morgan, Stephen P.

    2013-01-01

    The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for laser Doppler blood flow (LDBF) imaging has been designed and tested. Achieving a space-efficient design across 64 × 64 pixels means that the standard processing electronics used off-chip cannot be implemented, so the analog signal processing at each pixel is a tailored design for LDBF signals, balancing signal-to-noise ratio against silicon area. This custom-made sensor offers key advantages over conventional sensors: the analog signal processing at the pixel level carries out signal normalization; AC amplification combined with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; a low-resource implementation of the digital processor enables on-chip processing; and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. Its measured optical performance is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue. PMID:24051525

  6. Processing Earth Observing images with Ames Stereo Pipeline

    NASA Astrophysics Data System (ADS)

    Beyer, R. A.; Moratto, Z. M.; Alexandrov, O.; Fong, T.; Shean, D. E.; Smith, B. E.

    2013-12-01

    ICESat, with its GLAS instrument, provided valuable elevation measurements of glaciers. The loss of this spacecraft created demand for alternative elevation sources. In response, we have improved our Ames Stereo Pipeline (ASP) software (version 2.1+) to ingest imagery from Earth-orbiting satellites in addition to its support for planetary missions. This gives the open source community a free method to generate digital elevation models (DEMs) from DigitalGlobe stereo imagery, and from other cameras using RPC camera models. Here we present details of the software. ASP is a collection of utilities written in C++ and Python that implement stereogrammetry. It contains utilities to manipulate DEMs, project imagery, create KML image quad-trees, and perform simple 3D rendering; however, its primary application is the creation of DEMs. This is achieved by matching every pixel between the images of a stereo observation via a hierarchical coarse-to-fine template matching method. Matched pixels between images represent a single feature that is triangulated using each image's camera model. The collection of triangulated features forms a point cloud that is then grid-resampled to create a DEM. For ASP to match pixels/features between images, it requires a search range defined in pixel units. Total processing time is proportional to the area of the first image being matched multiplied by the area of the search range. An incorrect search range causes repeated false-positive matches at each level of the image pyramid and leads to excessive processing times with no valid DEM output. Our system therefore contains automatic methods for deducing the correct search range. In addition, we provide options for reducing the overall search range by applying affine epipolar rectification, a homography transform, or map projection against a prior existing low-resolution DEM.
Depending on the size of the images, parallax, and image
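    The template matching at the core of the pipeline described above can be illustrated with a brute-force normalized cross-correlation search. This is a single-level sketch for intuition only (ASP works hierarchically, coarse to fine, within a bounded search range); the function name is invented:

```python
import numpy as np

def ncc_match(image, tmpl):
    """Slide tmpl over image and return the (row, col) of the best
    normalized cross-correlation match plus its score in [-1, 1].
    This is the inner operation of template-based stereo matching."""
    th, tw = tmpl.shape
    t = tmpl - tmpl.mean()
    t_energy = (t ** 2).sum()
    best_score, best_pos = -2.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum() * t_energy)
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (wz * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

The cost of the double loop is exactly the "image area × search range area" scaling the abstract describes, which is why restricting the search range matters so much in practice.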

  7. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    NASA Astrophysics Data System (ADS)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the different characteristics of different memory types, an improved scheme of our method is developed that exploits shared memory on the GPU instead of global memory, further increasing efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
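    The serial baseline being accelerated is simple enough to state exactly. As a generic sketch (a NumPy stand-in for the CPU reference, not the paper's CUDA kernel), classical Laplacian sharpening subtracts the Laplacian response from the image so that edges are boosted:

```python
import numpy as np

# 4-neighbor Laplacian kernel.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_sharpen(img):
    """Classical Laplacian sharpening of a 2-D array with 8-bit range:
    out = clip(img - laplacian(img), 0, 255)."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    lap = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            lap += LAPLACIAN[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.clip(img - lap, 0, 255)
```

In the CUDA version each output pixel is independent, so the two nested offsets collapse into one thread per pixel; the shared-memory variant additionally stages each tile's halo so neighboring threads reuse loaded pixels instead of re-reading global memory.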

  8. Airborne Laser Scanning and Image Processing Techniques for Archaeological Prospection

    NASA Astrophysics Data System (ADS)

    Faltýnová, M.; Nový, P.

    2014-06-01

    Aerial photography was, for decades, an invaluable tool for archaeological prospection, in spite of the limitation of this method to deforested areas. The airborne laser scanning (ALS) method can nowadays be used to map complex areas and suitably complement earlier findings. This article describes visualization and image processing methods that can be applied to digital terrain models (DTMs) to highlight objects hidden in the landscape. Analysis of visualized DTMs makes it possible to understand landscape evolution, including the differentiation between natural processes and human interventions. Different visualization methods were applied to a case study area: a system of parallel tracks hidden in a forest and its surroundings, part of an old route called "Devil's Furrow" near the town of Sázava. The whole area around the well-known part of Devil's Furrow has not yet been prospected systematically. Airborne laser scanning data acquired by the Czech Office for Surveying, Mapping and Cadastre were used; the average density of the point cloud was approximately 1 point/m2. The goal of the project was to visualize even the smallest terrain discontinuities, e.g. tracks and erosion furrows, some of which were not wholly preserved. Generally, we were interested in objects that are not clearly visible in DTMs displayed in the form of shaded relief. Some of the typical visualization methods were tested (shaded relief, aspect and slope images). To obtain better results, we applied image processing methods that had been successfully used on aerial photographs or hyperspectral images in the past. Using different visualization techniques on one site allowed us to verify the natural character of the southern part of Devil's Furrow and to find formations hitherto hidden in the forests.
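    The shaded-relief visualization mentioned above follows a standard formula: illuminate the terrain's surface normals from a chosen sun azimuth and altitude. A minimal sketch (generic hillshade, not the article's software; grid spacing assumed to be 1) is:

```python
import numpy as np

def hillshade(dtm, azimuth_deg=315.0, altitude_deg=45.0):
    """Shaded-relief image of a DTM in [0, 1]: brightness of each cell
    under a distant light source at the given azimuth/altitude."""
    gy, gx = np.gradient(dtm)
    slope = np.arctan(np.hypot(gx, gy))
    aspect = np.arctan2(-gx, gy)
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)
```

Because shaded relief depends on one light direction, subtle linear features parallel to the illumination can vanish, which is exactly why the article combines it with aspect, slope and other image-processing visualizations.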

  9. SENTINEL-2 image quality and level 1 processing

    NASA Astrophysics Data System (ADS)

    Meygret, Aimé; Baillarin, Simon; Gascon, Ferran; Hillairet, Emmanuel; Dechoz, Cécile; Lacherade, Sophie; Martimort, Philippe; Spoto, François; Henry, Patrice; Duca, Riccardo

    2009-08-01

    In the framework of the Global Monitoring for Environment and Security (GMES) programme, the European Space Agency (ESA), in partnership with the European Commission (EC), is developing the SENTINEL-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The SENTINEL-2 mission is based on a twin-satellite configuration deployed in a polar sun-synchronous orbit and is designed to offer a unique combination of systematic global coverage with a wide field of view (290 km), high revisit frequency (5 days at the equator with two satellites), high spatial resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 bands in the visible and short-wave infrared spectrum). SENTINEL-2 will ensure data continuity with the SPOT and LANDSAT multispectral sensors while accounting for future service evolution. This paper presents the main geometric and radiometric image quality requirements for the mission. The strong multi-spectral and multi-temporal registration requirements constrain the stability of the platform and the ground processing, which will automatically refine the geometric physical model through correlation techniques. The geolocation of the images will benefit from a worldwide reference data set made of SENTINEL-2 data strips geolocated through a global space triangulation. This processing is detailed through the description of the Level 1C production, which will provide users with ortho-images of top-of-atmosphere reflectances. The huge amount of data (1.4 Tbit per orbit) is also a challenge for the ground processing, which will produce all acquired data at Level 1C. Finally, we discuss the different geometric (line of sight, focal plane cartography, ...) and radiometric (relative and absolute camera sensitivity) in-flight calibration methods that will take advantage of the on-board sun diffuser and ground targets to meet the stringent mission requirements.
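    Level 1C delivers top-of-atmosphere (TOA) reflectance rather than radiance. The textbook conversion between the two, shown here as a generic sketch (the operational SENTINEL-2 chain is considerably more involved), is rho = pi * L * d^2 / (ESUN * cos(theta_s)):

```python
import math

def toa_reflectance(radiance, esun, sun_zenith_deg, d_au=1.0):
    """Top-of-atmosphere reflectance from at-sensor radiance L,
    band solar irradiance ESUN, solar zenith angle theta_s and
    Sun-Earth distance d in astronomical units."""
    return (math.pi * radiance * d_au ** 2
            / (esun * math.cos(math.radians(sun_zenith_deg))))
```

Reflectance is dimensionless and largely independent of illumination geometry, which is what makes the multi-temporal comparisons the mission targets meaningful.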

  10. The Spectral Image Processing System (SIPS) - Interactive visualization and analysis of imaging spectrometer data

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.

    1993-01-01

    The Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, has developed a prototype interactive software system called the Spectral Image Processing System (SIPS) using IDL (the Interactive Data Language) on UNIX-based workstations. SIPS is designed to take advantage of the combination of high spectral resolution and spatial data presentation unique to imaging spectrometers. It streamlines analysis of these data by allowing scientists to rapidly interact with entire datasets. SIPS provides visualization tools for rapid exploratory analysis and numerical tools for quantitative modeling. The user interface is X-Windows-based, user friendly, and provides 'point and click' operation. SIPS is being used for multidisciplinary research concentrating on use of physically based analysis methods to enhance scientific results from imaging spectrometer data. The objective of this continuing effort is to develop operational techniques for quantitative analysis of imaging spectrometer data and to make them available to the scientific community prior to the launch of imaging spectrometer satellite systems such as the Earth Observing System (EOS) High Resolution Imaging Spectrometer (HIRIS).

  11. An adaptive image segmentation process for the classification of lung biopsy images

    NASA Astrophysics Data System (ADS)

    McKee, Daniel W.; Land, Walker H., Jr.; Zhukov, Tatyana; Song, Dansheng; Qian, Wei

    2006-03-01

    The purpose of this study was to develop a computer-based second-opinion diagnostic tool that could read microscope images of lung tissue and classify the tissue sample as normal or cancerous. This problem can be broken down into three areas: segmentation, feature extraction and measurement, and classification. We introduce a kernel-based extension of fuzzy c-means to provide a coarse initial segmentation, with heuristically based mechanisms to improve the accuracy of the segmentation. The segmented image is then processed to extract and quantify features. Finally, the measured features are used by a Support Vector Machine (SVM) to classify the tissue sample. The performance of this approach was tested using a database of 85 images collected at the Moffitt Cancer Center and Research Institute. These images represent a wide variety of normal lung tissue samples, as well as multiple types of lung cancer. When used with a subset of the data containing images from the normal and adenocarcinoma classes, we were able to correctly classify 78% of the images, with an ROC Az of 0.758.
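    The segmentation stage builds on fuzzy c-means. The sketch below shows only the standard FCM membership/centroid alternation, not the paper's kernel-based extension or its heuristics; the function name is invented:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    """Standard fuzzy c-means on an (n_samples, n_features) array:
    alternate between fuzzy memberships u and weighted centroids."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return u, centers
```

The kernel-based variant replaces the Euclidean distances `d` with distances induced by a kernel function, which lets the same alternation separate clusters that are not linearly separable in feature space.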

  12. Innovative Camera and Image Processing System to Characterize Cryospheric Changes

    NASA Astrophysics Data System (ADS)

    Schenk, A.; Csatho, B. M.; Nagarajan, S.

    2010-12-01

    The polar regions play an important role in Earth’s climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or constitute a stand-alone product. A multi-modal photogrammetric platform consists of one or more high-resolution commercial color cameras, GPS and an inertial navigation system, as well as an optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the IceBridge missions in Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a rate of about one frame per second. While digital images and videos have long been used for visual inspection, precise 3D measurements with low-cost commercial cameras require special photogrammetric treatment that has only recently become available. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric, that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed. Geo-referencing the images is
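    The interior and exterior orientation concepts above combine in the pinhole projection model. As a minimal sketch (generic photogrammetry, not the platform's software; the function name is invented), a ground point X maps to pixel coordinates via the exterior orientation (R, t) and the calibrated interior orientation K:

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection x ~ K (R X + t): K holds focal length and
    principal point (interior orientation); R, t express the camera
    pose (exterior orientation / geo-referencing)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]  # perspective division to pixel coordinates
```

Camera calibration estimates K (plus lens distortion terms omitted here); geo-referencing with GPS/INS supplies R and t per frame. Errors in either propagate directly into the triangulated 3D coordinates, which is why frequent recalibration of non-metric cameras matters.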

  13. Demineralization Depth Using QLF and a Novel Image Processing Software.

    PubMed

    Wu, Jun; Donly, Zachary R; Donly, Kevin J; Hackmyer, Steven

    2010-01-01

    Quantitative light-induced fluorescence (QLF) has been widely used to detect tooth demineralization, indicated by fluorescence loss with respect to surrounding sound enamel. The correlation between fluorescence loss and demineralization depth is not fully understood. The purpose of this project was to study this correlation in order to estimate demineralization depth. Extracted teeth were collected, and artificial caries-like lesions were created and imaged with QLF. Novel image processing software was developed to measure the largest percent fluorescence loss in the region of interest. All teeth were then sectioned and imaged by polarized light microscopy, and the largest depth of demineralization was measured with NIH ImageJ software. Linear regression was applied to analyze these data. The fitted model was Y = 0.32X + 0.17, where X was the percent loss of fluorescence and Y was the depth of demineralization; the correlation coefficient was 0.9696. The two-tailed t-test statistic for the coefficient was 7.93 (P = .0014), and the F statistic for the entire model was 62.86 (P = .0013). The results indicated a statistically significant linear correlation between the percent loss of fluorescence and the depth of enamel demineralization.
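    The least-squares fit behind a model of the form Y = aX + b can be reproduced in two lines. This is a generic sketch of the analysis step (function name invented, data illustrative), not the study's software:

```python
import numpy as np

def fit_depth_model(loss_pct, depth):
    """Least-squares line depth = a * loss + b, plus the Pearson
    correlation coefficient r between the two variables."""
    a, b = np.polyfit(loss_pct, depth, 1)
    r = np.corrcoef(loss_pct, depth)[0, 1]
    return a, b, r
```

Applied to exact data on the reported line, the fit recovers slope 0.32, intercept 0.17 and r = 1; on real measurements, r quantifies how tightly depth tracks fluorescence loss.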

  14. Demineralization Depth Using QLF and a Novel Image Processing Software

    PubMed Central

    Wu, Jun; Donly, Zachary R.; Donly, Kevin J.; Hackmyer, Steven

    2010-01-01

    Quantitative light-induced fluorescence (QLF) has been widely used to detect tooth demineralization, indicated by fluorescence loss with respect to surrounding sound enamel. The correlation between fluorescence loss and demineralization depth is not fully understood. The purpose of this project was to study this correlation in order to estimate demineralization depth. Extracted teeth were collected, and artificial caries-like lesions were created and imaged with QLF. Novel image processing software was developed to measure the largest percent fluorescence loss in the region of interest. All teeth were then sectioned and imaged by polarized light microscopy, and the largest depth of demineralization was measured with NIH ImageJ software. Linear regression was applied to analyze these data. The fitted model was Y = 0.32X + 0.17, where X was the percent loss of fluorescence and Y was the depth of demineralization; the correlation coefficient was 0.9696. The two-tailed t-test statistic for the coefficient was 7.93 (P = .0014), and the F statistic for the entire model was 62.86 (P = .0013). The results indicated a statistically significant linear correlation between the percent loss of fluorescence and the depth of enamel demineralization. PMID:20445755

  15. Ameliorating mammograms by using novel image processing algorithms

    NASA Astrophysics Data System (ADS)

    Pillai, A.; Kwartowitz, D.

    2014-03-01

    Mammography is one of the most important tools for the early detection of breast cancer, typically through detection of characteristic masses and/or microcalcifications. Digital mammography has become commonplace in recent years. High-quality mammogram images are large in size, providing high-resolution data. Estimates of the false negative rate for cancers in mammography are approximately 10%-30%. This may be due to observation error, but more frequently it is because the cancer is hidden by other dense tissue in the breast and, even after retrospective review of the mammogram, cannot be seen. In this study, we report on the results of novel image processing algorithms that enhance the images, providing decision support to reading physicians. Techniques such as Butterworth high-pass filtering and Gabor filters are applied to enhance the images, followed by segmentation of the region of interest (ROI). Subsequently, textural features are extracted from the ROI and used to classify the ROIs as either masses or non-masses. Among the statistical methods most used for the characterization of texture, the co-occurrence matrix makes it possible to determine the frequency of appearance of pixel pairs separated by a given distance at a given angle from the horizontal. This matrix contains a very large amount of complex information, so it is not used directly but through measurements known as texture indices, such as mean, variance, energy, contrast, correlation, normalized correlation and entropy.
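    The co-occurrence matrix and a few of the texture indices it supports can be sketched directly. This is a generic illustration (function names invented, one offset only; the study uses several indices over distance/angle combinations):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized grey-level co-occurrence matrix: entry (i, j) is the
    probability that a pixel with level i has a neighbor at offset
    (dx, dy) with level j. img holds integer levels < `levels`."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_indices(p):
    """Contrast, energy and entropy derived from a GLCM p."""
    i, j = np.indices(p.shape)
    contrast = float((p * (i - j) ** 2).sum())
    energy = float((p ** 2).sum())
    nz = p[p > 0]
    entropy = float(-(nz * np.log2(nz)).sum())
    return contrast, energy, entropy
```

A perfectly uniform region concentrates all probability in one diagonal cell, giving zero contrast, maximum energy and zero entropy; textured mass candidates move the mass of the matrix off the diagonal.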

  16. Image processing methods and architectures in diagnostic pathology.

    PubMed

    Bueno, Gloria; Déniz, Oscar; Salido, Jesús; Rojo, Marcial García

    2009-01-01

    Grid technology has enabled the clustering of, and efficient, secure access to and interaction among, a wide variety of geographically distributed resources, such as supercomputers, storage systems, data sources, instruments, and special devices and services. Its main applications include large-scale computational and data-intensive problems in science and engineering. This paper considers general grid structures and methodologies, for both software and hardware, in image analysis for virtual tissue-based diagnosis, with a focus on user-level middleware. The article describes the distributed programming system developed by the authors for virtual slide analysis in diagnostic pathology. The system supports the image analysis operations commonly performed in anatomical pathology, and it takes into account security aspects and specialized infrastructures with high-level services designed to meet application requirements. Grids are likely to have a deep impact on health-related applications, and therefore they seem suitable for tissue-based diagnosis as well. The implemented system is a joint application that mixes Web and Grid Service Architectures around a distributed architecture for image processing. It has proven to be a successful solution for analyzing a large, heterogeneous set of histological images on an architecture of massively parallel processors using message passing and non-shared memory.

  17. Automated Coronal Loop Identification Using Digital Image Processing Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Jong K.; Gary, G. Allen; Newman, Timothy S.

    2003-01-01

    The results of a master's thesis project studying computer algorithms for automatic identification of optically thin, 3-dimensional solar coronal loop centers from extreme ultraviolet and X-ray 2-dimensional images will be presented. These center splines are proxies of associated magnetic field lines. The project addresses a pattern recognition problem in which there are no unique shapes or edges and in which photon and detector noise heavily influence the images. The study explores extraction techniques using: (1) linear feature recognition of local patterns (related to the inertia-tensor concept), (2) parametric space via the Hough transform, and (3) topological adaptive contours (snakes) that constrain curvature and continuity, as possible candidates for digital loop detection schemes. We have developed synthesized images of coronal loops to test the various loop identification algorithms. Since the topology of these solar features is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information for the identification process. Results from both synthesized and solar images will be presented.
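    The Hough transform mentioned as technique (2) can be sketched for straight lines (loop segments would be handled piecewise or with a curve parameterization). Each foreground pixel votes for every line rho = x*cos(theta) + y*sin(theta) passing through it, and collinear structures appear as accumulator peaks. The function name is invented:

```python
import numpy as np

def hough_lines(binary, n_theta=180):
    """Accumulate votes in (rho, theta) space for a binary image.
    Returns the accumulator and the rho offset (index rho + diag)."""
    ys, xs = np.nonzero(binary)
    thetas = np.deg2rad(np.arange(n_theta))
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    diag = int(np.ceil(np.hypot(*binary.shape)))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    cols = np.arange(n_theta)
    for x, y in zip(xs, ys):
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, cols] += 1
    return acc, diag
```

Voting in parameter space makes the detector robust to the photon and detector noise the abstract emphasizes: isolated noisy pixels spread their votes thinly, while genuine linear features concentrate theirs.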

  18. Image processing methods and architectures in diagnostic pathology.

    PubMed

    Bueno, Gloria; Déniz, Oscar; Salido, Jesús; Rojo, Marcial García

    2009-01-01

    Grid technology has enabled the clustering of, and efficient, secure access to and interaction among, a wide variety of geographically distributed resources, such as supercomputers, storage systems, data sources, instruments, and special devices and services. Its main applications include large-scale computational and data-intensive problems in science and engineering. This paper considers general grid structures and methodologies, for both software and hardware, in image analysis for virtual tissue-based diagnosis, with a focus on user-level middleware. The article describes the distributed programming system developed by the authors for virtual slide analysis in diagnostic pathology. The system supports the image analysis operations commonly performed in anatomical pathology, and it takes into account security aspects and specialized infrastructures with high-level services designed to meet application requirements. Grids are likely to have a deep impact on health-related applications, and therefore they seem suitable for tissue-based diagnosis as well. The implemented system is a joint application that mixes Web and Grid Service Architectures around a distributed architecture for image processing. It has proven to be a successful solution for analyzing a large, heterogeneous set of histological images on an architecture of massively parallel processors using message passing and non-shared memory. PMID:20430740

  19. SIMS image processing methods for petroleum cracking catalyst characterization

    SciTech Connect

    Leta, D.P.; Lamberti, W.A.; Disko, M.M.; Kugler, E.L.; Varady, W.A. )

    1990-08-01

    The technique of Imaging Secondary Ion Mass Spectrometry (SIMS) has proven to be very well suited to the characterization of fluidized petroleum cracking catalysts (FCC). The ability to view elemental distributions with 0.5 micron spatial resolution at concentrations in the ppm range mates well with the submicron phases and low concentration contaminants present in commercial multi-component FCC particles. The use of ultra-low light level imaging systems with the intrinsically sensitive SIMS technique makes real time viewing of many of the elements important in FCC catalysts possible. Aluminum, silicon and the rare earth elements serve to define the major phases present within each catalyst particle, while the transition row elements and all of the alkali and alkaline elements may be seen at trace concentrations. Of particular importance is the use of the technique to study the distributions of nickel and vanadium which are the most deleterious of the contaminant metals. Modern image processing computers and software now allow the rapid quantitative analysis of SIMS elemental images in order to more clearly reveal the locations of the catalyst phases and the quantitative distributions of the contaminant metals on those phases. Although the analysis techniques discussed in this study may be applied to any of the contaminant elements, for simplicity the authors will limit their examples to the major catalyst elements, and the nickel and vanadium contaminants.

  20. Alternative method for Hamilton-Jacobi PDEs in image processing

    NASA Astrophysics Data System (ADS)

    Lagoutte, A.; Salat, H.; Vachier, C.

    2011-03-01

    Multiscale signal analysis has been used since the early 1990s as a powerful tool for image processing, notably in the linear case. However, nonlinear PDEs and the associated nonlinear operators have advantages over linear operators, notably the preservation of important features such as edges in images. In this paper, we focus on nonlinear Hamilton-Jacobi PDEs defined with adaptive speeds or, alternatively, on adaptive morphological filters, also called semi-flat morphological operators. Semi-flat morphology was introduced by H. Heijmans and studied only in the case where the speed (or, equivalently, the filtering parameter) is a decreasing function of the luminance. We propose to extend the definition suggested by H. Heijmans to the case of non-decreasing speeds. We also prove that a central property for defining morphological filters, the adjunction property, is preserved with our extended definitions. Finally, experimental applications are presented on real images, including the connection of thin lines by semi-flat dilations and image filtering by semi-flat openings.
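    For readers unfamiliar with the baseline being generalized: a flat grayscale dilation replaces each pixel by the maximum over a fixed window, and it is this fixed, luminance-independent window that the paper's semi-flat (adaptive-speed) operators relax. The sketch below shows only the flat baseline, not the semi-flat construction itself; the function name is invented:

```python
import numpy as np

def flat_dilation(img, radius=1):
    """Flat grayscale dilation with a (2r+1)x(2r+1) square window:
    each output pixel is the max over its neighborhood. Semi-flat
    operators make the effective window depend on the luminance."""
    padded = np.pad(img, radius, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + 2 * radius + 1,
                               x:x + 2 * radius + 1].max()
    return out
```

Dilating a single bright pixel spreads it over the full window, which is how repeated dilations can reconnect broken thin lines, the application the paper demonstrates with its semi-flat variant.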

  1. Processing of AUV Sidescan Sonar Images for Enhancement and Classification

    NASA Astrophysics Data System (ADS)

    Honsho, C.; Asada, A.; Ura, T.; Kim, K.

    2014-12-01

    An arc volcano hosting a hydrothermal field was surveyed using an autonomous underwater vehicle equipped with a sidescan sonar system and a multibeam echo sounder. The survey area is relatively small but has large variations in bathymetry and geology. To correct large geometric distortions in the sidescan images, actual topographic cross sections cut by the fan beams were taken into consideration instead of assuming a flat bottom. Beam pattern corrections were carefully performed in combination with theoretical radiometric corrections for slant range and incident angle. These detailed geometric and radiometric corrections made it possible to patch neighboring images together and build a complete picture of the whole survey area. Three textural attributes were computed from the corrected images by means of grey level co-occurrence matrices and used for seafloor classification. As no ground truth data were available to us, we used a cluster analysis for the classification and obtained a result that appears consistent with the geological features suggested by the topography. Moreover, the slopes of the caldera wall and of the central cones are clearly differentiated in the classification result, though the difference is not immediately obvious to the eye. As one of the classes clearly delineates a known hydrothermal field, we expect by analogy that this class will highlight hydrothermal features in the survey area, helping to detect potential targets to be specifically investigated for mineral exploration. Numerical processing of sonar images effectively complements visual inspection and helps provide a different perspective.
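    The grey level co-occurrence matrix (GLCM) step can be sketched as follows. The paper does not specify which three textural attributes were used; contrast, homogeneity and energy below are common Haralick-style choices and are shown only as an illustration.

    ```python
    import numpy as np

    def glcm(img, levels, dy=0, dx=1):
        """Grey level co-occurrence matrix for one pixel offset (dy, dx),
        normalised so its entries sum to 1."""
        P = np.zeros((levels, levels))
        H, W = img.shape
        for y in range(H - dy):
            for x in range(W - dx):
                P[img[y, x], img[y + dy, x + dx]] += 1
        return P / P.sum()

    def texture_attributes(P):
        """Three classic textural attributes computed from a GLCM."""
        i, j = np.indices(P.shape)
        contrast = ((i - j) ** 2 * P).sum()
        homogeneity = (P / (1.0 + np.abs(i - j))).sum()
        energy = (P ** 2).sum()
        return contrast, homogeneity, energy

    # A perfectly uniform patch has zero contrast and maximal energy/homogeneity.
    patch = np.zeros((8, 8), dtype=int)
    contrast, homogeneity, energy = texture_attributes(glcm(patch, levels=4))
    ```

    In a real pipeline these attributes would be computed per sliding window over the corrected mosaic and fed to the cluster analysis as a feature vector.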

  2. From Acoustic Segmentation to Language Processing: Evidence from Optical Imaging

    PubMed Central

    Obrig, Hellmuth; Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell

    2010-01-01

    During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use “anchors” to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information has been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease at which we master our language in adulthood. One question here is whether the hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic factors, “guide” the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, pointedly in infants and also when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development. PMID:20725516

  3. Computerized image processing in the Reginald Denny beating trial

    NASA Astrophysics Data System (ADS)

    Morrison, Lawrence C.

    1997-02-01

    New image processing techniques may have significant benefits to law enforcement officials but need to be legally admissible in court. Courts have different tests for determining the admissibility of new scientific procedures, requiring their reliability to be established by expert testimony. The first test developed was whether there has been general acceptance of the new procedure within the scientific community. In 1993 the U.S. Supreme Court loosened the requirements for admissibility of new scientific techniques, although the California Supreme Court later retained the general acceptance test. What the proper standard is for admission of such evidence is important to both the technical community and to the legal community because of the conflict between benefits of rapidly developing technology, and the dangers of 'junk science.' The Reginald Denny beating case from the 1992 Los Angeles riots proved the value of computerized image processing in identifying persons committing crimes on videotape. The segmentation process was used to establish the presence of a tattoo on one defendant, which was key in his identification. Following the defendant's conviction, the California Court of Appeal approved the use of the evidence involving the segmentation process. This published opinion may be cited as legal precedent.

  4. From acoustic segmentation to language processing: evidence from optical imaging.

    PubMed

    Obrig, Hellmuth; Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell

    2010-01-01

    During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use "anchors" to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information has been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease at which we master our language in adulthood. One question here is whether the hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic factors, "guide" the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, pointedly in infants and also when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development.

  5. Teaching digital image processing and computer vision in a quantitative imaging electronic classroom

    NASA Astrophysics Data System (ADS)

    Sonka, Milan

    1998-06-01

    In 1996, the University of Iowa launched a multiphase project for the development of a well-structured interdisciplinary image systems engineering curriculum with both depth and breadth in its offerings. This project has been supported by equipment grants from the Hewlett-Packard Company. The new teaching approach that we are currently developing is very different from the one we used in previous years: lectures consist of the presentation of concepts, immediately followed by examples and practical exploratory problems. Six image processing classes have been offered in the new collaborative learning environment during the first two academic years. This paper outlines the educational approach we are taking and summarizes our early experience.

  6. IBIS - A geographic information system based on digital image processing and image raster datatype

    NASA Technical Reports Server (NTRS)

    Bryant, N. A.; Zobrist, A. L.

    1976-01-01

    IBIS (Image Based Information System) is a geographic information system which makes use of digital image processing techniques to interface existing geocoded data sets and information management systems with thematic maps and remotely sensed imagery. The basic premise is that geocoded data sets can be referenced to a raster scan that is equivalent to a grid cell data set. The first applications (St. Tammany Parish, Louisiana, and Los Angeles County) have been restricted to the design of a land resource inventory and analysis system. It is thought that the algorithms and the hardware interfaces developed will be readily applicable to other Landsat imagery.

  7. An effective data acquisition system using image processing

    NASA Astrophysics Data System (ADS)

    Poh, Chung-How; Poh, Chung-Kiak

    2005-12-01

    The authors investigate a data acquisition system utilising the widely available digital multimeter and the webcam. The system is suited for applications that require sampling rates of less than about 1 Hz, such as ambient temperature recording or the monitoring of the charging state of rechargeable batteries. The data displayed on the external digital readout is acquired into the computer through the process of template matching. MATLAB is used as the programming language for processing the captured 2-D images in this demonstration. An RC charging experiment with a time constant of approximately 33 s is set up to verify the accuracy of the image-to-data conversion. It is found that the acquired data match the steady-state voltage value displayed by the digital meter once an error detection technique has been devised and implemented in the data acquisition script file. It is possible to acquire a number of different readings simultaneously from various sources with this imaging method by placing a number of digital readouts within the camera's field of view.
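    The core of such a readout system is matching each digit patch against a set of stored templates. A minimal sketch, using zero-mean normalised cross-correlation (the paper's demonstration uses MATLAB; numpy is used here, and the tiny 3x5 glyphs are purely illustrative):

    ```python
    import numpy as np

    def ncc(a, b):
        """Zero-mean normalised cross-correlation score between two
        equally sized patches (1.0 = perfect match)."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def read_digit(patch, templates):
        """Return the template label with the highest NCC score against the patch."""
        return max(templates, key=lambda label: ncc(patch, templates[label]))

    # Toy 3x5 binary glyphs standing in for the meter's digit templates.
    templates = {
        "0": np.array([[1,1,1],[1,0,1],[1,0,1],[1,0,1],[1,1,1]], float),
        "1": np.array([[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0]], float),
    }
    patch = templates["1"].copy()
    patch[0, 0] = 1.0   # one corrupted pixel, e.g. glare on the display
    digit = read_digit(patch, templates)
    ```

    Because the score is normalised, the match is tolerant of the uniform brightness changes that a webcam pointed at an LCD inevitably produces.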

  8. PET/CT for radiotherapy: image acquisition and data processing.

    PubMed

    Bettinardi, V; Picchio, M; Di Muzio, N; Gianolli, L; Messa, C; Gilardi, M C

    2010-10-01

    This paper focuses on acquisition and processing methods in positron emission tomography/computed tomography (PET/CT) for radiotherapy (RT) applications. The recent technological evolutions of PET/CT systems are described. Particular emphasis is dedicated to the tools needed for patient positioning and immobilization, to be used in PET/CT studies as well as during RT treatment sessions. The effect of organ and lesion motion due to the patient's respiration on PET/CT imaging is discussed. Breathing protocols proposed to minimize PET/CT spatial mismatches in relation to respiratory movements are illustrated. The respiratory gated (RG) 4D-PET/CT techniques, developed to measure and compensate for organ and lesion motion, are then introduced. Finally, a description is provided of different acquisition and data processing techniques, implemented with the aim of improving: i) image quality and quantitative accuracy of PET images, and ii) target volume definition and treatment planning in RT, by using specific and personalised motion information.

  9. Experiences with transputer systems for high-speed image processing

    NASA Astrophysics Data System (ADS)

    Kille, Knut; Ahlers, Rolf-Juergen; Schneider, B.

    1991-03-01

    Concurrent processing is one approach to high-speed image analysis, and transputer systems are flexible tools for parallel processing. Based on examples of the Fast Hartley Transform, convolution, and contour tracking, the suitability of commercial transputer systems for industrial image processing applications is demonstrated. The main emphasis is placed on the interfacing to the frame grabber and on different potential hardware and software configurations. Advantages, restrictions, and useful applications of current transputer systems for image analysis are discussed at both the hardware and software level, and experiences and realized speedup factors are presented. A typical member of the transputer product family is a single chip containing processor, memory, and communication links which provide point-to-point connections between transputers. A transputer can be used in a single-processor system or in networks to build high-performance concurrent systems. A network of transputers is easily constructed using point-to-point communication, which has many advantages over multiprocessor buses. Transputers can be programmed in high-level languages; to gain most benefit from the transputer architecture, the whole system should be programmed in OCCAM, a high-level language which supports parallel processing. The features of the IMS T800 transputer (see figure 1) are, in detail: 32-bit architecture; 33 ns internal cycle time; 30 MIPS (peak) instruction rate; 4 Mflops (peak) floating-point rate; 64-bit on-chip floating point unit; 4 Kbytes of on-chip static RAM; 120 Mbytes/sec sustained data rate to internal memory; 4 Gbytes of directly addressable external memory; 40 Mbytes/sec sustained data rate to external memory; four serial links at 5/10/20 Mbits/sec; bidirectional data rate of 2.4 Mbytes/sec per link. [SPIE Vol. 1386, Machine Vision Systems Integration in Industry]

  10. Quantitative Evaluation of Strain Near Tooth Fillet by Image Processing

    NASA Astrophysics Data System (ADS)

    Masuyama, Tomoya; Yoshiizumi, Satoshi; Inoue, Katsumi

    The accurate measurement of strain and stress in a tooth is important for the reliable evaluation of the strength or life of gears. In this research, a strain measurement method based on image processing is applied to the analysis of strain near the tooth fillet. The loaded tooth is photographed using a CCD camera and stored as a digital image. The displacement of points on the tooth flank is tracked by the cross-correlation method, and the strain is then calculated. The interrogation window size of the correlation method and the overlap amount affect the accuracy and resolution. For measurements on structures with complicated profiles such as fillets, the interrogation window should be kept large and the overlap amount should also be large. The surface condition also affects the accuracy: a white-painted surface speckled with small black particles is suitable for the measurement.
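    The window-tracking step can be sketched as an exhaustive integer-pixel cross-correlation search; the strain along a line then follows from the difference of displacements of neighbouring windows divided by their separation. This is a minimal digital-image-correlation sketch, not the authors' implementation; window size, search range and the synthetic speckle pattern are illustrative.

    ```python
    import numpy as np

    def ncc(a, b):
        a = a - a.mean(); b = b - b.mean()
        d = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / d if d > 0 else 0.0

    def track_window(ref, cur, y, x, w, search):
        """Find the integer-pixel displacement of the w x w interrogation
        window at (y, x) by exhaustive NCC search within +/- search pixels."""
        win = ref[y:y + w, x:x + w]
        best, best_d = -2.0, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = cur[y + dy:y + dy + w, x + dx:x + dx + w]
                s = ncc(win, cand)
                if s > best:
                    best, best_d = s, (dy, dx)
        return best_d

    rng = np.random.default_rng(0)
    ref = rng.random((64, 64))                 # speckle-like random pattern
    cur = np.roll(ref, (0, 2), axis=(0, 1))    # rigid 2-pixel shift in x
    (dy1, dx1) = track_window(ref, cur, 20, 20, 16, 4)
    (dy2, dx2) = track_window(ref, cur, 20, 40, 16, 4)
    # Normal strain between the two windows: difference of x-displacements
    # over their separation; zero here because the motion is rigid.
    strain = (dx2 - dx1) / 20.0
    ```

    In practice sub-pixel interpolation of the correlation peak is added, and, as the abstract notes, larger windows and overlaps are needed near complicated profiles such as fillets.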

  11. 2D hexagonal quaternion Fourier transform in color image processing

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; Agaian, Sos S.

    2016-05-01

    In this paper, we present a novel concept of the quaternion discrete Fourier transform on the two-dimensional hexagonal lattice, which we call the two-dimensional hexagonal quaternion discrete Fourier transform (2-D HQDFT). The concept of the right-side 2-D HQDFT is described, and the left-side 2-D HQDFT is considered similarly. To calculate the transform, the image on the hexagonal lattice is described in the tensor representation, in which the image is represented by a set of 1-D signals, or splitting-signals, which can be processed separately in the frequency domain. The 2-D HQDFT can then be calculated by a set of 1-D quaternion discrete Fourier transforms (QDFTs) of the splitting-signals.
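    The 1-D right-side QDFT building block can be sketched directly. Below is a naive O(N^2) version (the splitting-signal machinery in the paper exists precisely to accelerate this); the transform axis MU = (i + j + k)/sqrt(3) is a common choice for colour signals and is an assumption here, not taken from the paper.

    ```python
    import numpy as np

    def qmul(a, b):
        """Hamilton product of quaternions stored as (w, x, y, z) arrays."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    MU = np.array([0.0, 1.0, 1.0, 1.0]) / np.sqrt(3.0)  # pure unit axis

    def qexp(theta):
        """exp(MU * theta) = cos(theta) + MU * sin(theta) for the pure unit MU."""
        return np.array([np.cos(theta), *(np.sin(theta) * MU[1:])])

    def qdft(f, sign=-1):
        """Right-side 1-D QDFT: F[k] = sum_n f[n] * exp(sign * MU * 2*pi*k*n/N)."""
        N = len(f)
        return [sum(qmul(f[n], qexp(sign * 2 * np.pi * k * n / N)) for n in range(N))
                for k in range(N)]

    def iqdft(F):
        N = len(F)
        return [q / N for q in qdft(F, sign=+1)]

    rng = np.random.default_rng(1)
    signal = [rng.standard_normal(4) for _ in range(8)]   # 8 random quaternions
    recovered = iqdft(qdft(signal))
    # Forward followed by inverse reproduces the original quaternion signal.
    ```

    Because the exponentials all share the axis MU, they commute with each other, which is what makes the inverse transform exact for the right-side convention.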

  12. Concurrent Image Processing Executive (CIPE). Volume 3: User's guide

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.; Kong, Mih-Seh

    1990-01-01

    CIPE (the Concurrent Image Processing Executive) is both an executive which organizes the parameter inputs for hypercube applications and an environment which provides temporary data workspace and simple real-time function definition facilities for image analysis. CIPE provides two types of user interface. The Command Line Interface (CLI) provides a simple command-driven environment allowing interactive function definition and evaluation of algebraic expressions. The menu interface employs a hierarchical screen-oriented menu system where the user is led through a menu tree to any specific application and then given a formatted panel screen for parameter entry. How to initialize the system through the setup function, how to read data into CIPE symbols, how to manipulate and display data through the use of executive functions, and how to run an application in either user interface mode, are described.

  13. Digital Image Processing Applied To Problems In Art And Archaeology

    NASA Astrophysics Data System (ADS)

    Asmus, John F.; Katz, Norman P.

    1988-12-01

    Many of the images encountered during scholarly studies in the fields of art and archaeology have deteriorated through the effects of time. The Ice-Age rock art of the now-closed caves near Lascaux are prime examples of this fate. However, faint and subtle details of these can be exceedingly important as some theories suggest that the designs are computers or calendars pertaining to astronomical cycles as well as seasons for hunting, gathering, and planting. Consequently, we have applied a range of standard image processing algorithms (viz., edge detection, spatial filtering, spectral differencing, and contrast enhancement) as well as specialized techniques (e.g., matched filters) to the clarification of these drawings. Also, we report the results of computer enhancement studies pertaining to authenticity, faint details, sitter identity, and age of portraits by da Vinci, Rembrandt, Rotari, and Titian.

  14. Fast image processing with a microcomputer applied to speckle photography

    NASA Astrophysics Data System (ADS)

    Erbeck, R.

    1985-11-01

    An automated image recognition system is described for speckle photography investigations in fluid dynamics. The system is employed for characterizing the pattern of interference fringes obtained using speckle interferometry. A rotating ground glass serves as a screen on which laser light passing through a specklegram plate, the flow, and a compensation plate is shone to produce a compensated Young's fringe pattern. The image produced on the ground glass is photographed by a video camera whose signal is digitized and processed by a microcomputer using a 6502 CPU. The normalized correlation function of the intensity is calculated in two directions of the recorded pattern to obtain the fringe wavelength and the light deflection angle. The system processes one picture every two seconds. Sample data are provided for a free jet of CO2 issuing into air in both laminar and turbulent form.
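    The fringe wavelength can be recovered from the normalized autocorrelation of the intensity: its first local maximum beyond zero lag sits at the fringe period. A minimal 1-D sketch on an idealised cosine fringe pattern (the real system does this in two directions on noisy video data):

    ```python
    import numpy as np

    def fringe_spacing(signal):
        """Estimate the fringe spacing (in samples) as the lag of the first
        local maximum of the normalised autocorrelation of the signal."""
        s = signal - signal.mean()
        ac = np.correlate(s, s, mode="full")[len(s) - 1:]
        ac = ac / ac[0]
        for k in range(1, len(ac) - 1):
            if ac[k - 1] < ac[k] >= ac[k + 1]:
                return k
        return None

    x = np.arange(256)
    period = 16                                        # known fringe period
    intensity = 1.0 + np.cos(2 * np.pi * x / period)   # idealised Young's fringes
    estimated = fringe_spacing(intensity)
    ```

    On real fringe images some smoothing of the autocorrelation is usually needed before the peak search, since speckle noise can create spurious local maxima.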

  15. Seismic reflection imaging of mixing processes in Fram Strait

    NASA Astrophysics Data System (ADS)

    Sarkar, Sudipta; Sheen, Katy L.; Klaeschen, Dirk; Brearley, J. Alexander; Minshull, Timothy A.; Berndt, Christian; Hobbs, Richard W.; Naveira Garabato, Alberto C.

    2015-10-01

    The West Spitsbergen Current, which flows northward along the western Svalbard continental slope, transports warm and saline Atlantic water (AW) into the Arctic Ocean. A combined analysis of high-resolution seismic images and hydrographic sections across this current has uncovered the oceanographic processes involved in horizontal and vertical mixing of AW. At the shelf break, where a strong horizontal temperature gradient exists east of the warmest AW, isopycnal interleaving of warm AW and surrounding colder waters is observed. Strong seismic reflections characterize these interleaving features, with a negative polarity reflection arising from an interface of warm water overlying colder water. A seismic-derived sound speed image reveals the extent and lateral continuity of such interleaving layers. There is evidence of obliquely aligned internal waves emanating from the slope at 450-500 m. They follow the predicted trajectory of internal S2 tidal waves and can promote vertical mixing between Atlantic and Arctic-origin waters.

  16. Image and geometry processing with Oriented and Scalable Map.

    PubMed

    Hua, Hao

    2016-05-01

    We turn the Self-organizing Map (SOM) into an Oriented and Scalable Map (OS-Map) by generalizing the neighborhood function and the winner selection. The homogeneous Gaussian neighborhood function is replaced with the matrix exponential. Thus we can specify the orientation either in the map space or in the data space. Moreover, we associate the map's global scale with the locality of winner selection. Our model is suited for a number of graphical applications such as texture/image synthesis, surface parameterization, and solid texture synthesis. OS-Map is more generic and versatile than the task-specific algorithms for these applications. Our work reveals the overlooked strength of SOMs in processing images and geometries.

  17. Analyzing Movements of Tennis Players by Dynamic Image Processing

    NASA Astrophysics Data System (ADS)

    Yazaki, Shunpei; Yamamoto, Osami

    Fast computers and digital video cameras have recently become inexpensive and readily available. In this paper, we describe a method and a system for analyzing the performance of the players in a given singles tennis game. We assume that a movie of the game has been shot with a digital video camera fixed at a place from which it can film the whole area of the tennis court and the two players, and that the sequence of images is available on the computer. Using the positions of the players and the ball, detected by image processing techniques and partially by human assistance, we analyzed the game by defining an evaluation function. We implemented the method in a prototype tennis-player-evaluation system. The system appropriately evaluated the performance of players in some sample tennis games, although the evaluation function cannot fully capture a player's tactics.

  18. The application of image processing software: Photoshop in environmental design

    NASA Astrophysics Data System (ADS)

    Dong, Baohua; Zhang, Chunmi; Zhuo, Chen

    2011-02-01

    In the process of environmental design and creation, the design sketch holds a very important position: it not only illuminates the design's idea and concept but also shows the design's visual effects to the client. In the field of environmental design, computer-aided design has made significant improvements. Many types of specialized design software for rendering environmental drawings and for artistic post-processing have been implemented. Additionally, with the use of this software, working efficiency has greatly increased and drawings have become more specific and more specialized. By analyzing the application of the Photoshop image processing software in environmental design, and by comparing and contrasting traditional hand drawing with drawing using modern technology, this essay explores how computer technology can play a bigger role in environmental design.

  19. Positron imaging techniques for process engineering: recent developments at Birmingham

    NASA Astrophysics Data System (ADS)

    Parker, D. J.; Leadbeater, T. W.; Fan, X.; Hausard, M. N.; Ingram, A.; Yang, Z.

    2008-09-01

    For over 20 years the University of Birmingham has been using positron-emitting radioactive tracers to study engineering processes. The imaging technique of positron emission tomography (PET), widely used for medical applications, has been adapted for these studies, and the complementary technique of positron emission particle tracking (PEPT) has been developed. The radioisotopes are produced using the Birmingham MC40 cyclotron, and a variety of techniques are employed to produce suitable tracers in a wide range of forms. Detectors originally designed for medical use have been modified for engineering applications, allowing measurements to be made on real process equipment, at laboratory or pilot plant scale. This paper briefly reviews the capability of the techniques and introduces a few of the many processes to which they have been applied.
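    At the heart of PEPT is triangulation: each detected annihilation defines a line of response (LOR) through the tracer particle, and the particle position is the point minimising the sum of squared perpendicular distances to many such lines. A minimal least-squares sketch (the production algorithm additionally discards corrupt, scattered events iteratively; the geometry below is illustrative):

    ```python
    import numpy as np

    def locate_particle(points, directions):
        """Least-squares position of a positron-emitting tracer from a set of
        lines of response, each given by a point on the line and a direction:
        minimises the sum of squared perpendicular distances to all lines."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for p, d in zip(points, directions):
            d = d / np.linalg.norm(d)
            M = np.eye(3) - np.outer(d, d)  # projector onto plane normal to d
            A += M
            b += M @ p
        return np.linalg.solve(A, b)

    # Three lines of response, all passing through the true tracer position.
    true_pos = np.array([1.0, 2.0, 3.0])
    dirs = [np.array([1.0, 0.0, 0.0]),
            np.array([0.0, 1.0, 0.0]),
            np.array([1.0, 1.0, 1.0])]
    pts = [true_pos - 5.0 * d for d in dirs]
    est = locate_particle(pts, dirs)
    ```

    Repeating this fit over short time slices of events yields the particle trajectory, which is what allows measurements on real, opaque process equipment.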

  20. Need for constraints in component-separable color image processing

    NASA Astrophysics Data System (ADS)

    Thomas, Bruce A.

    1995-03-01

    The component-wise processing of color image data is performed in a variety of applications. These operations are typically carried out using lookup table (LUT) based processing techniques, making them well suited for digital implementation. A general exposition of this type of processing is provided, indicating its remarkable utility along with some of the practical issues that can arise. These motivate a call for the use of constraints on the types of operators that are used during the construction of LUTs. Several particularly useful classes of constrained operators are identified. These lead to an object-oriented approach generalized to operate in a variety of color spaces. The power of this type of framework is then demonstrated via several novel applications in the HSL color space.

  1. Youpi: A Web-based Astronomical Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Monnerville, M.; Sémah, G.

    2010-12-01

    Youpi stands for “YOUpi is your processing PIpeline”. It is a portable, easy-to-use web application providing high-level functionality for performing data reduction on scientific FITS images. It is built on top of open source processing tools that are released to the community by Terapix, in order to organize your data on a computer cluster, to manage your processing jobs in real time, and to facilitate teamwork by allowing fine-grained sharing of results and data. On the server side, Youpi is written in the Python programming language and uses the Django web framework. On the client side, Ajax techniques are used along with the Prototype and script.aculo.us JavaScript libraries.

  2. Microtomographic imaging in the process of bone modeling and simulation

    NASA Astrophysics Data System (ADS)

    Mueller, Ralph

    1999-09-01

    Micro-computed tomography (μCT) is an emerging technique to nondestructively image and quantify trabecular bone in three dimensions. Where the early implementations of μCT focused more on technical aspects of the systems and required equipment not normally available to the general public, a more recent development emphasized practical aspects of micro-tomographic imaging. That system is based on a compact fan-beam type of tomograph, also referred to as desktop μCT. Desktop μCT has been used extensively for the investigation of osteoporosis related health problems gaining new insight into the organization of trabecular bone and the influence of osteoporotic bone loss on bone architecture and the competence of bone. Osteoporosis is a condition characterized by excessive bone loss and deterioration in bone architecture. The reduced quality of bone increases the risk of fracture. Current imaging technologies do not allow accurate in vivo measurements of bone structure over several decades or the investigation of the local remodeling stimuli at the tissue level. Therefore, computer simulations and new experimental modeling procedures are necessary for determining the long-term effects of age, menopause, and osteoporosis on bone. Microstructural bone models allow us to study not only the effects of osteoporosis on the skeleton but also to assess and monitor the effectiveness of new treatment regimens. The basis for such approaches are realistic models of bone and a sound understanding of the underlying biological and mechanical processes in bone physiology. In this article, strategies for new approaches to bone modeling and simulation in the study and treatment of osteoporosis and age-related bone loss are presented. The focus is on the bioengineering and imaging aspects of osteoporosis research. With the introduction of desktop μCT, a new generation of imaging instruments has entered the arena allowing easy and relatively inexpensive access to

  3. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND flash memory and 256 MB of SDRAM. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented, measuring the speedup obtained due to the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks while the DSP addresses performance-hungry algorithms.
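    The reason the DSP helps is that image correlation can be computed entirely in the frequency domain, turning a sliding-window search into three FFTs and a pointwise product. A numpy sketch of the frequency-domain formulation (the paper offloads exactly these DFTs to the C64x core; sizes and the random template are illustrative):

    ```python
    import numpy as np

    def fft_cross_correlation(img, tmpl):
        """Circular cross-correlation of an image with a (smaller) template,
        computed in the frequency domain; the peak marks the template location."""
        F = np.fft.rfft2(img)
        T = np.fft.rfft2(tmpl, s=img.shape)   # zero-pad template to image size
        return np.fft.irfft2(F * np.conj(T), s=img.shape)

    rng = np.random.default_rng(2)
    tmpl = rng.random((8, 8))
    img = np.zeros((32, 32))
    img[5:13, 9:17] = tmpl                    # embed the template at (5, 9)
    corr = fft_cross_correlation(img, tmpl)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    ```

    For robust matching (e.g. face recognition under lighting changes) a normalised variant is used in practice, but the FFT structure, and hence the DSP advantage, is the same.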

  4. Image processing techniques for detection of buried objects with infrared images

    NASA Astrophysics Data System (ADS)

    Cerón-Correa, Alexander

    2006-01-01

    This document describes the principles of infrared thermography and its application to humanitarian demining around the world, as well as the factors influencing its application in a country like Colombia, which suffers badly from the problem posed by antipersonnel mines. The main factors that affect the images taken by different sensors are: time of day, mine size and material, installation angle, the object's burial depth, moisture, emissivity, wind, rain, and other nearby objects shadowing the images. Infrared image processing methods and the results of tests done at different sites in the country, such as Cartagena, Bogota, and Tolemaida, are also shown. Finally, a method for detecting the presence of a buried object is presented together with its successful results.

  5. Optical imaging process based on two-dimensional Fourier transform for synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Sun, Zhiwei; Zhi, Ya'nan; Liu, Liren; Sun, Jianfeng; Zhou, Yu; Hou, Peipei

    2013-09-01

    Synthetic aperture imaging ladar (SAIL) systems typically generate large amounts of data that are difficult to process with digital methods. This paper presents an optical SAIL processor based on compensation of the quadratic phase of the echo in the azimuth direction followed by a two-dimensional Fourier transform. The optical processor mainly consists of a phase-only liquid crystal spatial light modulator (LCSLM) to load the phase data of the target echo, a cylindrical lens to compensate the quadratic phase, and a spherical lens to perform the two-dimensional Fourier transform. We show the image processing result for a practical target echo obtained with a synthetic aperture imaging ladar demonstrator. The optical processor is compact and lightweight and provides inherently parallel, speed-of-light computing capability; it has a promising future especially in onboard and satellite-borne SAIL systems.
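    The underlying mathematics (quadratic-phase compensation followed by a Fourier transform) can be illustrated digitally in one azimuth dimension: multiplying the echo by a reference conjugate chirp cancels the quadratic phase, leaving a pure tone whose frequency encodes the target position. This is only a 1-D numerical analogue of what the cylindrical and spherical lenses do optically; the chirp rate and target position are illustrative.

    ```python
    import numpy as np

    N = 256
    K = 1.0 / N                   # chirp rate (assumed, for illustration)
    n = np.arange(N)
    n0 = 60                       # true target position in azimuth samples

    echo = np.exp(1j * np.pi * K * (n - n0) ** 2)   # quadratic-phase echo
    reference = np.exp(-1j * np.pi * K * n ** 2)    # compensating chirp
    dechirped = echo * reference  # residual is a pure tone at frequency -K*n0
    spectrum = np.abs(np.fft.fft(dechirped))
    peak_bin = int(np.argmax(spectrum))             # = (N - n0) % N here
    ```

    Expanding the phases shows why: (n - n0)^2 - n^2 = n0^2 - 2*n*n0, so after compensation the remaining phase is linear in n, and the Fourier transform focuses it to a single peak.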

  6. Software and Algorithms for Biomedical Image Data Processing and Visualization

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Lambert, James; Lam, Raymond

    2004-01-01

    A new software package equipped with novel image processing algorithms and graphical-user-interface (GUI) tools has been designed for automated analysis and processing of large amounts of biomedical image data. The software, called PlaqTrak, has been specifically used for analysis of plaque on teeth of patients. New algorithms have been developed and implemented to segment teeth of interest from surrounding gum, and a real-time image-based morphing procedure is used to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The PlaqTrak system integrates these components into a single software suite with an easy-to-use GUI (see Figure 1) that allows users to do an end-to-end run of a patient's record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image. The automated and accurate processing of the captured images to segment each tooth [see Figure 2(a)] and then detect plaque on a tooth-by-tooth basis is a critical component of the PlaqTrak system for conducting clinical trials and analysis with minimal human intervention. These features offer distinct advantages over other competing systems that analyze groups of teeth or synthetic teeth. PlaqTrak divides each segmented tooth into eight regions using an advanced graphics morphing procedure [see results on a chipped tooth in Figure 2(b)], and a pattern recognition classifier is then used to locate plaque [red regions in Figure 2(d)] and enamel regions. The morphing allows analysis within regions of teeth, thereby facilitating detailed statistical analysis such as the amount of plaque present on the biting surfaces of teeth.
This software system is applicable to a host of biomedical applications, such as cell analysis and life detection, or robotic applications, such

  7. Transplant Image Processing Technology under Windows into the Platform Based on MiniGUI

    NASA Astrophysics Data System (ADS)

    Gan, Lan; Zhang, Xu; Lv, Wenya; Yu, Jia

    MFC provides a large number of digital image processing-related API functions, along with object-oriented class mechanisms, giving image processing technology strong support under Windows. In embedded systems, however, image processing does not enjoy an MFC-like environment owing to hardware and software restrictions. This paper therefore draws on the image processing techniques of Windows and transplants them into MiniGUI-based embedded systems. The results show that MiniGUI/Embedded graphical user interface applications for image processing achieve good results when used in embedded image processing systems.

  8. In-process thermal imaging of the electron beam freeform fabrication process

    NASA Astrophysics Data System (ADS)

    Taminger, Karen M.; Domack, Christopher S.; Zalameda, Joseph N.; Taminger, Brian L.; Hafley, Robert A.; Burke, Eric R.

    2016-05-01

    Researchers at NASA Langley Research Center have been developing the Electron Beam Freeform Fabrication (EBF3) metal additive manufacturing process for the past 15 years. In this process, an electron beam is used as a heat source to create a small molten pool on a substrate into which wire is fed. The electron beam and wire feed assembly are translated with respect to the substrate to follow a predetermined tool path. This process is repeated in a layer-wise fashion to fabricate metal structural components. In-process imaging has been integrated into the EBF3 system using a near-infrared (NIR) camera. The images are processed to provide thermal and spatial measurements that have been incorporated into a closed-loop control system to maintain consistent thermal conditions throughout the build. Other information in the thermal images is being used to assess quality in real time by detecting flaws in prior layers of the deposit. NIR camera incorporation into the system has improved the consistency of the deposited material and provides the potential for real-time flaw detection which, ultimately, could lead to the manufacture of better, more reliable components using this additive manufacturing process.
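    The real-time flaw detection described above rests on the observation that defects in prior layers perturb the local thermal signature. As a toy illustration (the temperature values and the median-baseline rule below are our own assumptions, not NASA's actual algorithm), flagging pixels that deviate from the bulk deposit temperature locates an anomaly in a simulated thermal frame:

```python
import numpy as np

# Hypothetical 8x8 "thermal image" (arbitrary units): a uniform deposit
# with one anomalously cold pixel standing in for a flaw in a prior layer.
frame = np.full((8, 8), 1000.0)
frame[5, 2] = 700.0  # simulated flaw (e.g., a void cooling faster)

# Flag pixels deviating from the bulk deposit temperature by more than
# a fixed tolerance; a real system would use calibrated NIR radiance.
baseline = np.median(frame)
tolerance = 100.0
flaws = np.argwhere(np.abs(frame - baseline) > tolerance)
```

    The same per-frame statistics (pool centroid, size, temperature) could feed the closed-loop controller the abstract describes.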

  9. In-Process Thermal Imaging of the Electron Beam Freeform Fabrication Process

    NASA Technical Reports Server (NTRS)

    Taminger, Karen M.; Domack, Christopher S.; Zalameda, Joseph N.; Taminger, Brian L.; Hafley, Robert A.; Burke, Eric R.

    2016-01-01

    Researchers at NASA Langley Research Center have been developing the Electron Beam Freeform Fabrication (EBF3) metal additive manufacturing process for the past 15 years. In this process, an electron beam is used as a heat source to create a small molten pool on a substrate into which wire is fed. The electron beam and wire feed assembly are translated with respect to the substrate to follow a predetermined tool path. This process is repeated in a layer-wise fashion to fabricate metal structural components. In-process imaging has been integrated into the EBF3 system using a near-infrared (NIR) camera. The images are processed to provide thermal and spatial measurements that have been incorporated into a closed-loop control system to maintain consistent thermal conditions throughout the build. Other information in the thermal images is being used to assess quality in real time by detecting flaws in prior layers of the deposit. NIR camera incorporation into the system has improved the consistency of the deposited material and provides the potential for real-time flaw detection which, ultimately, could lead to the manufacture of better, more reliable components using this additive manufacturing process.

  10. Digital image database processing to simulate image formation in ideal lighting conditions of the human eye

    NASA Astrophysics Data System (ADS)

    Castañeda-Santos, Jessica; Santiago-Alvarado, Agustin; Cruz-Félix, Angel S.; Hernández-Méndez, Arturo

    2015-09-01

    The pupil size of the human eye has a large effect on image quality due to inherent aberrations. Several studies have been performed to calculate its size relative to the luminance, also considering other factors, e.g., age, size of the adapting field, and monocular versus binocular vision. Moreover, although ideal lighting conditions are known, software suited to our specific requirements (low cost and low computational consumption) for simulating light adaptation and image formation in the retina under ideal lighting conditions has not yet been developed. In this work, a database is created consisting of 70 photographs of the same scene with a fixed target at different times of the day. Using this database, characteristics of the photographs are obtained by measuring the average-luminance initial threshold value of each photograph by means of an image histogram. We also present the implementation of a digital filter for both image processing on the threshold values of our database and generating output images with the threshold values reported for the human eye in ideal cases. Potential applications for this kind of filter include artificial vision systems.
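    The histogram-based threshold the authors mention can be sketched simply; the synthetic two-population "photograph" below is invented to stand in for one of their database images, and the average-luminance rule is our reading of the abstract:

```python
import numpy as np

# Hypothetical grayscale "photograph" with two luminance populations
# (a dark region and a bright region), standing in for a database image.
rng = np.random.default_rng(0)
dark = rng.integers(20, 60, size=5000)
bright = rng.integers(180, 220, size=5000)
image = np.concatenate([dark, bright]).astype(np.uint8)

# Average-luminance threshold computed from the image histogram.
hist, bins = np.histogram(image, bins=256, range=(0, 256))
levels = bins[:-1]
mean_luminance = (hist * levels).sum() / hist.sum()

# A filter would then attenuate or boost pixels relative to this threshold.
bright_fraction = (image > mean_luminance).mean()
```

    With two well-separated populations the mean splits them cleanly, so half the pixels land above the threshold.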

  11. Improved cancer diagnostics by different image processing techniques on OCT images

    NASA Astrophysics Data System (ADS)

    Kanawade, Rajesh; Lengenfelder, Benjamin; Marini Menezes, Tassiana; Hohmann, Martin; Kopfinger, Stefan; Hohmann, Tim; Grabiec, Urszula; Klämpfl, Florian; Gonzales Menezes, Jean; Waldner, Maximilian; Schmidt, Michael

    2015-07-01

    Optical coherence tomography (OCT) is a promising non-invasive, high-resolution imaging modality which can be used for cancer diagnosis and its therapeutic assessment. However, speckle noise makes detection of cancer boundaries and image segmentation problematic and unreliable. Therefore, to improve the image analysis for precise cancer border detection, the performance of different image processing algorithms such as the mean, median, and hybrid median filters and the rotational kernel transformation (RKT) is investigated for this task. This is done on OCT images acquired from ex-vivo human cancerous mucosa and, in vitro, from cultivated tumours applied to organotypic hippocampal slice cultures. The preliminary results confirm that the border between healthy tissue and cancerous lesions can be identified precisely. The obtained results are verified with fluorescence microscopy. This research can improve cancer diagnosis and the detection of borders between healthy and cancerous tissue. Thus, it could also reduce the number of biopsies required during screening endoscopy by providing better guidance to the physician.
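    Of the filters compared above, the median filter is the simplest to sketch. The toy "B-scan" below is invented (two flat tissue layers plus one speckle spike); a 3x3 median filter, hand-rolled in numpy for self-containment, removes the spike while leaving the layer boundary sharp, which is why median-family filters suit speckle suppression:

```python
import numpy as np

def median3x3(img):
    """3x3 median filter (edge pixels left unchanged), a common speckle reducer."""
    out = img.astype(float).copy()
    h, w = img.shape
    # Stack the nine shifted views of the interior and take their median.
    stack = [img[i:i + h - 2, j:j + w - 2] for i in range(3) for j in range(3)]
    out[1:-1, 1:-1] = np.median(np.stack(stack), axis=0)
    return out

# Toy "OCT B-scan": two tissue layers plus a single speckle spike.
scan = np.vstack([np.full((4, 8), 10.0), np.full((4, 8), 50.0)])
scan[2, 3] = 255.0  # speckle spike
filtered = median3x3(scan)
```

    The spike at (2, 3) is replaced by the layer value, while the layer edge at row 4 survives, illustrating the edge-preserving behaviour that matters for border detection.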

  12. Image processing analysis of traditional Gestalt vision experiments

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2002-06-01

    In the late 19th century, Gestalt psychology rebelled against the popular new science of psychophysics. The Gestalt revolution used many fascinating visual examples to illustrate that the whole is greater than the sum of its parts. Color constancy was an important example. The physical interpretation of sensations and their quantification by JNDs and Weber fractions were met with innumerable examples in which two 'identical' physical stimuli did not look the same. The fact that large changes in the color of the illumination failed to change color appearance in real scenes demanded something more than quantifying the psychophysical response of a single pixel. The debate continues today between proponents of physical, pixel-based colorimetry and perceptual, image-based cognitive interpretations. Modern instrumentation has made colorimetric pixel measurement universal. As well, new examples of unconscious inference continue to be reported in the literature. Image processing provides a new way of analyzing familiar Gestalt displays. Since the pioneering experiments by Fergus Campbell and Land, we know that human vision has independent spatial channels and independent color channels. Color matching data from color constancy experiments agree with spatial comparison analysis. In this analysis, simple spatial processes can explain the different appearances of 'identical' stimuli by analyzing the multiresolution spatial properties of their surrounds. Benary's Cross, White's Effect, the Checkerboard Illusion and the Dungeon Illusion can all be understood by the analysis of their low-spatial-frequency components. Just as with color constancy, these Gestalt images are most simply described by the analysis of spatial components. Simple spatial mechanisms account for the appearance of 'identical' stimuli in complex scenes. It does not require complex, cognitive processes to calculate appearances in familiar Gestalt experiments.

  13. An image processing application for the localization and segmentation of lymphoblast cell using peripheral blood images.

    PubMed

    Madhloom, Hayan T; Kareem, Sameem Abdul; Ariffin, Hany

    2012-08-01

    An important preliminary step in the diagnosis of leukemia is the visual examination of the patient's peripheral blood smear under the microscope. Morphological changes in the white blood cells can be an indicator of the nature and severity of the disease. Manual techniques are labor intensive, slow, error prone and costly. A computerized system can be used as a supportive tool for the specialist in order to enhance and accelerate the morphological analysis process. This research presents a new method that integrates color features with morphological reconstruction to localize and isolate lymphoblast cells in a microscope image that contains many cells. The localization and segmentation are conducted using a proposed method that consists of an integration of several digital image processing techniques. 180 microscopic blood images were tested, and the proposed framework managed to obtain 100% accuracy in localizing the lymphoblast cells and separating them from the image scene. The results obtained indicate that the proposed method can be safely used for the purpose of lymphoblast cell localization and segmentation and, subsequently, for aiding the diagnosis of leukemia.
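    The color-feature step can be illustrated with a toy example. The image, the channel values, and the specific thresholding rule below are all invented stand-ins (the paper's actual pipeline combines color features with morphological reconstruction, which is not reproduced here); the idea is simply that stained nuclei are dark and blue-dominant against a pale smear background:

```python
import numpy as np

# Toy RGB "blood smear": pale background, one purple lymphoblast-like blob.
img = np.full((10, 10, 3), [220, 210, 215], dtype=float)
img[3:6, 4:7] = [90, 40, 140]  # purple nucleus-like region

# Color feature: flag pixels that are blue-dominant and dark in red.
r, g, b = img[..., 0], img[..., 1], img[..., 2]
mask = (b > g + 50) & (r < 150)

# Localize the candidate cell from the mask.
ys, xs = np.nonzero(mask)
centroid = (ys.mean(), xs.mean())
```

    A real system would follow this with morphological reconstruction to clean the mask and separate touching cells before measuring cell morphology.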

  14. High-definition image digitising and processing system

    NASA Astrophysics Data System (ADS)

    Rebiai, Mohamed; Pinson, Franck; Michaud, Pierre

    1993-11-01

    The efforts so far deployed by European industry engaged in the development of High Definition Television (HDTV) under EUREKA 95 have been mostly based on broadcast applications. These HDTV applications have also led to developments outside the broadcast domain, notably in high-definition still imaging. The workstations resulting from these developments might be boosted by accelerator cards in order to be able to process standard or HD television signals. This evolution will require several developments: workstations much more powerful than today's; ASIC and/or multiprocessor developments with the associated parallel programs; and interfaces with other HD equipment, i.e. HD input, genlock and output cards.

  15. Application of image processing techniques to fluid flow data analysis

    NASA Technical Reports Server (NTRS)

    Giamati, C. C.

    1981-01-01

    The application of color coding techniques used in processing remote sensing imagery to analyze and display fluid flow data is discussed. A minicomputer-based color film recording and color CRT display system is described. High quality, high resolution images of two-dimensional data are produced on the film recorder. Three-dimensional data, in large volume, are used to generate color motion pictures in which time is used to represent the third dimension. Several applications and examples are presented. System hardware and software are described.
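    The core of such color coding is a mapping from a scalar field to RGB. As a minimal sketch (the blue-to-red ramp and the toy flow-speed field are our own choices, not the NASA system's palette), each data value is normalized and assigned a color along a ramp:

```python
import numpy as np

def to_color(field):
    """Map a scalar field to RGB with a simple blue-to-red ramp,
    the kind of color coding used for remote-sensing imagery."""
    t = (field - field.min()) / (field.max() - field.min())
    rgb = np.empty(field.shape + (3,))
    rgb[..., 0] = t          # red grows with the value
    rgb[..., 1] = 0.0
    rgb[..., 2] = 1.0 - t    # blue fades with the value
    return rgb

# Toy 2-D flow-speed field: a linear ramp from 0 to 10.
speed = np.linspace(0.0, 10.0, 16).reshape(4, 4)
colors = to_color(speed)
```

    For the three-dimensional case described above, one such frame per time step strung together yields the color motion picture, with time standing in for the third dimension.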

  16. An Independent Workstation For CT Image Processing And Analysis

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Sewchand, Wilfred

    1988-06-01

    This manuscript describes an independent workstation which consists of a data acquisition and transfer system, a host computer, and a display and record system. The main tasks of the workstation include the collecting and managing of a vast amount of data, creating and processing 2-D and 3-D images, conducting quantitative data analysis, and recording and exchanging information. This workstation not only meets the requirements for routine clinical applications, but it is also used extensively for research purposes. It is stand-alone and works as a physician's workstation; moreover, it can be easily linked into a computer-network and serve as a component of PACS (Picture Archiving and Communication System).

  17. A stereo inspecting detection system based on electronic imaging and computer image processing

    NASA Astrophysics Data System (ADS)

    Lu, Dong; Lu, Zhihong; Wang, Aiguo; Cao, Miao

    2006-01-01

    This paper introduces an internal-flaw detection system for internal flaws of metal tubes and internal holes of mechanical devices, such as crazing, rust corrosion, and dropping of electroplated coating. Using a xenon light as the source through a fiber bundle, the internal surface of the tube is illuminated and imaged onto a CCD optical receiver by an endoscope; the direction and size of the flaws are then measured after processing by the image grab section and the computer processing system. The system can also inspect the water pipes of aircraft and the firebox, turbine, and blades of aero engines, which cannot be viewed and inspected closely by the human eye. It can measure their redundancy and the size of internal flaws.

  18. Geostationary Imaging FTS (GIFTS) Data Processing: Measurement Simulation and Compression

    NASA Technical Reports Server (NTRS)

    Huang, Hung-Lung; Revercomb, H. E.; Thom, J.; Antonelli, P. B.; Osborne, B.; Tobin, D.; Knuteson, R.; Garcia, R.; Dutcher, S.; Li, J.

    2001-01-01

    GIFTS (Geostationary Imaging Fourier Transform Spectrometer), a forerunner of next-generation geostationary satellite weather observing systems, will be built to fly on the NASA EO-3 geostationary orbit mission in 2004 to demonstrate the use of large-area detector arrays and readouts. Timely high spatial resolution images and quantitative soundings of clouds, water vapor, temperature, and pollutants of the atmosphere for weather prediction and air quality monitoring will be achieved. GIFTS is novel in providing many scientific returns that traditionally can only be achieved by separate advanced imaging and sounding systems. GIFTS' ability to obtain half-hourly, high-vertical-density winds over the full earth disk is revolutionary. However, these new technologies bring forth many challenges for data transmission, archiving, and geophysical data processing. In this paper, we focus on data volume and downlink issues by conducting a GIFTS data compression experiment. We discuss the scenario of using principal component analysis as a foundation for atmospheric data retrieval and compression of uncalibrated and un-normalized interferograms. The effects of compression on the degradation of the signal and noise reduction in the interferogram and spectral domains are highlighted. A simulation system developed to model the GIFTS instrument measurements is described in detail.
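    Principal component compression exploits the fact that an ensemble of interferograms is highly correlated, so a few leading components capture almost all the variance. As a minimal sketch (the synthetic ensemble below is invented: samples lying in a 3-dimensional subspace plus small noise, not real GIFTS data), projecting onto the leading components shrinks each record while reconstructing it almost exactly:

```python
import numpy as np

# Toy ensemble of "interferograms": 100 records of length 50 that
# actually live in a 3-dimensional subspace, plus small noise.
rng = np.random.default_rng(1)
basis = rng.standard_normal((3, 50))
coeffs = rng.standard_normal((100, 3))
data = coeffs @ basis + 1e-3 * rng.standard_normal((100, 50))

# Principal component compression: keep only the k leading components.
mean = data.mean(axis=0)
U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
k = 3
compressed = (data - mean) @ Vt[:k].T        # 100 x 3 instead of 100 x 50
restored = compressed @ Vt[:k] + mean

rel_err = np.linalg.norm(restored - data) / np.linalg.norm(data)
```

    The discarded components carry mostly noise, which is why the abstract notes that compression can actually reduce noise in the interferogram and spectral domains.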

  19. High resolution and image processing of otoconia matrix

    NASA Technical Reports Server (NTRS)

    Fermin, C. D.

    1993-01-01

    This study was designed to investigate patterns of fibril organization in histochemically stained otoconia. Transmission electron microscopy and video imaging were used. These data indicate that otoconia of the chick (Gallus domesticus) inner ear may have central cores in vivo. The data also show that the ultrastructural organization of fibrils fixed with aldehydes and histochemical stains follows trajectories that conform to the hexagonal shape of otoconia. These changes in direction may contribute to the formation of a central core. The existence of central cores is important for the in vivo buoyancy of otoconia. Packing of fibrils is tighter in phosphotungstic acid (PTA)-stained otoconia than with other histochemical stains, which usually produce looser packing of fibrils and a seemingly larger central core. TEM of tilted and untilted material showed that turning of fibrils occurs at the points where the face angles of otoconia form and where central cores exist. Video image processing allowed reconstruction of a template which, if assumed to repeat and change trajectories, would fit the pattern of fibrils seen in fixed otoconia. Since it is highly unlikely that aldehyde primary fixation or PTA staining caused such a drastic change in the direction of fibrils, the template derived from these results may closely approximate patterns of otoconial fibril packing in vivo. However, if the above is correct, the perfect crystallographic diffraction pattern of unfixed otoconia does not correspond to the patterns of fixed fibrils.

  20. Luminescent Silica Nanoparticles Featuring Collective Processes for Optical Imaging.

    PubMed

    Rampazzo, Enrico; Prodi, Luca; Petrizza, Luca; Zaccheroni, Nelsi

    2016-01-01

    The field of nanoparticles has successfully merged with imaging to optimize contrast agents for many detection techniques. This combination has yielded highly positive results, especially in optical and magnetic imaging, leading to diagnostic methods that are now close to clinical use. Biological sciences have been taking advantage of luminescent labels for many years and the development of luminescent nanoprobes has helped definitively in making the crucial step forward in in vivo applications. To this end, suitable probes should present excitation and emission within the NIR region where tissues have minimal absorbance. Among several nanomaterials engineered with this aim, including noble metal, lanthanide, and carbon nanoparticles and quantum dots, we have focused our attention here on luminescent silica nanoparticles. Many interesting results have already been obtained with nanoparticles containing only one kind of photophysically active moiety. However, the presence of different emitting species in a single nanoparticle can lead to diverse properties including cooperative behaviours. We present here the state of the art in the field of silica luminescent nanoparticles exploiting collective processes to obtain ultra-bright units suitable as contrast agents in optical imaging and optical sensing and for other high sensitivity applications.