Science.gov

Sample records for adaptive image processing

  1. Three-dimensional region-based adaptive image processing techniques for volume visualization applications

    NASA Astrophysics Data System (ADS)

    de Deus Lopes, Roseli; Zuffo, Marcelo K.; Rangayyan, Rangaraj M.

    1996-04-01

    Recent advances in three-dimensional (3D) imaging techniques have expanded the scope of applications of volume visualization to many areas such as medical imaging, scientific visualization, robotic vision, and virtual reality. Advanced image filtering, enhancement, and analysis techniques are being developed in parallel in the field of digital image processing. Although the fields cited have many aspects in common, it appears that many of the latest developments in image processing are not being applied to the fullest extent possible in visualization. It is common to encounter rather simple and elementary image preprocessing operations in visualization and 3D imaging applications. The purpose of this paper is to present an overview of selected topics from recent developments in adaptive image processing and to demonstrate or suggest their applications in volume visualization. The techniques include adaptive noise removal; improvement of contrast and visibility of objects; space-variant deblurring and restoration; segmentation-based lossless coding for data compression; and perception-based measures for analysis, enhancement, and rendering. The techniques share the common base of identification of adaptive regions by region growing, which lends them a perceptual basis related to the human visual system. Preliminary results obtained with some of the techniques implemented so far are used to illustrate the concepts involved and to indicate the potential performance capabilities of the methods.

  2. The Information Adaptive System - A demonstration of real-time onboard image processing

    NASA Technical Reports Server (NTRS)

    Thomas, G. L.; Carney, P. C.; Meredith, B. D.

    1983-01-01

    The Information Adaptive System (IAS) program has the objective of developing and demonstrating, at the brassboard level, an architecture which can be used to perform advanced signal processing functions on board the spacecraft. Particular attention is given to the processing of high-speed multispectral imaging data in real time and to the development of advanced technology which could be employed for future space applications. An IAS functional description is provided, and questions of radiometric correction are examined. Problems of data packetization are considered along with data selection, a distortion coefficient processor, an adaptive system controller, an image processing demonstration system, a sensor simulator and output data buffer, a test support and demonstration controller, and IAS demonstration operating modes.

  3. An adaptive image segmentation process for the classification of lung biopsy images

    NASA Astrophysics Data System (ADS)

    McKee, Daniel W.; Land, Walker H., Jr.; Zhukov, Tatyana; Song, Dansheng; Qian, Wei

    2006-03-01

    The purpose of this study was to develop a computer-based second opinion diagnostic tool that could read microscope images of lung tissue and classify the tissue sample as normal or cancerous. This problem can be broken down into three areas: segmentation, feature extraction and measurement, and classification. We introduce a kernel-based extension of fuzzy c-means to provide a coarse initial segmentation, with heuristically-based mechanisms to improve the accuracy of the segmentation. The segmented image is then processed to extract and quantify features. Finally, the measured features are used by a Support Vector Machine (SVM) to classify the tissue sample. The performance of this approach was tested using a database of 85 images collected at the Moffitt Cancer Center and Research Institute. These images represent a wide variety of normal lung tissue samples, as well as multiple types of lung cancer. When used with a subset of the data containing images from the normal and adenocarcinoma classes, we were able to correctly classify 78% of the images, with an area under the ROC curve (A_z) of 0.758.

  4. An adaptive threshold based image processing technique for improved glaucoma detection and classification.

    PubMed

    Issac, Ashish; Partha Sarathi, M; Dutta, Malay Kishore

    2015-11-01

    Glaucoma is an optic neuropathy and one of the main causes of permanent blindness worldwide. This paper presents an automatic image-processing-based method for detection of glaucoma from digital fundus images. In the proposed work, discriminatory parameters of glaucoma, such as the cup-to-disc ratio (CDR), the neuro-retinal rim (NRR) area, and blood vessels in different regions of the optic disc, are used as features and fed as inputs to learning algorithms for glaucoma diagnosis. These features, which show discriminatory changes with the occurrence of glaucoma, are strategically used for training the classifiers to improve the accuracy of identification. The segmentation of the optic disc and cup is based on an adaptive threshold of the pixel intensities lying in the optic nerve head region. Unlike existing methods, the proposed algorithm uses an adaptive threshold derived from local features of the fundus image for segmentation of the optic cup and optic disc, making it invariant to image quality and noise content, which may lead to wider acceptability. The experimental results indicate that such features are more significant than the statistical or textural features considered in existing works. The proposed work achieves an accuracy of 94.11% with a sensitivity of 100%. A comparison with existing methods indicates that the proposed approach improves the accuracy of glaucoma classification from digital fundus images, which may be considered clinically significant. PMID:26321351
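
    The abstract does not spell out the thresholding rule, so the sketch below is only a minimal illustration of the general idea it describes: derive image-specific thresholds from the intensity statistics of the optic nerve head region, segment disc and cup, and compute the cup-to-disc ratio (CDR) as a classifier feature. The percentile thresholds, function names, and synthetic data are illustrative assumptions, not the authors' method.

```python
import numpy as np

def segment_disc_and_cup(onh_region, disc_pct=85.0, cup_pct=97.0):
    """Toy adaptive segmentation of optic disc and cup.

    onh_region : 2-D float array of grayscale intensities of the optic
                 nerve head (ONH) region of a fundus image.
    The thresholds adapt to each image because they are percentiles of
    the local intensity distribution rather than fixed global values.
    """
    disc_thr = np.percentile(onh_region, disc_pct)   # bright disc pixels
    cup_thr = np.percentile(onh_region, cup_pct)     # brightest cup pixels
    return onh_region >= disc_thr, onh_region >= cup_thr

def cup_to_disc_ratio(disc_mask, cup_mask):
    """Area-based CDR feature that would feed the classifier."""
    disc_area = disc_mask.sum()
    return cup_mask.sum() / disc_area if disc_area else 0.0

# Example with synthetic data standing in for a cropped ONH region.
rng = np.random.default_rng(0)
onh = rng.normal(0.4, 0.1, (128, 128))
onh[40:90, 40:90] += 0.3          # bright "disc"
onh[55:75, 55:75] += 0.2          # even brighter "cup"
disc, cup = segment_disc_and_cup(onh)
print("CDR =", round(cup_to_disc_ratio(disc, cup), 3))
```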

  5. Automatic ultrasonic imaging system with adaptive-learning-network signal-processing techniques

    SciTech Connect

    O'Brien, L.J.; Aravanis, N.A.; Gouge, J.R. Jr.; Mucciardi, A.N.; Lemon, D.K.; Skorpik, J.R.

    1982-04-01

    A conventional pulse-echo imaging system has been modified to operate with a linear ultrasonic array and associated digital electronics to collect data from a series of defects fabricated in aircraft quality steel blocks. A thorough analysis of the defect responses recorded with this modified system has shown that considerable improvements over conventional imaging approaches can be obtained in the crucial areas of defect detection and characterization. A combination of advanced signal processing concepts with the Adaptive Learning Network (ALN) methodology forms the basis for these improvements. Use of established signal processing algorithms such as temporal and spatial beam-forming in concert with a sophisticated detector has provided a reliable defect detection scheme which can be implemented in a microprocessor-based system to operate in an automatic mode.

  6. Multispectral image sharpening using a shift-invariant wavelet transform and adaptive processing of multiresolution edges

    USGS Publications Warehouse

    Lemeshewsky, G.P.

    2002-01-01

    Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) data that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between the sharpened and reference images, was improved when sharpened subband data were used with the edge correlation.

  7. Ultrasound nondestructive evaluation (NDE) imaging with transducer arrays and adaptive processing.

    PubMed

    Li, Minghui; Hayward, Gordon

    2012-01-01

    This paper addresses the challenging problem of ultrasonic non-destructive evaluation (NDE) imaging with adaptive transducer arrays. In NDE applications, most materials used extensively in industry and civil engineering, such as concrete, stainless steel, and carbon-reinforced composites, exhibit heterogeneous internal structure. When inspected using ultrasound, the signals from defects are significantly corrupted by echoes from randomly distributed scatterers; even defects that are much larger than these random reflectors are difficult to detect with the conventional delay-and-sum operation. We propose to apply adaptive beamforming to the received data samples to reduce the interference and clutter noise. Beamforming manipulates the array beam pattern by appropriately weighting the per-element delayed data samples prior to summing them. The adaptive weights are computed from a statistical analysis of the data samples. This delay-weight-and-sum process can be viewed as applying a lateral spatial filter to the signals across the probe aperture. Simulations show that when adaptive beamforming is applied, the clutter noise is reduced by more than 30 dB while the lateral resolution is simultaneously enhanced. In experiments inspecting a steel block with side-drilled holes, good quantitative agreement with simulation results is demonstrated. PMID:22368457
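
    The abstract describes computing adaptive per-element weights from the statistics of the delayed data before summation. A minimal sketch of that delay-weight-and-sum idea is given below, using a Capon/MVDR-style weight derived from the sample covariance matrix; the array size, diagonal loading, and synthetic clutter are illustrative assumptions and the authors' exact weight computation may differ.

```python
import numpy as np

def adaptive_weights(delayed, diag_load=1e-3):
    """Capon/MVDR-style weights from already-delayed element signals.

    delayed : (n_elements, n_samples) array after the focusing delays,
              so the desired echo adds coherently with a steering
              vector of ones.
    """
    n_el = delayed.shape[0]
    R = delayed @ delayed.conj().T / delayed.shape[1]         # sample covariance
    R += diag_load * np.trace(R).real / n_el * np.eye(n_el)   # diagonal loading
    a = np.ones(n_el)                                         # post-delay steering vector
    Rinv_a = np.linalg.solve(R, a)
    return Rinv_a / (a.conj() @ Rinv_a)

def delay_weight_and_sum(delayed):
    """Adaptive alternative to plain delay-and-sum: weight, then sum."""
    return adaptive_weights(delayed).conj() @ delayed

# Synthetic example: a coherent "defect" echo plus uncorrelated clutter per element.
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 5e6 * np.arange(256) / 1e8)
data = signal[None, :] + 0.8 * rng.standard_normal((16, 256))
das = data.mean(axis=0)                       # conventional delay-and-sum
mvdr = delay_weight_and_sum(data)             # adaptive delay-weight-and-sum
print("DAS residual:", np.std(das - signal), "adaptive residual:", np.std(mvdr - signal))
```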

  8. A NOISE ADAPTIVE FUZZY EQUALIZATION METHOD FOR PROCESSING SOLAR EXTREME ULTRAVIOLET IMAGES

    SciTech Connect

    Druckmueller, M.

    2013-08-15

    A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.

  9. Analysis of adaptive forward-backward diffusion flows with applications in image processing

    NASA Astrophysics Data System (ADS)

    Surya Prasath, V. B.; Urbano, José Miguel; Vorotnikov, Dmitry

    2015-10-01

    The nonlinear diffusion model introduced by Perona and Malik (1990 IEEE Trans. Pattern Anal. Mach. Intell. 12 629-39) is well suited to preserving salient edges while restoring noisy images. This model overcomes the well-known edge-smearing effects of the heat equation by using a gradient-dependent diffusion function. Despite providing better denoising results, the analysis of the PM scheme is difficult due to the forward-backward nature of the diffusion flow. We study a related adaptive forward-backward diffusion equation which uses a mollified inverse gradient term engrafted in the diffusion term of a general nonlinear parabolic equation. We prove a series of existence, uniqueness and regularity results for viscosity, weak and dissipative solutions of such forward-backward diffusion flows. In particular, we introduce a novel functional framework for the well-posedness of flows of total variation type. A set of synthetic and real image processing examples is used to illustrate the properties and advantages of the proposed adaptive forward-backward diffusion flows.
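
    For reference, here is a minimal explicit-scheme sketch of the classic Perona-Malik model the abstract builds on, with the standard diffusivity g(s) = 1/(1 + (s/kappa)^2) and periodic boundary handling via np.roll for brevity. It is not the mollified forward-backward flow analysed in the paper; kappa, the step size, and the iteration count are textbook illustrative choices.

```python
import numpy as np

def perona_malik(img, n_iter=50, kappa=0.1, dt=0.2):
    """Explicit Perona-Malik diffusion.

    Flat regions diffuse (smooth); where the local gradient exceeds kappa
    the diffusivity drops, which preserves salient edges.
    """
    u = img.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    for _ in range(n_iter):
        # One-sided differences to the four neighbours (periodic borders).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Denoise a noisy step edge and compare mean absolute errors.
rng = np.random.default_rng(2)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
print(np.abs(noisy - clean).mean(), "->", np.abs(perona_malik(noisy) - clean).mean())
```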

  10. Mass Detection in Mammographic Images Using Wavelet Processing and Adaptive Threshold Technique.

    PubMed

    Vikhe, P S; Thool, V R

    2016-04-01

    Detection of masses in mammograms for early diagnosis of breast cancer plays a significant role in reducing the mortality rate. However, in some cases, screening for masses is a difficult task for the radiologist due to variation in contrast, fuzzy edges, and noisy mammograms. Masses and micro-calcifications are the distinctive signs for diagnosis of breast cancer. This paper presents a method for mass enhancement using a piecewise linear operator in combination with wavelet processing of mammographic images. The method includes artifact suppression and pectoral muscle removal based on morphological operations. Finally, mass segmentation using an adaptive threshold technique is carried out to separate the mass from the background. The proposed method has been tested on 130 (45 + 85) images, with 90.9 and 91% True Positive Fraction (TPF) at 2.35 and 2.1 average False Positives per Image (FP/I), from two different databases, namely the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM). The obtained results show that the proposed technique improves diagnosis in early breast cancer detection. PMID:26811073

  11. Application of an adaptive plan to the configuration of nonlinear image-processing algorithms

    NASA Astrophysics Data System (ADS)

    Chu, Chee-Hung H.

    1990-07-01

    The application of an adaptive plan to the design of a class of nonlinear digital image processing operators known as stack filters is presented in this paper. The adaptive plan is based on the mechanics found in genetics and natural selection. Such learning mechanisms have become known as genetic algorithms. A stack filter is characterized by the coefficients of its underlying positive Boolean function. This set of coefficients constitutes a binary string, referred to as a chromosome in a genetic algorithm, that represents that particular filter configuration. A fitness value for each chromosome is computed based on the performance of the associated filter in specific tasks such as noise suppression. A population of chromosomes is maintained by the genetic algorithm, and new generations are formed by selecting mating pairs based on their fitness values. Genetic operators such as crossover or mutation are applied to the mating pairs to form offspring. By exchanging substrings of the two parent chromosomes, the crossover operator can bring together different blocks of genes that individually give good performance into one chromosome that yields the best performance. Empirical results show that this method is capable of configuring stack filters that are effective in impulsive noise suppression.
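
    The abstract outlines a standard genetic algorithm over binary chromosomes (the coefficients of the filter's positive Boolean function). A generic sketch of that loop follows, with fitness-proportional mate selection, single-point crossover, and bit-flip mutation; the fitness function here is a stand-in that simply counts ones, not an actual stack-filter noise-suppression score, and all rates and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(chromosome):
    # Placeholder: in the paper, fitness would come from running the stack
    # filter encoded by this bit string on a noisy test image.
    return chromosome.sum()

def evolve(pop, n_gen=50, p_cross=0.8, p_mut=0.01):
    for _ in range(n_gen):
        scores = np.array([fitness(c) for c in pop], dtype=float)
        probs = scores / scores.sum()
        new_pop = []
        while len(new_pop) < len(pop):
            # Fitness-proportional selection of a mating pair.
            i, j = rng.choice(len(pop), size=2, p=probs)
            a, b = pop[i].copy(), pop[j].copy()
            if rng.random() < p_cross:          # single-point crossover
                cut = rng.integers(1, len(a))
                a[cut:], b[cut:] = pop[j][cut:], pop[i][cut:]
            for child in (a, b):                # bit-flip mutation
                flip = rng.random(len(child)) < p_mut
                child[flip] ^= 1
            new_pop += [a, b]
        pop = np.array(new_pop[:len(pop)])
    return pop

population = rng.integers(0, 2, size=(20, 32), dtype=np.uint8)
best = max(evolve(population), key=fitness)
print("best fitness:", fitness(best))
```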

  12. Large-scale analysis of high-speed atomic force microscopy data sets using adaptive image processing

    PubMed Central

    Erickson, Blake W; Coquoz, Séverine; Adams, Jonathan D; Burns, Daniel J

    2012-01-01

    Summary Modern high-speed atomic force microscopes generate significant quantities of data in a short amount of time. Each image in the sequence has to be processed quickly and accurately in order to obtain a true representation of the sample and its changes over time. This paper presents an automated, adaptive algorithm for the required processing of AFM images. The algorithm adaptively corrects for both common one-dimensional distortions as well as the most common two-dimensional distortions. This method uses an iterative thresholded processing algorithm for rapid and accurate separation of background and surface topography. This separation prevents artificial bias from topographic features and ensures the best possible coherence between the different images in a sequence. This method is equally applicable to all channels of AFM data, and can process images in seconds. PMID:23213638
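
    The exact iterative thresholding used by the authors is not specified in the abstract; the sketch below only illustrates the common approach it describes: fit and subtract a per-scan-line polynomial (the one-dimensional distortion), estimate the background by thresholding, and refit using only background pixels so topographic features do not bias the correction. The polynomial order, threshold rule, and iteration count are illustrative assumptions.

```python
import numpy as np

def flatten_afm(img, order=1, n_iter=3):
    """Line-wise polynomial flattening with iterative background masking."""
    out = img.astype(float).copy()
    x = np.arange(img.shape[1])
    mask = np.ones_like(out, dtype=bool)        # start: treat everything as background
    for _ in range(n_iter):
        for r in range(out.shape[0]):
            bg = mask[r]
            if bg.sum() > order + 1:
                coeff = np.polyfit(x[bg], img[r, bg].astype(float), order)
                out[r] = img[r] - np.polyval(coeff, x)
        # Re-estimate background: pixels near the median are "flat";
        # tall features are excluded from the next round of fits.
        thr = np.median(out) + 2.0 * np.std(out)
        mask = out < thr
    return out

# Synthetic scan: a different tilt on every line plus one raised feature.
rng = np.random.default_rng(4)
rows, cols = 64, 256
scan = np.outer(rng.normal(0, 0.05, rows), np.arange(cols))
scan[20:40, 100:150] += 5.0
flat = flatten_afm(scan)
print("residual slope of line 0:", abs(np.polyfit(np.arange(cols), flat[0], 1)[0]))
```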

  13. Adaptation Duration Dissociates Category-, Image-, and Person-Specific Processes on Face-Evoked Event-Related Potentials

    PubMed Central

    Zimmer, Márta; Zbanţ, Adriana; Németh, Kornél; Kovács, Gyula

    2015-01-01

    Several studies have demonstrated that face perception is biased by the prior presentation of another face, a phenomenon termed the face-related after-effect (FAE). The FAE is linked to a neural signal reduction in occipito-temporal areas and can be observed in the amplitude modulation of early event-related potential (ERP) components. Recently, macaque single-cell recording studies suggested that manipulating the duration of the adaptor makes selective adaptation of different visual motion processing steps possible. To date, however, only a few studies have directly tested the effects of adaptor duration on the electrophysiological correlates of human face processing. The goal of the current study was to test the effect of adaptor duration on the image-, identity-, and generic category-specific face processing steps. To this end, in a two-alternative forced choice familiarity decision task we used five adaptor durations (ranging from 200 to 5000 ms) and four adaptor categories: adaptor and test were identical images (Repetition Suppression, RS); adaptor and test were different images of the Same Identity (SameID); adaptor and test images depicted Different Identities (DiffID); the adaptor was a Fourier phase-randomized image (No). Behaviorally, a strong priming effect was observed in both accuracy and response times for RS compared with both DiffID and No. The electrophysiological results suggest that rapid adaptation leads to a category-specific modulation of P100, N170, and N250. In addition, both identity- and image-specific processes affected the N250 component during rapid adaptation. On the other hand, prolonged (5000 ms) adaptation enhanced and extended category-specific adaptation processes over all tested ERP components. Additionally, prolonged adaptation led to the emergence of image- and identity-specific modulations on the N170 and P2 components as well. In other words, there was a clear dissociation among category-, identity-, and image

  14. Adaptive Image Processing Methods for Improving Contaminant Detection Accuracy on Poultry Carcasses

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A real-time multispectral imaging system has been demonstrated as a science-based tool for fecal and ingesta contaminant detection during poultry processing. In order to implement this imaging system in the commercial poultry processing industry, the false positives must be removed. For doi...

  15. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate the characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, and by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. PMID:25463325
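
    The authors' robust local-regression leveling (and Gwyddion's tooling) is not reproduced here; the sketch below shows the simpler core idea the abstract describes: fit a smooth trend, here just a tilt plane, with iteratively reweighted least squares so that features and outliers are automatically downweighted, then subtract it. The weighting function, number of iterations, and synthetic data are illustrative assumptions.

```python
import numpy as np

def robust_plane_level(img, n_iter=5):
    """Remove a tilt plane fitted by iteratively reweighted least squares."""
    rows, cols = img.shape
    yy, xx = np.mgrid[0:rows, 0:cols]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(img.size)])
    z = img.ravel().astype(float)
    w = np.ones_like(z)
    for _ in range(n_iter):
        # Weighted least-squares plane fit (weights applied to rows of A and z).
        coeff, *_ = np.linalg.lstsq(A * w[:, None], z * w, rcond=None)
        resid = z - A @ coeff
        scale = 1.4826 * np.median(np.abs(resid)) + 1e-12   # robust scale (MAD)
        # Tukey-style downweighting: features/outliers barely influence the fit.
        w = np.clip(1.0 - (resid / (4.0 * scale)) ** 2, 0.0, 1.0)
    return (z - A @ coeff).reshape(img.shape)

# Tilted background plus a tall feature that should not bias the leveling.
yy, xx = np.mgrid[0:128, 0:128]
img = 0.01 * xx + 0.02 * yy
img[40:60, 40:60] += 3.0
print("levelled background std:", robust_plane_level(img)[0:20, 0:20].std())
```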

  16. Adaptive optics microscopy enhances image quality in deep layers of CLARITY processed brains of YFP-H mice

    NASA Astrophysics Data System (ADS)

    Reinig, Marc R.; Novack, Samuel W.; Tao, Xiaodong; Ermini, Florian; Bentolila, Laurent A.; Roberts, Dustin G.; MacKenzie-Graham, Allan; Godshalk, S. E.; Raven, M. A.; Kubby, Joel

    2016-03-01

    Optical sectioning of biological tissues has become the method of choice for three-dimensional histological analyses. This is particularly important in the brain, where neurons can extend processes over large distances and whole-brain tracing of neuronal processes is often desirable. To allow deeper optical penetration, which in fixed tissue is limited by scattering and refractive index mismatching, tissue-clearing procedures such as CLARITY have been developed. CLARITY-processed brains have a nearly uniform refractive index, and three-dimensional reconstructions at cellular resolution have been published. However, when imaging deep layers at submicron resolution, some limitations caused by residual refractive index mismatching become apparent, as the resulting wavefront aberrations distort the microscopic image. The wavefront can be corrected with adaptive optics. Here, we investigate the wavefront aberrations at different depths in CLARITY-processed mouse brains and demonstrate the potential of adaptive optics to enable higher resolution and a better signal-to-noise ratio. Our adaptive optics system achieves high-speed measurement and correction of the wavefront with open-loop control using a wavefront sensor and a deformable mirror. Using adaptive-optics-enhanced microscopy, we demonstrate improved image quality, wavefront, point spread function, and signal-to-noise ratio in the cortex of YFP-H mice.

  17. Adaptive Sensor Optimization and Cognitive Image Processing Using Autonomous Optical Neuroprocessors

    SciTech Connect

    CAMERON, STEWART M.

    2001-10-01

    Measurement and signal intelligence demands have created new requirements for information management and interoperability as they affect surveillance and situational awareness. Integration of on-board autonomous learning and adaptive control structures within a remote sensing platform architecture would substantially improve the utility of intelligence collection by facilitating real-time optimization of measurement parameters for variable field conditions. A problem faced by conventional digital implementations of intelligent systems is the conflict between a distributed parallel structure and a sequential serial interface, which functionally degrades bandwidth and response time. In contrast, optically designed networks exhibit the massive parallelism and interconnect density needed to perform complex cognitive functions within a dynamic asynchronous environment. Recently, all-optical self-organizing neural networks exhibiting emergent collective behavior which mimics perception, recognition, association, and contemplative learning have been realized using photorefractive holography in combination with sensory systems for feature maps, threshold decomposition, image enhancement, and nonlinear matched filters. Such hybrid information processors depart from the classical computational paradigm based on analytic rules-based algorithms and instead utilize unsupervised generalization and perceptron-like exploratory or improvisational behaviors to evolve toward optimized solutions. These systems are robust to instrumental systematics or corrupting noise and can enrich knowledge structures by allowing competition between multiple hypotheses. This property enables them to rapidly adapt or self-compensate for dynamic or imprecise conditions which would be unstable using conventional linear control models. By incorporating an intelligent optical neuroprocessor in the back plane of an imaging sensor, a broad class of high-level cognitive image analysis problems including geometric

  18. Adaptive Image Denoising by Mixture Adaptation.

    PubMed

    Luo, Enming; Chan, Stanley H; Nguyen, Truong Q

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms. PMID:27416593

  19. Adaptive Speckle Imaging Interferometry: a new technique for the analysis of microstructure dynamics, drying processes and coating formation.

    PubMed

    Brunel, L; Brun, A; Snabre, P; Cipelletti, L

    2007-11-12

    We describe an extension of multi-speckle diffusing wave spectroscopy adapted to follow the non-stationary microscopic dynamics in drying films and coatings in a very responsive way and with a high dynamic range. We call this technique "Adaptive Speckle Imaging Interferometry" (ASII). We introduce an efficient tool, the inter-image distance, to evaluate the speckle dynamics, and the concept of "speckle rate" (SR, in Hz) to quantify these dynamics. The adaptive algorithm plots a simple kinetics, the time evolution of the SR, providing a non-invasive characterization of drying phenomena. A new commercial instrument called HORUS®, based on ASII and specialized in the analysis of film formation and drying processes, is presented. PMID:19550809

  20. Using adaptive genetic algorithms in the design of morphological filters in textural image processing

    NASA Astrophysics Data System (ADS)

    Li, Wei; Haese-Coat, Veronique; Ronsin, Joseph

    1996-03-01

    An adaptive GA scheme is adopted for the optimal morphological filter design problem. Adaptive crossover and mutation rates, which help the GA avoid premature convergence while still assuring convergence of the program, are successfully used in the optimal morphological filter design procedure. In the string coding step, each string (chromosome) is composed of a structuring element coding chain concatenated with a filter sequence coding chain. In the decoding step, each string is divided into three chains, which are then decoded respectively into one structuring element no larger than 5 by 5 and two concatenated morphological filter operators. The fitness function in the GA is based on the mean-square-error (MSE) criterion. In the string selection step, a stochastic tournament procedure is used to replace the simple roulette wheel program in order to accelerate convergence. The final convergence of our algorithm is reached by a two-step converging strategy. In the presented applications of noise removal from texture images, it is found that with the optimized morphological filter sequences, the obtained MSE values are smaller than those obtained with corresponding non-adaptive morphological filters, and the optimized shapes and orientations of the structuring elements take approximately the same shapes and orientations as those of the image textons.

  1. Passive adaptive imaging through turbulence

    NASA Astrophysics Data System (ADS)

    Tofsted, David

    2016-05-01

    Standard methods for improved imaging system performance under degrading optical turbulence conditions typically involve active adaptive techniques or post-capture image processing. Here, passive adaptive methods are considered where active sources are disallowed, a priori. Theoretical analyses of short-exposure turbulence impacts indicate that varying aperture sizes experience different degrees of turbulence impacts. Smaller apertures often outperform larger aperture systems as turbulence strength increases. This suggests a controllable aperture system is advantageous. In addition, sub-aperture sampling of a set of training images permits the system to sense tilts in different sub-aperture regions through image acquisition and image cross-correlation calculations. A four sub-aperture pattern supports corrections involving five realizable operating modes (beyond tip and tilt) for removing aberrations over an annular pattern. Progress to date will be discussed regarding development and field trials of a prototype system.

  2. A scale-based forward-and-backward diffusion process for adaptive image enhancement and denoising

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Niu, Ruiqing; Zhang, Liangpei; Wu, Ke; Sahli, Hichem

    2011-12-01

    This work presents a scale-based forward-and-backward diffusion (SFABD) scheme. The main idea of this scheme is to perform local adaptive diffusion using local scale information. To this end, we propose a diffusivity function based on the Minimum Reliable Scale (MRS) of Elder and Zucker (IEEE Trans. Pattern Anal. Mach. Intell. 20(7), 699-716, 1998) to detect the details of local structures. The magnitude of the diffusion coefficient at each pixel is determined by taking into account the local property of the image through the scales. A scale-based variable weight is incorporated into the diffusivity function for balancing the forward and backward diffusion. Furthermore, as numerical scheme, we propose a modification of the Perona-Malik scheme (IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 629-639, 1990) by incorporating edge orientations. The article describes the main principles of our method and illustrates image enhancement results on a set of standard images as well as simulated medical images, together with qualitative and quantitative comparisons with a variety of anisotropic diffusion schemes.

  3. Multiresolution stroke sketch adaptive representation and neural network processing system for gray-level image recognition

    NASA Astrophysics Data System (ADS)

    Meystel, Alexander M.; Rybak, Ilya A.; Bhasin, Sanjay

    1992-11-01

    This paper describes a method for multiresolutional representation of gray-level images as hierarchical sets of strokes characterizing the forms of objects with different degrees of generalization depending on the context of the image. This method transforms the original image into a hierarchical graph which allows for efficient coding in order to store, retrieve, and recognize the image. The described method is based upon finding the resolution levels for each image which minimize the computations required. This becomes possible because of the use of a special image representation technique called Multiresolutional Attentional Representation for Recognition (MARR), based upon a feature which the authors call a stroke. This feature turns out to be efficient in the process of finding the appropriate system of resolutions and constructing the relational graph. MARR is formed by a multi-layer neural network with recurrent inhibitory connections between neurons, whose receptive fields are selectively tuned to detect the orientation of local contrasts in parts of the image with an appropriate degree of generalization. This method simulates the 'coarse-to-fine' algorithm which an artist usually uses when making an attentional sketch of real images. The method, algorithms, and neural network architecture in this system can be used in many machine-vision systems with AI properties, in particular robotic vision. We expect that systems with MARR can become a component of intelligent control systems for autonomous robots. Their architectures are mostly multiresolutional and match well with the multiple resolutions of the MARR structure.

  4. Real-time atmospheric imaging and processing with hybrid adaptive optics and hardware accelerated lucky-region fusion (LRF) algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Jony Jiang; Carhart, Gary W.; Beresnev, Leonid A.; Aubailly, Mathieu; Jackson, Christopher R.; Ejzak, Garrett; Kiamilev, Fouad E.

    2014-09-01

    Atmospheric turbulence can significantly degrade the performance of long-range conventional imaging systems and create difficulties for target identification and recognition. Our in-house developed adaptive optics (AO) system, which contains high-performance deformable mirrors (DMs) and a fast stochastic parallel gradient descent (SPGD) control mechanism, allows effective compensation of such turbulence-induced wavefront aberrations and results in significant improvement in image quality. In addition, we developed an advanced digital synthetic imaging and processing technique, "lucky-region" fusion (LRF), to mitigate image degradation over a large field-of-view (FOV). The LRF algorithm extracts sharp regions from each image in a series of short-exposure frames and fuses them into a final improved image. We further implemented this algorithm on a VIRTEX-7 field programmable gate array (FPGA) and achieved real-time video processing. Experiments were performed by combining AO with the hardware-implemented LRF processing technique over a near-horizontal 2.3 km atmospheric propagation path. Our approach can also yield a universal real-time imaging and processing system with a general camera link input, a user controller interface, and a DVI video output.
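
    The FPGA implementation is obviously not reproduced here; the short sketch below only illustrates the core LRF idea: score local sharpness in each short-exposure frame and, for every region, keep the pixels from the frame where that score is highest. The sharpness metric (local variance of the Laplacian), window size, and synthetic frames are illustrative assumptions rather than the authors' exact algorithm.

```python
import numpy as np
from scipy import ndimage

def lucky_region_fusion(frames, win=15):
    """Fuse short-exposure frames by picking the locally sharpest regions."""
    frames = np.asarray(frames, dtype=float)
    sharpness = np.empty_like(frames)
    for k, f in enumerate(frames):
        lap = ndimage.laplace(f)
        # Local variance of the Laplacian as a per-pixel sharpness score.
        mean = ndimage.uniform_filter(lap, win)
        sharpness[k] = ndimage.uniform_filter(lap ** 2, win) - mean ** 2
    best = np.argmax(sharpness, axis=0)          # index of sharpest frame per pixel
    rows, cols = np.indices(best.shape)
    return frames[best, rows, cols]

# Three blurred copies of a target, each "lucky" (sharp) in a different band.
rng = np.random.default_rng(5)
target = rng.random((96, 96))
frames = [ndimage.gaussian_filter(target, 2.0) for _ in range(3)]
for k, f in enumerate(frames):
    f[:, 32 * k:32 * (k + 1)] = target[:, 32 * k:32 * (k + 1)]
fused = lucky_region_fusion(frames)
print("fusion error:", np.abs(fused - target).mean())
```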

  5. NASA End-to-End Data System /NEEDS/ information adaptive system - Performing image processing onboard the spacecraft

    NASA Technical Reports Server (NTRS)

    Kelly, W. L.; Howle, W. M.; Meredith, B. D.

    1980-01-01

    The Information Adaptive System (IAS) is an element of the NASA End-to-End Data System (NEEDS) Phase II and is focused toward onboard image processing. Since the IAS is a data preprocessing system which is closely coupled to the sensor system, it serves as a first step in providing a 'smart' imaging sensor. Some of the functions planned for the IAS include sensor response nonuniformity correction, geometric correction, data set selection, data formatting, packetization, and adaptive system control. The inclusion of these sensor data preprocessing functions onboard the spacecraft will significantly improve the extraction of information from the sensor data in a timely and cost-effective manner and provide the opportunity to design sensor systems which can be reconfigured in near real time for optimum performance. The purpose of this paper is to present the preliminary design of the IAS and the plans for its development.

  6. Imaging an Adapted Dentoalveolar Complex

    PubMed Central

    Herber, Ralf-Peter; Fong, Justine; Lucas, Seth A.; Ho, Sunita P.

    2012-01-01

    Adaptation of a rat dentoalveolar complex was illustrated using various imaging modalities. Micro-X-ray computed tomography for 3D modeling, combined with complementary techniques including image processing, scanning electron microscopy, fluorochrome labeling, conventional histology (H&E, TRAP), and immunohistochemistry (RANKL, OPN), elucidated the dynamic nature of bone, the periodontal ligament space, and cementum in the rat periodontium. Tomography and electron microscopy illustrated structural adaptation of calcified tissues at a higher resolution. Ongoing biomineralization was analyzed using fluorochrome labeling and by evaluating attenuation profiles using virtual sections from 3D tomographies. Osteoclastic distribution as a function of anatomical location was illustrated by combining histology, immunohistochemistry, and tomography. While tomography and SEM captured past resorption-related events, future adaptive changes were deduced by identifying matrix biomolecules using immunohistochemistry. Thus, a dynamic picture of the dentoalveolar complex in rats was illustrated. PMID:22567314

  7. A self-adaptive parameter optimization algorithm in a real-time parallel image processing system.

    PubMed

    Li, Ge; Zhang, Xuehe; Zhao, Jie; Zhang, Hongli; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

    To address the stalemate in which precision, speed, robustness, and other parameters constrain each other in a parallel-processing vision servo system, this paper proposes an adaptive load-capacity-balance servo parameter optimization algorithm (ALBPO) to improve computing precision and to achieve a high detection ratio without reducing the servo cycle. We use load capacity (LC) functions to estimate the load on each processor and then continuously self-adapt toward a balanced status based on the fluctuating LC results; meanwhile, we pick a proper set of target detection and location parameters according to the LC results. Compared with current load-balance algorithms, the algorithm proposed in this paper operates without prior knowledge of the maximum and current loads of the processors, which gives it great extensibility. Simulation results showed that the ALBPO algorithm has great merits in load-balance performance, realizing the optimization of QoS for each processor and fulfilling the balance requirements of servo cycle, precision, and robustness of the parallel-processing vision servo system. PMID:24174920

  8. Adaptive wiener image restoration kernel

    DOEpatents

    Yuan, Ding

    2007-06-05

    A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system's Optical Transfer Function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image with a Wiener restoration kernel.
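
    The patent is summarized very briefly above; as a rough illustration of the stated pipeline (an OTF plus Fourier transforms of noise and image, then restoration with a Wiener kernel), here is a textbook frequency-domain Wiener deconvolution. The fixed noise-to-signal ratio is an assumption; the patented kernel may adapt it to the data.

```python
import numpy as np

def wiener_restore(blurred, otf, nsr=1e-2):
    """Frequency-domain Wiener restoration: W = conj(H) / (|H|^2 + NSR)."""
    G = np.fft.fft2(blurred)
    W = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))

# Build a Gaussian-blur OTF, blur a test image, add noise, restore.
rng = np.random.default_rng(6)
img = np.zeros((128, 128)); img[48:80, 48:80] = 1.0
yy, xx = np.mgrid[-64:64, -64:64]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 2.0 ** 2)); psf /= psf.sum()
otf = np.fft.fft2(np.fft.ifftshift(psf))              # OTF = FFT of the centred PSF
blurred = np.real(np.fft.ifft2(otf * np.fft.fft2(img)))
noisy = blurred + 0.01 * rng.standard_normal(img.shape)
restored = wiener_restore(noisy, otf)
print("blurred err:", np.abs(blurred - img).mean(), "restored err:", np.abs(restored - img).mean())
```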

  9. Image Processing

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Flight Center for use on space shuttle Orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them and downlink images to ground based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs and special effects for movies. As of 1/28/98, company could not be located, therefore contact/product information is no longer valid.

  10. Domain adaptation for microscopy imaging.

    PubMed

    Becker, Carlos; Christoudias, C Mario; Fua, Pascal

    2015-05-01

    Electron and light microscopy imaging can now deliver high-quality image stacks of neural structures. However, the amount of human annotation effort required to analyze them remains a major bottleneck. While machine learning algorithms can be used to help automate this process, they require training data, which is time-consuming to obtain manually, especially in image stacks. Furthermore, due to changing experimental conditions, successive stacks often exhibit differences that are severe enough to make it difficult to use a classifier trained for a specific one on another. This means that this tedious annotation process has to be repeated for each new stack. In this paper, we present a domain adaptation algorithm that addresses this issue by effectively leveraging labeled examples across different acquisitions and significantly reducing the annotation requirements. Our approach can handle complex, nonlinear image feature transformations and scales to large microscopy datasets that often involve high-dimensional feature spaces and large 3D data volumes. We evaluate our approach on four challenging electron and light microscopy applications that exhibit very different image modalities and where annotation is very costly. Across all applications we achieve a significant improvement over the state-of-the-art machine learning methods and demonstrate our ability to greatly reduce human annotation effort. PMID:25474809

  11. Seam tracking with adaptive image capture for fine-tuning of a high power laser welding process

    NASA Astrophysics Data System (ADS)

    Lahdenoja, Olli; Säntti, Tero; Laiho, Mika; Paasio, Ari; Poikonen, Jonne K.

    2015-02-01

    This paper presents the development of methods for real-time fine-tuning of a high power laser welding process of thick steel by using a compact smart camera system. When performing welding in butt-joint configuration, the laser beam's location needs to be adjusted exactly according to the seam line in order to allow the injected energy to be absorbed uniformly into both steel sheets. In this paper, on-line extraction of seam parameters is targeted by taking advantage of a combination of dynamic image intensity compression, image segmentation with a focal-plane processor ASIC, and Hough transform on an associated FPGA. Additional filtering of Hough line candidates based on temporal windowing is further applied to reduce unrealistic frame-to-frame tracking variations. The proposed methods are implemented in Matlab by using image data captured with adaptive integration time. The simulations are performed in a hardware oriented way to allow real-time implementation of the algorithms on the smart camera system.
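
    The smart-camera/FPGA pipeline and the temporal filtering of line candidates are not shown here; the sketch below only illustrates the two central steps named in the abstract: segment the seam in the image, then find the dominant seam line with a Hough transform. The accumulator resolution, the thresholding rule, and the synthetic test image are illustrative assumptions.

```python
import numpy as np

def hough_dominant_line(mask, n_theta=180):
    """Return (rho, theta) of the strongest line in a binary mask."""
    ys, xs = np.nonzero(mask)
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(*mask.shape)))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    # Voting: every foreground pixel contributes one sinusoid in (rho, theta) space.
    rho = np.round(xs[:, None] * np.cos(thetas) + ys[:, None] * np.sin(thetas)).astype(int)
    for t in range(n_theta):
        np.add.at(acc[:, t], rho[:, t] + diag, 1)
    r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return r_idx - diag, thetas[t_idx]

# Synthetic seam: a near-vertical dark gap, segmented by simple thresholding.
img = np.ones((120, 160))
cols = (80 + 0.1 * np.arange(120)).astype(int)
img[np.arange(120), cols] = 0.0                       # the seam line
seam_mask = img < 0.5
rho, theta = hough_dominant_line(seam_mask)
print("seam line: rho=%.1f, theta=%.1f deg" % (rho, np.rad2deg(theta)))
```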

  12. Adaptive compression of image data

    NASA Astrophysics Data System (ADS)

    Hludov, Sergei; Schroeter, Claus; Meinel, Christoph

    1998-09-01

    In this paper we introduce a method for analyzing images, a criterion to differentiate between images, a compression method for medical images in digital form based on classification of the image bit planes, and finally an algorithm for adaptive image compression. The analysis of the image content is based on an evaluation of the relative number and absolute values of the wavelet coefficients. A comparison between the original image and the decoded image is done using a difference criterion calculated from the wavelet coefficients of the original image and of the decoded image at the first and second iteration steps of the wavelet transformation. The adaptive image compression algorithm is based on a classification of digital images into three classes, followed by compression of the image with a suitable compression algorithm. Furthermore, we show that applying these classification rules to DICOM images is a very effective way to perform adaptive compression. The image classification algorithm and the image compression algorithms have been implemented in JAVA.

  13. Retinal Imaging: Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Goncharov, A. S.; Iroshnikov, N. G.; Larichev, Andrey V.

    This chapter describes several factors influencing the performance of ophthalmic diagnostic systems with adaptive optics compensation of human eye aberration. Particular attention is paid to speckle modulation, temporal behavior of aberrations, and anisoplanatic effects. The implementation of a fundus camera with adaptive optics is considered.

  14. Adaptive Iterative Dose Reduction Using Three Dimensional Processing (AIDR3D) Improves Chest CT Image Quality and Reduces Radiation Exposure

    PubMed Central

    Yamashiro, Tsuneo; Miyara, Tetsuhiro; Honda, Osamu; Kamiya, Hisashi; Murata, Kiyoshi; Ohno, Yoshiharu; Tomiyama, Noriyuki; Moriya, Hiroshi; Koyama, Mitsuhiro; Noma, Satoshi; Kamiya, Ayano; Tanaka, Yuko; Murayama, Sadayuki

    2014-01-01

    Objective To assess the advantages of Adaptive Iterative Dose Reduction using Three Dimensional Processing (AIDR3D) for image quality improvement and dose reduction for chest computed tomography (CT). Methods Institutional Review Boards approved this study and informed consent was obtained. Eighty-eight subjects underwent chest CT at five institutions using identical scanners and protocols. During a single visit, each subject was scanned using different tube currents: 240, 120, and 60 mA. Scan data were converted to images using AIDR3D and a conventional reconstruction mode (without AIDR3D). Using a 5-point scale from 1 (non-diagnostic) to 5 (excellent), three blinded observers independently evaluated image quality for three lung zones, four patterns of lung disease (nodule/mass, emphysema, bronchiolitis, and diffuse lung disease), and three mediastinal measurements (small structure visibility, streak artifacts, and shoulder artifacts). Differences in these scores were assessed by Scheffe's test. Results At each tube current, scans using AIDR3D had higher scores than those without AIDR3D, which were significant for lung zones (p<0.0001) and all mediastinal measurements (p<0.01). For lung diseases, significant improvements with AIDR3D were frequently observed at 120 and 60 mA. Scans with AIDR3D at 120 mA had significantly higher scores than those without AIDR3D at 240 mA for lung zones and mediastinal streak artifacts (p<0.0001), and slightly higher or equal scores for all other measurements. Scans with AIDR3D at 60 mA were also judged superior or equivalent to those without AIDR3D at 120 mA. Conclusion For chest CT, AIDR3D provides better image quality and can reduce radiation exposure by 50%. PMID:25153797

  15. Adaptive color image watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, Qiwei

    2008-03-01

    As a major method for intellectual property protection, digital watermarking techniques have been widely studied and used. However, due to the problems of data volume and color shift, watermarking techniques for color images have been less widely studied, although color images are the principal component of multimedia content. Considering the characteristics of the Human Visual System (HVS), an adaptive color image watermarking algorithm is proposed in this paper. In this algorithm, the HSI color model is adopted for both the host and watermark images; the DCT coefficients of the intensity component (I) of the host color image are used for watermark data embedding, and while embedding the watermark, the number of embedded bits is adaptively changed with the complexity of the host image. As for the watermark image, preprocessing is applied first, in which the watermark image is decomposed by a two-level wavelet transformation. At the same time, to enhance the anti-attack ability and security of the watermarking algorithm, the watermark image is scrambled. According to their significance, some watermark bits are selected and others deleted to form the actual embedding data. The experimental results show that the proposed watermarking algorithm is robust to several common attacks and has good perceptual quality at the same time.
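
    The paper's wavelet preprocessing, scrambling, and HVS-adaptive bit allocation are not reproduced; the sketch below only illustrates the central embedding step the abstract describes: hiding watermark bits in DCT coefficients of the intensity channel, here by quantizing one mid-frequency coefficient per block. The block size, coefficient position, and quantization step are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bits(intensity, bits, pos=(3, 4), step=8.0):
    """Embed one bit per 8x8 block by forcing the parity of a quantized
    mid-frequency DCT coefficient of the intensity channel."""
    out = intensity.astype(float).copy()
    h, w = out.shape
    blocks = [(r, c) for r in range(0, h - 7, 8) for c in range(0, w - 7, 8)]
    for (r, c), bit in zip(blocks, bits):
        block = dctn(out[r:r + 8, c:c + 8], norm='ortho')
        q = np.round(block[pos] / step)
        if int(q) % 2 != bit:                 # adjust parity to encode the bit
            q += 1
        block[pos] = q * step
        out[r:r + 8, c:c + 8] = idctn(block, norm='ortho')
    return out

def extract_bits(intensity, n_bits, pos=(3, 4), step=8.0):
    h, w = intensity.shape
    blocks = [(r, c) for r in range(0, h - 7, 8) for c in range(0, w - 7, 8)]
    bits = []
    for r, c in blocks[:n_bits]:
        block = dctn(intensity[r:r + 8, c:c + 8].astype(float), norm='ortho')
        bits.append(int(np.round(block[pos] / step)) % 2)
    return bits

rng = np.random.default_rng(7)
host = rng.integers(0, 256, (64, 64)).astype(float)   # stand-in intensity channel
payload = list(rng.integers(0, 2, 16))
marked = embed_bits(host, payload)
print(extract_bits(marked, 16) == payload)
```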

  16. A local adaptive image descriptor

    NASA Astrophysics Data System (ADS)

    Zahid Ishraque, S. M.; Shoyaib, Mohammad; Abdullah-Al-Wadud, M.; Monirul Hoque, Md; Chae, Oksam

    2013-12-01

    The local binary pattern (LBP) is a robust but computationally simple approach to texture analysis. However, LBP performs poorly in the presence of noise and large illumination variation. Thus, a local adaptive image descriptor, termed LAID, is introduced in this work. It is a ternary pattern and is able to generate persistent codes to represent microtextures in a given image, especially in noisy conditions. It can also generate stable texture codes when pixel intensities change abruptly due to illumination changes. Experimental results show the superiority of the proposed method over other state-of-the-art methods.
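
    The exact LAID coding rule is not given in the abstract; the sketch below shows the family it belongs to: a local ternary pattern, i.e. LBP with a tolerance band around the centre pixel so small noise-induced intensity changes do not flip the code. The 8-neighbour, radius-1 neighbourhood and the threshold t are standard illustrative choices, not the authors' descriptor.

```python
import numpy as np

def local_ternary_pattern(img, t=5):
    """Encode each pixel's 8-neighbourhood as two binary maps: 'upper'
    marks neighbours clearly brighter than the centre, 'lower' marks
    neighbours clearly darker; together they form a ternary pattern."""
    img = img.astype(int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    upper = np.zeros((h, w), dtype=int)
    lower = np.zeros((h, w), dtype=int)
    centre = img[1:-1, 1:-1]
    for k, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        upper[1:-1, 1:-1] += (neigh >= centre + t) * (1 << k)
        lower[1:-1, 1:-1] += (neigh <= centre - t) * (1 << k)
    return upper, lower

rng = np.random.default_rng(8)
patch = rng.integers(0, 256, (32, 32))
u, l = local_ternary_pattern(patch)
# Histograms of the two maps form the texture descriptor for a region.
descriptor = np.concatenate([np.bincount(u.ravel(), minlength=256),
                             np.bincount(l.ravel(), minlength=256)])
print(descriptor.shape)
```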

  17. Preliminary images from an adaptive imaging system.

    PubMed

    Griffiths, J A; Metaxas, M G; Pani, S; Schulerud, H; Esbrand, C; Royle, G J; Price, B; Rokvic, T; Longo, R; Asimidis, A; Bletsas, E; Cavouras, D; Fant, A; Gasiorek, P; Georgiou, H; Hall, G; Jones, J; Leaver, J; Li, G; Machin, D; Manthos, N; Matheson, J; Noy, M; Ostby, J M; Psomadellis, F; van der Stelt, P F; Theodoridis, S; Triantis, F; Turchetta, R; Venanzi, C; Speller, R D

    2008-06-01

    I-ImaS (Intelligent Imaging Sensors) is a European project aiming to produce real-time adaptive X-ray imaging systems using Monolithic Active Pixel Sensors (MAPS) to create images with maximum diagnostic information within given dose constraints. Initial systems concentrate on mammography and cephalography. In our system, the exposure in each image region is optimised, and the beam intensity is a function of tissue thickness and attenuation, as well as of local physical and statistical parameters in the image. Using a linear array of detectors, the system performs on-line analysis of the image during the scan, followed by optimisation of the X-ray intensity to obtain the maximum diagnostic information from the region of interest while minimising exposure of diagnostically less important regions. This paper presents preliminary images obtained with a small-area CMOS detector developed for this application. Wedge systems were used to modulate the beam intensity during breast and dental imaging using suitable X-ray spectra. The sensitive imaging area of the sensor is 512 x 32 pixels, each 32 x 32 µm² in size. The sensor's X-ray sensitivity was increased by coupling it to a structured CsI(Tl) scintillator. For the I-ImaS prototype, the on-line data analysis and data acquisition control are based on custom-developed electronics using multiple FPGAs. Images of both breast tissue and jaw samples were acquired and different exposure optimisation algorithms applied. Results are very promising, since the average dose has been reduced to around 60% of the dose delivered by conventional imaging systems without any decrease in the visibility of details. PMID:18291697

  18. Image Processing

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A new spinoff product was derived from Geospectra Corporation's expertise in processing LANDSAT data in a software package. Called ATOM (for Automatic Topographic Mapping), it's capable of digitally extracting elevation information from stereo photos taken by spaceborne cameras. ATOM offers a new dimension of realism in applications involving terrain simulations, producing extremely precise maps of an area's elevations at a lower cost than traditional methods. ATOM has a number of applications involving defense training simulations and offers utility in architecture, urban planning, forestry, petroleum and mineral exploration.

  19. Digital image processing.

    PubMed

    Seeram, Euclid

    2004-01-01

    Digital image processing is now commonplace in radiology, nuclear medicine and sonography. This article outlines underlying principles and concepts of digital image processing. After completing this article, readers should be able to: List the limitations of film-based imaging. Identify major components of a digital imaging system. Describe the history and application areas of digital image processing. Discuss image representation and the fundamentals of digital image processing. Outline digital image processing techniques and processing operations used in selected imaging modalities. Explain the basic concepts and visualization tools used in 3-D and virtual reality imaging. Recognize medical imaging informatics as a new area of specialization for radiologic technologists. PMID:15352557

  20. Local intensity adaptive image coding

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1989-01-01

    The objective of preprocessing for machine vision is to extract intrinsic target properties. The most important properties ordinarily are structure and reflectance. Illumination in space, however, is a significant problem, as the extreme range of light intensity, stretching from deep shadow to highly reflective surfaces in direct sunlight, impairs the effectiveness of standard approaches to machine vision. To overcome this critical constraint, an image coding scheme is being investigated which combines local intensity adaptivity, image enhancement, and data compression. It is very effective under the highly variant illumination that can exist within a single frame or field of view, and it is very robust to noise at low illuminations. Some of the theory and salient features of the coding scheme are reviewed, its performance is characterized in a simulated space application, and the research and development activities are described.

  1. Adaptive image segmentation by quantization

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Yun, David Y.

    1992-12-01

    Segmentation of images into texturally homogeneous regions is a fundamental problem in an image understanding system. Most region-oriented segmentation approaches suffer from the problem of having to select different thresholds for different images. In this paper an adaptive image segmentation method based on vector quantization is presented. It automatically segments images without preset thresholds. The approach contains a feature extraction module and a two-layer hierarchical clustering module, with a vector quantizer (VQ) implemented by a competitive learning neural network in the first layer. A near-optimal competitive learning algorithm (NOLA) is employed to train the vector quantizer. NOLA combines the advantages of both the Kohonen self-organizing feature map (KSFM) and the K-means clustering algorithm. After the VQ is trained, the weights of the network and the number of input vectors clustered by each neuron form a 3-D topological feature map with separable hills aggregated by similar vectors. This overcomes the inability of most other clustering algorithms to visualize the geometric properties of data in a high-dimensional space. The second clustering algorithm operates on the feature map instead of the input set itself. Since the number of units in the feature map is much smaller than the number of feature vectors in the feature set, it is easy to check all peaks and find the 'correct' number of clusters, also a key problem in current clustering techniques. In the experiments, we compare our algorithm with the K-means clustering method on a variety of images. The results show that our algorithm achieves better performance.
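
    The NOLA training rule and the second-layer clustering over the feature map are not reproduced here; the sketch below shows the basic building block the abstract describes: a competitive-learning vector quantizer (essentially online k-means) trained on per-pixel feature vectors, after which each pixel is labelled by its winning codebook entry. The feature choice and learning-rate schedule are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def train_vq(features, n_codes=4, n_epochs=5, lr0=0.2, seed=0):
    """Competitive learning: only the winning codebook vector moves."""
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), n_codes, replace=False)].copy()
    for epoch in range(n_epochs):
        lr = lr0 / (1 + epoch)                      # decaying learning rate
        for x in features[rng.permutation(len(features))]:
            winner = np.argmin(((codebook - x) ** 2).sum(axis=1))
            codebook[winner] += lr * (x - codebook[winner])
    return codebook

def segment(features, codebook):
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return np.argmin(d, axis=1)

# Two-texture toy image; per-pixel features = (intensity, local standard deviation).
rng = np.random.default_rng(9)
img = np.concatenate([rng.normal(0.2, 0.02, (64, 64)),
                      rng.normal(0.8, 0.10, (64, 64))], axis=1)
local_std = np.sqrt(np.maximum(
    ndimage.uniform_filter(img ** 2, 5) - ndimage.uniform_filter(img, 5) ** 2, 0))
feats = np.stack([img.ravel(), local_std.ravel()], axis=1)
labels = segment(feats, train_vq(feats, n_codes=2)).reshape(img.shape)
print("pixels per segment:", np.bincount(labels.ravel()))
```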

  2. Approach for reconstructing anisoplanatic adaptive optics images.

    PubMed

    Aubailly, Mathieu; Roggemann, Michael C; Schulz, Timothy J

    2007-08-20

    Atmospheric turbulence corrupts astronomical images formed by ground-based telescopes. Adaptive optics systems allow the effects of turbulence-induced aberrations to be reduced for a narrow field of view corresponding approximately to the isoplanatic angle theta(0). For field angles larger than theta(0), the point spread function (PSF) gradually degrades as the field angle increases. We present a technique to estimate the PSF of an adaptive optics telescope as a function of the field angle, and use this information in a space-varying image reconstruction technique. Simulated anisoplanatic intensity images of a star field are reconstructed by means of a block-processing method using the predicted local PSF. Two methods for image recovery are used: matrix inversion with Tikhonov regularization, and the Lucy-Richardson algorithm. Image reconstruction results obtained using the space-varying predicted PSF are compared to space-invariant deconvolution results obtained using the on-axis PSF. The anisoplanatic reconstruction technique using the predicted PSF provides a significant improvement in the mean squared error between the reconstructed image and the object compared to deconvolution performed using the on-axis PSF. PMID:17712366
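
    As a point of reference for the second recovery method mentioned, here is a minimal Richardson-Lucy (Lucy-Richardson) deconvolution iteration; in the paper it is applied block by block with the locally predicted PSF, which this sketch omits. The Gaussian PSF, iteration count, and synthetic star field are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Multiplicative RL update: est *= conv(image / conv(est, psf), psf_mirrored)."""
    est = np.full_like(image, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode='same')
        ratio = image / np.maximum(conv, 1e-12)
        est *= fftconvolve(ratio, psf_mirror, mode='same')
    return est

# Star-field toy example blurred by a Gaussian PSF.
rng = np.random.default_rng(10)
scene = np.zeros((128, 128))
scene[tuple(rng.integers(10, 118, (2, 20)))] = rng.uniform(0.5, 1.0, 20)
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 1.5 ** 2)); psf /= psf.sum()
blurred = fftconvolve(scene, psf, mode='same') + 1e-3 * rng.standard_normal(scene.shape)
restored = richardson_lucy(blurred, psf)
print("blur err:", np.abs(blurred - scene).mean(), "RL err:", np.abs(restored - scene).mean())
```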

  3. JPEG 2000 coding of image data over adaptive refinement grids

    NASA Astrophysics Data System (ADS)

    Gamito, Manuel N.; Dias, Miguel S.

    2003-06-01

    An extension of the JPEG 2000 standard is presented for non-conventional images resulting from an adaptive subdivision process. Samples generated through adaptive subdivision can have different sizes, depending on the amount of subdivision that was locally introduced in each region of the image. The subdivision principle allows each individual sample to be recursively subdivided into sets of four progressively smaller samples. Image datasets generated through adaptive subdivision find application in computational physics, where simulations of natural processes are often performed over adaptive grids. It is also found that compression gains can be achieved for non-natural imagery, like text or graphics, if it first undergoes an adaptive subdivision process. The representation of adaptive subdivision images is performed by first coding the subdivision structure into the JPEG 2000 bitstream in a lossless manner, followed by the entropy-coded and quantized transform coefficients. Due to the irregular distribution of sample sizes across the image, the wavelet transform must be applied on irregular image subsets that are nested across all the resolution levels. Using the conventional JPEG 2000 coding standard, adaptive subdivision images would first have to be upsampled to the smallest sample size in order to attain a uniform resolution. The proposed method for coding adaptive subdivision images is shown to perform better than conventional JPEG 2000 for medium to high bitrates.

  4. Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation

    SciTech Connect

    Keller, Brad M.; Nathan, Diane L.; Wang Yan; Zheng Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina

    2012-08-15

    Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., 'FOR PROCESSING') and vendor postprocessed (i.e., 'FOR PRESENTATION'), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then

  5. Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation

    PubMed Central

    Keller, Brad M.; Nathan, Diane L.; Wang, Yan; Zheng, Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina

    2012-01-01

    Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., “FOR PROCESSING”) and vendor postprocessed (i.e., “FOR PRESENTATION”), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which
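
    The two-stage idea described above (fuzzy c-means over breast-pixel intensities, then an SVM deciding which clusters are fibroglandular) can be sketched as follows. The fixed cluster count, the cluster-level features, and the availability of pre-labelled training clusters are assumptions made for illustration; the paper derives the number of clusters adaptively from image properties.

```python
import numpy as np
from sklearn.svm import SVC

def fuzzy_cmeans(x, c=4, m=2.0, n_iter=50, seed=0):
    """x: (N,) gray levels of pixels inside the segmented breast region."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(x))            # membership matrix (N, c)
    for _ in range(n_iter):
        um = u ** m
        centers = (um * x[:, None]).sum(0) / um.sum(0)    # weighted cluster centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(1, keepdims=True)                      # renormalize memberships
    return centers, u

def percent_density(x, train_feats, train_labels, c=4):
    """train_feats/train_labels: cluster-level training data (assumed available)."""
    centers, u = fuzzy_cmeans(x, c=c)
    hard = u.argmax(1)
    feats = np.stack([[centers[k], (hard == k).mean()] for k in range(c)])  # mean level, relative area
    dense = SVC(kernel="rbf").fit(train_feats, train_labels).predict(feats)
    return 100.0 * np.isin(hard, np.where(dense == 1)[0]).mean()
```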

  6. Image-Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1986-01-01

    Apple Image-Processing Educator (AIPE) explores ability of microcomputers to provide personalized computer-assisted instruction (CAI) in digital image processing of remotely sensed images. AIPE is "proof-of-concept" system, not polished production system. User-friendly prompts provide access to explanations of common features of digital image processing and of sample programs that implement these features.

  7. Multispectral imaging and image processing

    NASA Astrophysics Data System (ADS)

    Klein, Julie

    2014-02-01

    The color accuracy of conventional RGB cameras is not sufficient for many color-critical applications. One of these applications, namely the measurement of color defects in yarns, is why Prof. Til Aach and the Institute of Image Processing and Computer Vision (RWTH Aachen University, Germany) started off with multispectral imaging. The first acquisition device was a camera using a monochrome sensor and seven bandpass color filters positioned sequentially in front of it. The camera allowed sampling the visible wavelength range more accurately and reconstructing the spectra for each acquired image position. An overview will be given of several optical and imaging aspects of the multispectral camera that have been investigated. For instance, optical aberrations caused by filters and camera lens deteriorate the quality of captured multispectral images. The different aberrations were analyzed thoroughly and compensated based on models for the optical elements and the imaging chain by utilizing image processing. With this compensation, geometrical distortions disappear and sharpness is enhanced, without reducing the color accuracy of multispectral images. Strong foundations in multispectral imaging were laid and a fruitful cooperation was initiated with Prof. Bernhard Hill. Current research topics like stereo multispectral imaging and goniometric multispectral measurements that are further explored with his expertise will also be presented in this work.

  8. Adaptive predictive multiplicative autoregressive model for medical image compression.

    PubMed

    Chen, Z D; Chang, R F; Kuo, W J

    1999-02-01

    In this paper, an adaptive predictive multiplicative autoregressive (APMAR) method is proposed for lossless medical image coding. The adaptive predictor is used for improving the prediction accuracy of encoded image blocks in our proposed method. Each block is first adaptively predicted by one of the seven predictors of the JPEG lossless mode and a local mean predictor. It is clear that the prediction accuracy of an adaptive predictor is better than that of a fixed predictor. Then the residual values are processed by the MAR model with Huffman coding. Comparisons with other methods [MAR, SMAR, adaptive JPEG (AJPEG)] on a series of test images show that our method is suitable for reversible medical image compression. PMID:10232675
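
    A sketch of the per-block predictor switching is shown below: each block is predicted with whichever of the seven JPEG lossless-mode predictors (or a block-local mean) minimizes the absolute residual; the residuals would then feed the MAR model and Huffman coder, which are not shown. The block size and the wrap-around border handling are simplifications.

```python
import numpy as np

def jpeg_predictors(img):
    """Integer approximations of the seven JPEG lossless-mode predictors (np.roll wraps at borders)."""
    a = np.roll(img, 1, axis=1)                            # left neighbour
    b = np.roll(img, 1, axis=0)                            # top neighbour
    c = np.roll(np.roll(img, 1, axis=0), 1, axis=1)        # top-left neighbour
    return [a, b, c, a + b - c, a + (b - c) // 2, b + (a - c) // 2, (a + b) // 2]

def adaptive_residuals(img, block=16):
    img = img.astype(np.int32)
    preds = jpeg_predictors(img)
    out = np.zeros_like(img)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            tile = img[i:i+block, j:j+block]
            cands = [p[i:i+block, j:j+block] for p in preds]
            cands.append(np.full_like(tile, int(tile.mean())))   # block-local mean predictor
            k = int(np.argmin([np.abs(tile - c).sum() for c in cands]))
            out[i:i+block, j:j+block] = tile - cands[k]          # residuals of the best predictor
    return out
```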

  9. Image edge detection based on adaptive lifting scheme

    NASA Astrophysics Data System (ADS)

    Xia, Ping; Xiang, Xuejun; Wan, Junli

    2009-10-01

    Image edges arise where the gray level is not continuous; they are a basic characteristic of image information, and edge detection remains one of the hot topics in image processing. This paper analyzes traditional image edge detection algorithms and their existing problems, and uses adaptive lifting wavelet analysis, in which the predict filter and the update filter are adaptively adjusted according to the local characteristics of the information, thereby achieving an accurate match to the processed information. At the same time, the wavelet edge detection operator is improved, yielding an edge detection algorithm suited to the adaptive lifting scheme, and this method is applied to medical image edge detection. The experimental results show that the proposed algorithm performs better than the traditional algorithm.
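
    A minimal, non-adaptive sketch of the lifting idea follows: the predict step estimates odd samples from their even neighbours, and large prediction residuals (detail coefficients) mark edges along each row. The adaptive choice of predict/update filters from local image structure, which is the paper's contribution, is not reproduced; the Haar-like predictor and the threshold are assumptions.

```python
import numpy as np

def lifting_details_1d(row):
    even, odd = row[0::2].astype(float), row[1::2].astype(float)
    predict = even[:len(odd)]              # Haar-like predict step: odd sample ~ left even sample
    return odd - predict                   # detail (high-pass) coefficients, large at edges

def edge_map(img, thresh=10.0):
    edges = np.zeros(img.shape, dtype=bool)
    for r in range(img.shape[0]):
        d = np.abs(lifting_details_1d(img[r]))
        cols = np.arange(1, img.shape[1], 2)[:len(d)]
        edges[r, cols] = d > thresh        # flag odd-indexed columns with large detail coefficients
    return edges
```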

  10. Study Of Adaptive-Array Signal Processing

    NASA Technical Reports Server (NTRS)

    Satorius, Edgar H.; Griffiths, Lloyd

    1990-01-01

    Report describes study of adaptive signal-processing techniques for suppression of mutual satellite interference in mobile (on ground)/satellite communication system. Presents analyses and numerical simulations of the performances of two approaches to signal processing for suppression of interference. One approach is known as "adaptive side lobe canceling"; the second is called "adaptive temporal processing".

  11. Hyperspectral image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  12. Hyperspectral image processing methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  13. Hybrid image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1990-01-01

    Partly-digital, partly-optical 'hybrid' image processing attempts to use the properties of each domain to synergistic advantage: while Fourier optics furnishes speed, digital processing allows the use of much greater algorithmic complexity. The video-rate image-coordinate transformation used is a critical technology for real-time hybrid image-pattern recognition. Attention is given to the separation of pose variables, image registration, and both single- and multiple-frame registration.

  14. Subroutines For Image Processing

    NASA Technical Reports Server (NTRS)

    Faulcon, Nettie D.; Monteith, James H.; Miller, Keith W.

    1988-01-01

    Image Processing Library computer program, IPLIB, is collection of subroutines facilitating use of COMTAL image-processing system driven by HP 1000 computer. Functions include addition or subtraction of two images with or without scaling, display of color or monochrome images, digitization of image from television camera, display of test pattern, manipulation of bits, and clearing of screen. Provides capability to read or write points, lines, and pixels from image; read or write at location of cursor; and read or write array of integers into COMTAL memory. Written in FORTRAN 77.

  15. Camera lens adapter magnifies image

    NASA Technical Reports Server (NTRS)

    Moffitt, F. L.

    1967-01-01

    Polaroid Land camera with an illuminated 7-power magnifier adapted to the lens, photographs weld flaws. The flaws are located by inspection with a 10-power magnifying glass and then photographed with this device, thus providing immediate pictorial data for use in remedial procedures.

  16. Adaptive enhancement for infrared image using shearlet frame

    NASA Astrophysics Data System (ADS)

    Fan, Zunlin; Bi, Duyan; Gao, Shan; He, Linyuan; Ding, Wenshan

    2016-08-01

    An infrared imaging sensor is sensitive to the variation of imaging environment, which may affect the image quality and blur the edges in an infrared image. Therefore, it is necessary to enhance the infrared image. To improve the image contrast and adaptively enhance image structures, such as edges and details, this paper proposes a novel infrared image enhancement algorithm in the shearlet transform domain. To avoid over-enhancing strong edges and amplifying noise in plateau regions, we linearly enhance the details on the high frequency components based on their structure information, and improve the global image contrast by non-uniform illumination correction on the low frequency component. Then we convert the processed low and high components into the spatial domain to obtain the final enhanced image. Experimental results show that the proposed algorithm could enhance the infrared image details well and produce few noise regions, which is very helpful for target detection and recognition.

  17. Adaptive filtering image preprocessing for smart FPA technology

    NASA Astrophysics Data System (ADS)

    Brooks, Geoffrey W.

    1995-05-01

    This paper discusses two applications of adaptive filters for image processing on parallel architectures. The first, based on the results of previously accomplished work, summarizes the analyses of various adaptive filters implemented for pixel-level image prediction. FIR filters, fixed and adaptive IIR filters, and various variable step size algorithms were compared with a focus on algorithm complexity against the ability to predict future pixel values. A Gaussian smoothing operation with varying spatial and temporal constants was also applied for comparisons of random noise reduction. The second application is a suggestion to use memory-adaptive IIR filters for detecting and tracking motion within an image. Objects within an image are made of edges, or segments, with varying degrees of motion. An application has been previously published that describes FIR filters connecting pixels and using correlations to determine motion and direction. This implementation seems limited to detecting motion coinciding with the FIR filter operation rate and the associated harmonics. Upgrading the FIR structures with adaptive IIR structures can eliminate these limitations. These and any other pixel-level adaptive filtering applications require data memory for filter parameters and some basic computational capability. Tradeoffs have to be made between chip real estate and these desired features. System tradeoffs will also have to be made as to where it makes the most sense to do which level of processing. Although smart pixels may not be ready to implement adaptive filters, applications such as these should give the smart pixel designer some long range goals.

  18. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  19. Apple Image Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1981-01-01

    A software system design is proposed and demonstrated with pilot-project software. The system permits the Apple II microcomputer to be used for personalized computer-assisted instruction in the digital image processing of LANDSAT images. The programs provide data input, menu selection, graphic and hard-copy displays, and both general and detailed instructions. The pilot-project results are considered to be successful indicators of the capabilities and limits of microcomputers for digital image processing education.

  20. Image processing mini manual

    NASA Technical Reports Server (NTRS)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  1. Image super-resolution based on image adaptive decomposition

    NASA Astrophysics Data System (ADS)

    Xie, Qiwei; Wang, Haiyan; Shen, Lijun; Chen, Xi; Han, Hua

    2011-11-01

    In this paper we propose an image super-resolution algorithm based on a Gaussian Mixture Model (GMM) and a new adaptive image decomposition algorithm. The new image decomposition algorithm uses local extrema of the image to extract the cartoon and oscillating parts of the image. In this paper, we first decompose an image into oscillating and piecewise smooth (cartoon) parts, then enlarge the cartoon part with interpolation. Because the GMM accurately characterizes the oscillating part, we specify it as the prior distribution and then formulate the image super-resolution problem as a constrained optimization problem to acquire the enlarged texture part, and finally we obtain a fine result.

  2. Adaptive optics and phase diversity imaging for responsive space applications.

    SciTech Connect

    Smith, Mark William; Wick, David Victor

    2004-11-01

    The combination of phase diversity and adaptive optics offers great flexibility. Phase diverse images can be used to diagnose aberrations and then provide feedback control to the optics to correct the aberrations. Alternatively, phase diversity can be used to partially compensate for aberrations during post-detection image processing. The adaptive optic can produce simple defocus or more complex types of phase diversity. This report presents an analysis, based on numerical simulations, of the efficiency of different modes of phase diversity with respect to compensating for specific aberrations during post-processing. It also comments on the efficiency of post-processing versus direct aberration correction. The construction of a bench top optical system that uses a membrane mirror as an active optic is described. The results of characterization tests performed on the bench top optical system are presented. The work described in this report was conducted to explore the use of adaptive optics and phase diversity imaging for responsive space applications.

  3. Image space adaptive volume rendering

    NASA Astrophysics Data System (ADS)

    Corcoran, Andrew; Dingliana, John

    2012-01-01

    We present a technique for interactive direct volume rendering which provides adaptive sampling at a reduced memory requirement compared to traditional methods. Our technique exploits frame to frame coherence to quickly generate a two-dimensional importance map of the volume which guides sampling rate optimisation and allows us to provide interactive frame rates for user navigation and transfer function changes. In addition our ray casting shader detects any inconsistencies in our two-dimensional map and corrects them on the fly to ensure correct classification of important areas of the volume.

  4. Adaptive optics imaging of the retina

    PubMed Central

    Battu, Rajani; Dabir, Supriya; Khanna, Anjani; Kumar, Anupama Kiran; Roy, Abhijit Sinha

    2014-01-01

    Adaptive optics is a relatively new tool that is available to ophthalmologists for study of cellular level details. In addition to the axial resolution provided by the spectral-domain optical coherence tomography, adaptive optics provides an excellent lateral resolution, enabling visualization of the photoreceptors, blood vessels and details of the optic nerve head. We attempt a mini review of the current role of adaptive optics in retinal imaging. PubMed search was performed with key words Adaptive optics OR Retina OR Retinal imaging. Conference abstracts were searched from the Association for Research in Vision and Ophthalmology (ARVO) and American Academy of Ophthalmology (AAO) meetings. In total, 261 relevant publications and 389 conference abstracts were identified. PMID:24492503

  5. Image-Specific Prior Adaptation for Denoising.

    PubMed

    Lu, Xin; Lin, Zhe; Jin, Hailin; Yang, Jianchao; Wang, James Z

    2015-12-01

    Image priors are essential to many image restoration applications, including denoising, deblurring, and inpainting. Existing methods use either priors from the given image (internal) or priors from a separate collection of images (external). We find through statistical analysis that unifying the internal and external patch priors may yield a better patch prior. We propose a novel prior learning algorithm that combines the strength of both internal and external priors. In particular, we first learn a generic Gaussian mixture model from a collection of training images and then adapt the model to the given image by simultaneously adding additional components and refining the component parameters. We apply this image-specific prior to image denoising. The experimental results show that our approach yields better or competitive denoising results in terms of both the peak signal-to-noise ratio and structural similarity. PMID:26316129

  6. Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS)

    NASA Technical Reports Server (NTRS)

    Masek, Jeffrey G.

    2006-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) project is creating a record of forest disturbance and regrowth for North America from the Landsat satellite record, in support of the carbon modeling activities. LEDAPS relies on the decadal Landsat GeoCover data set supplemented by dense image time series for selected locations. Imagery is first atmospherically corrected to surface reflectance, and then change detection algorithms are used to extract disturbance area, type, and frequency. Reuse of the MODIS Land processing system (MODAPS) architecture allows rapid throughput of over 2200 MSS, TM, and ETM+ scenes. Initial ("Beta") surface reflectance products are currently available for testing, and initial continental disturbance products will be available by the middle of 2006.

  7. Adaptive prediction trees for image compression.

    PubMed

    Robinson, John A

    2006-08-01

    This paper presents a complete general-purpose method for still-image compression called adaptive prediction trees. Efficient lossy and lossless compression of photographs, graphics, textual, and mixed images is achieved by ordering the data in a multicomponent binary pyramid, applying an empirically optimized nonlinear predictor, exploiting structural redundancies between color components, then coding with hex-trees and adaptive runlength/Huffman coders. Color palettization and order statistics prefiltering are applied adaptively as appropriate. Over a diverse image test set, the method outperforms standard lossless and lossy alternatives. The competing lossy alternatives use block transforms and wavelets in well-studied configurations. A major result of this paper is that predictive coding is a viable and sometimes preferable alternative to these methods. PMID:16900671

  8. Image Processing System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.

  9. Visual color image processing

    NASA Astrophysics Data System (ADS)

    Qiu, Guoping; Schaefer, Gerald

    1999-12-01

    In this paper, we propose a color image processing method by combining modern signal processing technique with knowledge about the properties of the human color vision system. Color signals are processed differently according to their visual importance. The emphasis of the technique is on the preservation of total visual quality of the image and simultaneously taking into account computational efficiency. A specific color image enhancement technique, termed Hybrid Vector Median Filtering is presented. Computer simulations have been performed to demonstrate that the new approach is technically sound and results are comparable to or better than traditional methods.

  10. A New Adaptive Image Denoising Method

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    In this paper, a new adaptive image denoising method is proposed that follows the soft-thresholding technique. In our method, a new threshold function is also proposed, which is determined by taking various combinations of the noise level, noise-free signal variance, subband size, and decomposition level. It is simple and adaptive as it depends on data-driven parameter estimation in each subband. The state-of-the-art denoising methods viz. VisuShrink, SureShrink, BayesShrink, WIDNTF and IDTVWT are not able to modify the coefficients in an efficient manner to provide good image quality. Our method removes the noise from the noisy image significantly and provides better visual quality of the image.
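
    A sketch of subband-adaptive soft-thresholding is given below. The BayesShrink-style threshold is used only as a stand-in for the paper's proposed threshold function, which also depends on the subband size and decomposition level; the wavelet choice and level count are assumptions.

```python
import numpy as np
import pywt

def denoise(img, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745       # noise std from the finest diagonal subband
    out = [coeffs[0]]                                         # keep the approximation subband
    for cH, cV, cD in coeffs[1:]:
        bands = []
        for c in (cH, cV, cD):
            sig_var = max(c.var() - sigma**2, 1e-9)           # estimated noise-free signal variance
            t = sigma**2 / np.sqrt(sig_var)                   # BayesShrink-style subband threshold
            bands.append(pywt.threshold(c, t, mode="soft"))   # soft-threshold the detail coefficients
        out.append(tuple(bands))
    return pywt.waverec2(out, wavelet)
```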

  11. Adapting overcomplete wavelet models to natural images

    NASA Astrophysics Data System (ADS)

    Sallee, Phil; Olshausen, Bruno A.

    2003-11-01

    Overcomplete wavelet representations have become increasingly popular for their ability to provide highly sparse and robust descriptions of natural signals. We describe a method for incorporating an overcomplete wavelet representation as part of a statistical model of images which includes a sparse prior distribution over the wavelet coefficients. The wavelet basis functions are parameterized by a small set of 2-D functions. These functions are adapted to maximize the average log-likelihood of the model for a large database of natural images. When adapted to natural images, these functions become selective to different spatial orientations, and they achieve a superior degree of sparsity on natural images as compared with traditional wavelet bases. The learned basis is similar to the Steerable Pyramid basis, and yields slightly higher SNR for the same number of active coefficients. Inference with the learned model is demonstrated for applications such as denoising, with results that compare favorably with other methods.

  12. Investigations in adaptive processing of multispectral data

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.; Horwitz, H. M.

    1973-01-01

    Adaptive data processing procedures are applied to the problem of classifying objects in a scene scanned by multispectral sensor. These procedures show a performance improvement over standard nonadaptive techniques. Some sources of error in classification are identified and those correctable by adaptive processing are discussed. Experiments in adaptation of signature means by decision-directed methods are described. Some of these methods assume correlation between the trajectories of different signature means; for others this assumption is not made.

  13. Meteorological image processing applications

    NASA Technical Reports Server (NTRS)

    Bracken, P. A.; Dalton, J. T.; Hasler, A. F.; Adler, R. F.

    1979-01-01

    Meteorologists at NASA's Goddard Space Flight Center are conducting an extensive program of research in weather and climate related phenomena. This paper focuses on meteorological image processing applications directed toward gaining a detailed understanding of severe weather phenomena. In addition, the paper discusses the ground data handling and image processing systems used at the Goddard Space Flight Center to support severe weather research activities and describes three specific meteorological studies which utilized these facilities.

  14. Methods in Astronomical Image Processing

    NASA Astrophysics Data System (ADS)

    Jörsäter, S.

    A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future

  15. Local adaptive filtering of images corrupted by nonstationary noise

    NASA Astrophysics Data System (ADS)

    Lukin, Vladimir V.; Fevralev, Dmitriy V.; Ponomarenko, Nikolay N.; Pogrebnyak, Oleksiy B.; Egiazarian, Karen O.; Astola, Jaakko T.

    2009-02-01

    In various practical situations of remote sensing image processing it is assumed that noise is nonstationary and no a priori information on the dependence of noise on the local mean or about local properties of the noise statistics is available. It is shown that in such situations it is difficult to find a proper filter for effective image processing, i.e., for noise removal with simultaneous edge/detail preservation. To deal with such images, a local adaptive filter based on the discrete cosine transform in overlapping blocks is proposed. A threshold is set locally based on a noise standard deviation estimate obtained for each block. Several other operations to improve performance of the locally adaptive filter are proposed and studied. The designed filter's effectiveness is demonstrated for simulated data as well as for real-life radar remote sensing and marine polarimetric radar images.
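
    The locally adaptive DCT filtering can be sketched as follows: each overlapping block is hard-thresholded in the DCT domain with a threshold tied to a block-local noise estimate, and overlapping outputs are averaged. The noise estimator and the 2.7-sigma factor below are common choices, not necessarily the authors'.

```python
import numpy as np
from scipy.fft import dctn, idctn

def local_dct_filter(img, block=8, step=4, k=2.7):
    img = img.astype(float)
    acc = np.zeros_like(img)
    wts = np.zeros_like(img)
    for i in range(0, img.shape[0] - block + 1, step):
        for j in range(0, img.shape[1] - block + 1, step):
            coef = dctn(img[i:i+block, j:j+block], norm="ortho")
            # local noise estimate from the high-frequency quadrant of the block spectrum
            sigma = np.median(np.abs(coef[block // 2:, block // 2:])) / 0.6745
            dc = coef[0, 0]
            coef[np.abs(coef) < k * sigma] = 0.0              # hard threshold in the DCT domain
            coef[0, 0] = dc                                   # never suppress the block mean
            acc[i:i+block, j:j+block] += idctn(coef, norm="ortho")
            wts[i:i+block, j:j+block] += 1.0
    # average overlapping estimates; pixels not covered by any full block keep their input value
    return np.where(wts > 0, acc / np.maximum(wts, 1.0), img)
```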

  16. Improvements for Image Compression Using Adaptive Principal Component Extraction (APEX)

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1997-01-01

    The issues of image compression and pattern classification have been a primary focus of researchers among a variety of fields including signal and image processing, pattern recognition, data classification, etc. These issues depend on finding an efficient representation of the source data. In this paper we collate our earlier results where we introduced the application of the Hilbert scan to a principal component algorithm (PCA) with the Adaptive Principal Component Extraction (APEX) neural network model. We apply these techniques to medical imaging, particularly image representation and compression. We apply the Hilbert scan to the APEX algorithm to improve results

  17. Real-time adaptive video image enhancement

    NASA Astrophysics Data System (ADS)

    Garside, John R.; Harrison, Chris G.

    1999-07-01

    As part of a continuing collaboration between the University of Manchester and British Aerospace, a signal processing array has been constructed to demonstrate that it is feasible to compensate a video signal for the degradation caused by atmospheric haze in real-time. Previously reported work has shown good agreement between a simple physical model of light scattering by atmospheric haze and the observed loss of contrast. This model predicts a characteristic relationship between contrast loss in the image and the range from the camera to the scene. For an airborne camera, the slant-range to a point on the ground may be estimated from the airplane's pose, as reported by the inertial navigation system, and the contrast may be obtained from the camera's output. Fusing data from these two streams provides a means of estimating model parameters such as the visibility and the overall illumination of the scene. This knowledge allows the same model to be applied in reverse, thus restoring the contrast lost to atmospheric haze. An efficient approximation of range is vital for a real-time implementation of the method. Preliminary results show that an adaptive approach to fitting the model's parameters, exploiting the temporal correlation between video frames, leads to a robust implementation with a significantly accelerated throughput.
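
    A sketch of the underlying haze-model inversion is shown below: the observed intensity is modelled as scene radiance attenuated by exp(-beta*range) plus airlight, so contrast can be restored once the per-pixel slant range, visibility (beta), and overall illumination (airlight) are estimated. All parameter values here are illustrative; the adaptive, temporally correlated parameter fitting described in the paper is not shown.

```python
import numpy as np

def dehaze(frame, slant_range, beta=2e-4, airlight=0.8):
    """frame: image scaled to [0, 1]; slant_range: per-pixel camera-to-ground range in metres."""
    t = np.exp(-beta * slant_range)                 # transmission from the scattering model
    t = np.clip(t, 0.05, 1.0)                       # avoid amplifying noise at very long range
    restored = (frame - airlight * (1.0 - t)) / t   # invert the haze model to restore contrast
    return np.clip(restored, 0.0, 1.0)
```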

  18. Adaptive contrast imaging: transmit frequency optimization

    NASA Astrophysics Data System (ADS)

    Ménigot, Sébastien; Novell, Anthony; Voicu, Iulian; Bouakaz, Ayache; Girault, Jean-Marc

    2010-01-01

    Introduction: Since the introduction of ultrasound (US) contrast imaging, imaging systems have used a fixed emitting frequency. However it is known that the insonified medium is time-varying and therefore an adapted time-varying excitation is expected. We suggest an adaptive imaging technique which selects the optimal transmit frequency that maximizes the acoustic contrast. Two algorithms have been proposed to find an US excitation for which the frequency was optimal with microbubbles. Methods and Materials: Simulations were carried out for encapsulated microbubbles of 2 microns by considering the modified Rayleigh-Plesset equation for a 2 MHz transmit frequency and for various pressure levels (20 kPa up to 420 kPa). In vitro experiments were carried out using a transducer operating at 2 MHz and using a programmable waveform generator. Contrast agent was then injected into a small container filled with water. Results and discussions: We show through simulations and in vitro experiments that our adaptive imaging technique gives: 1) in the case of simulations, a gain of acoustic contrast which can reach 9 dB compared to the traditional technique without optimization and 2) for in vitro experiments, a gain which can reach 18 dB. There is a non-negligible discrepancy between simulations and experiments. These differences are certainly due to the fact that our simulations do not take into account the diffraction and nonlinear propagation effects. Further optimizations are underway.

  19. Onboard image processing

    NASA Technical Reports Server (NTRS)

    Martin, D. R.; Samulon, A. S.

    1979-01-01

    The possibility of onboard geometric correction of Thematic Mapper type imagery to make possible image registration is considered. Typically, image registration is performed by processing raw image data on the ground. The geometric distortion (e.g., due to variation in spacecraft location and viewing angle) is estimated by using a Kalman filter updated by correlating the received data with a small reference subimage, which has known location. Onboard image processing dictates minimizing the complexity of the distortion estimation while offering the advantages of a real time environment. In keeping with this, the distortion estimation can be replaced by information obtained from the Global Positioning System and from advanced star trackers. Although not as accurate as the conventional ground control point technique, this approach is capable of achieving subpixel registration. Appropriate attitude commands can be used in conjunction with image processing to achieve exact overlap of image frames. The magnitude of the various distortion contributions, the accuracy with which they can be measured in real time, and approaches to onboard correction are investigated.

  20. Optical Profilometers Using Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Hall, Gregory A.; Youngquist, Robert; Mikhael, Wasfy

    2006-01-01

    A method of adaptive signal processing has been proposed as the basis of a new generation of interferometric optical profilometers for measuring surfaces. The proposed profilometers would be portable, hand-held units. Sizes could be thus reduced because the adaptive-signal-processing method would make it possible to substitute lower-power coherent light sources (e.g., laser diodes) for white light sources and would eliminate the need for most of the optical components of current white-light profilometers. The adaptive-signal-processing method would make it possible to attain scanning ranges of the order of decimeters in the proposed profilometers.

  1. Image sets for satellite image processing systems

    NASA Astrophysics Data System (ADS)

    Peterson, Michael R.; Horner, Toby; Temple, Asael

    2011-06-01

    The development of novel image processing algorithms requires a diverse and relevant set of training images to ensure the general applicability of such algorithms for their required tasks. Images must be appropriately chosen for the algorithm's intended applications. Image processing algorithms often employ the discrete wavelet transform (DWT) algorithm to provide efficient compression and near-perfect reconstruction of image data. Defense applications often require the transmission of images and video across noisy or low-bandwidth channels. Unfortunately, the DWT algorithm's performance deteriorates in the presence of noise. Evolutionary algorithms are often able to train image filters that outperform DWT filters in noisy environments. Here, we present and evaluate two image sets suitable for the training of such filters for satellite and unmanned aerial vehicle imagery applications. We demonstrate the use of the first image set as a training platform for evolutionary algorithms that optimize discrete wavelet transform (DWT)-based image transform filters for satellite image compression. We evaluate the suitability of each image as a training image during optimization. Each image is ranked according to its suitability as a training image and its difficulty as a test image. The second image set provides a test-bed for holdout validation of trained image filters. These images are used to independently verify that trained filters will provide strong performance on unseen satellite images. Collectively, these image sets are suitable for the development of image processing algorithms for satellite and reconnaissance imagery applications.

  2. Iterative blind deconvolution of adaptive optics images

    NASA Astrophysics Data System (ADS)

    Liang, Ying; Rao, Changhui; Li, Mei; Geng, Zexun

    2006-04-01

    The adaptive optics (AO) technique has been extensively used for large ground-based optical telescopes to overcome the effect of atmospheric turbulence. But the correction is often partial. An iterative blind deconvolution (IBD) algorithm based on the maximum-likelihood (ML) method is proposed to restore the details of the object image corrected by AO. The IBD algorithm and the procedure are briefly introduced and the experiment results are presented. The results show that the IBD algorithm is efficient for restoring useful high-frequency content of the image.

  3. Adaptive Optics Imaging in Laser Pointer Maculopathy.

    PubMed

    Sheyman, Alan T; Nesper, Peter L; Fawzi, Amani A; Jampol, Lee M

    2016-08-01

    The authors report multimodal imaging including adaptive optics scanning laser ophthalmoscopy (AOSLO) (Apaeros retinal image system AOSLO prototype; Boston Micromachines Corporation, Boston, MA) in a case of previously diagnosed unilateral acute idiopathic maculopathy (UAIM) that demonstrated features of laser pointer maculopathy. The authors also show the adaptive optics images of a laser pointer maculopathy case previously reported. A 15-year-old girl was referred for the evaluation of a maculopathy suspected to be UAIM. The authors reviewed the patient's history and obtained fluorescein angiography, autofluorescence, optical coherence tomography, infrared reflectance, and AOSLO. The time course of disease and clinical examination did not fit with UAIM, but the linear pattern of lesions was suspicious for self-inflicted laser pointer injury. This was confirmed on subsequent questioning of the patient. The presence of linear lesions in the macula that are best highlighted with multimodal imaging techniques should alert the physician to the possibility of laser pointer injury. AOSLO further characterizes photoreceptor damage in this condition. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:782-785.]. PMID:27548458

  4. Image Processing for Teaching.

    ERIC Educational Resources Information Center

    Greenberg, R.; And Others

    1993-01-01

    The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…

  5. Image-Processing Program

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Hull, D. R.

    1994-01-01

    IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are sub-subroutines, also selected via keyboard. The algorithm has possible scientific, industrial, and biomedical applications in the study of flows in materials, the analysis of steels and ores, and pathology, respectively.

  6. Image processing and reconstruction

    SciTech Connect

    Chartrand, Rick

    2012-06-15

    This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

  7. Image processing software for imaging spectrometry

    NASA Technical Reports Server (NTRS)

    Mazer, Alan S.; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    The paper presents a software system, Spectral Analysis Manager (SPAM), which has been specifically designed and implemented to provide the exploratory analysis tools necessary for imaging spectrometer data, using only modest computational resources. The basic design objectives are described as well as the major algorithms designed or adapted for high-dimensional images. Included in a discussion of system implementation are interactive data display, statistical analysis, image segmentation and spectral matching, and mixture analysis.

  8. Theory and experimental study on low-light-level images by adaptive mode filter

    NASA Astrophysics Data System (ADS)

    Bai, Lianfa; Zhang, Baomin; Liu, Yunfen; Chen, Qian

    1996-09-01

    Real-time low-light-level (LLL) image processing technology is an important development subject in the area of LLL night vision. But there is an essential distinction between LLL TV images and ordinary TV images, so conventional digital image processing techniques are not suitable for LLL images. In this paper, the theoretical noise model of an LLL imaging system is described and an LLL image processing system is set up. With regard to the characteristics of LLL images and their noise, a novel noise suppression method, the adaptive mode filter, is presented. The experimental results show that the adaptive mode filter can suppress the sharp noise of LLL images effectively, and as for the protection of image edges, the adaptive mode filter performs better than the median filter. Finally, the processing results and the conclusions are given.
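
    A minimal, non-adaptive sketch of a mode filter follows: each pixel is replaced by the most frequent gray level in its neighbourhood, which suppresses impulsive noise while largely preserving step edges. The adaptive window/selection logic of the paper is not reproduced; 8-bit integer gray levels and a fixed window size are assumptions.

```python
import numpy as np
from scipy.ndimage import generic_filter

def window_mode(values):
    """Return the most frequent gray level in the neighbourhood passed by generic_filter."""
    vals, counts = np.unique(values.astype(np.int64), return_counts=True)
    return vals[np.argmax(counts)]

def mode_filter(img, size=3):
    return generic_filter(img, window_mode, size=size)
```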

  9. Implementation of Multispectral Image Classification on a Remote Adaptive Computer

    NASA Technical Reports Server (NTRS)

    Figueiredo, Marco A.; Gloster, Clay S.; Stephens, Mark; Graves, Corey A.; Nakkar, Mouna

    1999-01-01

    As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays enable the implementation of algorithms at the hardware gate level, leading to orders of magnitude performance increase over microprocessor-based systems. The automatic classification of spaceborne multispectral images is an example of a computation-intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm implemented on a typical general-purpose computer.
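
    The probabilistic neural network (PNN) classifier mentioned above can be sketched as a Parzen-window estimator: each class score is a sum of Gaussian kernels centred on that class's training pixels. The smoothing parameter is an assumption, and the FPGA mapping and Java client/server layers are of course not shown; this form is practical only for modest numbers of training pixels.

```python
import numpy as np

def pnn_classify(pixels, train_pixels, train_labels, sigma=0.1):
    """pixels: (N, bands); train_pixels: (M, bands); train_labels: (M,) integer class ids."""
    classes = np.unique(train_labels)
    scores = np.zeros((len(pixels), len(classes)))
    for k, c in enumerate(classes):
        ref = train_pixels[train_labels == c]                        # training pixels of class c
        d2 = ((pixels[:, None, :] - ref[None, :, :]) ** 2).sum(-1)   # squared distances to each exemplar
        scores[:, k] = np.exp(-d2 / (2 * sigma**2)).mean(1)          # Parzen-window class score
    return classes[np.argmax(scores, axis=1)]
```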

  10. Perceived Image Quality Improvements from the Application of Image Deconvolution to Retinal Images from an Adaptive Optics Fundus Imager

    NASA Astrophysics Data System (ADS)

    Soliz, P.; Nemeth, S. C.; Erry, G. R. G.; Otten, L. J.; Yang, S. Y.

    Aim: The objective of this project was to apply an image restoration methodology based on wavefront measurements obtained with a Shack-Hartmann sensor and to evaluate the restored image quality based on medical criteria. Methods: Implementing an adaptive optics (AO) technique, a fundus imager was used to achieve low-order correction to images of the retina. The high-order correction was provided by deconvolution. A Shack-Hartmann wavefront sensor measures aberrations. The wavefront measurement is the basis for activating a deformable mirror. Image restoration to remove remaining aberrations is achieved by direct deconvolution using the point spread function (PSF) or a blind deconvolution. The PSF is estimated using measured wavefront aberrations. Direct application of classical deconvolution methods such as inverse filtering, Wiener filtering or iterative blind deconvolution (IBD) to the AO retinal images obtained from the adaptive optical imaging system is not satisfactory because of the very large image size, difficulty in modeling the system noise, and inaccuracy in PSF estimation. Our approach combines direct and blind deconvolution to exploit available system information, avoid non-convergence, and avoid time-consuming iterative processes. Results: The deconvolution was applied to human subject data and the resulting restored images were compared by a trained ophthalmic researcher. Qualitative analysis showed significant improvements. Neovascularization can be visualized with the adaptive optics device that cannot be resolved with the standard fundus camera. The individual nerve fiber bundles are easily resolved, as are melanin structures in the choroid. Conclusion: This project demonstrated that computer-enhanced, adaptive optic images have greater detail of anatomical and pathological structures.

  11. Retinomorphic image processing.

    PubMed

    Ghosh, Kuntal; Bhaumik, Kamales; Sarkar, Sandip

    2008-01-01

    The present work is aimed at understanding and explaining some of the aspects of visual signal processing at the retinal level while exploiting the same towards the development of some simple techniques in the domain of digital image processing. Classical studies on retinal physiology revealed the nature of contrast sensitivity of the receptive field of bipolar or ganglion cells, which lie in the outer and inner plexiform layers of the retina. To explain these observations, a difference of Gaussian (DOG) filter was suggested, which was subsequently modified to a Laplacian of Gaussian (LOG) filter for computational ease in handling two-dimensional retinal inputs. To date, almost all image processing algorithms used in various branches of science and engineering have followed the LOG or one of its variants. Recent observations in retinal physiology, however, indicate that the retinal ganglion cells receive input from a larger area than the classical receptive fields. We have proposed an isotropic model for the non-classical receptive field of the retinal ganglion cells, corroborated by these recent observations, by introducing higher order derivatives of Gaussian expressed as linear combinations of Gaussians only. In digital image processing, this provides a new mechanism of edge detection on one hand and image half-toning on the other. It has also been found that living systems may sometimes prefer to "perceive" the external scenario by adding noise to the received signals in the pre-processing level for arriving at better information on light and shade in the edge map. The proposed model also provides explanations for many brightness-contrast illusions hitherto unexplained not only by the classical isotropic model but also by some other Gestalt and Constructivist models or by non-isotropic multi-scale models. The proposed model is easy to implement both in the analog and digital domains. A scheme for implementation in the analog domain generates a new silicon retina
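
    The classical centre-surround model referred to above can be sketched as a difference of Gaussians built purely from Gaussian blurs; a third, wider Gaussian is added here with a small weight as a stand-in for the extended (non-classical) surround. The scales and the weight are illustrative choices, not the authors' model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinomorphic_response(img, s_center=1.0, s_surround=3.0, s_extended=9.0, w_ext=0.2):
    img = img.astype(float)
    center = gaussian_filter(img, s_center)            # narrow "center" Gaussian
    surround = gaussian_filter(img, s_surround)         # classical antagonistic surround
    extended = gaussian_filter(img, s_extended)         # wider, non-classical surround (assumed form)
    return (center - surround) - w_ext * (surround - extended)
```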

  12. CAD/CAM-coupled image processing systems

    NASA Astrophysics Data System (ADS)

    Ahlers, Rolf-Juergen; Rauh, W.

    1990-08-01

    Image processing systems have found wide application in industry. For most computer integrated manufacturing facilities it is necessary to adapt these systems such that they can automate the interaction with and the integration of CAD and CAM systems. In this paper new approaches will be described that make use of the coupling of CAD and image processing as well as the automatic generation of programmes for the machining of products.

  13. Adaptive filtering in biological signal processing.

    PubMed

    Iyer, V K; Ploysongsang, Y; Ramamoorthy, P A

    1990-01-01

    The high dependence of conventional optimal filtering methods on the a priori knowledge of the signal and noise statistics render them ineffective in dealing with signals whose statistics cannot be predetermined accurately. Adaptive filtering methods offer a better alternative, since the a priori knowledge of statistics is less critical, real time processing is possible, and the computations are less expensive for this approach. Adaptive filtering methods compute the filter coefficients "on-line", converging to the optimal values in the least-mean square (LMS) error sense. Adaptive filtering is therefore apt for dealing with the "unknown" statistics situation and has been applied extensively in areas like communication, speech, radar, sonar, seismology, and biological signal processing and analysis for channel equalization, interference and echo canceling, line enhancement, signal detection, system identification, spectral analysis, beamforming, modeling, control, etc. In this review article adaptive filtering in the context of biological signals is reviewed. An intuitive approach to the underlying theory of adaptive filters and its applicability are presented. Applications of the principles in biological signal processing are discussed in a manner that brings out the key ideas involved. Current and potential future directions in adaptive biological signal processing are also discussed. PMID:2180633
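
    A minimal sketch of the LMS adaptive noise canceller, the workhorse of the applications reviewed above, is given below: a reference noise input is filtered to match the noise component of the primary signal, and the filter weights converge on-line in the least-mean-square sense. The filter length and step size are assumptions.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=16, mu=0.01):
    """primary: signal + noise; reference: correlated noise reference (both 1-D arrays)."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary), dtype=float)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]     # most recent reference samples, newest first
        y = w @ x                             # current estimate of the noise component
        e = primary[n] - y                    # error = cleaned signal sample
        w += 2 * mu * e * x                   # LMS weight update
        out[n] = e
    return out
```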

  14. Image processing technology

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Balick, L.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The primary objective of this project was to advance image processing and visualization technologies for environmental characterization. This was effected by developing and implementing analyses of remote sensing data from satellite and airborne platforms, and demonstrating their effectiveness in visualization of environmental problems. Many sources of information were integrated as appropriate using geographic information systems.

  15. Wavelet domain image restoration with adaptive edge-preserving regularization.

    PubMed

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data. PMID:18255433

  16. Adaptive Optics Imaging of Solar System Objects

    NASA Technical Reports Server (NTRS)

    Roddier, Francois; Owen, Toby

    1999-01-01

    Most solar system objects have never been observed at wavelengths longer than the R band with an angular resolution better than 1". The Hubble Space Telescope itself has only recently been equipped to observe in the infrared. However, because of its small diameter, its angular resolution is lower than that which can now be achieved from the ground with adaptive optics, and time allocated to planetary science is limited. We have successfully used adaptive optics on a 4-m class telescope to obtain 0.1" resolution images of solar system objects in the far red and near infrared (0.7-2.5 microns), at wavelengths which best discriminate their spectral signatures. Our efforts have been put into areas of research for which high angular resolution is essential.

  17. Adaptive Optics Imaging of Solar System Objects

    NASA Technical Reports Server (NTRS)

    Roddier, Francois; Owen, Toby

    1997-01-01

    Most solar system objects have never been observed at wavelengths longer than the R band with an angular resolution better than 1 arcsec. The Hubble Space Telescope itself has only recently been equipped to observe in the infrared. However, because of its small diameter, its angular resolution is lower than that which can now be achieved from the ground with adaptive optics, and time allocated to planetary science is limited. We have been using adaptive optics (AO) on a 4-m class telescope to obtain 0.1 arcsec resolution images of solar system objects at far-red and near-infrared wavelengths (0.7-2.5 microns) which best discriminate their spectral signatures. Our efforts have been put into areas of research for which high angular resolution is essential, such as the mapping of Titan and of large asteroids, the dynamics and composition of Neptune's stratospheric clouds, and the infrared photometry of Pluto, Charon, and close satellites previously undetected from the ground.

  18. Introduction to computer image processing

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  19. Adaptive Optics Retinal Imaging: Emerging Clinical Applications

    PubMed Central

    Godara, Pooja; Dubis, Adam M.; Roorda, Austin; Duncan, Jacque L.; Carroll, Joseph

    2010-01-01

    The human retina is a uniquely accessible tissue. Tools like scanning laser ophthalmoscopy (SLO) and spectral domain optical coherence tomography (SD-OCT) provide clinicians with remarkably clear pictures of the living retina. While the anterior optics of the eye permit such non-invasive visualization of the retina and associated pathology, these same optics induce significant aberrations that in most cases obviate cellular-resolution imaging. Adaptive optics (AO) imaging systems use active optical elements to compensate for aberrations in the optical path between the object and the camera. Applied to the human eye, AO allows direct visualization of individual rod and cone photoreceptor cells, RPE cells, and white blood cells. AO imaging has changed the way vision scientists and ophthalmologists see the retina, helping to clarify our understanding of retinal structure, function, and the etiology of various retinal pathologies. Here we review some of the advances made possible with AO imaging of the human retina, and discuss applications and future prospects for clinical imaging. PMID:21057346

  20. Adaptive optics for directly imaging planetary systems

    NASA Astrophysics Data System (ADS)

    Bailey, Vanessa Perry

    In this dissertation I present the results from five papers (including one in preparation) on giant planets, brown dwarfs, and their environments, as well as on the commissioning and optimization of the Adaptive Optics system for the Large Binocular Telescope Interferometer. The first three Chapters cover direct imaging results on several distantly-orbiting planets and brown dwarf companions. The boundary between giant planets and brown dwarf companions in wide orbits is a blurry one. In Chapter 2, I use 3-5 micron imaging of several brown dwarf companions, combined with mid-infrared photometry for each system, to constrain the circum-substellar disks around the brown dwarfs. I then use this information to discuss limits on scattering events versus in situ formation. In Chapters 3 and 4, I present results from an adaptive optics imaging survey for giant planets, where the target stars were selected based on the properties of their circumstellar debris disks. Specifically, we targeted systems with debris disks whose SEDs indicated gaps, clearings, or truncations; these features may possibly be sculpted by planets. I discuss in detail one planet-mass companion discovered as part of this survey, HD 106906 b. At a projected separation of 650 AU and weighing in at 11 Jupiter masses, a companion such as this is not a common outcome of any planet or binary star formation model. In the remaining three Chapters, I discuss pre-commissioning, on-sky results, and planned work on the Large Binocular Telescope Interferometer Adaptive Optics system. Before construction of the LBT AO system was complete, I tested a prototype of LBTI's pyramid wavefront sensor unit at the MMT with synthetically-generated calibration files. I present the methodology and MMT on-sky tests in Chapter 5. In Chapter 6, I present the commissioned performance of LBTI AO. Optical imperfections within LBTI limited the quality of the science images, and I describe a simple method to use the adaptive optics system

  1. Image quality-based adaptive illumination normalisation for face recognition

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2009-05-01

    Automatic face recognition is a challenging task due to intra-class variations. Changes in lighting conditions during enrolment and identification stages contribute significantly to these intra-class variations. A common approach to address the effects of such varying conditions is to pre-process the biometric samples in order to normalise intra-class variations. Histogram equalisation is a widely used illumination normalisation technique in face recognition. However, a recent study has shown that applying histogram equalisation on well-lit face images could lead to a decrease in recognition accuracy. This paper presents a dynamic approach to illumination normalisation, based on face image quality. The quality of a given face image is measured in terms of its luminance distortion by comparing this image against a known reference face image. Histogram equalisation is applied to a probe image if its luminance distortion is higher than a predefined threshold. We tested the proposed adaptive illumination normalisation method on the widely used Extended Yale Face Database B. Identification results demonstrate that our adaptive normalisation produces better identification accuracy compared to the conventional approach where every image is normalised, irrespective of the lighting conditions under which they were acquired.
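
    The decision rule described above lends itself to a compact sketch. The snippet below is a minimal illustration, not the authors' implementation: the luminance-distortion score is approximated by the luminance term of the universal image quality index, and the threshold value is purely illustrative.

      import numpy as np
      from skimage import exposure

      def luminance_distortion(probe, reference):
          # One minus the luminance-comparison term of the universal image
          # quality index; 0 means the probe's mean luminance matches the
          # reference, larger values mean stronger distortion.
          mu_p, mu_r = float(probe.mean()), float(reference.mean())
          return 1.0 - (2.0 * mu_p * mu_r) / (mu_p ** 2 + mu_r ** 2 + 1e-12)

      def adaptive_normalise(probe, reference, threshold=0.05):
          # Equalise the probe only when its luminance distortion exceeds the
          # threshold; well-lit images pass through untouched.
          if luminance_distortion(probe, reference) > threshold:
              return exposure.equalize_hist(probe)
          return probe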

  2. Adaptive spectral imager for space-based sensing

    NASA Astrophysics Data System (ADS)

    Vujkovic-Cvijin, Pajo; Goldstein, Neil; Fox, Marsha J.; Higbee, Shawn D.; Becker, Latika S.; Ooi, Teng K.

    2006-05-01

    Optical sensors aboard space vehicles designated to perform seeker functions need to generate multispectral images in the mid-wave infrared (MWIR) and long-wave infrared (LWIR) spectral regions in order to investigate and classify man-made space objects, and to distinguish them relative to the interfering scene clutter. The spectral imager part of the sensor collects spectral signatures of the observed objects in order to extract information on surface emissivity and target temperature, both important parameters for object-discrimination algorithms. The Adaptive Spectral Imager described in this paper fulfills two functions simultaneously: one output produces instantaneous two-dimensional polychromatic imagery for object acquisition and tracking, while the other output produces multispectral images for object discrimination and classification. The spectral and temporal resolution of the data produced by the spectral imager are adjustable in real time, making it possible to achieve optimum tradeoff between different sensing functions to match dynamic monitoring requirements during a mission. The system has high optical collection efficiency, with output data rates limited only by the readout speed of the detector array. The instrument has no macro-scale moving parts, and can be built in a robust, small-volume and lightweight package, suitable for integration with space vehicles. The technology is also applicable to multispectral imaging applications in diverse areas such as surveillance, agriculture, process control, and biomedical imaging, and can be adapted for use in any spectral domain from the ultraviolet (UV) to the LWIR region.

  3. scikit-image: image processing in Python.

    PubMed

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921
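
    For readers unfamiliar with the library, a minimal example of the kind of pipeline scikit-image supports (sample data, filtering, thresholding, labelling) might look like the following; the specific functions chosen here are illustrative rather than drawn from the paper.

      from skimage import data, filters, measure

      image = data.coins()                        # built-in sample image
      edges = filters.sobel(image)                # edge-magnitude map
      threshold = filters.threshold_otsu(image)   # global Otsu threshold
      labels = measure.label(image > threshold)   # connected-component labelling
      print(labels.max(), "connected regions found")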

  4. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

  5. Imaging Radio Galaxies with Adaptive Optics

    NASA Astrophysics Data System (ADS)

    de Vries, W. H.; van Breugel, W. J. M.; Quirrenbach, A.; Roberts, J.; Fidkowski, K.

    2000-12-01

    We present 42 milli-arcsecond resolution Adaptive Optics near-infrared images of 3C 452 and 3C 294, two powerful radio galaxies at z=0.081 and z=1.79 respectively, obtained with the NIRSPEC/SCAM+AO instrument on the Keck telescope. The observations provide unprecedented morphological detail of radio galaxy components like nuclear dust-lanes, off-centered or binary nuclei, and merger induced starforming structures; all of which are key features in understanding galaxy formation and the onset of powerful radio emission. Complementary optical HST imaging data are used to construct high resolution color images, which, for the first time, have matching optical and near-IR resolutions. Based on these maps, the extra-nuclear structural morphologies and compositions of both galaxies are discussed. Furthermore, detailed brightness profile analysis of 3C 452 allows a direct comparison to a large literature sample of nearby ellipticals, all of which have been observed in the optical and near-IR by HST. Both the imaging data and the profile information on 3C 452 are consistent with it being a relatively diminutive and well-evolved elliptical, in stark contrast to 3C 294, which seems to be in its initial formation throes with an active AGN off-centered from the main body of the galaxy. These results are discussed further within the framework of radio galaxy triggering and the formation of massive ellipticals. The work of WdV and WvB was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. The work at UCSD has been supported by the NSF Science and Technology Center for Adaptive Optics, under agreement No. AST-98-76783.

  6. Adaptive image enhancement of text images that contain touching or broken characters

    SciTech Connect

    Stubberud, P.; Kalluri, V.; Kanai, J.

    1994-11-29

    Text images that contain touching or broken characters can significantly degrade the accuracy of optical character recognition (OCR) systems. This paper proposes an adaptive image restoration technique that can improve OCR accuracy by enhancing touching or broken character images. The technique begins by processing a distorted text image with an OCR system. Using the distorted text image and output information from the OCR system, an inverse model of the distortion that caused the touching or broken character problem is generated. After generating the inverse model, the unrecognized distorted characters are filtered by the inverse model and then processed by the OCR system. To demonstrate its feasibility, six distorted text images were processed using this technique. Four of the text images, two with touching characters and two with broken characters, were synthesized using mathematical distortion models. The remaining two distorted text images, one with touching characters and one with broken characters, were distorted using a photocopier. The performance of the adaptive image restoration technique was measured using pixel accuracy and OCR improvement. The examples demonstrate that this technique can improve both the pixel and OCR accuracy of text images containing touching or broken characters.

  7. Extreme Adaptive Optics Planet Imager: XAOPI

    SciTech Connect

    Macintosh, B A; Graham, J; Poyneer, L; Sommargren, G; Wilhelmsen, J; Gavel, D; Jones, S; Kalas, P; Lloyd, J; Makidon, R; Olivier, S; Palmer, D; Patience, J; Perrin, M; Severson, S; Sheinis, A; Sivaramakrishnan, A; Troy, M; Wallace, K

    2003-09-17

    Ground based adaptive optics is a potentially powerful technique for direct imaging detection of extrasolar planets. Turbulence in the Earth's atmosphere imposes some fundamental limits, but the large size of ground-based telescopes compared to spacecraft can work to mitigate this. We are carrying out a design study for a dedicated ultra-high-contrast system, the eXtreme Adaptive Optics Planet Imager (XAOPI), which could be deployed on an 8-10m telescope in 2007. With a 4096-actuator MEMS deformable mirror it should achieve Strehl >0.9 in the near-IR. Using an innovative spatially filtered wavefront sensor, the system will be optimized to control scattered light over a large radius and suppress artifacts caused by static errors. We predict that it will achieve contrast levels of 10^7-10^8 at angular separations of 0.2-0.8 arcsec around a large sample of stars (R<7-10), sufficient to detect Jupiter-like planets through their near-IR emission over a wide range of ages and masses. We are constructing a high-contrast AO testbed to verify key concepts of our system, and present preliminary results here, showing an RMS wavefront error of <1.3 nm with a flat mirror.

  8. Image Processing Diagnostics: Emphysema

    NASA Astrophysics Data System (ADS)

    McKenzie, Alex

    2009-10-01

    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images show clearly if a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, as it appears merely as subtle, barely distinct, dark spots on the lung. Our goal is to create a software plug-in to interface with existing open source medical imaging software, to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods which involve looking at percentages of radiodensities in air passages of the lung.
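
    As a rough illustration of the kind of statistic involved, the skewness of the attenuation values inside a lung mask can be computed as below; the mask and any severity cut-off are hypothetical placeholders, not values from the study.

      import numpy as np
      from scipy.stats import skew

      def lung_density_skewness(ct_slice, lung_mask):
          # ct_slice: 2-D array of Hounsfield units; lung_mask: boolean array
          # marking lung voxels (assumed to come from a prior segmentation step).
          voxels = ct_slice[lung_mask].astype(float)
          return skew(voxels, bias=False)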

  9. Neural Adaptation Effects in Conceptual Processing

    PubMed Central

    Marino, Barbara F. M.; Borghi, Anna M.; Gemmi, Luca; Cacciari, Cristina; Riggio, Lucia

    2015-01-01

    We investigated the conceptual processing of nouns referring to objects characterized by a highly typical color and orientation. We used a go/no-go task in which we asked participants to categorize each noun as referring or not to natural entities (e.g., animals) after a selective adaptation of color-edge neurons in the posterior LV4 region of the visual cortex was induced by means of a McCollough effect procedure. This manipulation affected categorization: the green-vertical adaptation led to slower responses than the green-horizontal adaptation, regardless of the specific color and orientation of the to-be-categorized noun. This result suggests that the conceptual processing of natural entities may entail the activation of modality-specific neural channels with weights proportional to the reliability of the signals produced by these channels during actual perception. This finding is discussed with reference to the debate about the grounded cognition view. PMID:26264031

  10. Computer image processing and recognition

    NASA Technical Reports Server (NTRS)

    Hall, E. L.

    1979-01-01

    A systematic introduction to the concepts and techniques of computer image processing and recognition is presented. Consideration is given to such topics as image formation and perception; computer representation of images; image enhancement and restoration; reconstruction from projections; digital television, encoding, and data compression; scene understanding; scene matching and recognition; and processing techniques for linear systems.

  11. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    PubMed

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

    Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities, and moderate price compared to super computers and computing grids. In this paper we have used a general purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving the super high resolution technology closer to clinical viability. PMID:25570838

  12. Smart Image Enhancement Process

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

    2012-01-01

    Contrast and lightness measures are used to first classify the image as being one of non-turbid and turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
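
    The selection logic reads naturally as a small decision function. The sketch below only mirrors the flow described in the abstract; the measure and enhancement routines are passed in as parameters because the patent's specific definitions are not reproduced here.

      def smart_enhance(image, is_turbid, score_is_good, enhance, is_sharp, sharpen):
          # Decision flow sketched from the abstract: classify, enhance as needed,
          # then sharpen the selected image if it is not already sharp.
          if is_turbid(image):
              selected = enhance(image)              # first enhanced image
          elif score_is_good(image):
              selected = image                       # non-turbid, good score
          else:
              selected = enhance(image)              # second enhanced image
              if not score_is_good(selected):
                  selected = enhance(selected)       # third enhanced image
          return selected if is_sharp(selected) else sharpen(selected)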

  13. Adaptive registration of diffusion tensor images on lie groups

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Chen, LeiTing; Cai, HongBin; Qiu, Hang; Fei, Nanxi

    2016-08-01

    With diffusion tensor imaging (DTI), more exquisite information on tissue microstructure is provided for medical image processing. In this paper, we present a locally adaptive topology preserving method for DTI registration on Lie groups. The method aims to obtain more plausible diffeomorphisms for spatial transformations via accurate approximation for the local tangent space on the Lie group manifold. In order to capture an exact geometric structure of the Lie group, the local linear approximation is efficiently optimized by using the adaptive selection of the local neighborhood sizes on the given set of data points. Furthermore, numerical comparative experiments are conducted on both synthetic data and real DTI data to demonstrate that the proposed method yields a higher degree of topology preservation on a dense deformation tensor field while improving the registration accuracy.

  14. Adaptive registration of diffusion tensor images on lie groups

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Chen, LeiTing; Cai, HongBin; Qiu, Hang; Fei, Nanxi

    2016-06-01

    With diffusion tensor imaging (DTI), more exquisite information on tissue microstructure is provided for medical image processing. In this paper, we present a locally adaptive topology preserving method for DTI registration on Lie groups. The method aims to obtain more plausible diffeomorphisms for spatial transformations via accurate approximation for the local tangent space on the Lie group manifold. In order to capture an exact geometric structure of the Lie group, the local linear approximation is efficiently optimized by using the adaptive selection of the local neighborhood sizes on the given set of data points. Furthermore, numerical comparative experiments are conducted on both synthetic data and real DTI data to demonstrate that the proposed method yields a higher degree of topology preservation on a dense deformation tensor field while improving the registration accuracy.

  15. An adaptive optics imaging system designed for clinical use.

    PubMed

    Zhang, Jie; Yang, Qiang; Saito, Kenichi; Nozato, Koji; Williams, David R; Rossi, Ethan A

    2015-06-01

    Here we demonstrate a new imaging system that addresses several major problems limiting the clinical utility of conventional adaptive optics scanning light ophthalmoscopy (AOSLO), including its small field of view (FOV), reliance on patient fixation for targeting imaging, and substantial post-processing time. We previously showed an efficient image based eye tracking method for real-time optical stabilization and image registration in AOSLO. However, in patients with poor fixation, eye motion causes the FOV to drift substantially, causing this approach to fail. We solve that problem here by tracking eye motion at multiple spatial scales simultaneously by optically and electronically integrating a wide FOV SLO (WFSLO) with an AOSLO. This multi-scale approach, implemented with fast tip/tilt mirrors, has a large stabilization range of ± 5.6°. Our method consists of three stages implemented in parallel: 1) coarse optical stabilization driven by a WFSLO image, 2) fine optical stabilization driven by an AOSLO image, and 3) sub-pixel digital registration of the AOSLO image. We evaluated system performance in normal eyes and diseased eyes with poor fixation. Residual image motion with incremental compensation after each stage was: 1) ~2-3 arc minutes, (arcmin) 2) ~0.5-0.8 arcmin and, 3) ~0.05-0.07 arcmin, for normal eyes. Performance in eyes with poor fixation was: 1) ~3-5 arcmin, 2) ~0.7-1.1 arcmin and 3) ~0.07-0.14 arcmin. We demonstrate that this system is capable of reducing image motion by a factor of ~400, on average. This new optical design provides additional benefits for clinical imaging, including a steering subsystem for AOSLO that can be guided by the WFSLO to target specific regions of interest such as retinal pathology and real-time averaging of registered images to eliminate image post-processing. PMID:26114033

  16. An adaptive optics imaging system designed for clinical use

    PubMed Central

    Zhang, Jie; Yang, Qiang; Saito, Kenichi; Nozato, Koji; Williams, David R.; Rossi, Ethan A.

    2015-01-01

    Here we demonstrate a new imaging system that addresses several major problems limiting the clinical utility of conventional adaptive optics scanning light ophthalmoscopy (AOSLO), including its small field of view (FOV), reliance on patient fixation for targeting imaging, and substantial post-processing time. We previously showed an efficient image based eye tracking method for real-time optical stabilization and image registration in AOSLO. However, in patients with poor fixation, eye motion causes the FOV to drift substantially, causing this approach to fail. We solve that problem here by tracking eye motion at multiple spatial scales simultaneously by optically and electronically integrating a wide FOV SLO (WFSLO) with an AOSLO. This multi-scale approach, implemented with fast tip/tilt mirrors, has a large stabilization range of ± 5.6°. Our method consists of three stages implemented in parallel: 1) coarse optical stabilization driven by a WFSLO image, 2) fine optical stabilization driven by an AOSLO image, and 3) sub-pixel digital registration of the AOSLO image. We evaluated system performance in normal eyes and diseased eyes with poor fixation. Residual image motion with incremental compensation after each stage was: 1) ~2–3 arc minutes, (arcmin) 2) ~0.5–0.8 arcmin and, 3) ~0.05–0.07 arcmin, for normal eyes. Performance in eyes with poor fixation was: 1) ~3–5 arcmin, 2) ~0.7–1.1 arcmin and 3) ~0.07–0.14 arcmin. We demonstrate that this system is capable of reducing image motion by a factor of ~400, on average. This new optical design provides additional benefits for clinical imaging, including a steering subsystem for AOSLO that can be guided by the WFSLO to target specific regions of interest such as retinal pathology and real-time averaging of registered images to eliminate image post-processing. PMID:26114033

  17. IMAGES: An interactive image processing system

    NASA Technical Reports Server (NTRS)

    Jensen, J. R.

    1981-01-01

    The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.

  18. Adaptive Optics Imaging and Spectroscopy of Neptune

    NASA Technical Reports Server (NTRS)

    Johnson, Lindley (Technical Monitor); Sromovsky, Lawrence A.

    2005-01-01

    OBJECTIVES: We proposed to use high spectral resolution imaging and spectroscopy of Neptune in visible and near-IR spectral ranges to advance our understanding of Neptune's cloud structure. We intended to use the adaptive optics (AO) system at Mt. Wilson at visible wavelengths to try to obtain the first groundbased observations of dark spots on Neptune; we intended to use AO observations at the IRTF to obtain near-IR R=2000 spatially resolved spectra and near-IR AO observations at the Keck observatory to obtain the highest spatial resolution studies of cloud feature dynamics and atmospheric motions. Vertical structure of cloud features was to be inferred from the wavelength dependent absorption of methane and hydrogen,

  19. Processing Visual Images

    SciTech Connect

    Litke, Alan

    2006-03-27

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  20. ASPIC: STARLINK image processing package

    NASA Astrophysics Data System (ADS)

    Davenhall, A. C.; Hartley, Ken F.; Penny, Alan J.; Kelly, B. D.; King, Dave J.; Lupton, W. F.; Tudhope, D.; Pike, C. D.; Cooke, J. A.; Pence, W. D.; Wallace, Patrick T.; Brownrigg, D. R. K.; Baines, Dave W. T.; Warren-Smith, Rodney F.; McNally, B. V.; Bell, L. L.; Jones, T. A.; Terrett, Dave L.; Pearce, D. J.; Carey, J. V.; Currie, Malcolm J.; Benn, Chris; Beard, S. M.; Giddings, Jack R.; Balona, Luis A.; Harrison, B.; Wood, Roger; Sparkes, Bill; Allan, Peter M.; Berry, David S.; Shirt, J. V.

    2015-10-01

    ASPIC handled basic astronomical image processing. Early releases concentrated on image arithmetic, standard filters, expansion/contraction/selection/combination of images, and displaying and manipulating images on the ARGS and other devices. Later releases added new astronomy-specific applications to this sound framework. The ASPIC collection of about 400 image-processing programs was written using the Starlink "interim" environment in the 1980s; the software is now obsolete.

  1. A Novel Approach for Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Chen, Ya-Chin; Juang, Jer-Nan

    1998-01-01

    Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second order statistics. On this basis, the well-known normal equations can be formulated. If high- order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed and considerable implementation has been made in our contributions in the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.

  2. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound medical (US) imaging non-invasively pictures inside of a human body for disease diagnostics. Speckle noise attacks ultrasound images degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold process is a good denoising method for reducing speckle, but it also induces blurring of the object of interest. The second fold process then restores object boundaries and texture with adaptive wavelet fusion. The degraded object restoration in the block thresholded US image is carried out through wavelet coefficient fusion of the object in the original US image and the block thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. Thus the proposed twofold methods are named as adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal to noise ratio (PSNR), normalized cross correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparing with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images are provided by AMMA hospital radiology labs at Vijayawada, India. PMID:26697285
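
    A minimal sketch of the first fold (block-wise soft thresholding of wavelet detail coefficients) is given below using PyWavelets; the per-block threshold here is the standard MAD noise estimate scaled by a factor k, which stands in for the paper's BHT/BST rules, and the adaptive fusion of the second fold is not shown.

      import numpy as np
      import pywt

      def block_soft_threshold(image, wavelet="db4", level=2, block=32, k=1.0):
          # Soft-threshold each detail subband block by block; the coarse
          # approximation band is left untouched.
          coeffs = pywt.wavedec2(image, wavelet, level=level)
          out = [coeffs[0]]
          for subbands in coeffs[1:]:
              thresholded = []
              for band in subbands:
                  d = band.copy()
                  for i in range(0, d.shape[0], block):
                      for j in range(0, d.shape[1], block):
                          blk = d[i:i + block, j:j + block]
                          sigma = np.median(np.abs(blk)) / 0.6745 + 1e-12
                          d[i:i + block, j:j + block] = pywt.threshold(
                              blk, k * sigma, mode="soft")
                  thresholded.append(d)
              out.append(tuple(thresholded))
          return pywt.waverec2(out, wavelet)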

  3. Adaptation and the color statistics of natural images.

    PubMed

    Webster, M A; Mollon, J D

    1997-12-01

    Color perception depends profoundly on adaptation processes that adjust sensitivity in response to the prevailing pattern of stimulation. We examined how color sensitivity and appearance might be influenced by adaptation to the color distributions characteristic of natural images. Color distributions were measured for natural scenes by sampling an array of locations within each scene with a spectroradiometer, or by recording each scene with a digital camera successively through 31 interference filters. The images were used to reconstruct the L, M and S cone excitation at each spatial location, and the contrasts along three post-receptoral axes [L + M, L - M or S - (L + M)]. Individual scenes varied substantially in their mean chromaticity and luminance, in the principal color-luminance axes of their distributions, and in the range of contrasts in their distributions. Chromatic contrasts were biased along a relatively narrow range of bluish to yellowish-green angles, lying roughly between the S - (L + M) axis (which was more characteristic of scenes with lush vegetation and little sky) and a unique blue-yellow axis (which was more typical of arid scenes). For many scenes L - M and S - (L + M) signals were highly correlated, with weaker correlations between luminance and chromaticity. We use a two-stage model (von Kries scaling followed by decorrelation) to show how the appearance of colors may be altered by light adaptation to the mean of the distributions and by contrast adaptation to the contrast range and principal axes of the distributions; and we show that such adjustments are qualitatively consistent with empirical measurements of asymmetric color matches obtained after adaptation to successive random samples drawn from natural distributions of chromaticities and lightnesses. Such adaptation effects define the natural range of operating states of the visual system. PMID:9425544
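
    The two-stage model (von Kries scaling followed by decorrelation) can be sketched in a few lines; this is a generic illustration with PCA-style whitening standing in for contrast adaptation, not the authors' fitted model.

      import numpy as np

      def adapt_cone_signals(lms):
          # lms: N x 3 array of L, M, S cone excitations sampled from a scene.
          # Stage 1: von Kries scaling -- normalise each cone class by its mean,
          # modelling light adaptation to the scene's mean chromaticity/luminance.
          contrasts = lms / lms.mean(axis=0, keepdims=True) - 1.0
          # Stage 2: decorrelate and rescale the cone contrasts along the scene's
          # principal axes, a simple stand-in for contrast adaptation to the
          # distribution's range and orientation.
          cov = np.cov(contrasts, rowvar=False)
          eigvals, eigvecs = np.linalg.eigh(cov)
          return (contrasts @ eigvecs) / np.sqrt(eigvals + 1e-12)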

  4. Contrast Adaptation Implies Two Spatiotemporal Channels but Three Adapting Processes

    ERIC Educational Resources Information Center

    Langley, Keith; Bex, Peter J.

    2007-01-01

    The contrast gain control model of adaptation predicts that the effects of contrast adaptation correlate with contrast sensitivity. This article reports that the effects of high contrast spatiotemporal adaptors are maximum when adapting around 19 Hz, which is a factor of two or more greater than the peak in contrast sensitivity. To explain the…

  5. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  6. Adaptive Fusion of Stochastic Information for Imaging Fractured Vadose Zones

    NASA Astrophysics Data System (ADS)

    Daniels, J.; Yeh, J.; Illman, W.; Harri, S.; Kruger, A.; Parashar, M.

    2004-12-01

    A stochastic information fusion methodology is developed to assimilate electrical resistivity tomography, high-frequency ground penetrating radar, mid-range-frequency radar, pneumatic/gas tracer tomography, and hydraulic/tracer tomography to image fractures, characterize hydrogeophysical properties, and monitor natural processes in the vadose zone. The information technology research will develop: 1) mechanisms and algorithms for fusion of large data volumes ; 2) parallel adaptive computational engines supporting parallel adaptive algorithms and multi-physics/multi-model computations; 3) adaptive runtime mechanisms for proactive and reactive runtime adaptation and optimization of geophysical and hydrological models of the subsurface; and 4) technologies and infrastructure for remote (pervasive) and collaborative access to computational capabilities for monitoring subsurface processes through interactive visualization tools. The combination of the stochastic fusion approach and information technology can lead to a new level of capability for both hydrologists and geophysicists enabling them to "see" into the earth at greater depths and resolutions than is possible today. Furthermore, the new computing strategies will make high resolution and large-scale hydrological and geophysical modeling feasible for the private sector, scientists, and engineers who are unable to access supercomputers, i.e., an effective paradigm for technology transfer.

  7. Adaptive fusion of infrared and visible images in dynamic scene

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi

    2011-11-01

    Multiple modalities sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques including PCA, wavelet, curvelet and HSV have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes along with an acceptable computational cost. In this paper, we propose a fast and adaptive feature selection based image fusion method to obtain a high contrast image from visible and infrared sensors for target detection. At first, fuzzy c-means clustering is applied on the infrared image to highlight possible hotspot regions, which will be considered as potential targets' locations. After that, the region surrounding the target area is segmented as the background regions. Then image fusion is locally applied on the selected target and background regions by computing different linear combinations of color components from registered visible and infrared images. After obtaining different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. The variance ratio, which is based on a Linear Discriminative Analysis (LDA) measure, is employed to sort the feature set, and the most discriminative one is selected for the whole image fusion. As the feature selection is performed over time, the process will dynamically determine the most suitable feature for the image fusion in different scenes. Experiments were conducted on the OSU Color-Thermal database and the TNO Human Factor dataset. The fusion results indicate that our proposed method achieved a competitive performance compared with other fusion algorithms at a relatively low computational cost.
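
    A toy version of the region-adaptive idea is sketched below: segment probable hotspots in the infrared frame and blend the two modalities with different weights inside and outside those regions. KMeans is used here only as a simple stand-in for the paper's fuzzy c-means step, and the blending weights are illustrative.

      import numpy as np
      from sklearn.cluster import KMeans

      def fuse_ir_visible(ir, vis, w_target=0.7, w_background=0.3):
          # Cluster IR intensities into two groups and call the brighter cluster
          # the hotspot (candidate target) region.
          km = KMeans(n_clusters=2, n_init=10).fit(ir.reshape(-1, 1))
          hot_label = int(np.argmax(km.cluster_centers_))
          hot = km.labels_.reshape(ir.shape) == hot_label
          # Blend with a heavier IR weight inside hotspots and a heavier visible
          # weight elsewhere; a full system would select these weights per scene.
          return np.where(hot,
                          w_target * ir + (1 - w_target) * vis,
                          w_background * ir + (1 - w_background) * vis)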

  8. The APL image processing laboratory

    NASA Technical Reports Server (NTRS)

    Jenkins, J. O.; Randolph, J. P.; Tilley, D. G.; Waters, C. A.

    1984-01-01

    The present and proposed capabilities of the Central Image Processing Laboratory, which provides a powerful resource for the advancement of programs in missile technology, space science, oceanography, and biomedical image analysis, are discussed. The use of image digitizing, digital image processing, and digital image output permits a variety of functional capabilities, including: enhancement, pseudocolor, convolution, computer output microfilm, presentation graphics, animations, transforms, geometric corrections, and feature extractions. The hardware and software of the Image Processing Laboratory, consisting of digitizing and processing equipment, software packages, and display equipment, is described. Attention is given to applications for imaging systems, map geometric correction, raster movie display of Seasat ocean data, Seasat and Skylab scenes of Nantucket Island, Space Shuttle imaging radar, differential radiography, and a computerized tomographic scan of the brain.

  9. Coping and adaptation process during puerperium

    PubMed Central

    Muñoz de Rodríguez, Lucy; Ruiz de Cárdenas, Carmen Helena

    2012-01-01

    Introduction: The puerperium is a stage that produces changes and adaptations in women, couples and family. Effective coping, during this stage, depends on the relationship between the demands of stressful or difficult situations and the resources that the puerperal individual has. Roy (2004), in her Middle Range Theory about the Coping and Adaptation Processing, defines Coping as the ''behavioral and cognitive efforts that a person makes to meet the environment demands''. For the puerperal individual, correct coping is necessary to maintain her physical and mental well being, especially against situations that can be stressful like breastfeeding and return to work. According to Lazarus and Folkman (1986), a resource for coping is to have someone from whom to receive emotional, informative and/or tangible support. Objective: To review the issue of women's coping and adaptation during the puerperium stage and the strategies that enhance this adaptation. Methods: search and selection of database articles: Cochrane, Medline, Ovid, ProQuest, Scielo, and Blackwell Synergy. Other sources: unpublished documents by Roy, published books on Roy's Model, websites of international health organizations. Results: the need to recognize the puerperium as a stage that requires comprehensive care is evident, where nurses must be protagonists in the care offered to women and their families, considering the specific demands of this situation and the resources that promote effective coping within the family, education and health services. PMID:24893059

  10. Multiscale Image Processing of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte scaled images or a sequence of byte scaled difference images. The determination of features and structures in the images is done qualitatively by the observer. There is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade. Many of these methods are possibly suited for solar image analysis. Multiscale/Multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales. So these techniques could be used to quantify the image processing done by the observer's eyes and brain. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO and TRACE images.

  11. Radiological image presentation requires consideration of human adaptation characteristics

    NASA Astrophysics Data System (ADS)

    O'Connell, N. M.; Toomey, R. J.; McEntee, M.; Ryan, J.; Stowe, J.; Adams, A.; Brennan, P. C.

    2008-03-01

    Visualisation of anatomical or pathological image data is highly dependent on the eye's ability to discriminate between image brightnesses and this is best achieved when these data are presented to the viewer at luminance levels to which the eye is adapted. Current ambient light recommendations are often linked to overall monitor luminance but this relies on specific regions of interest matching overall monitor brightness. The current work investigates the luminances of specific regions of interest within three image-types: postero-anterior (PA) chest; PA wrist; computerised tomography (CT) of the head. Luminance levels were measured within the hilar region and peripheral lung, the distal radius, and the supra-ventricular grey matter. For each image type, average monitor luminances were calculated with a calibrated photometer at ambient light levels of 0, 100 and 400 lux. Thirty samples of each image-type were employed, resulting in a total of over 6,000 measurements. Results demonstrate that average monitor luminances varied from clinically-significant values by up to a factor of 4, 2 and 6 for chest, wrist and CT head images respectively. Values for the thoracic hilum and wrist were higher and for the peripheral lung and CT brain lower than overall monitor levels. The ambient light level had no impact on the results. The results demonstrate that clinically important radiological information for common radiological examinations is not being presented to the viewer in a way that facilitates optimised visual adaptation and subsequent interpretation. The importance of image-processing algorithms focussing on clinically-significant anatomical regions instead of radiographic projections is highlighted.

  12. Image Watermarking Based on Adaptive Models of Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Khawne, Amnach; Hamamoto, Kazuhiko; Chitsobhuk, Orachat

    This paper proposes a digital image watermarking based on adaptive models of human visual perception. The algorithm exploits the local activities estimated from wavelet coefficients of each subband to adaptively control the luminance masking. The adaptive luminance is thus delicately combined with the contrast masking and edge detection and adopted as a visibility threshold. With the proposed combination of adaptive visual sensitivity parameters, the proposed perceptual model can be more appropriate to the different characteristics of various images. The weighting function is chosen such that the fidelity, imperceptibility and robustness could be preserved without making any perceptual difference to the image quality.

  13. SAR imaging via iterative adaptive approach and sparse Bayesian learning

    NASA Astrophysics Data System (ADS)

    Xue, Ming; Santiago, Enrique; Sedehi, Matteo; Tan, Xing; Li, Jian

    2009-05-01

    We consider sidelobe reduction and resolution enhancement in synthetic aperture radar (SAR) imaging via an iterative adaptive approach (IAA) and a sparse Bayesian learning (SBL) method. The nonparametric weighted least squares based IAA algorithm is a robust and user parameter-free adaptive approach originally proposed for array processing. We show that it can be used to form enhanced SAR images as well. SBL has been used as a sparse signal recovery algorithm for compressed sensing. It has been shown in the literature that SBL is easy to use and can recover sparse signals more accurately than the l1-based optimization approaches, which require delicate choice of the user parameter. We consider using a modified expectation maximization (EM) based SBL algorithm, referred to as SBL-1, which is based on a three-stage hierarchical Bayesian model. SBL-1 is not only more accurate than benchmark SBL algorithms, but also converges faster. SBL-1 is used to further enhance the resolution of the SAR images formed by IAA. Both IAA and SBL-1 are shown to be effective, requiring only a limited number of iterations, and have no need for polar-to-Cartesian interpolation of the SAR collected data. This paper characterizes the achievable performance of these two approaches by processing the complex backscatter data from both a sparse case study and a backhoe vehicle in free space with different aperture sizes.
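
    For reference, the core IAA iteration is compact enough to sketch for a one-dimensional problem; the version below estimates a power spectrum on a frequency grid from a single snapshot and is only meant to convey the structure of the algorithm, not the authors' SAR-specific implementation.

      import numpy as np

      def iaa_spectrum(y, freqs, n_iter=10):
          # y: length-N complex snapshot; freqs: normalised frequencies (cycles/sample).
          N = len(y)
          n = np.arange(N)[:, None]
          A = np.exp(2j * np.pi * n * freqs[None, :])        # N x K steering matrix
          p = np.abs(A.conj().T @ y) ** 2 / N ** 2           # periodogram initialisation
          for _ in range(n_iter):
              R = (A * p) @ A.conj().T                       # R = A diag(p) A^H
              R_inv_y = np.linalg.solve(R, y)
              R_inv_A = np.linalg.solve(R, A)
              s = (A.conj() * R_inv_y[:, None]).sum(axis=0) / \
                  (A.conj() * R_inv_A).sum(axis=0)           # weighted LS amplitudes
              p = np.abs(s) ** 2                             # updated power estimates
          return p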

  14. Cooperative processes in image segmentation

    NASA Technical Reports Server (NTRS)

    Davis, L. S.

    1982-01-01

    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.

  15. Design of Pel Adaptive DPCM coding based upon image partition

    NASA Astrophysics Data System (ADS)

    Saitoh, T.; Harashima, H.; Miyakawa, H.

    1982-01-01

    A Pel Adaptive DPCM coding system based on image partition is developed which possesses coding characteristics superior to those of the Block Adaptive DPCM coding system. This method uses multiple DPCM coding loops and nonhierarchical cluster analysis. It is found that the coding performances of the Pel Adaptive DPCM coding method differ depending on the subject images. The Pel Adaptive DPCM designed using these methods is shown to yield a maximum performance advantage of 2.9 dB for the Girl and Couple images and 1.5 dB for the Aerial image, although no advantage was obtained for the moon image. These results show an improvement over the optimally designed Block Adaptive DPCM coding method proposed by Saito et al. (1981).
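
    A basic single-loop DPCM encoder for one image row is sketched below to make the prediction/quantisation structure concrete; the paper's pel-adaptive scheme switches among several such loops per pixel, which is not reproduced here, and the step size is illustrative.

      import numpy as np

      def dpcm_encode_row(row, step=8.0):
          # Predict each pixel from the previous reconstructed pixel and quantise
          # the prediction error, tracking the decoder-side reconstruction.
          prediction = 0.0
          codes, reconstruction = [], []
          for x in row.astype(float):
              error = x - prediction
              q = int(round(error / step))            # quantised prediction error
              prediction = prediction + q * step      # what the decoder will see
              codes.append(q)
              reconstruction.append(prediction)
          return np.array(codes), np.array(reconstruction)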

  16. Voyager image processing at the Image Processing Laboratory

    NASA Technical Reports Server (NTRS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-01-01

    This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is exhibited that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

  17. Command Line Image Processing System (CLIPS)

    NASA Astrophysics Data System (ADS)

    Fleagle, S. R.; Meyers, G. L.; Kulinski, R. G.

    1985-06-01

    An interactive image processing language (CLIPS) has been developed for use in an image processing environment. CLIPS uses a simple syntax with extensive on-line help to allow even the most naive user to perform complex image processing tasks. In addition, CLIPS functions as an interpretive language complete with data structures and program control statements. CLIPS statements fall into one of three categories: command, control, and utility statements. Command statements are expressions comprised of intrinsic functions and/or arithmetic operators which act directly on image or user defined data. Some examples of CLIPS intrinsic functions are ROTATE, FILTER, and EXPONENT. Control statements allow a structured programming style through the use of statements such as DO WHILE and IF-THEN-ELSE. Utility statements such as DEFINE, READ, and WRITE support I/O and user defined data structures. Since CLIPS uses a table driven parser, it is easily adapted to any environment. New commands may be added to CLIPS by writing the procedure in a high level language such as Pascal or FORTRAN and inserting the syntax for that command into the table. However, CLIPS was designed by incorporating most imaging operations into the language as intrinsic functions. CLIPS allows the user to generate new procedures easily with these powerful functions in an interactive or off line fashion using a text editor. The fact that CLIPS can be used to generate complex procedures quickly or perform basic image processing functions interactively makes it a valuable tool in any image processing environment.

  18. Color image processing for date quality evaluation

    NASA Astrophysics Data System (ADS)

    Lee, Dah Jye; Archibald, James K.

    2010-01-01

    Many agricultural non-contact visual inspection applications use color image processing techniques because color is often a good indicator of product quality. Color evaluation is an essential step in the processing and inventory control of fruits and vegetables that directly affects profitability. Most color spaces such as RGB and HSV represent colors with three-dimensional data, which makes using color image processing a challenging task. Since most agricultural applications only require analysis on a predefined set or range of colors, mapping these relevant colors to a small number of indexes allows simple and efficient color image processing for quality evaluation. This paper presents a simple but efficient color mapping and image processing technique that is designed specifically for real-time quality evaluation of Medjool dates. In contrast with more complex color image processing techniques, the proposed color mapping method makes it easy for a human operator to specify and adjust color-preference settings for different color groups representing distinct quality levels. Using this color mapping technique, the color image is first converted to a color map that has one color index representing a color value for each pixel. Fruit maturity level is evaluated based on these color indices. A skin lamination threshold is then determined based on the fruit surface characteristics. This adaptive threshold is used to detect delaminated fruit skin and hence determine the fruit quality. This robust color grading technique has been used for real-time Medjool date grading.
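
    The color-index mapping can be sketched as a simple quantisation of hue into operator-defined bins; the bin boundaries and index values below are hypothetical, not the settings used for Medjool dates.

      import numpy as np

      def map_to_color_indices(hsv_image, hue_bins):
          # hsv_image: H x W x 3 float array with hue in [0, 1);
          # hue_bins: list of (low, high, index) tuples defined by the operator.
          hue = hsv_image[..., 0]
          indices = np.zeros(hue.shape, dtype=np.uint8)
          for low, high, index in hue_bins:
              indices[(hue >= low) & (hue < high)] = index
          return indices

      # Example with illustrative bins only: 0 = immature, 1 = maturing, 2 = mature.
      # indices = map_to_color_indices(hsv, [(0.10, 0.20, 0), (0.05, 0.10, 1), (0.00, 0.05, 2)])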

  19. Industrial Applications of Image Processing

    NASA Astrophysics Data System (ADS)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

    The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review the existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial setting. Then an overview of some image processing techniques, feature extraction, object recognition and industrial robotic guidance is presented. Moreover, examples of implementations of such techniques in industry are presented. Such implementations include automated visual inspection, process control, part identification, and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  20. An adaptive filtered back-projection for photoacoustic image reconstruction

    PubMed Central

    Huang, He; Bustamante, Gilbert; Peterson, Ralph; Ye, Jing Yong

    2015-01-01

    the correct signal strength of the absorbers. The reconstructed image of the second phantom further demonstrates the capability to form clear images of the spheres with sharp borders in the overlapping geometry. The smallest sphere is clearly visible and distinguishable, even though it is surrounded by two big spheres. In addition, image reconstructions were conducted with randomized noise added to the observed signals to mimic realistic experimental conditions. Conclusions: The authors have developed a new FBP algorithm that is capable for reconstructing high quality images with correct relative intensities and sharp borders for PAT. The results demonstrate that the weighting function serves as a precise ramp filter for processing the observed signals in the Fourier domain. In addition, this algorithm allows an adaptive determination of the cutoff frequency for the applied low pass filter. PMID:25979011

  1. An adaptive filtered back-projection for photoacoustic image reconstruction

    SciTech Connect

    Huang, He; Bustamante, Gilbert; Peterson, Ralph; Ye, Jing Yong

    2015-05-15

    the correct signal strength of the absorbers. The reconstructed image of the second phantom further demonstrates the capability to form clear images of the spheres with sharp borders in the overlapping geometry. The smallest sphere is clearly visible and distinguishable, even though it is surrounded by two big spheres. In addition, image reconstructions were conducted with randomized noise added to the observed signals to mimic realistic experimental conditions. Conclusions: The authors have developed a new FBP algorithm that is capable for reconstructing high quality images with correct relative intensities and sharp borders for PAT. The results demonstrate that the weighting function serves as a precise ramp filter for processing the observed signals in the Fourier domain. In addition, this algorithm allows an adaptive determination of the cutoff frequency for the applied low pass filter.

  2. An image processing algorithm for PPCR imaging

    NASA Astrophysics Data System (ADS)

    Cowen, Arnold R.; Giles, Anthony; Davies, Andrew G.; Workman, A.

    1993-09-01

    During 1990 the UK Department of Health installed two Photostimulable Phosphor Computed Radiography (PPCR) systems in the General Infirmary at Leeds with a view to evaluating the clinical and physical performance of the technology prior to its introduction into the NHS. An issue that came to light from the outset of the project was the radiologists' reservations about the influence of the standard PPCR computerized image processing on image quality and diagnostic performance. An investigation was set up by FAXIL to develop an algorithm to produce single format high quality PPCR images that would be easy to implement and allay the concerns of radiologists.

  3. SWNT Imaging Using Multispectral Image Processing

    NASA Astrophysics Data System (ADS)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.

    2012-02-01

    A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built-in Bayer filter of the CCD camera was utilized, using OpenCV C++ libraries for image processing, to decompose the image generated in a high-magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels, and effectively isolate the SWNT signals from the background.
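
    A minimal sketch of the channel-decomposition idea is given below; it assumes a raw Bayer mosaic in an RGGB layout and uses plain NumPy slicing rather than the OpenCV C++ pipeline used by the authors, so the layout, the array names, and the averaging of the two green sites are assumptions for illustration only.

        import numpy as np

        def split_bayer_rggb(raw):
            """Split a raw Bayer mosaic (assumed RGGB layout) into three pseudo-color channels.

            Each channel is returned at half resolution; the two green sites are averaged.
            """
            r  = raw[0::2, 0::2].astype(float)
            g1 = raw[0::2, 1::2].astype(float)
            g2 = raw[1::2, 0::2].astype(float)
            b  = raw[1::2, 1::2].astype(float)
            g = 0.5 * (g1 + g2)
            return r, g, b

        # Example: decompose a synthetic 16-bit mosaic into R, G, B channels.
        raw = np.random.randint(0, 65535, size=(480, 640), dtype=np.uint16)
        r, g, b = split_bayer_rggb(raw)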

  4. Studying the star formation process with adaptive optics

    NASA Astrophysics Data System (ADS)

    Menard, Francois; Dougados, Catherine; Duchene, Gaspard; Bouvier, Jerome; Duvert, Gilles; Lavalley, Claudia; Monin, Jean-Louis; Beuzit, Jean-Luc

    2000-07-01

    Young Stellar Objects (YSOs) are the builders of worlds. During its infancy, a star transforms ordinary interstellar dust particles into astronomical gold: planets. Needless to say, the process is complex and largely unknown to date. Yet violent and spectacular events of mass ejection are witnessed, disks in Keplerian rotation are detected, and multiple stars dancing around each other are found. These are all traces of the stellar and planet formation process. The high angular resolution provided by adaptive optics, and the related gain in sensitivity, have allowed major breakthrough discoveries to be made in each of these specific fields, and our understanding of the various physical processes involved in the formation of a star has leaped forward tremendously over the last few years. In the following, meant as a report of the progress made recently in star formation thanks to adaptive optics, we describe new results obtained at optical and near-infrared wavelengths, in imaging and spectroscopic modes. Our images of accretion disks and ionized stellar jets permit direct measurements of many physical parameters and shed light on the physics of the accretion and ejection processes. Although the accretion/ejection process so fundamental to star formation is usually studied around single objects, most young stars form as part of multiple systems. We also present our findings on how the fraction of stars in binary systems evolves with age. The implications of these results for the conditions under which these stars must have formed are discussed.

  5. Design of smart imagers with image processing

    NASA Astrophysics Data System (ADS)

    Serova, Evgeniya N.; Shiryaev, Yury A.; Udovichenko, Anton O.

    2005-06-01

    This paper is devoted to the creation of novel CMOS APS imagers with focal-plane parallel image preprocessing for smart technical vision and electro-optical systems based on neural implementation. Based on an analysis of the main features of biological vision, the desired characteristics of artificial vision are defined, and the image processing tasks that can be implemented by smart focal-plane preprocessing CMOS imagers with neural networks are determined. The eventual results are important for medical and aerospace ecological monitoring applications; the complexity of, and approaches to, CMOS APS neural network implementation are also considered. To reduce real-image preprocessing time, special methods based on edge detection and neighboring-frame subtraction are considered and simulated. To select optimal methods and mathematical operators for edge detection, various medical, technical, and aerospace images are tested. An important research direction is devoted to the analogue implementation of the main preprocessing operations (addition, subtraction, neighboring-frame subtraction, modulus, and edge detection of pixel signals) in the focal plane of CMOS APS imagers. We present the following results: an algorithm of edge detection for analog realization, and patented focal-plane circuits for analog image preprocessing (edge detection and motion detection).

  6. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many good techniques have been discussed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. These methods, however, do not give good image quality, since their thresholds cannot modify and remove many small wavelet coefficients simultaneously. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
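
    The following is a minimal sketch of neighborhood-based wavelet shrinkage in the spirit of NeighShrink, not the authors' proposed method: each detail coefficient is shrunk by a factor computed from the energy of its neighbors, with the universal threshold standing in for the adaptive threshold of the paper. The wavelet, decomposition level, and window size are assumptions.

        import numpy as np
        import pywt
        from scipy.ndimage import uniform_filter

        def neighshrink_denoise(img, sigma, wavelet="db4", level=3, win=3):
            """Neighborhood-based wavelet shrinkage (NeighShrink-style) for image denoising.

            Each detail coefficient is shrunk by beta = max(0, 1 - lambda^2 / S^2), where
            S^2 is the sum of squared coefficients in a win x win neighborhood and lambda
            is the universal threshold sigma * sqrt(2 * log(n)).
            """
            coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
            lam2 = 2.0 * sigma ** 2 * np.log(img.size)
            out = [coeffs[0]]                                    # keep the approximation band
            for details in coeffs[1:]:
                shrunk = []
                for band in details:                             # horizontal, vertical, diagonal bands
                    s2 = uniform_filter(band ** 2, size=win) * win * win   # local energy
                    beta = np.clip(1.0 - lam2 / np.maximum(s2, 1e-12), 0.0, None)
                    shrunk.append(band * beta)
                out.append(tuple(shrunk))
            return pywt.waverec2(out, wavelet)

        # Example: denoise a synthetic noisy image (the noise std is assumed known here).
        clean = np.tile(np.linspace(0, 255, 256), (256, 1))
        noisy = clean + 20.0 * np.random.randn(*clean.shape)
        denoised = neighshrink_denoise(noisy, sigma=20.0)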

  7. Discrete cosine transform-based local adaptive filtering of images corrupted by nonstationary noise

    NASA Astrophysics Data System (ADS)

    Lukin, Vladimir V.; Fevralev, Dmitriy V.; Ponomarenko, Nikolay N.; Abramov, Sergey K.; Pogrebnyak, Oleksiy; Egiazarian, Karen O.; Astola, Jaakko T.

    2010-04-01

    In many image-processing applications, observed images are contaminated by nonstationary noise, and no a priori information is available on the noise dependence on the local mean or on the local properties of the noise statistics. In order to remove such noise, a locally adaptive filter has to be applied. We study a locally adaptive filter based on a "blind" evaluation of image local activity and on the discrete cosine transform computed in overlapping blocks. Two mechanisms of local adaptation are proposed and applied. The first mechanism takes into account local estimates of the noise standard deviation, while the second one exploits discrimination of homogeneous and heterogeneous image regions by adaptive threshold setting. The designed filter performance is tested for simulated data as well as for real-life remote-sensing and maritime radar images. Recommendations concerning filter parameter setting are provided. An area of applicability of the proposed filter is defined.
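
    A simplified sketch of DCT-domain filtering in overlapping blocks with a locally estimated noise level is shown below; the robust noise estimator, the threshold multiplier, the block size, and the step are assumptions chosen for illustration and do not reproduce the authors' two adaptation mechanisms exactly.

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_local_denoise(img, block=8, step=4, beta=2.7):
            """Locally adaptive denoising with DCT in overlapping blocks (simplified sketch).

            For each block, the noise level is estimated locally (here via the median of the
            high-frequency DCT coefficients) and coefficients below beta * sigma_local are
            hard-thresholded. Overlapping filtered blocks are averaged.
            """
            img = img.astype(float)
            acc = np.zeros_like(img)
            wgt = np.zeros_like(img)
            for i in range(0, img.shape[0] - block + 1, step):
                for j in range(0, img.shape[1] - block + 1, step):
                    patch = img[i:i + block, j:j + block]
                    c = dctn(patch, norm="ortho")
                    hi = c[block // 2:, block // 2:]                 # high-frequency quadrant
                    sigma_local = np.median(np.abs(hi)) / 0.6745     # robust local noise estimate
                    mask = np.abs(c) > beta * sigma_local
                    mask[0, 0] = True                                # always keep the DC term
                    acc[i:i + block, j:j + block] += idctn(c * mask, norm="ortho")
                    wgt[i:i + block, j:j + block] += 1.0
            return acc / np.maximum(wgt, 1.0)

        # Example usage on a synthetic noisy image.
        clean = np.tile(np.linspace(0, 255, 256), (256, 1))
        noisy = clean + 15.0 * np.random.randn(*clean.shape)
        denoised = dct_local_denoise(noisy)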

  8. An interactive image processing system.

    PubMed

    Troxel, D E

    1981-01-01

    A multiuser multiprocessing image processing system has been developed. It is an interactive picture manipulation and enhancement facility which is capable of executing a variety of image processing operations while simultaneously controlling real-time input and output of pictures. It was designed to provide a reliable picture processing system which would be cost-effective in the commercial production environment. Additional goals met by the system include flexibility and ease of operation and modification. PMID:21868923

  9. Optimization of exposure in panoramic radiography while maintaining image quality using adaptive filtering.

    PubMed

    Svenson, Björn; Larsson, Lars; Båth, Magnus

    2016-01-01

    Objective The purpose of the present study was to investigate the potential of using advanced external adaptive image processing for maintaining image quality while reducing exposure in dental panoramic storage phosphor plate (SPP) radiography. Materials and methods Thirty-seven SPP radiographs of a skull phantom were acquired using a Scanora panoramic X-ray machine with various tube load, tube voltage, SPP sensitivity and filtration settings. The radiographs were processed using General Operator Processor (GOP) technology. Fifteen dentists, all within the dental radiology field, compared the structural image quality of each radiograph with a reference image on a 5-point rating scale in a visual grading characteristics (VGC) study. The reference image was acquired with the acquisition parameters commonly used in daily operation (70 kVp, 150 mAs and sensitivity class 200) and processed using the standard process parameters supplied by the modality vendor. Results All GOP-processed images with similar (or higher) dose as the reference image resulted in higher image quality than the reference. All GOP-processed images with similar image quality as the reference image were acquired at a lower dose than the reference. This indicates that the external image processing improved the image quality compared with the standard processing. Regarding acquisition parameters, no strong dependency of the image quality on the radiation quality was seen and the image quality was mainly affected by the dose. Conclusions The present study indicates that advanced external adaptive image processing may be beneficial in panoramic radiography for increasing the image quality of SPP radiographs or for reducing the exposure while maintaining image quality. PMID:26478956

  10. Image Processing: Some Challenging Problems

    NASA Astrophysics Data System (ADS)

    Huang, T. S.; Aizawa, K.

    1993-11-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

  11. Mariner 9 - Image processing and products.

    NASA Technical Reports Server (NTRS)

    Levinthal, E. C.; Green, W. B.; Cutts, J. A.; Jahelka, E. D.; Johansen, R. A.; Sander, M. J.; Seidman, J. B.; Young, A. T.; Soderblom, L. A.

    1972-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis.

  12. Mariner 9-Image processing and products

    USGS Publications Warehouse

    Levinthal, E.C.; Green, W.B.; Cutts, J.A.; Jahelka, E.D.; Johansen, R.A.; Sander, M.J.; Seidman, J.B.; Young, A.T.; Soderblom, L.A.

    1973-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis. © 1973.

  13. Content- and disparity-adaptive stereoscopic image retargeting

    NASA Astrophysics Data System (ADS)

    Yan, Weiqing; Hou, Chunping; Zhou, Yuan; Xiang, Wei

    2016-02-01

    The paper proposes a content- and disparity-adaptive stereoscopic image retargeting method. To simultaneously avoid distortion of salient content and disparity, we first calculate the distortion difference of salient image regions and identify the factors causing visual distortion. Then, via convex quadratic programming, the proposed method simultaneously avoids distortion of the salient region and adjusts disparity to a target range by considering the relationship between the scaling factor of the salient region and the disparity scaling factor. The experimental results show that the proposed method is able to successfully adapt the image disparity to the target display screen, while the salient objects remain undistorted in the retargeted stereoscopic image.

  14. Adaptation of photosynthetic processes to stress.

    PubMed

    Berry, J A

    1975-05-01

    I have focused on examples of plant adaptations to environmental conditions that range from adjustments in the allocation of metabolic resources and modification of structural components to entirely separate mechanisms. The result of these modifications is more efficient performance under the stresses typically encountered in the plants' native habitats. Such adaptations, for reasons which are not entirely clear, often lead to poorer performance in other environmental conditions. This situation may be a fundamental basis for the tendency toward specialization among plants native to specific niches or habitats. The evolutionary mechanisms that have resulted in these specializations are very large-scale processes. It seems reasonable to suppose that the plants native to particular habitats are relatively efficient in terms of the limitations imposed by those habitats, and that the adaptive mechanisms these plants possess are, compared to those which have evolved in competing organisms, the most successful biological means of coping with the environmental stresses encountered. I believe that we can learn from nature and utilize the adaptive mechanisms of these plants in agriculture to replace in part our present reliance on resources and energy to modify the environment for plant growth. By analogy with natural systems, improved resource utilization will require specialization and greater knowledge of the limitations of a particular environment and plant genotype. For example, the cultural conditions, plant architecture, and physiological responses necessary to achieve high water use efficiency from our crop species with C(4) photosynthesis probably differ from those required to achieve maximum total growth. Also, efforts to control water application to eliminate waste carry with them the risk that the crop could be injured by inadequate water. Thus, greater demands would be placed on the crop physiologist, the plant breeder, and the farmer. Planting and appropriate

  15. Image processing of aerodynamic data

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.

    1985-01-01

    The use of digital image processing techniques in analyzing and evaluating aerodynamic data is discussed. An image processing system that converts images derived from digital data or from transparent film into black and white, full color, or false color pictures is described. Applications to black and white images of a model wing with a NACA 64-210 section in simulated rain and to computed flow properties for transonic flow past a NACA 0012 airfoil are presented. Image processing techniques are used to visualize the variations of water film thicknesses on the wing model and to illustrate the contours of computed Mach numbers for the flow past the NACA 0012 airfoil. Since the computed data for the NACA 0012 airfoil are available only at discrete spatial locations, an interpolation method is used to provide values of the Mach number over the entire field.
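
    The paper does not specify the interpolation scheme beyond "an interpolation method"; as an illustration only, the sketch below interpolates scattered Mach-number samples onto a regular grid with SciPy's cubic griddata, using synthetic sample locations and values.

        import numpy as np
        from scipy.interpolate import griddata

        # Computed Mach numbers are available only at scattered (x, y) locations (synthetic here);
        # interpolate them onto a regular grid so contours can be drawn over the whole field.
        rng = np.random.default_rng(0)
        xy = rng.uniform(-1.0, 1.0, size=(500, 2))                          # scattered sample locations
        mach = 0.8 + 0.3 * np.exp(-(xy[:, 0] ** 2 + xy[:, 1] ** 2) / 0.1)   # stand-in Mach values

        gx, gy = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
        mach_grid = griddata(xy, mach, (gx, gy), method="cubic")            # full-field Mach values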

  16. Research on adaptive segmentation and activity classification method of filamentous fungi image in microbe fermentation

    NASA Astrophysics Data System (ADS)

    Cai, Xiaochun; Hu, Yihua; Wang, Peng; Sun, Dujuan; Hu, Guilan

    2009-10-01

    The paper presents an adaptive segmentation and activity classification method for filamentous fungi images. Firstly, an adaptive structuring element (SE) construction algorithm is proposed for image background suppression. Based on the watershed transform, a color-labeled segmentation of the fungi image is then obtained. Secondly, the feature space of the fungal elements is described and the feature set for hyphae activity classification is extracted. The growth rate of the fungal hyphae is evaluated using an SVM classifier. Experimental results demonstrate that the proposed method is effective for filamentous fungi image processing.
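
    As a generic illustration of background suppression followed by watershed labeling (not the authors' adaptive structuring-element construction), the sketch below uses a fixed disk structuring element, Otsu thresholding, and distance-transform markers; all parameter choices are assumptions.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import threshold_otsu
        from skimage.morphology import disk, opening
        from skimage.segmentation import watershed

        def segment_hyphae(gray):
            """Generic background suppression + watershed labeling (fixed disk SE as a
            stand-in for an adaptively constructed structuring element)."""
            bg = opening(gray, disk(15))                      # estimate a smooth background
            fg = gray.astype(float) - bg                      # suppress the background
            mask = fg > threshold_otsu(fg)                    # binary foreground
            dist = ndi.distance_transform_edt(mask)           # distance map for marker seeds
            markers, _ = ndi.label(dist > 0.5 * dist.max())   # seed markers from distance peaks
            return watershed(-dist, markers, mask=mask)       # watershed on the inverted distance

        # Example on a synthetic grayscale image.
        img = (np.random.rand(256, 256) * 255).astype(np.uint8)
        labels = segment_hyphae(img)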

  17. Adaptive SVD-Based Digital Image Watermarking

    NASA Astrophysics Data System (ADS)

    Shirvanian, Maliheh; Torkamani Azar, Farah

    Digital data utilization, along with the increasing popularity of the Internet, has facilitated information sharing and distribution. However, such applications have also raised concern about copyright issues and unauthorized modification and distribution of digital data. Digital watermarking techniques, which are proposed to solve these problems, hide some information in digital media and extract it whenever needed to identify the data owner. In this paper a new method of image watermarking based on singular value decomposition (SVD) of images is proposed, which considers the human visual system prior to embedding the watermark by segmenting the original image into several blocks of different sizes, with higher density at the edges of the image. In this way the original image quality is preserved in the watermarked image. Additional advantages of the proposed technique are its large watermark embedding capacity and its robustness against different types of image manipulation.
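
    A much-simplified sketch of singular-value watermark embedding is given below: one bit per fixed-size block is added to the block's largest singular value. The fixed 8x8 blocks and the strength alpha are assumptions made for brevity; the paper instead uses variable-size, visually driven blocks.

        import numpy as np

        def svd_embed(img, wm_bits, alpha=10.0, block=8):
            """Simplified SVD-based watermark embedding: one bit per block, added to the
            block's largest singular value."""
            out = img.astype(float).copy()
            k = 0
            for i in range(0, img.shape[0] - block + 1, block):
                for j in range(0, img.shape[1] - block + 1, block):
                    if k >= len(wm_bits):
                        return out
                    u, s, vt = np.linalg.svd(out[i:i + block, j:j + block], full_matrices=False)
                    s[0] += alpha * (1.0 if wm_bits[k] else -1.0)   # perturb the dominant singular value
                    out[i:i + block, j:j + block] = (u * s) @ vt    # rebuild the block
                    k += 1
            return out

        # Example: embed 64 random watermark bits into a synthetic image.
        img = (np.random.rand(128, 128) * 255).astype(np.uint8)
        bits = np.random.randint(0, 2, size=64)
        watermarked = svd_embed(img, bits)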

  18. The Urban Adaptation and Adaptation Process of Urban Migrant Children: A Qualitative Study

    ERIC Educational Resources Information Center

    Liu, Yang; Fang, Xiaoyi; Cai, Rong; Wu, Yang; Zhang, Yaofang

    2009-01-01

    This article employs qualitative research methods to explore the urban adaptation and adaptation processes of Chinese migrant children. Through twenty-one in-depth interviews with migrant children, the researchers discovered: The participant migrant children showed a fairly high level of adaptation to the city; their process of urban adaptation…

  19. Coherent Image Layout using an Adaptive Visual Vocabulary

    SciTech Connect

    Dillard, Scott E.; Henry, Michael J.; Bohn, Shawn J.; Gosink, Luke J.

    2013-03-06

    When querying a huge image database containing millions of images, the result of the query may still contain many thousands of images that need to be presented to the user. We consider the problem of arranging such a large set of images into a visually coherent layout, one that places similar images next to each other. Image similarity is determined using a bag-of-features model, and the layout is constructed from a hierarchical clustering of the image set by mapping an in-order traversal of the hierarchy tree into a space-filling curve. This layout method provides strong locality guarantees so we are able to quantitatively evaluate performance using standard image retrieval benchmarks. Performance of the bag-of-features method is best when the vocabulary is learned on the image set being clustered. Because learning a large, discriminative vocabulary is a computationally demanding task, we present a novel method for efficiently adapting a generic visual vocabulary to a particular dataset. We evaluate our clustering and vocabulary adaptation methods on a variety of image datasets and show that adapting a generic vocabulary to a particular set of images improves performance on both hierarchical clustering and image retrieval tasks.
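
    The sketch below illustrates the layout idea in a reduced form: images are ordered by the leaf order of a hierarchical clustering and placed along a serpentine path, which stands in for the space-filling curve of the paper. The feature vectors, linkage method, and grid width are assumptions.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, leaves_list

        def coherent_grid_layout(features, grid_w):
            """Order images by the leaf order of a hierarchical clustering and place them
            along a serpentine path so that similar images land near each other."""
            order = leaves_list(linkage(features, method="average"))   # leaf order of the hierarchy
            positions = {}
            for rank, idx in enumerate(order):
                row, col = divmod(rank, grid_w)
                if row % 2 == 1:                      # reverse every other row -> serpentine path
                    col = grid_w - 1 - col
                positions[int(idx)] = (row, col)
            return positions

        # Example: lay out 100 images described by 64-D bag-of-features histograms (synthetic).
        feats = np.random.rand(100, 64)
        layout = coherent_grid_layout(feats, grid_w=10)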

  20. Body Image Distortion and Exposure to Extreme Body Types: Contingent Adaptation and Cross Adaptation for Self and Other.

    PubMed

    Brooks, Kevin R; Mond, Jonathan M; Stevenson, Richard J; Stephen, Ian D

    2016-01-01

    Body size misperception is common amongst the general public and is a core component of eating disorders and related conditions. While perennial media exposure to the "thin ideal" has been blamed for this misperception, relatively little research has examined visual adaptation as a potential mechanism. We examined the extent to which the bodies of "self" and "other" are processed by common or separate mechanisms in young women. Using a contingent adaptation paradigm, experiment 1 gave participants prolonged exposure to images both of the self and of another female that had been distorted in opposite directions (e.g., expanded other/contracted self), and assessed the aftereffects using test images both of the self and other. The directions of the resulting perceptual biases were contingent on the test stimulus, establishing at least some separation between the mechanisms encoding these body types. Experiment 2 used a cross adaptation paradigm to further investigate the extent to which these mechanisms are independent. Participants were adapted either to expanded or to contracted images of their own body or that of another female. While adaptation effects were largest when adapting and testing with the same body type, confirming the separation of mechanisms reported in experiment 1, substantial misperceptions were also demonstrated for cross adaptation conditions, demonstrating a degree of overlap in the encoding of self and other. In addition, the evidence of misperception of one's own body following exposure to "thin" and to "fat" others demonstrates the viability of visual adaptation as a model of body image disturbance both for those who underestimate and those who overestimate their own size. PMID:27471447

  1. Body Image Distortion and Exposure to Extreme Body Types: Contingent Adaptation and Cross Adaptation for Self and Other

    PubMed Central

    Brooks, Kevin R.; Mond, Jonathan M.; Stevenson, Richard J.; Stephen, Ian D.

    2016-01-01

    Body size misperception is common amongst the general public and is a core component of eating disorders and related conditions. While perennial media exposure to the “thin ideal” has been blamed for this misperception, relatively little research has examined visual adaptation as a potential mechanism. We examined the extent to which the bodies of “self” and “other” are processed by common or separate mechanisms in young women. Using a contingent adaptation paradigm, experiment 1 gave participants prolonged exposure to images both of the self and of another female that had been distorted in opposite directions (e.g., expanded other/contracted self), and assessed the aftereffects using test images both of the self and other. The directions of the resulting perceptual biases were contingent on the test stimulus, establishing at least some separation between the mechanisms encoding these body types. Experiment 2 used a cross adaptation paradigm to further investigate the extent to which these mechanisms are independent. Participants were adapted either to expanded or to contracted images of their own body or that of another female. While adaptation effects were largest when adapting and testing with the same body type, confirming the separation of mechanisms reported in experiment 1, substantial misperceptions were also demonstrated for cross adaptation conditions, demonstrating a degree of overlap in the encoding of self and other. In addition, the evidence of misperception of one's own body following exposure to “thin” and to “fat” others demonstrates the viability of visual adaptation as a model of body image disturbance both for those who underestimate and those who overestimate their own size. PMID:27471447

  2. A dual-modal retinal imaging system with adaptive optics

    PubMed Central

    Meadway, Alexander; Girkin, Christopher A.; Zhang, Yuhua

    2013-01-01

    An adaptive optics scanning laser ophthalmoscope (AO-SLO) is adapted to provide optical coherence tomography (OCT) imaging. The AO-SLO function is unchanged. The system uses the same light source, scanning optics, and adaptive optics in both imaging modes. The result is a dual-modal system that can acquire retinal images in both en face and cross-section planes at the single cell level. A new spectral shaping method is developed to reduce the large sidelobes in the coherence profile of the OCT imaging when a non-ideal source is used with a minimal introduction of noise. The technique uses a combination of two existing digital techniques. The thickness and position of the traditionally named inner segment/outer segment junction are measured from individual photoreceptors. In-vivo images of healthy and diseased human retinas are demonstrated. PMID:24514529

  3. An adaptive algorithm for low contrast infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Liu, Sheng-dong; Peng, Cheng-yuan; Wang, Ming-jia; Wu, Zhi-guo; Liu, Jia-qi

    2013-08-01

    An adaptive infrared image enhancement algorithm for low-contrast images is proposed in this paper, to deal with the problem that conventional enhancement algorithms cannot effectively identify the region of interest when the dynamic range of the image is large. The algorithm starts from the characteristics of human visual perception and takes account of both global adaptive image enhancement and local feature boosting, so that not only is the image contrast raised but the texture also becomes more distinct. Firstly, the global dynamic range is adjusted: a correspondence is established between the dynamic range of the original image and the display grayscale, the gray level of bright objects is raised, and the gray level of dark targets is reduced, improving the overall image contrast. Secondly, a filtering operation on the current pixel and its neighborhood extracts image texture information and adjusts the brightness of the current pixel to enhance the local contrast of the image. The algorithm overcomes the tendency of traditional edge detection to blur outlines, and it preserves the distinctness of texture detail during enhancement. Lastly, the globally adjusted image and the locally adjusted image are normalized to ensure a smooth transition of image details. Extensive experiments compare the proposed algorithm with other conventional image enhancement algorithms on two groups of blurred IR images. The experiments show that the contrast of the image is boosted after histogram equalization but the details remain unclear; details can be distinguished after processing by the Retinex algorithm; and the image processed by the proposed self-adaptive enhancement algorithm has clear details and markedly improved contrast compared with Retinex.
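
    A generic sketch of the global-plus-local strategy is given below: a percentile-based global mapping to the display range followed by a local detail boost around a neighborhood mean. The percentiles, window size, and gain are assumptions and do not reproduce the authors' specific mapping or filter.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def enhance_ir(img, low_pct=1.0, high_pct=99.0, local_win=9, gain=1.5):
            """Global dynamic-range adjustment followed by a local detail boost."""
            img = img.astype(float)
            lo, hi = np.percentile(img, [low_pct, high_pct])
            g = np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0) * 255.0   # global mapping to display range
            base = uniform_filter(g, size=local_win)      # local neighborhood mean
            detail = g - base                             # texture / local contrast component
            out = base + gain * detail                    # boost the local detail
            return np.clip(out, 0, 255)

        # Example with a synthetic wide-dynamic-range IR frame.
        raw = np.random.rand(240, 320) * 16383            # e.g. 14-bit sensor data (assumed)
        enhanced = enhance_ir(raw)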

  4. Edge adaptive intra field de-interlacing of video images

    NASA Astrophysics Data System (ADS)

    Lachine, Vladimir; Smith, Gregory; Lee, Louie

    2013-02-01

    Expanding an image by an arbitrary scale factor, thereby creating an enlarged image, is a crucial image processing operation. De-interlacing is an example of such an operation, where a video field is enlarged in the vertical direction with a 1-to-2 scale factor. The most advanced de-interlacing algorithms use a few consecutive input fields to generate one output frame. In order to save hardware resources in video processors, missing lines in each field may be generated without reference to the other fields. Line doubling, known as "bobbing", is the simplest intra-field de-interlacing method. However, it may generate visual artifacts. For example, interpolation of an inserted line from a few neighboring lines by a vertical filter may produce visual artifacts such as "jaggies." In this work we present an edge-adaptive image up-scaling and/or enhancement algorithm which can produce "jaggies"-free video output frames. As a first step, an edge and its parameters at each interpolated pixel are detected from the gradient squared tensor based on local signal variances. Then, according to the edge parameters, including orientation, anisotropy, and variance strength, the algorithm determines the footprint and frequency response of the two-dimensional interpolation filter for the output pixel. The filter's coefficients are defined by the edge parameters, so that the quality of the output frame is controlled by the local content. The proposed method may be used for image enlargement or enhancement (for example, anti-aliasing without resampling). It has been implemented in hardware in a video display processor for intra-field de-interlacing of video images.
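
    A minimal sketch of estimating per-pixel edge orientation, anisotropy, and strength from the smoothed gradient squared (structure) tensor is shown below; the Sobel gradients, the Gaussian smoothing scale, and the synthetic field are assumptions, and the subsequent filter-footprint selection is not included.

        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def edge_parameters(img, sigma=1.5):
            """Per-pixel edge orientation, anisotropy, and strength from the smoothed
            gradient squared (structure) tensor."""
            img = img.astype(float)
            gx = sobel(img, axis=1)
            gy = sobel(img, axis=0)
            jxx = gaussian_filter(gx * gx, sigma)     # tensor components, locally averaged
            jyy = gaussian_filter(gy * gy, sigma)
            jxy = gaussian_filter(gx * gy, sigma)
            orientation = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)          # dominant edge direction
            tmp = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
            lam1 = 0.5 * (jxx + jyy + tmp)                                 # tensor eigenvalues
            lam2 = 0.5 * (jxx + jyy - tmp)
            anisotropy = (lam1 - lam2) / np.maximum(lam1 + lam2, 1e-12)    # 0 = isotropic, 1 = strong edge
            strength = lam1 + lam2                                         # local gradient energy
            return orientation, anisotropy, strength

        # Example on one field of a synthetic interlaced frame.
        field = np.random.rand(240, 720)
        theta, aniso, var = edge_parameters(field)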

  5. Next generation high resolution adaptive optics fundus imager

    NASA Astrophysics Data System (ADS)

    Fournier, P.; Erry, G. R. G.; Otten, L. J.; Larichev, A.; Irochnikov, N.

    2005-12-01

    The spatial resolution of retinal images is limited by the presence of static and time-varying aberrations present within the eye. An updated High Resolution Adaptive Optics Fundus Imager (HRAOFI) has been built based on the development from the first prototype unit. This entirely new unit was designed and fabricated to increase opto-mechanical integration and ease-of-use through a new user interface. Improved camera systems for the Shack-Hartmann sensor and for the scene image were implemented to enhance the image quality and the frequency of the Adaptive Optics (AO) control loop. An optimized illumination system that uses specific wavelength bands was applied to increase the specificity of the images. Sample images of clinical trials of retinas, taken with and without the system, are shown. Data on the performance of this system will be presented, demonstrating the ability to calculate near diffraction-limited images.

  6. Linearly-Constrained Adaptive Signal Processing Methods

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.

    1988-01-01

    In adaptive least-squares estimation problems, a desired signal d(n) is estimated using a linear combination of L observation samples x1(n), x2(n), ..., xL(n), denoted by the vector X(n). The estimate is formed as the inner product of this vector with a corresponding L-dimensional weight vector W. One particular weight vector of interest is Wopt, which minimizes the mean-square difference between d(n) and the estimate. In this context, the term 'mean-square difference' is a quadratic measure such as statistical expectation or time average. The specific value of W which achieves the minimum is given by the product of the inverse data covariance matrix and the cross-correlation between the data vector and the desired signal. The latter is often referred to as the P-vector. For those cases in which time samples of both the desired and data vector signals are available, a variety of adaptive methods have been proposed which guarantee that an iterative weight vector Wa(n) converges (in some sense) to the optimal solution. Two which have been extensively studied are the recursive least-squares (RLS) method and the LMS gradient approximation approach. There are several problems of interest in the communication and radar environment in which the optimal least-squares weight set is of interest but in which time samples of the desired signal are not available. Examples can be found in array processing, in which only the direction of arrival of the desired signal is known, and in single-channel filtering, where the spectrum of the desired response is known a priori. One approach to these problems which has been suggested is the P-vector algorithm, an LMS-like approximate gradient method. Although it is easy to derive the mean and variance of the weights which result with this algorithm, there has never been an identification of the corresponding underlying error surface which the procedure searches. The purpose of this paper is to suggest an alternative
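
    As a textbook illustration of the LMS gradient-approximation update mentioned above (not the paper's linearly constrained variant), the sketch below adapts an L-tap weight vector so that its inner product with the observation vector tracks the desired signal; the filter length, step size, and synthetic system-identification setup are assumptions.

        import numpy as np

        def lms(x, d, L=8, mu=0.01):
            """LMS adaptation of an L-tap weight vector W so that W^T X(n) tracks d(n)."""
            w = np.zeros(L)
            y = np.zeros(len(d))
            for n in range(L - 1, len(d)):
                xn = x[n - L + 1:n + 1][::-1]   # observation vector X(n) = [x(n), ..., x(n-L+1)]
                y[n] = w @ xn                   # current estimate of d(n)
                e = d[n] - y[n]                 # estimation error
                w += mu * e * xn                # stochastic-gradient weight update
            return w, y

        # Example: identify an unknown 8-tap FIR system from noisy observations.
        rng = np.random.default_rng(1)
        x = rng.standard_normal(5000)
        h_true = rng.standard_normal(8)
        d = np.convolve(x, h_true, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
        w_hat, y_hat = lms(x, d)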

  7. Towards Adaptive High-Resolution Images Retrieval Schemes

    NASA Astrophysics Data System (ADS)

    Kourgli, A.; Sebai, H.; Bouteldja, S.; Oukil, Y.

    2016-06-01

    Nowadays, content-based image-retrieval techniques constitute powerful tools for archiving and mining of large remote sensing image databases. High spatial resolution images are complex and differ widely in their content, even within the same category. All images are more or less textured and structured. During the last decade, different approaches for the retrieval of this type of images have been proposed. They differ mainly in the type of features extracted. As these features are supposed to efficiently represent the query image, they should be adapted to all kinds of images contained in the database. However, if the image to recognize is somewhat or very structured, a shape feature will be somewhat or very effective, while if the image is composed of a single texture, a parameter reflecting the texture of the image will prove more efficient. This motivates the use of adaptive schemes. For this purpose, we propose to investigate this idea to adapt the retrieval scheme to image nature. This is achieved by carrying out a preliminary analysis so that the indexing stage becomes supervised. First results show that, in this way, simple methods can give performance equal to that obtained using complex methods such as those based on the creation of a bag of visual words using SIFT (Scale Invariant Feature Transform) descriptors and those based on multiscale feature extraction using wavelets and steerable pyramids.

  8. Superresolution restoration of an image sequence: adaptive filtering approach.

    PubMed

    Elad, M; Feuer, A

    1999-01-01

    This paper presents a new method based on adaptive filtering theory for superresolution restoration of continuous image sequences. The proposed methodology suggests least squares (LS) estimators which adapt in time, based on adaptive filters, least mean squares (LMS) or recursive least squares (RLS). The adaptation enables the treatment of linear space and time-variant blurring and arbitrary motion, both of them assumed known. The proposed new approach is shown to be of relatively low computational requirements. Simulations demonstrating the superresolution restoration algorithms are presented. PMID:18262881

  9. Probing the functions of contextual modulation by adapting images rather than observers.

    PubMed

    Webster, Michael A

    2014-11-01

    Countless visual aftereffects have illustrated how visual sensitivity and perception can be biased by adaptation to the recent temporal context. This contextual modulation has been proposed to serve a variety of functions, but the actual benefits of adaptation remain uncertain. We describe an approach we have recently developed for exploring these benefits by adapting images instead of observers, to simulate how images should appear under theoretically optimal states of adaptation. This allows the long-term consequences of adaptation to be evaluated in ways that are difficult to probe by adapting observers, and provides a common framework for understanding how visual coding changes when the environment or the observer changes, or for evaluating how the effects of temporal context depend on different models of visual coding or the adaptation processes. The approach is illustrated for the specific case of adaptation to color, for which the initial neural coding and adaptation processes are relatively well understood, but can in principle be applied to examine the consequences of adaptation for any stimulus dimension. A simple calibration that adjusts each neuron's sensitivity according to the stimulus level it is exposed to is sufficient to normalize visual coding and generate a host of benefits, from increased efficiency to perceptual constancy to enhanced discrimination. This temporal normalization may also provide an important precursor for the effective operation of contextual mechanisms operating across space or feature dimensions. To the extent that the effects of adaptation can be predicted, images from new environments could be "pre-adapted" to match them to the observer, eliminating the need for observers to adapt. PMID:25281412

  10. Information-Adaptive Image Encoding and Restoration

    NASA Technical Reports Server (NTRS)

    Park, Stephen K.; Rahman, Zia-ur

    1998-01-01

    The multiscale retinex with color restoration (MSRCR) has shown itself to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition. A number of algorithms exist that provide one or more of these features, but not all. In this paper we compare the performance of the MSRCR with techniques that are widely used for image enhancement. Specifically, we compare the MSRCR with color adjustment methods such as gamma correction and gain/offset application, histogram modification techniques such as histogram equalization and manual histogram adjustment, and other more powerful techniques such as homomorphic filtering and 'burning and dodging'. The comparison is carried out by testing the suite of image enhancement methods on a set of diverse images. We find that though some of these techniques work well for some of these images, only the MSRCR performs universally well on the test set.
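
    A single-channel sketch of the multiscale retinex core (log image minus log Gaussian surround, averaged over scales) is shown below; the color-restoration step of MSRCR is omitted, and the surround scales and the percentile gain/offset used for display are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def multiscale_retinex(img, sigmas=(15, 80, 250)):
            """Single-channel multiscale retinex: average of log(image) - log(Gaussian surround)
            over several scales, followed by a simple gain/offset to the display range."""
            img = img.astype(float) + 1.0                     # avoid log(0)
            msr = np.zeros_like(img)
            for s in sigmas:
                surround = gaussian_filter(img, sigma=s)      # scale-s illumination estimate
                msr += np.log(img) - np.log(surround + 1.0)
            msr /= len(sigmas)
            lo, hi = np.percentile(msr, [1, 99])              # gain/offset to the display range
            return np.clip((msr - lo) / max(hi - lo, 1e-6), 0, 1) * 255.0

        # Example on a synthetic poorly exposed image.
        img = (np.random.rand(256, 256) * 60).astype(np.uint8)   # dark, low-contrast input
        out = multiscale_retinex(img)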

  11. Coherence-Gated Sensorless Adaptive Optics Multiphoton Retinal Imaging

    PubMed Central

    Cua, Michelle; Wahl, Daniel J.; Zhao, Yuan; Lee, Sujin; Bonora, Stefano; Zawadzki, Robert J.; Jian, Yifan; Sarunic, Marinko V.

    2016-01-01

    Multiphoton microscopy enables imaging deep into scattering tissues. The efficient generation of non-linear optical effects is related to both the pulse duration (typically on the order of femtoseconds) and the size of the focused spot. Aberrations introduced by refractive index inhomogeneity in the sample distort the wavefront and enlarge the focal spot, which reduces the multiphoton signal. Traditional approaches to adaptive optics wavefront correction are not effective in thick or multi-layered scattering media. In this report, we present sensorless adaptive optics (SAO) using low-coherence interferometric detection of the excitation light for depth-resolved aberration correction of two-photon excited fluorescence (TPEF) in biological tissue. We demonstrate coherence-gated SAO TPEF using a transmissive multi-actuator adaptive lens for in vivo imaging in a mouse retina. This configuration has significant potential for reducing the laser power required for adaptive optics multiphoton imaging, and for facilitating integration with existing systems. PMID:27599635

  12. Coherence-Gated Sensorless Adaptive Optics Multiphoton Retinal Imaging.

    PubMed

    Cua, Michelle; Wahl, Daniel J; Zhao, Yuan; Lee, Sujin; Bonora, Stefano; Zawadzki, Robert J; Jian, Yifan; Sarunic, Marinko V

    2016-01-01

    Multiphoton microscopy enables imaging deep into scattering tissues. The efficient generation of non-linear optical effects is related to both the pulse duration (typically on the order of femtoseconds) and the size of the focused spot. Aberrations introduced by refractive index inhomogeneity in the sample distort the wavefront and enlarge the focal spot, which reduces the multiphoton signal. Traditional approaches to adaptive optics wavefront correction are not effective in thick or multi-layered scattering media. In this report, we present sensorless adaptive optics (SAO) using low-coherence interferometric detection of the excitation light for depth-resolved aberration correction of two-photon excited fluorescence (TPEF) in biological tissue. We demonstrate coherence-gated SAO TPEF using a transmissive multi-actuator adaptive lens for in vivo imaging in a mouse retina. This configuration has significant potential for reducing the laser power required for adaptive optics multiphoton imaging, and for facilitating integration with existing systems. PMID:27599635

  13. Discrete adaptive zone light elements (DAZLE): a new approach to adaptive imaging

    NASA Astrophysics Data System (ADS)

    Kellogg, Robert L.; Escuti, Michael J.

    2007-09-01

    New advances in Liquid Crystal Spatial Light Modulators (LCSLM) offer opportunities for large adaptive optics in the midwave infrared spectrum. A light focusing adaptive imaging system, using the zero-order diffraction state of a polarizer-free liquid crystal polarization grating modulator to create millions of high transmittance apertures, is envisioned in a system called DAZLE (Discrete Adaptive Zone Light Elements). DAZLE adaptively selects large sets of LCSLM apertures using the principles of coded masks, embodied in a hybrid Discrete Fresnel Zone Plate (DFZP) design. Issues of system architecture, including factors of LCSLM aperture pattern and adaptive control, image resolution and focal plane array (FPA) matching, and trade-offs between filter bandwidths, background photon noise, and chromatic aberration are discussed.

  14. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing with a more conventional image processing algorithm is provided and shows that the fuzzy image processing yields better accuracy than conventional image processing.

  15. SPECKLE NOISE SUBTRACTION AND SUPPRESSION WITH ADAPTIVE OPTICS CORONAGRAPHIC IMAGING

    SciTech Connect

    Ren Deqing; Dou Jiangpei; Zhang Xi; Zhu Yongtian

    2012-07-10

    Future ground-based direct imaging of exoplanets depends critically on high-contrast coronagraphs and wave-front manipulation. A coronagraph is designed to remove most of the unaberrated starlight. Because of the wave-front error, which is inherited from atmospheric turbulence in ground-based observations, a coronagraph cannot deliver its theoretical performance, and speckle noise will limit the high-contrast imaging performance. Recently, extreme adaptive optics, which can deliver an extremely high Strehl ratio, is being developed for such a challenging mission. In this publication, we show that merely taking a long-exposure image does not provide much gain for coronagraphic imaging with adaptive optics. We further discuss a speckle subtraction and suppression technique that fully takes advantage of the high contrast provided by the coronagraph, as well as the wave front corrected by the adaptive optics. This technique works well for coronagraphic imaging with conventional adaptive optics with a moderate Strehl ratio, as well as for extreme adaptive optics with a high Strehl ratio. We show how to subtract and suppress speckle noise efficiently up to the third order, which is critical for future ground-based high-contrast imaging. Numerical simulations are conducted to fully demonstrate this technique.

  16. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  17. Fast-adaptive near-lossless image compression

    NASA Astrophysics Data System (ADS)

    He, Kejing

    2016-05-01

    The purpose of image compression is to store or transmit image data efficiently. However, most compression methods emphasize the compression ratio rather than the throughput. We propose an encoding process and rules, and consequently a fast-adaptive near-lossless image compression method (FAIC) with a good compression ratio. FAIC is a single-pass method which removes bits from each codeword, then predicts the next pixel value through localized edge detection techniques, and finally uses Golomb-Rice codes to encode the residuals. FAIC uses only logical operations, bitwise operations, additions, and subtractions. Meanwhile, it eliminates the slow operations (e.g., multiplication, division, and logarithm) and the complex entropy coder, which can be a bottleneck in hardware implementations. Besides, FAIC does not depend on any precomputed tables or parameters. Experimental results demonstrate that FAIC achieves a good balance between compression ratio and computational complexity in a certain range (e.g., peak signal-to-noise ratio > 35 dB, bits per pixel > 2). It is suitable for applications in which the amount of data is huge or the computation power is limited.
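
    The sketch below illustrates Golomb-Rice coding of prediction residuals from a simple left-neighbor predictor; the fixed Rice parameter k and the predictor are assumptions made for brevity, whereas FELICS-style coders such as FAIC choose them adaptively.

        import numpy as np

        def rice_encode(residuals, k=2):
            """Encode prediction residuals with Golomb-Rice codes (fixed parameter k)."""
            bits = []
            for r in residuals:
                u = 2 * int(r) if r >= 0 else -2 * int(r) - 1               # zigzag map: signed -> unsigned
                q, rem = u >> k, u & ((1 << k) - 1)
                bits.extend([1] * q + [0])                                  # unary-coded quotient
                bits.extend((rem >> i) & 1 for i in range(k - 1, -1, -1))   # k-bit remainder
            return bits

        # Example: residuals from a simple left-neighbor predictor on one image row.
        row = np.array([100, 102, 101, 105, 110, 110, 108], dtype=int)
        residuals = np.diff(row, prepend=row[0])                            # first residual is 0
        stream = rice_encode(residuals, k=2)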

  18. Signal and Image Processing Operations

    1995-05-10

    VIEW is a software system for processing arbitrary multidimensional signals. It provides facilities for numerical operations, signal displays, and signal databasing. The major emphasis of the system is on the processing of time-sequences and multidimensional images. The system is designed to be both portable and extensible. It runs currently on UNIX systems, primarily SUN workstations.

  19. In vivo high-resolution retinal imaging using adaptive optics.

    PubMed

    Seyedahmadi, Babak Jian; Vavvas, Demetrios

    2010-01-01

    Retinal imaging with conventional methods is only able to overcome the lowest orders of aberration: defocus and astigmatism. The human eye is fraught with higher-order aberrations. Since we are forced to use the eye's own optical system in retinal imaging, the images are degraded. In addition, all of these distortions are constantly changing due to head/eye movement and changes in accommodation. Adaptive optics is a promising technology introduced in the field of ophthalmology to measure and compensate for these aberrations. The high resolution obtained by adaptive optics enables us to view and image the retinal photoreceptors and retinal pigment epithelium, and to identify cone subclasses in vivo. In this review we discuss the basic technology of adaptive optics and its hardware requirements, in addition to the clinical applications of this technology. PMID:21090998

  20. Adaptive entropy coded subband coding of images.

    PubMed

    Kim, Y H; Modestino, J W

    1992-01-01

    The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by use of noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and the ECPVQ over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. Among these, the authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive buffer instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to the corresponding ideal 2-D ECSBC system. PMID:18296138

  1. Content-adaptive ghost imaging of dynamic scenes.

    PubMed

    Li, Ziwei; Suo, Jinli; Hu, Xuemei; Dai, Qionghai

    2016-04-01

    Limited by the long acquisition time of 2D ghost imaging, current ghost imaging systems are so far inapplicable to dynamic scenes. However, it has been demonstrated that natural images are spatiotemporally redundant and that the redundancy is scene dependent. Inspired by that, we propose a content-adaptive computational ghost imaging approach to achieve high reconstruction quality under a small number of measurements, and thus achieve ghost imaging of dynamic scenes. To utilize content-adaptive inter-frame redundancy, we cast the reconstruction as an iterative reweighted optimization, with non-uniform weights computed from temporally correlated frame sequences. The proposed approach can achieve dynamic imaging at 16 fps with 64×64-pixel resolution. PMID:27137022

  2. Onboard Image Processing System for Hyperspectral Sensor.

    PubMed

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which lead to reducing onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281

  3. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS’s performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which lead to reducing onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281

  4. Application of adaptive optics in retinal imaging: a quantitative and clinical comparison with standard cameras

    NASA Astrophysics Data System (ADS)

    Barriga, E. S.; Erry, G.; Yang, S.; Russell, S.; Raman, B.; Soliz, P.

    2005-04-01

    Aim: The objective of this project was to evaluate high resolution images from an adaptive optics retinal imager through comparisons with standard film-based and standard digital fundus imagers. Methods: A clinical prototype adaptive optics fundus imager (AOFI) was used to collect retinal images from subjects with various forms of retinopathy to determine whether improved visibility into the disease could be provided to the clinician. The AOFI achieves low-order correction of aberrations through a closed-loop wavefront sensor and an adaptive optics system. The remaining high-order aberrations are removed by direct deconvolution using the point spread function (PSF) or by blind deconvolution when the PSF is not available. An ophthalmologist compared the AOFI images with standard fundus images and provided a clinical evaluation of all the modalities and processing techniques. All images were also analyzed using a quantitative image quality index. Results: This system has been tested on three human subjects (one normal and two with retinopathy). In the diabetic patient, vascular abnormalities were detected with the AOFI that could not be resolved with the standard fundus camera. Very small features, such as the fine vascular structures on the optic disc and the individual nerve fiber bundles, are easily resolved by the AOFI. Conclusion: This project demonstrated that adaptive optics images have great potential in providing clinically significant detail of anatomical and pathological structures to the ophthalmologist.

  5. Reduced beamset adaptive matched field processing

    NASA Astrophysics Data System (ADS)

    Tracey, Brian; Turaga, Srinivas; Lee, Nigel

    2003-04-01

    Matched field processing (MFP) offers the possibility of improved towed array performance at endfire through range/depth discrimination of contacts. One challenge is that arrays with limited vertical aperture can often resolve only a small number of multipath arrivals. This paper explores ways to capture the array resolution by re-parametrizing the set of MFP replicas. A reduced beamset can be created by performing a singular value decomposition on the MFP replica set. Alternatively, clustering techniques can be used to generate MFP cell families, or regions of similar response. These parametrizations are applied to adaptive MFP algorithms to show speed and performance gains. The use of cell families/regions instead of individual MFP cells also provides a framework for increasing the robustness of MFP by defocusing the MFP beamforming operation. The techniques are demonstrated for shallow-water towed array scenarios. [Work sponsored by DARPA-ATO under Air Force Contract No. F19628-00-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the Department of Defense. Approved for Public Release, Distribution Unlimited.]
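
    As a rough sketch of re-parametrizing a replica set with a singular value decomposition, the code below keeps the left singular vectors that capture a chosen fraction of the replica energy and uses them as a reduced beamset; the replica matrix here is synthetic and the energy threshold is an assumption.

        import numpy as np

        def reduced_beamset(replicas, energy=0.99):
            """Compress a matched-field replica set with an SVD and return a reduced beamset.

            replicas : (n_sensors, n_cells) matrix, one normalized replica vector per search cell
            """
            u, s, vt = np.linalg.svd(replicas, full_matrices=False)
            frac = np.cumsum(s ** 2) / np.sum(s ** 2)
            k = int(np.searchsorted(frac, energy)) + 1
            basis = u[:, :k]                        # reduced beamset spanning the replica space
            coords = basis.conj().T @ replicas      # each cell's coordinates in the reduced space
            return basis, coords

        # Example with a synthetic replica matrix for a 32-element array and 2000 cells.
        rng = np.random.default_rng(0)
        R = rng.standard_normal((32, 2000)) + 1j * rng.standard_normal((32, 2000))
        R /= np.linalg.norm(R, axis=0)
        basis, coords = reduced_beamset(R)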

  6. Flood adaptive traits and processes: an overview.

    PubMed

    Voesenek, Laurentius A C J; Bailey-Serres, Julia

    2015-04-01

    Unanticipated flooding challenges plant growth and fitness in natural and agricultural ecosystems. Here we describe mechanisms of developmental plasticity and metabolic modulation that underpin adaptive traits and acclimation responses to waterlogging of root systems and submergence of aerial tissues. This includes insights into processes that enhance ventilation of submerged organs. At the intersection between metabolism and growth, submergence survival strategies have evolved involving an ethylene-driven and gibberellin-enhanced module that regulates growth of submerged organs. Opposing regulation of this pathway is facilitated by a subgroup of ethylene-response transcription factors (ERFs), which include members that require low O₂ or low nitric oxide (NO) conditions for their stabilization. These transcription factors control genes encoding enzymes required for anaerobic metabolism as well as proteins that fine-tune their function in transcription and turnover. Other mechanisms that control metabolism and growth at seed, seedling and mature stages under flooding conditions are reviewed, as well as findings demonstrating that true endurance of submergence includes an ability to restore growth following the deluge. Finally, we highlight molecular insights obtained from natural variation of domesticated and wild species that occupy different hydrological niches, emphasizing the value of understanding natural flooding survival strategies in efforts to stabilize crop yields in flood-prone environments. PMID:25580769

  7. Differential morphology and image processing.

    PubMed

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision. PMID:18285181
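
    One of the discrete min-sum difference equations discussed here is the classical two-pass chamfer recursion, which approximates a distance transform by propagating local minima forward and backward over the image. The sketch below (plain Python/numpy, with illustrative 1/1.4 local distance weights) is a minimal instance of that idea, not the paper's exact formulation.

        import numpy as np

        def chamfer_distance(mask):
            """Two-pass min-sum recursion approximating the Euclidean distance
            transform of a binary mask (distance to the nearest True pixel)."""
            INF = 1e9
            h, w = mask.shape
            d = np.where(mask, 0.0, INF)
            # Forward pass: propagate distances from the top-left neighbours.
            for i in range(h):
                for j in range(w):
                    if i > 0:
                        d[i, j] = min(d[i, j], d[i - 1, j] + 1.0)
                        if j > 0:
                            d[i, j] = min(d[i, j], d[i - 1, j - 1] + 1.4)
                        if j < w - 1:
                            d[i, j] = min(d[i, j], d[i - 1, j + 1] + 1.4)
                    if j > 0:
                        d[i, j] = min(d[i, j], d[i, j - 1] + 1.0)
            # Backward pass: propagate distances from the bottom-right neighbours.
            for i in range(h - 1, -1, -1):
                for j in range(w - 1, -1, -1):
                    if i < h - 1:
                        d[i, j] = min(d[i, j], d[i + 1, j] + 1.0)
                        if j > 0:
                            d[i, j] = min(d[i, j], d[i + 1, j - 1] + 1.4)
                        if j < w - 1:
                            d[i, j] = min(d[i, j], d[i + 1, j + 1] + 1.4)
                    if j < w - 1:
                        d[i, j] = min(d[i, j], d[i, j + 1] + 1.0)
            return d

        mask = np.zeros((5, 7), dtype=bool)
        mask[2, 3] = True
        print(np.round(chamfer_distance(mask), 1))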

  8. Associative architecture for image processing

    NASA Astrophysics Data System (ADS)

    Adar, Rutie; Akerib, Avidan

    1997-09-01

    This article presents a new generation of parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the XiumTM-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can be programmed to implement a wide range of color image processing, computer vision and media processing functions in real time. The associative part of the chip is based on the patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each with 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. At only 33 MHz and in a 0.6 micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string search operations per second. The fully programmable nature of the XiumTM-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.

  9. Probing the functions of contextual modulation by adapting images rather than observers

    PubMed Central

    Webster, Michael A.

    2014-01-01

    Countless visual aftereffects have illustrated how visual sensitivity and perception can be biased by adaptation to the recent temporal context. This contextual modulation has been proposed to serve a variety of functions, but the actual benefits of adaptation remain uncertain. We describe an approach we have recently developed for exploring these benefits by adapting images instead of observers, to simulate how images should appear under theoretically optimal states of adaptation. This allows the long-term consequences of adaptation to be evaluated in ways that are difficult to probe by adapting observers, and provides a common framework for understanding how visual coding changes when the environment or the observer changes, or for evaluating how the effects of temporal context depend on different models of visual coding or the adaptation processes. The approach is illustrated for the specific case of adaptation to color, for which the initial neural coding and adaptation processes are relatively well understood, but can in principle be applied to examine the consequences of adaptation for any stimulus dimension. A simple calibration that adjusts each neuron’s sensitivity according to the stimulus level it is exposed to is sufficient to normalize visual coding and generate a host of benefits, from increased efficiency to perceptual constancy to enhanced discrimination. This temporal normalization may also provide an important precursor for the effective operation of contextual mechanisms operating across space or feature dimensions. To the extent that the effects of adaptation can be predicted, images from new environments could be “pre-adapted” to match them to the observer, eliminating the need for observers to adapt. PMID:25281412

  10. Image super-resolution via adaptive filtering and regularization

    NASA Astrophysics Data System (ADS)

    Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming

    2014-11-01

    Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from a low-resolution image coupled with some prior knowledge as a regularization term. One classic method regularizes the image by total variation (TV) and/or a wavelet or other transform, which can introduce artifacts. To overcome these shortcomings, a new framework for single-image SR is proposed that applies an adaptive filter before regularization. The key of our model is that the adaptive filter is used to remove the spatial correlation among pixels first, and then only the high frequency (HF) part, which is sparser in the TV and transform domains, is used as the regularization term. Concretely, by transforming the original model, the SR problem can be solved via two alternating sub-problems. Before each iteration, the adaptive filter is updated to estimate the initial HF. A high quality HF part and HR image are obtained by solving the first and second sub-problem, respectively. In the experimental part, a set of remote sensing images captured by Landsat satellites is used to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with state-of-the-art methods.

  11. Digital processing of radiographic images

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

    Some techniques for the digital enhancement of radiographs are presented, together with the software documentation. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
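
    As a rough illustration of why a recursive (IIR) spatial-domain filter can be cheaper than FFT-based convolution, the sketch below applies a separable first-order recursive smoother whose per-pixel cost is constant regardless of the effective filter length. It assumes numpy and is not the matched-filter design described in the report.

        import numpy as np

        def iir_smooth_1d(x, alpha):
            """First-order recursive (IIR) exponential smoother, run causally and
            then anti-causally so the overall response is roughly symmetric."""
            y = np.empty_like(x, dtype=float)
            acc = x[0]
            for i, v in enumerate(x):            # causal pass
                acc = alpha * v + (1.0 - alpha) * acc
                y[i] = acc
            acc = y[-1]
            for i in range(len(x) - 1, -1, -1):  # anti-causal pass
                acc = alpha * y[i] + (1.0 - alpha) * acc
                y[i] = acc
            return y

        def iir_smooth_2d(img, alpha=0.3):
            """Separable recursive smoothing: O(1) work per pixel per pass,
            independent of the effective filter length, unlike FFT convolution."""
            tmp = np.apply_along_axis(iir_smooth_1d, 1, img.astype(float), alpha)
            return np.apply_along_axis(iir_smooth_1d, 0, tmp, alpha)

        img = np.random.rand(64, 64)
        print(iir_smooth_2d(img).shape)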

  12. Experiments with recursive estimation in astronomical image processing

    NASA Technical Reports Server (NTRS)

    Busko, I.

    1992-01-01

    Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historic reasons for applying these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even in modern days, when large computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or position in a 2-D image). Many image processing methods make underlying stationarity assumptions either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image with a processor whose properties are tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by phenomena such as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio and the autocorrelation function. Software was developed under IRAF, and as such will be made available to interested users.
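
    A simple example of adaptive processing driven by local statistics is a Lee-type filter, in which the amount of smoothing at each pixel depends on the locally estimated signal variance relative to the noise. The sketch below (numpy and scipy assumed; window size and noise estimate are illustrative) is not the IRAF software described here, only an illustration of the local-statistics idea.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def adaptive_local_stats_filter(img, window=7, noise_var=None):
            """Locally adaptive (Lee-type) smoother: smoothing strength at each
            pixel follows the local signal-to-noise estimate, so the processor
            adapts to non-stationary image statistics."""
            img = img.astype(float)
            mean = uniform_filter(img, window)
            sq_mean = uniform_filter(img**2, window)
            var = np.maximum(sq_mean - mean**2, 0.0)
            if noise_var is None:
                noise_var = np.median(var)          # crude global noise estimate
            gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
            return mean + gain * (img - mean)

        noisy = np.random.rand(128, 128) + 0.1 * np.random.randn(128, 128)
        print(adaptive_local_stats_filter(noisy).shape)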

  13. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    PubMed Central

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique using a modified quality measure and a hybrid of cuckoo search and particle swarm optimization (CS-PSO) to adaptively enhance low-contrast images. Contrast enhancement is obtained by a global transformation of the input intensities; the method employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension of a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods such as the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928
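
    Only the transformation step is easy to sketch: the regularised incomplete Beta function provides a monotone S-shaped grey-level mapping whose parameters (a, b) are what a metaheuristic such as CS-PSO would tune against an image-quality criterion. The example below assumes numpy and scipy; the parameter values and the toy image are illustrative.

        import numpy as np
        from scipy.special import betainc

        def incomplete_beta_enhance(img, a, b):
            """Global grey-level transform based on the regularised incomplete
            Beta function; (a, b) are the parameters an optimizer would search
            over to maximise an image-quality criterion."""
            x = img.astype(float)
            x = (x - x.min()) / (x.max() - x.min() + 1e-12)   # normalise to [0, 1]
            y = betainc(a, b, x)                              # monotone S-shaped mapping
            return (255.0 * y).astype(np.uint8)

        img = (np.random.rand(64, 64) * 80 + 60).astype(np.uint8)  # low-contrast toy image
        enhanced = incomplete_beta_enhance(img, a=2.0, b=2.0)
        print(img.std(), enhanced.std())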

  14. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique using a modified quality measure and a hybrid of cuckoo search and particle swarm optimization (CS-PSO) to adaptively enhance low-contrast images. Contrast enhancement is obtained by a global transformation of the input intensities; the method employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension of a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods such as the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928

  15. Contrast-based sensorless adaptive optics for retinal imaging.

    PubMed

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T O; He, Zheng; Metha, Andrew

    2015-09-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes. PMID:26417525

  16. Seismic Imaging Processing and Migration

    2000-06-26

    Salvo is a 3D, finite difference, prestack, depth migration code for parallel computers. It is also capable of processing 2D and poststack data. The code requires as input a seismic dataset, a velocity model and a file of parameters that allows the user to select various options. The code uses this information to produce a seismic image. Some of the options available to the user include the application of various filters and imaging conditions. The code also incorporates phase encoding (patent applied for) to process multiple shots simultaneously.

  17. Adaptive, predictive controller for optimal process control

    SciTech Connect

    Brown, S.K.; Baum, C.C.; Bowling, P.S.; Buescher, K.L.; Hanagandi, V.M.; Hinde, R.F. Jr.; Jones, R.D.; Parkinson, W.J.

    1995-12-01

    One can derive a model for use in a Model Predictive Controller (MPC) from first principles or from experimental data. Until recently, both methods failed for all but the simplest processes. First principles are almost always incomplete, and fitting to experimental data fails for dimensions greater than one as well as for non-linear cases. Several authors have suggested the use of a neural network to fit the experimental data to a multi-dimensional and/or non-linear model. Most networks, however, use simple sigmoid functions and backpropagation for fitting. Training of these networks generally requires large amounts of data and, consequently, very long training times. In 1993 we reported on the tuning and optimization of a negative ion source using a special neural network[2]. One of the properties of this network (CNLSnet), a modified radial basis function network, is that it is able to fit data with few basis functions. Another is that its training is linear, resulting in guaranteed convergence and rapid training. We found the training to be rapid enough to support real-time control. This work has been extended to incorporate this network into an MPC, using the model built by the network for predictive control. This controller has shown some remarkable capabilities in such non-linear applications as continuous stirred exothermic tank reactors and high-purity fractional distillation columns[3]. The controller is able not only to build an appropriate model from operating data but also to thin the network continuously so that the model adapts to changing plant conditions. The controller is discussed, as well as its possible use in various difficult control problems that face this community.
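
    The key property exploited here, that a radial basis function model is linear in its output weights, can be illustrated with an ordinary least-squares fit. The sketch below (numpy assumed; data, centers and widths are invented for illustration and are unrelated to the ion-source application) shows why training is fast and convergence is guaranteed.

        import numpy as np

        def rbf_design_matrix(X, centers, width):
            """Gaussian radial basis functions evaluated at the inputs X."""
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * width ** 2))

        # Toy process data (purely illustrative).
        rng = np.random.default_rng(1)
        X = rng.uniform(-1, 1, size=(200, 2))
        y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(200)

        centers = X[rng.choice(len(X), 20, replace=False)]   # few basis functions
        Phi = rbf_design_matrix(X, centers, width=0.4)

        # Because the output is linear in the weights, training is an ordinary
        # least-squares solve: fast, convex and guaranteed to converge.
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        pred = Phi @ w
        print(np.sqrt(np.mean((pred - y) ** 2)))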

  18. Imaging-Based Treatment Adaptation in Radiation Oncology.

    PubMed

    Troost, Esther G C; Thorwarth, Daniela; Oyen, Wim J G

    2015-12-01

    In many tumor types, significant effort is being put into patient-tailored adaptation of treatment to improve outcome and preferably reduce toxicity. These opportunities first arose with the introduction of modern irradiation techniques (e.g., intensity-modulated radiotherapy) combined with functional imaging for more precise delineation of target volume. On the basis of functional CT, MRI, and PET results, radiation target volumes are altered during the course of treatment, or subvolumes inside the primary tumor are defined to enhance the dosing strategy. Moreover, the probability of complications to normal tissues is predicted using anatomic or functional imaging, such as in the use of CT or PET to predict radiation pneumonitis. Besides focusing, monitoring, and adapting photon therapy for solid tumors, PET also has a role in verifying proton-beam therapy. This article discusses the current state and remaining challenges of imaging-based treatment adaptation in radiation oncology. PMID:26429959

  19. Fingerprint recognition using image processing

    NASA Astrophysics Data System (ADS)

    Dholay, Surekha; Mishra, Akassh A.

    2011-06-01

    Fingerprint recognition is concerned with the difficult task of efficiently matching an image of a person's fingerprint against the fingerprints stored in a database. It is used in forensic science to help identify criminals and in the authentication of individuals, since a fingerprint is unique to each person. This paper describes fingerprint recognition methods using various edge detection techniques and shows how a fingerprint can be detected correctly from camera images. The described method does not require a special device; a simple camera is sufficient, so the technique can also be used with a simple camera phone. Factors affecting the process include poor illumination, noise, viewpoint dependence, climate factors and imaging conditions, so various image enhancement techniques must be applied to increase image quality and suppress noise. The paper describes a technique of contour tracking on the fingerprint image, followed by edge detection on the contour and matching of the edges inside the contour.
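
    A minimal sketch of the edge-detection stage is given below using OpenCV 4.x (histogram equalization and blurring to counter poor illumination and noise, then Canny edges and contour extraction). The file path is a placeholder, and the matching step against a database is not shown.

        import cv2
        import numpy as np

        # 'fingerprint.png' is a placeholder path; replace with a real camera image.
        img = cv2.imread('fingerprint.png', cv2.IMREAD_GRAYSCALE)
        if img is None:
            raise SystemExit('provide a fingerprint image captured with a camera')

        # Basic enhancement to counter poor illumination and noise before matching.
        img = cv2.equalizeHist(img)
        img = cv2.GaussianBlur(img, (5, 5), 0)

        # Edge map of the ridge pattern; the extracted contours can then be
        # tracked and compared against edge maps stored in the database.
        edges = cv2.Canny(img, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        print(len(contours))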

  20. Modular and Adaptive Control of Sound Processing

    NASA Astrophysics Data System (ADS)

    van Nort, Douglas

    parameters. In this view, desired gestural dynamics and sonic response are achieved through modular construction of mapping layers that are themselves subject to parametric control. Complementing this view of the design process, the work concludes with an approach in which the creation of gestural control/sound dynamics are considered in the low-level of the underlying sound model. The result is an adaptive system that is specialized to noise-based transformations that are particularly relevant in an electroacoustic music context. Taken together, these different approaches to design and evaluation result in a unified framework for creation of an instrumental system. The key point is that this framework addresses the influence that mapping structure and control dynamics have on the perceived feel of the instrument. Each of the results illustrate this using either top-down or bottom-up approaches that consider musical control context, thereby pointing to the greater potential for refined sonic articulation that can be had by combining them in the design process.

  1. Spatially adaptive migration tomography for multistatic GPR imaging

    DOEpatents

    Paglieroni, David W; Beer, N. Reginald

    2013-08-13

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.

  2. Adaptation of web pages and images for mobile applications

    NASA Astrophysics Data System (ADS)

    Kopf, Stephan; Guthier, Benjamin; Lemelson, Hendrik; Effelsberg, Wolfgang

    2009-02-01

    In this paper, we introduce our new visualization service, which presents web pages and images on arbitrary devices with differing display resolutions. We analyze the layout of a web page and simplify its structure and formatting rules, so that the small screen of a mobile device is used much more effectively. Our new image adaptation service combines several techniques. In a first step, border regions which do not contain relevant semantic content are identified, and cropping is used to remove them. Attention objects are identified in a second step. We use face detection, text detection and contrast-based saliency maps to identify these objects and combine them into a region of interest. Optionally, the seam carving technique can be used to remove inner parts of an image. Additionally, we have developed a software tool to validate, add, delete, or modify all automatically extracted data. This tool also simulates different mobile devices, so that the user gets a feeling of how an adapted web page will look. We have performed user studies to evaluate our web and image adaptation approach. Questions regarding software ergonomics, quality of the adapted content, and perceived benefit of the adaptation were asked.
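
    The contrast-based saliency step can be approximated very simply by measuring each pixel's colour distance from the mean image colour and cropping to the most salient region. The sketch below (numpy and OpenCV assumed) is only a stand-in for the combination of face detection, text detection and saliency used in the paper; the threshold and smoothing parameters are arbitrary.

        import numpy as np
        import cv2

        def contrast_saliency_crop(img_bgr, keep=0.6):
            """Very simple contrast-based saliency: distance of each pixel's colour
            from the mean image colour; the most salient region becomes the ROI."""
            img = img_bgr.astype(float)
            mean = img.reshape(-1, 3).mean(axis=0)
            sal = np.linalg.norm(img - mean, axis=2)
            sal = cv2.GaussianBlur(sal, (31, 31), 0)
            thresh = np.quantile(sal, 1.0 - keep)
            ys, xs = np.where(sal >= thresh)
            y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
            return img_bgr[y0:y1 + 1, x0:x1 + 1]

        img = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
        print(contrast_saliency_crop(img).shape)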

  3. Registration of adaptive optics corrected retinal nerve fiber layer (RNFL) images

    PubMed Central

    Ramaswamy, Gomathy; Lombardo, Marco; Devaney, Nicholas

    2014-01-01

    Glaucoma is the leading cause of preventable blindness in the western world. Investigation of high-resolution retinal nerve fiber layer (RNFL) images in patients may lead to new indicators of its onset. Adaptive optics (AO) can provide diffraction-limited images of the retina, providing new opportunities for earlier detection of neuroretinal pathologies. However, precise processing is required to correct for three effects in sequences of AO-assisted, flood-illumination images: uneven illumination, residual image motion and image rotation. This processing can be challenging for images of the RNFL due to their low contrast and lack of clearly noticeable features. Here we develop specific processing techniques and show that their application leads to improved image quality on the nerve fiber bundles. This in turn improves the reliability of measures of fiber texture such as the correlation of Gray-Level Co-occurrence Matrix (GLCM). PMID:24940551
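
    The GLCM correlation measure mentioned here is available in scikit-image; the sketch below shows the call on a placeholder patch standing in for a registered, illumination-corrected RNFL region (in versions before 0.19 the functions are spelled greycomatrix/greycoprops). The distances and angles are illustrative.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        # Placeholder 8-bit patch; in practice this would be a registered RNFL region.
        patch = (np.random.rand(64, 64) * 255).astype(np.uint8)

        # Gray-Level Co-occurrence Matrix over two offsets and two directions.
        glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)

        # Correlation property used as a texture measure of the fiber bundles.
        correlation = graycoprops(glcm, 'correlation')
        print(correlation)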

  4. Computer image processing: Geologic applications

    NASA Technical Reports Server (NTRS)

    Abrams, M. J.

    1978-01-01

    Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, (2) normalization of use of ground spectral measurements. Of the two, the first technique proved to be the most successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm can be applied to both frames, and there is no seam where the two images are joined.
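
    Dark object subtraction and band ratioing are both one-line array operations; the sketch below (numpy assumed, toy bands standing in for co-registered Landsat bands) shows the order in which they are applied, since removing the scattering offset before ratioing is what makes the ratio meaningful.

        import numpy as np

        def dark_object_subtract(band):
            """Subtract the scene's darkest value as a first-order correction for
            atmospheric scattering (the 'dark object' is assumed to be truly black)."""
            return band - band.min()

        def band_ratio(band_a, band_b, eps=1e-6):
            """Ratio image that suppresses illumination/topographic shading and
            highlights spectral (mineralogical) differences between two bands."""
            return dark_object_subtract(band_a.astype(float)) / (
                dark_object_subtract(band_b.astype(float)) + eps)

        # Toy two-band scene; in practice these would be co-registered Landsat bands.
        b4 = np.random.randint(20, 255, size=(100, 100)).astype(float)
        b5 = np.random.randint(20, 255, size=(100, 100)).astype(float)
        ratio = band_ratio(b4, b5)
        print(ratio.mean())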

  5. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
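
    The remapping step can be illustrated with horizontal DPCM, which typically lowers first-order entropy on smooth imagery; measuring entropy before and after is how such remappings are compared. The sketch below assumes numpy; the synthetic image, and the absence of the segmentation and arithmetic-coding stages, make it an illustration only.

        import numpy as np

        def entropy_bits(values):
            """First-order entropy (bits/symbol) of an integer array."""
            _, counts = np.unique(values, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

        def dpcm_remap(img):
            """Horizontal DPCM: replace each pixel by its prediction residual.
            The residual image usually has much lower entropy than the original."""
            img = img.astype(np.int32)
            residual = img.copy()
            residual[:, 1:] = img[:, 1:] - img[:, :-1]
            return residual

        # Smooth synthetic image standing in for satellite data.
        img = np.clip(np.cumsum(np.random.randn(128, 128), axis=1) * 5 + 128, 0, 255).astype(np.uint8)
        print('raw  entropy :', entropy_bits(img))
        print('dpcm entropy :', entropy_bits(dpcm_remap(img)))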

  6. Adaptive polyphase subband decomposition structures for image compression.

    PubMed

    Gerek, O N; Cetin, A E

    2000-01-01

    Subband decomposition techniques have been extensively used for data coding and analysis. In most filter banks, the goal is to obtain subsampled signals corresponding to different spectral regions of the original data. However, this approach leads to various artifacts in images having spatially varying characteristics, such as images containing text, subtitles, or sharp edges. In this paper, adaptive filter banks with perfect reconstruction property are presented for such images. The filters of the decomposition structure which can be either linear or nonlinear vary according to the nature of the signal. This leads to improved image compression ratios. Simulation examples are presented. PMID:18262904
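
    Perfect reconstruction in a polyphase decomposition is easiest to see in lifting form: whatever predict and update operators are chosen (an adaptive scheme would switch them with local signal content), the inverse simply undoes the steps in reverse order. The sketch below uses fixed Haar-like lifting steps and numpy; it is not the adaptive filter bank proposed in the paper.

        import numpy as np

        def haar_lift_forward(x):
            """Polyphase (lifting) Haar analysis: split into even/odd phases,
            predict the odd samples from the even ones, then update."""
            even, odd = x[0::2].astype(float), x[1::2].astype(float)
            detail = odd - even            # predict step
            approx = even + 0.5 * detail   # update step
            return approx, detail

        def haar_lift_inverse(approx, detail):
            """Invert the lifting steps exactly, giving perfect reconstruction
            regardless of how the predict/update operators are chosen."""
            even = approx - 0.5 * detail
            odd = detail + even
            x = np.empty(even.size + odd.size)
            x[0::2], x[1::2] = even, odd
            return x

        x = np.random.randint(0, 256, size=16)
        a, d = haar_lift_forward(x)
        print(np.allclose(haar_lift_inverse(a, d), x))   # True: perfect reconstruction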

  7. Performance of the Gemini Planet Imager's adaptive optics system.

    PubMed

    Poyneer, Lisa A; Palmer, David W; Macintosh, Bruce; Savransky, Dmitry; Sadakuni, Naru; Thomas, Sandrine; Véran, Jean-Pierre; Follette, Katherine B; Greenbaum, Alexandra Z; Ammons, S Mark; Bailey, Vanessa P; Bauman, Brian; Cardwell, Andrew; Dillon, Daren; Gavel, Donald; Hartung, Markus; Hibon, Pascale; Perrin, Marshall D; Rantakyrö, Fredrik T; Sivaramakrishnan, Anand; Wang, Jason J

    2016-01-10

    The Gemini Planet Imager's adaptive optics (AO) subsystem was designed specifically to facilitate high-contrast imaging. A definitive description of the system's algorithms and technologies as built is given. 564 AO telemetry measurements from the Gemini Planet Imager Exoplanet Survey campaign are analyzed. The modal gain optimizer tracks changes in atmospheric conditions. Science observations show that image quality can be improved with the use of both the spatially filtered wavefront sensor and linear-quadratic-Gaussian control of vibration. The error budget indicates that for all targets and atmospheric conditions AO bandwidth error is the largest term. PMID:26835769

  8. Real-time 3D adaptive filtering for portable imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often not able to run with sufficient performance on a portable platform. In recent years, advanced multicore DSPs have been introduced that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms like 3D adaptive filtering, improving the image quality of portable medical imaging devices. In this study, the performance of a 3D adaptive filtering algorithm on a digital signal processor (DSP) is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a pace of 10 MVoxels/sec.

  9. Linear Algebra and Image Processing

    ERIC Educational Resources Information Center

    Allali, Mohamed

    2010-01-01

    We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)

  10. Concept Learning through Image Processing.

    ERIC Educational Resources Information Center

    Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

    This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

  11. Guided Adaptive Image Smoothing via Directional Anisotropic Structure Measurement.

    PubMed

    Zang, Yu; Huang, Hua; Zhang, Lei

    2015-09-01

    Image smoothing requires a good metric to identify dominant structures and separate them from textures, adaptive to intensity contrast. In this paper, we propose a novel directional anisotropic structure measurement (DASM) for adaptive image smoothing. Based on observations from psychological perception regarding anisotropy, non-periodicity and local directionality, DASM can well characterize structures and textures independently of their contrast scales. By using such a measurement as a constraint, we design a guided adaptive image smoothing scheme by improving extrema localization and envelope construction in a structure-aware manner. Our approach well suppresses the staircase-like artifacts and blur of structures that appear in previous methods, which better suits the structure-preserving image smoothing task. The algorithm is performed on a space-filling curve as the reduced domain, so it is very fast and easy to implement in practice. We make comprehensive comparisons with previous state-of-the-art methods for a variety of applications. Experimental results demonstrate the merit of using our DASM as a metric to identify structures, and the effectiveness and efficiency of our adaptive image smoothing approach in producing commendable results. PMID:26357284

  12. Adaptive deformable image registration of inhomogeneous tissues

    NASA Astrophysics Data System (ADS)

    Ren, Jing

    2015-03-01

    Physics-based deformable registration can provide physically consistent image matching of deformable soft tissues. In order to help radiologists/surgeons determine the status of malignant tumors, we often need to accurately align the regions with embedded tumors. This is a very challenging task since the tumor and the surrounding tissues have very different tissue properties such as stiffness and elasticity. In order to address this problem, based on the minimum strain energy principle in elasticity theory, we propose to partition the whole region of interest into smaller sub-regions and dynamically adjust the weights of vessel segments and bifurcation points in each sub-region in the registration objective function. Our previously proposed fast vessel registration is used as a component in the inner loop. We have validated the proposed method using liver MR images from human subjects. The results show that our method can detect large registration errors and improve the registration accuracy in the neighborhood of the tumors, keeping registration errors within acceptable bounds. The proposed technique has the potential to significantly improve the registration capability and the quality of clinical diagnosis and treatment planning.

  13. Adaptive Optics Technology for High-Resolution Retinal Imaging

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Devaney, Nicholas; Parravano, Mariacristina; Lombardo, Giuseppe

    2013-01-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effects of optical aberrations. The direct visualization of the photoreceptor cells, capillaries and nerve fiber bundles represents the major benefit of adding AO to retinal imaging. Adaptive optics is opening a new frontier for clinical research in ophthalmology, providing new information on the early pathological changes of the retinal microstructures in various retinal diseases. We have reviewed AO technology for retinal imaging, providing information on the core components of an AO retinal camera. The most commonly used wavefront sensing and correcting elements are discussed. Furthermore, we discuss current applications of AO imaging to a population of healthy adults and to the most frequent causes of blindness, including diabetic retinopathy, age-related macular degeneration and glaucoma. We conclude our work with a discussion on future clinical prospects for AO retinal imaging. PMID:23271600

  14. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the LIGIV laboratory concern the capture, processing, archiving and display of color images, considering the trichromatic nature of the Human Visual System (HVS). Among these projects, one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Regions of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display media changes. This requires firstly the definition of a reference color space and the definition of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and image post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but additionally need to consider mesopic viewing conditions.

  15. Multistatic adaptive microwave imaging for early breast cancer detection.

    PubMed

    Xie, Yao; Guo, Bin; Xu, Luzhou; Li, Jian; Stoica, Petre

    2006-08-01

    We propose a new multistatic adaptive microwave imaging (MAMI) method for early breast cancer detection. MAMI is a two-stage robust Capon beamforming (RCB) based image formation algorithm. MAMI exhibits higher resolution, lower sidelobes, and better noise and interference rejection capabilities than the existing approaches. The effectiveness of using MAMI for breast cancer detection is demonstrated via a simulated 3-D breast model and several numerical examples. PMID:16916099

  16. Adaptation Processes in Chinese: Word Formation.

    ERIC Educational Resources Information Center

    Pasierbsky, Fritz

    The typical pattern of Chinese word formation is to have native material adapt to changed circumstances. The Chinese language neither borrows nor lends words, but it does occasionally borrow concepts. The larger cultural pattern in which this occurs is that the Chinese culture borrows, if necessary, but ensures that the act of borrowing does not…

  17. Adaptive mesh refinement for stochastic reaction-diffusion processes

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2011-01-01

    We present an algorithm for adaptive mesh refinement applied to mesoscopic stochastic simulations of spatially evolving reaction-diffusion processes. The transition rates for the diffusion process are derived on adaptive, locally refined structured meshes. Convergence of the diffusion process is presented and the fluctuations of the stochastic process are verified. Furthermore, a refinement criterion is proposed for the evolution of the adaptive mesh. The method is validated in simulations of reaction-diffusion processes as described by the Fisher-Kolmogorov and Gray-Scott equations.

  18. A systematic process for adaptive concept exploration

    NASA Astrophysics Data System (ADS)

    Nixon, Janel Nicole

    several common challenges to the creation of quantitative modeling and simulation environments. Namely, a greater number of alternative solutions imply a greater number of design variables as well as larger ranges on those variables. This translates to a high-dimension combinatorial problem. As the size and dimensionality of the solution space gets larger, the number of physically impossible solutions within that space greatly increases. Thus, the ratio of feasible design space to infeasible space decreases, making it much harder to not only obtain a good quantitative sample of the space, but to also make sense of that data. This is especially the case in the early stages of design, where it is not practical to dedicate a great deal of resources to performing thorough, high-fidelity analyses on all the potential solutions. To make quantitative analyses feasible in these early stages of design, a method is needed that allows for a relatively sparse set of information to be collected quickly and efficiently, and yet, that information needs to be meaningful enough with which to base a decision. The method developed to address this need uses a Systematic Process for Adaptive Concept Exploration (SPACE). In the SPACE method, design space exploration occurs in a sequential fashion; as data is acquired, the sampling scheme adapts to the specific problem at hand. Previously gathered data is used to make inferences about the nature of the problem so that future samples can be taken from the more interesting portions of the design space. Furthermore, the SPACE method identifies those analyses that have significant impacts on the relationships being modeled, so that effort can be focused on acquiring only the most pertinent information. The SPACE method uses a four-part sampling scheme to efficiently uncover the parametric relationships between the design variables and responses. Step 1 aims to identify the location of infeasible space within the region of interest using an initial

  19. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  20. eXtreme Adaptive Optics Planet Imager: Overview and status

    SciTech Connect

    Macintosh, B A; Bauman, B; Evans, J W; Graham, J; Lockwood, C; Poyneer, L; Dillon, D; Gavel, D; Green, J; Lloyd, J; Makidon, R; Olivier, S; Palmer, D; Perrin, M; Severson, S; Sheinis, A; Sivaramakrishnan, A; Sommargren, G; Soumer, R; Troy, M; Wallace, K; Wishnow, E

    2004-08-18

    As adaptive optics (AO) matures, it becomes possible to envision AO systems oriented towards specific important scientific goals rather than general-purpose systems. One such goal for the next decade is the direct imaging detection of extrasolar planets. An 'extreme' adaptive optics (ExAO) system optimized for extrasolar planet detection will have very high actuator counts and rapid update rates - designed for observations of bright stars - and will require exquisite internal calibration at the nanometer level. In addition to extrasolar planet detection, such a system will be capable of characterizing dust disks around young or mature stars, outflows from evolved stars, and high Strehl ratio imaging even at visible wavelengths. The NSF Center for Adaptive Optics has carried out a detailed conceptual design study for such an instrument, dubbed the eXtreme Adaptive Optics Planet Imager or XAOPI. XAOPI is a 4096-actuator AO system, notionally for the Keck telescope, capable of achieving contrast ratios >10{sup 7} at angular separations of 0.2-1'. ExAO system performance analysis is quite different than conventional AO systems - the spatial and temporal frequency content of wavefront error sources is as critical as their magnitude. We present here an overview of the XAOPI project, and an error budget highlighting the key areas determining achievable contrast. The most challenging requirement is for residual static errors to be less than 2 nm over the controlled range of spatial frequencies. If this can be achieved, direct imaging of extrasolar planets will be feasible within this decade.

  1. Colored adaptive compressed imaging with a single photodiode.

    PubMed

    Yan, Yiyun; Dai, Huidong; Liu, Xingjiong; He, Weiji; Chen, Qian; Gu, Guohua

    2016-05-10

    Computational ghost imaging is commonly used to reconstruct grayscale images. Currently, however, there is little research aimed at reconstructing color images. In this paper, we theoretically and experimentally demonstrate a colored adaptive compressed imaging method. Benefiting from imaging in YUV color space, the proposed method adequately exploits the sparsity of the U, V components in the wavelet domain, the interdependence between luminance and chrominance, and human visual characteristics. The simulation and experimental results show that our method greatly reduces the measurements required and offers better image quality compared to recovering the red (R), green (G), and blue (B) components separately in RGB color space. As applications of single-photodiode imaging increase, our method shows great potential in many fields. PMID:27168280
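
    The change of working colour space is a fixed linear transform; the sketch below uses standard BT.601-style RGB/YUV matrices (the exact constants used in the paper are not specified here) to show that the conversion is invertible, so measuring in YUV and returning to RGB costs nothing in accuracy.

        import numpy as np

        # Approximate BT.601 RGB <-> YUV matrices; the values are illustrative only.
        RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                            [-0.147, -0.289,  0.436],
                            [ 0.615, -0.515, -0.100]])
        YUV2RGB = np.linalg.inv(RGB2YUV)

        def rgb_to_yuv(img):
            return img.astype(float) @ RGB2YUV.T

        def yuv_to_rgb(img):
            return img @ YUV2RGB.T

        rgb = np.random.rand(32, 32, 3)
        yuv = rgb_to_yuv(rgb)
        # The U and V chrominance planes are typically much sparser in the wavelet
        # domain than Y, so they can be recovered from fewer measurements.
        print(np.allclose(yuv_to_rgb(yuv), rgb))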

  2. High-resolution adaptive imaging with a single photodiode

    NASA Astrophysics Data System (ADS)

    Soldevila, F.; Salvador-Balaguer, E.; Clemente, P.; Tajahuerce, E.; Lancis, J.

    2015-09-01

    During the past few years, the emergence of spatial light modulators operating at the tens of kHz has enabled new imaging modalities based on single-pixel photodetectors. The nature of single-pixel imaging enforces a reciprocal relationship between frame rate and image size. Compressive imaging methods allow images to be reconstructed from a number of projections that is only a fraction of the number of pixels. In microscopy, single-pixel imaging is capable of producing images with a moderate size of 128 × 128 pixels at frame rates under one Hz. Recently, there has been considerable interest in the development of advanced techniques for high-resolution real-time operation in applications such as biological microscopy. Here, we introduce an adaptive compressive technique based on wavelet trees within this framework. In our adaptive approach, the resolution of the projecting patterns remains deliberately small, which is crucial to avoid the demanding memory requirements of compressive sensing algorithms. At pattern projection rates of 22.7 kHz, our technique would enable to obtain 128 × 128 pixel images at frame rates around 3 Hz. In our experiments, we have demonstrated a cost-effective solution employing a commercial projection display.

  3. High-resolution adaptive imaging with a single photodiode

    PubMed Central

    Soldevila, F.; Salvador-Balaguer, E.; Clemente, P.; Tajahuerce, E.; Lancis, J.

    2015-01-01

    During the past few years, the emergence of spatial light modulators operating at the tens of kHz has enabled new imaging modalities based on single-pixel photodetectors. The nature of single-pixel imaging enforces a reciprocal relationship between frame rate and image size. Compressive imaging methods allow images to be reconstructed from a number of projections that is only a fraction of the number of pixels. In microscopy, single-pixel imaging is capable of producing images with a moderate size of 128 × 128 pixels at frame rates under one Hz. Recently, there has been considerable interest in the development of advanced techniques for high-resolution real-time operation in applications such as biological microscopy. Here, we introduce an adaptive compressive technique based on wavelet trees within this framework. In our adaptive approach, the resolution of the projecting patterns remains deliberately small, which is crucial to avoid the demanding memory requirements of compressive sensing algorithms. At pattern projection rates of 22.7 kHz, our technique would enable to obtain 128 × 128 pixel images at frame rates around 3 Hz. In our experiments, we have demonstrated a cost-effective solution employing a commercial projection display. PMID:26382114

  4. Augmenting synthetic aperture radar with space time adaptive processing

    NASA Astrophysics Data System (ADS)

    Riedl, Michael; Potter, Lee C.; Ertin, Emre

    2013-05-01

    Wide-area persistent radar video offers the ability to track moving targets. A shortcoming of the current technology is an inability to maintain track when Doppler shift places moving target returns co-located with strong clutter. Further, the high down-link data rate required for wide-area imaging presents a stringent system bottleneck. We present a multi-channel approach to augment the synthetic aperture radar (SAR) modality with space time adaptive processing (STAP) while constraining the down-link data rate to that of a single antenna SAR system. To this end, we adopt a multiple transmit, single receive (MISO) architecture. A frequency division design for orthogonal transmit waveforms is presented; the approach maintains coherence on clutter, achieves the maximal unaliased band of radial velocities, retains full resolution SAR images, and requires no increase in receiver data rate vis-a-vis the wide-area SAR modality. For Nt transmit antennas and N samples per pulse, the enhanced sensing provides a STAP capability with Nt times larger range bins than the SAR mode, at the cost of O(log N) more computations per pulse. The proposed MISO system and the associated signal processing are detailed, and the approach is numerically demonstrated via simulation of an airborne X-band system.

  5. An adaptive algorithm for motion compensated color image coding

    NASA Technical Reports Server (NTRS)

    Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming

    1987-01-01

    This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.
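
    Motion compensation rests on block matching; the sketch below shows a full-search sum-of-absolute-differences matcher over a +/-7 pixel window (numpy assumed, block and window sizes illustrative). The paper's adaptive variable-stage search would shrink or stop such a search early for low-activity blocks rather than always scanning the full window.

        import numpy as np

        def block_motion_vector(prev, curr, top, left, block=16, search=7):
            """Full-search block matching over a +/-search window; an adaptive
            coder can reduce the window (variable-stage search) for low-activity
            blocks to save computation."""
            ref = curr[top:top + block, left:left + block].astype(float)
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = top + dy, left + dx
                    if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                        continue
                    cand = prev[y:y + block, x:x + block].astype(float)
                    sad = np.abs(ref - cand).sum()       # sum of absolute differences
                    if sad < best:
                        best, best_mv = sad, (dy, dx)
            return best_mv, best

        prev = np.random.randint(0, 256, (64, 64))
        curr = np.roll(prev, (2, -3), axis=(0, 1))       # synthetic global motion
        print(block_motion_vector(prev, curr, 16, 16))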

  6. An image adaptive, wavelet-based watermarking of digital images

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

    In digital management, multimedia content and data can easily be used in an illegal way--being copied, modified and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers, distributors and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermark techniques are emerging as a valid solution. In this paper, we describe an algorithm--called WM2.0--for an invisible watermark: private, strong, wavelet-based and developed for digital image protection and authenticity. Using the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its good match with human visual system directives. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into the high frequency DWT components of a specific sub-image and is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the method to be resistant against geometric, filtering and StirMark attacks with a low rate of false alarm.

  7. AO-OCT for in vivo mouse retinal imaging: Application of adaptive lens in wavefront sensorless aberration correction

    NASA Astrophysics Data System (ADS)

    Bonora, Stefano; Jian, Yifan; Pugh, Edward N.; Sarunic, Marinko V.; Zawadzki, Robert J.

    2014-03-01

    We demonstrate adaptive optics optical coherence tomography (AO-OCT) with modal sensorless adaptive optics correction using a novel adaptive lens (AL), applied to in-vivo imaging of mouse retinas. The AL can generate the low-order aberrations defocus, astigmatism, coma and spherical aberration, which were used in an adaptive search algorithm. Accelerated processing of the OCT data with a graphics processing unit (GPU) permitted real-time extraction of the total intensity of the image projection for an arbitrarily selected retinal depth plane to be optimized. Wavefront sensorless control is a viable option for imaging biological structures for which AO-OCT cannot establish a reliable wavefront to be corrected by the wavefront corrector. The image quality improvement offered by the adaptive lens with sensorless AO-OCT was evaluated on in vitro samples, followed by mouse retina data acquired in vivo.

  8. Review of Medical Image Classification using the Adaptive Neuro-Fuzzy Inference System

    PubMed Central

    Hosseini, Monireh Sheikh; Zekri, Maryam

    2012-01-01

    Image classification is an issue that utilizes image processing, pattern recognition and classification methods. Automatic medical image classification is a progressive area in image classification, and it is expected to be more developed in the future. Because of this fact, automatic diagnosis can assist pathologists by providing second opinions and reducing their workload. This paper reviews the application of the adaptive neuro-fuzzy inference system (ANFIS) as a classifier in medical image classification during the past 16 years. ANFIS is a fuzzy inference system (FIS) implemented in the framework of an adaptive fuzzy neural network. It combines the explicit knowledge representation of an FIS with the learning power of artificial neural networks. The objective of ANFIS is to integrate the best features of fuzzy systems and neural networks. A brief comparison with other classifiers, main advantages and drawbacks of this classifier are investigated. PMID:23493054

  9. Adaptive optics with pupil tracking for high resolution retinal imaging

    PubMed Central

    Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

    2012-01-01

    Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics. PMID:22312577

  10. Digital adaptive optics line-scanning confocal imaging system.

    PubMed

    Liu, Changgeng; Kim, Myung K

    2015-01-01

    A digital adaptive optics line-scanning confocal imaging (DAOLCI) system is proposed by applying digital holographic adaptive optics to a digital form of line-scanning confocal imaging system. In DAOLCI, each line scan is recorded by a digital hologram, which allows access to the complex optical field from one slice of the sample through digital holography. This complex optical field contains both the information of one slice of the sample and the optical aberration of the system, thus allowing us to compensate for the effect of the optical aberration, which can be sensed by a complex guide star hologram. After numerical aberration compensation, the corrected optical fields of a sequence of line scans are stitched into the final corrected confocal image. In DAOLCI, a numerical slit is applied to realize the confocality at the sensor end. The width of this slit can be adjusted to control the image contrast and speckle noise for scattering samples. DAOLCI dispenses with the hardware pieces, such as Shack–Hartmann wavefront sensor and deformable mirror, and the closed-loop feedbacks adopted in the conventional adaptive optics confocal imaging system, thus reducing the optomechanical complexity and cost. Numerical simulations and proof-of-principle experiments are presented that demonstrate the feasibility of this idea. PMID:26140334

  11. Digital adaptive optics line-scanning confocal imaging system

    NASA Astrophysics Data System (ADS)

    Liu, Changgeng; Kim, Myung K.

    2015-11-01

    A digital adaptive optics line-scanning confocal imaging (DAOLCI) system is proposed by applying digital holographic adaptive optics to a digital form of line-scanning confocal imaging system. In DAOLCI, each line scan is recorded by a digital hologram, which allows access to the complex optical field from one slice of the sample through digital holography. This complex optical field contains both the information of one slice of the sample and the optical aberration of the system, thus allowing us to compensate for the effect of the optical aberration, which can be sensed by a complex guide star hologram. After numerical aberration compensation, the corrected optical fields of a sequence of line scans are stitched into the final corrected confocal image. In DAOLCI, a numerical slit is applied to realize the confocality at the sensor end. The width of this slit can be adjusted to control the image contrast and speckle noise for scattering samples. DAOLCI dispenses with the hardware pieces, such as Shack-Hartmann wavefront sensor and deformable mirror, and the closed-loop feedbacks adopted in the conventional adaptive optics confocal imaging system, thus reducing the optomechanical complexity and cost. Numerical simulations and proof-of-principle experiments are presented that demonstrate the feasibility of this idea.

  12. An adaptive multi-feature segmentation model for infrared image

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa

    2016-04-01

    Active contour models (ACMs) have been extensively applied to image segmentation, but conventional region-based active contour models utilize only global or local single-feature information to minimize the energy functional driving the contour evolution. Considering the limitations of the original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistical features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, an adaptive weight coefficient is used to modify the level set formulation, which integrates the MFSPF built from local statistical features with a signed pressure function based on global information. Experimental results demonstrate that the proposed method makes up for the inadequacy of the original methods and achieves desirable results in segmenting infrared images.

  13. Objective assessment of image quality. IV. Application to adaptive optics

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher

    2008-01-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed. PMID:17106464
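
    As a rough illustration of the Hotelling observer mentioned above, the optimal linear template can be estimated from two sets of sample images (signal-present and signal-absent); the diagonal regularizer and the restriction to small regions of interest are assumptions of this sketch, not part of the paper.

      import numpy as np

      def hotelling_template(signal_present, signal_absent, reg=1e-6):
          # sample images as rows, shape (n_images, n_pixels); practical only for
          # small regions of interest in this naive form
          g1 = signal_present.mean(axis=0)
          g0 = signal_absent.mean(axis=0)
          pooled = np.concatenate([signal_present - g1, signal_absent - g0], axis=0)
          K = np.cov(pooled, rowvar=False)                           # pooled covariance
          K += reg * np.trace(K) / K.shape[0] * np.eye(K.shape[0])   # regularizer (assumption)
          return np.linalg.solve(K, g1 - g0)                         # w = K^{-1} (g1 - g0)

      # For a new image g (flattened), the detection test statistic is t = w @ g.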

  14. Multispectral image processing: the nature factor

    NASA Astrophysics Data System (ADS)

    Watkins, Wendell R.

    1998-09-01

    The images processed by our brain represent our window into the world. For some animals this window is derived from a single eye; for others, including humans, two eyes provide stereo imagery; for others, like the black widow spider, several eyes are used (eight); and some insects, like the common housefly, utilize thousands of eyes (ommatidia). Still other animals, like the bat and dolphin, have eyes for regular vision but employ acoustic sonar for seeing where their regular eyes do not work, such as in pitch-black caves or turbid water. Of course, other animals have adapted to dark environments by bringing along their own lighting, such as the firefly and several creatures from the depths of the ocean. Animal vision is truly varied and has developed over millennia in many remarkable ways. We have learned a lot about vision processes by studying these animal systems and can still learn even more.

  15. Vehicle positioning using image processing

    NASA Astrophysics Data System (ADS)

    Kaur, Amardeep; Watkins, Steve E.; Swift, Theresa M.

    2009-03-01

    An image-processing approach is described that detects the position of a vehicle on a bridge. A load-bearing vehicle must be carefully positioned on a bridge for quantitative bridge monitoring. The personnel required for setup and testing and the time required for bridge closure or traffic control are important management and cost considerations. Consequently, bridge monitoring and inspections are good candidates for smart embedded systems. The objectives of this work are to reduce the need for personnel time and to minimize the time for bridge closure. An approach is proposed that uses a passive target on the bridge and camera instrumentation on the load vehicle. The orientation of the vehicle-mounted camera and the target determine the position. The experiment used pre-defined concentric circles as the target, a FireWire camera for image capture, and MATLAB for computer processing. Various image-processing techniques are compared for determining the orientation of the target circles with respect to speed and accuracy in the positioning application. The techniques for determining the target orientation use algorithms based on the centroid feature, template matching, color features, and Hough transforms. Timing parameters are determined for each algorithm to assess its feasibility for real-time use in a position-triggering system. Also, the effect of variations in the size and color of the circles is examined. The development can be combined with embedded sensors and sensor nodes for a completely automated procedure. As the load vehicle moves to the proper position, the image-based system can trigger an embedded measurement, which is then transmitted back to the vehicle control computer through a wireless link.
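
    As one hedged example of the centroid-feature technique compared above, the target position can be estimated by thresholding the circle target and taking the centroid of the resulting pixels; the dark-target-on-bright-background assumption and the names are illustrative.

      import numpy as np

      def target_centroid(gray):
          # gray: 2-D grayscale frame containing the concentric-circle target
          mask = gray < gray.mean()          # crude threshold, assumes a dark target
          ys, xs = np.nonzero(mask)
          return xs.mean(), ys.mean()        # centroid in pixel coordinates

      # The offset between the centroid and the expected image location indicates
      # vehicle/target misalignment and can be used to trigger the measurement.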

  16. Adaptive Processes in Thalamus and Cortex Revealed by Silencing of Primary Visual Cortex during Contrast Adaptation.

    PubMed

    King, Jillian L; Lowe, Matthew P; Stover, Kurt R; Wong, Aimee A; Crowder, Nathan A

    2016-05-23

    Visual adaptation illusions indicate that our perception is influenced not only by the current stimulus but also by what we have seen in the recent past. Adaptation to stimulus contrast (the relative luminance created by edges or contours in a scene) induces the perception of the stimulus fading away and increases the contrast detection threshold in psychophysical tests [1, 2]. Neural correlates of contrast adaptation have been described throughout the visual system including the retina [3], dorsal lateral geniculate nucleus (dLGN) [4, 5], primary visual cortex (V1) [6], and parietal cortex [7]. The apparent ubiquity of adaptation at all stages raises the question of how this process cascades across brain regions [8]. Focusing on V1, adaptation could be inherited from pre-cortical stages, arise from synaptic depression at the thalamo-cortical synapse [9], or develop locally, but what is the weighting of these contributions? Because contrast adaptation in mouse V1 is similar to classical animal models [10, 11], we took advantage of the optogenetic tools available in mice to disentangle the processes contributing to adaptation in V1. We disrupted cortical adaptation by optogenetically silencing V1 and found that adaptation measured in V1 now resembled that observed in dLGN. Thus, the majority of adaptation seen in V1 neurons arises through local activity-dependent processes, with smaller contributions from dLGN inheritance and synaptic depression at the thalamo-cortical synapse. Furthermore, modeling indicates that divisive scaling of the weakly adapted dLGN input can predict some of the emerging features of V1 adaptation. PMID:27112300

  17. Imaging of retinal vasculature using adaptive optics SLO/OCT

    PubMed Central

    Felberer, Franz; Rechenmacher, Matthias; Haindl, Richard; Baumann, Bernhard; Hitzenberger, Christoph K.; Pircher, Michael

    2015-01-01

    We use our previously developed adaptive optics (AO) scanning laser ophthalmoscope (SLO)/optical coherence tomography (OCT) instrument to investigate its capability for imaging retinal vasculature. The system records SLO and OCT images simultaneously with a pixel-to-pixel correspondence, which allows a direct comparison between those imaging modalities. Different fields of view ranging from 0.8°×0.8° up to 4°×4° are supported by the instrument. In addition, a dynamic focus scheme was developed for the AO-SLO/OCT system in order to maintain the high transverse resolution throughout the imaging depth. The active axial eye tracking that is implemented in the OCT channel allows time-resolved measurements of the retinal vasculature in the en-face imaging plane. Vessel walls and structures that we believe correspond to individual erythrocytes could be visualized with the system. PMID:25909024

  18. A novel spatially adaptive guide-filter total variation (SAGFTV) regularization for image restoration

    NASA Astrophysics Data System (ADS)

    Fang, Hao; Li, Qian; Huang, Zhenghua

    2015-12-01

    Denoising algorithms based on gradient-dependent energy functionals, such as Perona-Malik, total variation, and adaptive total variation denoising, modify images towards piecewise constant functions. Although edge sharpness and location are well preserved, important information encoded in image features such as textures or fine details is often compromised in the process of denoising. In this paper, we propose a novel spatially adaptive guide-filter total variation (SAGFTV) regularization with an image restoration algorithm for denoising images. The guide filter is extended to the variational formulation of the imaging problem, and the spatially adaptive operator can easily distinguish flat areas from textured areas. Our simulation experiments show improvements in peak signal-to-noise ratio (PSNR), root mean square error (RMSE), and structural similarity (SSIM) over prior algorithms, and the results of both simulated and practical experiments are visually more appealing. This type of processing can be used for a variety of tasks in PDE-based image processing and computer vision, and is stable and meaningful from a mathematical viewpoint.
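
    The guided-filter and spatially adaptive weighting are the paper's contribution and are not reproduced here; the sketch below shows only the underlying (non-adaptive) total-variation gradient descent on the ROF energy that such regularizers build on. Parameter values are illustrative.

      import numpy as np

      def tv_denoise(f, lam=0.1, n_iter=100, dt=0.2, eps=1e-6):
          # minimize TV(u) + lam/2 * ||u - f||^2 by explicit gradient descent
          u = f.astype(float).copy()
          for _ in range(n_iter):
              uy, ux = np.gradient(u)
              mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
              # divergence of the normalised gradient field (curvature term)
              div = np.gradient(uy / mag, axis=0) + np.gradient(ux / mag, axis=1)
              u += dt * (div - lam * (u - f))
          return u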

  19. Image processing photosensor for robots

    NASA Astrophysics Data System (ADS)

    Vinogradov, Sergey L.; Shubin, Vitaly E.

    1995-01-01

    Some aspects of the possible applications of new, nontraditional generation of the advanced photosensors having the inherent internal image processing for multifunctional optoelectronic systems such as machine vision systems (MVS) are discussed. The optical information in these solid-state photosensors, so-called photoelectric structures with memory (PESM), is registered and stored in the form of 2D charge and potential patterns in the plane of the layers, and then it may be transferred and transformed in a normal direction due to interaction of these patterns. PESM ensure high operation potential of the massively parallel processing with effective rate up to 10^14 operations/bit/s in such integral operations as addition, subtraction, contouring, correlation of images and so on. Most diverse devices and apparatus may be developed on their base, ranging from automatic rangefinders to the MVS for furnishing robotized industries. Principal features, physical backgrounds of the main primary operations, complex functional algorithms for object selection, tracking, and guidance are briefly described. The examples of the possible application of the PESM as an intellectual 'supervideosensor', that combines a high-quality imager, memory media and a high-capacity special-purpose processor will be presented.

  20. Adaptive Constructive Processes and the Future of Memory

    ERIC Educational Resources Information Center

    Schacter, Daniel L.

    2012-01-01

    Memory serves critical functions in everyday life but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, and illusions. The article describes several types of memory errors that are produced by adaptive constructive processes…

  1. Adaptation to Work: An Exploration of Processes and Outcomes.

    ERIC Educational Resources Information Center

    Ashley, William L.; And Others

    A study of adaptation to work as both a process and an outcome was conducted. The study was conducted by personal interview that probed adaptation with respect to work's organizational, performance, interpersonal, responsibility, and affective aspects; and by questionnaire using the same aspects. The population studied consisted of persons without…

  2. Techniques for radar imaging using a wideband adaptive array

    NASA Astrophysics Data System (ADS)

    Curry, Mark Andrew

    A microwave imaging approach is simulated and validated experimentally that uses a small, wideband adaptive array. The experimental 12-element linear array and microwave receiver uses stepped frequency CW signals from 2--3 GHz and receives backscattered energy from short range objects in a +/-90° field of view. Discone antenna elements are used due to their wide temporal bandwidth, isotropic azimuth beam pattern and fixed phase center. It is also shown that these antennas have very low mutual coupling, which significantly reduces the calibration requirements. The MUSIC spectrum is used as a calibration tool. Spatial resampling is used to correct the dispersion effects, which if not compensated causes severe reduction in detection and resolution for medium and large off-axis angles. Fourier processing provides range resolution and the minimum variance spectral estimate is employed to resolve constant range targets for improved angular resolution. Spatial smoothing techniques are used to generate signal plus interference covariance matrices at each range bin. Clutter affects the angular resolution of the array due to the increase in rank of the signal plus clutter covariance matrix, whereas at the same time the rank of this matrix is reduced for closely spaced scatterers due to signal coherence. A method is proposed to enhance angular resolution in the presence of clutter by an approximate signal subspace projection (ASSP) that maps the received signal space to a lower effective rank approximation. This projection operator has a scalar control parameter that is a function of the signal and clutter amplitude estimates. These operations are accomplished without using eigendecomposition. The low sidelobe levels allow the imaging of the integrated backscattering from the absorber cones in the chamber. This creates a fairly large clutter signature for testing ASSP. We can easily resolve 2 dihedrals placed at about 70% of a beamwidth apart, with a signal to clutter ratio
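
    For orientation, the minimum variance (Capon) angular spectrum used to resolve constant-range targets can be sketched as follows for a uniform linear array; the diagonal loading, half-wavelength spacing, and variable names are assumptions of this sketch.

      import numpy as np

      def capon_spectrum(R, angles_deg, n_elements, d_over_lambda=0.5):
          # R: spatially smoothed covariance matrix for one range bin
          load = 1e-6 * np.real(np.trace(R)) / n_elements
          R_inv = np.linalg.inv(R + load * np.eye(n_elements))   # diagonal loading (assumption)
          n = np.arange(n_elements)
          spectrum = []
          for theta in np.deg2rad(angles_deg):
              a = np.exp(2j * np.pi * d_over_lambda * n * np.sin(theta))  # steering vector
              spectrum.append(1.0 / np.real(a.conj() @ R_inv @ a))
          return np.array(spectrum)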

  3. Raster image adaptation for mobile devices using profiles

    NASA Astrophysics Data System (ADS)

    Rosenbaum, René; Hamann, Bernd

    2012-02-01

    Focusing on digital imagery, this paper introduces a strategy to handle heterogeneous hardware in mobile environments. Constrained system resources of most mobile viewing devices require contents that are tailored to the requirements of the user and the capabilities of the device. Appropriate image adaptation is still an unsolved research question. Due to the complexity of the problem, available solutions are either too resource-intensive or inflexible to be more generally applicable. The proposed approach is based on scalable image compression and progressive refinement as well as data and user profiles. A scalable image is created once and used multiple times for different kinds of devices and user requirements. Profiles available on the server side allow for an image representation that is adapted to the most important resources in mobile computing: screen space, computing power, and the volume of the transmitted data. Options for progressively refining content thereby allow for a fluent viewing experience during adaptation. Due to its flexibility and low complexity, the proposed solution is much more general compared to related approaches. To document the advantages of our approach we provide empirical results obtained in experiments with an implementation of the method.

  4. Adapting smartphones for low-cost optical medical imaging

    NASA Astrophysics Data System (ADS)

    Pratavieira, Sebastião.; Vollet-Filho, José D.; Carbinatto, Fernanda M.; Blanco, Kate; Inada, Natalia M.; Bagnato, Vanderlei S.; Kurachi, Cristina

    2015-06-01

    Optical images have been used in several medical situations to improve diagnosis of lesions or to monitor treatments. However, most systems employ expensive scientific (CCD or CMOS) cameras and need computers to display and save the images, usually resulting in a high final cost for the system. Additionally, this sort of apparatus operation usually becomes more complex, requiring more and more specialized technical knowledge from the operator. Currently, the number of people using smartphone-like devices with built-in high quality cameras is increasing, which might allow using such devices as an efficient, lower cost, portable imaging system for medical applications. Thus, we aim to develop methods of adaptation of those devices to optical medical imaging techniques, such as fluorescence. Particularly, smartphones covers were adapted to connect a smartphone-like device to widefield fluorescence imaging systems. These systems were used to detect lesions in different tissues, such as cervix and mouth/throat mucosa, and to monitor ALA-induced protoporphyrin-IX formation for photodynamic treatment of Cervical Intraepithelial Neoplasia. This approach may contribute significantly to low-cost, portable and simple clinical optical imaging collection.

  5. Incorporating Adaptive Local Information Into Fuzzy Clustering for Image Segmentation.

    PubMed

    Liu, Guoying; Zhang, Yun; Wang, Aimin

    2015-11-01

    Fuzzy c-means (FCM) clustering with spatial constraints has attracted great attention in the field of image segmentation. However, most of the popular techniques fail to resolve misclassification problems due to the inaccuracy of their spatial models. This paper presents a new unsupervised FCM-based image segmentation method by paying closer attention to the selection of local information. In this method, region-level local information is incorporated into the fuzzy clustering procedure to adaptively control the range and strength of interactive pixels. First, a novel dissimilarity function is established by combining region-based and pixel-based distance functions together, in order to enhance the relationship between pixels which have similar local characteristics. Second, a novel prior probability function is developed by integrating the differences between neighboring regions into the mean template of the fuzzy membership function, which adaptively selects local spatial constraints by a tradeoff weight depending upon whether a pixel belongs to a homogeneous region or not. Through incorporating region-based information into the spatial constraints, the proposed method strengthens the interactions between pixels within the same region and prevents over smoothing across region boundaries. Experimental results over synthetic noise images, natural color images, and synthetic aperture radar images show that the proposed method achieves more accurate segmentation results, compared with five state-of-the-art image segmentation methods. PMID:26186787
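
    The region-level dissimilarity and prior probability terms are specific to the paper; for context, the standard (non-spatial) fuzzy c-means updates that such methods extend might be sketched as follows.

      import numpy as np

      def fuzzy_cmeans(x, n_clusters=3, m=2.0, n_iter=50, seed=0):
          # x: feature vectors (e.g. pixel intensities), shape (n_pixels, n_features)
          rng = np.random.default_rng(seed)
          u = rng.random((x.shape[0], n_clusters))
          u /= u.sum(axis=1, keepdims=True)                       # random fuzzy memberships
          for _ in range(n_iter):
              um = u ** m
              centers = (um.T @ x) / um.sum(axis=0)[:, None]      # weighted centroids
              d = np.linalg.norm(x[:, None, :] - centers[None], axis=2) + 1e-10
              u = d ** (-2.0 / (m - 1.0))
              u /= u.sum(axis=1, keepdims=True)                   # standard membership update
          return u, centers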

  6. Bayer patterned high dynamic range image reconstruction using adaptive weighting function

    NASA Astrophysics Data System (ADS)

    Kang, Hee; Lee, Suk Ho; Song, Ki Sun; Kang, Moon Gi

    2014-12-01

    It is not easy to acquire a desired high dynamic range (HDR) image directly from a camera due to the limited dynamic range of most image sensors. Therefore, generally, a post-process called HDR image reconstruction is used, which reconstructs an HDR image from a set of differently exposed images to overcome the limited dynamic range. However, conventional HDR image reconstruction methods suffer from noise factors and ghost artifacts. This is due to the fact that the input images taken with a short exposure time contain much noise in the dark regions, which contributes to increased noise in the corresponding dark regions of the reconstructed HDR image. Furthermore, since input images are acquired at different times, the images contain different motion information, which results in ghost artifacts. In this paper, we propose an HDR image reconstruction method which reduces the impact of the noise factors and prevents ghost artifacts. To reduce the influence of the noise factors, the weighting function, which determines the contribution of a certain input image to the reconstructed HDR image, is designed to adapt to the exposure time and local motions. Furthermore, the weighting function is designed to exclude ghosting regions by considering the differences of the luminance and the chrominance values between several input images. Unlike conventional methods, which generally work on a color image processed by the image processing module (IPM), the proposed method works directly on the Bayer raw image. This allows for a linear camera response function and also improves the efficiency in hardware implementation. Experimental results show that the proposed method can reconstruct high-quality Bayer patterned HDR images while being robust against ghost artifacts and noise factors.
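
    A hedged sketch of the generic weighted-merge step that such HDR reconstruction performs on linear (e.g. Bayer raw) frames is shown below; the hat-shaped weight is a common placeholder and does not reproduce the paper's exposure- and motion-adaptive weighting or ghost exclusion.

      import numpy as np

      def merge_hdr(images, exposure_times, weight_fn=None):
          # images: aligned linear frames of the same scene with values in [0, 1]
          if weight_fn is None:
              # trust mid-tones, down-weight near-black and near-saturated pixels
              weight_fn = lambda z: 1.0 - np.abs(2.0 * z - 1.0)
          num = np.zeros_like(images[0], dtype=float)
          den = np.zeros_like(images[0], dtype=float)
          for img, t in zip(images, exposure_times):
              w = weight_fn(img)
              num += w * img / t              # per-frame radiance estimate
              den += w
          return num / np.maximum(den, 1e-8)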

  7. Adaptive Noise Suppression Using Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Kozel, David; Nelson, Richard

    1996-01-01

    A signal-to-noise ratio dependent adaptive spectral subtraction algorithm is developed to eliminate noise from noise-corrupted speech signals. The algorithm determines the signal-to-noise ratio and adjusts the spectral subtraction proportion appropriately. After spectral subtraction, low amplitude signals are squelched. A single microphone is used to obtain both the noise-corrupted speech and the average noise estimate. This is done by determining whether the frame of data being sampled is a voiced or unvoiced frame. During unvoiced frames an estimate of the noise is obtained, and a running average of the noise is used to approximate its expected value. Applications include the emergency egress vehicle and the crawler transporter.
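
    A minimal single-frame sketch of magnitude spectral subtraction with squelching of low-amplitude bins follows; the fixed subtraction factor alpha stands in for the SNR-dependent proportion described above, and all names are illustrative.

      import numpy as np

      def spectral_subtract(frame, noise_mag, alpha=1.0, floor=0.02):
          # frame: one windowed time-domain frame of noisy speech
          # noise_mag: running-average noise magnitude spectrum (len(frame)//2 + 1 bins)
          spec = np.fft.rfft(frame)
          mag, phase = np.abs(spec), np.angle(spec)
          clean_mag = np.maximum(mag - alpha * noise_mag, floor * mag)  # subtract, then squelch
          return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(frame))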

  8. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
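
    A minimal sketch of how a quantization matrix is applied to block DCT coefficients (JPEG-style) is given below; deriving Q from the luminance, contrast-masking, and error-pooling model is the invention's contribution and is not shown.

      import numpy as np
      from scipy.fft import dctn, idctn

      def quantize_block(block, Q):
          # block: 8x8 image block; Q: 8x8 visually weighted quantization matrix
          coeffs = dctn(block, norm='ortho')
          return np.round(coeffs / Q)

      def dequantize_block(q_coeffs, Q):
          return idctn(q_coeffs * Q, norm='ortho')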

  9. Adaptive filtering for reduction of speckle in ultrasonic pulse-echo images.

    PubMed

    Bamber, J C; Daft, C

    1986-01-01

    Current medical ultrasonic scanning instrumentation permits the display of fine image detail (speckle) which does not transfer useful information but degrades the apparent low contrast resolution in the image. An adaptive two-dimensional filter has been developed which uses local features of image texture to recognize and maximally low-pass filter those parts of the image which correspond to fully developed speckle, while substantially preserving information associated with resolved-object structure. A first implementation of the filter is described which uses the ratio of the local variance and the local mean as the speckle recognition feature. Preliminary results of applying this form of display processing to medical ultrasound images are very encouraging; it appears that the visual perception of features such as small discrete structures, subtle fluctuations in mean echo level and changes in image texture may be enhanced relative to that for unprocessed images. PMID:3510500
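
    A rough sketch of the variance-to-mean speckle feature and the adaptive low-pass idea follows; the expected speckle ratio, the Gaussian weighting, and the window size are illustrative placeholders, not the authors' values.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def adaptive_speckle_filter(img, size=7, speckle_ratio=1.0, sigma=0.3):
          img = img.astype(float)
          mean = uniform_filter(img, size)
          var = np.maximum(uniform_filter(img ** 2, size) - mean ** 2, 0)
          feature = var / np.maximum(mean, 1e-8)       # local variance-to-mean ratio
          # smooth strongly where the ratio matches fully developed speckle,
          # keep the original data where resolved structure is indicated
          w = np.exp(-0.5 * ((feature - speckle_ratio) / sigma) ** 2)
          return w * mean + (1 - w) * img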

  10. Adaptive Memory: Is Survival Processing Special?

    ERIC Educational Resources Information Center

    Nairne, James S.; Pandeirada, Josefa N. S.

    2008-01-01

    Do the operating characteristics of memory continue to bear the imprints of ancestral selection pressures? Previous work in our laboratory has shown that human memory may be specially tuned to retain information processed in terms of its survival relevance. A few seconds of survival processing in an incidental learning context can produce recall…

  11. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  12. Design of adaptive objective lens for ultrabroad near infrared imaging

    NASA Astrophysics Data System (ADS)

    Lan, Gongpu; Li, Guoqiang

    2016-03-01

    We present a compound adaptive objective lens in which a water-filled membrane lens is inserted between a front group (one lens) and a back group (two lenses). This adaptive objective lens works in the ultrabroad near infrared waveband (760 nm ~ 920 nm) with a volume scan of > 1 mm^3 and a resolution of 2.8 μm (calculated at the wavelength of 840 nm). The focal range is 19.5 mm ~ 20.5 mm and the numerical aperture is 0.196. The size of the adaptive lens is 10 mm (diameter) × 17 mm (length). This kind of lens can be widely used in three-dimensional (3D) volume biomedical imaging instruments, such as confocal microscopy, optical coherence tomography (OCT), and two-photon microscopy.

  13. Image processing technique for arbitrary image positioning in holographic stereogram

    NASA Astrophysics Data System (ADS)

    Kang, Der-Kuan; Yamaguchi, Masahiro; Honda, Toshio; Ohyama, Nagaaki

    1990-12-01

    In a one-step holographic stereogram, if the series of original images are used just as they are taken from perspective views, three-dimensional images are usually reconstructed in back of the hologram plane. In order to enhance the sense of perspective of the reconstructed images and minimize blur of the interesting portions, we introduce an image processing technique for making a one-step flat format holographic stereogram in which three-dimensional images can be observed at an arbitrary specified position. Experimental results show the effect of the image processing. Further, we show results of a medical application using this image processing.

  14. Multiwavelength adaptive optical fundus camera and continuous retinal imaging

    NASA Astrophysics Data System (ADS)

    Yang, Han-sheng; Li, Min; Dai, Yun; Zhang, Yu-dong

    2009-08-01

    We have constructed a new version of a retinal imaging system that takes chromatic aberration into account; the corresponding optical design presented in this article is based on the adaptive optics fundus camera modality. In our system, three typical wavelengths of 550 nm, 650 nm, and 480 nm were selected. Longitudinal chromatic aberration (LCA) was minimized using the ZEMAX program. The whole setup was evaluated on human subjects, and retinal imaging was performed at continuous frame rates up to 20 Hz. Raw videos at parafoveal locations were collected, and cone mosaics as well as retinal vasculature were clearly observed in a single clip. In addition, comparisons under different illumination conditions were made to confirm our design. Image contrast and the Strehl ratio were effectively increased after dynamic correction of high-order aberrations. This system is expected to bring new applications in functional imaging of the human retina.

  15. A modified Richardson-Lucy algorithm for single image with adaptive reference maps

    NASA Astrophysics Data System (ADS)

    Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2014-06-01

    In this paper, we propose a modified non-blind Richardson-Lucy algorithm using adaptive reference maps as local constraint to reduce noise and ringing artifacts effectively. The deconvolution process can be divided into two stages. In the first deblurring stage, the reference map is estimated from the blurred image and an intermediate deblurred result is obtained. And then the adaptive reference map is updated according to both the blurred image and the deblurred result of the first stage to produce a more accurate edge description, which is very helpful to suppress the ringing around edges. Gaussian image prior is adopted as the regularization to improve the standard Richardson-Lucy algorithm. Experimental results show that the presented approach could suppress the negative ringing artifacts effectively as well as preserve the edge information, even if the blurred image contains rich textures.
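
    For reference, a standard Richardson-Lucy iteration (without the adaptive reference maps or the Gaussian prior added in the paper) might be sketched as follows; the PSF is assumed normalized to unit sum.

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(blurred, psf, n_iter=20):
          estimate = np.full_like(blurred, blurred.mean(), dtype=float)
          psf_flip = psf[::-1, ::-1]
          for _ in range(n_iter):
              reblurred = fftconvolve(estimate, psf, mode='same')
              ratio = blurred / np.maximum(reblurred, 1e-12)
              estimate *= fftconvolve(ratio, psf_flip, mode='same')   # multiplicative update
          return estimate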

  16. Adaptive noise Wiener filter for scanning electron microscope imaging system.

    PubMed

    Sim, K S; Teh, V; Nia, M E

    2016-01-01

    Noise on scanning electron microscope (SEM) images is studied. Gaussian noise is the most common type of noise in SEM image. We developed a new noise reduction filter based on the Wiener filter. We compared the performance of this new filter namely adaptive noise Wiener (ANW) filter, with four common existing filters as well as average filter, median filter, Gaussian smoothing filter and the Wiener filter. Based on the experiments results the proposed new filter has better performance on different noise variance comparing to the other existing noise removal filters in the experiments. PMID:26235517

  17. Lidar imaging with on-the-fly adaptable spatial resolution

    NASA Astrophysics Data System (ADS)

    Riu, J.; Royo, S.

    2013-10-01

    We present our work on the design and construction of a novel type of lidar device capable of measuring 3D range images with a spatial resolution that can be reconfigured on the fly, adjustable by software and over the image area, and that can reach 2 Mpixels. A double-patented novel scanning-system concept enables the image resolution to be changed dynamically depending on external information provided by the image captured in a previous cycle or by other sensors such as greyscale or hyperspectral 2D imagers. A prototype imaging lidar system that can modify its spatial resolution on demand from one image to the next according to the target's nature and state has been developed, and indoor and outdoor sample images showing its performance are presented. Applications in object detection, tracking, and identification through a real-time scanning system adaptable to each situation and target behaviour are currently being pursued in different areas.

  18. Adaptive sigmoid function bihistogram equalization for image contrast enhancement

    NASA Astrophysics Data System (ADS)

    Arriaga-Garcia, Edgar F.; Sanchez-Yanez, Raul E.; Ruiz-Pinales, Jose; Garcia-Hernandez, Ma. de Guadalupe

    2015-09-01

    Contrast enhancement plays a key role in a wide range of applications including consumer electronic applications, such as video surveillance, digital cameras, and televisions. The main goal of contrast enhancement is to increase the quality of images. However, most state-of-the-art methods induce different types of distortion such as intensity shift, wash-out, noise, intensity burn-out, and intensity saturation. In addition, in consumer electronics, simple and fast methods are required in order to be implemented in real time. A bihistogram equalization method based on adaptive sigmoid functions is proposed. It consists of splitting the image histogram into two parts that are equalized independently by using adaptive sigmoid functions. In order to preserve the mean brightness of the input image, the parameter of the sigmoid functions is chosen to minimize the absolute mean brightness metric. Experiments on the Berkeley database have shown that the proposed method improves the quality of images and preserves their mean brightness. An application to improve the colorfulness of images is also presented.
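
    A hedged sketch of the general idea follows: split the intensity range at the image mean and remap each half through its own sigmoid. Fixing alpha is a simplification; the paper instead chooses the sigmoid parameter that minimizes the absolute mean brightness error.

      import numpy as np

      def sigmoid_map(x, lo, hi, alpha):
          # map intensities in [lo, hi] through a sigmoid centred on the sub-range
          mid = 0.5 * (lo + hi)
          rng = max(float(hi - lo), 1e-8)
          s = 1.0 / (1.0 + np.exp(-alpha * (x - mid) / rng))
          s0 = 1.0 / (1.0 + np.exp(alpha * 0.5))       # sigmoid value at x = lo
          s1 = 1.0 / (1.0 + np.exp(-alpha * 0.5))      # sigmoid value at x = hi
          return lo + (hi - lo) * (s - s0) / (s1 - s0)

      def bihistogram_sigmoid_equalize(img, alpha=4.0):
          m = img.mean()
          out = img.astype(float).copy()
          low, high = out <= m, out > m
          out[low] = sigmoid_map(out[low], out.min(), m, alpha)
          out[high] = sigmoid_map(out[high], m, out.max(), alpha)
          return out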

  19. Color Enhancement in Endoscopic Images Using Adaptive Sigmoid Function and Space Variant Color Reproduction.

    PubMed

    Imtiaz, Mohammad S; Wahid, Khan A

    2015-01-01

    Modern endoscopes play an important role in diagnosing various gastrointestinal (GI) tract related diseases. The improved visual quality of endoscopic images can provide better diagnosis. This paper presents an efficient color image enhancement method for endoscopic images. It is achieved in two stages: image enhancement at gray level followed by space variant chrominance mapping color reproduction. Image enhancement is achieved by performing adaptive sigmoid function and uniform distribution of sigmoid pixels. Secondly, a space variant chrominance mapping color reproduction is used to generate new chrominance components. The proposed method is used on low contrast color white light images (WLI) to enhance and highlight the vascular and mucosa structures of the GI tract. The method is also used to colorize grayscale narrow band images (NBI) and video frames. The focus value and color enhancement factor show that the enhancement level in the processed image is greatly increased compared to the original endoscopic image. The overall contrast level of the processed image is higher than the original image. The color similarity test has proved that the proposed method does not add any additional color which is not present in the original image. The algorithm has low complexity with an execution speed faster than other related methods. PMID:26089969

  20. In vivo imaging of human photoreceptor mosaic with wavefront sensorless adaptive optics optical coherence tomography.

    PubMed

    Wong, Kevin S K; Jian, Yifan; Cua, Michelle; Bonora, Stefano; Zawadzki, Robert J; Sarunic, Marinko V

    2015-02-01

    Wavefront sensorless adaptive optics optical coherence tomography (WSAO-OCT) is a novel imaging technique for in vivo high-resolution depth-resolved imaging that mitigates some of the challenges encountered with the use of sensor-based adaptive optics designs. This technique replaces the Hartmann Shack wavefront sensor used to measure aberrations with a depth-resolved image-driven optimization algorithm, with the metric based on the OCT volumes acquired in real-time. The custom-built ultrahigh-speed GPU processing platform and fast modal optimization algorithm presented in this paper was essential in enabling real-time, in vivo imaging of human retinas with wavefront sensorless AO correction. WSAO-OCT is especially advantageous for developing a clinical high-resolution retinal imaging system as it enables the use of a compact, low-cost and robust lens-based adaptive optics design. In this report, we describe our WSAO-OCT system for imaging the human photoreceptor mosaic in vivo. We validated our system performance by imaging the retina at several eccentricities, and demonstrated the improvement in photoreceptor visibility with WSAO compensation. PMID:25780747

  1. Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong

    2011-06-01

    With the use of adaptive optics (AO), ocular aberrations can be compensated to obtain high-resolution images of the living human retina. However, the wavefront correction is not perfect due to wavefront measurement error and hardware restrictions, so a deconvolution algorithm is needed to recover the retinal images. In this paper, a blind deconvolution technique called the Incremental Wiener filter is used to restore adaptive optics confocal scanning laser ophthalmoscope (AOSLO) images. The point-spread function (PSF) measured by the wavefront sensor is used only as an initial value for our algorithm. We also implement the Incremental Wiener filter on a graphics processing unit (GPU) in real time; when the image size is 512 × 480 pixels, six iterations of our algorithm take only about 10 ms. Retinal blood vessels as well as cells in retinal images are restored by our algorithm, and the PSFs are also refined. Retinal images with and without adaptive optics are both restored. The results show that the Incremental Wiener filter reduces noise and improves image quality.
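
    For orientation, the classical one-shot frequency-domain Wiener deconvolution that the Incremental Wiener filter builds on can be sketched as follows; the constant noise-to-signal ratio and the assumption of a known, origin-centred PSF are simplifications.

      import numpy as np

      def wiener_deconvolve(img, psf, nsr=0.01):
          # psf: point-spread function, e.g. initialised from the wavefront sensor
          H = np.fft.fft2(psf, s=img.shape)
          G = np.fft.fft2(img)
          W = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener filter, nsr = noise-to-signal ratio
          return np.real(np.fft.ifft2(W * G))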

  2. In vivo imaging of human photoreceptor mosaic with wavefront sensorless adaptive optics optical coherence tomography

    PubMed Central

    Wong, Kevin S. K.; Jian, Yifan; Cua, Michelle; Bonora, Stefano; Zawadzki, Robert J.; Sarunic, Marinko V.

    2015-01-01

    Wavefront sensorless adaptive optics optical coherence tomography (WSAO-OCT) is a novel imaging technique for in vivo high-resolution depth-resolved imaging that mitigates some of the challenges encountered with the use of sensor-based adaptive optics designs. This technique replaces the Hartmann Shack wavefront sensor used to measure aberrations with a depth-resolved image-driven optimization algorithm, with the metric based on the OCT volumes acquired in real-time. The custom-built ultrahigh-speed GPU processing platform and fast modal optimization algorithm presented in this paper was essential in enabling real-time, in vivo imaging of human retinas with wavefront sensorless AO correction. WSAO-OCT is especially advantageous for developing a clinical high-resolution retinal imaging system as it enables the use of a compact, low-cost and robust lens-based adaptive optics design. In this report, we describe our WSAO-OCT system for imaging the human photoreceptor mosaic in vivo. We validated our system performance by imaging the retina at several eccentricities, and demonstrated the improvement in photoreceptor visibility with WSAO compensation. PMID:25780747

  3. Concurrent Image Processing Executive (CIPE)

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1988-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3, Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules; (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube a data management method which distributes, redistributes, and tracks data set information was implemented.

  4. Sensory Processing Subtypes in Autism: Association with Adaptive Behavior

    ERIC Educational Resources Information Center

    Lane, Alison E.; Young, Robyn L.; Baker, Amy E. Z.; Angley, Manya T.

    2010-01-01

    Children with autism are frequently observed to experience difficulties in sensory processing. This study examined specific patterns of sensory processing in 54 children with autistic disorder and their association with adaptive behavior. Model-based cluster analysis revealed three distinct sensory processing subtypes in autism. These subtypes…

  5. Integration of AdaptiSPECT, a small-animal adaptive SPECT imaging system

    PubMed Central

    Chaix, Cécile; Kovalsky, Stephen; Kosmider, Matthew; Barrett, Harrison H.; Furenlid, Lars R.

    2015-01-01

    AdaptiSPECT is a pre-clinical adaptive SPECT imaging system under final development at the Center for Gamma-ray Imaging. The system incorporates multiple adaptive features: an adaptive aperture, 16 detectors mounted on translational stages, and the ability to switch between a non-multiplexed and a multiplexed imaging configuration. In this paper, we review the design of AdaptiSPECT and its adaptive features. We then describe the on-going integration of the imaging system. PMID:26347197

  6. Metabolic Adaptation Processes That Converge to Optimal Biomass Flux Distributions

    PubMed Central

    Altafini, Claudio; Facchetti, Giuseppe

    2015-01-01

    In simple organisms like E.coli, the metabolic response to an external perturbation passes through a transient phase in which the activation of a number of latent pathways can guarantee survival at the expense of growth. Growth is gradually recovered as the organism adapts to the new condition. This adaptation can be modeled as a process of repeated metabolic adjustments obtained through the re-silencing of non-essential metabolic reactions, using growth rate as the selection probability for the phenotypes obtained. The resulting metabolic adaptation process tends naturally to steer the metabolic fluxes towards high growth phenotypes. Quite remarkably, when applied to the central carbon metabolism of E.coli, it follows that nearly all flux distributions converge to the flux vector representing optimal growth, i.e., the solution of the biomass optimization problem turns out to be the dominant attractor of the metabolic adaptation process. PMID:26340476

  7. Compressive adaptive ghost imaging via sharing mechanism and fellow relationship.

    PubMed

    Huo, Yaoran; He, Hongjie; Chen, Fan

    2016-04-20

    For lower sampling rate and better imaging quality, a compressive adaptive ghost imaging is proposed by adopting the sharing mechanism and fellow relationship in the wavelet tree. The sharing mechanisms, including intrascale and interscale sharing mechanisms, and fellow relationship are excavated from the wavelet tree and utilized for sampling. The shared coefficients, which are part of the approximation subband, are localized according to the parent coefficients and sampled based on the interscale sharing mechanism and fellow relationship. The sampling rate can be reduced owing to the fact that some shared coefficients can be calculated by adopting the parent coefficients and the sampled sum of shared coefficients. According to the shared coefficients and parent coefficients, the proposed method predicts the positions of significant coefficients and samples them based on the intrascale sharing mechanism. The ghost image, reconstructed by the significant coefficients and the coarse image at the given largest scale, achieves better quality because the significant coefficients contain more detailed information. The simulations demonstrate that the proposed method improves the imaging quality at the same sampling rate and also achieves a lower sampling rate for the same imaging quality for different types of target object images in noise-free and noisy environments. PMID:27140111

  8. Multimodal Medical Image Fusion by Adaptive Manifold Filter.

    PubMed

    Geng, Peng; Liu, Shuaiqi; Zhuang, Shanna

    2015-01-01

    Medical image fusion plays an important role in diagnosis and treatment of diseases such as image-guided radiotherapy and surgery. The modified local contrast information is proposed to fuse multimodal medical images. Firstly, the adaptive manifold filter is introduced into filtering source images as the low-frequency part in the modified local contrast. Secondly, the modified spatial frequency of the source images is adopted as the high-frequency part in the modified local contrast. Finally, the pixel with larger modified local contrast is selected into the fused image. The presented scheme outperforms the guided filter method in spatial domain, the dual-tree complex wavelet transform-based method, nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, the mutual information values by the presented method are averagely 55%, 41%, and 62% higher than the three methods and those values of edge based similarity measure by the presented method are averagely 13%, 33%, and 14% higher than the three methods for the six pairs of source images. PMID:26664494

  9. Adaptive SPECT imaging with crossed-slit apertures

    PubMed Central

    Durko, Heather L.; Furenlid, Lars R.

    2015-01-01

    Preclinical single-photon emission computed tomography (SPECT) is an essential tool for studying the progression, response to treatment, and physiological changes in small animal models of human disease. The wide range of imaging applications is often limited by the static design of many preclinical SPECT systems. We have developed a prototype imaging system that replaces the standard static pinhole aperture with two sets of movable, keel-edged copper-tungsten blades configured as crossed (skewed) slits. These apertures can be positioned independently between the object and detector, producing a continuum of imaging configurations in which the axial and transaxial magnifications are not constrained to be equal. We incorporated a megapixel silicon double-sided strip detector to permit ultrahigh-resolution imaging. We describe the configuration of the adjustable slit aperture imaging system and discuss its application toward adaptive imaging, and reconstruction techniques using an accurate imaging forward model, a novel geometric calibration technique, and a GPU-based ultra-high-resolution reconstruction code. PMID:26190884

  10. Natural language processing and visualization in the molecular imaging domain.

    PubMed

    Tulipano, P Karina; Tao, Ying; Millar, William S; Zanzonico, Pat; Kolbert, Katherine; Xu, Hua; Yu, Hong; Chen, Lifeng; Lussier, Yves A; Friedman, Carol

    2007-06-01

    Molecular imaging is at the crossroads of genomic sciences and medical imaging. Information within the molecular imaging literature could be used to link to genomic and imaging information resources and to organize and index images in a way that is potentially useful to researchers. A number of natural language processing (NLP) systems are available to automatically extract information from genomic literature. One existing NLP system, known as BioMedLEE, automatically extracts biological information consisting of biomolecular substances and phenotypic data. This paper focuses on the adaptation, evaluation, and application of BioMedLEE to the molecular imaging domain. In order to adapt BioMedLEE for this domain, we extend an existing molecular imaging terminology and incorporate it into BioMedLEE. BioMedLEE's performance is assessed with a formal evaluation study. The system's performance, measured as recall and precision, is 0.74 (95% CI: [.70-.76]) and 0.70 (95% CI [.63-.76]), respectively. We adapt a JAVA viewer known as PGviewer for the simultaneous visualization of images with NLP extracted information. PMID:17084109

  11. On adaptive robustness approach to Anti-Jam signal processing

    NASA Astrophysics Data System (ADS)

    Poberezhskiy, Y. S.; Poberezhskiy, G. Y.

    An effective approach to exploiting statistical differences between desired and jamming signals named adaptive robustness is proposed and analyzed in this paper. It combines conventional Bayesian, adaptive, and robust approaches that are complementary to each other. This combining strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the fastest and most intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.

  12. The iterative adaptive approach in medical ultrasound imaging.

    PubMed

    Jensen, Are Charles; Austeng, Andreas

    2014-10-01

    Many medical ultrasound imaging systems are based on sweeping the image plane with a set of narrow beams. Usually, the returning echo from each of these beams is used to form one or a few azimuthal image samples. We model, for each radial distance, the full azimuthal scanline jointly. The model consists of the amplitudes of a set of densely placed potential reflectors (or scatterers), cf. sparse signal representation. To fit the model, we apply the iterative adaptive approach (IAA) to data formed by a sequenced time delay and phase shift. The performance of the IAA in combination with our time-delayed and phase-shifted data is studied on both simulated data of scenes consisting of point targets and hollow cyst-like structures, and on recorded ultrasound phantom data from a specially adapted commercially available scanner. The results show that the proposed IAA is more capable of resolving point targets and gives better-defined and more geometrically correct cyst-like structures in speckle images compared with the conventional delay-and-sum (DAS) approach. Compared with a Capon beamformer, the IAA showed an improved rendering of cyst-like structures and similar point-target resolvability. Unlike the Capon beamformer, the IAA has no user parameters and seems unaffected by signal cancellation. The disadvantage of the IAA is a high computational load. PMID:25265177

  13. Bilateral filtering and adaptive tone-mapping for qualified edge and image enhancement

    NASA Astrophysics Data System (ADS)

    Hu, Kuo-Jui; Chang, Ting-Ting; Lu, Min-Yao; Li, Wu-Jeng; Huang, Jih-Fon

    2009-01-01

    High-contrast images commonly contain both dark and bright areas, and it is difficult to present detail in both the dark and the highlight areas on display devices. To resolve this problem, we propose an image enhancement method that improves image quality and uses a bilateral filter to preserve detail. First, the bilateral filter separates the image into two parts: a large-scale image and a detail image. Second, the histogram of the large-scale image is computed, and two optimal threshold parameters are chosen to divide it into three regions: lightness, middle-tone, and darkness. Finally, a different tone-mapping method is applied to each region; the tone-mapping methods include adaptive s-curve and gamma curve algorithms. The experimental results of this study reveal improved image detail and enhancement, while contouring artifacts in the lightness region are avoided.

  14. Fast Source Camera Identification Using Content Adaptive Guided Image Filter.

    PubMed

    Zeng, Hui; Kang, Xiangui

    2016-03-01

    Source camera identification (SCI) is an important topic in image forensics. One of the most effective fingerprints for linking an image to its source camera is the sensor pattern noise, which is estimated as the difference between the content and its denoised version. It is widely believed that the performance of the sensor-based SCI heavily relies on the denoising filter used. This study proposes a novel sensor-based SCI method using content adaptive guided image filter (CAGIF). Thanks to the low complexity nature of the CAGIF, the proposed method is much faster than the state-of-the-art methods, which is a big advantage considering the potential real-time application of SCI. Despite the advantage of speed, experimental results also show that the proposed method can achieve comparable or better performance than the state-of-the-art methods in terms of accuracy. PMID:27404627

  15. Image enhancement based on gamma map processing

    NASA Astrophysics Data System (ADS)

    Tseng, Chen-Yu; Wang, Sheng-Jyh; Chen, Yi-An

    2010-05-01

    This paper proposes a novel image enhancement technique based on Gamma Map Processing (GMP). In this approach, a base gamma map is directly generated according to the intensity image. After that, a sequence of gamma map processing is performed to generate a channel-wise gamma map. Mapping through the estimated gamma, image details, colorfulness, and sharpness of the original image are automatically improved. Besides, the dynamic range of the images can be virtually expanded.
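
    A minimal sketch of applying a per-pixel, per-channel gamma map follows; the base-map heuristic in the comments is an illustrative assumption and does not reproduce the paper's GMP pipeline.

      import numpy as np

      def apply_gamma_map(channel, gamma_map):
          # channel: one colour channel normalised to [0, 1]
          # gamma_map: per-pixel gamma values for this channel
          return np.clip(channel, 1e-6, 1.0) ** gamma_map

      # Illustrative base map: darker regions receive gamma < 1 (brightened),
      # brighter regions receive gamma closer to 1.
      # intensity = rgb.mean(axis=2)
      # base_gamma = 0.5 + 0.5 * intensity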

  16. Dense and accurate motion and strain estimation in high resolution speckle images using an image-adaptive approach

    NASA Astrophysics Data System (ADS)

    Cofaru, Corneliu; Philips, Wilfried; Van Paepegem, Wim

    2011-09-01

    Digital image processing methods represent a viable and well-acknowledged alternative to strain gauges and interferometric techniques for determining full-field displacements and strains in materials under stress. This paper presents an image-adaptive technique for dense motion and strain estimation using high-resolution speckle images that show the analyzed material in its original and deformed states. The algorithm starts by dividing the speckle image showing the original state into irregular cells, taking into consideration both the spatial and gradient image information present. Subsequently the Newton-Raphson digital image correlation technique is applied to calculate the corresponding motion for each cell. Adaptive spatial regularization in the form of the Geman-McClure robust spatial estimator is employed to increase the spatial consistency of the motion components of a cell with respect to the components of neighbouring cells. To obtain the final strain information, local least-squares fitting using a linear displacement model is performed on the horizontal and vertical displacement fields. To evaluate the presented image partitioning and strain estimation techniques, two numerical and two real experiments are employed. The numerical experiments simulate the deformation of a specimen with constant strain across the surface as well as small rigid-body rotations, while the real experiments consist of specimens that undergo uniaxial stress. The results indicate very good accuracy of the recovered strains as well as better rotation insensitivity compared to classical techniques.
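
    A minimal sketch of recovering small-strain components from the estimated displacement fields is given below; central finite differences stand in for the local least-squares fit of the linear displacement model described above.

      import numpy as np

      def small_strain_fields(u, v):
          # u, v: horizontal and vertical displacement fields on a regular pixel grid
          du_dy, du_dx = np.gradient(u)
          dv_dy, dv_dx = np.gradient(v)
          exx = du_dx                         # normal strain in x
          eyy = dv_dy                         # normal strain in y
          exy = 0.5 * (du_dy + dv_dx)         # shear strain
          return exx, eyy, exy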

  17. Assessment of vessel diameters for MR brain angiography processed images

    NASA Astrophysics Data System (ADS)

    Moraru, Luminita; Obreja, Cristian-Dragos; Moldovanu, Simona

    2015-12-01

    The motivation was to develop an assessment method to measure (in)visible differences between the original and the processed images in MR brain angiography as a way of evaluating the status of the vessel segments (i.e. the existence of occlusions or intracerebral vessels damaged by aneurysms). Generally, the image quality is limited, so we improve the performance of the evaluation through digital image processing. The goal is to determine the best processing method that allows an accurate assessment of patients with cerebrovascular diseases. A total of 10 MR brain angiography images were processed by the following techniques: histogram equalization, Wiener filtering, linear contrast adjustment, contrast-limited adaptive histogram equalization, bias correction, and the Marr-Hildreth filter. Each original image and its processed versions were analyzed with a stacking procedure so that the same vessel and its corresponding diameter were measured. Original and processed images were evaluated by measuring the vessel diameter (in pixels) along an established direction and at a precise anatomic location. The vessel diameter was calculated using an ImageJ plugin. Mean diameter measurements differ significantly across the same segment and for different processing techniques. The best results are provided by the Wiener filter and linear contrast adjustment methods, and the worst by the Marr-Hildreth filter.
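
    A rough sketch of one such measurement using scikit-image rather than the ImageJ plugin used in the study; the half-maximum width criterion and the fixed profile location are assumptions:

```python
import numpy as np
from skimage import exposure

def vessel_diameter_px(image, row, col_start, col_end):
    """Apply contrast-limited adaptive histogram equalization and measure a
    vessel's width (in pixels) along a horizontal intensity profile, taken as
    the span of the profile above half of its peak value."""
    img = image.astype(np.float64)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)        # rescale to [0, 1]
    enhanced = exposure.equalize_adapthist(img)            # CLAHE, one of the tested techniques
    profile = enhanced[row, col_start:col_end]
    half_max = 0.5 * (profile.max() + profile.min())
    above = np.flatnonzero(profile > half_max)
    return 0 if above.size == 0 else int(above[-1] - above[0] + 1)
```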

  18. Efficient generation of discontinuity-preserving adaptive triangulations from range images.

    PubMed

    Garcia, Miguel Angel; Sappa, Angel Domingo

    2004-10-01

    This paper presents an efficient technique for generating adaptive triangular meshes from range images. The algorithm consists of two stages. First, a user-defined number of points is adaptively sampled from the given range image. Those points are chosen by taking into account the surface shapes represented in the range image in such a way that points tend to group in areas of high curvature and to disperse in low-variation regions. This selection process is done through a noniterative, inherently parallel algorithm in order to gain efficiency. Once the image has been subsampled, the second stage applies a two and one half-dimensional Delaunay triangulation to obtain an initial triangular mesh. To favor the preservation of surface and orientation discontinuities (jump and crease edges) present in the original range image, the aforementioned triangular mesh is iteratively modified by applying an efficient edge flipping technique. Results with real range images show accurate triangular approximations of the given range images with low processing times. PMID:15503496
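
    The two-stage idea, curvature-weighted point sampling followed by a planar Delaunay triangulation of the sampled pixels, can be sketched as below; the Laplacian curvature proxy and sample count are placeholders, and the edge-flipping refinement stage is omitted:

```python
import numpy as np
from scipy.ndimage import laplace
from scipy.spatial import Delaunay

def adaptive_mesh(range_img, n_points=2000, seed=None):
    """Sample pixels with probability proportional to a curvature proxy
    (absolute Laplacian of the range values), then triangulate their (x, y)
    positions; the z coordinate comes from the range image itself (2.5D)."""
    rng = np.random.default_rng(seed)
    curvature = np.abs(laplace(range_img.astype(np.float64))) + 1e-6
    prob = curvature.ravel() / curvature.sum()
    idx = rng.choice(curvature.size, size=n_points, replace=False, p=prob)
    ys, xs = np.unravel_index(idx, range_img.shape)
    tri = Delaunay(np.column_stack([xs, ys]))              # planar Delaunay triangulation
    vertices = np.column_stack([xs, ys, range_img[ys, xs]])
    return vertices, tri.simplices
```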

  19. Adaptive geodesic transform for segmentation of vertebrae on CT images

    NASA Astrophysics Data System (ADS)

    Gaonkar, Bilwaj; Shu, Liao; Hermosillo, Gerardo; Zhan, Yiqiang

    2014-03-01

    Vertebral segmentation is a critical first step in any quantitative evaluation of vertebral pathology using CT images. This is especially challenging because bone marrow tissue has the same intensity profile as the muscle surrounding the bone. Thus, simple methods such as thresholding or adaptive k-means fail to accurately segment vertebrae. While several other algorithms such as level sets may be used for segmentation, any algorithm that is clinically deployable has to work in under a few seconds. To address these dual challenges we present here a new algorithm based on the geodesic distance transform that is capable of segmenting the spinal vertebrae in under one second. To achieve this we extend the theory of geodesic distance transforms proposed in [1] to incorporate high-level anatomical knowledge through adaptive weighting of image gradients. Such knowledge may be provided by the user directly or may be automatically generated by another algorithm. We incorporate information 'learnt' using a previously published machine learning algorithm [2] to segment the L1 to L5 vertebrae. While we present a particular application here, the adaptive geodesic transform is a generic concept which can be applied to segmentation of other organs as well.
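
    A small sketch of a gradient-weighted geodesic distance transform in the spirit described above (Dijkstra on the pixel grid with step costs that grow across strong gradients); the weighting constant is an assumption and the learned anatomical prior is omitted:

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, grad_weight=10.0):
    """Geodesic distance from a set of seed pixels, with step costs
    1 + grad_weight * |I(p) - I(q)| so the front slows down at strong edges."""
    img = image.astype(np.float64)
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for (r, c) in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                                   # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + 1.0 + grad_weight * abs(img[nr, nc] - img[r, c])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist
```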

  20. An adaptive PCA fusion method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Guo, Qing; Li, An; Zhang, Hongqun; Feng, Zhongkui

    2014-10-01

    The principal component analysis (PCA) method is a popular fusion method used for its efficiency and high spatial resolution improvement. However, spectral distortion is often found in PCA. In this paper, we propose an adaptive PCA method to enhance the spectral quality of the fused image. The amount of spatial detail of the panchromatic (PAN) image injected into each band of the multi-spectral (MS) image is appropriately determined by a weighting matrix, which is defined by the edges of the PAN image, the edges of the MS image, and the proportions between MS bands. In order to prove the effectiveness of the proposed method, qualitative visual and quantitative analyses are introduced. The correlation coefficient (CC), the spectral discrepancy (SPD), and the spectral angle mapper (SAM) are used to measure the spectral quality of each fused band image. The Q index is calculated to evaluate the global spectral quality of all the fused bands as a whole. The spatial quality is evaluated by the average gradient (AG) and the standard deviation (STD). Experimental results show that the proposed method improves the spectral quality considerably compared with the original PCA method while maintaining the high spatial quality of the original PCA.
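
    For context, a minimal version of the classical PCA fusion scheme that the adaptive method builds on is sketched below; the adaptive weighting matrix described above is not reproduced, and PC1 is simply replaced by the mean/variance-matched PAN:

```python
import numpy as np

def pca_fusion(ms, pan):
    """Classical PCA pan-sharpening: ms has shape (H, W, B), already resampled
    to the PAN grid; pan has shape (H, W). The first principal component is
    swapped for the statistically matched PAN before back-projection."""
    h, w, b = ms.shape
    X = ms.reshape(-1, b).astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]        # sort by decreasing variance
    pcs = Xc @ eigvecs                                      # principal components
    p = pan.ravel().astype(np.float64)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p                                           # inject PAN spatial detail into PC1
    fused = pcs @ eigvecs.T + mean
    return fused.reshape(h, w, b)
```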

  1. Adaptive coded aperture imaging: progress and potential future applications

    NASA Astrophysics Data System (ADS)

    Gottesman, Stephen R.; Isser, Abraham; Gigioli, George W., Jr.

    2011-09-01

    Interest in Adaptive Coded Aperture Imaging (ACAI) continues to grow as the optical and systems engineering community becomes increasingly aware of ACAI's potential benefits in the design and performance of both imaging and non-imaging systems, such as good angular resolution (IFOV), wide distortion-free field of view (FOV), excellent image quality, and lightweight construction. In this presentation we first review the accomplishments made over the past five years, then expand on previously published work to show how replacement of conventional imaging optics with coded apertures can lead to a reduction in system size and weight. We also present a trade space analysis of key design parameters of coded apertures and review potential applications as replacements for traditional imaging optics. Results will be presented based on last year's investigation into the trade space of IFOV, resolution, effective focal length, and wavelength of incident radiation for coded aperture architectures. Finally we discuss the potential application of coded apertures for replacing objective lenses of night vision goggles (NVGs).

  2. Breast image feature learning with adaptive deconvolutional networks

    NASA Astrophysics Data System (ADS)

    Jamieson, Andrew R.; Drukker, Karen; Giger, Maryellen L.

    2012-03-01

    Feature extraction is a critical component of medical image analysis. Many computer-aided diagnosis approaches employ hand-designed, heuristically extracted lesion features. An alternative approach is to learn features directly from images. In this preliminary study, we explored the use of Adaptive Deconvolutional Networks (ADN) for learning high-level features in diagnostic breast mass lesion images with potential application to computer-aided diagnosis (CADx) and content-based image retrieval (CBIR). ADNs (Zeiler et al., 2011) are recently proposed unsupervised, generative hierarchical models that decompose images via convolutional sparse coding and max pooling. We trained the ADNs to learn multiple layers of representation for two breast image data sets from two different modalities (739 full-field digital mammography (FFDM) and 2393 ultrasound images). Feature map calculations were accelerated by use of GPUs. Following Zeiler et al., we applied the Spatial Pyramid Matching (SPM) kernel (Lazebnik et al., 2006) on the inferred feature maps and combined this with a linear support vector machine (SVM) classifier for the task of binary classification between cancer and non-cancer breast mass lesions. Non-linear, local structure preserving dimension reduction, Elastic Embedding (Carreira-Perpiñán, 2010), was then used to visualize the SPM kernel output in 2D and qualitatively inspect image relationships learned. Performance was found to be competitive with current CADx schemes that use human-designed features, e.g., achieving a 0.632+ bootstrap AUC (by case) of 0.83 [0.78, 0.89] for an ultrasound image set (1125 cases).

  3. An adaptive-optics scanning laser ophthalmoscope for imaging murine retinal microstructure

    NASA Astrophysics Data System (ADS)

    Alt, Clemens; Biss, David P.; Tajouri, Nadja; Jakobs, Tatjana C.; Lin, Charles P.

    2010-02-01

    In vivo retinal imaging is an outstanding tool to observe biological processes unfold in real-time. The ability to image microstructure in vivo can greatly enhance our understanding of function in retinal microanatomy under normal conditions and in disease. Transgenic mice are frequently used for mouse models of retinal diseases. However, commercially available retinal imaging instruments lack the optical resolution and spectral flexibility necessary to visualize detail comprehensively. We developed an adaptive optics scanning laser ophthalmoscope (AO-SLO) specifically for mouse eyes. Our SLO is a sensor-less adaptive optics system (no Shack Hartmann sensor) that employs a stochastic parallel gradient descent algorithm to modulate a deformable mirror, ultimately aiming to correct wavefront aberrations by optimizing confocal image sharpness. The resulting resolution allows detailed observation of retinal microstructure. The AO-SLO can resolve retinal microglia and their moving processes, demonstrating that microglia processes are highly motile, constantly probing their immediate environment. Similarly, retinal ganglion cells are imaged along with their axons and sprouting dendrites. Retinal blood vessels are imaged both using Evans blue fluorescence and backscattering contrast.

  4. Adaptive constructive processes and the future of memory

    PubMed Central

    Schacter, Daniel L.

    2013-01-01

    Memory serves critical functions in everyday life, but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, or illusions. The article describes several types of memory errors that are produced by adaptive constructive processes, and focuses in particular on the process of imagining or simulating events that might occur in one’s personal future. Simulating future events relies on many of the same cognitive and neural processes as remembering past events, which may help to explain why imagination and memory can be easily confused. The article considers both pitfalls and adaptive aspects of future event simulation in the context of research on planning, prediction, problem solving, mind-wandering, prospective and retrospective memory, coping and positivity bias, and the interconnected set of brain regions known as the default network. PMID:23163437

  5. Cluster-based parallel image processing toolkit

    NASA Astrophysics Data System (ADS)

    Squyres, Jeffery M.; Lumsdaine, Andrew; Stevenson, Robert L.

    1995-03-01

    Many image processing tasks exhibit a high degree of data locality and parallelism and map quite readily to specialized massively parallel computing hardware. However, as network technologies continue to mature, workstation clusters are becoming a viable and economical parallel computing resource, so it is important to understand how to use these environments for parallel image processing as well. In this paper we discuss our implementation of a parallel image processing software library (the Parallel Image Processing Toolkit). The Toolkit uses a message-passing model of parallelism designed around the Message Passing Interface (MPI) standard. Experimental results are presented to demonstrate the parallel speedup obtained with the Parallel Image Processing Toolkit in a typical workstation cluster over a wide variety of image processing tasks. We also discuss load balancing and the potential for parallelizing portions of image processing tasks that seem to be inherently sequential, such as visualization and data I/O.
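
    A tiny mpi4py sketch in the same message-passing spirit, with the root process scattering horizontal image strips to workers and gathering the filtered strips back; the strip decomposition and the chosen filter are illustrative and not the Toolkit's actual API:

```python
# Run with e.g.: mpiexec -n 4 python parallel_filter.py
import numpy as np
from mpi4py import MPI
from scipy.ndimage import median_filter

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    image = np.random.rand(512, 512)                 # stand-in for a real image
    strips = np.array_split(image, size, axis=0)     # one horizontal strip per process
else:
    strips = None

strip = comm.scatter(strips, root=0)                 # distribute work to all ranks
filtered = median_filter(strip, size=3)              # local processing on each rank
result = comm.gather(filtered, root=0)               # collect the processed strips

if rank == 0:
    output = np.vstack(result)
    print("filtered image shape:", output.shape)
```

    A production version would also exchange halo rows between neighbouring strips so that window-based filters remain exact at strip boundaries.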

  6. Adaptive and Background-Aware GAL4 Expression Enhancement of Co-registered Confocal Microscopy Images.

    PubMed

    Trapp, Martin; Schulze, Florian; Novikov, Alexey A; Tirian, Laszlo; J Dickson, Barry; Bühler, Katja

    2016-04-01

    GAL4 gene expression imaging using confocal microscopy is a common and powerful technique used to study the nervous system of a model organism such as Drosophila melanogaster. Recent research projects focused on high throughput screenings of thousands of different driver lines, resulting in large image databases. The amount of data generated makes manual assessment tedious or even impossible. The first and most important step in any automatic image processing and data extraction pipeline is to enhance areas with relevant signal. However, data acquired via high throughput imaging tends to be less than ideal for this task, often showing high amounts of background signal. Furthermore, neuronal structures and in particular thin and elongated projections with a weak staining signal are easily lost. In this paper we present a method for enhancing the relevant signal by utilizing a Hessian-based filter to augment thin and weak tube-like structures in the image. To get optimal results, we present a novel adaptive background-aware enhancement filter parametrized with the local background intensity, which is estimated based on a common background model. We also integrate recent research on adaptive image enhancement into our approach, allowing us to propose an effective solution for known problems present in confocal microscopy images. We provide an evaluation based on annotated image data and compare our results against current state-of-the-art algorithms. The results show that our algorithm clearly outperforms the existing solutions. PMID:26743993
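
    The core enhancement step can be approximated with scikit-image's Frangi vesselness filter combined with a crude median-filter background estimate; the paper's background model and parametrization are not reproduced here:

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import frangi

def enhance_projections(image, bg_size=31):
    """Enhance thin, weakly stained tube-like structures with a Hessian-based
    (Frangi) filter and attenuate the response where the estimated local
    background signal is high."""
    img = image.astype(np.float64)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)        # rescale to [0, 1]
    background = median_filter(img, size=bg_size)          # crude local background estimate
    tubes = frangi(img, black_ridges=False)                # respond to bright ridges/tubes
    return tubes * (1.0 - np.clip(background, 0.0, 1.0))   # background-aware attenuation
```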

  7. Focusing a NIR adaptive optics imager; experience with GSAOI

    NASA Astrophysics Data System (ADS)

    Doolan, Matthew; Bloxham, Gabe; Conroy, Peter; Jones, Damien; McGregor, Peter; Stevanovic, Dejan; Van Harmelen, Jan; Waldron, Liam E.; Waterson, Mark; Zhelem, Ross

    2006-06-01

    The Gemini South Adaptive Optics Imager (GSAOI) to be used with the Multi-Conjugate Adaptive Optics (MCAO) system at Gemini South is currently in the final stages of assembly and testing. GSAOI uses a suite of 26 different filters, made from both BK7 and Fused Silica substrates. These filters, located in a non-collimated beam, work as active optical elements. The optical design was undertaken to ensure that both filter substrates focused longitudinally at the same point. During the testing of the instrument it was found that the longitudinal focus was filter dependent. The methods used to investigate this are outlined in the paper. These investigations identified several possible causes for the focal shift including substrate material properties in cryogenic conditions and small amounts of residual filter power.

  8. Classification in medical images using adaptive metric k-NN

    NASA Astrophysics Data System (ADS)

    Chen, C.; Chernoff, K.; Karemore, G.; Lo, P.; Nielsen, M.; Lauze, F.

    2010-03-01

    The performance of the k-nearest neighbors (k-NN) classifier is highly dependent on the distance metric used to identify the k nearest neighbors of the query points. The standard Euclidean distance is commonly used in practice. This paper investigates the performance of the k-NN classifier with respect to different adaptive metrics in the context of medical imaging. We propose using adaptive metrics so that the structure of the data is better described, introducing some unsupervised learning knowledge into k-NN. Four different metrics are investigated: a theoretical metric based on the assumption that images are drawn from the Brownian Image Model (BIM), a normalized metric based on the variance of the data, an empirical metric based on the empirical covariance matrix of the unlabeled data, and an optimized metric obtained by minimizing the classification error. The spectral structure of the empirical covariance also allows Principal Component Analysis (PCA) to be performed on it, which yields the subspace metrics. The metrics are evaluated on two data sets: lateral X-rays of the lumbar aortic/spine region, where we use k-NN for abdominal aorta calcification detection, and mammograms, where we use k-NN for breast cancer risk assessment. The results show that an appropriate choice of metric can improve classification.
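
    One of these, the empirical-covariance metric, corresponds to a Mahalanobis distance in a standard k-NN implementation; a minimal scikit-learn sketch in which the feature matrices and labels are placeholders:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def covariance_metric_knn(X_train, y_train, X_unlabeled, k=5):
    """k-NN using an empirical-covariance (Mahalanobis) metric estimated from
    unlabeled data instead of the default Euclidean distance."""
    cov = np.cov(X_unlabeled, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])                     # regularize for invertibility
    knn = KNeighborsClassifier(n_neighbors=k, algorithm="brute",
                               metric="mahalanobis",
                               metric_params={"VI": np.linalg.inv(cov)})
    return knn.fit(X_train, y_train)

# Usage sketch: predictions = covariance_metric_knn(X_tr, y_tr, X_unlab).predict(X_test)
```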

  9. Hybrid regularizers-based adaptive anisotropic diffusion for image denoising.

    PubMed

    Liu, Kui; Tan, Jieqing; Ai, Liefu

    2016-01-01

    To eliminate the staircasing effect for total variation filter and synchronously avoid the edges blurring for fourth-order PDE filter, a hybrid regularizers-based adaptive anisotropic diffusion is proposed for image denoising. In the proposed model, the [Formula: see text]-norm is considered as the fidelity term and the regularization term is composed of a total variation regularization and a fourth-order filter. The two filters can be adaptively selected according to the diffusion function. When the pixels locate at the edges, the total variation filter is selected to filter the image, which can preserve the edges. When the pixels belong to the flat regions, the fourth-order filter is adopted to smooth the image, which can eliminate the staircase artifacts. In addition, the split Bregman and relaxation approach are employed in our numerical algorithm to speed up the computation. Experimental results demonstrate that our proposed model outperforms the state-of-the-art models cited in the paper in both the qualitative and quantitative evaluations. PMID:27047730

  10. Adaptive image contrast enhancement algorithm for point-based rendering

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.

  11. Information theoretic methods for image processing algorithm optimization

    NASA Astrophysics Data System (ADS)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and the optimal results barely achievable in the manual calibration; thus an automated approach is a must. We will discuss an information theory based metric for evaluation of algorithm adaptive characteristics ("adaptivity criterion") using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of a physical "information restoration" rather than perceived image quality, it helps to reduce the set of the filter parameters to a smaller subset that is easier for a human operator to tune and achieve a better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).

  12. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.

  13. Applications Of Image Processing In Criminalistics

    NASA Astrophysics Data System (ADS)

    Krile, Thomas F.; Walkup, John F.; Barsallo, Adonis; Olimb, Hal; Tarng, Jaw-Horng

    1987-01-01

    A review of some basic image processing techniques for enhancement and restoration of images is given. Both digital and optical approaches are discussed. Fingerprint images are used as examples to illustrate the various processing techniques and their potential applications in criminalistics.

  14. Robust image registration using adaptive coherent point drift method

    NASA Astrophysics Data System (ADS)

    Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong

    2016-04-01

    The coherent point drift (CPD) method is a powerful registration tool under the framework of the Gaussian mixture model (GMM). However, only the global spatial structure of point sets is considered, without other forms of additional attribute information. The equivalent simplification of mixing parameters and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed to automatically determine the mixing parameters by embedding the local attribute information of features into the construction of the GMM. In addition, the weight parameter is treated as an unknown parameter and automatically determined in the expectation-maximization algorithm. In image registration applications, the block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. The experimental results on optical images and remote sensing images show that the proposed method can significantly improve the matching performance.

  15. Dynamic analysis of neural encoding by point process adaptive filtering.

    PubMed

    Eden, Uri T; Frank, Loren M; Barbieri, Riccardo; Solo, Victor; Brown, Emery N

    2004-05-01

    Neural receptive fields are dynamic in that with experience, neurons change their spiking responses to relevant stimuli. To understand how neural systems adapt their representations of biological information, analyses of receptive field plasticity from experimental measurements are crucial. Adaptive signal processing, the well-established engineering discipline for characterizing the temporal evolution of system parameters, suggests a framework for studying the plasticity of receptive fields. We use the Bayes' rule Chapman-Kolmogorov paradigm with a linear state equation and point process observation models to derive adaptive filters appropriate for estimation from neural spike trains. We derive point process filter analogues of the Kalman filter, recursive least squares, and steepest-descent algorithms and describe the properties of these new filters. We illustrate our algorithms in two simulated data examples. The first is a study of slow and rapid evolution of spatial receptive fields in hippocampal neurons. The second is an adaptive decoding study in which a signal is decoded from ensemble neural spiking activity as the receptive fields of the neurons in the ensemble evolve. Our results provide a paradigm for adaptive estimation for point process observations and suggest a practical approach for constructing filtering algorithms to track neural receptive field dynamics on a millisecond timescale. PMID:15070506

  16. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  17. Adaptive regularized scheme for remote sensing image fusion

    NASA Astrophysics Data System (ADS)

    Tang, Sizhang; Shen, Chaomin; Zhang, Guixu

    2016-06-01

    We propose an adaptive regularized algorithm for remote sensing image fusion based on variational methods. In the algorithm, we integrate the inputs using a "grey world" assumption to achieve visual uniformity. We propose a fusion operator that can automatically select the total variation (TV)-L1 term for edges and L2-terms for non-edges. To implement our algorithm, we use the steepest descent method to solve the corresponding Euler-Lagrange equation. Experimental results show that the proposed algorithm achieves remarkable results.

  18. Image processing and fusion to detect navigation obstacles

    NASA Astrophysics Data System (ADS)

    Yamamoto, Kazuo; Yamada, Kimio

    1998-07-01

    Helicopters flying at low altitude under visual flight rules often collide with obstacles such as power transmission lines. This paper describes the image sensors used to detect obstacles and the several image processing techniques used to derive and enhance the targets in the images. The images including obstacles were collected both on the ground and in the air using an infrared (IR) camera and a color video camera in different backgrounds, distances, and weather conditions. The collected results revealed that IR images have an advantage over color images for detecting obstacles in many environments. Several image processing techniques have been evaluated to improve the quality of the collected images. For example, fusion of IR and color images and several filters, such as the median filter or the adaptive filter, have been tested. Information that the target is thin and long, which characterizes the shape of power lines, has been introduced to derive power lines. It has been shown that these processes can greatly reduce the noise and enhance the contrast, regardless of the background. It has also been demonstrated that there is a good prospect that these processes will help develop the algorithm for automatic obstacle detection and warning.

  19. Applying statistical process control to the adaptive rate control problem

    NASA Astrophysics Data System (ADS)

    Manohar, Nelson R.; Willebeek-LeMair, Marc H.; Prakash, Atul

    1997-12-01

    Due to the heterogeneity and shared resource nature of today's computer network environments, the end-to-end delivery of multimedia requires adaptive mechanisms to be effective. We present a framework for the adaptive streaming of heterogeneous media. We introduce the application of online statistical process control (SPC) to the problem of dynamic rate control. In SPC, the goal is to establish (and preserve) a state of statistical quality control (i.e., controlled variability around a target mean) over a process. We consider the end-to-end streaming of multimedia content over the internet as the process to be controlled. First, at each client, we measure process performance and apply statistical quality control (SQC) with respect to application-level requirements. Then, we guide an adaptive rate control (ARC) problem at the server based on the statistical significance of trends and departures on these measurements. We show this scheme facilitates handling of heterogeneous media. Last, because SPC is designed to monitor long-term process performance, we show that our online SPC scheme could be used to adapt to various degrees of long-term (network) variability (i.e., statistically significant process shifts as opposed to short-term random fluctuations). We develop several examples and analyze its statistical behavior and guarantees.
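
    A toy sketch of the control-chart idea: establish a mean and 3-sigma control limits over recent client-side delay measurements and trigger rate adaptation only on statistically significant departures; the window length, limits, and adjustment factors are assumptions, not the authors' design:

```python
from collections import deque

class SPCRateController:
    """Adjust a server send rate only when the measured end-to-end delay
    drifts outside +/- 3 sigma control limits around its recent mean."""

    def __init__(self, window=50):
        self.samples = deque(maxlen=window)

    def update(self, delay_ms, current_rate_kbps):
        self.samples.append(delay_ms)
        if len(self.samples) < self.samples.maxlen:
            return current_rate_kbps                       # still establishing control
        mean = sum(self.samples) / len(self.samples)
        var = sum((x - mean) ** 2 for x in self.samples) / (len(self.samples) - 1)
        sigma = var ** 0.5
        if delay_ms > mean + 3 * sigma:                    # statistically significant degradation
            return current_rate_kbps * 0.8
        if delay_ms < mean - 3 * sigma:                    # statistically significant headroom
            return current_rate_kbps * 1.1
        return current_rate_kbps                           # random fluctuation: no change
```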

  20. Prism adaptation in virtual and natural contexts: Evidence for a flexible adaptive process.

    PubMed

    Veilleux, Louis-Nicolas; Proteau, Luc

    2015-01-01

    Prism exposure when aiming at a visual target in a virtual condition (e.g., when the hand is represented by a video representation) produces no or only small adaptations (after-effects), whereas prism exposure in a natural condition produces large after-effects. Some researchers suggested that this difference may arise from distinct adaptive processes, but other studies suggested a unique process. The present study reconciled these conflicting interpretations. Forty participants were divided into two groups: One group used visual feedback of their hand (natural context), and the other group used computer-generated representational feedback (virtual context). Visual feedback during adaptation was concurrent or terminal. All participants underwent laterally displacing prism perturbation. The results showed that the after-effects were twice as large in the "natural context" than in the "virtual context". No significant differences were observed between the concurrent and terminal feedback conditions. The after-effects generalized to untested targets and workspace. These results suggest that prism adaptation in virtual and natural contexts involves the same process. The smaller after-effects in the virtual context suggest that the depth of adaptation is a function of the degree of convergence between the proprioceptive and visual information that arises from the hand. PMID:25338188

  1. Cone photoreceptor definition on adaptive optics retinal imaging

    PubMed Central

    Muthiah, Manickam Nick; Gias, Carlos; Chen, Fred Kuanfu; Zhong, Joe; McClelland, Zoe; Sallo, Ferenc B; Peto, Tunde; Coffey, Peter J; da Cruz, Lyndon

    2014-01-01

    Aims To quantitatively analyse cone photoreceptor matrices on images captured on an adaptive optics (AO) camera and assess their correlation to well-established parameters in the retinal histology literature. Methods High resolution retinal images were acquired from 10 healthy subjects, aged 20–35 years old, using an AO camera (rtx1, Imagine Eyes, France). Left eye images were captured at 5° of retinal eccentricity, temporal to the fovea for consistency. In three subjects, images were also acquired at 0, 2, 3, 5 and 7° retinal eccentricities. Cone photoreceptor density was calculated following manual and automated counting. Inter-photoreceptor distance was also calculated. Voronoi domain and power spectrum analyses were performed for all images. Results At 5° eccentricity, the cone density (cones/mm2 mean±SD) was 15.3±1.4×103 (automated) and 13.9±1.0×103 (manual) and the mean inter-photoreceptor distance was 8.6±0.4 μm. Cone density decreased and inter-photoreceptor distance increased with increasing retinal eccentricity from 2 to 7°. A regular hexagonal cone photoreceptor mosaic pattern was seen at 2, 3 and 5° of retinal eccentricity. Conclusions Imaging data acquired from the AO camera match cone density, intercone distance and show the known features of cone photoreceptor distribution in the pericentral retina as reported by histology, namely, decreasing density values from 2 to 7° of eccentricity and the hexagonal packing arrangement. This confirms that AO flood imaging provides reliable estimates of pericentral cone photoreceptor distribution in normal subjects. PMID:24729030

  2. Adaptive stereo medical image watermarking using non-corresponding blocks.

    PubMed

    Mohaghegh, H; Karimi, N; Soroushmehr, S M R; Samavi, S; Najarian, K

    2015-08-01

    Today, with the advent of technology in different medical imaging fields, the use of stereoscopic images has increased. Furthermore, with the rapid growth in telemedicine for remote diagnosis, treatment, and surgery, there is a need for watermarking for copyright protection and tracking of digital media. The efficient use of bandwidth for the transmission of such data is another concern. In this paper an adaptive watermarking scheme is proposed that considers the human visual system in depth perception. Our proposed scheme modifies the maximum singular values of the wavelet coefficients of a stereo pair for embedding watermark bits. Experimental results show high 3D visual quality of watermarked video frames. Moreover, comparison with a compatible state-of-the-art method shows that the proposed method is highly robust against attacks such as AWGN, salt and pepper noise, and JPEG compression. PMID:26737224
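
    The embedding idea, nudging the largest singular value of a wavelet subband, can be sketched with PyWavelets and NumPy as below; this single-image version omits the stereo non-corresponding-block selection and the perceptual weighting described above:

```python
import numpy as np
import pywt

def embed_bit(image, bit, strength=2.0):
    """Embed one watermark bit by shifting the largest singular value of the
    HL detail subband of a one-level DWT; strength trades robustness for fidelity."""
    coeffs = pywt.dwt2(image.astype(np.float64), "haar")
    ll, (lh, hl, hh) = coeffs
    u, s, vt = np.linalg.svd(hl, full_matrices=False)
    s[0] += strength if bit else -strength                 # +/- shift encodes the bit
    hl_marked = u @ np.diag(s) @ vt
    return pywt.idwt2((ll, (lh, hl_marked, hh)), "haar")   # watermarked image
```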

  3. Fourier transform digital holographic adaptive optics imaging system

    PubMed Central

    Liu, Changgeng; Yu, Xiao; Kim, Myung K.

    2013-01-01

    A Fourier transform digital holographic adaptive optics imaging system and its basic principles are proposed. The CCD is put at the exact Fourier transform plane of the pupil of the eye lens. The spherical curvature introduced by the optics except the eye lens itself is eliminated. The CCD is also at image plane of the target. The point-spread function of the system is directly recorded, making it easier to determine the correct guide-star hologram. Also, the light signal will be stronger at the CCD, especially for phase-aberration sensing. Numerical propagation is avoided. The sensor aperture has nothing to do with the resolution and the possibility of using low coherence or incoherent illumination is opened. The system becomes more efficient and flexible. Although it is intended for ophthalmic use, it also shows potential application in microscopy. The robustness and feasibility of this compact system are demonstrated by simulations and experiments using scattering objects. PMID:23262541

  4. Medical Image Processing Using Real-Time Optical Fourier Technique

    NASA Astrophysics Data System (ADS)

    Rao, D. V. G. L. N.; Panchangam, Appaji; Sastry, K. V. L. N.; Material Science Team

    2001-03-01

    Optical image processing techniques are inherently fast in view of parallel processing. A self-adaptive optical Fourier processing system using photo-induced dichroism in a bacteriorhodopsin film was experimentally demonstrated for medical image processing. Application of this powerful analog all-optical interactive technique for cancer diagnostics is illustrated with mammograms and Pap smears. Microcalcification clusters buried in surrounding tissue showed up clearly in the processed image. By adjusting one knob, which rotates the analyzer in the optical system, either the microcalcification clusters or the surrounding dense tissue can be selectively displayed. Bacteriorhodopsin films are stable up to 140 °C and environmentally friendly. As no interference is involved in the experiments, vibration isolation and even a coherent light source are not required. It may be possible to develop a low-cost rugged battery-operated portable signal-enhancing magnifier.

  5. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising

    PubMed Central

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as a patch. Adaptive searching windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. For the experiment on the clinical images, the proposed AT-PCA method can suppress the noise, enhance the edges, and improve the image quality more effectively than the NLM and KSVD denoising methods. PMID:25993566
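
    A much-simplified sketch of the patch-grouping-plus-PCA-shrinkage idea; it uses plain PCA with a hard energy threshold rather than the tensor formulation and LMMSE shrinkage of AT-PCA, and all parameters are placeholders:

```python
import numpy as np

def denoise_patch(noisy, center, patch=8, search=20, keep=0.9):
    """Collect patches similar to the reference patch inside a search window,
    project the group onto its principal components, zero the weakest
    components, and return the reconstructed reference patch."""
    r, c = center
    ref = noisy[r:r + patch, c:c + patch].ravel()
    candidates = []
    for i in range(max(0, r - search), min(noisy.shape[0] - patch, r + search)):
        for j in range(max(0, c - search), min(noisy.shape[1] - patch, c + search)):
            candidates.append(noisy[i:i + patch, j:j + patch].ravel())
    X = np.array(candidates, dtype=np.float64)
    dists = np.sum((X - ref) ** 2, axis=1)
    group = X[np.argsort(dists)[:64]]                      # 64 most similar patches
    mean = group.mean(axis=0)
    u, s, vt = np.linalg.svd(group - mean, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    s[energy > keep] = 0.0                                 # hard shrinkage of weak components
    denoised_group = u @ np.diag(s) @ vt + mean
    return denoised_group[0].reshape(patch, patch)         # row 0 is the reference patch
```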

  6. Noninvasive imaging of the human rod photoreceptor mosaic using a confocal adaptive optics scanning ophthalmoscope

    PubMed Central

    Dubra, Alfredo; Sulai, Yusufu; Norris, Jennifer L.; Cooper, Robert F.; Dubis, Adam M.; Williams, David R.; Carroll, Joseph

    2011-01-01

    The rod photoreceptors are implicated in a number of devastating retinal diseases. However, routine imaging of these cells has remained elusive, even with the advent of adaptive optics imaging. Here, we present the first in vivo images of the contiguous rod photoreceptor mosaic in nine healthy human subjects. The images were collected with three different confocal adaptive optics scanning ophthalmoscopes at two different institutions, using 680 and 775 nm superluminescent diodes for illumination. Estimates of photoreceptor density and rod:cone ratios in the 5°–15° retinal eccentricity range are consistent with histological findings, confirming our ability to resolve the rod mosaic by averaging multiple registered images, without the need for additional image processing. In one subject, we were able to identify the emergence of the first rods at approximately 190 μm from the foveal center, in agreement with previous histological studies. The rod and cone photoreceptor mosaics appear in focus at different retinal depths, with the rod mosaic best focus (i.e., brightest and sharpest) being at least 10 μm shallower than the cones at retinal eccentricities larger than 8°. This study represents an important step in bringing high-resolution imaging to bear on the study of rod disorders. PMID:21750765

  7. Dynamic optical aberration correction with adaptive coded apertures techniques in conformal imaging

    NASA Astrophysics Data System (ADS)

    Li, Yan; Hu, Bin; Zhang, Pengbin; Zhang, Binglong

    2015-02-01

    Conformal imaging systems are confronted with dynamic aberrations during optical design. In classical optical designs, meeting the combined requirements of field of view, optical speed, environmental adaptation, and imaging quality can be achieved only by introducing increasingly complex aberration correctors. Within computational imaging, adaptive coded aperture techniques, which have several potential advantages over more traditional optical systems, are particularly suitable for military infrared imaging systems. The merits of this new concept include low mass, volume, and moments of inertia, potentially lower costs, graceful failure modes, and steerable fields of regard with no macroscopic moving parts. An example application to conformal imaging system design, in which the elements of a set of binary coded aperture masks are optimized, is presented in this paper; simulation results show that the optical performance is closely related to the mask design and to the optimization of the reconstruction algorithm. As a dynamic aberration corrector, a binary-amplitude mask located at the aperture stop is optimized to mitigate dynamic optical aberrations when the field of regard changes, while allowing sufficient information to be recorded by the detector for the recovery of a sharp image using digital image restoration in the conformal optical system.

  8. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE PAGESBeta

    Collette, R.; King, J.; Keiser, Jr., D.; Miller, B.; Madden, J.; Schulthess, J.

    2016-06-08

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.
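
    The Sauvola thresholding and void-statistics steps can be illustrated in Python with scikit-image (the study itself uses MATLAB); the pre-filtering stages are omitted and the parameters are placeholders:

```python
import numpy as np
from skimage.filters import threshold_sauvola
from skimage.measure import label, regionprops

def void_statistics(micrograph, window_size=25):
    """Segment fission-gas voids with a Sauvola adaptive threshold and report
    the void count, mean void size (in pixels), and areal porosity."""
    img = micrograph.astype(np.float64)
    thresh = threshold_sauvola(img, window_size=window_size)
    voids = img < thresh                                   # voids assumed darker than the matrix
    regions = regionprops(label(voids))
    count = len(regions)
    mean_size = float(np.mean([r.area for r in regions])) if regions else 0.0
    porosity = float(voids.mean())
    return count, mean_size, porosity
```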

  9. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
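
    A look-up-table remapping of this kind reduces, in software, to indexing the input image with precomputed coordinate tables; the sketch below shows only the simple gather form, while the remapper's collective (many-to-one) and interpolative (one-to-many) processors would add accumulation and interpolation, respectively:

```python
import numpy as np

def remap_with_lut(image, lut_rows, lut_cols):
    """Remap an image through precomputed look-up tables: output pixel (i, j)
    is read from input pixel (lut_rows[i, j], lut_cols[i, j])."""
    return image[lut_rows, lut_cols]

# Example: an operator-selectable LUT implementing a 2x magnification about the origin.
# h, w = image.shape
# rows, cols = np.indices((h, w))
# zoomed = remap_with_lut(image, np.clip(rows // 2, 0, h - 1), np.clip(cols // 2, 0, w - 1))
```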

  10. Shape Adaptive, Robust Iris Feature Extraction from Noisy Iris Images

    PubMed Central

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-01-01

    In current iris recognition systems, the noise removal step is only used to detect noisy parts of the iris region, and features extracted from there are excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape adaptive wavelet transform and the shape adaptive Gabor-wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removing approach is proposed in this paper. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that using the shape adaptive Gabor-wavelet technique improves the recognition rate. PMID:24696801

  11. Extended adaptive filtering for wide-angle SAR image formation

    NASA Astrophysics Data System (ADS)

    Wang, Yanwei; Roberts, William; Li, Jian

    2005-05-01

    For two-dimensional (2-D) spectral analysis, the adaptive filtering based technologies, such as CAPON and APES (Amplitude and Phase EStimation), are developed under the implicit assumption that the data sets are rectangular. However, in real SAR applications, especially for the wide-angle cases, the collected data sets are always non-rectangular. This raises the problem of how to extend the original adaptive filtering based algorithms to such scenarios. In this paper, we propose an extended adaptive filtering (EAF) approach, which includes Extended APES (E-APES) and Extended CAPON (E-CAPON), for arbitrarily shaped 2-D data. The EAF algorithms adopt a missing-data approach where the unavailable data samples close to the collected data set are assumed missing. Using a group of filter-banks with varying sizes, these algorithms are non-iterative and do not require the estimation of the unavailable samples. The improved imaging results of the proposed algorithms are demonstrated by applying them to two different SAR data sets.

  12. Shape adaptive, robust iris feature extraction from noisy iris images.

    PubMed

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise removal step is only used to detect noisy parts of the iris region, and features extracted from there are excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape adaptive wavelet transform and the shape adaptive Gabor-wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removing approach is proposed in this paper. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that using the shape adaptive Gabor-wavelet technique improves the recognition rate. PMID:24696801

  13. Feature-specific imaging: Extensions to adaptive object recognition and active illumination based scene reconstruction

    NASA Astrophysics Data System (ADS)

    Baheti, Pawan K.

    Computational imaging (CI) systems are hybrid imagers in which the optical and post-processing sub-systems are jointly optimized to maximize the task-specific performance. In this dissertation we consider a form of CI system that measures the linear projections (i.e., features) of the scene optically, and it is commonly referred to as feature-specific imaging (FSI). Most of the previous work on FSI has been concerned with image reconstruction. Previous FSI techniques have also been non-adaptive and restricted to the use of ambient illumination. We consider two novel extensions of the FSI system in this work. We first present an adaptive feature-specific imaging (AFSI) system and consider its application to a face-recognition task. The proposed system makes use of previous measurements to adapt the projection basis at each step. We present both statistical and information-theoretic adaptation mechanisms for the AFSI system. The sequential hypothesis testing framework is used to determine the number of measurements required for achieving a specified misclassification probability. We demonstrate that AFSI system requires significantly fewer measurements than static-FSI (SFSI) and conventional imaging at low signal-to-noise ratio (SNR). We also show a trade-off, in terms of average detection time, between measurement SNR and adaptation advantage. Experimental results validating the AFSI system are presented. Next we present a FSI system based on the use of structured light. Feature measurements are obtained by projecting spatially structured illumination onto an object and collecting all of the reflected light onto a single photodetector. We refer to this system as feature-specific structured imaging (FSSI). Principal component features are used to define the illumination patterns. The optimal LMMSE operator is used to generate object estimates from the measurements. We demonstrate that this new imaging approach reduces imager complexity and provides improved image

  14. Handbook on COMTAL's Image Processing System

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.

    1983-01-01

    An image processing system is the combination of an image processor with other control and display devices plus the necessary software needed to produce an interactive capability to analyze and enhance image data. Such an image processing system installed at NASA Langley Research Center, Instrument Research Division, Acoustics and Vibration Instrumentation Section (AVIS) is described. Although much of the information contained herein can be found in the other references, it is hoped that this single handbook will give the user better access, in concise form, to pertinent information and usage of the image processing system.

  15. A synoptic description of coal basins via image processing

    NASA Technical Reports Server (NTRS)

    Farrell, K. W., Jr.; Wherry, D. B.

    1978-01-01

    An existing image processing system is adapted to describe the geologic attributes of a regional coal basin. This scheme handles a map as if it were a matrix, in contrast to more conventional approaches which represent map information in terms of linked polygons. The utility of the image processing approach is demonstrated by a multiattribute analysis of the Herrin No. 6 coal seam in Illinois. Findings include the location of a resource and estimation of tonnage corresponding to constraints on seam thickness, overburden, and Btu value, which are illustrative of the need for new mining technology.

  16. Adaptive Optics and Lucky Imager (AOLI): presentation and first light

    NASA Astrophysics Data System (ADS)

    Velasco, S.; Rebolo, R.; Mackay, C.; Oscoz, A.; King, D. L.; Crass, J.; Díaz-Sánchez, A.; Femenía, B.; González-Escalera, V.; Labadie, L.; López, R. L.; Pérez Garrido, A.; Puga, M.; Rodríguez-Ramos, L. F.; Zuther, J.

    2015-05-01

    In this paper we present the Adaptive Optics Lucky Imager (AOLI), a state-of-the-art instrument which combines two well-proven techniques for achieving extremely high spatial resolution with ground-based telescopes: Lucky Imaging (LI) and Adaptive Optics (AO). AOLI comprises an AO system, including a low-order non-linear curvature wavefront sensor together with a 241-actuator deformable mirror, a science array of four 1024x1024 EMCCDs allowing fields of view from 120×120" down to 36×36", a calibration subsystem, and powerful LI software. Thanks to the revolutionary WFS, AOLI will be able to use faint reference stars (I˜16.5-17.5), enabling it to be used over a much wider part of the sky than common Shack-Hartmann AO systems. The instrument saw first light in September 2013 at the William Herschel Telescope. Although the instrument was not complete, this commissioning run demonstrated its feasibility, yielding an FWHM for the best PSF of 0.151±0.005" and a plate scale of 55.0±0.3 mas/pixel. These observations also allowed us to establish some characteristics of the interesting multiple T Tauri system LkHα 262-263, finding it to be gravitationally bound. This multiple system combines the presence of proto-planetary discs, one shown to be double, and the first optically resolved pair LkHα 263AB (0.42" separation).

  17. Adaptive Optics Imaging Survey of Luminous Infrared Galaxies

    SciTech Connect

    Laag, E A; Canalizo, G; van Breugel, W; Gates, E L; de Vries, W; Stanford, S A

    2006-03-13

    We present high resolution imaging observations of a sample of previously unidentified far-infrared galaxies at z < 0.3. The objects were selected by cross-correlating the IRAS Faint Source Catalog with the VLA FIRST catalog and the HST Guide Star Catalog to allow for adaptive optics observations. We found two new ULIGs (with L_FIR ≥ 10^12 L_⊙) and 19 new LIGs (with L_FIR ≥ 10^11 L_⊙). Twenty of the galaxies in the sample were imaged with either the Lick or Keck adaptive optics systems in H or K′. Galaxy morphologies were determined using the two-dimensional fitting program GALFIT and the residuals examined to look for interesting structure. The morphologies reveal that at least 30% are involved in tidal interactions, with 20% being clear mergers. An additional 50% show signs of possible interaction. Line ratios were used to determine the powering mechanism: of the 17 objects in the sample showing clear emission lines, four are active galactic nuclei and seven are starburst galaxies. The rest exhibit a combination of both phenomena.

  18. Adaptation of commercial microscopes for advanced imaging applications

    NASA Astrophysics Data System (ADS)

    Brideau, Craig; Poon, Kelvin; Stys, Peter

    2015-03-01

    Today's commercially available microscopes offer a wide array of options to accommodate common imaging experiments. Occasionally, an experimental goal will require an unusual light source, filter, or even irregular sample that is not compatible with existing equipment. In these situations the ability to modify an existing microscopy platform with custom accessories can greatly extend its utility and allow for experiments not possible with stock equipment. Light source conditioning/manipulation such as polarization, beam diameter or even custom source filtering can easily be added with bulk components. Custom and after-market detectors can be added to external ports using optical construction hardware and adapters. This paper will present various examples of modifications carried out on commercial microscopes to address both atypical imaging modalities and research needs. Violet and near-ultraviolet source adaptation, custom detection filtering, and laser beam conditioning and control modifications will be demonstrated. The availability of basic `building block' parts will be discussed with respect to user safety, construction strategies, and ease of use.

  19. Extreme learning machine and adaptive sparse representation for image classification.

    PubMed

    Cao, Jiuwen; Zhang, Kai; Luo, Minxia; Yin, Chun; Lai, Xiaoping

    2016-09-01

    Recent research has shown the speed advantage of extreme learning machine (ELM) and the accuracy advantage of sparse representation classification (SRC) in the area of image classification. Those two methods, however, have their respective drawbacks, e.g., in general, ELM is known to be less robust to noise while SRC is known to be time-consuming. Consequently, ELM and SRC complement each other in computational complexity and classification accuracy. In order to unify such mutual complementarity and thus further enhance the classification performance, we propose an efficient hybrid classifier to exploit the advantages of ELM and SRC in this paper. More precisely, the proposed classifier consists of two stages: first, an ELM network is trained by supervised learning. Second, a discriminative criterion about the reliability of the obtained ELM output is adopted to decide whether the query image can be correctly classified or not. If the output is reliable, the classification will be performed by ELM; otherwise the query image will be fed to SRC. Meanwhile, in the stage of SRC, a sub-dictionary that is adaptive to the query image instead of the entire dictionary is extracted via the ELM output. The computational burden of SRC thus can be reduced. Extensive experiments on handwritten digit classification, landmark recognition and face recognition demonstrate that the proposed hybrid classifier outperforms ELM and SRC in classification accuracy with outstanding computational efficiency. PMID:27389571
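
    The two-stage decision described above can be sketched in a few lines. The margin-based reliability test, the top-k sub-dictionary construction, and the least-squares coding step below are simplifications standing in for the paper's discriminative criterion and l1 sparse coding; all function and parameter names are hypothetical.

      import numpy as np

      def hybrid_classify(elm_scores, x, dictionary, dict_labels, top_k=3, margin_thresh=0.2):
          """Two-stage ELM/SRC-style gate (illustrative simplification).

          elm_scores  : (n_classes,) ELM output vector for the query x.
          dictionary  : (n_features, n_atoms) training samples as columns.
          dict_labels : (n_atoms,) integer class label of each atom.
          """
          order = np.argsort(elm_scores)[::-1]
          margin = elm_scores[order[0]] - elm_scores[order[1]]
          if margin >= margin_thresh:                  # ELM output deemed reliable
              return int(order[0])

          # Otherwise keep only atoms from the top-k ELM-ranked classes (adaptive sub-dictionary).
          keep = np.isin(dict_labels, order[:top_k])
          sub_D, sub_labels = dictionary[:, keep], dict_labels[keep]

          # Simplified coding stage: least-squares coefficients instead of an l1 solver.
          coef, *_ = np.linalg.lstsq(sub_D, x, rcond=None)

          # Class-wise reconstruction residuals, as in SRC; the smallest residual wins.
          residuals = [np.linalg.norm(x - sub_D[:, sub_labels == c] @ coef[sub_labels == c])
                       for c in order[:top_k]]
          return int(order[:top_k][int(np.argmin(residuals))])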

  20. Adaptive optics scanning laser ophthalmoscope imaging: technology update

    PubMed Central

    Merino, David; Loza-Alvarez, Pablo

    2016-01-01

    Adaptive optics (AO) retinal imaging has become very popular in the past few years, especially within the ophthalmic research community. Several different retinal techniques, such as fundus imaging cameras or optical coherence tomography systems, have been coupled with AO in order to produce impressive images showing individual cell mosaics over different layers of the in vivo human retina. The combination of AO with scanning laser ophthalmoscopy has been extensively used to generate impressive images of the human retina with unprecedented resolution, showing individual photoreceptor cells, retinal pigment epithelium cells, as well as microscopic capillary vessels, or the nerve fiber layer. Over the past few years, the technique has evolved to develop several different applications not only in the clinic but also in different animal models, thanks to technological developments in the field. These developments have specific applications to different fields of investigation, which are not limited to the study of retinal diseases but also to the understanding of the retinal function and vision science. This review is an attempt to summarize these developments in an understandable and brief manner in order to guide the reader into the possibilities that AO scanning laser ophthalmoscopy offers, as well as its limitations, which should be taken into account when planning on using it. PMID:27175057

  1. Adaptive optics scanning laser ophthalmoscope imaging: technology update.

    PubMed

    Merino, David; Loza-Alvarez, Pablo

    2016-01-01

    Adaptive optics (AO) retinal imaging has become very popular in the past few years, especially within the ophthalmic research community. Several different retinal techniques, such as fundus imaging cameras or optical coherence tomography systems, have been coupled with AO in order to produce impressive images showing individual cell mosaics over different layers of the in vivo human retina. The combination of AO with scanning laser ophthalmoscopy has been extensively used to generate impressive images of the human retina with unprecedented resolution, showing individual photoreceptor cells, retinal pigment epithelium cells, as well as microscopic capillary vessels, or the nerve fiber layer. Over the past few years, the technique has evolved to develop several different applications not only in the clinic but also in different animal models, thanks to technological developments in the field. These developments have specific applications to different fields of investigation, which are not limited to the study of retinal diseases but also to the understanding of the retinal function and vision science. This review is an attempt to summarize these developments in an understandable and brief manner in order to guide the reader into the possibilities that AO scanning laser ophthalmoscopy offers, as well as its limitations, which should be taken into account when planning on using it. PMID:27175057

  2. An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2016-01-01

    A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, implementing an adaptive-Gaussian (AG) fuzzy membership method, a compressed sensing (CS) technique, and a total variation (TV) based gradient descent reconstruction algorithm, is proposed for the fusion of infrared and visible images. Compared with wavelet, contourlet, or any other multi-resolution analysis method, NSCT has evident advantages, such as multi-scale and multi-direction representation and translation invariance. A fuzzy set is characterized by its membership function (MF), and the commonly used Gaussian fuzzy membership degree can be introduced to establish adaptive control of the fusion processing. The compressed sensing technique can sparsely sample the image information at a given sampling rate, and the sparse signal can be recovered by solving a convex problem with a gradient descent based iterative algorithm. In the proposed fusion process, the pre-enhanced infrared image and the visible image are first decomposed into low-frequency and high-frequency subbands via the NSCT method. The low-frequency coefficients are fused using the adaptive regional average energy rule; the highest-frequency coefficients are fused using the maximum absolute selection rule; the other high-frequency coefficients are sparsely sampled, fused using the adaptive-Gaussian regional standard deviation rule, and then recovered by the total variation based gradient descent recovery algorithm. Experimental results and human visual perception illustrate the effectiveness and advantages of the proposed fusion approach. The efficiency and robustness are also analyzed and discussed using different evaluation measures, such as standard deviation, Shannon entropy, root-mean-square error, mutual information, and an edge-based similarity index.
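
    Two of the subband fusion rules mentioned above are easy to illustrate in isolation: the adaptive regional average energy rule for the low-frequency band and the maximum-absolute selection rule for the highest-frequency band. In the sketch below, a and b are corresponding subbands of the infrared and visible images; a plain box window stands in for the regional energy measure, and the NSCT decomposition, fuzzy weighting, and compressed-sensing stages are omitted. The window size is an assumption.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def fuse_lowpass_by_regional_energy(a, b, win=7):
          """Per pixel, keep the low-frequency coefficient with the larger local average energy."""
          energy_a = uniform_filter(a * a, size=win)
          energy_b = uniform_filter(b * b, size=win)
          return np.where(energy_a >= energy_b, a, b)

      def fuse_highpass_by_max_abs(a, b):
          """Maximum-absolute selection rule for the highest-frequency subband."""
          return np.where(np.abs(a) >= np.abs(b), a, b)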

  3. UAV multiple image dense matching based on self-adaptive patch

    NASA Astrophysics Data System (ADS)

    Zhu, Jin; Ding, Yazhou; Xiao, Xiongwu; Guo, Bingxuan; Li, Deren; Yang, Nan; Zhang, Weilong; Huang, Xiangxiang; Li, Linhui; Peng, Zhe; Pan, Fei

    2015-12-01

    Drawing on state-of-the-art multi-view dense matching methods, this article proposes a UAV multiple-image dense matching algorithm based on self-adaptive patches (UAV-AP), designed around the particular characteristics of UAV images. The main idea of matching propagation based on self-adaptive patches is to build patches centered on seed points that are already matched. The extent and shape of the patches adapt to the terrain relief automatically: where the surface is smooth, a patch grows to cover the whole smooth area; where the terrain is rough, the patch shrinks to describe the details of the surface. With this approach, the UAV image sequences and the given or previously triangulated orientation elements are taken as inputs. The main processing procedures are as follows: (1) multi-view initial feature matching, (2) matching propagation based on self-adaptive patches, (3) filtering of erroneous matching points. Finally, the algorithm outputs a dense colored point cloud. Experiments indicate that this method surpasses existing related algorithms in efficiency while the matching precision is also quite good.

  4. Sequential Processes In Image Generation.

    ERIC Educational Resources Information Center

    Kosslyn, Stephen M.; And Others

    1988-01-01

    Results of three experiments are reported, which indicate that images of simple two-dimensional patterns are formed sequentially. The subjects included 48 undergraduates and 16 members of the Harvard University (Cambridge, Mass.) community. A new objective methodology indicates that images of complex letters require more time to generate. (TJH)

  5. Adaptive windowing in contrast-enhanced intravascular ultrasound imaging.

    PubMed

    Lindsey, Brooks D; Martin, K Heath; Jiang, Xiaoning; Dayton, Paul A

    2016-08-01

    Intravascular ultrasound (IVUS) is one of the most commonly used interventional imaging techniques and has seen recent innovations that attempt to characterize the risk posed by atherosclerotic plaques. One such development is the use of microbubble contrast agents to image the vasa vasorum, fine vessels which supply oxygen and nutrients to the walls of coronary arteries and typically have diameters less than 200 μm. The degree of vasa vasorum neovascularization within plaques is positively correlated with plaque vulnerability. Having recently presented a prototype dual-frequency transducer for contrast agent-specific intravascular imaging, here we describe signal processing approaches based on minimum variance (MV) beamforming and the phase coherence factor (PCF) for improving the spatial resolution and contrast-to-tissue ratio (CTR) in IVUS imaging. These approaches are examined through simulations, phantom studies, ex vivo studies in porcine arteries, and in vivo studies in chicken embryos. In phantom studies, PCF processing improved CTR by a mean of 4.2 dB, while combined MV and PCF processing improved spatial resolution by 41.7%. Improvements of 2.2 dB in CTR and 37.2% in resolution were observed in vivo. Applying these processing strategies can enhance image quality in conventional B-mode IVUS or in contrast-enhanced IVUS, where the signal-to-noise ratio is relatively low and resolution is at a premium. PMID:27161022
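
    As an illustration of the phase coherence factor idea, the sketch below uses one common formulation, weighting a beamformed sample by the spread of the per-channel phases after delay alignment; the exact weighting used in the paper may differ, and the channel data here are synthetic.

      import numpy as np

      def phase_coherence_factor(channel_iq, gamma=1.0):
          """PCF = max(0, 1 - gamma * std(phase) / sigma0), one common definition.

          channel_iq : (n_channels,) complex aperture-domain samples after delay alignment.
          sigma0 is the standard deviation of a phase uniformly distributed on [-pi, pi).
          """
          phases = np.angle(channel_iq)
          sigma0 = np.pi / np.sqrt(3.0)
          return max(0.0, 1.0 - gamma * np.std(phases) / sigma0)

      # Weighted beamformed sample: coherent sum scaled by the PCF.
      rng = np.random.default_rng(1)
      iq = np.exp(1j * rng.normal(0.0, 0.3, 64))   # hypothetical well-aligned channel data
      sample = np.abs(np.sum(iq)) * phase_coherence_factor(iq)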

  6. Image processing on the IBM personal computer

    NASA Technical Reports Server (NTRS)

    Myers, H. J.; Bernstein, R.

    1985-01-01

    An experimental, personal computer image processing system has been developed which provides a variety of processing functions in an environment that connects programs by means of a 'menu' for both casual and experienced users. The system is implemented by a compiled BASIC program that is coupled to assembly language subroutines. Image processing functions encompass subimage extraction, image coloring, area classification, histogramming, contrast enhancement, filtering, and pixel extraction.

  7. Semi-automated Image Processing for Preclinical Bioluminescent Imaging

    PubMed Central

    Slavine, Nikolai V; McColl, Roderick W

    2015-01-01

    Objective Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy for automated bioluminescence image processing, from data acquisition to obtaining 3D images. Methods To optimize this procedure, a semi-automated image processing approach within a multi-modality image handling environment was developed. To identify the location and strength of a bioluminescent source, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used an MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities at the boundary of the medium to determine a first-order approximation of the photon fluence; we subsequently applied a novel iterative deconvolution method to obtain the final reconstruction. Results We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can reduce the time required for volumetric imaging and quantitative assessment. Conclusion The data obtained from light phantom and mouse lung tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach for bioluminescent image processing. We suggest that the developed image processing approach can be applied to preclinical imaging studies to characterize tumor growth, identify metastases, and potentially determine the effectiveness of cancer treatment. PMID:26618187

  8. Image Processing: A State-of-the-Art Way to Learn Science.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    Teachers participating in the Image Processing for Teaching Process, begun at the University of Arizona's Lunar and Planetary Laboratory in 1989, find this technology ideal for encouraging student discovery, promoting constructivist science or math experiences, and adapting in classrooms. Because image processing is not a computerized text, it…

  9. Image processing applied to laser cladding process

    SciTech Connect

    Meriaudeau, F.; Truchetet, F.

    1996-12-31

    The laser cladding process, which consists of adding melted powder to a substrate in order to improve or change the behavior of the material against corrosion, fatigue, and so on, involves many parameters. In order to produce good tracks, some parameters need to be controlled during the process. The authors present here a low-cost, high-performance system using two CCD matrix cameras. One camera provides surface temperature measurements while the other gives information on the powder distribution or the geometric characteristics of the tracks. The surface temperature (via Beer-Lambert's law) enables one to detect variations in the mass feed rate: using such a system the authors are able to detect fluctuations of 2 to 3 g/min in the mass flow rate. The other camera gives information related to the powder distribution; a simple algorithm applied to the data acquired from the CCD matrix camera allows them to see very weak fluctuations in both gas fluxes (carrier and shielding gas). During the process, this camera is also used to perform geometric measurements: the height and the width of the track are obtained in real time and enable the operator to relate them to process parameters such as the processing speed and the mass flow rate. The authors display the results provided by their system in order to enhance the efficiency of the laser cladding process. The conclusion is dedicated to a summary of the presented work and expectations for the future.

  10. Blurred Star Image Processing for Star Sensors under Dynamic Conditions

    PubMed Central

    Zhang, Weina; Quan, Wei; Guo, Lei

    2012-01-01

    The precision of star point location is significant to identify the star map and to acquire the aircraft attitude for star sensors. Under dynamic conditions, star images are not only corrupted by various noises, but also blurred due to the angular rate of the star sensor. According to different angular rates under dynamic conditions, a novel method is proposed in this article, which includes a denoising method based on adaptive wavelet threshold and a restoration method based on the large angular rate. The adaptive threshold is adopted for denoising the star image when the angular rate is in the dynamic range. Then, the mathematical model of motion blur is deduced so as to restore the blurred star map due to large angular rate. Simulation results validate the effectiveness of the proposed method, which is suitable for blurred star image processing and practical for attitude determination of satellites under dynamic conditions. PMID:22778666

  11. Behavioral training promotes multiple adaptive processes following acute hearing loss

    PubMed Central

    Keating, Peter; Rosenior-Patten, Onayomi; Dahmen, Johannes C; Bell, Olivia; King, Andrew J

    2016-01-01

    The brain possesses a remarkable capacity to compensate for changes in inputs resulting from a range of sensory impairments. Developmental studies of sound localization have shown that adaptation to asymmetric hearing loss can be achieved either by reinterpreting altered spatial cues or by relying more on those cues that remain intact. Adaptation to monaural deprivation in adulthood is also possible, but appears to lack such flexibility. Here we show, however, that appropriate behavioral training enables monaurally-deprived adult humans to exploit both of these adaptive processes. Moreover, cortical recordings in ferrets reared with asymmetric hearing loss suggest that these forms of plasticity have distinct neural substrates. An ability to adapt to asymmetric hearing loss using multiple adaptive processes is therefore shared by different species and may persist throughout the lifespan. This highlights the fundamental flexibility of neural systems, and may also point toward novel therapeutic strategies for treating sensory disorders. DOI: http://dx.doi.org/10.7554/eLife.12264.001 PMID:27008181

  12. Assessing the Process of Marital Adaptation: The Marital Coping Inventory.

    ERIC Educational Resources Information Center

    Zborowski, Lydia L.; Berman, William H.

    Studies on coping with life events identify marriage as a distinct situational stressor, in which a wide range of coping strategies specific to the marital relationship are employed. This study examined the process of marital adaptation, identified as a style of coping, in 116 married volunteers. Subjects completed a demographic questionnaire, the…

  13. Computers in Public Schools: Changing the Image with Image Processing.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  14. Adaptive control of surface finish in automated turning processes

    NASA Astrophysics Data System (ADS)

    García-Plaza, E.; Núñez, P. J.; Martín, A. R.; Sanz, A.

    2012-04-01

    The primary aim of this study was to design and develop an on-line control system for surface finish in automated machining processes by CNC turning. The control system consisted of two basic phases: during the first phase, surface roughness was monitored through cutting force signals; the second phase involved a closed-loop adaptive control system based on data obtained during the monitoring of the cutting process. The system ensures that surface roughness is maintained at optimum values by adjusting the feed rate through communication with the PLC of the CNC machine. A monitoring and adaptive control system has been developed that enables the real-time monitoring of surface roughness during CNC turning operations. The system detects and prevents faults in automated turning processes and applies corrective measures during the cutting process that raise quality and reliability while reducing the need for quality control.
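
    A minimal sketch of the closed-loop idea: a roughness estimate obtained from the monitored cutting-force signals drives a proportional correction of the feed rate, which is then written to the machine controller. The linear control law, gains, limits, and the PLC call are hypothetical placeholders, not the controller described in the paper.

      def adjust_feed_rate(feed_mm_rev, ra_estimate_um, ra_target_um,
                           gain=0.05, feed_min=0.05, feed_max=0.40):
          """Proportional correction: roughness above target lowers the feed, and vice versa."""
          error = ra_estimate_um - ra_target_um
          new_feed = feed_mm_rev * (1.0 - gain * error)
          return min(max(new_feed, feed_min), feed_max)

      # Example loop body: each new Ra estimate from the force-signal model updates the feed.
      feed = 0.20
      for ra_est in [2.1, 2.6, 1.7]:            # hypothetical Ra estimates (micrometres)
          feed = adjust_feed_rate(feed, ra_est, ra_target_um=1.8)
          # plc.write_feed_override(feed)       # placeholder for the CNC/PLC interface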

  15. An adaptable image retrieval system with relevance feedback using kernel machines and selective sampling.

    PubMed

    Azimi-Sadjadi, Mahmood R; Salazar, Jaime; Srinivasan, Saravanakumar

    2009-07-01

    This paper presents an adaptable content-based image retrieval (CBIR) system developed using regularization theory, kernel-based machines, and the Fisher information measure. The system consists of a retrieval subsystem that carries out similarity matching using image-dependent information, multiple mapping subsystems that adaptively modify the similarity measures, and a relevance feedback mechanism that incorporates user information. The adaptation process drives the retrieval error to zero in order to exactly meet either an existing multiclass classification model or the user's high-level concepts, using reference-model or relevance feedback learning, respectively. To facilitate the selection of the most informative query images during relevance feedback learning, a new method based upon the Fisher information is introduced. Reference-model and relevance feedback learning mechanisms are thoroughly tested on a domain-specific image database that encompasses a wide range of underwater objects captured using an electro-optical sensor. Benchmarking results with two other relevance feedback learning methods are also provided. PMID:19447718

  16. Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images

    PubMed Central

    Cunefare, David; Cooper, Robert F.; Higgins, Brian; Katz, David F.; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina

    2016-01-01

    Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice’s coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice’s coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images. PMID:27231641

  17. Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images.

    PubMed

    Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina

    2016-05-01

    Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images. PMID:27231641

  18. Noise correlation-based adaptive polarimetric image representation for contrast enhancement of a polarized beacon in fog

    NASA Astrophysics Data System (ADS)

    Panigrahi, Swapnesh; Fade, Julien; Alouini, Mehdi

    2015-10-01

    We show the use of a simplified snapshot polarimetric camera along with adaptive image processing for optimal detection of a polarized light beacon through fog. The adaptive representation is derived from a theoretical noise analysis of the data at hand and is shown to be optimal in the maximum-likelihood sense. We report that the contrast-enhancing optimal representation, which depends on the background noise correlation, differs in general from standard representations such as the polarimetric difference image or the polarization-filtered image. Lastly, we discuss a detection strategy to reduce the false-positive counts.

  19. Image Processing in Intravascular OCT

    NASA Astrophysics Data System (ADS)

    Wang, Zhao; Wilson, David L.; Bezerra, Hiram G.; Rollins, Andrew M.

    Coronary artery disease is the leading cause of death in the world. Intravascular optical coherence tomography (IVOCT) is rapidly becoming a promising imaging modality for characterization of atherosclerotic plaques and evaluation of coronary stenting. OCT has several unique advantages over alternative technologies, such as intravascular ultrasound (IVUS), due to its better resolution and contrast. For example, OCT is currently the only imaging modality that can measure the thickness of the fibrous cap of an atherosclerotic plaque in vivo. OCT also has the ability to accurately assess the coverage of individual stent struts by neointimal tissue over time. However, it is extremely time-consuming to analyze IVOCT images manually to derive quantitative diagnostic metrics. In this chapter, we introduce some computer-aided methods to automate the common IVOCT image analysis tasks.

  20. Local adaptive approach toward segmentation of microscopic images of activated sludge flocs

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Burhan; Nisar, Humaira; Ng, Choon Aun; Lo, Po Kim; Yap, Vooi Voon

    2015-11-01

    The activated sludge process is a widely used method to treat domestic and industrial effluents. The condition of an activated sludge wastewater treatment plant (AS-WWTP) is related to the morphological properties of flocs (microbial aggregates) and filaments, and needs to be monitored for normal operation of the plant. Image processing and analysis is a potentially time-efficient monitoring tool for AS-WWTPs. Local adaptive segmentation algorithms are proposed for bright-field microscopic images of activated sludge flocs. Two basic modules are suggested for Otsu thresholding-based local adaptive algorithms with irregular illumination compensation. The performance of the algorithms has been compared with the state-of-the-art local adaptive algorithms of Sauvola, Bradley, Feng, and c-means, using a number of region- and nonregion-based metrics at different microscopic magnifications and quantification of flocs. The performance metrics show that the proposed algorithms performed better than, or in some cases comparably to, the state-of-the-art algorithms. The performance metrics were also assessed subjectively for their suitability for segmentation of activated sludge images. Region-based metrics such as the false negative ratio, sensitivity, and negative predictive value gave inconsistent results compared to other segmentation assessment metrics.
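
    The Otsu-based local adaptive module can be sketched as block-wise thresholding: a classic Otsu threshold is recomputed for each tile so it adapts to local illumination. The tile size and the assumption that bright pixels are foreground are placeholders, and the irregular-illumination compensation step described in the paper is omitted.

      import numpy as np

      def otsu_threshold(gray):
          """Classic Otsu threshold on a uint8 image (maximizes between-class variance)."""
          if gray.min() == gray.max():                       # constant tile: nothing to separate
              return int(gray.min())
          prob = np.bincount(gray.ravel(), minlength=256).astype(float)
          prob /= prob.sum()
          omega = np.cumsum(prob)                            # class-0 probability
          mu = np.cumsum(prob * np.arange(256))              # class-0 cumulative mean
          with np.errstate(divide="ignore", invalid="ignore"):
              sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
          return int(np.nanargmax(sigma_b))

      def local_otsu_segment(gray, tile=64):
          """Recompute the Otsu threshold per tile so it adapts to local illumination."""
          out = np.zeros(gray.shape, dtype=bool)
          for r in range(0, gray.shape[0], tile):
              for c in range(0, gray.shape[1], tile):
                  block = gray[r:r + tile, c:c + tile]
                  out[r:r + tile, c:c + tile] = block > otsu_threshold(block)
          return out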

  1. Combining advanced imaging processing and low cost remote imaging capabilities

    NASA Astrophysics Data System (ADS)

    Rohrer, Matthew J.; McQuiddy, Brian

    2008-04-01

    Target images are very important for evaluating the situation when Unattended Ground Sensors (UGS) are deployed. These images add a significant amount of information for determining the difference between hostile and non-hostile activities, the number of targets in an area, the difference between animals and people, the movement dynamics of targets, and when specific activities of interest are taking place. The imaging capability of a UGS system should deliver only imagery of target activity, not images with no targets in the field of view. Current UGS remote imaging systems are neither optimized for target processing nor low cost. In this paper, McQ describes an architectural and technological approach for significantly improving the processing of images to provide target information while reducing the cost of the intelligent remote imaging capability.

  2. Adaptive automatic segmentation of Leishmaniasis parasite in Indirect Immunofluorescence images.

    PubMed

    Ouertani, F; Amiri, H; Bettaib, J; Yazidi, R; Ben Salah, A

    2014-01-01

    This paper describes the first steps toward automation of the serum titration process, which requires automation of Indirect Immunofluorescence (IIF) diagnosis. We deal with the initial phase, the segmentation of the fluorescence images. Our approach consists of three principal stages: (1) a color-based segmentation that extracts the fluorescent foreground using k-means clustering, (2) the segmentation of the fluorescent clustered image, and (3) a region-based feature segmentation intended to remove fluorescent noisy regions and locate fluorescent parasites. We evaluated the proposed method on 40 IIF images. Experimental results show that the method provides reliable and robust automatic segmentation of fluorescent Promastigote parasites. PMID:25571049

  3. Adaptive filtering of radar images for autofocus applications

    NASA Technical Reports Server (NTRS)

    Stiles, J. A.; Frost, V. S.; Gardner, J. S.; Eland, D. R.; Shanmugam, K. S.; Holtzman, J. C.

    1981-01-01

    Autofocus techniques are being designed at the Jet Propulsion Laboratory to automatically choose the filter parameters (i.e., the focus) for the digital synthetic aperture radar correlator; currently, processing relies upon interaction with a human operator who uses his subjective assessment of the quality of the processed SAR data. Algorithms were devised applying image cross-correlation to aid in the choice of filter parameters, but this method also has its drawbacks in that the cross-correlation result may not be readily interpretable. Enhanced performance of the cross-correlation techniques of JPL was hypothesized given that the images to be cross-correlated were first filtered to improve the signal-to-noise ratio for the pair of scenes. The results of experiments are described and images are shown.

  4. Adapting the BIMA Image Pipeline for Miriad Using Python

    NASA Astrophysics Data System (ADS)

    Mehringer, D. M.; Plante, R.

    2004-07-01

    Through our experience using AIPS++ in the BIMA Image Pipeline, we found that a sophisticated scripting environment is crucial for supporting an automated pipeline. Miriad V4, now in development, introduces support for calling Miriad programs from a Python environment (referred to as Pyramid). We are creating processing recipes using Miriad through Python that can be used with the BIMA Image Pipeline. As part of this work, we are prototyping tools that could be integrated into Pyramid. These include two Python classes, UVDataset and Image for examining the contents of Miriad datasets. These simple tools have allowed us to recast our Pipeline using Miriad in only a couple of months. Python recipes are used for such things as determining line-free channels for continuum subtraction and determining if data will benefit from self-calibration. We are currently using the Pipeline to do massive processing of hundreds of tracks of archival data using NCSA's Teraflop IA-32 Linux cluster.

  5. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing broad scope for product advertising. Mixing real-world images with those rendered from virtual-space software shows a more or less visible mismatch between the corresponding image qualities. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to image degradation factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all these degradation factors can be characterized by the system point spread function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of degradation added to any captured picture. This work explores the use of image processing to degrade the rendered images according to the measured system PSF, attempting to match virtual and real-world image quality. The system MTF is determined by the slanted-edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different regions of the final image.
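
    The PSF-matching step can be sketched as a simple Gaussian degradation of the rendered image. In practice the Gaussian width would come from the slanted-edge MTF measurement; here sigma_px and the input image are placeholders.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def match_rendered_to_camera(rendered, sigma_px):
          """Blur a rendered image with a Gaussian stand-in for the measured system PSF."""
          if rendered.ndim == 3:                    # blur each color channel independently
              return np.stack([gaussian_filter(rendered[..., c], sigma_px)
                               for c in range(rendered.shape[-1])], axis=-1)
          return gaussian_filter(rendered, sigma_px)

      # Example with a hypothetical sigma of 1.2 pixels for the taking camera.
      rendered = np.random.default_rng(2).random((256, 256, 3))
      degraded = match_rendered_to_camera(rendered, sigma_px=1.2)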

  6. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744

  7. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744

  8. A multiscale contrast direction adaptation approach for the fusion of multispectral and multifocus infrared images

    NASA Astrophysics Data System (ADS)

    Karali, A. O.; Cakir, Serdar; Aytaç, Tayfun

    2015-10-01

    Infrared (IR) cameras are widely used in the latest surveillance systems because the spectral characteristics of objects provide valuable information for object detection and identification. To assist the surveillance system operator and automatic image processing tasks, fusing images in the IR band has been proposed as a way to increase situational awareness, and different fusion techniques have been developed for this purpose. Existing techniques are generally developed for specific scenarios because image content may vary dramatically depending on the spectral range, the optical properties of the cameras, the spectral characteristics of the scene, and the spatial resolution of the targets of interest in the scene. A general-purpose IR image fusion technique suitable for real-time applications is proposed. The proposed technique can support different scenarios by applying multiscale detail detection, and can be applied to images captured in different spectral regions by adaptively adjusting the contrast direction through cross-checking between the source images. The feasibility of the proposed algorithm is demonstrated on registered multi-spectral and multi-focus IR images. Fusion results are presented and the performance of the proposed technique is compared with baseline fusion methods through objective and subjective tests. The technique outperforms the baseline methods in the subjective tests and provides promising results in objective quality metrics with an acceptable computational load. In addition, the proposed technique preserves object details and prevents undesired artifacts better than the baseline techniques in the image fusion scenario that contains four source images.

  9. An adaptive algorithm for removing the blocking artifacts in block-transform coded images

    NASA Astrophysics Data System (ADS)

    Yang, Jingzhong; Ma, Zheng

    2005-11-01

    JPEG and MPEG compression standards adopt a macroblock encoding approach, but this method can lead to annoying blocking effects: artificial rectangular discontinuities in the decoded images. Many powerful postprocessing algorithms have been developed to remove these blocking effects; however, all but the simplest can be too complex for real-time applications such as video decoding. We propose an adaptive, easy-to-implement algorithm that removes the artificial discontinuities. The algorithm has two steps: first, a fast linear smoothing of the block-edge pixels using an average-value replacement strategy; second, comparing the variance derived from the difference between the processed and original images with a reasonable threshold to decide whether the first step should stop or be repeated. Experiments show that this algorithm can quickly remove the artificial discontinuities without destroying the key information of the decoded images, and that it is robust to different images and transform strategies.
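
    The two steps can be sketched directly: one pass of average-value replacement across the 8x8 block boundaries, then a variance test on the change introduced by that pass to decide whether to iterate. The block size, variance threshold, and iteration cap are assumptions, not values from the paper.

      import numpy as np

      def smooth_block_edges(img, block=8):
          """One pass of average-value replacement along vertical and horizontal block boundaries."""
          out = img.astype(float).copy()
          for c in range(block, img.shape[1], block):        # vertical boundaries
              avg = 0.5 * (out[:, c - 1] + out[:, c])
              out[:, c - 1] = out[:, c] = avg
          for r in range(block, img.shape[0], block):        # horizontal boundaries
              avg = 0.5 * (out[r - 1, :] + out[r, :])
              out[r - 1, :] = out[r, :] = avg
          return out

      def deblock(img, var_thresh=2.0, max_iter=5):
          """Repeat edge smoothing until the change it introduces falls below a threshold."""
          current = img.astype(float)
          for _ in range(max_iter):
              smoothed = smooth_block_edges(current)
              if np.var(smoothed - current) < var_thresh:    # little change left, so stop
                  return smoothed
              current = smoothed
          return current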

  10. Adaptive optics SLO/OCT for 3D imaging of human photoreceptors in vivo

    PubMed Central

    Felberer, Franz; Kroisamer, Julia-Sophie; Baumann, Bernhard; Zotter, Stefan; Schmidt-Erfurth, Ursula; Hitzenberger, Christoph K.; Pircher, Michael

    2014-01-01

    We present a new instrument that is capable of imaging human photoreceptors in three dimensions. To achieve high lateral resolution, the system incorporates an adaptive optics system. The high axial resolution is achieved through the implementation of optical coherence tomography (OCT). The instrument simultaneously records both scanning laser ophthalmoscope (SLO) and OCT en-face images, with pixel-to-pixel correspondence. The information provided by the SLO is used to correct for transverse eye motion in post-processing. In order to correct for axial eye motion, the instrument is equipped with a high-speed axial eye tracker. In vivo images of foveal cones, as well as images recorded at an eccentricity from the fovea showing cones and rods, are presented. PMID:24575339

  11. Adaptive Optics Retinal Imaging – Clinical Opportunities and Challenges

    PubMed Central

    Carroll, Joseph; Kay, David B.; Scoles, Drew; Dubra, Alfredo; Lombardo, Marco

    2014-01-01

    The array of therapeutic options available to clinicians for treating retinal disease is expanding. With these advances comes the need for better understanding of the etiology of these diseases on a cellular level as well as improved non-invasive tools for identifying the best candidates for given therapies and monitoring the efficacy of those therapies. While spectral domain optical coherence tomography (SD-OCT) offers a widely available tool for clinicians to assay the living retina, it suffers from poor lateral resolution due to the eye’s monochromatic aberrations. Adaptive optics (AO) is a technique to compensate for the eye’s aberrations and provide nearly diffraction-limited resolution. The result is the ability to visualize the living retina with cellular resolution. While AO is unquestionably a powerful research tool, many clinicians remain undecided on the clinical potential of AO imaging – putting many at a crossroads with respect to adoption of this technology. This review will briefly summarize the current state of AO retinal imaging, discuss current as well as future clinical applications of AO retinal imaging, and finally provide some discussion of research needs to facilitate more widespread clinical use. PMID:23621343

  12. EUV imaging experiment of an adaptive optics telescope

    NASA Astrophysics Data System (ADS)

    Kitamoto, S.; Shibata, T.; Takenaka, E.; Yoshida, M.; Murakami, H.; Shishido, Y.; Gotoh, N.; Nagasaki, K.; Takei, D.; Morii, M.

    2009-08-01

    We report an experimental result from our normal-incidence EUV telescope tuned to a 13.5 nm band and equipped with adaptive optics. The optics consists of a spherical primary mirror and a secondary mirror, both coated with a Mo/Si multilayer. The diameters of the primary and secondary mirrors are 80 mm and 55 mm, respectively. The secondary mirror is a deformable mirror with 31 bimorph-piezo electrodes. EUV light from a laser plasma source illuminated a Ni mesh with 31 μm wires, and the image of this mesh was recorded by a back-illuminated CCD. The reference wave was produced by an optical laser source with a 1 μm pinhole. We measured the wavefront of this reference wave and controlled the secondary mirror to obtain a good EUV image. Since the paths of the EUV light and of the optical reference light differed from each other, we modified the target wavefront used to control the deformable mirror so that the EUV image was optimized: higher-order Zernike components, as well as tilt and focus components, were added to the simply calculated reference wavefront. We confirmed the validity of this control and achieved a resolution of 2.1 arcsec.

  13. Binary adaptive semi-global matching based on image edges

    NASA Astrophysics Data System (ADS)

    Hu, Han; Rzhanov, Yuri; Hatcher, Philip J.; Bergeron, R. D.

    2015-07-01

    Image-based modeling and rendering is currently one of the most challenging topics in Computer Vision and Photogrammetry. The key issue here is building a set of dense correspondence points between two images, namely dense matching or stereo matching. Among all dense matching algorithms, Semi-Global Matching (SGM) is arguably one of the most promising algorithms for real-time stereo vision. Compared with global matching algorithms, SGM aggregates matching cost from several (eight or sixteen) directions rather than only the epipolar line using Dynamic Programming (DP). Thus, SGM eliminates the classical "streaking problem" and greatly improves its accuracy and efficiency. In this paper, we aim at further improvement of SGM accuracy without increasing the computational cost. We propose setting the penalty parameters adaptively according to image edges extracted by edge detectors. We have carried out experiments on the standard Middlebury stereo dataset and evaluated the performance of our modified method with the ground truth. The results have shown a noticeable accuracy improvement compared with the results using fixed penalty parameters while the runtime computational cost was not increased.
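
    Only the penalty-adaptation step is sketched here: an edge-magnitude map lowers the SGM smoothness penalty P2 where the image gradient is strong, so disparity discontinuities at object boundaries are penalized less. The linear mapping and the penalty range are assumptions; the cost computation and path aggregation of SGM are not shown.

      import numpy as np
      from scipy import ndimage

      def adaptive_p2(gray, p2_max=120.0, p2_min=20.0):
          """Per-pixel SGM smoothness penalty: small P2 across strong image edges."""
          gx = ndimage.sobel(gray.astype(float), axis=1)
          gy = ndimage.sobel(gray.astype(float), axis=0)
          edge = np.hypot(gx, gy)
          edge /= edge.max() + 1e-9                    # normalize edge strength to [0, 1]
          return p2_max - (p2_max - p2_min) * edge     # strong edge -> penalty near p2_min

      # During cost aggregation, this map would replace the usual fixed P2 constant,
      # while the small penalty P1 for one-level disparity changes can stay fixed.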

  14. ADAPTIVE OPTICS IMAGES OF KEPLER OBJECTS OF INTEREST

    SciTech Connect

    Adams, E. R.; Dupree, A. K.; Ciardi, D. R.; Gautier, T. N. III; Kulesa, C.; McCarthy, D.

    2012-08-15

    All transiting planets are at risk of contamination by blends with nearby, unresolved stars. Blends dilute the transit signal, causing the planet to appear smaller than it really is, or produce a false-positive detection when the target star is blended with eclipsing binary stars. This paper reports on high spatial-resolution adaptive optics images of 90 Kepler planetary candidates. Companion stars are detected as close as 0.''1 from the target star. Images were taken in the near-infrared (J and Ks bands) with ARIES on the MMT and PHARO on the Palomar Hale 200 inch telescope. Most objects (60%) have at least one star within 6'' separation and a magnitude difference of 9. Eighteen objects (20%) have at least one companion within 2'' of the target star; six companions (7%) are closer than 0.''5. Most of these companions were previously unknown, and the associated planetary candidates should receive additional scrutiny. Limits are placed on the presence of additional companions for every system observed, which can be used to validate planets statistically using the BLENDER method. Validation is particularly critical for low-mass, potentially Earth-like worlds, which are not detectable with current-generation radial velocity techniques. High-resolution images are thus a crucial component of any transit follow-up program.

  15. Frequency Adaptability and Waveform Design for OFDM Radar Space-Time Adaptive Processing

    SciTech Connect

    Sen, Satyabrata; Glover, Charles Wayne

    2012-01-01

    We propose an adaptive waveform design technique for an orthogonal frequency division multiplexing (OFDM) radar signal employing a space-time adaptive processing (STAP) technique. We observe that there are inherent variabilities of the target and interference responses in the frequency domain. Therefore, the use of an OFDM signal can not only increase the frequency diversity of our system, but also improve the target detectability by adaptively modifying the OFDM coefficients in order to exploit the frequency-variabilities of the scenario. First, we formulate a realistic OFDM-STAP measurement model considering the sparse nature of the target and interference spectra in the spatio-temporal domain. Then, we show that the optimal STAP-filter weight-vector is equal to the generalized eigenvector corresponding to the minimum generalized eigenvalue of the interference and target covariance matrices. With numerical examples we demonstrate that the resultant OFDM-STAP filter-weights are adaptable to the frequency-variabilities of the target and interference responses, in addition to the spatio-temporal variabilities. Hence, by better utilizing the frequency variabilities, we propose an adaptive OFDM-waveform design technique, and consequently gain a significant amount of STAP-performance improvement.
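
    The weight computation stated above can be sketched with synthetic covariance matrices: the STAP filter weights are taken as the generalized eigenvector associated with the minimum generalized eigenvalue of the (interference, target) covariance pair. The matrices below are random Hermitian positive-definite stand-ins, not estimates from the OFDM measurement model.

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(3)
      n_dof = 16                                      # spatio-temporal degrees of freedom

      def random_covariance(n):
          """Random Hermitian positive-definite matrix standing in for an estimated covariance."""
          a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
          return a @ a.conj().T + n * np.eye(n)

      R_interference = random_covariance(n_dof)
      R_target = random_covariance(n_dof)

      # Generalized eigenproblem R_interference w = lambda * R_target w; eigh returns
      # eigenvalues in ascending order, so column 0 is the minimum-eigenvalue eigenvector.
      eigvals, eigvecs = eigh(R_interference, R_target)
      w = eigvecs[:, 0]
      sinr_like = (np.real(w.conj() @ R_target @ w) /
                   np.real(w.conj() @ R_interference @ w))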

  16. Hard real-time beam scheduler enables adaptive images in multi-probe systems

    NASA Astrophysics Data System (ADS)

    Tobias, Richard J.

    2014-03-01

    Real-time embedded-system concepts were adapted to allow an imaging system to responsively control the firing of multiple probes. Large-volume, operator-independent (LVOI) imaging would increase the diagnostic utility of ultrasound. An obstacle to this innovation is the inability of current systems to drive multiple transducers dynamically. Commercial systems schedule scanning with static lists of beams to be fired and processed; here we allow an imager to adapt to changing beam-schedule demands as an intelligent response to incoming image data. An example of scheduling changes is demonstrated with a flexible duplex-mode, two-transducer application mimicking LVOI imaging. Operating systems use powerful dynamic scheduling algorithms, such as fixed-priority preemptive scheduling, but even real-time operating systems lack the timing guarantees required for ultrasound. Particularly for Doppler modes, events must be scheduled with sub-nanosecond precision, and acquired data are useless without meeting this requirement. A successful scheduler therefore needs unique characteristics. To approximate what would be needed in LVOI imaging, we show two transducers scanning different parts of a subject's leg. When one transducer notices flow in a region where the scans overlap, the system reschedules the other transducer to start flow mode and alter its beams to view the observed vessel and produce a flow measurement. The second transducer does this in a focused region only. This demonstrates key attributes of a successful LVOI system, such as robustness against obstructions and adaptive self-correction.

  17. Adaptive HIFU noise cancellation for simultaneous therapy and imaging using an integrated HIFU/imaging transducer

    PubMed Central

    Jeong, Jong Seob; Cannata, Jonathan Matthew; Shung, K Kirk

    2010-01-01

    It was previously demonstrated that it is feasible to simultaneously perform ultrasound therapy and imaging of a coagulated lesion during treatment with an integrated transducer that is capable of high intensity focused ultrasound (HIFU) and B-mode ultrasound imaging. It was found that coded excitation and fixed notch filtering upon reception could significantly reduce interference caused by the therapeutic transducer. During HIFU sonication, the imaging signal generated with coded excitation and fixed notch filtering had a range side-lobe level of less than −40 dB, while traditional short-pulse excitation and fixed notch filtering produced a range side-lobe level of −20 dB. The shortcoming is, however, that relatively complicated electronics may be needed to utilize coded excitation in an array imaging system. It is for this reason that in this paper an adaptive noise canceling technique is proposed to improve image quality by minimizing not only the therapeutic interference, but also the remnant side-lobe ‘ripples’ when using the traditional short-pulse excitation. The performance of this technique was verified through simulation and experiments using a prototype integrated HIFU/imaging transducer. Although it is known that the remnant ripples are related to the notch attenuation value of the fixed notch filter, in reality it is difficult to find the optimal notch attenuation value because of changes in the target or the medium resulting from motion or from different acoustic properties, even during a single sonication pulse. In contrast, the proposed adaptive noise canceling technique is capable of optimally minimizing both the therapeutic interference and residual ripples without such constraints. The prototype integrated HIFU/imaging transducer is composed of three rectangular elements. The 6 MHz center element is used for imaging and the outer two identical 4 MHz elements work together to transmit the HIFU beam. Two HIFU elements of 14.4 mm × 20.0 mm dimensions

  18. Adaptive HIFU noise cancellation for simultaneous therapy and imaging using an integrated HIFU/imaging transducer.

    PubMed

    Jeong, Jong Seob; Cannata, Jonathan Matthew; Shung, K Kirk

    2010-04-01

    It was previously demonstrated that it is feasible to simultaneously perform ultrasound therapy and imaging of a coagulated lesion during treatment with an integrated transducer that is capable of high intensity focused ultrasound (HIFU) and B-mode ultrasound imaging. It was found that coded excitation and fixed notch filtering upon reception could significantly reduce interference caused by the therapeutic transducer. During HIFU sonication, the imaging signal generated with coded excitation and fixed notch filtering had a range side-lobe level of less than -40 dB, while traditional short-pulse excitation and fixed notch filtering produced a range side-lobe level of -20 dB. The shortcoming is, however, that relatively complicated electronics may be needed to utilize coded excitation in an array imaging system. It is for this reason that in this paper an adaptive noise canceling technique is proposed to improve image quality by minimizing not only the therapeutic interference, but also the remnant side-lobe 'ripples' when using the traditional short-pulse excitation. The performance of this technique was verified through simulation and experiments using a prototype integrated HIFU/imaging transducer. Although it is known that the remnant ripples are related to the notch attenuation value of the fixed notch filter, in reality it is difficult to find the optimal notch attenuation value because of changes in the target or the medium resulting from motion or from different acoustic properties, even during a single sonication pulse. In contrast, the proposed adaptive noise canceling technique is capable of optimally minimizing both the therapeutic interference and residual ripples without such constraints. The prototype integrated HIFU/imaging transducer is composed of three rectangular elements. The 6 MHz center element is used for imaging and the outer two identical 4 MHz elements work together to transmit the HIFU beam. Two HIFU elements of 14.4 mm x 20.0 mm dimensions could
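
    The abstract does not spell out the canceller's structure, so the sketch below shows a generic least-mean-squares (LMS) adaptive noise canceller as an illustration of the concept: a reference input correlated with the HIFU interference is filtered and subtracted from the corrupted imaging signal, and the filter weights adapt sample by sample. The filter length and step size are placeholders, and this is not the paper's exact algorithm.

      import numpy as np

      def lms_cancel(primary, reference, n_taps=32, mu=1e-3):
          """Generic LMS adaptive noise canceller (illustrative, not the paper's exact filter).

          primary   : imaging signal corrupted by therapeutic interference.
          reference : signal correlated with the interference only (e.g., the HIFU drive).
          Returns the error signal, i.e. the interference-suppressed imaging signal.
          """
          w = np.zeros(n_taps)
          cleaned = np.zeros(len(primary))
          for k in range(n_taps, len(primary)):
              x = reference[k - n_taps:k][::-1]      # most recent reference samples
              y = w @ x                              # current interference estimate
              e = primary[k] - y                     # cleaned output sample
              w += 2.0 * mu * e * x                  # LMS weight update
              cleaned[k] = e
          return cleaned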

  19. Adaptive process control using fuzzy logic and genetic algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  20. Adaptive Process Control with Fuzzy Logic and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  1. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  2. Utilizing image processing techniques to compute herbivory.

    PubMed

    Olson, T E; Barlow, V M

    2001-01-01

    Leafy spurge (Euphorbia esula L. sensu lato) is a perennial weed species common to the north-central United States and southern Canada. The plant is a foreign species toxic to cattle. Spurge infestation can reduce cattle carrying capacity by 50 to 75 percent [1]. University of Wyoming Entomology doctoral candidate Vonny Barlow is conducting research in the area of biological control of leafy spurge via the Aphthona nigriscutis Foudras flea beetle. He is addressing the question of variability within leafy spurge and its potential impact on flea beetle herbivory. One component of Barlow's research consists of measuring the herbivory of leafy spurge plant specimens after introducing adult beetles. Herbivory is the degree of consumption of the plant's leaves and was measured in two different manners. First, Barlow assigned each consumed plant specimen a visual rank from 1 to 5. Second, image processing techniques were applied to "before" and "after" images of each plant specimen in an attempt to quantify herbivory more accurately. Standardized techniques were used to acquire images before and after beetles were allowed to feed on plants for a period of 12 days. Matlab was used as the image processing tool. The image processing algorithm allowed the user to crop the portion of the "before" image containing only plant foliage. Matlab then cropped the "after" image with the same dimensions and converted both images from RGB to grayscale. The grayscale images were converted to binary based on a user-defined threshold value. Finally, herbivory was computed from the number of black pixels in the "before" and "after" images. The image processing results were mixed. Although this image processing technique depends on user input and non-ideal images, the data are useful to Barlow's research and offer insight into better imaging systems and processing algorithms. PMID:11347423
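
    As an illustration only (not the authors' Matlab code), the crop/grayscale/threshold/count pipeline described above could be sketched in Python as follows; the cropping box, the threshold, the assumption that foliage is darker than the background, and the herbivory formula (fraction of foliage pixels lost) are all assumptions introduced here.

      import numpy as np
      from PIL import Image

      def foliage_mask(path, box, threshold=0.5):
          # crop the user-chosen box, convert to grayscale in [0, 1],
          # and mark pixels darker than the threshold as foliage
          gray = np.asarray(Image.open(path).crop(box).convert("L"), dtype=float) / 255.0
          return gray < threshold

      def herbivory(before_path, after_path, box, threshold=0.5):
          before = foliage_mask(before_path, box, threshold)
          after = foliage_mask(after_path, box, threshold)
          # assumed definition: fraction of original foliage pixels consumed
          return 1.0 - after.sum() / max(before.sum(), 1)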

  3. Image restoration of the open-loop adaptive optics retinal imaging system based on optical transfer function analysis

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Qi, Yue; Li, Dayu; Xia, Mingliang; Xuan, Li

    2013-07-01

    The residual aberrations of the adaptive optics retinal imaging system decrease the quality of the retinal images. To overcome this obstacle, we found that the optical transfer function (OTF) of the adaptive optics retinal imaging system can be described by the Levy stable distribution. A new method is then introduced to estimate the OTF of the open-loop adaptive optics system, based on analyzing its residual aberrations in the residual aberration measuring mode. Finally, the estimated OTF is applied to restore the retinal images of the open-loop adaptive optics retinal imaging system. The contrast and resolution of the restored image are significantly improved, with the Laplacian sum (LS) increasing from 0.0785 to 0.1480 and the gray mean grads (GMG) from 0.0165 to 0.0306.

  4. How Digital Image Processing Became Really Easy

    NASA Astrophysics Data System (ADS)

    Cannon, Michael

    1988-02-01

    In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of or analyzing the contents of images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic and the rapid growth in the number of commercial companies marketing digital image processing software and hardware.

  5. Non-linear Post Processing Image Enhancement

    NASA Technical Reports Server (NTRS)

    Hunt, Shawn; Lopez, Alex; Torres, Angel

    1997-01-01

    A non-linear filter for image post processing based on the feedforward Neural Network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal to noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean square non-linear filter, examples of the high frequency recovery, and the statistical properties of the filter are given.

  6. Epidemic processes over adaptive state-dependent networks

    NASA Astrophysics Data System (ADS)

    Ogura, Masaki; Preciado, Victor M.

    2016-06-01

    In this paper we study the dynamics of epidemic processes taking place in adaptive networks of arbitrary topology. We focus our study on the adaptive susceptible-infected-susceptible (ASIS) model, where healthy individuals are allowed to temporarily cut edges connecting them to infected nodes in order to prevent the spread of the infection. In this paper we derive a closed-form expression for a lower bound on the epidemic threshold of the ASIS model in arbitrary networks with heterogeneous node and edge dynamics. For networks with homogeneous node and edge dynamics, we show that the resulting lower bound is proportional to the epidemic threshold of the standard SIS model over static networks, with a proportionality constant that depends on the adaptation rates. Furthermore, based on our results, we propose an efficient algorithm to optimally tune the adaptation rates in order to eradicate epidemic outbreaks in arbitrary networks. We confirm the tightness of the proposed lower bounds with several numerical simulations and compare our optimal adaptation rates with popular centrality measures.
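
    A minimal discrete-time toy simulation in the spirit of the adaptive SIS process described above is sketched below in Python. The specific cutting and reconnection rules (edges are restored as soon as both endpoints are healthy), the synchronous update, and all parameter names are simplifying assumptions introduced here; they do not reproduce the authors' exact ASIS formulation or their threshold analysis.

      import numpy as np

      def simulate_adaptive_sis(A, beta, delta, phi, steps, p0=0.1, seed=0):
          # A: 0/1 symmetric adjacency matrix of the underlying contact graph
          # beta: per-contact infection probability, delta: recovery probability
          # phi: probability a susceptible node cuts an edge to an infected neighbour
          rng = np.random.default_rng(seed)
          n = A.shape[0]
          base = A.astype(bool)
          active = base.copy()                       # currently usable edges
          infected = rng.random(n) < p0
          prevalence = []
          for _ in range(steps):
              # adaptation: susceptible nodes cut edges to infected neighbours
              cut = active & ~infected[:, None] & infected[None, :] & (rng.random((n, n)) < phi)
              active &= ~(cut | cut.T)
              # infection along the remaining active edges
              k = (active & infected[None, :]).sum(axis=1)
              new_inf = ~infected & (rng.random(n) < 1.0 - (1.0 - beta) ** k)
              # recovery
              recovered = infected & (rng.random(n) < delta)
              infected = (infected | new_inf) & ~recovered
              # simplification: original edges reappear once both endpoints are healthy
              active |= base & ~infected[:, None] & ~infected[None, :]
              prevalence.append(infected.mean())
          return np.array(prevalence)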

  7. Epidemic processes over adaptive state-dependent networks.

    PubMed

    Ogura, Masaki; Preciado, Victor M

    2016-06-01

    In this paper we study the dynamics of epidemic processes taking place in adaptive networks of arbitrary topology. We focus our study on the adaptive susceptible-infected-susceptible (ASIS) model, where healthy individuals are allowed to temporarily cut edges connecting them to infected nodes in order to prevent the spread of the infection. In this paper we derive a closed-form expression for a lower bound on the epidemic threshold of the ASIS model in arbitrary networks with heterogeneous node and edge dynamics. For networks with homogeneous node and edge dynamics, we show that the resulting lower bound is proportional to the epidemic threshold of the standard SIS model over static networks, with a proportionality constant that depends on the adaptation rates. Furthermore, based on our results, we propose an efficient algorithm to optimally tune the adaptation rates in order to eradicate epidemic outbreaks in arbitrary networks. We confirm the tightness of the proposed lower bounds with several numerical simulations and compare our optimal adaptation rates with popular centrality measures. PMID:27415289

  8. Quantitative image processing in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  9. Anthropological methods of optical image processing

    NASA Astrophysics Data System (ADS)

    Ginzburg, V. M.

    1981-12-01

    Some applications of a new method for optical image processing, based on prior separation of informative elements (IE) with the help of a defocusing equal to the average defocusing of the eye and considered in a previous paper, are described. A diagram of a "drawing" robot that uses defocusing and other mechanisms of the human visual system (VS) is given. Methods of narrowing the TV channel bandwidth and of eliminating noise in computer image processing by prior image defocusing are described.

  10. Water surface capturing by image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  11. Super Resolution Reconstruction Based on Adaptive Detail Enhancement for ZY-3 Satellite Images

    NASA Astrophysics Data System (ADS)

    Zhu, Hong; Song, Weidong; Tan, Hai; Wang, Jingxue; Jia, Di

    2016-06-01

    Super-resolution reconstruction of sequences of remote sensing images is a technology that takes multiple low-resolution satellite remote sensing images with complementary information and produces one or more high resolution images. The core of the technology is high-precision matching between images and the extraction and fusion of fine detail information. This paper puts forward a new image super-resolution framework that can adaptively enhance the details of the reconstructed image at multiple scales. First, the sequence images were decomposed by a bilateral filter into a detail layer containing the detail information and a smooth layer containing the large-scale edge information. Then, a texture detail enhancement function was constructed to boost the magnitude of medium- and small-scale details. Next, the non-redundant information for the super-resolution reconstruction was obtained by differential processing of the detail layer, and the initial super-resolution result was achieved by interpolating and fusing the non-redundant information with the smooth layer. Finally, the final reconstructed image was acquired by applying a local optimization model to the initial result. Experiments on ZY-3 satellite images of the same phase and of different phases show that the proposed method improves both the information entropy and the image detail evaluation measure compared with the interpolation method, the traditional TV algorithm and the MAP algorithm, which indicates that our method clearly highlights image details and retains more ground texture information. A large number of experimental results reveal that the proposed method is robust and universal for different kinds of ZY-3 satellite images.
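
    The bilateral-filter split into a smooth layer and a boosted detail layer can be illustrated with the short Python sketch below (assumed filter parameters and boost factor); the multi-image fusion, differential processing and local optimization steps of the paper are not reproduced.

      import cv2
      import numpy as np

      def enhance_details(img, boost=1.5, d=9, sigma_color=25, sigma_space=7):
          # split the image into a smooth layer and a detail layer, then
          # amplify the medium/small-scale details and recombine
          img = img.astype(np.float32)
          smooth = cv2.bilateralFilter(img, d, sigma_color, sigma_space)
          detail = img - smooth
          return np.clip(smooth + boost * detail, 0, 255).astype(np.uint8)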

  12. Adaptive tracking of maneuvering targets based on IR image data

    NASA Astrophysics Data System (ADS)

    Maybeck, Peter S.

    1989-06-01

    The capability of tracking dynamic targets from forward looking infrared (FLIR) measurements was improved substantially by replacing standard correlation trackers with adaptive extended Kalman filters or enhanced correlator/Kalman filter combinations. A tracker able to handle multiple hot-spot targets, in which digital and/or optical signal processing is employed on the FLIR data to identify the underlying target shape is investigated. Furthermore, multiple model adaptive filtering is investigated as a means of changing the field-of-view as well as the tracker bandwidth when target acceleration can vary over a wide range. Enhancements are developed and analyzed: (1) allowing some of the elemental filters within the adaptive algorithm to have rectangular fields-of-view and to be tuned for target dynamics that are harsher in one direction than others; (2) considering both Gauss-Markov acceleration models and constant turn-rate models for target dynamics; and (3) devising an initial target acquisition algorithm to remove important biases in the estimated target template to be used within the tracker. The performance potential of such a tracking algorithm is shown to be substantial.

  13. Microstructure of subretinal drusenoid deposits revealed by adaptive optics imaging.

    PubMed

    Meadway, Alexander; Wang, Xiaolin; Curcio, Christine A; Zhang, Yuhua

    2014-03-01

    Subretinal drusenoid deposits (SDD), a recently recognized lesion associated with progression of age-related macular degeneration, were imaged with adaptive optics scanning laser ophthalmoscopy (AO-SLO) and optical coherence tomography (AO-OCT). AO-SLO revealed a distinct en face structure of stage 3 SDD, showing a hyporeflective annulus surrounding a reflective core packed with hyperreflective dots bearing a superficial similarity to the photoreceptors in the unaffected retina. However, AO-OCT suggested that the speckled appearance over the SDD rendered by AO-SLO was the lesion material itself, rather than photoreceptors. AO-OCT thus assists in the proper interpretation and understanding of the SDD structure and of the lesions' impact on surrounding photoreceptors as rendered by AO-SLO, and vice versa. PMID:24688808

  14. Automatic processing, analysis, and recognition of images

    NASA Astrophysics Data System (ADS)

    Abrukov, Victor S.; Smirnov, Evgeniy V.; Ivanov, Dmitriy G.

    2004-11-01

    New approaches and computer codes (A&CC) for automatic processing, analysis and recognition of images are offered. The A&CC are based on representing the object image as a collection of pixels of various colours and on consecutive automatic painting of distinct parts of the image. The A&CC address technical objectives in such directions as: 1) image processing, 2) image feature extraction, and 3) image analysis, among others, in any sequence and combination. The A&CC allow one to obtain various geometrical and statistical parameters of the object image and its parts. Additional possibilities arise when the A&CC are combined with artificial neural network technologies. We believe that the A&CC can be used to create testing and control systems in various industrial and military applications (airborne imaging systems, tracking of moving objects), in medical diagnostics, in the creation of new software for CCDs, in industrial vision, and in decision-making systems, etc. The capabilities of the A&CC have been tested on image analysis of model fires, plumes of sprayed fluid and ensembles of particles, on the decoding of interferometric images, on the digitization of paper charts of electrical signals, on text recognition, on removal of image noise and image filtering, on the analysis of astronomical images and aerial photography, and on object detection.

  15. Adaptive optics retinal imaging in the living mouse eye

    PubMed Central

    Geng, Ying; Dubra, Alfredo; Yin, Lu; Merigan, William H.; Sharma, Robin; Libby, Richard T.; Williams, David R.

    2012-01-01

    Correction of the eye’s monochromatic aberrations using adaptive optics (AO) can improve the resolution of in vivo mouse retinal images [Biss et al., Opt. Lett. 32(6), 659 (2007) and Alt et al., Proc. SPIE 7550, 755019 (2010)], but previous attempts have been limited by poor spot quality in the Shack-Hartmann wavefront sensor (SHWS). Recent advances in mouse eye wavefront sensing using an adjustable focus beacon with an annular beam profile have improved the wavefront sensor spot quality [Geng et al., Biomed. Opt. Express 2(4), 717 (2011)], and we have incorporated them into a fluorescence adaptive optics scanning laser ophthalmoscope (AOSLO). The performance of the instrument was tested on the living mouse eye, and images of multiple retinal structures, including the photoreceptor mosaic, nerve fiber bundles, fine capillaries and fluorescently labeled ganglion cells were obtained. The in vivo transverse and axial resolutions of the fluorescence channel of the AOSLO were estimated from the full width half maximum (FWHM) of the line and point spread functions (LSF and PSF), and were found to be better than 0.79 μm ± 0.03 μm (STD)(45% wider than the diffraction limit) and 10.8 μm ± 0.7 μm (STD)(two times the diffraction limit), respectively. The axial positional accuracy was estimated to be 0.36 μm. This resolution and positional accuracy has allowed us to classify many ganglion cell types, such as bistratified ganglion cells, in vivo. PMID:22574260

  16. SUPRIM: easily modified image processing software.

    PubMed

    Schroeter, J P; Bretaudiere, J P

    1996-01-01

    A flexible, modular software package intended for the processing of electron microscopy images is presented. The system consists of a set of image processing tools or filters, written in the C programming language, and a command line style user interface based on the UNIX shell. The pipe and filter structure of UNIX and the availability of command files in the form of shell scripts eases the construction of complex image processing procedures from the simpler tools. Implementation of a new image processing algorithm in SUPRIM may often be performed by construction of a new shell script, using already existing tools. Currently, the package has been used for two- and three-dimensional image processing and reconstruction of macromolecules and other structures of biological interest. PMID:8742734

  17. Thermodynamic Costs of Information Processing in Sensory Adaptation

    PubMed Central

    Sartori, Pablo; Granger, Léo; Lee, Chiu Fan; Horowitz, Jordan M.

    2014-01-01

    Biological sensory systems react to changes in their surroundings. They are characterized by fast response and slow adaptation to varying environmental cues. Insofar as sensory adaptive systems map environmental changes to changes of their internal degrees of freedom, they can be regarded as computational devices manipulating information. Landauer established that information is ultimately physical, and its manipulation subject to the entropic and energetic bounds of thermodynamics. Thus the fundamental costs of biological sensory adaptation can be elucidated by tracking how the information the system has about its environment is altered. These bounds are particularly relevant for small organisms, which unlike everyday computers, operate at very low energies. In this paper, we establish a general framework for the thermodynamics of information processing in sensing. With it, we quantify how during sensory adaptation information about the past is erased, while information about the present is gathered. This process produces entropy larger than the amount of old information erased and has an energetic cost bounded by the amount of new information written to memory. We apply these principles to the E. coli's chemotaxis pathway during binary ligand concentration changes. In this regime, we quantify the amount of information stored by each methyl group and show that receptors consume energy in the range of the information-theoretic minimum. Our work provides a basis for further inquiries into more complex phenomena, such as gradient sensing and frequency response. PMID:25503948

  18. The Role of Familiarity on Viewpoint Adaptation for Self-Face and Other-Face Images.

    PubMed

    Nevi, Andrea; Cicali, Filippo; Caudek, Corrado

    2016-07-01

    An adaptation method was used to investigate whether self-face processing is dissociable from general face processing. We explored the viewpoint aftereffect with face images having different degrees of familiarity (never-before-seen faces, recently familiarized faces, personally familiar faces, and the participant's own face). A face viewpoint aftereffect occurs after prolonged viewing of a face viewed from one side, with the result that the perceived viewing direction of a subsequently presented face image shown near the frontal view is biased in a direction which is the opposite of the adapting orientation. We found that (1) the magnitude of the viewpoint aftereffect depends on the level of familiarity of the adapting and test faces, (2) a cross-identity transfer of the viewpoint aftereffect is found between all categories of faces, but not between an unfamiliar adaptor face and the self-face test, and (3) learning affects the processing of the self-face in greater measure than any other category of faces. These results highlight the importance of familiarity on the face aftereffects, but they also suggest the possibility of separate representations for the self-face, on the one side, and for highly familiar faces, on the other. PMID:27165718

  19. Multiscale registration of planning CT and daily cone beam CT images for adaptive radiation therapy

    SciTech Connect

    Paquin, Dana; Levy, Doron; Xing Lei

    2009-01-15

    Adaptive radiation therapy (ART) is the incorporation of daily images in the radiotherapy treatment process so that the treatment plan can be evaluated and modified to maximize the amount of radiation dose to the tumor while minimizing the amount of radiation delivered to healthy tissue. Registration of planning images with daily images is thus an important component of ART. In this article, the authors report their research on multiscale registration of planning computed tomography (CT) images with daily cone beam CT (CBCT) images. The multiscale algorithm is based on the hierarchical multiscale image decomposition of E. Tadmor, S. Nezzar, and L. Vese [Multiscale Model. Simul. 2(4), pp. 554-579 (2004)]. Registration is achieved by decomposing the images to be registered into a series of scales using the (BV, L²) decomposition and initially registering the coarsest scales of the image using a landmark-based registration algorithm. The resulting transformation is then used as a starting point to deformably register the next coarse scales with one another. This procedure is iterated at each stage using the transformation computed by the previous scale registration as the starting point for the current registration. The authors present the results of studies of rectum, head-neck, and prostate CT-CBCT registration, and validate their registration method quantitatively using synthetic results in which the exact transformations are known, and qualitatively using clinical deformations in which the exact results are not known.

  20. Adaptive image warping for hole prevention in 3D view synthesis.

    PubMed

    Plath, Nils; Knorr, Sebastian; Goldmann, Lutz; Sikora, Thomas

    2013-09-01

    Increasing popularity of 3D videos calls for new methods to ease the conversion of existing monocular video to stereoscopic or multi-view video. A popular way to convert video is given by depth image-based rendering methods, in which a depth map associated with an image frame is used to generate a virtual view. Because of the lack of knowledge about the 3D structure of a scene and its corresponding texture, however, the conversion of 2D video inevitably leads to holes in the resulting 3D image where newly exposed areas appear. The conversion process can be altered such that no holes become visible in the resulting 3D view by superimposing a regular grid over the depth map and deforming it. In this paper, an adaptive image warping approach is proposed as an improvement over the regular-grid approach. The new algorithm exploits the smoothness of a typical depth map to reduce the complexity of the underlying optimization problem that must be solved to find the deformation required to prevent holes. This is achieved by splitting the depth map into blocks of homogeneous depth using quadtrees and running the optimization on the resulting adaptive grid. The results show that this approach leads to a considerable reduction of the computational complexity while maintaining the visual quality of the synthesized views. PMID:23782807
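
    The quadtree split of the depth map into blocks of roughly homogeneous depth could look like the Python sketch below, assuming a simple max-minus-min homogeneity test; the paper's actual splitting criterion and the warping optimization that runs on the resulting grid are not shown.

      import numpy as np

      def quadtree_blocks(depth, x0=0, y0=0, w=None, h=None, tol=2.0, min_size=8):
          # return a list of (x, y, w, h) blocks whose depth spread is below tol
          if w is None:
              h, w = depth.shape
          block = depth[y0:y0 + h, x0:x0 + w]
          if block.max() - block.min() <= tol or w <= min_size or h <= min_size:
              return [(x0, y0, w, h)]
          hw, hh = w // 2, h // 2
          blocks = []
          for dx, dy, bw, bh in [(0, 0, hw, hh), (hw, 0, w - hw, hh),
                                 (0, hh, hw, h - hh), (hw, hh, w - hw, h - hh)]:
              blocks += quadtree_blocks(depth, x0 + dx, y0 + dy, bw, bh, tol, min_size)
          return blocks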

  1. Medical image classification using spatial adjacent histogram based on adaptive local binary patterns.

    PubMed

    Liu, Dong; Wang, Shengsheng; Huang, Dezhi; Deng, Gang; Zeng, Fantao; Chen, Huiling

    2016-05-01

    Medical image recognition is an important task in both computer vision and computational biology. In the field of medical image classification, representing an image based on the local binary patterns (LBP) descriptor has become popular. However, most existing LBP-based methods encode the binary patterns in a fixed neighborhood radius and ignore the spatial relationships among local patterns. Ignoring the spatial relationships in the LBP causes poor performance in capturing discriminative features for complex samples, such as medical images obtained by microscope. To address this problem, in this paper we propose a novel method to improve local binary patterns by assigning an adaptive neighborhood radius to each pixel. Based on these adaptive local binary patterns, we further propose a spatial adjacent histogram strategy to encode the micro-structures for image representation. An extensive set of evaluations is performed on four medical datasets, which shows that the proposed method significantly improves standard LBP and compares favorably with several other prevailing approaches. PMID:27058283
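
    The idea of an LBP code whose sampling radius adapts to each pixel can be sketched as below in Python; the rule used here (choosing among three radii by thresholding the local variance) and all threshold values are assumptions, and the paper's spatial adjacent histogram is not reproduced.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def adaptive_lbp(gray, radii=(1, 2, 3), var_thresholds=(50.0, 200.0)):
          gray = gray.astype(float)
          h, w = gray.shape
          pad = max(radii)
          padded = np.pad(gray, pad, mode="edge")
          # local variance in a 3x3 window decides which radius each pixel uses
          mean = uniform_filter(gray, 3)
          var = uniform_filter(gray ** 2, 3) - mean ** 2
          radius_index = np.digitize(var, var_thresholds)      # 0, 1 or 2
          angles = 2 * np.pi * np.arange(8) / 8
          codes = np.zeros((h, w), dtype=np.uint8)
          for idx, r in enumerate(radii):
              code = np.zeros((h, w), dtype=np.uint8)
              for bit, a in enumerate(angles):
                  dy, dx = int(round(r * np.sin(a))), int(round(r * np.cos(a)))
                  neigh = padded[pad + dy:pad + dy + h, pad + dx:pad + dx + w]
                  code |= (neigh >= gray).astype(np.uint8) << bit
              codes[radius_index == idx] = code[radius_index == idx]
          return codes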

  2. Image processing for cameras with fiber bundle image relay.

    PubMed

    Olivas, Stephen J; Arianpour, Ashkan; Stamenov, Igor; Morrison, Rick; Stack, Ron A; Johnson, Adam R; Agurok, Ilya P; Ford, Joseph E

    2015-02-10

    Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems suffer from artifacts due to image sampling and incoherent light transfer by the fiber bundle as well as resampling by the focal plane, resulting in a fixed obscuration pattern. Here, we describe digital image processing techniques to improve image quality in a compact 126° field-of-view, 30 megapixel panoramic imager, where a 12 mm focal length F/1.35 lens made of concentric glass surfaces forms a spherical image surface, which is fiber-coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image formation onto the 2.5 μm pitch fiber bundle, image transfer by the fiber bundle, and sensing by a 1.75 μm pitch backside illuminated color focal plane. We demonstrate methods to mitigate moiré artifacts and local obscuration, correct for sphere to plane mapping distortion and vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with a 10× larger commercial camera with comparable field-of-view and light collection. PMID:25968031

  3. Adaptive Sampling for Learning Gaussian Processes Using Mobile Sensor Networks

    PubMed Central

    Xu, Yunfei; Choi, Jongeun

    2011-01-01

    This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. This approach is based on a class of anisotropic covariance functions of Gaussian processes introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori. Hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize the information-theoretic cost function of the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme. PMID:22163785
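
    As a generic sketch only: covariance hyperparameters of a Gaussian process can be estimated from noisy samples by minimizing the negative log marginal likelihood, illustrated below in Python with an isotropic squared-exponential kernel. The paper uses an anisotropic spatio-temporal covariance family, a MAP estimator with priors, and a Fisher-information-based sampling criterion, none of which are reproduced here.

      import numpy as np
      from scipy.optimize import minimize

      def neg_log_marginal_likelihood(log_theta, X, y, noise_var=1e-2):
          # log_theta = [log signal variance, log length scale]
          sf2, ell = np.exp(log_theta)
          d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
          K = sf2 * np.exp(-0.5 * d2 / ell ** 2) + noise_var * np.eye(len(X))
          L = np.linalg.cholesky(K)
          alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
          return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(y) * np.log(2 * np.pi)

      def fit_hyperparameters(X, y):
          # maximum-likelihood point estimate of the kernel hyperparameters
          res = minimize(neg_log_marginal_likelihood, x0=np.log([1.0, 1.0]), args=(X, y))
          return np.exp(res.x)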

  4. Adoption: biological and social processes linked to adaptation.

    PubMed

    Grotevant, Harold D; McDermott, Jennifer M

    2014-01-01

    Children join adoptive families through domestic adoption from the public child welfare system, infant adoption through private agencies, and international adoption. Each pathway presents distinctive developmental opportunities and challenges. Adopted children are at higher risk than the general population for problems with adaptation, especially externalizing, internalizing, and attention problems. This review moves beyond the field's emphasis on adoptee-nonadoptee differences to highlight biological and social processes that affect adaptation of adoptees across time. The experience of stress, whether prenatal, postnatal/preadoption, or during the adoption transition, can have significant impacts on the developing neuroendocrine system. These effects can contribute to problems with physical growth, brain development, and sleep, activating cascading effects on social, emotional, and cognitive development. Family processes involving contact between adoptive and birth family members, co-parenting in gay and lesbian adoptive families, and racial socialization in transracially adoptive families affect social development of adopted children into adulthood. PMID:24016275

  5. CT Image Processing Using Public Digital Networks

    PubMed Central

    Rhodes, Michael L.; Azzawi, Yu-Ming; Quinn, John F.; Glenn, William V.; Rothman, Stephen L.G.

    1984-01-01

    Nationwide commercial computer communication is now commonplace for those applications where digital dialogues are generally short and widely distributed, and where bandwidth does not exceed that of dial-up telephone lines. Image processing using such networks is prohibitive because of the large volume of data inherent to digital pictures. With a blend of increasing bandwidth and distributed processing, network image processing becomes possible. This paper examines characteristics of a digital image processing service for a nationwide network of CT scanner installations. Issues of image transmission, data compression, distributed processing, software maintenance, and interfacility communication are also discussed. Included are results that show the volume and type of processing experienced by a network of over 50 CT scanners for the last 32 months.

  6. Image processing for drawing recognition

    NASA Astrophysics Data System (ADS)

    Feyzkhanov, Rustem; Zhelavskaya, Irina

    2014-03-01

    The task of recognizing the edges of rectangular structures is well known. Still, almost all existing approaches work with static images and have no limit on processing time. We propose applying homography estimation to the video stream obtained from a webcam and present an algorithm that can be successfully used for this kind of application. One of the main use cases of such an application is the recognition of drawings made by a person on a piece of paper in front of the webcam.

  7. Parallel digital signal processing architectures for image processing

    NASA Astrophysics Data System (ADS)

    Kshirsagar, Shirish P.; Hartley, David A.; Harvey, David M.; Hobson, Clifford A.

    1994-10-01

    This paper describes research into a high speed image processing system using parallel digital signal processors for the processing of electro-optic images. The objective of the system is to reduce the processing time of non-contact inspection problems, including industrial and medical applications. A single processor cannot deliver the processing power required by such applications; hence, a MIMD system was designed and constructed to enable fast processing of electro-optic images. The Texas Instruments TMS320C40 digital signal processor is used due to its high speed floating point CPU and its support for the parallel processing environment. A custom designed VISION bus is provided to transfer images between processors. The system is being applied to solder joint inspection of high technology printed circuit boards.

  8. Stable image acquisition for mobile image processing applications

    NASA Astrophysics Data System (ADS)

    Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker

    2015-02-01

    Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance for their users. Their performance as well as their versatility increases over time. This leads to the opportunity to use such devices for more specific tasks like image processing in an industrial context. For the analysis of images, requirements like image quality (blur, illumination, etc.) as well as a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach to overcome these obstacles and stabilize the image capturing process such that image analysis becomes significantly improved on mobile devices. To this end, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide a user moving the device to a defined position. Second, the sensor data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated. It is triggered depending on the alignment of the device and the object as well as the image quality that can be achieved under consideration of motion and environmental effects.

  9. Addressing the need for adaptable decision processes within healthcare software.

    PubMed

    Miseldine, P; Taleb-Bendiab, A; England, D; Randles, M

    2007-03-01

    In the healthcare sector, where the decisions made by software aid in the direct treatment of patients, software requires high levels of assurance to ensure the correct interpretation of the tasks it is automating. This paper argues that introducing adaptable decision processes within eHealthcare initiatives can reduce software-maintenance complexity and, due to the instantaneous, distributed deployment of decision models, allow for quicker updates of current best practice, thereby improving patient care. The paper provides a description of a collection of technologies and tools that can be used to provide the required adaptation in a decision process. These tools are evaluated against two case studies that individually highlight different requirements in eHealthcare: a breast-cancer decision-support system, in partnership with several of the UK's leading cancer hospitals, and a dental triage in partnership with the Royal Liverpool Hospital which both show how the complete process flow of software can be abstracted and adapted, and the benefits that arise as a result. PMID:17365643

  10. Applications of Digital Image Processing 11

    NASA Technical Reports Server (NTRS)

    Cho, Y. -C.

    1988-01-01

    A new technique, digital image velocimetry, is proposed for the measurement of instantaneous velocity fields of time dependent flows. A time sequence of single-exposure images of seed particles is captured with a high-speed camera, and a finite number of the single-exposure images are sampled within a prescribed period in time. The sampled images are then digitized on an image processor, enhanced, and superimposed to construct an image which is equivalent to a multiple exposure image used in both laser speckle velocimetry and particle image velocimetry. The superimposed image and a single-exposure image are digitally Fourier transformed for extraction of information on the velocity field. A great enhancement of the dynamic range of the velocity measurement is accomplished through the new technique by manipulating the Fourier transforms of both the single-exposure image and the superimposed image. Also, the direction of the velocity vector is unequivocally determined. With the use of a high-speed video camera, the whole process from image acquisition to velocity determination can be carried out electronically; thus this technique can be developed into a real-time capability.
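
    A schematic Python sketch of the superimposition and Fourier-transform steps is given below; the subsequent fringe analysis that actually yields velocity magnitude and direction is not shown, and the frame-sampling rule is an assumption.

      import numpy as np

      def superimpose(frames, k):
          # average k single-exposure frames sampled evenly from the sequence,
          # emulating a multiple-exposure image
          idx = np.linspace(0, len(frames) - 1, k).astype(int)
          return np.mean([frames[i].astype(float) for i in idx], axis=0)

      def displacement_spectrum(single_exposure, superimposed):
          # ratio of Fourier magnitudes; the fringes in this spectrum encode
          # the particle displacement between exposures
          F1 = np.abs(np.fft.fft2(single_exposure.astype(float)))
          Fk = np.abs(np.fft.fft2(superimposed))
          return Fk / np.maximum(F1, 1e-6)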

  11. Process perspective on image quality evaluation

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. By using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations of the test image content, but not in the other contexts. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  12. Edge preserved enhancement of medical images using adaptive fusion-based denoising by shearlet transform and total variation algorithm

    NASA Astrophysics Data System (ADS)

    Gupta, Deep; Anand, Radhey Shyam; Tyagi, Barjeev

    2013-10-01

    Edge preserved enhancement is of great interest in medical images. Noise present in medical images affects the quality, contrast resolution and, most importantly, texture information, and can also make post-processing difficult. An enhancement approach using an adaptive fusion algorithm is proposed which utilizes the features of the shearlet transform (ST) and the total variation (TV) approach. In the proposed method, three different denoised images, processed with the TV method, with shearlet denoising, and with edge information recovered from the remnant of the TV method and processed with the ST, are fused adaptively. The enhanced images produced by the proposed method help to improve the visibility and detectability of medical images. For the proposed method, different weights are evaluated from the variance maps of each individual denoised image and from the edge information extracted from the remnant of the TV approach. The performance of the proposed method is evaluated by conducting various experiments on both standard images and different medical images such as computed tomography, magnetic resonance, and ultrasound. Experiments show that the proposed method provides an improvement not only in noise reduction but also in the preservation of more edges and image details as compared to the others.
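
    The adaptive fusion step can be illustrated with the Python sketch below, which weights each denoised image per pixel by the inverse of its local variance map; the inverse-variance rule, window size and regularizer are assumptions and do not reproduce the paper's exact weight derivation.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_variance(img, size=7):
          m = uniform_filter(img, size)
          return np.maximum(uniform_filter(img ** 2, size) - m ** 2, 0.0)

      def adaptive_fusion(denoised_images, size=7, eps=1e-6):
          # per-pixel weights inversely proportional to each image's local variance
          imgs = [d.astype(float) for d in denoised_images]
          weights = [1.0 / (local_variance(d, size) + eps) for d in imgs]
          total = np.sum(weights, axis=0)
          return sum(w * d for w, d in zip(weights, imgs)) / total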

  13. Adaptive-weighted bilateral filtering and other pre-processing techniques for optical coherence tomography.

    PubMed

    Anantrasirichai, N; Nicholson, Lindsay; Morgan, James E; Erchova, Irina; Mortlock, Katie; North, Rachel V; Albon, Julie; Achim, Alin

    2014-09-01

    This paper presents novel pre-processing image enhancement algorithms for retinal optical coherence tomography (OCT). These images contain a large amount of speckle causing them to be grainy and of very low contrast. To make these images valuable for clinical interpretation, we propose a novel method to remove speckle, while preserving useful information contained in each retinal layer. The process starts with multi-scale despeckling based on a dual-tree complex wavelet transform (DT-CWT). We further enhance the OCT image through a smoothing process that uses a novel adaptive-weighted bilateral filter (AWBF). This offers the desirable property of preserving texture within the OCT image layers. The enhanced OCT image is then segmented to extract inner retinal layers that contain useful information for eye research. Our layer segmentation technique is also performed in the DT-CWT domain. Finally we describe an OCT/fundus image registration algorithm which is helpful when two modalities are used together for diagnosis and for information fusion. PMID:25034317
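
    A minimal Python sketch of a bilateral filter whose range (intensity) sigma adapts to a local noise estimate is shown below; the adaptation rule (local standard deviation scaled by k), the spatial sigma and the window radius are assumptions and are not the paper's AWBF weights, nor does the sketch include the DT-CWT despeckling or layer segmentation steps.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def adaptive_bilateral(img, radius=3, sigma_s=2.0, k=1.5):
          img = img.astype(float)
          h, w = img.shape
          padded = np.pad(img, radius, mode="reflect")
          # crude per-pixel noise estimate from the local standard deviation
          size = 2 * radius + 1
          m = uniform_filter(img, size)
          sigma_r = k * np.sqrt(np.maximum(uniform_filter(img ** 2, size) - m ** 2, 1e-6))
          out = np.zeros_like(img)
          norm = np.zeros_like(img)
          for dy in range(-radius, radius + 1):
              for dx in range(-radius, radius + 1):
                  shifted = padded[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
                  wgt = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                               - (shifted - img) ** 2 / (2 * sigma_r ** 2))
                  out += wgt * shifted
                  norm += wgt
          return out / norm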

  14. Coherence gated wavefront sensorless adaptive optics for two photon excited fluorescence retinal imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Jian, Yifan; Cua, Michelle; Bonora, Stefano; Pugh, Edward N.; Zawadzki, Robert J.; Sarunic, Marinko V.

    2016-03-01

    We present a novel system for adaptive optics two photon imaging. We utilize the bandwidth of the femtosecond excitation beam to perform coherence gated imaging (OCT) of the sample. The location of the focus is directly observable in the cross sectional OCT images, and adjusted to the desired depth plane. Next, using real time volumetric OCT, we perform Wavefront Sensorless Adaptive Optics (WSAO) aberration correction using a multi-element adaptive lens capable of correcting up to 4th order Zernike polynomials. The aberration correction is performed based on an image quality metric, for example intensity. The optimization time is limited only by the OCT acquisition rate, and takes ~30s. Following aberration correction, two photon fluorescence images are acquired, and compared to results without adaptive optics correction. This technique is promising for multiphoton imaging in multi-layered, scattering samples such as eye and brain, in which traditional wavefront sensing and guide-star sensorless adaptive optics approaches may not be suitable.

  15. Three-dimensional color image processing procedures using DSP

    NASA Astrophysics Data System (ADS)

    Rosales, Alberto J.; Ponomaryov, Volodymyr I.; Gallegos-Funes, Francisco

    2007-02-01

    Processing of vector image information is very important because multichannel sensors are used in many different applications. We introduce novel algorithms for processing color images that are based on order statistics and vectorial processing techniques: the Video Adaptive Vector Directional (VAVDF) and the Vector Median M-type K-Nearest Neighbour (VMMKNN) filters presented in this paper. It is demonstrated that the novel algorithms suppress impulsive noise in 3D color video sequences more effectively than several other methods. Simulation results have been obtained using the video sequences "Miss America" and "Flowers", which were corrupted by noise. The KNNF, VGVDF, VMMKNN and, finally, the proposed VAVDATM filters have been investigated. The PSNR, MAE and NCD criteria demonstrate that the VAVDATM filter shows the best performance on each criterion when the noise intensity is more than 7-10%. An attempt to realize real-time processing on a DSP is presented for median-type algorithms.

  16. Interactive image processing in swallowing research

    NASA Astrophysics Data System (ADS)

    Dengel, Gail A.; Robbins, JoAnne; Rosenbek, John C.

    1991-06-01

    Dynamic radiographic imaging of the mouth, larynx, pharynx, and esophagus during swallowing is used commonly in clinical diagnosis, treatment and research. Images are recorded on videotape and interpreted conventionally by visual perceptual methods, limited to specific measures in the time domain and binary decisions about the presence or absence of events. An image processing system using personal computer hardware and original software has been developed to facilitate measurement of temporal, spatial and temporospatial parameters. Digitized image sequences derived from videotape are manipulated and analyzed interactively. Animation is used to preserve context and increase efficiency of measurement. Filtering and enhancement functions heighten image clarity and contrast, improving visibility of details which are not apparent on videotape. Distortion effects and extraneous head and body motions are removed prior to analysis, and spatial scales are controlled to permit comparison among subjects. Effects of image processing on intra- and interjudge reliability and research applications are discussed.

  17. Personal Computer (PC) based image processing applied to fluid mechanics

    NASA Technical Reports Server (NTRS)

    Cho, Y.-C.; Mclachlan, B. G.

    1987-01-01

    A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
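
    The interpolation of the randomly located streak velocities onto a uniform grid with an adaptive Gaussian window can be sketched in Python as below; setting the window width from the distance to the k-th nearest sample is an assumption standing in for whatever adaptation rule the original work used.

      import numpy as np

      def gaussian_window_interp(points, values, grid_x, grid_y, k=8):
          # points: (N, 2) sample locations, values: (N,) velocity component
          gx, gy = np.meshgrid(grid_x, grid_y)
          out = np.zeros(gx.shape)
          for i in range(gx.shape[0]):
              for j in range(gx.shape[1]):
                  d2 = (points[:, 0] - gx[i, j]) ** 2 + (points[:, 1] - gy[i, j]) ** 2
                  sigma2 = np.partition(d2, k)[k]        # adaptive window width
                  w = np.exp(-d2 / (2.0 * sigma2 + 1e-12))
                  out[i, j] = np.sum(w * values) / np.sum(w)
          return out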

  18. Adaptive Optics for Satellite Imaging and Space Debris Ranging

    NASA Astrophysics Data System (ADS)

    Bennet, F.; D'Orgeville, C.; Price, I.; Rigaut, F.; Ritchie, I.; Smith, C.

    Earth's space environment is becoming crowded and at risk of a Kessler syndrome, and will require careful management for the future. Modern low noise high speed detectors allow for wavefront sensing and adaptive optics (AO) in extreme circumstances such as imaging small orbiting bodies in Low Earth Orbit (LEO). The Research School of Astronomy and Astrophysics (RSAA) at the Australian National University has been developing AO systems for telescopes between 1 and 2.5 m diameter to image and range orbiting satellites and space debris. Strehl ratios in excess of 30% can be achieved for targets in LEO with an AO loop running at 2 kHz, allowing the resolution of small features (<30 cm) and the capability to determine object shape and spin characteristics. The AO system developed at RSAA consists of a high speed EMCCD Shack-Hartmann wavefront sensor, a deformable mirror (DM), a real-time computer (RTC), and an imaging camera. The system works best as a laser guide star system but will also function as a natural guide star AO system, with the target itself being the guide star. In both circumstances tip-tilt is provided by the target on the imaging camera. The fast tip-tilt modes are not corrected optically and are instead removed by taking images at a moderate rate (>30 Hz) and using a shift-and-add algorithm. This algorithm can also incorporate lucky imaging to further improve the final image quality. A similar AO system for space debris ranging is also in development in collaboration with Electro Optic Systems (EOS) and the Space Environment Management Cooperative Research Centre (SERC), at the Mount Stromlo Observatory in Canberra, Australia. The system is designed for an AO-corrected, upward-propagated 1064 nm pulsed laser beam, from which time of flight information is used to precisely range the target. A 1.8 m telescope is used for both propagation and collection of laser light. A laser guide star, Shack-Hartmann wavefront sensor, and DM are used for high order

  19. Detection of respiratory motion in fluoroscopic images for adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Moser, T.; Biederer, J.; Nill, S.; Remmert, G.; Bendl, R.

    2008-06-01

    Respiratory motion limits the potential of modern high-precision radiotherapy techniques such as IMRT and particle therapy. Due to the uncertainty of tumour localization, the achievable dose conformation often cannot be exploited sufficiently, especially in the case of lung tumours. Various methods have been proposed to track the position of tumours using external signals, e.g. with the help of a respiratory belt or by observing external markers. Retrospectively gated time-resolved x-ray computed tomography (4D CT) studies prior to therapy can be used to register the external signals with the tumour motion. However, during treatment the actual motion of internal structures may be different. Direct monitoring of tissue motion by online imaging during treatment promises more precise information. On the other hand, it is more complex, since a larger amount of data must be processed in order to determine the motion. Three major questions arise from this issue. Firstly, can the motion that has occurred be determined precisely in the images? Secondly, how large (or how small) must the observed region be chosen to obtain a reliable signal? Finally, is it possible to predict the proximate tumour location within sufficiently short acquisition times to make this information available for gating the irradiation? Based on multiple studies on a porcine lung phantom, we have examined these questions carefully. We found a basic characteristic of the breathing cycle in the images using the image similarity measure normalized mutual information. Moreover, we examined the performance of the calculations and propose an image-based gating technique. In this paper, we present the results and a validation performed with a real patient data set. This allows the conclusion that it is possible to build a gating system based on image data alone, or (at least while avoiding excessive exposure dose) to verify gates proposed by the various external systems.
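
    The normalized mutual information used as the image similarity signal can be computed per frame pair as in the Python sketch below (one common definition, (H(A)+H(B))/H(A,B), with an assumed bin count); comparing each fluoroscopic frame against a fixed reference frame then yields a signal that varies with the breathing cycle.

      import numpy as np

      def normalized_mutual_information(frame_a, frame_b, bins=64):
          joint, _, _ = np.histogram2d(frame_a.ravel(), frame_b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          h_xy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
          h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
          h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
          return (h_x + h_y) / h_xy

      def breathing_signal(frames, reference):
          # similarity of every frame to a reference frame gives a breathing trace
          return np.array([normalized_mutual_information(f, reference) for f in frames])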

  20. Adaptive optics scanning laser ophthalmoscope with integrated wide-field retinal imaging and tracking

    PubMed Central

    Ferguson, R. Daniel; Zhong, Zhangyi; Hammer, Daniel X.; Mujat, Mircea; Patel, Ankit H.; Deng, Cong; Zou, Weiyao; Burns, Stephen A.

    2010-01-01

    We have developed a new, unified implementation of the adaptive optics scanning laser ophthalmoscope (AOSLO) incorporating a wide-field line-scanning ophthalmoscope (LSO) and a closed-loop optical retinal tracker. AOSLO raster scans are deflected by the integrated tracking mirrors so that direct AOSLO stabilization is automatic during tracking. The wide-field imager and large-spherical-mirror optical interface design, as well as a large-stroke deformable mirror (DM), enable the AOSLO image field to be corrected at any retinal coordinates of interest in a field of >25 deg. AO performance was assessed by imaging individuals with a range of refractive errors. In most subjects, image contrast was measurable at spatial frequencies close to the diffraction limit. Closed-loop optical (hardware) tracking performance was assessed by comparing sequential image series with and without stabilization. Though usually better than 10 μm rms, or 0.03 deg, tracking does not yet stabilize to single cone precision but significantly improves average image quality and increases the number of frames that can be successfully aligned by software-based post-processing methods. The new optical interface allows the high-resolution imaging field to be placed anywhere within the wide field without requiring the subject to re-fixate, enabling easier retinal navigation and faster, more efficient AOSLO montage capture and stitching. PMID:21045887

  1. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.

  2. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  3. Nonlinear Optical Image Processing with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Deiss, Ron (Technical Monitor)

    1994-01-01

    The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
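
    A digital analogue of the optical scheme can be sketched in Python: a logarithm converts multiplicative noise to additive noise, which is then removed by a linear filter in the Fourier plane before exponentiating back. The Gaussian low-pass filter and its cutoff are assumptions; the work described above realizes the logarithmic conversion and the filtering optically with the bacteriorhodopsin film.

      import numpy as np

      def log_domain_filter(img, cutoff=0.15):
          # log transform: multiplicative noise becomes additive
          log_img = np.log(np.maximum(img.astype(float), 1e-6))
          F = np.fft.fftshift(np.fft.fft2(log_img))
          h, w = img.shape
          fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(h)),
                               np.fft.fftshift(np.fft.fftfreq(w)), indexing="ij")
          # assumed Gaussian low-pass filter in the Fourier plane
          lowpass = np.exp(-(fx ** 2 + fy ** 2) / (2 * cutoff ** 2))
          filtered = np.fft.ifft2(np.fft.ifftshift(F * lowpass)).real
          return np.exp(filtered)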

  4. Accelerated image processing on FPGAs.

    PubMed

    Draper, Bruce A; Beveridge, J Ross; Böhm, A P Willem; Ross, Charles; Chawathe, Monica

    2003-01-01

    The Cameron project has developed a language called single assignment C (SA-C), and a compiler for mapping image-based applications written in SA-C to field programmable gate arrays (FPGAs). The paper tests this technology by implementing several applications in SA-C and compiling them to an Annapolis Microsystems (AMS) WildStar board with a Xilinx XV2000E FPGA. The performance of these applications on the FPGA is compared to the performance of the same applications written in assembly code or C for an 800 MHz Pentium III. (Although no comparison across processors is perfect, these chips were the first of their respective classes fabricated at 0.18 microns, and are therefore of comparable ages.) We find that applications written in SA-C and compiled to FPGAs are between 8 and 800 times faster than the equivalent program run on the Pentium III. PMID:18244709

  5. Digital Image Processing in Private Industry.

    ERIC Educational Resources Information Center

    Moore, Connie

    1986-01-01

    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  6. Parallel Processing of Adaptive Meshes with Load Balancing

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than PLUM by overlapping processing and data migration.

  7. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed; it estimates the background using median filtering or the method of bilateral spatial contrast.
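
    For readers unfamiliar with the classical adaptive algorithms named above, the sketch below computes a textbook Capon (MVDR) spatial spectrum for a linear equidistant array. The array size, source bearings, noise level, and diagonal loading are illustrative assumptions and are not taken from the review.

    ```python
    import numpy as np

    def capon_spectrum(snapshots, d_over_lambda, angles_deg, loading=1e-3):
        """Textbook Capon (MVDR) spatial spectrum P(theta) = 1 / (a^H R^-1 a)."""
        n_sensors, n_snapshots = snapshots.shape
        R = snapshots @ snapshots.conj().T / n_snapshots
        R += loading * np.trace(R).real / n_sensors * np.eye(n_sensors)   # diagonal loading
        R_inv = np.linalg.inv(R)

        power = []
        for theta in np.deg2rad(angles_deg):
            a = np.exp(-2j * np.pi * d_over_lambda * np.arange(n_sensors) * np.sin(theta))
            power.append(1.0 / np.real(a.conj() @ R_inv @ a))
        return np.array(power)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        n_sensors, n_snap = 16, 200
        t = np.arange(n_snap)
        data = 0.1 * (rng.standard_normal((n_sensors, n_snap))
                      + 1j * rng.standard_normal((n_sensors, n_snap)))
        for bearing, freq in zip([-20.0, 15.0], [0.05, 0.12]):      # two hypothetical sources
            a = np.exp(-2j * np.pi * 0.5 * np.arange(n_sensors) * np.sin(np.deg2rad(bearing)))
            data += np.outer(a, np.exp(2j * np.pi * freq * t))
        angles = np.linspace(-90.0, 90.0, 361)
        spectrum = capon_spectrum(data, 0.5, angles)
        print("strongest bearing estimate (deg):", angles[np.argmax(spectrum)])
    ```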

  8. Adaptive neural information processing with dynamical electrical synapses

    PubMed Central

    Xiao, Lei; Zhang, Dan-ke; Li, Yuan-qing; Liang, Pei-ji; Wu, Si

    2013-01-01

    The present study investigates a potential computational role of dynamical electrical synapses in neural information processing. Compared with chemical synapses, electrical synapses are more efficient in modulating the concerted activity of neurons. Based on the experimental data, we propose a phenomenological model for short-term facilitation of electrical synapses. The model satisfactorily reproduces the phenomenon that the neuronal correlation increases although the neuronal firing rates attenuate during luminance adaptation. We explore how the stimulus information is encoded in parallel by firing rates and correlated activity of neurons, and find that dynamical electrical synapses mediate a transition from the firing-rate code to the correlation code during luminance adaptation. The latter encodes the stimulus information using concerted activity at lower neuronal firing rates, and hence is economically more efficient. PMID:23596413

  9. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSP) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a rate of 10 MVoxels/sec with a 3D ultrasound probe. Relative performance and power are compared between a reference PC (quad-core CPU) and a TMS320C6678 DSP from Texas Instruments.
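
    The paper benchmarks a specific 3D adaptive filter on DSP hardware; as a generic stand-in for the algorithmic idea (smoothing strength adapted to local statistics), the sketch below implements a Lee-type 3D adaptive smoother in Python. The window size and noise-variance estimate are assumptions, and this is not the filter evaluated in the study.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def adaptive_smooth_3d(volume, size=5, noise_var=None):
        """Lee-type adaptive filter: smooth strongly in flat regions, weakly near features."""
        local_mean = uniform_filter(volume, size=size)
        local_sq_mean = uniform_filter(volume ** 2, size=size)
        local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)

        if noise_var is None:
            noise_var = float(np.median(local_var))      # crude global noise estimate

        # Gain -> 0 in flat regions (output = local mean), -> 1 near strong features.
        gain = np.maximum(local_var - noise_var, 0.0) / np.maximum(local_var, 1e-12)
        return local_mean + gain * (volume - local_mean)

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        truth = np.zeros((64, 64, 32))
        truth[20:44, 20:44, 10:22] = 1.0                  # synthetic bright block ("feature")
        noisy = truth + 0.3 * rng.standard_normal(truth.shape)
        filtered = adaptive_smooth_3d(noisy)
        rmse = lambda a: np.sqrt(np.mean((a - truth) ** 2))
        print("RMSE noisy vs filtered:", rmse(noisy), rmse(filtered))
    ```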

  10. Construction and solution of an adaptive image-restoration model for removing blur and mixed noise

    NASA Astrophysics Data System (ADS)

    Wang, Youquan; Cui, Lihong; Cen, Yigang; Sun, Jianjun

    2016-03-01

    We establish a practical regularized least-squares model with adaptive regularization for dealing with blur and mixed noise in images. This model has some advantages, such as good adaptability for edge restoration and noise suppression due to the application of a priori spatial information obtained from a polluted image. We further focus on finding an important feature of image restoration using an adaptive restoration model with different regularization parameters in polluted images. A more important observation is that the gradient of an image varies regularly from one regularization parameter to another under certain conditions. Then, a modified graduated nonconvexity approach combined with a median filter version of a spatial information indicator is proposed to seek the solution of our adaptive image-restoration model by applying variable splitting and weighted penalty techniques. Numerical experiments show that the method is robust and effective for dealing with various blur and mixed noise levels in images.

  11. Checking Fits With Digital Image Processing

    NASA Technical Reports Server (NTRS)

    Davis, R. M.; Geaslen, W. D.

    1988-01-01

    Computer-aided video inspection of mechanical and electrical connectors feasible. Report discusses work done on digital image processing for computer-aided interface verification (CAIV). Two kinds of components examined: mechanical mating flange and electrical plug.

  12. Recent developments in digital image processing at the Image Processing Laboratory of JPL.

    NASA Technical Reports Server (NTRS)

    O'Handley, D. A.

    1973-01-01

    Review of some of the computer-aided digital image processing techniques recently developed. Special attention is given to mapping and mosaicking techniques and to preliminary developments in range determination from stereo image pairs. The discussed image processing utilization areas include space, biomedical, and robotic applications.

  13. Adaptive Optics Images of the Galactic Center: Using Empirical Noise-maps to Optimize Image Analysis

    NASA Astrophysics Data System (ADS)

    Albers, Saundra; Witzel, Gunther; Meyer, Leo; Sitarski, Breann; Boehle, Anna; Ghez, Andrea M.

    2015-01-01

    Adaptive Optics images are one of the most important tools in studying our Galactic Center. In-depth knowledge of the noise characteristics is crucial to optimally analyze this data. Empirical noise estimates - often represented by a constant value for the entire image - can be greatly improved by computing the local detector properties and photon noise contributions pixel by pixel. To comprehensively determine the noise, we create a noise model for each image using the three main contributors—photon noise of stellar sources, sky noise, and dark noise. We propagate the uncertainties through all reduction steps and analyze the resulting map using Starfinder. The estimation of local noise properties helps to eliminate fake detections while improving the detection limit of fainter sources. We predict that a rigorous understanding of noise allows a more robust investigation of the stellar dynamics in the center of our Galaxy.
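
    A per-pixel noise model of the kind described, with photon, sky, and dark/read contributions combined in quadrature rather than one constant value per frame, can be sketched as follows; the gain, dark-current, and read-noise numbers are placeholders rather than the instrument's calibration.

    ```python
    import numpy as np

    def noise_map(image_counts, sky_counts, gain=4.0, dark_e=10.0, read_e=15.0):
        """Per-pixel 1-sigma noise estimate (in detector counts).

        Photon noise of stellar sources, sky noise, and dark/read noise are propagated
        in quadrature, instead of assuming one constant noise value for the whole frame.
        """
        source_e = np.clip(image_counts - sky_counts, 0.0, None) * gain   # electrons from sources
        sky_e = np.clip(sky_counts, 0.0, None) * gain                     # electrons from the sky
        var_e = source_e + sky_e + dark_e + read_e ** 2                   # Poisson + Gaussian terms
        return np.sqrt(var_e) / gain                                      # back to detector counts

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        sky = np.full((256, 256), 50.0)
        stars = np.zeros_like(sky)
        stars[128, 128] = 5000.0                       # a bright star dominates its local noise
        frame = sky + stars + rng.normal(0.0, 4.0, sky.shape)
        sigma = noise_map(frame, sky)
        print("noise at the star / in the background:", sigma[128, 128], sigma[0, 0])
    ```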

  14. A self-adaptive mean-shift segmentation approach based on graph theory for high-resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Luwan; Han, Ling; Ning, Xiaohong

    2015-12-01

    A new automatic segmentation approach for high-resolution remote sensing images, based on graph theory and named self-adaptive mean-shift, is proposed in this paper. The approach overcomes the drawback of classic mean-shift, which requires a fixed bandwidth to be determined by repeated trials, and can effectively distinguish different features in texture-rich regions. Segmentation experiments were carried out on WorldView satellite imagery. The results show that the presented method is adaptive and that its speed and precision are sufficient for practical applications, making it a robust automatic segmentation algorithm.

  15. Image processing technique based on image understanding architecture

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2000-12-01

    The effectiveness of image applications depends directly on their ability to resolve ambiguity and uncertainty in real images. That requires tight integration of low-level image processing with high-level knowledge-based reasoning, which is the essence of the image understanding problem. This article presents a generic computational framework for the solution of the image understanding problem -- the Spatial Turing Machine. Instead of a tape of symbols, it works with hierarchical networks dually represented as discrete and continuous structures. The dual representation provides a natural transformation of continuous image information into discrete structures, making it available for analysis. Such structures are data and algorithms at the same time and are able to perform graph and diagrammatic operations, which form the basis of intelligence. They can create derivative structures that play the role of context, or 'measurement device,' giving the ability to analyze and to run top-down algorithms. Symbols naturally emerge there, and symbolic operations work in combination with new simplified methods of computational intelligence. That makes images and scenes self-describing and provides flexible ways of resolving uncertainty. Classification of images truly invariant to any transformation could be done by matching their derivative structures. The proposed architecture does not require supercomputers, opening the way to new image technologies.

  16. Adaptive lifting scheme with sparse criteria for image coding

    NASA Astrophysics Data System (ADS)

    Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe

    2012-12-01

    Lifting schemes (LS) were found to be efficient tools for image coding purposes. Since LS-based decompositions depend on the choice of the prediction/update operators, many research efforts have been devoted to the design of adaptive structures. The most commonly used approaches optimize the prediction filters by minimizing the variance of the detail coefficients. In this article, we investigate techniques for optimizing sparsity criteria by focusing on the use of an ℓ 1 criterion instead of an ℓ 2 one. Since the output of a prediction filter may be used as an input for the other prediction filters, we then propose to optimize such a filter by minimizing a weighted ℓ 1 criterion related to the global rate-distortion performance. More specifically, it will be shown that the optimization of the diagonal prediction filter depends on the optimization of the other prediction filters and vice-versa. Related to this fact, we propose to jointly optimize the prediction filters by using an algorithm that alternates between the optimization of the filters and the computation of the weights. Experimental results show the benefits which can be drawn from the proposed optimization of the lifting operators.
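
    To make the ℓ1-versus-ℓ2 design choice concrete, the sketch below optimizes a single 1-D two-tap lifting prediction filter by minimizing either the ℓ2 or the ℓ1 norm of the detail coefficients. The test signal and filter length are illustrative assumptions; the article itself jointly optimizes several coupled 2-D prediction filters with a weighted ℓ1 criterion.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def detail_coeffs(signal, taps):
        """One lifting prediction step: predict odd samples from the neighbouring even samples."""
        even, odd = signal[0::2], signal[1::2]
        n = min(len(odd), len(even) - 1)
        prediction = taps[0] * even[:n] + taps[1] * even[1:n + 1]
        return odd[:n] - prediction

    def optimize_prediction(signal, p=1):
        """Find the 2-tap prediction filter minimizing the l_p cost of the detail coefficients."""
        cost = lambda taps: np.sum(np.abs(detail_coeffs(signal, taps)) ** p)
        return minimize(cost, x0=np.array([0.5, 0.5]), method="Nelder-Mead").x

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        x = np.cumsum(rng.standard_normal(1024))          # smooth-ish random-walk test signal
        x[512:] += 25.0                                   # an edge, where a sparse (l1) fit helps
        for p in (2, 1):
            taps = optimize_prediction(x, p=p)
            details = detail_coeffs(x, taps)
            print(f"l{p}-optimized taps {np.round(taps, 3)}, "
                  f"details above 0.5: {int(np.sum(np.abs(details) > 0.5))}")
    ```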

  17. Adaptive optics images. III. 87 Kepler objects of interest

    SciTech Connect

    Dressing, Courtney D.; Dupree, Andrea K.; Adams, Elisabeth R.; Kulesa, Craig; McCarthy, Don

    2014-11-01

    The Kepler mission has revolutionized our understanding of exoplanets, but some of the planet candidates identified by Kepler may actually be astrophysical false positives or planets whose transit depths are diluted by the presence of another star. Adaptive optics images made with ARIES at the MMT of 87 Kepler Objects of Interest place limits on the presence of fainter stars in or near the Kepler aperture. We detected visual companions within 1'' for 5 stars, between 1'' and 2'' for 7 stars, and between 2'' and 4'' for 15 stars. For those systems, we estimate the brightness of companion stars in the Kepler bandpass and provide approximate corrections to the radii of associated planet candidates due to the extra light in the aperture. For all stars observed, we report detection limits on the presence of nearby stars. ARIES is typically sensitive to stars approximately 5.3 Ks magnitudes fainter than the target star within 1'' and approximately 5.7 Ks magnitudes fainter within 2'', but can detect stars as faint as ΔKs = 7.5 under ideal conditions.

  18. Adaptive Optics Images. III. 87 Kepler Objects of Interest

    NASA Astrophysics Data System (ADS)

    Dressing, Courtney D.; Adams, Elisabeth R.; Dupree, Andrea K.; Kulesa, Craig; McCarthy, Don

    2014-11-01

    The Kepler mission has revolutionized our understanding of exoplanets, but some of the planet candidates identified by Kepler may actually be astrophysical false positives or planets whose transit depths are diluted by the presence of another star. Adaptive optics images made with ARIES at the MMT of 87 Kepler Objects of Interest place limits on the presence of fainter stars in or near the Kepler aperture. We detected visual companions within 1'' for 5 stars, between 1'' and 2'' for 7 stars, and between 2'' and 4'' for 15 stars. For those systems, we estimate the brightness of companion stars in the Kepler bandpass and provide approximate corrections to the radii of associated planet candidates due to the extra light in the aperture. For all stars observed, we report detection limits on the presence of nearby stars. ARIES is typically sensitive to stars approximately 5.3 Ks magnitudes fainter than the target star within 1'' and approximately 5.7 Ks magnitudes fainter within 2'', but can detect stars as faint as ΔKs = 7.5 under ideal conditions. Observations reported here were obtained at the MMT Observatory, a joint facility of the Smithsonian Institution and the University of Arizona.

  19. Patient-adaptive lesion metabolism analysis by dynamic PET images.

    PubMed

    Gao, Fei; Liu, Huafeng; Shi, Pengcheng

    2012-01-01

    Dynamic PET imaging provides important spatial-temporal information for metabolism analysis of organs and tissues, and generates a valuable reference for clinical diagnosis and pharmacokinetic analysis. Due to poor statistical properties of the measurement data in low count dynamic PET acquisition and disturbances from surrounding tissues, identifying small lesions inside the human body is still a challenging issue. The uncertainties in estimating the arterial input function will also limit the accuracy and reliability of the metabolism analysis of lesions. Furthermore, the sizes of the patients and their motions during PET acquisition will yield mismatches against a general-purpose reconstruction system matrix, which will also affect the quantitative accuracy of metabolism analyses of lesions. In this paper, we present a dynamic PET metabolism analysis framework by defining a patient-adaptive system matrix to improve the lesion metabolism analysis. Both patient size information and potential small lesions are incorporated by simulations of phantoms of different sizes and individual point source responses. The new framework improves the quantitative accuracy of lesion metabolism analysis and makes lesion identification more precise. The requirement of accurate input functions is also reduced. Experiments are conducted on a Monte Carlo simulated data set for quantitative analysis and validation, and on real patient scans for assessment of clinical potential. PMID:23286175

  20. Nanosecond image processing using stimulated photon echoes.

    PubMed

    Xu, E Y; Kröll, S; Huestis, D L; Kachru, R; Kim, M K

    1990-05-15

    Processing of two-dimensional images on a nanosecond time scale is demonstrated using the stimulated photon echoes in a rare-earth-doped crystal (0.1 at. % Pr(3+):LaF(3)). Two spatially encoded laser pulses (pictures) resonant with the (3)P(0)-(3)H(4) transition of Pr(3+) were stored by focusing the image pulses sequentially into the Pr(3+):LaF(3) crystal. The stored information is retrieved and processed by a third read pulse, generating the echo that is the spatial convolution or correlation of the input images. Application of this scheme to high-speed pattern recognition is discussed. PMID:19768008

  1. New approach for underwater imaging and processing

    NASA Astrophysics Data System (ADS)

    Wen, Yanan; Tian, Weijian; Zheng, Bing; Zhou, Guozun; Dong, Hui; Wu, Qiong

    2014-05-01

    Due to the absorptive and scattering nature of water, the characteristics of underwater images differ from those of images taken in air. Underwater images suffer from poor visibility and noise. Obtaining a clear original image and processing that image are two important problems to be solved in underwater clear-vision research. In this paper a new approach is presented to solve these problems. First, an inhomogeneous illumination method is developed to obtain a clear original image. A normal-illumination imaging system and an inhomogeneous-illumination imaging system are used to capture images at the same distance. The results show that the contrast and definition of the processed image are greatly improved by the inhomogeneous illumination method. Second, based on the theory of photon transport in water and the particular requirements of underwater target detection, the characteristics of laser scattering from underwater target surfaces and the spatial and temporal characteristics of the oceanic optical channel have been studied. Based on Monte Carlo simulation, we studied how water-quality and other system parameters affect light transmission through water in the spatial and temporal domains, providing theoretical support for enhancing the SNR and the operational distance.

  2. Image processing via ultrasonics - Status and promise

    NASA Technical Reports Server (NTRS)

    Kornreich, P. G.; Kowel, S. T.; Mahapatra, A.; Nouhi, A.

    1979-01-01

    Acousto-electric devices for electronic imaging of light are discussed. These devices are more versatile than line scan imaging devices in current use. They have the capability of presenting the image information in a variety of modes. The image can be read out in the conventional line scan mode. It can be read out in the form of the Fourier, Hadamard, or other transform. One can take the transform along one direction of the image and line scan in the other direction, or perform other combinations of image processing functions. This is accomplished by applying the appropriate electrical input signals to the device. Since the electrical output signal of these devices can be detected in a synchronous mode, substantial noise reduction is possible.

  3. Image-processing with augmented reality (AR)

    NASA Astrophysics Data System (ADS)

    Babaei, Hossein R.; Mohurutshe, Pagiel L.; Habibi Lashkari, Arash

    2013-03-01

    In this project, the aim is to discuss and articulate the intent to create an image-based Android application. The basis of this study is real-time image detection and processing. It is a convenient new approach that allows users to gain information on imagery right on the spot. Past studies have revealed attempts to create image-based applications, but these have only gone as far as creating image finders that work only with images already stored in some form of database. The Android platform is rapidly spreading around the world and provides by far the most interactive and technical platform for smart-phones, which is why it was important to base the study and research on it. Augmented reality allows the user to manipulate the data and add enhanced features (video, GPS tags) to the image taken.

  4. On Cognition, Structured Sequence Processing, and Adaptive Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Petersson, Karl Magnus

    2008-11-01

    Cognitive neuroscience approaches the brain as a cognitive system: a system that functionally is conceptualized in terms of information processing. We outline some aspects of this concept and consider a physical system to be an information processing device when a subclass of its physical states can be viewed as representational/cognitive and transitions between these can be conceptualized as a process operating on these states by implementing operations on the corresponding representational structures. We identify a generic and fundamental problem in cognition: sequentially organized structured processing. Structured sequence processing provides the brain, in an essential sense, with its processing logic. In an approach addressing this problem, we illustrate how to integrate levels of analysis within a framework of adaptive dynamical systems. We note that the dynamical system framework lends itself to a description of asynchronous event-driven devices, which is likely to be important in cognition because the brain appears to be an asynchronous processing system. We use the human language faculty and natural language processing as a concrete example throughout.

  5. Adaptive ocean acoustic processing for a shallow ocean experiment

    SciTech Connect

    Candy, J.V.; Sullivan, E.J.

    1995-07-19

    A model-based approach is developed to solve an adaptive ocean acoustic signal processing problem. Here we investigate the design of a model-based identifier (MBID) for a normal-mode model developed from a shallow water ocean experiment and then apply it to a set of experimental data, demonstrating the feasibility of this approach. In this problem we show how the processor can be structured to estimate the horizontal wave numbers directly from measured pressure and sound speed, thereby eliminating the need for synthetic aperture processing or a propagation model solution. Ocean acoustic signal processing has made great strides over the past decade, necessitated by the development of quieter submarines and the recent proliferation of diesel powered vessels.

  6. Adapting high-resolution speckle imaging to moving targets and platforms

    SciTech Connect

    Carrano, C J; Brase, J M

    2004-02-05

    High-resolution surveillance imaging with apertures greater than a few inches over horizontal or slant paths at optical or infrared wavelengths will typically be limited by atmospheric aberrations. With static targets and static platforms, we have previously demonstrated near-diffraction limited imaging of various targets including personnel and vehicles over horizontal and slant paths ranging from less than a kilometer to many tens of kilometers using adaptations to bispectral speckle imaging techniques. Nominally, these image processing methods require the target to be static with respect to its background during the data acquisition since multiple frames are required. To obtain a sufficient number of frames and also to allow the atmosphere to decorrelate between frames, data acquisition times on the order of one second are needed. Modifications to the original imaging algorithm will be needed to deal with situations where there is relative target to background motion. In this paper, we present an extension of these imaging techniques to accommodate mobile platforms and moving targets.

  7. An Efficient and Self-Adapted Approach to the Sharpening of Color Images

    PubMed Central

    Lee, Tien-Lin

    2013-01-01

    An efficient approach to the sharpening of color images is proposed in this paper. For this, the image to be sharpened is first transformed to the HSV color model, and then only the channel of Value will be used for the process of sharpening while the other channels are left unchanged. We then apply a proposed edge detector and low-pass filter to the channel of Value to pick out pixels around boundaries. After that, those pixels detected as around edges or boundaries are adjusted so that the boundary can be sharpened, and those nonedge pixels are kept unaltered. The increment or decrement magnitude that is to be added to those edge pixels is determined in an adaptive manner based on global statistics of the image and local statistics of the pixel to be sharpened. With the proposed approach, the discontinuities can be highlighted while most of the original information contained in the image can be retained. Finally, the adjusted channel of Value and that of Hue and Saturation will be integrated to get the sharpened color image. Extensive experiments on natural images will be given in this paper to highlight the effectiveness and efficiency of the proposed approach. PMID:24348136
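
    A minimal sketch of the general recipe, assuming scikit-image and SciPy are available: sharpen only the Value channel, only at detected edge pixels, with a gain scaled by local versus global statistics. The edge detector, thresholds, and gain rule here are simplifications, not the authors' exact formulation.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel, uniform_filter
    from skimage import color, data, img_as_float

    def sharpen_hsv(rgb, edge_thresh=0.08, base_gain=1.0):
        """Sharpen only the Value channel, only at edge pixels, with a statistics-driven gain."""
        hsv = color.rgb2hsv(img_as_float(rgb))
        v = hsv[..., 2]

        # Edge map on the Value channel; Hue and Saturation are left untouched.
        grad = np.hypot(sobel(v, axis=0), sobel(v, axis=1))
        edges = grad > edge_thresh

        # Unsharp-mask detail, scaled by local contrast relative to the global level.
        detail = v - gaussian_filter(v, sigma=1.5)
        local_std = np.sqrt(np.maximum(uniform_filter(v ** 2, 7) - uniform_filter(v, 7) ** 2, 0.0))
        gain = base_gain * local_std / (v.std() + 1e-8)

        hsv[..., 2] = np.where(edges, np.clip(v + gain * detail, 0.0, 1.0), v)
        return color.hsv2rgb(hsv)

    if __name__ == "__main__":
        sharpened = sharpen_hsv(data.astronaut())
        print("output shape and range:", sharpened.shape, float(sharpened.min()), float(sharpened.max()))
    ```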

  8. Overview on METEOSAT geometrical image data processing

    NASA Technical Reports Server (NTRS)

    Diekmann, Frank J.

    1994-01-01

    Digital images acquired from the geostationary METEOSAT satellites are processed and disseminated at ESA's European Space Operations Centre in Darmstadt, Germany. Their scientific value is mainly dependent on their radiometric quality and geometric stability. This paper will give an overview of the image processing activities performed at ESOC, concentrating on geometrical restoration and quality evaluation. The performance of the rectification process for the various satellites over the past years will be presented, and the impacts of external events, such as the Pinatubo eruption in 1991, will be explained. Special developments in both hardware and software, necessary to cope with demanding tasks such as new image resampling or correction of spacecraft anomalies, are presented as well. The rotating lens of MET-5, which caused severe geometrical image distortions, is an example of the latter.

  9. [Super sweet corn hybrids adaptability for industrial processing. I freezing].

    PubMed

    Alfonzo, Braunnier; Camacho, Candelario; Ortiz de Bertorelli, Ligia; De Venanzi, Frank

    2002-09-01

    With the purpose of evaluating adaptability to the freezing process of super sweet corn sh2 hybrids Krispy King, Victor and 324, 100 cobs of each type were frozen at -18 degrees C. After 120 days of storage, their chemical, microbiological and sensorial characteristics were compared with a sweet corn su. Industrial quality of the process of freezing and length and number of rows in cobs were also determined. Results revealed yields above 60% in frozen corns. Length and number of rows in cobs were acceptable. Most of the chemical characteristics of super sweet hybrids were not different from the sweet corn assayed at the 5% significance level. Moisture content and soluble solids of hybrid Victor, as well as total sugars of hybrid 324 were statistically different. All sh2 corns had higher pH values. During freezing, soluble solids concentration, sugars and acids decreased whereas pH increased. Frozen cobs exhibited acceptable microbiological rank, with low activities of mesophiles and total coliforms, absence of psychrophiles and fecal coliforms, and an appreciable amount of molds. In conclusion, sh2 hybrids adapted with no problems to the freezing process; they had lower contents of soluble solids and higher contents of total sugars, which almost doubled the amount of su corn; flavor, texture, sweetness and appearance of kernels were also better. Hybrid Victor was preferred by the evaluating panel and had an outstanding performance due to its yield and sensorial characteristics. PMID:12448345

  10. Motion compensation for adaptive horizontal line array processing

    NASA Astrophysics Data System (ADS)

    Yang, T. C.

    2003-01-01

    Large aperture horizontal line arrays have small resolution cells and can be used to separate a target signal from an interference signal by array beamforming. High-resolution adaptive array processing can be used to place a null at the interference signal so that the array gain can be much higher than that of conventional beamforming. But these nice features are significantly degraded by the source motion, which reduces the time period under which the environment can be considered stationary from the array processing point of view. For adaptive array processing, a large number of data samples are generally required to minimize the variance of the cross-spectral density, or the covariance matrix, between the array elements. For a moving source and interference, the penalty of integrating over a large number of samples is the spread of signal and interference energy to more than one or two eigenvalues. The signal and interference are no longer clearly identified by the eigenvectors and, consequently, the ability to suppress the interference suffers. We show in this paper that the effect of source motion can be compensated for the (signal) beam covariance matrix, thus allowing integration over a large number of data samples without loss in the signal beam power. We employ an equivalent of a rotating coordinate frame to track the signal bearing change and use the waveguide invariant theory to compensate the signal range change by frequency shifting.

  11. Prediction and control of chaotic processes using nonlinear adaptive networks

    SciTech Connect

    Jones, R.D.; Barnes, C.W.; Flake, G.W.; Lee, K.; Lewis, P.S.; O'Rouke, M.K.; Qian, S.

    1990-01-01

    We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.

  12. Landsat ecosystem disturbance adaptive processing system (LEDAPS) algorithm description

    USGS Publications Warehouse

    Schmidt, Gail; Jenkerson, Calli; Masek, Jeffrey; Vermote, Eric; Gao, Feng

    2013-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) software was originally developed by the National Aeronautics and Space Administration–Goddard Space Flight Center and the University of Maryland to produce top-of-atmosphere reflectance from Landsat Thematic Mapper and Enhanced Thematic Mapper Plus Level 1 digital numbers and to apply atmospheric corrections to generate a surface-reflectance product. The U.S. Geological Survey (USGS) has adopted the LEDAPS algorithm for producing the Landsat Surface Reflectance Climate Data Record. This report discusses the LEDAPS algorithm, which was implemented by the USGS.

  13. Enhancement of Corneal Visibility in Optical Coherence Tomography Images Using Corneal Adaptive Compensation

    PubMed Central

    Girard, Michaël J. A.; Ang, Marcus; Chung, Cheuk Wang; Farook, Mohamed; Strouthidis, Nick; Mehta, Jod S.; Mari, Jean Martial

    2015-01-01

    Purpose: To improve the contrast of optical coherence tomography (OCT) images of the cornea (post processing). Methods: We have recently developed standard compensation (SC) algorithms to remove light attenuation artifacts. A more recent approach, namely adaptive compensation (AC), further limited noise overamplification within deep tissue regions. AC was shown to work efficiently when all A-scan signals were fully attenuated at high depth. But in many imaging applications (e.g., OCT imaging of the cornea), such an assumption is not satisfied, which can result in strong noise overamplification. A corneal adaptive compensation (CAC) algorithm was therefore developed to overcome this limitation. CAC benefited from local A-scan processing (rather than global as in AC) and its performance was compared with that of SC and AC using Fourier-domain OCT images of four human corneas. Results: CAC provided considerably greater image contrast improvement than SC or AC, with excellent visibility of the corneal stroma, low noise overamplification, homogeneous signal amplification, and high contrast. Specifically, CAC provided mean interlayer contrasts (a measure of high stromal visibility and low noise) greater than 0.97, while SC and AC provided lower values ranging from 0.38 to 1.00. Conclusion: CAC provided considerable improvement compared with SC and AC by eliminating noise overamplification, while maintaining all benefits of compensation, thus making the corneal endothelium and corneal thickness easily identifiable. Translational Relevance: CAC may find wide applicability in clinical practice and could contribute to improved morphometric and biomechanical understanding of the cornea. PMID:26046005
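
    For orientation only, the sketch below illustrates the basic depth-compensation idea that SC, AC, and CAC build on: dividing the signal at each depth by (twice) the signal energy remaining beneath it so that attenuation is cancelled, with an exponent and a noise floor acting as the adaptive knobs. The exact formulations, parameter values, and the local A-scan processing that distinguishes CAC are not reproduced here; treat this as a schematic, assumption-laden illustration.

    ```python
    import numpy as np

    def compensate_ascan(a_scan, exponent=2.0, energy_floor=1e-4):
        """Schematic depth compensation of one OCT A-scan (tissue surface at index 0).

        Each depth sample is divided by the signal energy remaining beneath it, which
        counteracts attenuation; the exponent and the energy floor (which stops noise
        over-amplification at depth) play the role of the adaptive parameters.
        """
        energy = a_scan.astype(float) ** exponent
        remaining = np.cumsum(energy[::-1])[::-1]                 # energy from depth z to the bottom
        remaining = np.maximum(remaining, energy_floor * remaining[0])
        compensated = energy / (2.0 * remaining)
        return compensated ** (1.0 / exponent)

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        depth = np.arange(600)
        tissue = np.where((depth > 100) & (depth < 500), 1.0, 0.05)   # simple two-layer "cornea"
        attenuated = tissue * np.exp(-0.008 * depth) + 0.01 * rng.random(depth.size)
        restored = compensate_ascan(attenuated)
        print("front/back tissue ratio before:", attenuated[110] / attenuated[490])
        print("front/back tissue ratio after: ", restored[110] / restored[490])
    ```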

  14. Closed-loop adaptive optics using a CMOS image quality metric sensor

    NASA Astrophysics Data System (ADS)

    Ting, Chueh; Rayankula, Aditya; Giles, Michael K.; Furth, Paul M.

    2006-08-01

    When compared to a Shack-Hartmann sensor, a CMOS image sharpness sensor has the advantage of reduced complexity in a closed-loop adaptive optics system. It also has the potential to be implemented as a smart sensor using VLSI technology. In this paper, we present a novel adaptive optics testbed that uses a CMOS sharpness imager built in the New Mexico State University (NMSU) Electro-Optics Research Laboratory (EORL). The adaptive optics testbed, which includes a CMOS image quality metric sensor and a 37-channel deformable mirror, has the capability to rapidly compensate higher-order phase aberrations. An experimental performance comparison of the pinhole image sharpness feedback method and the CMOS imager is presented. The experimental data shows that the CMOS sharpness imager works well in a closed-loop adaptive optics system. Its overall performance is better than that of the pinhole method, and it has a fast response time.
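
    The feedback idea can be sketched in simulation: a scalar image-quality metric (here the sum of squared intensities of a flux-normalized model PSF) drives a stochastic parallel gradient descent update of the corrector commands. The toy optical model, actuator count, gain, and perturbation size are assumptions for illustration; the paper's loop runs on real hardware with a CMOS sensor, not in software.

    ```python
    import numpy as np

    N_ACT = 37                                        # e.g., a 37-channel deformable mirror
    rng = np.random.default_rng(6)
    true_aberration = rng.normal(0.0, 0.15, N_ACT)    # unknown phase error to be cancelled

    def sharpness(commands):
        """Image-quality metric: sum of squared intensities of a flux-normalized model PSF."""
        residual = true_aberration + commands
        width = 1.0 + residual @ residual             # larger residual aberration -> wider blur
        x = np.linspace(-8.0, 8.0, 160)
        psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2.0 * width ** 2))
        psf /= psf.sum()                              # conserve total flux
        return float(np.sum(psf ** 2))                # a sharper (narrower) PSF scores higher

    def spgd(iterations=600, perturb=0.05, gain=1000.0):
        """Stochastic parallel gradient descent driven only by the scalar sharpness metric.

        The gain and perturbation size are illustrative and may need tuning.
        """
        commands = np.zeros(N_ACT)
        for _ in range(iterations):
            delta = perturb * rng.choice([-1.0, 1.0], size=N_ACT)
            dj = sharpness(commands + delta) - sharpness(commands - delta)
            commands += gain * dj * delta             # move actuators along the estimated gradient
        return commands

    if __name__ == "__main__":
        final = spgd()
        print("metric before/after:", sharpness(np.zeros(N_ACT)), sharpness(final))
        print("residual RMS before/after:", np.std(true_aberration), np.std(true_aberration + final))
    ```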

  15. Magnetic resonance imaging for adaptive cobalt tomotherapy: A proposal

    PubMed Central

    Kron, Tomas; Eyles, David; John, Schreiner L; Battista, Jerry

    2006-01-01

    Rotational delivery is less susceptible to problems related to the use of a low energy megavoltage photon source, while the helical delivery reduces the negative impact of the relatively large penumbra inherent in the use of Cobalt sources for radiotherapy. On the other hand, the use of a 60Co source ensures constant dose rate with gantry rotation and makes dose calculation in a magnetic field straightforward, since the range of secondary electrons is limited. The MR-integrated Cobalt tomotherapy unit, dubbed ‘MiCoTo,’ uses two independent physical principles for image acquisition and treatment delivery. It would offer excellent target definition and will allow target motion to be followed during treatment using fast imaging techniques, thus providing the best possible input for adaptive radiotherapy. As an additional bonus, quality assurance of the radiation delivery can be performed in situ using radiation sensitive gels imaged by MRI. PMID:21206640

  16. Simulation of dynamic processes with adaptive neural networks.

    SciTech Connect

    Tzanos, C. P.

    1998-02-03

    Many industrial processes are highly non-linear and complex. Their simulation with first-principle or conventional input-output correlation models is not satisfactory, either because the process physics is not well understood, or it is so complex that direct simulation is either not adequately accurate, or it requires excessive computation time, especially for on-line applications. Artificial intelligence techniques (neural networks, expert systems, fuzzy logic) or their combination with simple process-physics models can be effectively used for the simulation of such processes. Feedforward (static) neural networks (FNNs) can be used effectively to model steady-state processes. They have also been used to model dynamic (time-varying) processes by adding to the network input layer input nodes that represent values of input variables at previous time steps. The number of previous time steps is problem dependent and, in general, can be determined after extensive testing. This work demonstrates that for dynamic processes that do not vary fast with respect to the retraining time of the neural network, an adaptive feedforward neural network can be an effective simulator that is free of the complexities introduced by the use of input values at previous time steps.
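
    The tapped-delay-line idea mentioned above, feeding current and previous values of the input variables into a static feedforward network, can be sketched as below. The simulated process, the lag depth, and the use of scikit-learn's MLPRegressor are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def make_lagged(u, y, n_lags=3):
        """Build static-network inputs from the current and previous input-variable values."""
        X, target = [], []
        for t in range(n_lags, len(u)):
            X.append(u[t - n_lags:t + 1])     # u(t-n_lags), ..., u(t-1), u(t)
            target.append(y[t])
        return np.array(X), np.array(target)

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        u = rng.uniform(-1.0, 1.0, 3000)
        y = np.zeros_like(u)
        for t in range(3, len(u)):            # a simple nonlinear dynamic process to emulate
            y[t] = np.tanh(u[t - 1]) + 0.3 * u[t - 2] ** 2 - 0.5 * u[t - 1] * u[t - 3]

        X, target = make_lagged(u, y, n_lags=3)
        split = 2500
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(X[:split], target[:split])
        rmse = np.sqrt(np.mean((model.predict(X[split:]) - target[split:]) ** 2))
        print("held-out RMSE:", rmse)
    ```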

  17. MEMS Deformable Mirrors for Adaptive Optics in Astronomical Imaging

    NASA Astrophysics Data System (ADS)

    Cornelissen, S.; Bierden, P. A.; Bifano, T.

    We report on the development of micro-electromechanical (MEMS) deformable mirrors designed for ground and space-based astronomical instruments intended for imaging extra-solar planets. Three different deformable mirror designs, a 1024-element continuous membrane (32x32), a 4096-element continuous membrane (64x64), and a 331-element hexagonal segmented tip-tilt-piston array, are being produced for the Planet Imaging Concept Testbed Using a Rocket Experiment (PICTURE) program, the Gemini Planet Imaging Instrument, and the visible nulling coronagraph developed at JPL for NASA's TPF mission, respectively. The design of these polysilicon, surface-micromachined MEMS deformable mirrors builds on technology that was pioneered at Boston University and has been used extensively to correct for ocular aberrations in retinal imaging systems and for compensation of atmospheric turbulence in free-space laser communication. These light-weight, low-power deformable mirrors will have an active aperture of up to 25.2 mm consisting of a thin silicon membrane mirror supported by an array of 1024 to 4096 electrostatic actuators exhibiting no hysteresis and sub-nanometer repeatability. The continuous membrane deformable mirrors, coated with a highly reflective metal film, will be capable of up to 4 μm of stroke, have a surface finish of <10 nm RMS with a fill factor of 99.8%. The segmented device will have a range of motion of 1 μm of piston and 600 arc-seconds of tip/tilt simultaneously, and a surface finish of 1 nm RMS. The individual mirror elements in this unique device are designed such that they will maintain their flatness throughout the range of travel. New design features and fabrication processes are combined with a proven device architecture to achieve the desired performance and high reliability. Device characteristics and performance results are presented in this paper.

  18. Real-time optical image processing techniques

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1988-01-01

    Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse frequency modulated halftone screens and the modification of micro-channel spatial light modulators (MSLMs). Micro-channel spatial light modulators are modified via the Fabry-Perot method to achieve the high gamma operation required for nonlinear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness of the thresholding and also showed the need for higher SBP for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low frequency and low bias mode. Cascading of two LCLVs should also provide enough gamma for nonlinear processing. In this case, the SBP of the LCLV is sufficient but the uniformity of the LCLV needs improvement. Further applications include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for dynamic range compression of an input image in real time by using GaAs photorefractive crystals is reported. Finally, a new architecture for nonlinear multiple sensory, neural processing has been suggested.

  19. Image classification with densely sampled image windows and generalized adaptive multiple kernel learning.

    PubMed

    Yan, Shengye; Xu, Xinxing; Xu, Dong; Lin, Stephen; Li, Xuelong

    2015-03-01

    We present a framework for image classification that extends beyond the window sampling of fixed spatial pyramids and is supported by a new learning algorithm. Based on the observation that fixed spatial pyramids sample a rather limited subset of the possible image windows, we propose a method that accounts for a comprehensive set of windows densely sampled over location, size, and aspect ratio. A concise high-level image feature is derived to effectively deal with this large set of windows, and this higher level of abstraction offers both efficient handling of the dense samples and reduced sensitivity to misalignment. In addition to dense window sampling, we introduce generalized adaptive ℓp-norm multiple kernel learning (GA-MKL) to learn a robust classifier based on multiple base kernels constructed from the new image features and multiple sets of prelearned classifiers from other classes. With GA-MKL, multiple levels of image features are effectively fused, and information is shared among different classifiers. Extensive evaluation on benchmark datasets for object recognition (Caltech256 and Caltech101) and scene recognition (15Scenes) demonstrate that the proposed method outperforms the state-of-the-art under a broad range of settings. PMID:24968365
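
    GA-MKL itself involves an adaptive ℓp-norm regularizer and sets of prelearned classifiers; as a much simpler illustration of the underlying mechanics (combining several base kernels with nonnegative weights and training an SVM on the combination), one might write something like the following. The base kernels, the naive weight search, and the toy data are assumptions and do not reproduce the paper's algorithm.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    def combined_kernel(kernels, weights):
        """Nonnegative weighted sum of precomputed base kernel matrices."""
        return sum(w * K for w, K in zip(weights, kernels))

    if __name__ == "__main__":
        X, y = make_classification(n_samples=400, n_features=20, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

        base = [rbf_kernel, linear_kernel, polynomial_kernel]
        K_tr = [k(X_tr, X_tr) for k in base]          # train x train kernel matrices
        K_te = [k(X_te, X_tr) for k in base]          # test x train kernel matrices

        # Naive search over a few weight settings; real MKL learns the weights jointly.
        best = (-1.0, None)
        for w in ([1, 0, 0], [0, 1, 0], [0, 0, 1], [1 / 3, 1 / 3, 1 / 3], [0.5, 0.25, 0.25]):
            clf = SVC(kernel="precomputed").fit(combined_kernel(K_tr, w), y_tr)
            acc = clf.score(combined_kernel(K_te, w), y_te)
            best = max(best, (acc, tuple(w)))
        print("best kernel weights and accuracy:", best[1], round(best[0], 3))
    ```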

  20. Adaptive box filters for removal of random noise from digital images

    USGS Publications Warehouse

    Eliason, E.M.; McEwen, A.S.

    1990-01-01

    We have developed adaptive box-filtering algorithms to (1) remove random bit errors (pixel values with no relation to the image scene) and (2) smooth noisy data (pixels related to the image scene but with an additive or multiplicative component of noise). For both procedures, we use the standard deviation (??) of those pixels within a local box surrounding each pixel, hence they are adaptive filters. This technique effectively reduces speckle in radar images without eliminating fine details. -from Authors
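
    A minimal sketch of the adaptive box-filter idea, using the local mean and standard deviation within a box around each pixel both to spot random bit errors and to smooth noise adaptively; the box size, thresholds, and noise estimate are illustrative choices, not the values used in the authors' implementation.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter, uniform_filter

    def local_stats(image, size=7):
        """Mean and standard deviation within a box surrounding each pixel."""
        mean = uniform_filter(image, size=size)
        var = np.maximum(uniform_filter(image ** 2, size=size) - mean ** 2, 0.0)
        return mean, np.sqrt(var)

    def remove_bit_errors(image, size=7, k=4.0):
        """Replace pixels far outside the local distribution (random bit errors)."""
        mean, sigma = local_stats(image, size)
        outliers = np.abs(image - mean) > k * sigma
        return np.where(outliers, median_filter(image, size=3), image)

    def smooth_noise(image, size=7, noise_sigma=None):
        """Adaptive smoothing: pull each pixel toward the box mean where variance is noise-like."""
        mean, sigma = local_stats(image, size)
        if noise_sigma is None:
            noise_sigma = float(np.median(sigma))
        gain = np.clip(1.0 - (noise_sigma / np.maximum(sigma, 1e-12)) ** 2, 0.0, 1.0)
        return mean + gain * (image - mean)

    if __name__ == "__main__":
        rng = np.random.default_rng(8)
        img = np.tile(np.linspace(0.0, 1.0, 256), (256, 1)) + 0.05 * rng.standard_normal((256, 256))
        img[rng.random(img.shape) < 0.001] = 1e3          # random bit errors
        cleaned = smooth_noise(remove_bit_errors(img))
        print("max value before/after:", img.max(), cleaned.max())
    ```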

  1. Polymer Solidification and Stabilization: Adaptable Processes for Atypical Wastes

    SciTech Connect

    Jensen, C.

    2007-07-01

    Vinyl Ester Styrene (VES) and Advanced Polymer Solidification (APS(TM)) processes are used to solidify, stabilize, and immobilize radioactive, pyrophoric and hazardous wastes at US Department of Energy (DOE) and Department of Defense (DOD) sites, and commercial nuclear facilities. A wide range of projects have been accomplished, including in situ immobilization of ion exchange resin and carbon filter media in decommissioned submarines; underwater solidification of zirconium and hafnium machining swarf; solidification of uranium chips; impregnation of depth filters; immobilization of mercury, lead and other hazardous wastes (including paint chips and blasting media); and in situ solidification of submerged demineralizers. Discussion of the adaptability of the VES and APS(TM) processes is timely, given the decommissioning work at government sites, and efforts by commercial nuclear plants to reduce inventories of one-of-a-kind wastes. The VES and APS(TM) media and processes are highly adaptable to a wide range of waste forms, including liquids, slurries, bead and granular media; as well as metal fines, particles and larger pieces. With the ability to solidify/stabilize liquid wastes using high-speed mixing; wet sludges and solids by low-speed mixing; or bead and granular materials through in situ processing, these polymers will produce a stable, rock-hard product that has the ability to sequester many hazardous waste components and create Class B and C stabilized waste forms for disposal. Technical assessment and approval of these solidification processes and final waste forms have been greatly simplified by exhaustive waste form testing, as well as multiple NRC and CRCPD waste form approvals. (authors)

  2. Bistatic SAR: Signal Processing and Image Formation.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.

    2014-10-01

    This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture RADAR (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the process steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and finally, image formation. Various plots and values will be shown at most steps to illustrate the processing for a bistatic COSMO SkyMed collection gathered on June 10, 2013 at Kirtland Air Force Base, New Mexico.

  3. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

    Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required for extracting each type of these geologic surfaces. I propose methods to automatically extract all the fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods just involve image processing or array processing, which is achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, which is simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images. I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use these surfaces as constraints to more accurately estimate seismic normal vectors that are discontinuous near the unconformities. Finally, I use the estimated normal vectors and use the unconformities as constraints to compute a flattened image, in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space

  4. Image Processing Application for Cognition (IPAC) - Traditional and Emerging Topics in Image Processing in Astronomy (Invited)

    NASA Astrophysics Data System (ADS)

    Pesenson, M.; Roby, W.; Helou, G.; McCollum, B.; Ly, L.; Wu, X.; Laine, S.; Hartley, B.

    2008-08-01

    A new application framework for advanced image processing for astronomy is presented. It implements standard two-dimensional operators, and recent developments in the field of non-astronomical image processing (IP), as well as original algorithms based on nonlinear partial differential equations (PDE). These algorithms are especially well suited for multi-scale astronomical images since they increase signal to noise ratio without smearing localized and diffuse objects. The visualization component is based on the extensive tools that we developed for Spitzer Space Telescope's observation planning tool Spot and archive retrieval tool Leopard. It contains many common features, combines images in new and unique ways and interfaces with many astronomy data archives. Both interactive and batch mode processing are incorporated. In the interactive mode, the user can set up simple processing pipelines, and monitor and visualize the resulting images from each step of the processing stream. The system is platform-independent and has an open architecture that allows extensibility by addition of plug-ins. This presentation addresses astronomical applications of traditional topics of IP (image enhancement, image segmentation) as well as emerging new topics like automated image quality assessment (QA) and feature extraction, which have potential for shaping future developments in the field. Our application framework embodies a novel synergistic approach based on integration of image processing, image visualization and image QA (iQA).

  5. Adaptive femtosecond control using feedback from three-dimensional momentum images

    NASA Astrophysics Data System (ADS)

    Wells, E.

    2011-05-01

    Shaping ultrafast laser pulses using adaptive feedback is a proven technique for manipulating dynamics in molecular systems with no readily apparent control mechanism. Commonly employed feedback signals include fluorescence or ion yield, which may not uniquely identify the final state. Raw velocity map images, which contain a two-dimensional representation of the full three-dimensional photofragment momentum vector, are a more specific feedback source. The raw images, however, are limited by an azimuthal ambiguity which is usually removed in offline processing. By implementing a rapid inversion procedure based upon the onion-peeling technique, we are able to incorporate three-dimensional momentum information directly into the adaptive control loop. This method enables more targeted control experiments. Two examples are used to demonstrate the utility of this feedback. First, double ionization of CO produces C+ and O+ fragments ejected both perpendicular and parallel to the laser polarization with kinetic energy release of ~6 eV. Both suppression and enhancement of the perpendicular transitions relative to the parallel transitions are demonstrated. Second, double ionization of acetylene can lead to both HCCH2+ and HHCC2+ isomers. We select between these outcomes using the angular information contained in the CH+ and CH2+ images. Supported by National Science Foundation award PHY-0969687 and the Chemical Sciences, Geosciences, and Biosciences Division, Office of Basic Energy Science, Office of Science, US Department of Energy.

  6. Adaptive memory: enhanced location memory after survival processing.

    PubMed

    Nairne, James S; Vanarsdall, Joshua E; Pandeirada, Josefa N S; Blunt, Janell R

    2012-03-01

    Two experiments investigated whether survival processing enhances memory for location. From an adaptive perspective, remembering that food has been located in a particular area, or that potential predators are likely to be found in a given territory, should increase the chances of subsequent survival. Participants were shown pictures of food or animals located at various positions on a computer screen. The task was to rate the ease of collecting the food or capturing the animals relative to a central fixation point. Surprise retention tests revealed that people remembered the locations of the items better when the collection or capturing task was described as relevant to survival. These data extend the generality of survival processing advantages to a new domain (location memory) by means of a task that does not involve rating the relevance of words to a scenario. PMID:22004268

  7. Thermal Imaging Processes of Polymer Nanocomposite Coatings

    NASA Astrophysics Data System (ADS)

    Meth, Jeffrey

    2015-03-01

    Laser induced thermal imaging (LITI) is a process whereby infrared radiation impinging on a coating on a donor film transfers that coating to a receiving film to produce a pattern. This talk describes how LITI patterning can print color filters for liquid crystal displays, and details the physical processes that are responsible for transferring the nanocomposite coating in a coherent manner that does not degrade its optical properties. Unique features of this process involve heating rates of 10^7 K/s, and cooling rates of 10^4 K/s, which implies that not all of the relaxation modes of the polymer are accessed during the imaging process. On the microsecond time scale, the polymer flow is forced by devolatilization of solvents, followed by deformation akin to the constrained blister test, and then fracture caused by differential thermal expansion. The unique combination of disparate physical processes demonstrates the gamut of physics that contribute to advanced material processing in an industrial setting.

  8. A Pipeline Tool for CCD Image Processing

    NASA Astrophysics Data System (ADS)

    Bell, Jon F.; Young, Peter J.; Roberts, William H.; Sebo, Kim M.

    MSSSO is part of a collaboration developing a wide field imaging CCD mosaic (WFI). As part of this project, we have developed a GUI based pipeline tool that is an integrated part of MSSSO's CICADA data acquisition environment and processes CCD FITS images as they are acquired. The tool is also designed to run as a stand alone program to process previously acquired data. IRAF tasks are used as the central engine, including the new NOAO mscred package for processing multi-extension FITS files. The STScI OPUS pipeline environment may be used to manage data and process scheduling. The Motif GUI was developed using SUN Visual Workshop. C++ classes were written to facilitate launching of IRAF and OPUS tasks. While this first version implements calibration processing up to and including flat field corrections, there is scope to extend it to other processing.

  9. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  10. Fundamental Concepts of Digital Image Processing

    DOE R&D Accomplishments Database

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  11. Rician noise reduction in magnetic resonance images using adaptive non-local mean and guided image filtering

    NASA Astrophysics Data System (ADS)

    Mahmood, Muhammad Tariq; Chu, Yeon-Ho; Choi, Young-Kyu

    2016-05-01

    This paper proposes a Rician noise reduction method for magnetic resonance (MR) images. The proposed method is based on adaptive non-local mean and guided image filtering techniques. In the first phase, a guidance image is obtained from the noisy image through an adaptive non-local mean filter. Sobel operators are applied to compute the strength of edges, which is further used to control the spread of the kernel in non-local mean filtering. In the second phase, the noisy and the guidance images are provided to the guided image filter as input to restore the noise-free image. The improved performance of the proposed method is investigated using simulated and real data sets of MR images. Its performance is also compared with previously proposed state-of-the-art methods. Comparative analysis demonstrates the superiority of the proposed scheme over the existing approaches.
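
    A simplified sketch of the two-phase idea, assuming standard library components: a non-local-means guidance image whose strength is loosely tied to Sobel edge content (a crude stand-in for the paper's adaptive kernel spread), followed by a textbook guided filter. Parameters are illustrative, not the authors' values.

    import numpy as np
    from scipy.ndimage import sobel, uniform_filter
    from skimage.restoration import denoise_nl_means, estimate_sigma

    def guided_filter(I, p, radius=4, eps=1e-3):
        # Textbook guided filter (He et al.): filter p using guidance image I.
        size = 2 * radius + 1
        mean_I = uniform_filter(I, size)
        mean_p = uniform_filter(p, size)
        a = (uniform_filter(I * p, size) - mean_I * mean_p) / \
            (uniform_filter(I * I, size) - mean_I ** 2 + eps)
        b = mean_p - a * mean_I
        return uniform_filter(a, size) * I + uniform_filter(b, size)

    def denoise_mr(noisy):
        noisy = noisy.astype(float)
        sigma = float(np.mean(estimate_sigma(noisy)))
        edges = np.hypot(sobel(noisy, 0), sobel(noisy, 1))
        # Crude stand-in for the adaptive kernel spread: smooth less when the
        # image contains strong edge content.
        h = sigma * (1.2 - 0.4 * edges.mean() / (edges.max() + 1e-12))
        guidance = denoise_nl_means(noisy, h=h, fast_mode=True)
        return guided_filter(guidance, noisy)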

  13. Image processing of angiograms: A pilot study

    NASA Technical Reports Server (NTRS)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  14. Future projects in pulse image processing

    NASA Astrophysics Data System (ADS)

    Kinser, Jason M.

    1999-03-01

    Pulse-Coupled Neural Networks (PCNNs) have generated quite a bit of interest as image processing tools. Past applications include image segmentation, edge extraction, texture extraction, de-noising, object isolation, foveation and fusion. These past applications do not comprise a complete list of useful applications of the PCNN. Future avenues of research will include level set analysis, binary (optical) correlators, artificial life simulations, maze running and filter jet analysis. This presentation will explore these future avenues of PCNN research.
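
    For readers unfamiliar with the model, the following is a minimal sketch of a standard PCNN iteration of the kind used in the segmentation and fusion applications listed above; the linking kernel and constants are illustrative and not tied to any particular paper.

    import numpy as np
    from scipy.ndimage import convolve

    def pcnn_segment(img, n_iter=10, beta=0.2, alpha_L=1.0, alpha_T=0.3,
                     V_L=1.0, V_T=20.0):
        img = (img - img.min()) / (np.ptp(img) + 1e-12)   # feeding input = stimulus
        kernel = np.array([[0.5, 1.0, 0.5],
                           [1.0, 0.0, 1.0],
                           [0.5, 1.0, 0.5]])
        L = np.zeros_like(img)        # linking field
        theta = np.ones_like(img)     # dynamic threshold
        Y = np.zeros_like(img)        # pulse output
        fired = np.zeros_like(img)    # first firing time per pixel
        for n in range(1, n_iter + 1):
            L = np.exp(-alpha_L) * L + V_L * convolve(Y, kernel, mode="constant")
            U = img * (1.0 + beta * L)            # internal activity
            Y = (U > theta).astype(float)         # neurons pulse when U exceeds theta
            theta = np.exp(-alpha_T) * theta + V_T * Y
            fired[(Y > 0) & (fired == 0)] = n
        return fired                              # firing-time map acts as a segmentation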

  15. Phase sensitive adaptive optics assisted SLO/OCT for retinal imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Pircher, Michael; Felberer, Franz; Salas, Matthias; Haindl, Richard; Baumann, Bernhard; Wartak, Andreas; Hitzenberger, Christoph K.

    2016-03-01

    Adaptive optics (AO) is essential in order to visualize small structures such as cone and rod photoreceptors in the living human retina in vivo. By combining AO with optical coherence tomography (OCT) the axial resolution in the images can be further improved. OCT provides access to the phase of the light returning from the retina which allows a measurement of subtle length changes in the nanometer range. These occur for example during the renewal process of cone outer segments. We present an approach for measuring very small length changes using an extended AO scanning laser ophthalmoscope (SLO)/ OCT instrument. By adding a second OCT interferometer that shares the same sample arm as the first interferometer, phase sensitive measurements can be performed in the en-face imaging plane. Frame averaging decreases phase noise which greatly improves the precision in the measurement of associated length changes.

  16. Wavelength-adaptive dehazing using histogram merging-based classification for UAV images.

    PubMed

    Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki

    2015-01-01

    Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model that considers the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results. PMID:25808767

  18. CCD architecture for spacecraft SAR image processing

    NASA Technical Reports Server (NTRS)

    Arens, W. E.

    1977-01-01

    A real-time synthetic aperture radar (SAR) image processing architecture amenable to future on-board spacecraft applications is currently under development. Using state-of-the-art charge-coupled device (CCD) technology, low cost and power are inherent features. Other characteristics include the ability to reprogram correlation reference functions, correct for range migration, and compensate for antenna beam pointing errors on the spacecraft in real time. The first spaceborne demonstration is scheduled to be flown as an experiment on a 1982 Shuttle imaging radar mission (SIR-B). This paper describes the architecture and implementation characteristics of this initial spaceborne CCD SAR image processor.

  19. Infrared image processing and data analysis

    NASA Astrophysics Data System (ADS)

    Ibarra-Castanedo, C.; González, D.; Klein, M.; Pilla, M.; Vallerand, S.; Maldague, X.

    2004-12-01

    Infrared thermography in nondestructive testing provides images (thermograms) in which zones of interest (defects) sometimes appear only as subtle signatures. In this context, raw images are often not adequate, since most defects would be missed. In other cases, a quantitative analysis is needed, for example for defect detection and characterization. In this paper, various methods of data analysis required for preprocessing and/or processing of thermographic images are presented. References from the literature are provided for known methods, which are briefly discussed, while novel methods are elaborated in more detail, and experimental results are included.

  20. Towards real-time wavefront sensorless adaptive optics using a graphical processing unit (GPU) in a line scanning system

    NASA Astrophysics Data System (ADS)

    Biss, David P.; Patel, Ankit H.; Ferguson, R. Daniel; Mujat, Mircea; Iftimia, Nicusor; Hammer, Daniel X.

    2011-03-01

    Adaptive optics ophthalmic imaging systems that rely on a standalone wave-front sensor can be costly to build and difficult for non-technical personnel to operate. As an alternative we present a simplified wavefront sensorless adaptive optics laser scanning ophthalmoscope. This sensorless system is based on deterministic search algorithms that utilize the image's spatial frequency as an optimization metric. We implement this algorithm on a NVIDIA video card to take advantage of the graphics processing unit (GPU)'s parallel architecture to reduce algorithm computation times and approach real-time correction.
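
    A rough sketch of the idea, under stated assumptions: an image-sharpness metric built from high spatial frequencies and a deterministic one-dimensional search over a single mirror-mode amplitude. apply_mode() and grab_frame() are hypothetical hardware stand-ins, and NumPy is used here where the authors' implementation runs on a GPU (a CuPy port would be the natural analogue).

    import numpy as np

    def high_frequency_metric(frame, f_lo=0.1):
        # Fraction of image energy above a normalized spatial frequency f_lo.
        F = np.fft.fftshift(np.fft.fft2(frame))
        ny, nx = frame.shape
        fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny)),
                             np.fft.fftshift(np.fft.fftfreq(nx)), indexing="ij")
        ring = np.hypot(fx, fy) > f_lo
        return float(np.sum(np.abs(F[ring]) ** 2) / np.sum(np.abs(F) ** 2))

    def optimize_mode(apply_mode, grab_frame, amplitudes=np.linspace(-1, 1, 11)):
        # Deterministic 1-D search over one mirror-mode amplitude (hardware
        # access is abstracted behind the two hypothetical callables).
        scores = [0.0] * len(amplitudes)
        for i, a in enumerate(amplitudes):
            apply_mode(a)
            scores[i] = high_frequency_metric(grab_frame())
        best = amplitudes[int(np.argmax(scores))]
        apply_mode(best)                # leave the mirror at the best setting
        return best, scores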

  1. Multiframe adaptive Wiener filter super-resolution with JPEG2000-compressed images

    NASA Astrophysics Data System (ADS)

    Narayanan, Barath Narayanan; Hardie, Russell C.; Balster, Eric J.

    2014-12-01

    Historically, Joint Photographic Experts Group 2000 (JPEG2000) image compression and multiframe super-resolution (SR) image processing techniques have evolved separately. In this paper, we propose and compare novel processing architectures for applying multiframe SR with JPEG2000 compression. We propose a modified adaptive Wiener filter (AWF) SR method and study its performance as JPEG2000 is incorporated in different ways. In particular, we perform compression prior to SR and compare this to compression after SR. We also compare both independent-frame compression and difference-frame compression approaches. We find that some of the SR artifacts that result from compression can be reduced by decreasing the assumed global signal-to-noise ratio (SNR) for the AWF SR method. We also propose a novel spatially adaptive SNR estimate for the AWF designed to compensate for the spatially varying compression artifacts in the input frames. The experimental results include the use of simulated imagery for quantitative analysis. We also include real-video results for subjective analysis.

  2. Adaptive spatial compounding for improving ultrasound images of the epidural space on human subjects

    NASA Astrophysics Data System (ADS)

    Tran, Denis; Hor, King-Wei; Kamani, Allaudin; Lessoway, Vickie; Rohling, Robert N.

    2008-03-01

    Administering epidural anesthesia can be a difficult procedure, especially for inexperienced physicians. The use of ultrasound imaging can help by showing the location of the key surrounding structures: the ligamentum flavum and the lamina of the vertebrae. The anatomical depiction of the interface between ligamentum flavum and epidural space is currently limited by speckle and anisotropic reflection. Previous work on phantoms showed that adaptive spatial compounding with non-rigid registration can improve the depiction of these features. This paper describes the development of an updated compounding algorithm and results from a clinical study. Average-based compounding may obscure anisotropic reflectors that only appear at certain beam angles, so a new median-based compounding technique is developed. In order to reduce the computational cost of the registration process, a linear prediction algorithm is used to reduce the search space for registration. The algorithms are tested on 20 human subjects. Comparisons are made among the reference image plus combinations of different compounding methods, warping and linear prediction. The gradient of the bone surfaces, the Laplacian of the ligamentum flavum, and the SNR and CNR are used to quantitatively assess the visibility of the features in the processed images. The results show a significant improvement in quality when median-based compounding with warping is used to align the set of beam-steered images and combine them. The improvement of the features makes detection of the epidural space easier.
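
    A tiny sketch contrasting average- and median-based compounding of a stack of beam-steered frames that are assumed to be already registered (warped); the paper's non-rigid registration and linear-prediction steps are not shown.

    import numpy as np

    def compound(frames, method="median"):
        stack = np.stack(frames, axis=0)     # shape: (n_angles, rows, cols)
        if method == "mean":
            return stack.mean(axis=0)        # may wash out angle-dependent reflectors
        return np.median(stack, axis=0)      # more robust to per-angle dropouts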

  3. Frequency-shift low-pass filtering and least mean square adaptive filtering for ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Li, Chunyu; Ding, Mingyue; Yuchi, Ming

    2016-04-01

    Ultrasound image quality enhancement is a problem of considerable interest in medical imaging and an ongoing challenge to date. This paper investigates a method based on frequency-shift low-pass filtering (FSLF) and least mean square adaptive filtering (LMSAF) for ultrasound image quality enhancement. FSLF processes the ultrasound signal in the frequency domain, while LMSAF operates in the time domain. First, FSLF shifts the center frequency of the focused signal to zero; the real and imaginary parts of the complex data are then each filtered by a finite impulse response (FIR) low-pass filter, so that the information around the center frequency is retained while undesired components, especially background noise, are removed. Second, LMSAF multiplies the signals by an automatically adjusted weight vector to further suppress noise and artifacts. Through the combination of the two filters, the ultrasound image is expected to have less noise, fewer artifacts, and higher resolution and contrast. The proposed method was verified with the RF data of the CIRS phantom 055A captured by a SonixTouch DAQ system. Experimental results show that background noise and artifacts are efficiently suppressed, the wire target has a higher resolution, and the contrast ratio (CR) is enhanced by about 12 dB to 15 dB at different image depths compared with delay-and-sum (DAS) beamforming.
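
    A minimal sketch of the two stages on a single RF line, under stated assumptions: shift the focused signal to baseband and FIR low-pass filter it (the FSLF step), then a conventional LMS weight update driven by a reference channel. The reference choice, filter order, and step size are illustrative, not the authors' settings.

    import numpy as np
    from scipy.signal import firwin, lfilter

    def fslf(rf_line, fc, fs, bw=0.5e6, ntaps=64):
        # Shift the centre frequency fc to zero, then FIR low-pass filter.
        t = np.arange(rf_line.size) / fs
        baseband = rf_line * np.exp(-2j * np.pi * fc * t)
        taps = firwin(ntaps, bw, fs=fs)
        return lfilter(taps, 1.0, baseband.real) + 1j * lfilter(taps, 1.0, baseband.imag)

    def lms(x, d, order=8, mu=0.01):
        # Conventional LMS: adapt weights w so that w.x tracks the reference d.
        w = np.zeros(order)
        y = np.zeros(len(d))
        for n in range(order, len(x)):
            xv = x[n - order:n][::-1]
            y[n] = w @ xv
            e = d[n] - y[n]          # error drives the weight update
            w += mu * e * xv
        return y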

  4. Quality evaluation of adaptive optical image based on DCT and Rényi entropy

    NASA Astrophysics Data System (ADS)

    Xu, Yuannan; Li, Junwei; Wang, Jing; Deng, Rong; Dong, Yanbing

    2015-04-01

    Adaptive optical telescopes play an increasingly important role in ground-based detection systems, and the volume of adaptive optical images is so large that a suitable quality evaluation method is needed to select good-quality images automatically and save manual effort. Adaptive optical images are no-reference images. In this paper, a new logarithmic evaluation method based on the discrete cosine transform (DCT) and Rényi entropy is proposed for adaptive optical images. Using a one- or two-dimensional DCT window, the statistical properties of the Rényi entropy of images are studied. Directional Rényi entropy maps of an input image, each containing different information content, are obtained, and the mean values of the different directional maps are calculated. For image quality evaluation, the directional Rényi entropy and its standard deviation over the region of interest are selected as indicators of the anisotropy of the image, and the standard deviation of the directional Rényi entropy is taken as the quality evaluation value for the adaptive optical image. Experimental results show that the quality ranking produced by the proposed method matches well with visual inspection.
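
    A simplified sketch of the kind of no-reference score described above: blockwise 2-D DCT, Rényi entropy of the normalized coefficient energies, and the standard deviation of the per-block entropies as the quality value. The paper's directional entropy maps are not reproduced; the entropy order alpha and the block size are illustrative.

    import numpy as np
    from scipy.fftpack import dct

    def block_dct2(block):
        return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

    def renyi_entropy(p, alpha=3.0):
        p = p / (p.sum() + 1e-12)
        return np.log2(np.sum(p ** alpha) + 1e-12) / (1.0 - alpha)

    def quality_score(img, block=16, alpha=3.0):
        h = (img.shape[0] // block) * block
        w = (img.shape[1] // block) * block
        entropies = []
        for i in range(0, h, block):
            for j in range(0, w, block):
                coeffs = block_dct2(img[i:i + block, j:j + block].astype(float))
                entropies.append(renyi_entropy(np.abs(coeffs) ** 2, alpha))
        # Larger spread of per-block entropies ~ more anisotropic image detail.
        return float(np.std(entropies))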

  5. Industrial Holography Combined With Image Processing

    NASA Astrophysics Data System (ADS)

    Schorner, J.; Rottenkolber, H.; Roid, W.; Hinsch, K.

    1988-01-01

    Holographic test methods have become a valuable tool for the engineer in research and development. In the field of non-destructive quality control, holographic test equipment is now also accepted for tests within the production line. Producers of aircraft tyres, for example, use holographic tests to back the guarantees on their tyres. Together with image processing, the whole test cycle is automated: defects within the tyre are found automatically and listed on a printout. The power engine industry uses holographic vibration tests to optimize its designs. In the plastics industry, tanks, wheels, seats and fans are tested holographically to find the optimum shape. The automotive industry makes holography a tool for noise reduction. Instant holography and image processing techniques for quantitative analysis have led to an economic application of holographic test methods. New developments of holographic units in combination with image processing are presented.

  6. DSP based image processing for retinal prosthesis.

    PubMed

    Parikh, Neha J; Weiland, James D; Humayun, Mark S; Shah, Saloni S; Mohile, Gaurav S

    2004-01-01

    Real-time image processing in a retinal prosthesis consists of the implementation of various image processing algorithms such as edge detection, edge enhancement, and decimation. These computations may have a high level of complexity in real time, and hence the use of digital signal processors (DSPs) for the implementation of such algorithms is proposed here. This application requires that the DSPs be highly computationally efficient while operating at low power. DSPs offer computational capabilities of hundreds of millions of instructions per second (MIPS) or millions of floating point operations per second (MFLOPS), with certain processor configurations having low power consumption. The various image processing algorithms, the DSP requirements, and the capabilities of different platforms are discussed in this paper. PMID:17271974

  7. Three-dimensional image signals: processing methods

    NASA Astrophysics Data System (ADS)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. Considerable processing power and memory are needed to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms", i.e., holograms that can be stored on a computer and transmitted over conventional networks. We present some methods for processing digital holograms for Internet transmission, along with results.

  8. Interaction of image noise, spatial resolution, and low contrast fine detail preservation in digital image processing

    NASA Astrophysics Data System (ADS)

    Artmann, Uwe; Wueller, Dietmar

    2009-01-01

    We present a method to improve the validity of noise and resolution measurements on digital cameras. If non-linear adaptive noise reduction is part of the signal processing in the camera, the measurement results for image noise and spatial resolution can be good, while the image quality is low due to the loss of fine details and a watercolor-like appearance of the image. To improve the correlation between objective measurement and subjective image quality, we propose to supplement the standard test methods with an additional measurement of the texture-preserving capabilities of the camera. The proposed method uses a test target showing white Gaussian noise. The camera under test reproduces this target and the image is analyzed. We propose to use the kurtosis of the derivative of the image as a metric for the texture preservation of the camera. Kurtosis is a statistical measure for the closeness of a distribution compared to the Gaussian distribution. It can be shown that the distribution of digital values in the derivative of the image showing the chart becomes more leptokurtic (increased kurtosis) the stronger the impact of the noise reduction on the image.
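
    The metric itself is compact enough to state directly; a minimal sketch, assuming the camera's reproduction of the noise chart is available as an array:

    import numpy as np
    from scipy.stats import kurtosis

    def texture_loss_metric(reproduced_chart):
        # Horizontal first derivative of the camera's reproduction of the noise target.
        dx = np.diff(reproduced_chart.astype(float), axis=1).ravel()
        # Fisher definition: 0 for a Gaussian; higher values are more leptokurtic,
        # indicating stronger texture loss from adaptive noise reduction.
        return float(kurtosis(dx, fisher=True))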

  9. Digital image database processing to simulate image formation in ideal lighting conditions of the human eye

    NASA Astrophysics Data System (ADS)

    Castañeda-Santos, Jessica; Santiago-Alvarado, Agustin; Cruz-Félix, Angel S.; Hernández-Méndez, Arturo

    2015-09-01

    The pupil size of the human eye has a large effect on image quality due to inherent aberrations. Several studies have been performed to calculate its size relative to luminance, also considering other factors, i.e., age, size of the adapting field, and monocular versus binocular vision. Moreover, ideal lighting conditions are known, but software suited to our specific requirements of low cost and low computational consumption, for simulating radiation adaptation and image formation in the retina under ideal lighting conditions, has not yet been developed. In this work, a database is created consisting of 70 photographs of the same scene with a fixed target at different times of the day. Using this database, characteristics of the photographs are obtained by measuring the average initial luminance threshold value of each photograph by means of an image histogram. We also present the implementation of a digital filter for both image processing on the threshold values of our database and generating output images with the threshold values reported for the human eye in ideal cases. This kind of filter has potential applications in artificial vision systems.

  10. Support Routines for In Situ Image Processing

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect to these programs is that they are integrated into the large, in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with space in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1)marscahv: Generates a linearized, epi-polar aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlations, (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground, (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images like long-baseline or those from an articulating arm camera, (4) marscoordtrans: Translates mosaic coordinates from one form into another, (5) marsdispcompare: Checks a Left Right stereo disparity image against a Right Left disparity image to ensure they are consistent with each other, (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite- eye image. For example, a right eye image could be transformed to look like it was taken from the left eye via this program, (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markets are small targets attached to the spacecraft surface. This helps verify, or improve, the

  11. Spaceborne multiview image compression based on adaptive disparity compensation with rate-distortion optimization

    NASA Astrophysics Data System (ADS)

    Li, Shigao; Su, Kehua; Jia, Liming

    2016-01-01

    Disparity compensation (DC) and transform coding are incorporated into a hybrid coding to reduce the code-rate of multiview images. However, occlusion and inaccurate disparity estimations (DE) impair the performance of DC, especially in spaceborne images. This paper proposes an adaptive disparity-compensation scheme for the compression of spaceborne multiview images, including stereo image pairs and three-line-scanner images. DC with adaptive loop filter is used to remove redundancy between reference images and target images and a wavelet-based coding method is used to encode reference images and residue images. In occlusion regions, the DC efficiency may be poor because no interview correlation exists. A rate-distortion optimization method is thus designed to select the best prediction mode for local regions. Experimental results show that the proposed scheme can provide significant coding gain compared with some other similar coding schemes, and the time complexity is also competitive.
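
    A rough sketch of the per-region rate-distortion mode decision described above, using the usual Lagrangian cost J = D + lambda*R; the rate terms here are crude stand-ins for the actual wavelet coder's bit counts, and the lambda value is illustrative.

    import numpy as np

    def rd_mode_decision(target_block, dc_prediction, lam=10.0):
        def cost(residual, header_bits):
            distortion = float(np.sum(residual.astype(float) ** 2))      # SSD
            rate = header_bits + np.count_nonzero(np.round(residual))    # proxy bit count
            return distortion + lam * rate
        j_dc = cost(target_block - dc_prediction, header_bits=8)           # DC mode
        j_intra = cost(target_block - target_block.mean(), header_bits=2)  # intra fallback
        return ("disparity_compensated", j_dc) if j_dc <= j_intra else ("intra", j_intra)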

  12. Altered Visual Adaptation to Body Shape in Eating Disorders: Implications for Body Image Distortion.

    PubMed

    Mohr, Harald M; Rickmeyer, Constanze; Hummel, Dennis; Ernst, Mareike; Grabhorn, Ralph

    2016-07-01

    Previous research has shown that after adapting to a thin body, healthy participants (HP) perceive pictures of their own bodies as being fatter and vice versa. This aftereffect might contribute to the development of perceptual body image disturbances in eating disorders (ED). In the present study, HP and ED completed a behavioral experiment to rate manipulated pictures of their own bodies after adaptation to thin or fat body pictures. After adapting to a thin body, HP judged a thinner than actual body picture to be the most realistic and vice versa, resembling a typical aftereffect. ED only showed such an adaptation effect when they adapted to fat body pictures. The reported results indicate a relationship between body image distortion in ED and visual body image adaptation. It can be suspected that due to a pre-existing, long-lasting adaptation to thin body shapes in ED, an additional visual adaptation to thin body shapes cannot be induced. Hence, this pre-existing adaptation to thin body shapes could induce perceptual body image distortions in ED. PMID:26921409

  13. Processing infrared images of aircraft lapjoints

    NASA Technical Reports Server (NTRS)

    Syed, Hazari; Winfree, William P.; Cramer, K. E.

    1992-01-01

    Techniques for processing IR images of aging aircraft lapjoint data are discussed. Attention is given to a technique for detecting disbonds in aircraft lapjoints which clearly delineates the disbonded region from the bonded regions. The technique is weak on unpainted aircraft skin surfaces, but this limitation can be overcome by using a self-adhering contact sheet. Neural network analysis on raw temperature data has been shown to be an effective tool for visualization of images. Numerical simulation results show the above processing technique to be an effective tool in delineating the disbonds.

  14. Results of precision processing (scene correction) of ERTS-1 images using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Bernstein, R.

    1973-01-01

    ERTS-1 MSS and RBV data recorded on computer compatible tapes have been analyzed and processed, and preliminary results have been obtained. No degradation of intensity (radiance) information occurred in implementing the geometric correction. The quality and resolution of the digitally processed images are very good, due primarily to the fact that the number of film generations and conversions is reduced to a minimum. Processing times of digitally processed images are about equivalent to the NDPF electro-optical processor.

  15. Adapting the transtheoretical model of change to the bereavement process.

    PubMed

    Calderwood, Kimberly A

    2011-04-01

    Theorists currently believe that bereaved people undergo some transformation of self rather than returning to their original state. To advance our understanding of this process, this article presents an adaptation of Prochaska and DiClemente's transtheoretical model of change as it could be applied to the journey that bereaved individuals experience. This theory is unique because it addresses attitudes, intentions, and behavioral processes at each stage; it allows for a focus on a broader range of emotions than just anger and depression; it allows for the recognition of two periods of regression during the bereavement process; and it adds a maintenance stage, which other theories lack. This theory can benefit bereaved individuals directly and through the increased awareness among counselors, family, friends, employers, and society at large. This theory may also be used as a tool for bereavement programs to consider whether they are meeting clients' needs throughout the transformative change of the bereavement process rather than only focusing on the initial stages characterized by intense emotion. PMID:21553574

  16. FLIPS: Friendly Lisp Image Processing System

    NASA Astrophysics Data System (ADS)

    Gee, Shirley J.

    1991-08-01

    The Friendly Lisp Image Processing System (FLIPS) is the interface to Advanced Target Detection (ATD), a multi-resolutional image analysis system developed by Hughes in conjunction with the Hughes Research Laboratories. Both menu- and graphics-driven, FLIPS enhances system usability by supporting the interactive nature of research and development. Although much progress has been made, fully automated image understanding technology that is both robust and reliable is not a reality. In situations where highly accurate results are required, skilled human analysts must still verify the findings of these systems. Furthermore, the systems often require processing times several orders of magnitude greater than that needed by veteran personnel to analyze the same image. The purpose of FLIPS is to facilitate the ability of an image analyst to take statistical measurements on digital imagery in a timely fashion, a capability critical in research environments where a large percentage of time is expended in algorithm development. In many cases, this entails minor modifications or code tinkering. Without a well-developed man-machine interface, throughput is unduly constricted. FLIPS provides mechanisms which support rapid prototyping for ATD. This paper examines the ATD/FLIPS system. The philosophy of ATD in addressing image understanding problems is described, and the capabilities of FLIPS are discussed, along with a description of the interaction between ATD and FLIPS. Finally, an overview of current plans for the system is outlined.

  17. PYNPOINT: an image processing package for finding exoplanets

    NASA Astrophysics Data System (ADS)

    Amara, Adam; Quanz, Sascha P.

    2012-12-01

    We present the scientific performance results of PYNPOINT, our Python-based software package that uses principal component analysis to detect and estimate the flux of exoplanets in two-dimensional imaging data. Recent advances in adaptive optics and imaging technology at visible and infrared wavelengths have opened the door to direct detections of planetary companions to nearby stars, but image processing techniques have yet to be optimized. We show that the performance of our approach gives a marked improvement over what is presently possible using existing methods such as LOCI. To test our approach, we use real angular differential imaging (ADI) data taken with the adaptive optics-assisted high resolution near-infrared camera NACO at the VLT. These data were taken during the commissioning of the apodizing phase plate (APP) coronagraph. By inserting simulated planets into these data, we test the performance of our method as a function of planet brightness for different positions on the image. We find that in all cases PYNPOINT has a detection threshold that is superior to that given by our LOCI analysis when assessed in a common statistical framework. We obtain our best improvements for smaller inner working angles (IWAs). For an IWA of ~0.29 arcsec we find that we achieve a detection sensitivity that is a factor of 5 better than LOCI. We also investigate our ability to correctly measure the flux of planets. Again, we find improvements over LOCI, with PYNPOINT giving more stable results. Finally, we apply our package to a non-APP data set of the exoplanet β Pictoris b and reveal the planet with high signal-to-noise. This confirms that PYNPOINT can potentially be applied with high fidelity to a wide range of high-contrast imaging data sets.
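
    A sketch of PCA-based PSF subtraction on an ADI stack, the core idea behind PYNPOINT (this is not the package's own API): project each frame onto the leading principal components of the stack, subtract that low-rank reconstruction, derotate by the parallactic angle, and median-combine the residuals.

    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.ndimage import rotate

    def pca_psf_subtraction(frames, parallactic_angles, n_components=10):
        n, ny, nx = frames.shape
        X = frames.reshape(n, ny * nx).astype(float)
        mean = X.mean(axis=0)
        pca = PCA(n_components=min(n_components, n))
        scores = pca.fit_transform(X - mean)             # quasi-static speckle basis
        psf_model = pca.inverse_transform(scores) + mean
        residuals = (X - psf_model).reshape(n, ny, nx)
        derotated = [rotate(r, -ang, reshape=False, order=1)
                     for r, ang in zip(residuals, parallactic_angles)]
        return np.median(derotated, axis=0)              # planet signal stacks up here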

  18. Product review: lucis image processing software.

    PubMed

    Johnson, J E

    1999-04-01

    Lucis is a software program that allows the manipulation of images through the process of selective contrast pattern emphasis. Using an image-processing algorithm called Differential Hysteresis Processing (DHP), Lucis extracts and highlights patterns based on variations in image intensity (luminance). The result is that details can be seen that would otherwise be hidden in deep shadow or excessive brightness. The software is contained on a single floppy disk, is easy to install on a PC, simple to use, and runs on Windows 95, Windows 98, and Windows NT operating systems. The cost is $8,500 for a license, but is estimated to save a great deal of money in photographic materials, time, and labor that would have otherwise been spent in the darkroom. Superb images are easily obtained from unstained (no lead or uranium) sections, and stored image files sent to laser printers are of publication quality. The software can be used not only for all types of microscopy, including color fluorescence light microscopy, biological and materials science electron microscopy (TEM and SEM), but will be beneficial in medicine, such as X-ray films (pending approval by the FDA), and in the arts. PMID:10206154

  19. Processing Images of Craters for Spacecraft Navigation

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

    2009-01-01

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
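
    A rough sketch of steps 1-4 using OpenCV primitives (edge detection, contour grouping, ellipse fitting); the rim-selection heuristics, sub-pixel refinement, and quality scoring of the actual algorithm are not reproduced, and the Canny thresholds are illustrative.

    import cv2

    def detect_candidate_craters(gray_image, min_points=20):
        # Expects an 8-bit grayscale image; thresholds are illustrative.
        edges = cv2.Canny(gray_image, 50, 150)                    # step 1: edge map
        contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                       cv2.CHAIN_APPROX_NONE)     # steps 2-3: grouping
        ellipses = []
        for c in contours:
            if len(c) >= max(min_points, 5):                      # fitEllipse needs >= 5 points
                ellipses.append(cv2.fitEllipse(c))                # step 4: ellipse fit
        return ellipses   # each entry: ((cx, cy), (major, minor), angle)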

  20. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  1. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, Kevin L.; Baum, Christopher C.; Jones, Roger D.

    1997-01-01

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data.

  2. Feedback regulation of microscopes by image processing.

    PubMed

    Tsukada, Yuki; Hashimoto, Koichi

    2013-05-01

    Computational microscope systems are becoming a major part of imaging biological phenomena, and the development of such systems requires the design of automated regulation of microscopes. An important aspect of automated regulation is feedback regulation, which is the focus of this review. As modern microscope systems become more complex, often with many independent components that must work together, computer control is inevitable since the exact orchestration of parameters and timings for these multiple components is critical to acquire proper images. A number of techniques have been developed for biological imaging to accomplish this. Here, we summarize the basics of computational microscopy for the purpose of building automatically regulated microscopes, with a focus on feedback regulation by image processing. These techniques allow high throughput data acquisition while monitoring both short- and long-term dynamic phenomena, which cannot be achieved without an automated system. PMID:23594233

  3. FITSH: Software Package for Image Processing

    NASA Astrophysics Data System (ADS)

    Pál, András

    2011-11-01

    FITSH provides a standalone environment for analysis of data acquired by imaging astronomical detectors. The package provides utilities both for the full pipeline of subsequent related data processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple image combinations, spatial transformations and interpolations, etc.) and for aiding the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The utilities in the package are built on top of the commonly used UNIX/POSIX shells (hence the name of the package), therefore both frequently used and well-documented tools for such environments can be exploited and managing massive amounts of data is rather convenient.

  4. Simplified labeling process for medical image segmentation.

    PubMed

    Gao, Mingchen; Huang, Junzhou; Huang, Xiaolei; Zhang, Shaoting; Metaxas, Dimitris N

    2012-01-01

    Image segmentation plays a crucial role in many medical imaging applications by automatically locating the regions of interest. Typically, supervised learning-based segmentation methods require a large set of accurately labeled training data. However, the labeling process is tedious, time consuming and sometimes not necessary. We propose a robust logistic regression algorithm to handle label outliers such that doctors do not need to waste time on precisely labeling images for the training set. To validate its effectiveness and efficiency, we conduct carefully designed experiments on cervigram image segmentation in the presence of label outliers. Experimental results show that the proposed robust logistic regression algorithms achieve superior performance compared to previous methods, which validates the benefits of the proposed algorithms. PMID:23286072

  5. MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING

    PubMed Central

    ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN

    2013-01-01

    In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963

  6. Enhanced neutron imaging detector using optical processing

    SciTech Connect

    Hutchinson, D.P.; McElhaney, S.A.

    1992-08-01

    Existing neutron imaging detectors have limited count rates due to inherent property and electronic limitations. The popular multiwire proportional counter is limited by gas recombination to a count rate of less than 10^5 n/s over the entire array, and the neutron Anger camera, even though improved with new fiber optic encoding methods, can only achieve 10^6 cps over a limited array. We present a preliminary design for a new type of neutron imaging detector with a resolution of 2-5 mm and a count rate capability of 10^6 cps per pixel element. We propose to combine optical and electronic processing to economically increase the throughput of advanced detector systems while simplifying computing requirements. By placing a scintillator screen ahead of an optical image processor followed by a detector array, a high throughput imaging detector may be constructed.

  7. Cytopathology whole slide images and adaptive tutorials for postgraduate pathology trainees: a randomized crossover trial.

    PubMed

    Van Es, Simone L; Kumar, Rakesh K; Pryor, Wendy M; Salisbury, Elizabeth L; Velan, Gary M

    2015-09-01

    To determine whether cytopathology whole slide images and virtual microscopy adaptive tutorials aid learning by postgraduate trainees, we designed a randomized crossover trial to evaluate the quantitative and qualitative impact of whole slide images and virtual microscopy adaptive tutorials compared with traditional glass slide and textbook methods of learning cytopathology. Forty-three anatomical pathology registrars were recruited from Australia, New Zealand, and Malaysia. Online assessments were used to determine efficacy, whereas user experience and perceptions of efficiency were evaluated using online Likert scales and open-ended questions. Outcomes of online assessments indicated that, with respect to performance, learning with whole slide images and virtual microscopy adaptive tutorials was equivalent to using traditional methods. High-impact learning, efficiency, and equity of learning from virtual microscopy adaptive tutorials were strong themes identified in open-ended responses. Participants raised concern about the lack of z-axis capability in the cytopathology whole slide images, suggesting that delivery of z-stacked whole slide images online may be important for future educational development. In this trial, learning cytopathology with whole slide images and virtual microscopy adaptive tutorials was found to be as effective as and perceived as more efficient than learning from glass slides and textbooks. The use of whole slide images and virtual microscopy adaptive tutorials has the potential to provide equitable access to effective learning from teaching material of consistently high quality. It also has broader implications for continuing professional development and maintenance of competence and quality assurance in specialist practice. PMID:26093936

  8. Mariner 9 - Image processing and products.

    NASA Technical Reports Server (NTRS)

    Levinthal, E. C.; Green, W. B.; Cutts, J. A.; Jahelka, E. D.; Johansen, R. A.; Sander, M. J.; Seidman, J. B.; Young, A. T.; Soderblom, L. A.

    1973-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis.

  9. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. Although patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission and document usage. The DocMorph Server Web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be considered for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  10. The adaptive-loop-gain adaptive-scale CLEAN deconvolution of radio interferometric images

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Zhang, M.; Liu, X.

    2016-05-01

    CLEAN algorithms are a class of deconvolution solvers which are widely used to remove the effect of the telescope Point Spread Function (PSF). Loop gain is one important parameter in CLEAN algorithms. Currently the parameter is fixed during deconvolution, which restricts the performance of CLEAN algorithms. In this paper, we propose a new deconvolution algorithm with an adaptive loop gain scheme, which is referred to as the adaptive-loop-gain adaptive-scale CLEAN (Algas-Clean) algorithm. The test results show that the new algorithm can give a more accurate model with faster convergence.
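
    A minimal Hogbom-style CLEAN loop with a per-iteration loop gain, to make the role of the gain concrete. The adaptation rule used here (gain grows with the peak-to-RMS ratio of the residual, capped at 0.3) is purely illustrative and is not the Algas-Clean scheme, which also adapts component scales.

    import numpy as np

    def adaptive_gain_clean(dirty, psf, n_iter=200, threshold=0.0):
        residual = dirty.astype(float).copy()
        model = np.zeros_like(residual)
        cy, cx = np.unravel_index(np.argmax(psf), psf.shape)     # PSF peak position
        for _ in range(n_iter):
            py, px = np.unravel_index(np.argmax(residual), residual.shape)
            peak = residual[py, px]
            if peak <= threshold:
                break
            # Illustrative adaptive rule: larger gain when the peak stands out
            # clearly above the residual RMS, capped at a conventional 0.3.
            gain = min(0.3, 0.05 * peak / (residual.std() + 1e-12))
            model[py, px] += gain * peak
            # Subtract the shifted, scaled PSF (np.roll wraps at the edges,
            # which is acceptable for a sketch).
            shifted = np.roll(np.roll(psf, py - cy, axis=0), px - cx, axis=1)
            residual -= gain * peak * shifted
        return model, residual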

  11. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.

  12. ADAPT: A knowledge-based synthesis tool for digital signal processing system design

    SciTech Connect

    Cooley, E.S.

    1988-01-01

    A computer aided synthesis tool for expansion, compression, and filtration of digital images is described. ADAPT, the Autonomous Digital Array Programming Tool, uses an extensive design knowledge base to synthesize a digital signal processing (DSP) system. Input to ADAPT can be either a behavioral description in English, or a block level specification via Petri Nets. The output from ADAPT comprises code to implement the DSP system on an array of processors. ADAPT is constructed using C, Prolog, and X Windows on a SUN 3/280 workstation. ADAPT knowledge encompasses DSP component information and the design algorithms and heuristics of a competent DSP designer. The knowledge is used to form queries for design capture, to generate design constraints from the user's responses, and to examine the design constraints. These constraints direct the search for possible DSP components and target architectures. Constraints are also used for partitioning the target systems into less complex subsystems. The subsystems correspond to architectural building blocks of the DSP design. These subsystems inherit design constraints and DSP characteristics from their parent blocks. Thus, a DSP subsystem or parent block, as designed by ADAPT, must meet the user's design constraints. Design solutions are sought by searching the Components section of the design knowledge base. Component behavior which matches or is similar to that required by the DSP subsystems is sought. Each match, which corresponds to a design alternative, is evaluated in terms of its behavior. When a design is sufficiently close to the behavior required by the user, detailed mathematical simulations may be performed to accurately determine exact behavior.

  13. Adaptive Wavefront Calibration and Control for the Gemini Planet Imager

    SciTech Connect

    Poyneer, L A; Veran, J

    2007-02-02

    Quasi-static errors in the science leg and internal AO flexure will be corrected. Wavefront control will adapt to current atmospheric conditions through Fourier modal gain optimization, or the prediction of atmospheric layers with Kalman filtering.

  14. Progressive band processing for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Schultz, Robert C.

    Hyperspectral imaging has emerged as an important image processing technique in many applications. Hyperspectral data is so named mainly because of the massive amount of information provided by the hundreds of spectral bands that can be used for data analysis. However, due to very high band-to-band correlation, much of this information may also be redundant. Consequently, how to effectively and best utilize such rich spectral information becomes very challenging. One general approach is data dimensionality reduction, which can be performed by data compression techniques, such as data transforms, and data reduction techniques, such as band selection. This dissertation presents a new area in hyperspectral imaging, to be called progressive hyperspectral imaging, which has not been explored in the past. Specifically, it derives a new theory, called Progressive Band Processing (PBP), for hyperspectral data that can significantly reduce computing time and can also be realized in real time. It is particularly suited for application areas such as hyperspectral data communications and transmission, where data can be communicated and transmitted progressively through spectral or satellite channels with limited data storage. Most importantly, PBP allows users to screen preliminary results before deciding to continue with processing the complete data set. These advantages benefit users of hyperspectral data by reducing processing time and increasing the timeliness of crucial decisions made based on the data, such as identifying key intelligence information when the required response time is short.

  15. Stochastic processes, estimation theory and image enhancement

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1978-01-01

    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed. The appendices discuss the remaining mathematical background.

  16. Improving Synthetic Aperture Image by Image Compounding in Beamforming Process

    NASA Astrophysics Data System (ADS)

    Martínez-Graullera, Oscar; Higuti, Ricardo T.; Martín, Carlos J.; Ullate, Luis. G.; Romero, David; Parrilla, Montserrat

    2011-06-01

    In this work, signal processing techniques are used to improve the quality of images based on multi-element synthetic aperture techniques. Several apodization functions are used to obtain different side-lobe distributions, and a polarity function and a threshold criterion are then applied to develop an image compounding technique. Spatial diversity is increased using an additional array, which generates complementary information about the defects, improving the results of the proposed algorithm and producing images with high resolution and contrast. The inspection of isotropic plate-like structures using linear arrays and Lamb waves is presented. Experimental results are shown for a 1-mm-thick isotropic aluminum plate with artificial defects, using linear arrays formed by 30 piezoelectric elements and the low-dispersion symmetric mode S0 at a frequency of 330 kHz.
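    The compounding step itself (leaving the beamforming aside) can be sketched as follows; the sign-agreement test and minimum-magnitude rule are one plausible reading of the polarity function and threshold criterion, and the data are synthetic.

      # Hypothetical sketch of compounding co-registered images formed with
      # different apodization windows.  Main-lobe responses keep a consistent
      # polarity across apodizations while side lobes generally do not, so
      # disagreeing pixels are zeroed and the minimum magnitude is kept elsewhere.
      import numpy as np

      def compound(images, threshold=0.0):
          """Combine images obtained with different apodizations, pixel by pixel."""
          stack = np.stack(images)                       # (n_apod, rows, cols)
          same_sign = np.all(np.sign(stack) == np.sign(stack[0]), axis=0)
          min_mag = np.min(np.abs(stack), axis=0)
          strong = min_mag > threshold
          return np.where(same_sign & strong, np.sign(stack[0]) * min_mag, 0.0)

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          target = np.zeros((32, 32))
          target[16, 16] = 1.0
          # Each apodization: same target, different side-lobe pattern (noise here).
          images = [target + 0.2 * rng.normal(size=target.shape) for _ in range(3)]
          result = compound(images, threshold=0.1)
          print("non-zero pixels after compounding:", int(np.count_nonzero(result)))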

  17. Image processing techniques for noise removal, enhancement and segmentation of cartilage OCT images

    NASA Astrophysics Data System (ADS)

    Rogowska, Jadwiga; Brezinski, Mark E.

    2002-02-01

    Osteoarthritis, whose hallmark is the progressive loss of joint cartilage, is a major cause of morbidity worldwide. Recently, optical coherence tomography (OCT) has demonstrated considerable promise for the assessment of articular cartilage. Among the most important parameters to be assessed is cartilage width, and detection of the bone-cartilage interface is critical for that assessment. At present, quantitative evaluation of cartilage thickness is done by manual tracing of cartilage-bone borders. Since OCT data are acquired at near video rate, automated identification of the bone-cartilage interface is essential, and automating boundary detection on OCT images requires new image processing techniques. In this paper we describe image processing techniques for speckle removal, image enhancement, and segmentation of cartilage OCT images. In particular, the paper focuses on rabbit cartilage, since this is an important animal model for testing both chondroprotective agents and cartilage repair techniques. In this study, a variety of techniques were examined. Ultimately, by combining an adaptive filtering technique with edge detection (vertical gradient, Sobel edge detection), cartilage edges can be detected. The procedure requires several steps and can be automated. Once the cartilage edges are outlined, the cartilage thickness can be measured.
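    A minimal sketch of this filtering-plus-gradient pipeline is given below, using a median filter as a stand-in for the adaptive speckle filter and a vertical Sobel operator for edge detection; the synthetic B-scan, window size, and threshold are illustrative only.

      # Hypothetical sketch: speckle reduction followed by vertical-gradient
      # edge detection on a synthetic OCT B-scan.
      import numpy as np
      from scipy import ndimage

      def detect_interface(bscan, smooth_size=5, grad_threshold=0.5):
          """Return, for each A-line (column), the row of the strongest edge."""
          smoothed = ndimage.median_filter(bscan, size=smooth_size)   # speckle reduction
          grad = ndimage.sobel(smoothed, axis=0)                      # vertical gradient
          grad[np.abs(grad) < grad_threshold] = 0.0
          return np.argmax(np.abs(grad), axis=0)

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          depth, width = 128, 200
          bscan = rng.rayleigh(0.2, size=(depth, width))   # speckle-like background
          bscan[60:, :] += 1.0                             # bright layer below interface
          interface_rows = detect_interface(bscan)
          print("median interface depth (pixels):", int(np.median(interface_rows)))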

  18. Limiting liability via high resolution image processing

    SciTech Connect

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    High resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as "evidence ready", even in poor lighting, shadowed conditions, or darkened rooms. Such images, which are most often unusable when taken with standard camera equipment, can be shot in the worst photographic conditions and still be processed into usable evidence. Visualization scientists have applied digital image processing to bring crime scene photography into the technology age. High resolution technology will assist law enforcement in making better use of crime scene photography and in positive identification of prints. Valuable courtroom and investigation time can be saved by this accurate, performance-based process. Inconclusive evidence does not lead to convictions; enhancement addresses a major problem with crime scene photos, namely that images taken with standard equipment and without the benefit of enhancement software would otherwise be inconclusive, allowing guilty parties to go free for lack of evidence.
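    As one simple, generic example of the kind of enhancement involved (not the specific forensic workflow described above), the sketch below applies global histogram equalization to an underexposed 8-bit grayscale image; the input array is synthetic.

      # Hypothetical sketch: global histogram equalization for an underexposed image.
      import numpy as np

      def equalize_histogram(image_u8):
          """Spread the intensity histogram of an 8-bit image over the full range."""
          hist = np.bincount(image_u8.ravel(), minlength=256)
          cdf = hist.cumsum().astype(float)
          cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
          lut = np.round(255 * cdf).astype(np.uint8)
          return lut[image_u8]

      if __name__ == "__main__":
          rng = np.random.default_rng(4)
          dark = rng.integers(0, 60, size=(240, 320), dtype=np.uint8)  # underexposed
          bright = equalize_histogram(dark)
          print("input range:", dark.min(), dark.max(),
                "-> output range:", bright.min(), bright.max())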

  19. An Adaptive Digital Image Watermarking Algorithm Based on Morphological Haar Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Huang, Xiaosheng; Zhao, Sujuan

    At present, most wavelet-based digital watermarking algorithms use linear wavelet transforms, and relatively few use non-linear wavelet transforms. In this paper, we propose an adaptive digital image watermarking algorithm based on a non-linear wavelet transform, the Morphological Haar Wavelet Transform. In the algorithm, the original image and the watermark image are each decomposed with a multi-scale morphological wavelet transform. The watermark information is then adaptively embedded into the original image at different resolutions, exploiting features of the Human Visual System (HVS). Experimental results show that our method is more robust and effective than algorithms based on ordinary (linear) wavelet transforms.
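    A minimal one-dimensional sketch of the morphological Haar idea is given below: a minimum operation provides the approximation, a pairwise difference provides the detail, and watermark bits are added to the detail coefficients. The actual algorithm is two-dimensional, multi-scale, and HVS-adaptive; the host signal and embedding strength here are illustrative.

      # Hypothetical 1-D sketch of a one-level morphological Haar transform
      # with naive additive embedding of watermark bits into the details.
      import numpy as np

      def morph_haar_forward(x):
          even, odd = x[0::2], x[1::2]
          approx = np.minimum(even, odd)          # morphological (erosion) lowpass
          detail = even - odd
          return approx, detail

      def morph_haar_inverse(approx, detail):
          even = approx + np.maximum(detail, 0)
          odd = approx + np.maximum(-detail, 0)
          x = np.empty(2 * approx.size, dtype=approx.dtype)
          x[0::2], x[1::2] = even, odd
          return x

      def embed(signal, bits, strength=2.0):
          approx, detail = morph_haar_forward(signal.astype(float))
          detail[: len(bits)] += strength * (2 * np.asarray(bits) - 1)   # +/- strength
          return morph_haar_inverse(approx, detail)

      if __name__ == "__main__":
          host = np.array([10, 12, 11, 9, 14, 13, 8, 7], dtype=float)
          a, d = morph_haar_forward(host)
          assert np.allclose(morph_haar_inverse(a, d), host)   # perfect reconstruction
          print("watermarked:", embed(host, bits=[1, 0, 1]))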

  20. AVES-IMCO: an adaptive optics visible spectrograph and imager/coronograph for NAOS

    NASA Astrophysics Data System (ADS)

    Beuzit, Jean-Luc; Lagrange, A.-M.; Mouillet, D.; Chauvin, G.; Stadler, E.; Charton, J.; Lacombe, F.; AVES-IMCO Team

    2001-05-01

    The NAOS adaptive optics system will very soon provide diffraction-limited images on the VLT, down to visible wavelengths (0.020 arcseconds at 0.83 micron, for instance). At the moment, the only instrument dedicated to NAOS is the CONICA spectro-imager, operating in the near-infrared from 1 to 5 microns. We are now proposing to ESO, in collaboration with an Italian group, the development of a visible spectrograph/imager/coronograph, AVES-IMCO (Adaptive Optics Visual Echelle Spectrograph and IMager/COronograph). We present here the general concept of the new instrument as well as its expected performance in the different modes.
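    The quoted angular resolution can be checked with a one-line calculation, assuming an 8.2 m VLT unit-telescope aperture and the lambda/D diffraction criterion:

      # Quick consistency check of the quoted 0.020 arcsecond resolution.
      RAD_TO_ARCSEC = 180.0 / 3.141592653589793 * 3600.0

      wavelength_m = 0.83e-6      # 0.83 micron
      aperture_m = 8.2            # VLT unit telescope (assumed)

      theta_arcsec = wavelength_m / aperture_m * RAD_TO_ARCSEC
      print(f"lambda/D = {theta_arcsec:.3f} arcsec")   # ~0.021 arcsec, consistent with 0.020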