Science.gov

Sample records for adaptive image processing

  1. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines

    PubMed Central

    Mikut, Ralf; Reischl, Markus

    2016-01-01

The parametrization of automatic image processing routines is time-consuming if many image processing parameters are involved. An expert can tune parameters sequentially to get desired results, but this may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels are present or images vary in their characteristics due to different acquisition conditions; parameters then need to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is evaluated on a benchmark data set containing increasingly severe image distortions, which enables us to compare different standard image segmentation algorithms in feedback vs. feedforward implementations by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates the robustness of different image processing pipelines using a graded data set, which is useful for both end-users and experts. PMID:27764213
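The feedback idea described above can be sketched in a few lines: treat one segmentation parameter as the controlled quantity and a measured property of the segmentation result as the feedback signal. Everything here (the foreground-fraction criterion, the proportional gain, the synthetic image) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def feedback_threshold(image, target_fraction, steps=50, gain=0.5):
    """Adapt a segmentation threshold by feedback: measure the foreground
    fraction of the current result and apply a proportional correction.
    The quality criterion here is a placeholder, not the paper's."""
    t = float(image.mean())                 # initial parameter guess
    scale = float(np.ptp(image))            # intensity range for step scaling
    for _ in range(steps):
        frac = float((image > t).mean())    # feedback signal from the result
        t += gain * (frac - target_fraction) * scale
    return t

rng = np.random.default_rng(0)
img = rng.normal(0.5, 0.1, (64, 64))
t = feedback_threshold(img, target_fraction=0.25)
```

The loop converges when the measured foreground fraction matches the target, without the expert tuning the threshold by hand.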

  2. An adaptive image segmentation process for the classification of lung biopsy images

    NASA Astrophysics Data System (ADS)

    McKee, Daniel W.; Land, Walker H., Jr.; Zhukov, Tatyana; Song, Dansheng; Qian, Wei

    2006-03-01

The purpose of this study was to develop a computer-based second opinion diagnostic tool that could read microscope images of lung tissue and classify the tissue sample as normal or cancerous. This problem can be broken down into three areas: segmentation, feature extraction and measurement, and classification. We introduce a kernel-based extension of fuzzy c-means to provide a coarse initial segmentation, with heuristically-based mechanisms to improve the accuracy of the segmentation. The segmented image is then processed to extract and quantify features. Finally, the measured features are used by a Support Vector Machine (SVM) to classify the tissue sample. The performance of this approach was tested using a database of 85 images collected at the Moffitt Cancer Center and Research Institute. These images represent a wide variety of normal lung tissue samples, as well as multiple types of lung cancer. When used with a subset of the data containing images from the normal and adenocarcinoma classes, we were able to correctly classify 78% of the images, with an ROC A_z of 0.758.
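The coarse segmentation stage builds on fuzzy c-means; a minimal plain (non-kernel) FCM on a 1-D feature vector is sketched below. The paper's kernel-based extension replaces the Euclidean distance with a kernel-induced one; the toy data and parameters are assumptions for illustration:

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=50):
    """Plain fuzzy c-means on 1-D features: alternate fuzzy-membership
    updates and weighted centroid updates until the partition stabilizes."""
    rng = np.random.default_rng(1)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                        # fuzzy memberships sum to 1
    for _ in range(iters):
        w = u ** m
        centers = (w @ x) / w.sum(axis=1)     # membership-weighted centroids
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))           # closer centers earn more membership
        u /= u.sum(axis=0)
    return centers, u

x = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])  # two intensity groups
centers, u = fuzzy_cmeans(x)
```

Each pixel keeps a graded membership in every class, which is what the heuristic refinement stages can then exploit.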

  3. Adapted waveform analysis, wavelet packets, and local cosine libraries as a tool for image processing

    NASA Astrophysics Data System (ADS)

    Coifman, Ronald R.; Woog, Lionel J.

    1995-09-01

Adapted waveform analysis refers to a collection of FFT-like adapted transform algorithms. Given an image, these methods provide specially matched collections of templates (orthonormal bases) enabling an efficient coding of the image. Perhaps the closest well-known example of such a coding method is musical notation, where each segment of music is represented by a score made up of notes (templates) characterised by their duration, pitch, location and amplitude; our method corresponds to transcribing the music in as few notes as possible. The extension to images and video is straightforward: we describe the image by collections of oscillatory patterns (paint-brush strokes) of various sizes, locations and amplitudes using a variety of orthogonal bases. These basis functions are chosen from predefined libraries of localized oscillatory functions (trigonometric and wavelet-packet waveforms) so as to minimize the number of parameters needed to describe the object. The algorithms are of complexity N log N, opening the door to a large range of applications in signal and image processing, such as compression, feature extraction, denoising and enhancement. In particular, we describe a class of special-purpose compressions for fingerprint images, as well as denoising tools for texture and noise extraction. We start by relating traditional Fourier methods to wavelet and wavelet-packet based algorithms using a recent refinement of the windowed sine and cosine transforms. We then derive an adapted local sine transform, show its relation to wavelet and wavelet-packet analysis, and describe an analysis toolkit illustrating the merits of different adaptive and nonadaptive schemes.
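The "as few notes as possible" idea can be illustrated with the simplest orthonormal wavelet basis, the Haar system: transform, keep only the largest coefficients, and invert. The Haar choice and the test signal are assumptions; the paper's libraries contain far richer wavelet-packet and local cosine templates:

```python
import numpy as np

def haar(v):
    """Full orthonormal Haar decomposition of a length-2^k signal."""
    coeffs = []
    while v.size > 1:
        s = (v[0::2] + v[1::2]) / np.sqrt(2)   # smooth (low-pass) part
        d = (v[0::2] - v[1::2]) / np.sqrt(2)   # detail (high-pass) part
        coeffs.append(d)
        v = s
    coeffs.append(v)
    return coeffs

def ihaar(coeffs):
    """Invert the Haar decomposition exactly."""
    v = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * v.size)
        out[0::2] = (v + d) / np.sqrt(2)
        out[1::2] = (v - d) / np.sqrt(2)
        v = out
    return v

t = np.linspace(0, 1, 64)
sig = np.sin(2 * np.pi * t)
coeffs = haar(sig)
flat = np.concatenate(coeffs)
thresh = np.sort(np.abs(flat))[-8]           # keep only the 8 largest "notes"
sparse, i = [], 0
for c in coeffs:
    sparse.append(np.where(np.abs(c) >= thresh, c, 0.0))
    i += c.size
approx = ihaar(sparse)
```

Eight retained coefficients out of 64 already give a recognizable reconstruction, which is the compression payoff of a well-matched basis.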

  4. An adaptive threshold based image processing technique for improved glaucoma detection and classification.

    PubMed

    Issac, Ashish; Partha Sarathi, M; Dutta, Malay Kishore

    2015-11-01

Glaucoma is an optic neuropathy which is one of the main causes of permanent blindness worldwide. This paper presents an automatic image processing based method for detection of glaucoma from digital fundus images. In the proposed work, discriminatory parameters of glaucoma infection, such as the cup to disc ratio (CDR), neuro-retinal rim (NRR) area and blood vessels in different regions of the optic disc, have been used as features and fed as inputs to learning algorithms for glaucoma diagnosis. These features, which change in a discriminatory way with the occurrence of glaucoma, are strategically used for training the classifiers to improve the accuracy of identification. The segmentation of the optic disc and cup is based on an adaptive threshold of the pixel intensities lying in the optic nerve head region. Unlike existing methods, the proposed algorithm uses an adaptive threshold derived from local features of the fundus image for segmentation of the optic cup and disc, making it invariant to image quality and noise content, which may find wider acceptability. The experimental results indicate that such features are more significant than the statistical or textural features considered in existing works. The proposed work achieves an accuracy of 94.11% with a sensitivity of 100%. A comparison with existing methods indicates that the proposed approach has improved accuracy of glaucoma classification from digital fundus images, which may be considered clinically significant.
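A minimal version of local-statistics-based adaptive thresholding is sketched below; the block-mean criterion and the synthetic shaded image are illustrative assumptions, not the paper's optic nerve head features:

```python
import numpy as np

def adaptive_threshold(img, block=8, offset=0.0):
    """Threshold each pixel against its local block mean, so the decision
    adapts to local illumination rather than one global level."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = img[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = tile > tile.mean() + offset
    return out

# bright disc-like region on unevenly illuminated background
yy, xx = np.mgrid[0:64, 0:64]
shade = xx / 64.0                                   # illumination gradient
disc = (xx - 32) ** 2 + (yy - 32) ** 2 < 100
img = shade + 0.3 * disc
mask = adaptive_threshold(img)
```

A single global threshold would confuse the bright side of the shading gradient with the disc; the local threshold does not.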

  5. Automatic ultrasonic imaging system with adaptive-learning-network signal-processing techniques

    SciTech Connect

    O'Brien, L.J.; Aravanis, N.A.; Gouge, J.R. Jr.; Mucciardi, A.N.; Lemon, D.K.; Skorpik, J.R.

    1982-04-01

    A conventional pulse-echo imaging system has been modified to operate with a linear ultrasonic array and associated digital electronics to collect data from a series of defects fabricated in aircraft quality steel blocks. A thorough analysis of the defect responses recorded with this modified system has shown that considerable improvements over conventional imaging approaches can be obtained in the crucial areas of defect detection and characterization. A combination of advanced signal processing concepts with the Adaptive Learning Network (ALN) methodology forms the basis for these improvements. Use of established signal processing algorithms such as temporal and spatial beam-forming in concert with a sophisticated detector has provided a reliable defect detection scheme which can be implemented in a microprocessor-based system to operate in an automatic mode.
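Temporal beamforming of the kind mentioned above can be sketched as delay-and-sum: advance each element's trace by its steering delay and average, so the focal echo adds coherently while noise averages down. The simulated pulse, delays, and noise level are illustrative assumptions:

```python
import numpy as np

def delay_and_sum(traces, delays):
    """Temporal beamforming: advance each element's trace by its steering
    delay (in samples) and average, so the focal echo adds coherently."""
    out = np.zeros(traces.shape[1])
    for trace, d in zip(traces, delays):
        out += np.roll(trace, -int(d))
    return out / len(traces)

# a Gaussian echo pulse arriving at 8 elements with known per-element delays
rng = np.random.default_rng(0)
pulse = np.exp(-0.5 * ((np.arange(200) - 100) / 3.0) ** 2)
delays = np.arange(8) * 5
traces = np.array([np.roll(pulse, int(d)) for d in delays])
traces += rng.normal(0, 0.3, traces.shape)          # per-element noise
focused = delay_and_sum(traces, delays)
```

Averaging eight aligned traces improves the voltage SNR by roughly the square root of the element count, which is the detection gain the abstract alludes to.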

  6. Multispectral image sharpening using a shift-invariant wavelet transform and adaptive processing of multiresolution edges

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.

    2002-01-01

    Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.
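The pixel-based selection rule can be sketched with a one-level 2-D Haar transform (used here for brevity; the paper uses a shift-invariant DWT): fuse two images by averaging the approximation coefficients and keeping the larger-magnitude detail coefficient at each position. All names below are illustrative:

```python
import numpy as np

def haar2(a):
    """One-level 2-D Haar analysis into approximation and 3 detail subbands."""
    s = (a[:, 0::2] + a[:, 1::2]) / 2
    d = (a[:, 0::2] - a[:, 1::2]) / 2
    return ((s[0::2] + s[1::2]) / 2, (s[0::2] - s[1::2]) / 2,
            (d[0::2] + d[1::2]) / 2, (d[0::2] - d[1::2]) / 2)

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    h2, w2 = ll.shape
    s = np.empty((2 * h2, w2)); d = np.empty((2 * h2, w2))
    s[0::2], s[1::2] = ll + lh, ll - lh
    d[0::2], d[1::2] = hl + hh, hl - hh
    a = np.empty((2 * h2, 2 * w2))
    a[:, 0::2], a[:, 1::2] = s + d, s - d
    return a

def fuse(a, b):
    """Average the approximations; per pixel, keep the larger-magnitude
    detail coefficient (the max-abs selection rule)."""
    ca, cb = haar2(a), haar2(b)
    ll = (ca[0] + cb[0]) / 2
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(ca[1:], cb[1:])]
    return ihaar2(ll, *details)

rng = np.random.default_rng(0)
a, b = rng.random((16, 16)), rng.random((16, 16))
fused = fuse(a, b)
```

With opposite-contrast edges, this max-abs rule is exactly where artifacts arise, which motivates the paper's edge-correlation refinement.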

  7. A NOISE ADAPTIVE FUZZY EQUALIZATION METHOD FOR PROCESSING SOLAR EXTREME ULTRAVIOLET IMAGES

    SciTech Connect

    Druckmueller, M.

    2013-08-15

    A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.

  8. Analysis of adaptive forward-backward diffusion flows with applications in image processing

    NASA Astrophysics Data System (ADS)

    Surya Prasath, V. B.; Urbano, José Miguel; Vorotnikov, Dmitry

    2015-10-01

The nonlinear diffusion model introduced by Perona and Malik (1990 IEEE Trans. Pattern Anal. Mach. Intell. 12 629-39) is well suited to preserve salient edges while restoring noisy images. This model overcomes well-known edge smearing effects of the heat equation by using a gradient dependent diffusion function. Despite providing better denoising results, the analysis of the PM scheme is difficult due to the forward-backward nature of the diffusion flow. We study a related adaptive forward-backward diffusion equation which uses a mollified inverse gradient term engrafted in the diffusion term of a general nonlinear parabolic equation. We prove a series of existence, uniqueness and regularity results for viscosity, weak and dissipative solutions for such forward-backward diffusion flows. In particular, we introduce a novel functional framework for well-posedness of flows of total variation type. A set of synthetic and real image processing examples is used to illustrate the properties and advantages of the proposed adaptive forward-backward diffusion flows.
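A minimal explicit discretization of the Perona-Malik flow shows the behaviour the paper analyses: the edge-stopping function g suppresses diffusion across strong gradients, so flat regions smooth while edges survive. The parameters and test image are illustrative assumptions:

```python
import numpy as np

def perona_malik(u, iters=20, k=0.1, dt=0.2):
    """Explicit Perona-Malik scheme with 4-neighbour fluxes; the diffusivity
    g(|du|) vanishes across strong gradients (dt <= 0.25 keeps it stable)."""
    g = lambda d: np.exp(-(d / k) ** 2)       # edge-stopping function
    u = u.astype(float).copy()
    for _ in range(iters):
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(0)
step = np.zeros((32, 32)); step[:, 16:] = 1.0        # a salient edge
noisy = step + rng.normal(0, 0.05, step.shape)
smooth = perona_malik(noisy)
```

Small gradients (noise) see g close to 1 and diffuse away; the unit-height edge sees g close to 0 and is preserved.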

  9. Adaptive passive fathometer processing.

    PubMed

    Siderius, Martin; Song, Heechun; Gerstoft, Peter; Hodgkiss, William S; Hursky, Paul; Harrison, Chris

    2010-04-01

Recently, a technique has been developed to image seabed layers using the ocean ambient noise field as the sound source. This so-called passive fathometer technique exploits the naturally occurring acoustic sounds generated at the sea surface, primarily by breaking waves. The method is based on the cross-correlation of noise from the ocean surface with its echo from the seabed, which recovers travel times to significant seabed reflectors. To limit averaging time and make this practical, beamforming is used with a vertical array of hydrophones to reduce interference from horizontally propagating noise. The initial development used conventional beamforming, but significant improvements have been realized using adaptive techniques. In this paper, adaptive methods for this process are described and applied to several data sets to demonstrate the improvements possible compared with conventional processing.
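The core cross-correlation step can be sketched with synthetic data: model the received field as surface noise plus a delayed, attenuated seabed echo, and recover the two-way travel time from the correlation peak. Beamforming and ensemble averaging are omitted for brevity, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
surface = rng.normal(size=n)             # broadband surface-generated noise
delay = 120                              # two-way travel time to a reflector, samples
echo = 0.5 * np.roll(surface, delay)     # seabed return: delayed, attenuated copy
received = surface + echo + rng.normal(0, 0.5, n)

# cross-correlate the surface noise with the received field over positive lags
lags = np.arange(1, 300)
corr = np.array([np.dot(surface, np.roll(received, -int(lag))) for lag in lags])
est = int(lags[np.argmax(corr)])
```

The correlation peak sits at the echo delay because only the delayed copy of the surface noise is coherent with the reference.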

  10. Adaptation Duration Dissociates Category-, Image-, and Person-Specific Processes on Face-Evoked Event-Related Potentials.

    PubMed

    Zimmer, Márta; Zbanţ, Adriana; Németh, Kornél; Kovács, Gyula

    2015-01-01

Several studies have demonstrated that face perception is biased by the prior presentation of another face, a phenomenon termed the face-related after-effect (FAE). The FAE is linked to a neural signal reduction at occipito-temporal areas and can be observed in the amplitude modulation of early event-related potential (ERP) components. Recently, macaque single-cell recording studies suggested that manipulating the duration of the adaptor makes the selective adaptation of different visual motion processing steps possible. To date, however, only a few studies have directly tested the effects of adaptor duration on the electrophysiological correlates of human face processing. The goal of the current study was to test the effect of adaptor duration on the image-, identity-, and generic category-specific face processing steps. To this end, in a two-alternative forced-choice familiarity decision task we used five adaptor durations (ranging from 200 to 5000 ms) and four adaptor categories: adaptor and test were identical images (Repetition Suppression, RS); adaptor and test were different images of the Same Identity (SameID); adaptor and test images depicted Different Identities (DiffID); or the adaptor was a Fourier phase-randomized image (No). Behaviorally, a strong priming effect was observed in both accuracy and response times for RS compared with both DiffID and No. The electrophysiological results suggest that rapid adaptation leads to a category-specific modulation of P100, N170, and N250. In addition, both identity- and image-specific processes affected the N250 component during rapid adaptation. On the other hand, prolonged (5000 ms) adaptation enhanced and extended category-specific adaptation processes over all tested ERP components. Additionally, prolonged adaptation led to the emergence of image- and identity-specific modulations on the N170 and P2 components as well. In other words, there was a clear dissociation among category-, identity-, and image-specific processing.

  11. Adaptive Image Processing Methods for Improving Contaminant Detection Accuracy on Poultry Carcasses

    Technology Transfer Automated Retrieval System (TEKTRAN)

Technical Abstract: A real-time multispectral imaging system has demonstrated a science-based tool for fecal and ingesta contaminant detection during poultry processing. To implement this imaging system in the commercial poultry processing industry, the false positives must be removed. For doi...

  12. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy, contaminated by artifacts that can vary from scan line to scan line, and affected by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method, which smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical.
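The leveling step can be sketched with a plane fit by iteratively reweighted least squares (the paper fits a smooth local-regression trend; the global plane model and Cauchy-style weights below are simplifying assumptions):

```python
import numpy as np

def robust_level(img, iters=5):
    """Fit a plane by iteratively reweighted least squares and subtract it.
    Pixels far from the current fit (features, outliers) get small weights,
    so they do not drag the trend estimate."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(img.size), xx.ravel(), yy.ravel()])
    z = img.ravel().astype(float)
    wts = np.ones(img.size)
    coef = np.zeros(3)
    for _ in range(iters):
        coef, *_ = np.linalg.lstsq(A * wts[:, None], z * wts, rcond=None)
        r = z - A @ coef
        s = np.median(np.abs(r)) + 1e-12          # robust residual scale
        wts = 1.0 / (1.0 + (r / (3.0 * s)) ** 2)  # Cauchy-style downweighting
    return (z - A @ coef).reshape(img.shape)

yy, xx = np.mgrid[0:32, 0:32]
tilted = 0.03 * xx + 0.05 * yy                    # sample-tilt trend
tilted[5:10, 5:10] += 5.0                         # a tall feature / outlier patch
leveled = robust_level(tilted)
```

An ordinary least-squares plane would be dragged toward the tall feature; the reweighting leaves the feature intact while zeroing the background.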

  13. AIDA: Adaptive Image Deconvolution Algorithm

    NASA Astrophysics Data System (ADS)

    Hom, Erik; Haase, Sebastian; Marchis, Franck

    2013-10-01

    AIDA is an implementation and extension of the MISTRAL myopic deconvolution method developed by Mugnier et al. (2004) (see J. Opt. Soc. Am. A 21:1841-1854). The MISTRAL approach has been shown to yield object reconstructions with excellent edge preservation and photometric precision when used to process astronomical images. AIDA improves upon the original MISTRAL implementation. AIDA, written in Python, can deconvolve multiple frame data and three-dimensional image stacks encountered in adaptive optics and light microscopic imaging.

  14. Adaptation Duration Dissociates Category-, Image-, and Person-Specific Processes on Face-Evoked Event-Related Potentials

    PubMed Central

    Zimmer, Márta; Zbanţ, Adriana; Németh, Kornél; Kovács, Gyula

    2015-01-01

Several studies have demonstrated that face perception is biased by the prior presentation of another face, a phenomenon termed the face-related after-effect (FAE). The FAE is linked to a neural signal reduction at occipito-temporal areas and can be observed in the amplitude modulation of early event-related potential (ERP) components. Recently, macaque single-cell recording studies suggested that manipulating the duration of the adaptor makes the selective adaptation of different visual motion processing steps possible. To date, however, only a few studies have directly tested the effects of adaptor duration on the electrophysiological correlates of human face processing. The goal of the current study was to test the effect of adaptor duration on the image-, identity-, and generic category-specific face processing steps. To this end, in a two-alternative forced-choice familiarity decision task we used five adaptor durations (ranging from 200 to 5000 ms) and four adaptor categories: adaptor and test were identical images (Repetition Suppression, RS); adaptor and test were different images of the Same Identity (SameID); adaptor and test images depicted Different Identities (DiffID); or the adaptor was a Fourier phase-randomized image (No). Behaviorally, a strong priming effect was observed in both accuracy and response times for RS compared with both DiffID and No. The electrophysiological results suggest that rapid adaptation leads to a category-specific modulation of P100, N170, and N250. In addition, both identity- and image-specific processes affected the N250 component during rapid adaptation. On the other hand, prolonged (5000 ms) adaptation enhanced and extended category-specific adaptation processes over all tested ERP components. Additionally, prolonged adaptation led to the emergence of image- and identity-specific modulations on the N170 and P2 components as well. In other words, there was a clear dissociation among category-, identity-, and image-specific processing.

  15. Adaptive optics microscopy enhances image quality in deep layers of CLARITY processed brains of YFP-H mice

    NASA Astrophysics Data System (ADS)

    Reinig, Marc R.; Novack, Samuel W.; Tao, Xiaodong; Ermini, Florian; Bentolila, Laurent A.; Roberts, Dustin G.; MacKenzie-Graham, Allan; Godshalk, S. E.; Raven, M. A.; Kubby, Joel

    2016-03-01

Optical sectioning of biological tissues has become the method of choice for three-dimensional histological analyses. This is particularly important in the brain, where neurons can extend processes over large distances and whole-brain tracing of neuronal processes is often desirable. To allow deeper optical penetration, which in fixed tissue is limited by scattering and refractive index mismatching, tissue-clearing procedures such as CLARITY have been developed. CLARITY processed brains have a nearly uniform refractive index, and three-dimensional reconstructions at cellular resolution have been published. However, when imaging deep layers at submicron resolution, some limitations caused by residual refractive index mismatching become apparent, as the resulting wavefront aberrations distort the microscopic image. The wavefront can be corrected with adaptive optics. Here, we investigate the wavefront aberrations at different depths in CLARITY processed mouse brains and demonstrate the potential of adaptive optics to enable higher resolution and a better signal-to-noise ratio. Our adaptive optics system achieves high-speed measurement and correction of the wavefront with open-loop control using a wavefront sensor and a deformable mirror. Using adaptive optics enhanced microscopy, we demonstrate improved image quality (wavefront, point spread function, and signal-to-noise ratio) in the cortex of YFP-H mice.

  16. Adaptive Sensor Optimization and Cognitive Image Processing Using Autonomous Optical Neuroprocessors

    SciTech Connect

    CAMERON, STEWART M.

    2001-10-01

Measurement and signal intelligence demands have created new requirements for information management and interoperability as they affect surveillance and situational awareness. Integration of on-board autonomous learning and adaptive control structures within a remote sensing platform architecture would substantially improve the utility of intelligence collection by facilitating real-time optimization of measurement parameters for variable field conditions. A problem faced by conventional digital implementations of intelligent systems is the conflict between a distributed parallel structure and a sequential serial interface, which functionally degrades bandwidth and response time. In contrast, optically designed networks exhibit the massive parallelism and interconnect density needed to perform complex cognitive functions within a dynamic asynchronous environment. Recently, all-optical self-organizing neural networks exhibiting emergent collective behavior which mimics perception, recognition, association, and contemplative learning have been realized using photorefractive holography in combination with sensory systems for feature maps, threshold decomposition, image enhancement, and nonlinear matched filters. Such hybrid information processors depart from the classical computational paradigm based on analytic rules-based algorithms and instead utilize unsupervised generalization and perceptron-like exploratory or improvisational behaviors to evolve toward optimized solutions. These systems are robust to instrumental systematics or corrupting noise and can enrich knowledge structures by allowing competition between multiple hypotheses. This property enables them to rapidly adapt or self-compensate for dynamic or imprecise conditions which would be unstable under conventional linear control models. By incorporating an intelligent optical neuroprocessor in the back plane of an imaging sensor, a broad class of high-level cognitive image analysis problems including geometric

  17. Using adaptive genetic algorithms in the design of morphological filters in textural image processing

    NASA Astrophysics Data System (ADS)

    Li, Wei; Haese-Coat, Veronique; Ronsin, Joseph

    1996-03-01

An adaptive GA scheme is adopted for the optimal morphological filter design problem. Adaptive crossover and mutation rates, which let the GA avoid premature convergence while still assuring convergence of the program, are used in the optimal morphological filter design procedure. In the string coding step, each string (chromosome) is composed of a structuring element coding chain concatenated with a filter sequence coding chain. In the decoding step, each string is divided into three chains, which are decoded respectively into one structuring element with a size of at most 5 by 5 and two concatenated morphological filter operators. The fitness function in the GA is based on the mean-square-error (MSE) criterion. In the string selection step, a stochastic tournament procedure replaces the simple roulette wheel program in order to accelerate convergence. The final convergence of the algorithm is reached by a two-step converging strategy. In the presented applications of noise removal from texture images, it is found that with the optimized morphological filter sequences the obtained MSE values are smaller than those of the corresponding non-adaptive morphological filters, and the optimized shapes and orientations of the structuring elements approximately match those of the image textons.
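The adaptive-GA idea can be sketched on a stand-in problem: evolve a 25-bit string (a flattened 5x5 structuring element) toward a target, with a mutation rate that rises as the population converges. The fitness function, rate schedule, and operators below are illustrative assumptions, not the paper's MSE-based design:

```python
import numpy as np

rng = np.random.default_rng(1)
target = rng.integers(0, 2, 25)          # stand-in for the optimal 5x5 element

def fitness(pop):
    # placeholder criterion; the paper minimizes filtering MSE on textures
    return (pop == target).mean(axis=1)

pop = rng.integers(0, 2, (40, 25))
for gen in range(300):
    f = fitness(pop)
    fmax, favg = f.max(), f.mean()
    # adaptive mutation: mutate more as the population converges (fmax ~ favg)
    pm = 0.01 + 0.1 * (1.0 - (fmax - favg) / max(fmax, 1e-9))
    a, b = rng.integers(0, 40, (2, 40))                 # stochastic tournament
    parents = pop[np.where(f[a] >= f[b], a, b)]
    cut = int(rng.integers(1, 25))                      # one-point crossover
    children = parents.copy()
    children[0::2, cut:], children[1::2, cut:] = parents[1::2, cut:], parents[0::2, cut:]
    flip = rng.random(children.shape) < pm              # bit-flip mutation
    children[flip] ^= 1
    children[0] = pop[np.argmax(f)]                     # elitism keeps the best
    pop = children

best = pop[np.argmax(fitness(pop))]
```

Raising the mutation rate when the fitness spread collapses is the anti-premature-convergence mechanism the abstract describes.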

  18. Adaptive Image Denoising by Mixture Adaptation

    NASA Astrophysics Data System (ADS)

    Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad-hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper: First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.

  19. Passive adaptive imaging through turbulence

    NASA Astrophysics Data System (ADS)

    Tofsted, David

    2016-05-01

    Standard methods for improved imaging system performance under degrading optical turbulence conditions typically involve active adaptive techniques or post-capture image processing. Here, passive adaptive methods are considered where active sources are disallowed, a priori. Theoretical analyses of short-exposure turbulence impacts indicate that varying aperture sizes experience different degrees of turbulence impacts. Smaller apertures often outperform larger aperture systems as turbulence strength increases. This suggests a controllable aperture system is advantageous. In addition, sub-aperture sampling of a set of training images permits the system to sense tilts in different sub-aperture regions through image acquisition and image cross-correlation calculations. A four sub-aperture pattern supports corrections involving five realizable operating modes (beyond tip and tilt) for removing aberrations over an annular pattern. Progress to date will be discussed regarding development and field trials of a prototype system.
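The sub-aperture tilt sensing step amounts to estimating a translation between two images; below is a standard FFT cross-correlation sketch, an assumed stand-in for the prototype's actual method:

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate how `img` is translated relative to `ref` from the peak of
    their circular cross-correlation, computed via FFT."""
    c = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(int(np.argmax(c)), c.shape)
    # indices past the midpoint correspond to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, c.shape))

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
shifted = np.roll(scene, (3, -5), axis=(0, 1))      # a simulated tilt
tilt = estimate_shift(scene, shifted)
```

Applied per sub-aperture, such shift estimates give the tip/tilt-like terms from which the higher-order correction modes are built.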

  20. Real-time atmospheric imaging and processing with hybrid adaptive optics and hardware accelerated lucky-region fusion (LRF) algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Jony Jiang; Carhart, Gary W.; Beresnev, Leonid A.; Aubailly, Mathieu; Jackson, Christopher R.; Ejzak, Garrett; Kiamilev, Fouad E.

    2014-09-01

Atmospheric turbulence can significantly deteriorate the performance of long-range conventional imaging systems and create difficulties for target identification and recognition. Our in-house developed adaptive optics (AO) system, which contains high-performance deformable mirrors (DMs) and a fast stochastic parallel gradient descent (SPGD) control mechanism, allows effective compensation of such turbulence-induced wavefront aberrations and results in significant improvement in image quality. In addition, we developed an advanced digital synthetic imaging and processing technique, "lucky-region" fusion (LRF), to mitigate image degradation over a large field-of-view (FOV). The LRF algorithm extracts sharp regions from each of a series of short-exposure frames and fuses them into a final improved image. We further implemented the algorithm on a VIRTEX-7 field programmable gate array (FPGA) and achieved real-time video processing. Experiments were performed combining both the AO system and the hardware-implemented LRF processing over a near-horizontal 2.3 km atmospheric propagation path. Our approach can also serve as a universal real-time imaging and processing system with a generic Camera Link input, a user control interface, and a DVI video output.
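The LRF selection idea can be sketched per block: score each frame's block with a simple sharpness metric (gradient energy here, an assumed stand-in for the algorithm's actual metric) and paste the winner into the fused image:

```python
import numpy as np

def lucky_region_fusion(frames, block=8):
    """For each image block, pick the frame with the highest local gradient
    energy (a simple sharpness score) and paste it into the fused result."""
    h, w = frames[0].shape
    fused = np.zeros((h, w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            tiles = [f[i:i + block, j:j + block] for f in frames]
            scores = [np.abs(np.diff(t, axis=0)).sum() +
                      np.abs(np.diff(t, axis=1)).sum() for t in tiles]
            fused[i:i + block, j:j + block] = tiles[int(np.argmax(scores))]
    return fused

scene = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)  # sharp pattern
frame1 = scene.copy(); frame1[:, :16] = 0.5     # left half blurred away
frame2 = scene.copy(); frame2[:, 16:] = 0.5     # right half blurred away
fused = lucky_region_fusion([frame1, frame2])
```

Because turbulence rarely blurs the whole field at once, fusing the per-region "lucky" frames recovers sharpness over the full FOV; the per-block independence is also what makes the algorithm FPGA-friendly.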

  1. NASA End-to-End Data System /NEEDS/ information adaptive system - Performing image processing onboard the spacecraft

    NASA Technical Reports Server (NTRS)

    Kelly, W. L.; Howle, W. M.; Meredith, B. D.

    1980-01-01

The Information Adaptive System (IAS) is an element of the NASA End-to-End Data System (NEEDS) Phase II and is focused toward onboard image processing. Since the IAS is a data preprocessing system which is closely coupled to the sensor system, it serves as a first step in providing a 'smart' imaging sensor. Some of the functions planned for the IAS include sensor response nonuniformity correction, geometric correction, data set selection, data formatting, packetization, and adaptive system control. The inclusion of these sensor data preprocessing functions onboard the spacecraft will significantly improve the extraction of information from the sensor data in a timely and cost-effective manner and provide the opportunity to design sensor systems which can be reconfigured in near real time for optimum performance. The purpose of this paper is to present the preliminary design of the IAS and the plans for its development.

  2. Image Processing

    NASA Technical Reports Server (NTRS)

    1993-01-01

Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Center for use on the space shuttle orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them and downlink images to ground-based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs and special effects for movies. As of 1/28/98, the company could not be located, therefore contact/product information is no longer valid.

  3. Adaptive wiener image restoration kernel

    DOEpatents

    Yuan, Ding

    2007-06-05

A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system's Optical Transfer Function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is then restored by spatial convolution of the image with a Wiener restoration kernel.
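The Wiener restoration kernel has the standard frequency response conj(H)/(|H|^2 + NSR), where H is the OTF and NSR the noise-to-signal power ratio. A sketch follows with an assumed 3x3 box-blur OTF and an assumed NSR; the patent's actual construction of these quantities is not reproduced here:

```python
import numpy as np

def wiener_kernel(otf, nsr):
    """Frequency response of the Wiener restoration kernel:
    conj(H) / (|H|^2 + NSR). Multiplying by it in the frequency domain
    is equivalent to spatial convolution with the restoration kernel."""
    return np.conj(otf) / (np.abs(otf) ** 2 + nsr)

# assumed system OTF: a 3x3 box blur on a 32x32 periodic grid
psf = np.zeros((32, 32))
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        psf[dy % 32, dx % 32] = 1.0 / 9.0
otf = np.fft.fft2(psf)

rng = np.random.default_rng(0)
obj = np.zeros((32, 32)); obj[8:24, 8:24] = 1.0            # imaged object
blurred = np.fft.ifft2(np.fft.fft2(obj) * otf).real
noisy = blurred + rng.normal(0, 0.005, obj.shape)
restored = np.fft.ifft2(np.fft.fft2(noisy) * wiener_kernel(otf, 0.005)).real
```

The NSR term keeps the kernel bounded where the OTF is small, which is what makes the inversion noise-tolerant rather than a raw deconvolution.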

  4. Seam tracking with adaptive image capture for fine-tuning of a high power laser welding process

    NASA Astrophysics Data System (ADS)

    Lahdenoja, Olli; Säntti, Tero; Laiho, Mika; Paasio, Ari; Poikonen, Jonne K.

    2015-02-01

    This paper presents the development of methods for real-time fine-tuning of a high power laser welding process of thick steel by using a compact smart camera system. When performing welding in butt-joint configuration, the laser beam's location needs to be adjusted exactly according to the seam line in order to allow the injected energy to be absorbed uniformly into both steel sheets. In this paper, on-line extraction of seam parameters is targeted by taking advantage of a combination of dynamic image intensity compression, image segmentation with a focal-plane processor ASIC, and Hough transform on an associated FPGA. Additional filtering of Hough line candidates based on temporal windowing is further applied to reduce unrealistic frame-to-frame tracking variations. The proposed methods are implemented in Matlab by using image data captured with adaptive integration time. The simulations are performed in a hardware oriented way to allow real-time implementation of the algorithms on the smart camera system.
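The Hough step can be sketched with a plain accumulator over (theta, rho): each edge point votes for all lines through it, and the strongest cell gives the dominant line, here standing in for the weld seam. The vertical-seam test case is an illustrative assumption:

```python
import numpy as np

def hough_line(points, h, w, n_theta=180):
    """Vote in (theta, rho) space; the strongest cell is the dominant line
    (rho = x*cos(theta) + y*sin(theta))."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    diag = int(np.hypot(h, w))
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    ys, xs = points
    for i, t in enumerate(thetas):
        rho = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
        np.add.at(acc[i], rho, 1)          # accumulate votes per rho bin
    ti, ri = np.unravel_index(int(np.argmax(acc)), acc.shape)
    return float(thetas[ti]), int(ri) - diag

# a vertical seam at x = 20 in a 64x64 segmented edge map
ys = np.arange(64)
xs = np.full(64, 20)
theta, rho = hough_line((ys, xs), 64, 64)
```

Temporal filtering of the detected (theta, rho) pairs across frames, as the abstract describes, then suppresses unrealistic frame-to-frame jumps.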

  5. Adaptive Signal Processing Testbed

    NASA Astrophysics Data System (ADS)

    Parliament, Hugh A.

    1991-09-01

    The design and implementation of a system for the acquisition, processing, and analysis of signal data is described. The initial application for the system is the development and analysis of algorithms for excision of interfering tones from direct sequence spread spectrum communication systems. The system is called the Adaptive Signal Processing Testbed (ASPT) and is an integrated hardware and software system built around the TMS320C30 chip. The hardware consists of a radio frequency data source, digital receiver, and an adaptive signal processor implemented on a Sun workstation. The software components of the ASPT consist of a number of packages including the Sun driver package; UNIX programs that support software development on the TMS320C30 boards; UNIX programs that provide the control, user interaction, and display capabilities for the data acquisition, processing, and analysis components of the ASPT; and programs that perform the ASPT functions including data acquisition, despreading, and adaptive filtering. The performance of the ASPT system is evaluated by comparing actual data rates against their desired values. A number of system limitations are identified and recommendations are made for improvements.
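The tone-excision problem the ASPT targets is classically handled with an adaptive linear predictor: a narrowband tone is predictable from past samples while the wideband spread-spectrum signal is not, so the prediction error excises the tone. A minimal LMS sketch (the signal model, tone frequency, and filter parameters are illustrative assumptions, not ASPT values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
chips = rng.choice([-1.0, 1.0], size=n)                 # wideband DSSS-like signal
tone = 2.0 * np.sin(2 * np.pi * 0.1 * np.arange(n))     # narrowband interferer
x = chips + tone

taps, mu = 16, 0.005
w = np.zeros(taps)
err = np.zeros(n)
for k in range(taps, n):
    u = x[k - taps:k][::-1]      # most recent samples first
    y = w @ u                    # one-step prediction (captures the tone)
    err[k] = x[k] - y            # prediction error ~ tone-free signal
    w += mu * err[k] * u         # LMS coefficient update

tail = err[n // 2:]              # post-convergence output
```

After convergence the error output carries essentially the wideband component alone, with the tone power largely removed.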

  6. Retinal Imaging: Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Goncharov, A. S.; Iroshnikov, N. G.; Larichev, Andrey V.

    This chapter describes several factors influencing the performance of ophthalmic diagnostic systems with adaptive optics compensation of human eye aberration. Particular attention is paid to speckle modulation, temporal behavior of aberrations, and anisoplanatic effects. The implementation of a fundus camera with adaptive optics is considered.

  7. Adaptive processing for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Crane, R. B.; Reyer, J. F.

    1975-01-01

    Analytical and test results on the use of adaptive processing on LANDSAT data are presented. The Kalman filter was used as a framework to contain different adapting techniques. When LANDSAT MSS data were used, all of the modifications made to the Kalman filter performed the functions for which they were designed. It was found that adaptive processing could provide compensation for incorrect signature means, within limits. However, if the data were such that poor classification accuracy would be obtained even when the correct means were used, then adaptive processing would not improve the accuracy and might well lower it further.
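The idea of adapting an incorrect signature mean within a Kalman-filter framework can be illustrated with a scalar filter tracking a shifted class mean; the noise variances, drift model, and data here are assumptions for illustration, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
q, r = 0.01, 1.0        # process and measurement noise variances (assumed)
truth = 5.0             # true class-signature mean after a shift
mean_est, p = 0.0, 1.0  # initial (incorrect) estimate and its variance
ests = []
for _ in range(200):
    z = truth + rng.normal(0.0, np.sqrt(r))        # pixel drawn from the class
    p = p + q                                      # predict: variance grows by drift q
    k_gain = p / (p + r)                           # Kalman gain
    mean_est = mean_est + k_gain * (z - mean_est)  # correct toward the measurement
    p = (1.0 - k_gain) * p                         # posterior variance
    ests.append(mean_est)
```

The estimate converges toward the true mean, compensating for the initially incorrect signature, exactly the "within limits" behavior the abstract describes.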

  8. Adaptive optical ghost imaging through atmospheric turbulence.

    PubMed

    Shi, Dongfeng; Fan, Chengyu; Zhang, Pengfei; Zhang, Jinghui; Shen, Hong; Qiao, Chunhong; Wang, Yingjian

    2012-12-17

    We demonstrate for the first time (to our knowledge) that a high-quality image can still be obtained in atmospheric turbulence by applying an adaptive optical ghost imaging (AOGI) system, even when a conventional ghost imaging system fails to produce an image. The performance of AOGI under different strengths of atmospheric turbulence is investigated by simulation. The influence of an adaptive optics system with different numbers of adaptive mirror elements on the obtained image quality is also studied.

  9. Adaptive image segmentation by quantization

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Yun, David Y.

    1992-12-01

    Segmentation of images into texturally homogeneous regions is a fundamental problem in an image understanding system. Most region-oriented segmentation approaches suffer from the problem of selecting different thresholds for different images. In this paper an adaptive image segmentation method based on vector quantization is presented. It automatically segments images without preset thresholds. The approach contains a feature extraction module and a two-layer hierarchical clustering module, whose first layer is a vector quantizer (VQ) implemented as a competitive-learning neural network. A near-optimal competitive learning algorithm (NOLA) is employed to train the vector quantizer. NOLA combines the advantages of both the Kohonen self-organizing feature map (KSFM) and the K-means clustering algorithm. After the VQ is trained, the weights of the network and the number of input vectors clustered by each neuron form a 3-D topological feature map with separable hills aggregated by similar vectors. This overcomes the inability of most other clustering algorithms to visualize the geometric properties of data in a high-dimensional space. The second clustering algorithm operates on the feature map instead of the input set itself. Since the number of units in the feature map is much smaller than the number of feature vectors in the feature set, it is easy to check all peaks and find the 'correct' number of clusters, which is also a key problem in current clustering techniques. In the experiments, we compare our algorithm with the K-means clustering method on a variety of images. The results show that our algorithm achieves better performance.
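The first-layer vector quantizer can be sketched with plain winner-take-all competitive learning; this is a generic sketch of competitive-learning VQ on assumed 2-D synthetic "texture features", not the NOLA algorithm itself:

```python
import numpy as np

def competitive_vq(features, n_units=4, epochs=20, lr0=0.5, seed=0):
    """Winner-take-all competitive learning vector quantizer."""
    rng = np.random.default_rng(seed)
    # Initialize units at randomly chosen feature vectors.
    w = features[rng.choice(len(features), n_units, replace=False)].astype(float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)          # decaying learning rate
        for f in features[rng.permutation(len(features))]:
            j = np.argmin(np.sum((w - f) ** 2, axis=1))  # winning unit
            w[j] += lr * (f - w[j])                      # move winner toward input
    # Hard assignment yields the per-unit occupancy used in the feature map.
    labels = np.argmin(((features[:, None, :] - w[None]) ** 2).sum(-1), axis=1)
    counts = np.bincount(labels, minlength=n_units)
    return w, counts

# Two well-separated blobs of synthetic 2-D feature vectors.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0.0, 0.1, (100, 2)),
                   rng.normal(3.0, 0.1, (100, 2))])
w, counts = competitive_vq(feats)
```

The unit weights together with the occupancy counts correspond to the "hills" of the topological feature map on which the second-layer clustering operates.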

  10. Approach for reconstructing anisoplanatic adaptive optics images.

    PubMed

    Aubailly, Mathieu; Roggemann, Michael C; Schulz, Timothy J

    2007-08-20

    Atmospheric turbulence corrupts astronomical images formed by ground-based telescopes. Adaptive optics systems allow the effects of turbulence-induced aberrations to be reduced for a narrow field of view corresponding approximately to the isoplanatic angle theta(0). For field angles larger than theta(0), the point spread function (PSF) gradually degrades as the field angle increases. We present a technique to estimate the PSF of an adaptive optics telescope as a function of the field angle, and use this information in a space-varying image reconstruction technique. Simulated anisoplanatic intensity images of a star field are reconstructed by means of a block-processing method using the predicted local PSF. Two methods for image recovery are used: matrix inversion with Tikhonov regularization, and the Lucy-Richardson algorithm. Image reconstruction results obtained using the space-varying predicted PSF are compared to space-invariant deconvolution results obtained using the on-axis PSF. The anisoplanatic reconstruction technique using the predicted PSF provides a significant improvement in the mean squared error between the reconstructed image and the object compared to the deconvolution performed using the on-axis PSF. PMID:17712366
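The Lucy-Richardson iteration named above can be sketched compactly with FFT-based (space-invariant, circular) convolution; the Gaussian PSF and toy star field are assumptions, and the paper's space-varying block processing is not reproduced:

```python
import numpy as np

def richardson_lucy(image, psf, n_iter=50):
    """Lucy-Richardson deconvolution with FFT-based circular convolution."""
    eps = 1e-12
    otf = np.fft.fft2(np.fft.ifftshift(psf))   # PSF centered at the origin
    conv = lambda a, t: np.real(np.fft.ifft2(np.fft.fft2(a) * t))
    est = np.full_like(image, image.mean())    # flat positive starting estimate
    for _ in range(n_iter):
        ratio = image / (conv(est, otf) + eps)
        est = est * conv(ratio, np.conj(otf))  # multiplicative RL update
    return est

# Toy star field: a few point sources blurred by a Gaussian PSF.
n = 64
y, x = np.mgrid[:n, :n]
psf = np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / (2 * 1.5 ** 2))
psf /= psf.sum()
obj = np.zeros((n, n))
obj[16, 16] = obj[40, 48] = obj[30, 20] = 1.0
otf = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) * otf))
restored = richardson_lucy(blurred, psf)
```

The iteration progressively re-concentrates the flux of each star toward its true position; a block-processing scheme would apply this with a different predicted PSF per block.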

  11. Filter for biomedical imaging and image processing

    NASA Astrophysics Data System (ADS)

    Mondal, Partha P.; Rajan, K.; Ahmad, Imteyaz

    2006-07-01

    Image filtering techniques have numerous potential applications in biomedical imaging and image processing. The design of filters largely depends on a priori knowledge about the type of noise corrupting the image. This makes the standard filters application specific. Widely used filters such as average, Gaussian, and Wiener reduce noisy artifacts by smoothing. However, this operation normally results in smoothing of the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image nonsmooth. An integrated general approach to designing a finite impulse response filter based on Hebbian learning is proposed for optimal image filtering. This algorithm exploits the interpixel correlation by updating the filter coefficients using Hebbian learning. The algorithm is made iterative to achieve efficient learning from the neighborhood pixels. This algorithm performs optimal smoothing of the noisy image by preserving high-frequency as well as low-frequency features. Evaluation results show that the proposed finite impulse response filter is robust under various noise distributions such as Gaussian noise, salt-and-pepper noise, and speckle noise. Furthermore, the proposed approach does not require any a priori knowledge about the type of noise. The number of unknown parameters is small, and most of these parameters are adaptively obtained from the processed image. The proposed filter is successfully applied for image reconstruction in a positron emission tomography (PET) imaging modality. The images reconstructed by the proposed algorithm are found to be superior in quality compared with those reconstructed by existing PET image reconstruction methodologies.
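The principle of learning FIR coefficients from interpixel correlation with a Hebbian rule can be sketched with Oja's normalized Hebbian update; this is an illustration of the principle under assumed parameters, not the paper's algorithm:

```python
import numpy as np

def learn_hebbian_kernel(img, size=3, eta=1e-4, n_updates=5000, seed=0):
    """Learn a FIR kernel from image patches via Oja's normalized Hebbian rule.

    The kernel converges toward the dominant correlation direction of the
    local patches; for a smooth image this is a low-pass (smoothing) mask.
    """
    rng = np.random.default_rng(seed)
    k = size * size
    pad = size // 2
    patches = np.lib.stride_tricks.sliding_window_view(
        np.pad(img, pad, mode="reflect"), (size, size)).reshape(-1, k)
    w = np.ones(k) / np.sqrt(k)            # start near a uniform unit-norm mask
    for i in rng.integers(0, len(patches), n_updates):
        x = patches[i]
        y = w @ x                          # Hebbian response to the patch
        w += eta * y * (x - y * w)         # Oja: Hebb term + implicit normalization
    return w.reshape(size, size)

# A nearly constant noisy image: strong interpixel correlation.
img = 10.0 + np.random.default_rng(1).normal(0, 0.5, (48, 48))
kernel = learn_hebbian_kernel(img)
```

Because neighboring pixels are highly correlated here, the learned coefficients settle near a uniform positive averaging mask.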

  12. Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation

    SciTech Connect

    Keller, Brad M.; Nathan, Diane L.; Wang Yan; Zheng Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina

    2012-08-15

    Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., 'FOR PROCESSING') and vendor postprocessed (i.e., 'FOR PRESENTATION'), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, an SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then ...
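The fuzzy c-means step at the core of the pipeline can be sketched on 1-D gray levels with the standard FCM updates; the cluster count, fuzzifier m, and synthetic intensity populations are assumptions, and the paper's adaptive cluster-number selection and SVM stage are not shown:

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, n_iter=50, seed=0):
    """Standard fuzzy c-means on 1-D gray levels."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))                 # random fuzzy memberships
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)   # fuzzy-weighted cluster means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / d ** (2 / (m - 1))            # membership ~ inverse distance
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Gray levels drawn from three assumed intensity populations.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(mu, 5.0, 300) for mu in (30.0, 110.0, 200.0)])
centers, u = fuzzy_cmeans(x, c=3)
```

Each pixel's membership vector then indicates how strongly it belongs to each gray-level region, which a downstream classifier can label.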

  13. Image-Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1986-01-01

    Apple Image-Processing Educator (AIPE) explores the ability of microcomputers to provide personalized computer-assisted instruction (CAI) in digital image processing of remotely sensed images. AIPE is a "proof-of-concept" system, not a polished production system. User-friendly prompts provide access to explanations of common features of digital image processing and of sample programs that implement these features.

  14. AIDA: An Adaptive Image Deconvolution Algorithm

    NASA Astrophysics Data System (ADS)

    Hom, Erik; Marchis, F.; Lee, T. K.; Haase, S.; Agard, D. A.; Sedat, J. W.

    2007-10-01

    We recently described an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of multi-frame and three-dimensional data acquired through astronomical and microscopic imaging [Hom et al., J. Opt. Soc. Am. A 24, 1580 (2007)]. AIDA is a reimplementation and extension of the MISTRAL method developed by Mugnier and co-workers and shown to yield object reconstructions with excellent edge preservation and photometric precision [J. Opt. Soc. Am. A 21, 1841 (2004)]. Written in Numerical Python with calls to a robust constrained conjugate gradient method, AIDA has significantly improved run times over the original MISTRAL implementation. AIDA includes a scheme to automatically balance maximum-likelihood estimation and object regularization, which significantly decreases the amount of time and effort needed to generate satisfactory reconstructions. Here, we present a gallery of results demonstrating the effectiveness of AIDA in processing planetary science images acquired using adaptive-optics systems. Offered as an open-source alternative to MISTRAL, AIDA is available for download and further development at: http://msg.ucsf.edu/AIDA. This work was supported in part by the W. M. Keck Observatory, the National Institutes of Health, NASA, the National Science Foundation Science and Technology Center for Adaptive Optics at UC-Santa Cruz, and the Howard Hughes Medical Institute.

  15. Multispectral imaging and image processing

    NASA Astrophysics Data System (ADS)

    Klein, Julie

    2014-02-01

    The color accuracy of conventional RGB cameras is not sufficient for many color-critical applications. One of these applications, namely the measurement of color defects in yarns, is why Prof. Til Aach and the Institute of Image Processing and Computer Vision (RWTH Aachen University, Germany) started off with multispectral imaging. The first acquisition device was a camera using a monochrome sensor and seven bandpass color filters positioned sequentially in front of it. The camera allowed sampling the visible wavelength range more accurately and reconstructing the spectra for each acquired image position. An overview will be given of several optical and imaging aspects of the multispectral camera that have been investigated. For instance, optical aberrations caused by filters and camera lens deteriorate the quality of captured multispectral images. The different aberrations were analyzed thoroughly and compensated based on models for the optical elements and the imaging chain by utilizing image processing. With this compensation, geometrical distortions disappear and sharpness is enhanced, without reducing the color accuracy of multispectral images. Strong foundations in multispectral imaging were laid and a fruitful cooperation was initiated with Prof. Bernhard Hill. Current research topics like stereo multispectral imaging and goniometric multispectral measurements that are further explored with his expertise will also be presented in this work.

  16. Biomedical image processing

    SciTech Connect

    Huang, H.K.

    1981-01-01

    Biomedical image processing is a very broad field; it covers biomedical signal gathering, image forming, picture processing, and image display for medical diagnosis based on features extracted from images. This article reviews the topic in both its fundamentals and its applications. On the fundamentals side, basic image processing techniques including outlining, deblurring, noise cleaning, filtering, search, classical analysis, and texture analysis are reviewed together with examples. State-of-the-art image processing systems are introduced and discussed in two categories: general-purpose image processing systems and image analyzers. For these systems to be effective for biomedical applications, special biomedical image processing languages have to be developed. The combination of both hardware and software leads to clinical imaging devices, two different types of which are discussed. These are the radiological imaging modalities, which include radiography, thermography, ultrasound, nuclear medicine, and CT. Among these, thermography is the most noninvasive but is limited in application due to the low energy of its source. X-ray CT is excellent for static anatomical images and is moving toward the measurement of dynamic function, whereas nuclear imaging is moving toward organ metabolism and ultrasound toward tissue physical characteristics. Heart imaging is one of the most interesting and challenging research topics in biomedical image processing; current methods, including the invasive technique of cineangiography and the noninvasive ultrasound, nuclear medicine, transmission, and emission CT methodologies, are reviewed.

  17. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos run during robberies.

  18. Hyperspectral image processing methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  19. Subroutines For Image Processing

    NASA Technical Reports Server (NTRS)

    Faulcon, Nettie D.; Monteith, James H.; Miller, Keith W.

    1988-01-01

    Image Processing Library computer program, IPLIB, is collection of subroutines facilitating use of COMTAL image-processing system driven by HP 1000 computer. Functions include addition or subtraction of two images with or without scaling, display of color or monochrome images, digitization of image from television camera, display of test pattern, manipulation of bits, and clearing of screen. Provides capability to read or write points, lines, and pixels from image; read or write at location of cursor; and read or write array of integers into COMTAL memory. Written in FORTRAN 77.

  20. Medical image processing system

    NASA Astrophysics Data System (ADS)

    Wang, Dezong; Wang, Jinxiang

    1994-12-01

    In this paper a medical image processing system is described. The system, named the NAI200 Medical Image Processing System, has been appraised by the Chinese government; its principles and application cases are provided here. Many kinds of pictures are used in modern medical diagnosis, for example B-mode ultrasound, X-ray, CT, and MRI images. Sometimes the pictures are not good enough for diagnosis: noise obscures the real situation in these pictures, which means image processing is needed. The system has four functions. The first is image processing, involving more than thirty-four programs. The second is calculation: the areas or volumes of single or multiple tissues are computed. Three-dimensional reconstruction is the third: stereo images of organs or tumors are reconstructed from cross-sections. The last is image storage: all pictures can be converted to digital images and then stored on hard or floppy disk. In this paper not only are all functions of the system introduced, but the basic principles behind them are also explained in detail. The system has been applied in hospitals, and the images of hundreds of cases have been processed. We describe the functions in connection with real cases, of which only a few examples are introduced here.

  1. Image processing in medicine

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans

    2001-12-01

    This article is divided into two parts: the first is an opinion, the second is a description. The opinion is that diagnostic medical imaging is not a detection problem. The description is of a specific medical image-processing program. Why the opinion? If medical imaging were a detection problem, then image processing would be unimportant. However, image processing is crucial. We illustrate this fact using three examples: ultrasound, magnetic resonance imaging and, most poignantly, computed radiography. Although the examples are anecdotal, they are illustrative. The description is of the image-processing program ImprocRAD, written by one of the authors (Dallas). First we discuss the motivation for creating yet another image-processing program, including system characterization, which is an area of expertise of one of the authors (Roehrig). We then look at the structure of the program and finally, to the point, the specific application: mammographic diagnostic reading. We mention rapid display of mammogram image sets and then discuss processing. In that context, we describe a real-time image-processing tool we term the MammoGlass.

  2. Adaptive fingerprint image enhancement with emphasis on preprocessing of data.

    PubMed

    Bartůnek, Josef Ström; Nilsson, Mikael; Sällberg, Benny; Claesson, Ingvar

    2013-02-01

    This article proposes several improvements to an adaptive fingerprint enhancement method that is based on contextual filtering. The term adaptive implies that parameters of the method are automatically adjusted based on the input fingerprint image. Five processing blocks comprise the adaptive fingerprint enhancement method, four of which are updated in our proposed system. Hence, the proposed overall system is novel. The four updated processing blocks are: 1) preprocessing; 2) global analysis; 3) local analysis; and 4) matched filtering. In the preprocessing and local analysis blocks, a nonlinear dynamic range adjustment method is used. In the global analysis and matched filtering blocks, different forms of order statistical filters are applied. These processing blocks yield an improved and new adaptive fingerprint image processing method. The performance of the updated processing blocks is presented in the evaluation part of this paper. The algorithm is evaluated against the NIST-developed NBIS software for fingerprint recognition on the FVC databases.

  3. Adaptive processing for enhanced target acquisition

    NASA Astrophysics Data System (ADS)

    Page, Scott F.; Smith, Moira I.; Hickman, Duncan; Bernhardt, Mark; Oxford, William; Watson, Norman; Beath, F.

    2009-05-01

    Conventional air-to-ground target acquisition processes treat the image stream in isolation from external data sources. This ignores information that may be available through modern mission management systems which could be fused into the detection process in order to provide enhanced performance. By way of an example relating to target detection, this paper explores the use of a-priori knowledge and other sensor information in an adaptive architecture with the aim of enhancing performance in decision making. The approach taken here is to use knowledge of target size, terrain elevation, sensor geometry, solar geometry, and atmospheric conditions to characterise the expected spatial and radiometric characteristics of a target in terms of probability density functions. An important consideration in the construction of the target probability density functions is the known errors in the a-priori knowledge. Potential targets are identified in the imagery and their spatial and expected radiometric characteristics are used to compute the target likelihood. The adaptive architecture is evaluated alongside a conventional non-adaptive algorithm using synthetic imagery representative of an air-to-ground target acquisition scenario. Lastly, future enhancements to the adaptive scheme are discussed as well as strategies for managing poor-quality or absent a-priori information.
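Scoring a candidate detection against expected characteristics expressed as probability density functions can be sketched as follows; every number and name here is hypothetical, purely to illustrate the likelihood computation, and is not taken from the paper:

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Univariate Gaussian probability density."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Expected target characteristics predicted from a-priori knowledge
# (size from range/geometry, radiometry from solar/atmospheric models);
# the variances encode the known errors in that knowledge. All values
# below are hypothetical.
expected_size, size_var = 12.0, 3.0 ** 2            # target extent in pixels
expected_intensity, intensity_var = 0.8, 0.1 ** 2   # normalized radiometric level

def target_likelihood(size, intensity):
    # Independence assumed: joint likelihood is the product of the marginals.
    return (gaussian_pdf(size, expected_size, size_var)
            * gaussian_pdf(intensity, expected_intensity, intensity_var))

well_matched = target_likelihood(12.0, 0.8)   # candidate matching expectations
mismatched = target_likelihood(30.0, 0.3)     # candidate far from expectations
```

Candidates whose measured size and intensity match the predicted distributions score high; inflating the variances to reflect poor a-priori knowledge naturally softens the discrimination.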

  4. Apple Image Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1981-01-01

    A software system design is proposed and demonstrated with pilot-project software. The system permits the Apple II microcomputer to be used for personalized computer-assisted instruction in the digital image processing of LANDSAT images. The programs provide data input, menu selection, graphic and hard-copy displays, and both general and detailed instructions. The pilot-project results are considered to be successful indicators of the capabilities and limits of microcomputers for digital image processing education.

  5. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  6. Study Of Adaptive-Array Signal Processing

    NASA Technical Reports Server (NTRS)

    Satorius, Edgar H.; Griffiths, Lloyd

    1990-01-01

    Report describes study of adaptive signal-processing techniques for suppression of mutual satellite interference in a mobile (on ground)/satellite communication system. Presents analyses and numerical simulations of the performances of two approaches to signal processing for suppression of interference. One approach is known as "adaptive side-lobe canceling"; the second is called "adaptive temporal processing".

  7. Image processing mini manual

    NASA Technical Reports Server (NTRS)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  8. Adaptive predictive image coding using local characteristics

    NASA Astrophysics Data System (ADS)

    Hsieh, C. H.; Lu, P. C.; Liou, W. G.

    1989-12-01

    The paper presents an efficient adaptive predictive coding method using the local characteristics of images. In this method, three coding schemes, namely, mean, subsampling combined with fixed DPCM, and ADPCM/PCM, are used and one of these is chosen adaptively based on the local characteristics of images. The prediction parameters of the two-dimensional linear predictor in the ADPCM/PCM are extracted on a block by block basis. Simulation results show that the proposed method is effective in reducing the slope overload distortion and the granular noise at low bit rates, and thus it can improve the visual quality of reconstructed images.
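The fixed-DPCM component of such a scheme can be sketched with a closed-loop encoder using a 2-D linear predictor; the predictor coefficients, quantizer step, and test image are illustrative assumptions, and the paper's adaptive mode switching (mean / subsampling / ADPCM) and parameter extraction are not shown:

```python
import numpy as np

def dpcm_2d(img, a=0.95, b=0.95, c=-0.9025, step=4):
    """Closed-loop DPCM with predictor x_hat = a*left + b*up + c*diag."""
    h, w = img.shape
    rec = np.zeros((h, w))              # decoder-tracking reconstruction
    codes = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            left = rec[i, j - 1] if j else 0.0
            up = rec[i - 1, j] if i else 0.0
            diag = rec[i - 1, j - 1] if i and j else 0.0
            pred = a * left + b * up + c * diag
            e = img[i, j] - pred        # prediction residual
            q = int(round(e / step))    # uniform quantizer index (transmitted)
            codes[i, j] = q
            rec[i, j] = pred + q * step # reconstruction the decoder also forms
    return codes, rec

img = np.add.outer(np.arange(16.0), np.arange(16.0))   # smooth ramp image
codes, rec = dpcm_2d(img, step=4)
```

Because the encoder quantizes the residual against its own reconstruction, the per-pixel error stays bounded by half the quantizer step, avoiding error accumulation.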

  9. Image Processing System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.

  10. Adaptive filtering image preprocessing for smart FPA technology

    NASA Astrophysics Data System (ADS)

    Brooks, Geoffrey W.

    1995-05-01

    This paper discusses two applications of adaptive filters for image processing on parallel architectures. The first, based on the results of previously accomplished work, summarizes the analyses of various adaptive filters implemented for pixel-level image prediction. FIR filters, fixed and adaptive IIR filters, and various variable-step-size algorithms were compared, with a focus on algorithm complexity versus the ability to predict future pixel values. A Gaussian smoothing operation with varying spatial and temporal constants was also applied for comparisons of random noise reduction. The second application is a suggestion to use memory-adaptive IIR filters for detecting and tracking motion within an image. Objects within an image are made of edges, or segments, with varying degrees of motion. A previously published application describes FIR filters connecting pixels and using correlations to determine motion and direction. That implementation seems limited to detecting motion coinciding with the FIR filter operation rate and the associated harmonics; upgrading the FIR structures to adaptive IIR structures can eliminate these limitations. These and any other pixel-level adaptive filtering applications require data memory for filter parameters and some basic computational capability. Tradeoffs have to be made between chip real estate and these desired features. System tradeoffs will also have to be made as to where it makes the most sense to do which level of processing. Although smart pixels may not be ready to implement adaptive filters, applications such as these should give the smart pixel designer some long-range goals.

  11. Adaptive optics and phase diversity imaging for responsive space applications.

    SciTech Connect

    Smith, Mark William; Wick, David Victor

    2004-11-01

    The combination of phase diversity and adaptive optics offers great flexibility. Phase diverse images can be used to diagnose aberrations and then provide feedback control to the optics to correct the aberrations. Alternatively, phase diversity can be used to partially compensate for aberrations during post-detection image processing. The adaptive optic can produce simple defocus or more complex types of phase diversity. This report presents an analysis, based on numerical simulations, of the efficiency of different modes of phase diversity with respect to compensating for specific aberrations during post-processing. It also comments on the efficiency of post-processing versus direct aberration correction. The construction of a bench top optical system that uses a membrane mirror as an active optic is described. The results of characterization tests performed on the bench top optical system are presented. The work described in this report was conducted to explore the use of adaptive optics and phase diversity imaging for responsive space applications.

  12. Adaptive optics imaging of the retina.

    PubMed

    Battu, Rajani; Dabir, Supriya; Khanna, Anjani; Kumar, Anupama Kiran; Roy, Abhijit Sinha

    2014-01-01

    Adaptive optics is a relatively new tool that is available to ophthalmologists for study of cellular level details. In addition to the axial resolution provided by the spectral-domain optical coherence tomography, adaptive optics provides an excellent lateral resolution, enabling visualization of the photoreceptors, blood vessels and details of the optic nerve head. We attempt a mini review of the current role of adaptive optics in retinal imaging. PubMed search was performed with key words Adaptive optics OR Retina OR Retinal imaging. Conference abstracts were searched from the Association for Research in Vision and Ophthalmology (ARVO) and American Academy of Ophthalmology (AAO) meetings. In total, 261 relevant publications and 389 conference abstracts were identified.

  13. Adaptive optics imaging of the retina

    PubMed Central

    Battu, Rajani; Dabir, Supriya; Khanna, Anjani; Kumar, Anupama Kiran; Roy, Abhijit Sinha

    2014-01-01

    Adaptive optics is a relatively new tool that is available to ophthalmologists for study of cellular level details. In addition to the axial resolution provided by the spectral-domain optical coherence tomography, adaptive optics provides an excellent lateral resolution, enabling visualization of the photoreceptors, blood vessels and details of the optic nerve head. We attempt a mini review of the current role of adaptive optics in retinal imaging. PubMed search was performed with key words Adaptive optics OR Retina OR Retinal imaging. Conference abstracts were searched from the Association for Research in Vision and Ophthalmology (ARVO) and American Academy of Ophthalmology (AAO) meetings. In total, 261 relevant publications and 389 conference abstracts were identified. PMID:24492503

  14. BAOlab: Image processing program

    NASA Astrophysics Data System (ADS)

    Larsen, Søren S.

    2014-03-01

    BAOlab is an image processing package written in C that should run on nearly any UNIX system with just the standard C libraries. It reads and writes images in standard FITS format; 16- and 32-bit integer as well as 32-bit floating-point formats are supported. Multi-extension FITS files are currently not supported. Among its tools are ishape for size measurements of compact sources, mksynth for generating synthetic images consisting of a background signal including Poisson noise and a number of pointlike sources, imconvol for convolving two images (a “source” and a “kernel”) with each other using fast Fourier transforms (FFTs) and storing the output as a new image, and kfit2d for fitting a two-dimensional King model to an image.
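    The FFT-based convolution performed by a tool like imconvol can be illustrated compactly. The following is a minimal NumPy sketch of the general technique (circular convolution via the convolution theorem), not BAOlab's actual code; the function name is ours.

```python
import numpy as np

def fft_convolve(source, kernel):
    """Convolve two equal-size 2-D images via FFTs (circular convolution).

    The kernel is assumed to be centered in its array and is normalized
    so the convolution preserves total flux, as is usual for PSF kernels.
    """
    kernel = kernel / kernel.sum()
    # ifftshift moves the kernel center to the (0, 0) corner expected by fft2.
    return np.real(np.fft.ifft2(np.fft.fft2(source) * np.fft.fft2(np.fft.ifftshift(kernel))))
```

    Convolving with a centered delta-function kernel returns the source unchanged, which makes for a quick sanity check.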

  15. Methods in Astronomical Image Processing

    NASA Astrophysics Data System (ADS)

    Jörsäter, S.

    Contents: A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future

  16. Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS)

    NASA Technical Reports Server (NTRS)

    Masek, Jeffrey G.

    2006-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) project is creating a record of forest disturbance and regrowth for North America from the Landsat satellite record, in support of carbon modeling activities. LEDAPS relies on the decadal Landsat GeoCover data set supplemented by dense image time series for selected locations. Imagery is first atmospherically corrected to surface reflectance, and then change detection algorithms are used to extract disturbance area, type, and frequency. Reuse of the MODIS Land processing system (MODAPS) architecture allows rapid throughput of over 2200 MSS, TM, and ETM+ scenes. Initial ("Beta") surface reflectance products are currently available for testing, and initial continental disturbance products will be available by the middle of 2006.

  17. Block adaptive rate controlled image data compression

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E.; Lee, J.-J.; Schlutsmeyer, A.

    1979-01-01

    A block adaptive rate controlled (BARC) image data compression algorithm is described. It is noted that in the algorithm's principal rate controlled mode, image lines can be coded at selected rates by combining practical universal noiseless coding techniques with block adaptive adjustments in linear quantization. Compression of any source data at chosen rates of 3.0 bits/sample and above can be expected to yield visual image quality with imperceptible degradation. Exact reconstruction will be obtained if the one-dimensional difference entropy is below the selected compression rate. It is noted that the compressor can also be operated as a floating rate noiseless coder by simply not altering the input data quantization. Here, the universal noiseless coder ensures that the code rate is always close to the entropy. Application of BARC image data compression to the Galileo orbiter mission of Jupiter is considered.
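    The rate-controlled decision described above hinges on a simple test: if the entropy of the one-dimensional differences is already below the selected rate, code losslessly; otherwise coarsen the linear quantization until it fits. A hypothetical sketch of that decision follows (the function names and the power-of-two step schedule are our assumptions, not the BARC specification):

```python
import numpy as np

def diff_entropy(line):
    """Entropy (bits/sample) of the first differences of one image line."""
    d = np.diff(line.astype(np.int64))
    _, counts = np.unique(d, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def choose_step(line, target_rate):
    """Double the linear quantization step until the difference entropy
    fits under the selected rate; step == 1 means exact reconstruction."""
    step = 1
    while diff_entropy(line // step) > target_rate:
        step *= 2
    return step
```

    A smooth ramp has zero difference entropy and so is coded exactly; a noisy line forces a coarser step until the entropy drops below the selected rate.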

  18. Adaptation and visual search in mammographic images.

    PubMed

    Kompaniez-Dunigan, Elysse; Abbey, Craig K; Boone, John M; Webster, Michael A

    2015-05-01

    Radiologists face the visually challenging task of detecting suspicious features within the complex and noisy backgrounds characteristic of medical images. We used a search task to examine whether the salience of target features in x-ray mammograms could be enhanced by prior adaptation to the spatial structure of the images. The observers were not radiologists, and thus had no diagnostic training with the images. The stimuli were randomly selected sections from normal mammograms previously classified with BIRADS Density scores of "fatty" versus "dense," corresponding to differences in the relative quantities of fat versus fibroglandular tissue. These categories reflect conspicuous differences in visual texture, with dense tissue being more likely to obscure lesion detection. The targets were simulated masses corresponding to bright Gaussian spots, superimposed by adding the luminance to the background. A single target was randomly added to each image, with contrast varied over five levels so that they varied from difficult to easy to detect. Reaction times were measured for detecting the target location, before or after adapting to a gray field or to random sequences of a different set of dense or fatty images. Observers were faster at detecting the targets in either dense or fatty images after adapting to the specific background type (dense or fatty) that they were searching within. Thus, the adaptation led to a facilitation of search performance that was selective for the background texture. Our results are consistent with the hypothesis that adaptation allows observers to more effectively suppress the specific structure of the background, thereby heightening visual salience and search efficiency.

  19. Image processing occupancy sensor

    DOEpatents

    Brackney, Larry J.

    2016-09-27

    A system and method of detecting occupants in a building automation system environment using image-based occupancy detection and position determination. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the number and position of the occupants, the system can finely control the elements to optimize conditions for the occupants and to optimize energy usage, among other advantages.

  20. Introduction to project ALIAS: adaptive-learning image analysis system

    NASA Astrophysics Data System (ADS)

    Bock, Peter

    1992-03-01

    As an alternative to preprogrammed rule-based artificial intelligence, collective learning systems theory postulates a hierarchical network of cellular automata which acquire their knowledge through learning based on a series of trial-and-error interactions with an evaluating environment, much as humans do. The input to the hierarchical network is provided by a set of sensors which perceive the external world. Using both this perceived information and past experience (memory), the learning automata synthesize collections of trial responses, periodically modifying their memories based on internal evaluations or external evaluations from the environment. Based on collective learning systems theory, an adaptive transputer-based image-processing engine comprising a three-layer hierarchical network of 32 learning cells and 33 nonlearning cells has been applied to a difficult image processing task: the scale-, phase-, and translation-invariant detection of anomalous features in otherwise 'normal' images. Known as the adaptive learning image analysis system (ALIAS), this parallel-processing engine has been constructed and tested at the Research Institute for Applied Knowledge Processing (FAW) in Ulm, Germany, under the sponsorship of Robert Bosch GmbH. Results demonstrate excellent detection, discrimination, and localization of anomalies in binary images. Recent enhancements include the ability to process gray-scale images and the automatic supervised segmentation and classification of images. Current research is directed toward the processing of time-series data and the hierarchical extension of ALIAS from the sub-symbolic level to the higher levels of symbolic association.

  1. Adaptive enhancement method of infrared image based on scene feature

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Bai, Tingzhu; Shang, Fei

    2008-12-01

    All objects emit radiation in amounts related to their temperature and their ability to emit radiation. An infrared image directly shows this otherwise invisible emitted radiation. Because of these advantages, infrared imaging technology is applied in many fields. Compared with visible images, however, the disadvantages of infrared images are obvious: low luminance, low contrast, and an inconspicuous difference between target and background. The aim of infrared image enhancement is to improve the interpretability or perception of information in an infrared image for human viewers, or to provide 'better' input for other automated image processing techniques. Most adaptive algorithms for infrared image enhancement are based mainly on the gray-scale distribution of the image and are not tied to the actual features of the imaged scene. Their enhancement is therefore poorly targeted, which limits their usefulness for infrared surveillance. In this paper we develop a scene-feature-based algorithm that adaptively enhances the contrast of infrared images. First, after analyzing the scene features of different infrared images, we choose feasible parameters to describe an infrared image. Second, we construct a new histogram distribution from the chosen parameters using a Gaussian function. Finally, the infrared image is enhanced by mapping it through this new histogram. Experimental results show that the algorithm performs better on infrared scene images than the other methods considered in this paper.
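    The Gaussian-target histogram construction described above belongs to the family of histogram-specification methods. The following is an illustrative NumPy sketch of plain histogram specification toward a Gaussian target; the parameters mu and sigma and the function name are placeholders, and the paper's actual construction is driven by its scene-feature parameters rather than fixed values.

```python
import numpy as np

def gaussian_histogram_match(img, mu=128.0, sigma=40.0, levels=256):
    """Map gray levels so the output histogram approximates a Gaussian."""
    # Empirical CDF of the input image.
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    src_cdf = np.cumsum(hist) / hist.sum()
    # CDF of the Gaussian target histogram over the same gray levels.
    g = np.exp(-0.5 * ((np.arange(levels) - mu) / sigma) ** 2)
    tgt_cdf = np.cumsum(g) / g.sum()
    # For each input level, find the target level with the nearest CDF value.
    lut = np.searchsorted(tgt_cdf, src_cdf).clip(0, levels - 1)
    return lut[img]
```

    Applied to a uniform-histogram image, the mapping compresses the tails and leaves mid-gray levels near mu, which is the intended contrast reshaping.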

  2. Investigations in adaptive processing of multispectral data

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.; Horwitz, H. M.

    1973-01-01

    Adaptive data processing procedures are applied to the problem of classifying objects in a scene scanned by a multispectral sensor. These procedures show a performance improvement over standard nonadaptive techniques. Some sources of classification error are identified, and those correctable by adaptive processing are discussed. Experiments in adapting signature means by decision-directed methods are described. Some of these methods assume correlation between the trajectories of different signature means; for others this assumption is not made.

  3. [An adaptive thresholding segmentation method for urinary sediment images].

    PubMed

    Li, Yongming; Zeng, Xiaoping; Qin, Jian; Han, Liang

    2009-02-01

    This paper proposes a new method for segmenting complicated defocused urinary sediment images. The main points of the method are: (1) using wavelet transforms and morphology to remove the effect of defocusing and perform a first segmentation; (2) applying adaptive threshold processing to the subimages produced by the wavelet processing; and (3) using a 'peel-off' algorithm to handle the segmentation of overlapping cells. Experimental results showed that the method is unaffected by defocusing and makes good use of many characteristics of the images, so it achieves very precise segmentation; it is effective for defocused urinary sediment image segmentation.

  4. An adaptive signal-processing approach to online adaptive tutoring.

    PubMed

    Bergeron, Bryan; Cline, Andrew

    2011-01-01

    Conventional intelligent or adaptive tutoring online systems rely on domain-specific models of learner behavior based on rules, deep domain knowledge, and other resource-intensive methods. We have developed and studied a domain-independent methodology of adaptive tutoring based on domain-independent signal-processing approaches that obviate the need for the construction of explicit expert and student models. A key advantage of our method over conventional approaches is a lower barrier to entry for educators who want to develop adaptive online learning materials.

  5. Programmable Image Processing Element

    NASA Astrophysics Data System (ADS)

    Eversole, W. L.; Salzman, J. F.; Taylor, F. V.; Harland, W. L.

    1982-07-01

    The algorithmic solution to many image-processing problems frequently uses sums of products, where each multiplicand is an input sample (pixel) and each multiplier is a stored coefficient. This paper presents a large-scale integrated circuit (LSIC) implementation that provides accumulation of nine products and discusses its evolution from design through application. A read-only memory (ROM) accumulate algorithm is used to perform the multiplications and is the key to one-chip implementation. The ROM function is actually implemented with erasable programmable ROM (EPROM) to allow reprogramming of the circuit to a variety of different functions. A real-time brassboard is being constructed to demonstrate four different image-processing operations on TV images.
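    The ROM-accumulate idea replaces nine parallel multiplies with one table lookup per bit plane: a 2^9-entry ROM stores, for every 9-bit address, the sum of those coefficients whose address bit is set, and shifted lookups are accumulated over the pixel bit planes. A behavioral sketch of this distributed-arithmetic scheme follows (unsigned samples are assumed; the function names are ours, not the paper's):

```python
def build_rom(coeffs):
    """For each address, precompute the sum of coefficients whose bit is set."""
    return [sum(c for i, c in enumerate(coeffs) if (addr >> i) & 1)
            for addr in range(1 << len(coeffs))]

def rom_accumulate(pixels, rom, bits=8):
    """Sum of products coeff[i] * pixel[i] using only lookups, shifts, adds."""
    acc = 0
    for b in range(bits):
        # Address = bit b of every pixel, packed into one word.
        addr = 0
        for i, p in enumerate(pixels):
            addr |= ((p >> b) & 1) << i
        acc += rom[addr] * (1 << b)
    return acc
```

    Because the ROM is just a table, reprogramming it to a different coefficient set changes the function of the whole circuit, which is exactly the flexibility the EPROM implementation provides.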

  6. Real-time adaptive video image enhancement

    NASA Astrophysics Data System (ADS)

    Garside, John R.; Harrison, Chris G.

    1999-07-01

    As part of a continuing collaboration between the University of Manchester and British Aerospace, a signal processing array has been constructed to demonstrate that it is feasible to compensate a video signal for the degradation caused by atmospheric haze in real-time. Previously reported work has shown good agreement between a simple physical model of light scattering by atmospheric haze and the observed loss of contrast. This model predicts a characteristic relationship between contrast loss in the image and the range from the camera to the scene. For an airborne camera, the slant-range to a point on the ground may be estimated from the airplane's pose, as reported by the inertial navigation system, and the contrast may be obtained from the camera's output. Fusing data from these two streams provides a means of estimating model parameters such as the visibility and the overall illumination of the scene. This knowledge allows the same model to be applied in reverse, thus restoring the contrast lost to atmospheric haze. An efficient approximation of range is vital for a real-time implementation of the method. Preliminary results show that an adaptive approach to fitting the model's parameters, exploiting the temporal correlation between video frames, leads to a robust implementation with a significantly accelerated throughput.
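    The single-scattering haze model this work builds on is commonly written I = J·t + A·(1 − t) with transmission t = exp(−β·range); restoring contrast amounts to inverting the model once the airlight A and extinction coefficient β have been estimated. A hedged NumPy sketch of that inversion (the parameter values are illustrative, not the paper's fitted ones):

```python
import numpy as np

def dehaze(observed, range_m, airlight=0.8, beta=2e-4):
    """Invert a simple haze model I = J*t + A*(1 - t), t = exp(-beta*range)."""
    t = np.exp(-beta * range_m)
    restored = (observed - airlight * (1.0 - t)) / t
    return np.clip(restored, 0.0, 1.0)
```

    A round trip through the forward model and this inversion recovers the scene radiance exactly, which is why an accurate per-pixel range estimate matters so much in the real-time system.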

  7. Image-Processing Program

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Hull, D. R.

    1994-01-01

    IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are sub-subroutines, also selected via the keyboard. The algorithm has possible scientific, industrial, and biomedical applications in the study of flows in materials, the analysis of steels and ores, and pathology, respectively.

  8. Image processing and reconstruction

    SciTech Connect

    Chartrand, Rick

    2012-06-15

    This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

  9. Image Processing for Teaching.

    ERIC Educational Resources Information Center

    Greenberg, R.; And Others

    1993-01-01

    The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…

  10. Photoreceptor processes in visual adaptation.

    PubMed

    Ripps, H; Pepperberg, D R

    1987-01-01

    In this paper we have stressed two experimental results in need of explanation: (i) the reduced efficacy with which (remaining, abundant) rhodopsin in the light-adapted receptor mediates the flash response; and (ii) the disparity in conditions of irradiation (weak background vs. extensive bleaching) leading to equivalent conditions of threshold. The model discussed above suggests, in molecular terms, a possible basis for both properties of receptor adaptation. On the view developed here, property (i) derives from the ability of photoactivated or bleached pigment (R or B) to restrict dramatically the availability of a substance required for phototransduction. Property (ii) derives in large part from the pronounced disparity in the effectiveness of R (during illumination) and B (remaining after illumination) in reducing the availability of this substance. On this view, the "equivalence" of threshold elevation in states of light- vs. dark-adaptation derives from an overall equality of a product of factors (Q, Etot/Es, and J of equation 2). Under all but extreme conditions, this aggregate of factors is dominated by the term Etot/Es, reflecting the functional state of E. PMID:3317149

  11. Digital image processing.

    PubMed

    Lo, Winnie Y; Puchalski, Sarah M

    2008-01-01

    Image processing or digital image manipulation is one of the greatest advantages of digital radiography (DR). Preprocessing depends on the modality and corrects for system irregularities such as differential light detection efficiency, dead pixels, or dark noise. Processing is manipulation of the raw data just after acquisition. It is generally proprietary and specific to the DR vendor but encompasses manipulations such as unsharp mask filtering within two or more spatial frequency bands, histogram sliding and stretching, and gray scale rendition or lookup table application. These processing steps have a profound effect on the final appearance of the radiograph, but they can also lead to artifacts unique to digital systems. Postprocessing refers to manipulation of the final appearance of the radiograph by the end-user and does not involve alteration of the raw data.

  12. Adaptive Optics Imaging in Laser Pointer Maculopathy.

    PubMed

    Sheyman, Alan T; Nesper, Peter L; Fawzi, Amani A; Jampol, Lee M

    2016-08-01

    The authors report multimodal imaging including adaptive optics scanning laser ophthalmoscopy (AOSLO) (Apaeros retinal image system AOSLO prototype; Boston Micromachines Corporation, Boston, MA) in a case of previously diagnosed unilateral acute idiopathic maculopathy (UAIM) that demonstrated features of laser pointer maculopathy. The authors also show the adaptive optics images of a laser pointer maculopathy case previously reported. A 15-year-old girl was referred for the evaluation of a maculopathy suspected to be UAIM. The authors reviewed the patient's history and obtained fluorescein angiography, autofluorescence, optical coherence tomography, infrared reflectance, and AOSLO. The time course of disease and clinical examination did not fit with UAIM, but the linear pattern of lesions was suspicious for self-inflicted laser pointer injury. This was confirmed on subsequent questioning of the patient. The presence of linear lesions in the macula that are best highlighted with multimodal imaging techniques should alert the physician to the possibility of laser pointer injury. AOSLO further characterizes photoreceptor damage in this condition. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:782-785.]. PMID:27548458

  13. Adaptive Optics Imaging in Laser Pointer Maculopathy.

    PubMed

    Sheyman, Alan T; Nesper, Peter L; Fawzi, Amani A; Jampol, Lee M

    2016-08-01

    The authors report multimodal imaging including adaptive optics scanning laser ophthalmoscopy (AOSLO) (Apaeros retinal image system AOSLO prototype; Boston Micromachines Corporation, Boston, MA) in a case of previously diagnosed unilateral acute idiopathic maculopathy (UAIM) that demonstrated features of laser pointer maculopathy. The authors also show the adaptive optics images of a laser pointer maculopathy case previously reported. A 15-year-old girl was referred for the evaluation of a maculopathy suspected to be UAIM. The authors reviewed the patient's history and obtained fluorescein angiography, autofluorescence, optical coherence tomography, infrared reflectance, and AOSLO. The time course of disease and clinical examination did not fit with UAIM, but the linear pattern of lesions was suspicious for self-inflicted laser pointer injury. This was confirmed on subsequent questioning of the patient. The presence of linear lesions in the macula that are best highlighted with multimodal imaging techniques should alert the physician to the possibility of laser pointer injury. AOSLO further characterizes photoreceptor damage in this condition. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:782-785.].

  14. Self-adaptive iris image acquisition system

    NASA Astrophysics Data System (ADS)

    Dong, Wenbo; Sun, Zhenan; Tan, Tieniu; Qiu, Xianchao

    2008-03-01

    Iris image acquisition is the fundamental step of iris recognition, but capturing high-resolution iris images in real time is very difficult. The most common systems have a small capture volume and demand that users fully cooperate with the machine, which has become the bottleneck for practical application of iris recognition. In this paper, we aim at building an active iris image acquisition system that is self-adaptive to users. Two low-resolution cameras are co-located on a pan-tilt unit (PTU), for face and iris image acquisition respectively. Once the face camera detects a face region in the real-time video, the system steers the PTU toward the eye region and automatically zooms until the iris camera captures a clear iris image for recognition. Compared with other similar work, our contribution is the use of low-resolution cameras, which can transmit image data much faster and are much cheaper than high-resolution cameras. In the system, we use Haar-like cascaded features to detect faces and eyes, a linear transformation to predict the iris camera's position, and a simple heuristic PTU control method to track the eyes. A prototype device has been built, and experiments show that the system can automatically capture a high-quality iris image within a volume of 0.6 m × 0.4 m × 0.4 m in 3 to 5 seconds on average.
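    The "linear transformation to predict the iris camera's position" can be realized as an affine map fitted by least squares from calibration pairs of eye pixel coordinates and PTU pan/tilt angles. An illustrative sketch, in which the calibration procedure, function names, and coordinate conventions are our assumptions rather than the paper's:

```python
import numpy as np

def fit_affine(eye_px, ptu_angles):
    """Least-squares affine map from eye pixel coords to PTU pan/tilt angles."""
    # Homogeneous coordinates [x, y, 1] allow an offset term in the map.
    A = np.hstack([eye_px, np.ones((len(eye_px), 1))])
    M, *_ = np.linalg.lstsq(A, ptu_angles, rcond=None)
    return M  # shape (3, 2): columns are pan and tilt

def predict_ptu(M, eye_xy):
    """Predict the PTU (pan, tilt) for one eye position in pixels."""
    return np.array([eye_xy[0], eye_xy[1], 1.0]) @ M
```

    With a handful of calibration points the fit is exact for a truly affine camera-to-PTU relationship; in practice the heuristic PTU controller would correct any residual error while tracking.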

  15. Optical Profilometers Using Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Hall, Gregory A.; Youngquist, Robert; Mikhael, Wasfy

    2006-01-01

    A method of adaptive signal processing has been proposed as the basis of a new generation of interferometric optical profilometers for measuring surfaces. The proposed profilometers would be portable, hand-held units. Sizes could be thus reduced because the adaptive-signal-processing method would make it possible to substitute lower-power coherent light sources (e.g., laser diodes) for white light sources and would eliminate the need for most of the optical components of current white-light profilometers. The adaptive-signal-processing method would make it possible to attain scanning ranges of the order of decimeters in the proposed profilometers.

  16. Perceived Image Quality Improvements from the Application of Image Deconvolution to Retinal Images from an Adaptive Optics Fundus Imager

    NASA Astrophysics Data System (ADS)

    Soliz, P.; Nemeth, S. C.; Erry, G. R. G.; Otten, L. J.; Yang, S. Y.

    Aim: The objective of this project was to apply an image restoration methodology based on wavefront measurements obtained with a Shack-Hartmann sensor and to evaluate the restored image quality against medical criteria. Methods: Implementing an adaptive optics (AO) technique, a fundus imager was used to achieve low-order correction to images of the retina. The high-order correction was provided by deconvolution. A Shack-Hartmann wavefront sensor measures aberrations. The wavefront measurement is the basis for activating a deformable mirror. Image restoration to remove the remaining aberrations is achieved by direct deconvolution using the point spread function (PSF) or by blind deconvolution. The PSF is estimated using the measured wavefront aberrations. Direct application of classical deconvolution methods such as inverse filtering, Wiener filtering, or iterative blind deconvolution (IBD) to the AO retinal images obtained from the adaptive optical imaging system is not satisfactory because of the very large image size, the difficulty of modeling the system noise, and inaccuracy in the PSF estimate. Our approach combines direct and blind deconvolution to exploit available system information, avoid non-convergence, and sidestep time-consuming iterative processes. Results: The deconvolution was applied to human subject data, and the resulting restored images were compared by a trained ophthalmic researcher. Qualitative analysis showed significant improvements. Neovascularization can be visualized with the adaptive optics device that cannot be resolved with the standard fundus camera. The individual nerve fiber bundles are easily resolved, as are melanin structures in the choroid. Conclusion: This project demonstrated that computer-enhanced, adaptive optics images have greater detail of anatomical and pathological structures.
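    Of the classical methods named above, Wiener filtering with a PSF estimated from the wavefront data is the most compact to illustrate. A minimal NumPy sketch follows; the noise-to-signal ratio nsr is a hand-set regularizer here, not the paper's noise model, and the function name is ours.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-3):
    """Wiener deconvolution: apply H* / (|H|^2 + NSR) in the Fourier domain.

    The PSF is assumed to be the same size as the image and centered."""
    H = np.fft.fft2(np.fft.ifftshift(psf / psf.sum()))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))
```

    With a small nsr and a well-conditioned PSF this undoes a known blur almost exactly; the regularizer exists to keep noise from being amplified at frequencies where the PSF transfer function is small.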

  17. Image processing technology

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Balick, L.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The primary objective of this project was to advance image processing and visualization technologies for environmental characterization. This was effected by developing and implementing analyses of remote sensing data from satellite and airborne platforms, and demonstrating their effectiveness in visualization of environmental problems. Many sources of information were integrated as appropriate using geographic information systems.

  18. Introduction to computer image processing

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    Theoretical background and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  19. Implementation of Multispectral Image Classification on a Remote Adaptive Computer

    NASA Technical Reports Server (NTRS)

    Figueiredo, Marco A.; Gloster, Clay S.; Stephens, Mark; Graves, Corey A.; Nakkar, Mouna

    1999-01-01

    As the demand for higher-performance computers for processing remote sensing science algorithms increases, investigating new computing paradigms is justified. Field Programmable Gate Arrays (FPGAs) enable the implementation of algorithms at the hardware gate level, leading to orders-of-magnitude performance increases over microprocessor-based systems. The automatic classification of spaceborne multispectral images is an example of a computation-intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm implemented on a typical general-purpose computer.
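    A probabilistic neural network of the kind used here classifies a pixel's spectral vector by comparing Gaussian-kernel (Parzen-window) density estimates, one per class, and picking the class with the highest score. An illustrative software sketch (the band values and the smoothing parameter sigma are made up; the FPGA version parallelizes exactly this computation):

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=1.0):
    """Assign x to the class with the largest Gaussian-kernel density."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        Xc = train_X[train_y == c]                     # training pixels of class c
        d2 = ((Xc - x) ** 2).sum(axis=1)               # squared distances to x
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean())
    return classes[int(np.argmax(scores))]
```

    The pattern layer (one exponential per training sample) dominates the cost, which is why a gate-level implementation that evaluates many kernels in parallel pays off.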

  20. Adaptive thresholding of digital subtraction angiography images

    NASA Astrophysics Data System (ADS)

    Sang, Nong; Li, Heng; Peng, Weixue; Zhang, Tianxu

    2005-10-01

    In clinical practice, digital subtraction angiography (DSA) is a powerful technique for the visualization of blood vessels in the human body. Blood vessel segmentation is a main problem for 3D vascular reconstruction. In this paper, we propose a new adaptive thresholding method for the segmentation of DSA images. Each pixel of the DSA images is declared to be a vessel/background point with regard to a threshold and a few local characteristic limits depending on some information contained in the pixel neighborhood window. The size of the neighborhood window is set according to a priori knowledge of the diameter of vessels to make sure that each window contains the background definitely. Some experiments on cerebral DSA images are given, which show that our proposed method yields better results than global thresholding methods and some other local thresholding methods do.
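    The per-pixel decision described above can be sketched directly: a pixel is declared vessel if it stands out from its neighborhood statistics by some margin, with the window sized from the expected vessel diameter so that each window surely contains background. In this sketch the threshold offset and window size are illustrative, and vessels are assumed dark against a brighter background, as in subtracted angiograms:

```python
import numpy as np

def local_threshold(img, win=15, offset=10.0):
    """Mark a pixel as vessel if it is darker than its window mean by offset.

    win should exceed the largest vessel diameter so every window
    is guaranteed to contain background pixels."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    out = np.zeros(img.shape, dtype=bool)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + win, j:j + win]
            out[i, j] = img[i, j] < window.mean() - offset
    return out
```

    A production version would precompute the window means with an integral image instead of looping, but the decision rule is the same.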

  1. Digital Image Watermarking via Adaptive Logo Texturization.

    PubMed

    Andalibi, Mehran; Chandler, Damon M

    2015-12-01

    Grayscale logo watermarking is a quite well-developed area of digital image watermarking which seeks to embed into the host image another smaller logo image. The key advantage of such an approach is the ability to visually analyze the extracted logo for rapid visual authentication and other visual tasks. However, logos pose new challenges for invisible watermarking applications which need to keep the watermark imperceptible within the host image while simultaneously maintaining robustness to attacks. This paper presents an algorithm for invisible grayscale logo watermarking that operates via adaptive texturization of the logo. The central idea of our approach is to recast the watermarking task into a texture similarity task. We first separate the host image into sufficiently textured and poorly textured regions. Next, for textured regions, we transform the logo into a visually similar texture via the Arnold transform and one lossless rotation; whereas for poorly textured regions, we use only a lossless rotation. The iteration for the Arnold transform and the angle of lossless rotation are determined by a model of visual texture similarity. Finally, for each region, we embed the transformed logo into that region via a standard wavelet-based embedding scheme. We employ a multistep extraction stage, in which an affine parameter estimation is first performed to compensate for possible geometrical transformations. Testing with multiple logos on a database of host images and under a variety of attacks demonstrates that the proposed algorithm yields better overall performance than competing methods.
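    The Arnold transform used for logo texturization is the standard area-preserving cat map on a square image: pixel (x, y) moves to (x + y, x + 2y) mod N. Iterating it scrambles the logo into a texture-like pattern while remaining exactly invertible. A small sketch (the iteration count is the parameter the paper chooses via its texture-similarity model):

```python
import numpy as np

def arnold(img, iterations=1):
    """Apply the Arnold cat map (x, y) -> (x + y, x + 2y) mod N to a square image."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out
```

    Because the map is a bijection on the pixel grid, no gray values are lost: the output is a pure permutation of the input, which is what allows exact logo recovery at extraction time.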

  2. Speckle reduction in ultrasound images using nonisotropic adaptive filtering.

    PubMed

    Eom, Kie B

    2011-10-01

    In this article, a speckle reduction approach for ultrasound imaging that preserves important features such as edges, corners and point targets is presented. Speckle reduction is an important problem in coherent imaging, such as ultrasound imaging or synthetic aperture radar, and many speckle reduction algorithms have been developed. Speckle is a non-additive and non-white process and the reduction of speckle without blurring sharp features is known to be difficult. The new speckle reduction algorithm presented in this article utilizes a nonhomogeneous filter that adapts to the proximity and direction of the nearest important features. To remove speckle without blurring important features, the location and direction of edges in the image are estimated. Then for each pixel in the image, the distance and angle to the nearest edge are efficiently computed by a two-pass algorithm and stored in distance and angle maps. Finally for each pixel, an adaptive directional filter aligned to the nearest edge is applied. The shape and orientation of the adaptive filter are determined from the distance and angle maps. The new speckle reduction algorithm is tested with both synthesized and real ultrasound images. The performance of the new algorithm is also compared with those of other speckle reduction approaches and it is shown that the new algorithm performs favorably in reducing speckle without blurring important features.
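    The two-pass computation of the distance map can be illustrated with a chamfer distance transform, which approximates Euclidean distance to the nearest edge pixel with one forward and one backward raster scan (weights 1 and 1.4 for axial and diagonal steps). This is a generic sketch of the technique, not the article's exact implementation, and the companion angle map is omitted for brevity:

```python
import numpy as np

def chamfer_distance(edges):
    """Two-pass chamfer approximation of distance to the nearest edge pixel."""
    INF = 1e9
    d = np.where(edges, 0.0, INF)
    h, w = edges.shape
    # Forward pass: propagate from the top-left.
    for i in range(h):
        for j in range(w):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 1)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i - 1, j - 1] + 1.4)
                if j < w - 1:
                    d[i, j] = min(d[i, j], d[i - 1, j + 1] + 1.4)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    # Backward pass: propagate from the bottom-right.
    for i in range(h - 1, -1, -1):
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 1)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i + 1, j - 1] + 1.4)
                if j < w - 1:
                    d[i, j] = min(d[i, j], d[i + 1, j + 1] + 1.4)
            if j < w - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d
```

    The two scans together cover all propagation directions, so each pixel ends up with the cheapest chamfer path to any edge pixel; the filter shape and orientation are then looked up from this map (and the angle map) per pixel.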

  3. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921
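    A minimal usage sketch of the library described above (thresholding followed by connected-component labelling; the synthetic two-blob image is illustrative):

```python
import numpy as np
from skimage import filters, measure

# synthetic image with two bright square objects on a dark background
img = np.zeros((20, 20))
img[2:8, 2:8] = 1.0
img[12:18, 12:18] = 1.0

# Otsu's method picks a global threshold from the histogram
t = filters.threshold_otsu(img)

# label connected foreground regions
labels = measure.label(img > t)
n_objects = labels.max()
```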

  4. scikit-image: image processing in Python.

    PubMed

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  5. Adaptive Optics Imaging of Solar System Objects

    NASA Technical Reports Server (NTRS)

    Roddier, Francois; Owen, Toby

    1997-01-01

    Most solar system objects have never been observed at wavelengths longer than the R band with an angular resolution better than 1 arcsec. The Hubble Space Telescope itself has only recently been equipped to observe in the infrared. However, because of its small diameter, its angular resolution is lower than what can now be achieved from the ground with adaptive optics, and time allocated to planetary science is limited. We have been using adaptive optics (AO) on a 4-m class telescope to obtain 0.1 arcsec resolution images of solar system objects at far-red and near-infrared wavelengths (0.7-2.5 micron), which best discriminate their spectral signatures. Our efforts have been put into areas of research for which high angular resolution is essential, such as the mapping of Titan and of large asteroids, the dynamics and composition of Neptune's stratospheric clouds, and the infrared photometry of Pluto, Charon, and close satellites previously undetected from the ground.

  6. Adaptive Optics Imaging of Solar System Objects

    NASA Technical Reports Server (NTRS)

    Roddier, Francois; Owen, Toby

    1999-01-01

    Most solar system objects have never been observed at wavelengths longer than the R band with an angular resolution better than 1". The Hubble Space Telescope itself has only recently been equipped to observe in the infrared. However, because of its small diameter, its angular resolution is lower than what can now be achieved from the ground with adaptive optics, and time allocated to planetary science is limited. We have successfully used adaptive optics on a 4-m class telescope to obtain 0.1" resolution images of solar system objects in the far red and near infrared (0.7-2.5 microns), at wavelengths which best discriminate their spectral signatures. Our efforts have been put into areas of research for which high angular resolution is essential.

  7. Adaptive optics retinal imaging: emerging clinical applications.

    PubMed

    Godara, Pooja; Dubis, Adam M; Roorda, Austin; Duncan, Jacque L; Carroll, Joseph

    2010-12-01

    The human retina is a uniquely accessible tissue. Tools like scanning laser ophthalmoscopy and spectral domain-optical coherence tomography provide clinicians with remarkably clear pictures of the living retina. Although the anterior optics of the eye permit such non-invasive visualization of the retina and associated pathology, the same optics induce significant aberrations that preclude cellular-resolution imaging in most cases. Adaptive optics (AO) imaging systems use active optical elements to compensate for aberrations in the optical path between the object and the camera. When applied to the human eye, AO allows direct visualization of individual rod and cone photoreceptor cells, retinal pigment epithelium cells, and white blood cells. AO imaging has changed the way vision scientists and ophthalmologists see the retina, helping to clarify our understanding of retinal structure, function, and the etiology of various retinal pathologies. Here, we review some of the advances that were made possible with AO imaging of the human retina and discuss applications and future prospects for clinical imaging.

  8. Speckle image reconstruction of the adaptive optics solar images.

    PubMed

    Zhong, Libo; Tian, Yu; Rao, Changhui

    2014-11-17

    Speckle image reconstruction, in which the speckle transfer function (STF) is modeled as an annular distribution according to the angular dependence of adaptive optics (AO) compensation and the individual STF in each annulus is obtained from the corresponding Fried parameter calculated with the traditional spectral ratio method, is used in this paper to restore solar images corrected by an AO system. Reconstructions of solar images acquired with a 37-element AO system validate this method, and the image quality is clearly improved. Moreover, we found that the photometric accuracy of the reconstruction is field dependent due to the influence of the AO correction. As the angular separation of the object from the AO lockpoint increases, the relative improvement grows and tends toward a constant level in regions far from the central field of view. Simulation results show that this phenomenon is mainly due to the discrepancy between the calculated STF and the real, angle-dependent AO STF.

  9. Adaptive texture filtering for defect inspection in ultrasound images

    NASA Astrophysics Data System (ADS)

    Zmola, Carl; Segal, Andrew C.; Lovewell, Brian; Nash, Charles

    1993-05-01

    The use of ultrasonic imaging to analyze defects and characterize materials is critical in the development of non-destructive testing and non-destructive evaluation (NDT/NDE) tools for manufacturing. To develop better quality control and reliability in the manufacturing environment, advanced image processing techniques are useful. For example, through the use of texture filtering on ultrasound images, we have been able to filter characteristic textures from highly-textured C-scan images of materials. The materials have highly regular characteristic textures which are of the same resolution and dynamic range as other important features within the image. By applying texture filters and adaptively modifying their filter response, we have examined a family of filters for removing these textures.

  10. Image Processing Diagnostics: Emphysema

    NASA Astrophysics Data System (ADS)

    McKenzie, Alex

    2009-10-01

    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images clearly show whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, as it appears merely as subtle, barely distinct, dark spots on the lung. Our goal is to create a software plug-in that interfaces with existing open source medical imaging software to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods, which involve looking at percentages of radiodensities in air passages of the lung.
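    The skewness statistic mentioned above can be computed directly from the voxel intensities; this is a generic sample-skewness sketch, not the authors' plug-in:

```python
import numpy as np

def sample_skewness(values):
    """Third standardized moment of an intensity distribution.
    A shift of histogram mass toward very low attenuation (as in
    emphysematous lung) changes the distribution's asymmetry."""
    x = np.asarray(values, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))
```

    A symmetric distribution has skewness zero; a long tail toward high values gives positive skewness, and toward low values negative skewness.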

  11. Adaptive image enhancement of text images that contain touching or broken characters

    SciTech Connect

    Stubberud, P.; Kalluri, V.; Kanai, J.

    1994-11-29

    Text images that contain touching or broken characters can significantly degrade the accuracy of optical character recognition (OCR) systems. This paper proposes an adaptive image restoration technique that can improve OCR accuracy by enhancing touching or broken character images. The technique begins by processing a distorted text image with an OCR system. Using the distorted text image and output information from the OCR system, an inverse model of the distortion that caused the touching or broken character problem is generated. After generating the inverse model, the unrecognized distorted characters are filtered by the inverse model and then processed by the OCR system. To demonstrate its feasibility, six distorted text images were processed using this technique. Four of the text images, two with touching characters and two with broken characters, were synthesized using mathematical distortion models. The remaining two distorted text images, one with touching characters and one with broken characters, were distorted using a photocopier. The performance of the adaptive image restoration technique was measured using pixel accuracy and OCR improvement. The examples demonstrate that this technique can improve both the pixel and OCR accuracy of text images containing touching or broken characters.

  12. Computer image processing and recognition

    NASA Technical Reports Server (NTRS)

    Hall, E. L.

    1979-01-01

    A systematic introduction to the concepts and techniques of computer image processing and recognition is presented. Consideration is given to such topics as image formation and perception; computer representation of images; image enhancement and restoration; reconstruction from projections; digital television, encoding, and data compression; scene understanding; scene matching and recognition; and processing techniques for linear systems.

  13. Image processing and recognition for biological images

    PubMed Central

    Uchida, Seiichi

    2013-01-01

    This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area aimed at improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique of classifying an input image into one of the predefined classes, and it is also a large research area. This paper overviews its two main modules, namely the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noise, deformations, etc. This paper is expected to serve as a tutorial guide to bridge biology and image processing researchers for further collaboration in tackling such a difficult target. PMID:23560739

  14. Image quality-based adaptive illumination normalisation for face recognition

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2009-05-01

    Automatic face recognition is a challenging task due to intra-class variations. Changes in lighting conditions during the enrolment and identification stages contribute significantly to these intra-class variations. A common approach to address the effects of such varying conditions is to pre-process the biometric samples in order to normalise intra-class variations. Histogram equalisation is a widely used illumination normalisation technique in face recognition. However, a recent study has shown that applying histogram equalisation to well-lit face images could lead to a decrease in recognition accuracy. This paper presents a dynamic approach to illumination normalisation, based on face image quality. The quality of a given face image is measured in terms of its luminance distortion by comparing the image against a known reference face image. Histogram equalisation is applied to a probe image only if its luminance distortion is higher than a predefined threshold. We tested the proposed adaptive illumination normalisation method on the widely used Extended Yale Face Database B. Identification results demonstrate that our adaptive normalisation produces better identification accuracy than the conventional approach, where every image is normalised irrespective of the lighting conditions under which it was acquired.
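    A hedged sketch of the gating idea: measure luminance distortion against a reference image (here, the luminance term of the universal image quality index) and equalise only when quality is poor. The function names and threshold value are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def luminance_quality(img, ref):
    """Luminance component of the universal image quality index:
    1.0 when the mean brightness matches the reference, lower otherwise."""
    mx, my = float(img.mean()), float(ref.mean())
    return 2.0 * mx * my / (mx * mx + my * my)

def equalize(img):
    """Plain histogram equalisation for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]

def adaptive_normalise(probe, reference, threshold=0.95):
    """Equalise only probes whose luminance quality falls below threshold,
    leaving well-lit images untouched."""
    if luminance_quality(probe, reference) < threshold:
        return equalize(probe)
    return probe
```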

  15. Adaptive filtering in biological signal processing.

    PubMed

    Iyer, V K; Ploysongsang, Y; Ramamoorthy, P A

    1990-01-01

    The high dependence of conventional optimal filtering methods on a priori knowledge of the signal and noise statistics renders them ineffective in dealing with signals whose statistics cannot be predetermined accurately. Adaptive filtering methods offer a better alternative, since a priori knowledge of statistics is less critical, real time processing is possible, and the computations are less expensive for this approach. Adaptive filtering methods compute the filter coefficients "on-line", converging to the optimal values in the least-mean square (LMS) error sense. Adaptive filtering is therefore apt for dealing with the "unknown" statistics situation and has been applied extensively in areas like communication, speech, radar, sonar, seismology, and biological signal processing and analysis for channel equalization, interference and echo canceling, line enhancement, signal detection, system identification, spectral analysis, beamforming, modeling, control, etc. In this article, adaptive filtering in the context of biological signals is reviewed. An intuitive approach to the underlying theory of adaptive filters and its applicability are presented. Applications of the principles in biological signal processing are discussed in a manner that brings out the key ideas involved. Current and potential future directions in adaptive biological signal processing are also discussed.
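    The on-line LMS weight update described above can be sketched in a few lines; the tap count and step size below are illustrative:

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.01):
    """Least-mean-squares adaptive filter: estimate the desired signal d
    from the input x, updating the tap weights on-line by stochastic
    gradient descent on the instantaneous squared error."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # most recent sample first
        y[n] = w @ u                        # filter output
        e[n] = d[n] - y[n]                  # estimation error
        w = w + 2.0 * mu * e[n] * u         # LMS weight update
    return y, e, w
```

    Used for system identification (d produced by an unknown FIR system driven by x), the weights converge toward the unknown system's coefficients, which is the same mechanism exploited for interference canceling in biological signals.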

  16. Smart Image Enhancement Process

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

    2012-01-01

    Contrast and lightness measures are used to first classify the image as either non-turbid or turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.

  17. Retinal imaging using adaptive optics technology☆

    PubMed Central

    Kozak, Igor

    2014-01-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wavefront distortions. Retinal imaging using AO aims to compensate for higher-order aberrations originating from the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel walls and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the availability of the first commercial instruments, AO technology is making the transformation from a research tool to a diagnostic instrument. The current challenges include imaging eyes with less than perfect optical media, the formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. The opportunities for AO will include more detailed diagnosis, with the description of new findings in retinal diseases and glaucoma, as well as the expansion of AO into clinical trials, which has already begun. PMID:24843304

  18. Retinal imaging using adaptive optics technology.

    PubMed

    Kozak, Igor

    2014-04-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wavefront distortions. Retinal imaging using AO aims to compensate for higher-order aberrations originating from the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel walls and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the availability of the first commercial instruments, AO technology is making the transformation from a research tool to a diagnostic instrument. The current challenges include imaging eyes with less than perfect optical media, the formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. The opportunities for AO will include more detailed diagnosis, with the description of new findings in retinal diseases and glaucoma, as well as the expansion of AO into clinical trials, which has already begun. PMID:24843304

  19. Extreme Adaptive Optics Planet Imager: XAOPI

    SciTech Connect

    Macintosh, B A; Graham, J; Poyneer, L; Sommargren, G; Wilhelmsen, J; Gavel, D; Jones, S; Kalas, P; Lloyd, J; Makidon, R; Olivier, S; Palmer, D; Patience, J; Perrin, M; Severson, S; Sheinis, A; Sivaramakrishnan, A; Troy, M; Wallace, K

    2003-09-17

    Ground based adaptive optics is a potentially powerful technique for direct imaging detection of extrasolar planets. Turbulence in the Earth's atmosphere imposes some fundamental limits, but the large size of ground-based telescopes compared to spacecraft can work to mitigate this. We are carrying out a design study for a dedicated ultra-high-contrast system, the eXtreme Adaptive Optics Planet Imager (XAOPI), which could be deployed on an 8-10m telescope in 2007. With a 4096-actuator MEMS deformable mirror it should achieve Strehl >0.9 in the near-IR. Using an innovative spatially filtered wavefront sensor, the system will be optimized to control scattered light over a large radius and suppress artifacts caused by static errors. We predict that it will achieve contrast levels of 10^7-10^8 at angular separations of 0.2-0.8 arcseconds around a large sample of stars (R<7-10), sufficient to detect Jupiter-like planets through their near-IR emission over a wide range of ages and masses. We are constructing a high-contrast AO testbed to verify key concepts of our system, and present preliminary results here, showing an RMS wavefront error of <1.3 nm with a flat mirror.

  20. ASPIC: STARLINK image processing package

    NASA Astrophysics Data System (ADS)

    Davenhall, A. C.; Hartley, Ken F.; Penny, Alan J.; Kelly, B. D.; King, Dave J.; Lupton, W. F.; Tudhope, D.; Pike, C. D.; Cooke, J. A.; Pence, W. D.; Wallace, Patrick T.; Brownrigg, D. R. K.; Baines, Dave W. T.; Warren-Smith, Rodney F.; McNally, B. V.; Bell, L. L.; Jones, T. A.; Terrett, Dave L.; Pearce, D. J.; Carey, J. V.; Currie, Malcolm J.; Benn, Chris; Beard, S. M.; Giddings, Jack R.; Balona, Luis A.; Harrison, B.; Wood, Roger; Sparkes, Bill; Allan, Peter M.; Berry, David S.; Shirt, J. V.

    2015-10-01

    ASPIC handled basic astronomical image processing. Early releases concentrated on image arithmetic, standard filters, expansion/contraction/selection/combination of images, and displaying and manipulating images on the ARGS and other devices. Later releases added new astronomy-specific applications to this sound framework. The ASPIC collection of about 400 image-processing programs was written using the Starlink "interim" environment in the 1980s; the software is now obsolete.

  1. Processing Visual Images

    SciTech Connect

    Litke, Alan

    2006-03-27

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  2. A model for radar images and its application to adaptive digital filtering of multiplicative noise.

    PubMed

    Frost, V S; Stiles, J A; Shanmugan, K S; Holtzman, J C

    1982-02-01

    Standard image processing techniques which are used to enhance noncoherent optically produced images are not applicable to radar images due to the coherent nature of the radar imaging process. A model for the radar imaging process is derived in this paper and a method for smoothing noisy radar images is also presented. The imaging model shows that the radar image is corrupted by multiplicative noise. The model leads to the functional form of an optimum (minimum MSE) filter for smoothing radar images. By using locally estimated parameter values the filter is made adaptive so that it provides minimum MSE estimates inside homogeneous areas of an image while preserving the edge structure. It is shown that the filter can be easily implemented in the spatial domain and is computationally efficient. The performance of the adaptive filter is compared (qualitatively and quantitatively) with several standard filters using real and simulated radar images.
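    A simplified local-statistics MMSE filter in the same spirit can be sketched as follows. This is a Lee-style sketch rather than the paper's exact Frost-type filter; the window size and noise-variance parameter are illustrative assumptions:

```python
import numpy as np

def local_mmse_filter(img, win=5, noise_var=0.01):
    """Adaptive minimum-MSE filter for multiplicative noise:
    x_hat = local mean + k * (pixel - local mean), where the gain k
    shrinks toward 0 in homogeneous areas (heavy smoothing) and toward
    1 near edges (little smoothing), preserving edge structure."""
    f = img.astype(float)
    pad = win // 2
    p = np.pad(f, pad, mode='reflect')
    out = np.empty_like(f)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            patch = p[i:i + win, j:j + win]
            mu, var = patch.mean(), patch.var()
            sn = noise_var * mu * mu  # multiplicative noise variance scales with mean^2
            k = max(var - sn, 0.0) / var if var > 0 else 0.0
            out[i, j] = mu + k * (f[i, j] - mu)
    return out
```

    In homogeneous areas the local variance is explained entirely by speckle, so k is near zero and the output approaches the local mean; at edges the local variance greatly exceeds the noise term, so the pixel is left nearly unchanged.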

  3. Adaptive Intuitionistic Fuzzy Enhancement of Brain Tumor MR Images

    PubMed Central

    Deng, He; Deng, Wankai; Sun, Xianping; Ye, Chaohui; Zhou, Xin

    2016-01-01

    Image enhancement techniques are able to improve the contrast and visual quality of magnetic resonance (MR) images. However, conventional methods cannot make up for some deficiencies encountered by the respective brain tumor MR imaging modes. In this paper, we propose an adaptive intuitionistic fuzzy sets-based scheme, called AIFE, which takes information provided by different MR acquisitions and tries to enhance the normal and abnormal structural regions of the brain while displaying the enhanced results as a single image. The AIFE scheme first separates an input image into several sub-images, then divides each sub-image into object and background areas. After that, different novel fuzzification, hyperbolization and defuzzification operations are implemented on each object/background area, and finally an enhanced result is achieved via nonlinear fusion operators. The fuzzy implementations can be processed in parallel. Real-data experiments demonstrate that the AIFE scheme not only effectively fuses information from images acquired with different MR sequences into a single image, but also has better enhancement performance than conventional baseline algorithms. This indicates that the proposed AIFE scheme has potential for improving the detection and diagnosis of brain tumors. PMID:27786240

  4. Image processing in digital radiography.

    PubMed

    Freedman, M T; Artz, D S

    1997-01-01

    Image processing is a critical part of obtaining high-quality digital radiographs. Fortunately, the user of these systems does not need to understand image processing in detail, because the manufacturers provide good starting values. Because radiologists may have different preferences in image appearance, it is helpful to know that many aspects of image appearance can be changed by image processing, and a new preferred setting can be loaded into the computer and saved so that it can become the new standard processing method. Image processing allows one to change the overall optical density of an image and to change its contrast. Spatial frequency processing allows an image to be sharpened, improving its appearance. It also allows noise to be blurred so that it is less visible. Care is necessary to avoid the introduction of artifacts or the hiding of mediastinal tubes.
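    The spatial-frequency sharpening mentioned above is commonly realised as unsharp masking; a generic sketch (the kernel size and gain are illustrative, not a vendor's processing parameters):

```python
import numpy as np

def box_blur(img, k=3):
    """k x k mean filter implemented as shifted sums over an
    edge-padded copy of the image."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w))
    for di in range(k):
        for dj in range(k):
            out += p[di:di + h, dj:dj + w]
    return out / (k * k)

def unsharp_mask(img, k=3, amount=1.0):
    """Sharpen by adding back the high-frequency residual
    img - blur(img), scaled by `amount`."""
    return img + amount * (img.astype(float) - box_blur(img, k))
```

    Increasing `amount` strengthens edge overshoot (apparent sharpness); blurring before subtraction is also the mechanism by which noise can be suppressed rather than amplified, as noted in the abstract.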

  5. An adaptive optics imaging system designed for clinical use.

    PubMed

    Zhang, Jie; Yang, Qiang; Saito, Kenichi; Nozato, Koji; Williams, David R; Rossi, Ethan A

    2015-06-01

    Here we demonstrate a new imaging system that addresses several major problems limiting the clinical utility of conventional adaptive optics scanning light ophthalmoscopy (AOSLO), including its small field of view (FOV), reliance on patient fixation for targeting imaging, and substantial post-processing time. We previously showed an efficient image based eye tracking method for real-time optical stabilization and image registration in AOSLO. However, in patients with poor fixation, eye motion causes the FOV to drift substantially, causing this approach to fail. We solve that problem here by tracking eye motion at multiple spatial scales simultaneously by optically and electronically integrating a wide FOV SLO (WFSLO) with an AOSLO. This multi-scale approach, implemented with fast tip/tilt mirrors, has a large stabilization range of ± 5.6°. Our method consists of three stages implemented in parallel: 1) coarse optical stabilization driven by a WFSLO image, 2) fine optical stabilization driven by an AOSLO image, and 3) sub-pixel digital registration of the AOSLO image. We evaluated system performance in normal eyes and diseased eyes with poor fixation. Residual image motion with incremental compensation after each stage was: 1) ~2-3 arc minutes (arcmin), 2) ~0.5-0.8 arcmin, and 3) ~0.05-0.07 arcmin, for normal eyes. Performance in eyes with poor fixation was: 1) ~3-5 arcmin, 2) ~0.7-1.1 arcmin, and 3) ~0.07-0.14 arcmin. We demonstrate that this system is capable of reducing image motion by a factor of ~400, on average. This new optical design provides additional benefits for clinical imaging, including a steering subsystem for AOSLO that can be guided by the WFSLO to target specific regions of interest such as retinal pathology and real-time averaging of registered images to eliminate image post-processing.

  6. An adaptive optics imaging system designed for clinical use.

    PubMed

    Zhang, Jie; Yang, Qiang; Saito, Kenichi; Nozato, Koji; Williams, David R; Rossi, Ethan A

    2015-06-01

    Here we demonstrate a new imaging system that addresses several major problems limiting the clinical utility of conventional adaptive optics scanning light ophthalmoscopy (AOSLO), including its small field of view (FOV), reliance on patient fixation for targeting imaging, and substantial post-processing time. We previously showed an efficient image based eye tracking method for real-time optical stabilization and image registration in AOSLO. However, in patients with poor fixation, eye motion causes the FOV to drift substantially, causing this approach to fail. We solve that problem here by tracking eye motion at multiple spatial scales simultaneously by optically and electronically integrating a wide FOV SLO (WFSLO) with an AOSLO. This multi-scale approach, implemented with fast tip/tilt mirrors, has a large stabilization range of ± 5.6°. Our method consists of three stages implemented in parallel: 1) coarse optical stabilization driven by a WFSLO image, 2) fine optical stabilization driven by an AOSLO image, and 3) sub-pixel digital registration of the AOSLO image. We evaluated system performance in normal eyes and diseased eyes with poor fixation. Residual image motion with incremental compensation after each stage was: 1) ~2-3 arc minutes (arcmin), 2) ~0.5-0.8 arcmin, and 3) ~0.05-0.07 arcmin, for normal eyes. Performance in eyes with poor fixation was: 1) ~3-5 arcmin, 2) ~0.7-1.1 arcmin, and 3) ~0.07-0.14 arcmin. We demonstrate that this system is capable of reducing image motion by a factor of ~400, on average. This new optical design provides additional benefits for clinical imaging, including a steering subsystem for AOSLO that can be guided by the WFSLO to target specific regions of interest such as retinal pathology and real-time averaging of registered images to eliminate image post-processing. PMID:26114033

  7. Adaptive Optics Imaging of Exoplanet Host Stars

    NASA Astrophysics Data System (ADS)

    Herman, Miranda; Waaler, Mason; Patience, Jennifer; Ward-Duong, Kimberly; Rajan, Abhijith; McCarthy, Don; Kulesa, Craig; Wilson, Paul A.

    2016-01-01

    With the Arizona Infrared imager and Echelle Spectrograph (ARIES) instrument on the 6.5m MMT telescope, we obtained high angular resolution adaptive optics images of 12 exoplanet host stars. The targets are all systems with exoplanets in extremely close orbits such that the planets transit the host stars and cause regular brightness changes in the stars. The transit depth of the light curve is used to infer the radius and, in combination with radial velocity measurements, the density of the planet, but the results can be biased if the light from the host star is the combined light of a pair of stars in a binary system or a chance alignment of two stars. Given the high frequency of binary star systems and the increasing number of transit exoplanet discoveries from Kepler, K2, and anticipated discoveries with the Transiting Exoplanet Survey Satellite (TESS), this is a crucial point to consider when interpreting exoplanet properties. Companions were identified around five of the twelve targets at separations close enough that the brightness measurements of these host stars are in fact the combined brightness of two stars. Images of the resolved stellar systems and reanalysis of the exoplanet properties accounting for the presence of two stars are presented.
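    The reanalysis described above hinges on a standard dilution correction: if an unresolved companion contributes flux, the measured transit depth is shallower than the true depth, so the planet radius is underestimated. A hedged sketch (assuming the planet orbits the brighter star; the function name is illustrative):

```python
import math

def corrected_planet_radius(rp_over_rs, delta_mag):
    """Correct a transit-derived planet/star radius ratio for flux
    dilution by an unresolved companion that is delta_mag magnitudes
    fainter. Observed depth = true depth * F_primary / (F_primary +
    F_companion), so the true radius ratio is larger by sqrt(dilution)."""
    flux_ratio = 10 ** (-0.4 * delta_mag)  # companion flux / primary flux
    dilution = 1.0 + flux_ratio
    return rp_over_rs * math.sqrt(dilution)
```

    For an equal-brightness companion the radius correction is a factor of sqrt(2) (~41%), while a companion 5 magnitudes fainter changes the radius by well under a percent.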

  8. Adaptive registration of diffusion tensor images on lie groups

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Chen, LeiTing; Cai, HongBin; Qiu, Hang; Fei, Nanxi

    2016-08-01

    Diffusion tensor imaging (DTI) provides detailed information on tissue microstructure for medical image processing. In this paper, we present a locally adaptive topology-preserving method for DTI registration on Lie groups. The method aims to obtain more plausible diffeomorphisms for spatial transformations via accurate approximation of the local tangent space on the Lie group manifold. In order to capture the exact geometric structure of the Lie group, the local linear approximation is efficiently optimized by adaptively selecting the local neighborhood sizes on the given set of data points. Furthermore, numerical comparative experiments are conducted on both synthetic data and real DTI data to demonstrate that the proposed method yields a higher degree of topology preservation on a dense deformation tensor field while improving the registration accuracy.

  9. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Medical ultrasound (US) imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first-fold process reduces speckle effectively but also blurs objects of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion. The degraded objects in the block-thresholded US image are restored through wavelet coefficient fusion of the objects in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal to noise ratio (PSNR), normalized cross correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images are provided by AMMA hospital radiology labs at Vijayawada, India. PMID:26697285
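
    The first fold, block-wise thresholding of wavelet detail coefficients, can be sketched as follows. The one-level Haar transform and the per-block robust threshold (a median-based noise estimate) are illustrative stand-ins for the paper's BHT/BST rules; the helper names are hypothetical.

```python
import numpy as np

def haar2(x):
    """One-level orthonormal 2-D Haar transform (rows then columns)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2 (exact reconstruction)."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2] = (ll + lh) / np.sqrt(2); a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d[:, 0::2] = (hl + hh) / np.sqrt(2); d[:, 1::2] = (hl - hh) / np.sqrt(2)
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2] = (a + d) / np.sqrt(2); x[1::2] = (a - d) / np.sqrt(2)
    return x

def block_soft_threshold(band, block=8):
    """Soft-threshold a detail subband in non-overlapping blocks,
    with a per-block threshold from a robust noise estimate
    (illustrative choice, not the paper's exact rule)."""
    out = band.copy()
    for i in range(0, band.shape[0], block):
        for j in range(0, band.shape[1], block):
            blk = band[i:i+block, j:j+block]
            t = np.median(np.abs(blk)) / 0.6745 * np.sqrt(2)
            out[i:i+block, j:j+block] = np.sign(blk) * np.maximum(np.abs(blk) - t, 0)
    return out
```

    Denoising then amounts to transforming, thresholding the detail bands block by block, and inverting the transform.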

  10. Adaptive Optics Imaging and Spectroscopy of Neptune

    NASA Technical Reports Server (NTRS)

    Johnson, Lindley (Technical Monitor); Sromovsky, Lawrence A.

    2005-01-01

    OBJECTIVES: We proposed to use high spectral resolution imaging and spectroscopy of Neptune in visible and near-IR spectral ranges to advance our understanding of Neptune's cloud structure. We intended to use the adaptive optics (AO) system at Mt. Wilson at visible wavelengths to try to obtain the first ground-based observations of dark spots on Neptune; we intended to use AO observations at the IRTF to obtain near-IR R=2000 spatially resolved spectra, and near-IR AO observations at the Keck observatory to obtain the highest spatial resolution studies of cloud feature dynamics and atmospheric motions. Vertical structure of cloud features was to be inferred from the wavelength-dependent absorption of methane and hydrogen.

  11. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  12. Neural Adaptation Effects in Conceptual Processing

    PubMed Central

    Marino, Barbara F. M.; Borghi, Anna M.; Gemmi, Luca; Cacciari, Cristina; Riggio, Lucia

    2015-01-01

    We investigated the conceptual processing of nouns referring to objects characterized by a highly typical color and orientation. We used a go/no-go task in which we asked participants to categorize each noun as referring or not to natural entities (e.g., animals) after a selective adaptation of color-edge neurons in the posterior LV4 region of the visual cortex was induced by means of a McCollough effect procedure. This manipulation affected categorization: the green-vertical adaptation led to slower responses than the green-horizontal adaptation, regardless of the specific color and orientation of the to-be-categorized noun. This result suggests that the conceptual processing of natural entities may entail the activation of modality-specific neural channels with weights proportional to the reliability of the signals produced by these channels during actual perception. This finding is discussed with reference to the debate about the grounded cognition view. PMID:26264031

  13. Bayesian nonparametric adaptive control using Gaussian processes.

    PubMed

    Chowdhary, Girish; Kingravi, Hassan A; How, Jonathan P; Vela, Patricio A

    2015-03-01

    Most current model reference adaptive control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element is fixed a priori, often through expert judgment. An example of such an adaptive element is radial basis function networks (RBFNs), with RBF centers preallocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become ineffective in capturing and canceling the uncertainty, thus rendering the adaptive controller only semiglobal in nature. This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. GP-MRAC does not require the centers to be preallocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with preallocated centers and are shown to provide better tracking and improved long-term learning.
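
    The nonparametric adaptive element at the heart of GP-MRAC is a Gaussian-process regressor. A minimal batch GP prediction with an RBF kernel is sketched below; the paper's budgeted online inference and control law are not reproduced, and the kernel form and hyperparameters here are assumptions.

```python
import numpy as np

def gp_predict(X, y, Xs, length=1.0, noise=0.1):
    """GP regression with an RBF kernel: posterior mean and variance
    at test inputs Xs, given training pairs (X, y). Prior variance
    k(x, x) = 1 for this kernel."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + noise ** 2 * np.eye(len(X))   # noisy Gram matrix
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var
```

    Note how the predictive variance grows far from the data, which is exactly the behavior a preallocated RBFN loses outside its expected operating domain.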

  14. Multiscale Image Processing of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun, yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer, and little quantitative and objective analysis is done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods may be well suited to solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to model the human ability to view and comprehend phenomena on different scales, so these techniques could be used to quantify the image processing done by observers' eyes and brains. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO and TRACE images.
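
    A representative multiscale tool in this setting is the isotropic undecimated ("à trous") wavelet transform widely used on solar images. The sketch below is a minimal version with the B3-spline kernel; the structure and parameters are illustrative, not tied to any one of the cited transforms.

```python
import numpy as np

def atrous_decompose(image, levels=3):
    """Isotropic undecimated ('a trous') wavelet decomposition.
    Returns detail planes w_1..w_levels plus the final smooth plane;
    summing all of them reconstructs the image exactly."""
    h = np.array([1, 4, 6, 4, 1], float) / 16.0  # B3-spline kernel
    planes, smooth = [], image.astype(float)
    for j in range(levels):
        k = np.zeros(4 * 2 ** j + 1)
        k[:: 2 ** j] = h                     # dilate the kernel with holes
        s = smooth
        for axis in (0, 1):                  # separable convolution
            s = np.apply_along_axis(
                lambda m: np.convolve(np.pad(m, len(k) // 2, mode='reflect'),
                                      k, mode='valid'), axis, s)
        planes.append(smooth - s)            # detail plane at this scale
        smooth = s
    return planes, smooth
```

    The detail planes can then be thresholded individually to build a multiresolution support for feature detection.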

  15. Haar wavelet processor for adaptive on-line image compression

    NASA Astrophysics Data System (ADS)

    Diaz, F. Javier; Buron, Angel M.; Solana, Jose M.

    2005-06-01

    An image coding processing scheme based on a variant of the Haar Wavelet Transform that uses only addition and subtraction is presented. After computing the transform, the selection and coding of the coefficients is performed using a methodology optimized to attain the lowest hardware implementation complexity. Coefficients are sorted in groups according to the number of pixels used in their computing. The idea behind it is to use a different threshold for each group of coefficients; these thresholds are obtained recurrently from an initial one. Parameter values used to achieve the desired compression level are established "on-line", adapting their values to each image, which leads to an improvement in the quality obtained for a preset compression level. Despite its adaptive characteristic, the coding scheme presented leads to a hardware implementation of markedly low circuit complexity. The compression reached for images of 512x512 pixels (256 grey levels) is over 22:1 (~0.4 bits/pixel) with a rmse of 8-10%. An image processor (excluding memory) prototype designed to compute the proposed transform has been implemented using FPGA chips. The processor for images of 256x256 pixels has been implemented using only one general-purpose low-cost FPGA chip, thus proving the design reliability and its relative simplicity.
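
    The two core ideas, an addition/subtraction-only Haar variant and per-group thresholds derived recurrently from an initial one, can be sketched as follows. The doubling factor between group thresholds is an illustrative choice, not the paper's exact recurrence.

```python
import numpy as np

def haar_add_sub(x):
    """One level of an addition/subtraction-only Haar variant (no
    normalization, as in hardware-friendly designs): sums and
    differences of pixel pairs along rows, then columns."""
    s = x[:, 0::2] + x[:, 1::2]
    d = x[:, 0::2] - x[:, 1::2]
    ll = s[0::2] + s[1::2]; lh = s[0::2] - s[1::2]
    hl = d[0::2] + d[1::2]; hh = d[0::2] - d[1::2]
    return ll, lh, hl, hh

def group_thresholds(t0, levels, factor=2.0):
    """Per-group thresholds obtained recurrently from an initial one:
    coefficients computed from more pixels get a larger threshold
    (factor is an assumption for illustration)."""
    return [t0 * factor ** j for j in range(levels)]
```

    Because the transform uses only integer adds and subtracts, each coefficient group scales with the number of pixels summed, which motivates the recurrent threshold scheme.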

  16. The APL image processing laboratory

    NASA Technical Reports Server (NTRS)

    Jenkins, J. O.; Randolph, J. P.; Tilley, D. G.; Waters, C. A.

    1984-01-01

    The present and proposed capabilities of the Central Image Processing Laboratory, which provides a powerful resource for the advancement of programs in missile technology, space science, oceanography, and biomedical image analysis, are discussed. The use of image digitizing, digital image processing, and digital image output permits a variety of functional capabilities, including: enhancement, pseudocolor, convolution, computer output microfilm, presentation graphics, animations, transforms, geometric corrections, and feature extractions. The hardware and software of the Image Processing Laboratory, consisting of digitizing and processing equipment, software packages, and display equipment, is described. Attention is given to applications for imaging systems, map geometric correction, raster movie display of Seasat ocean data, Seasat and Skylab scenes of Nantucket Island, Space Shuttle imaging radar, differential radiography, and a computerized tomographic scan of the brain.

  17. Modular on-board adaptive imaging

    NASA Technical Reports Server (NTRS)

    Eskenazi, R.; Williams, D. S.

    1978-01-01

    Feature extraction involves the transformation of a raw video image into a more compact representation of the scene in which relevant information about objects of interest is retained. The task of the low-level processor is to extract object outlines and pass the data to the high-level process in a format that facilitates pattern recognition tasks. Due to the immense computational load of processing a 256x256 image, even a fast minicomputer requires a few seconds to complete this low-level processing. It is, therefore, necessary to consider hardware implementation of these low-level functions to achieve real-time processing speeds. The objective of this project was to implement a system in which the continuous feature extraction process is not affected by dynamic changes in the scene, varying lighting conditions, or object motion relative to the cameras. Due to the high bandwidth (3.5 MHz) and serial nature of the TV data, a pipeline processing scheme was adopted as the overall architecture of this system. Modularity in the system is achieved by designing circuits that are generic within the overall system.

  18. Cooperative processes in image segmentation

    NASA Technical Reports Server (NTRS)

    Davis, L. S.

    1982-01-01

    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process: as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.

  19. A Novel Approach for Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Chen, Ya-Chin; Juang, Jer-Nan

    1998-01-01

    Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second-order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed and substantially implemented in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property, and it departs significantly from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
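
    The constant-modulus idea underlying such blind adaptation can be sketched with the classic CMA(2,2) stochastic-gradient update; this is a generic illustration, not the report's Lagrange-constrained predictor.

```python
import numpy as np

def cma_equalize(x, n_taps=5, mu=1e-3, modulus=1.0):
    """Constant modulus algorithm (CMA): blindly adapts FIR weights so
    the output magnitude approaches a constant modulus, using no
    reference signal."""
    w = np.zeros(n_taps); w[0] = 1.0          # center-spike initialization
    y = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]             # tap-delay input vector
        y[n] = w @ u
        e = y[n] * (modulus - y[n] ** 2)      # CMA(2,2) error term
        w += mu * e * u                       # stochastic gradient update
    return y, w
```

    For a constant-modulus input through an identity channel the error term is exactly zero, so the weights stay put; distortions that break the constant modulus drive the adaptation.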

  20. Voyager image processing at the Image Processing Laboratory

    NASA Technical Reports Server (NTRS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-01-01

    This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is shown that selected images can be processed into 'approach at constant longitude' time-lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

  1. Adaptive Fusion of Stochastic Information for Imaging Fractured Vadose Zones

    NASA Astrophysics Data System (ADS)

    Daniels, J.; Yeh, J.; Illman, W.; Harri, S.; Kruger, A.; Parashar, M.

    2004-12-01

    A stochastic information fusion methodology is developed to assimilate electrical resistivity tomography, high-frequency ground penetrating radar, mid-range-frequency radar, pneumatic/gas tracer tomography, and hydraulic/tracer tomography to image fractures, characterize hydrogeophysical properties, and monitor natural processes in the vadose zone. The information technology research will develop: 1) mechanisms and algorithms for fusion of large data volumes; 2) parallel adaptive computational engines supporting parallel adaptive algorithms and multi-physics/multi-model computations; 3) adaptive runtime mechanisms for proactive and reactive runtime adaptation and optimization of geophysical and hydrological models of the subsurface; and 4) technologies and infrastructure for remote (pervasive) and collaborative access to computational capabilities for monitoring subsurface processes through interactive visualization tools. The combination of the stochastic fusion approach and information technology can lead to a new level of capability for both hydrologists and geophysicists, enabling them to "see" into the earth at greater depths and resolutions than is possible today. Furthermore, the new computing strategies will make high-resolution and large-scale hydrological and geophysical modeling feasible for the private sector, scientists, and engineers who are unable to access supercomputers, i.e., an effective paradigm for technology transfer.

  2. Photorefractive processing for large adaptive phased arrays.

    PubMed

    Weverka, R T; Wagner, K; Sarto, A

    1996-03-10

    An adaptive null-steering phased-array optical processor that utilizes a photorefractive crystal to time-integrate the adaptive weights and null out correlated jammers is described. This is a beam-steering processor in which the temporal waveform of the desired signal is known but the look direction is not. The processor computes the angle(s) of arrival of the desired signal and steers the array to look in that direction while rotating the nulls of the antenna pattern toward any narrow-band jammers that may be present. We have experimentally demonstrated a simplified version of this adaptive phased-array-radar processor that nulls out the narrow-band jammers by using feedback-correlation detection. In this processor it is assumed that we know a priori only that the signal is broadband and the jammers are narrow band. These are examples of a class of optical processors that use the angular selectivity of volume holograms to form the nulls and look directions in an adaptive phased-array-radar pattern and thereby harness the computational abilities of three-dimensional parallelism in the volume of photorefractive crystals. The development of this processing in a volume holographic system has led to a new algorithm for phased-array-radar processing that uses fewer tapped-delay lines than does the classic time-domain beam former. The optical implementation of the new algorithm has the further advantage of using a single photorefractive crystal to implement as many as a million adaptive weights, allowing the radar system to scale to a large size with no increase in processing hardware.

  3. Radiological image presentation requires consideration of human adaptation characteristics

    NASA Astrophysics Data System (ADS)

    O'Connell, N. M.; Toomey, R. J.; McEntee, M.; Ryan, J.; Stowe, J.; Adams, A.; Brennan, P. C.

    2008-03-01

    Visualisation of anatomical or pathological image data is highly dependent on the eye's ability to discriminate between image brightnesses, and this is best achieved when these data are presented to the viewer at luminance levels to which the eye is adapted. Current ambient light recommendations are often linked to overall monitor luminance, but this relies on specific regions of interest matching overall monitor brightness. The current work investigates the luminances of specific regions of interest within three image types: postero-anterior (PA) chest; PA wrist; computerised tomography (CT) of the head. Luminance levels were measured within the hilar region and peripheral lung, the distal radius, and the supra-ventricular grey matter, respectively. For each image type, average monitor luminances were calculated with a calibrated photometer at ambient light levels of 0, 100 and 400 lux. Thirty samples of each image type were employed, resulting in a total of over 6,000 measurements. Results demonstrate that average monitor luminances varied from clinically-significant values by up to a factor of 4, 2 and 6 for chest, wrist and CT head images respectively. Values for the thoracic hilum and wrist were higher, and for the peripheral lung and CT brain lower, than overall monitor levels. The ambient light level had no impact on the results. The results demonstrate that clinically important radiological information for common radiological examinations is not being presented to the viewer in a way that facilitates optimised visual adaptation and subsequent interpretation. The importance of image-processing algorithms focussing on clinically-significant anatomical regions instead of radiographic projections is highlighted.

  4. Frame selection performance limits for statistical image reconstruction of adaptive optics compensated images

    NASA Astrophysics Data System (ADS)

    Ford, Stephen D.

    1994-12-01

    The U.S. Air Force uses adaptive optics systems to collect images of extended objects beyond the atmosphere. These systems use wavefront sensors and deformable mirrors to compensate for atmospheric turbulence induced aberrations. Adaptive optics greatly enhance image quality, however, wavefront aberrations are not completely eliminated. Therefore, post-detection processing techniques are employed to further improve the compensated images. Typically, many short exposure images are collected, recentered to compensate for tilt, and then averaged to overcome randomness in the images and improve signal-to-noise ratio. Experience shows that some short exposure images in a data set are better than others. Frame selection exploits this fact by using a quality metric to discard low quality frames. A composite image is then created by averaging only the best frames. Performance limits associated with the frame selection technique are investigated in this thesis. Limits imposed by photon noise result in a minimum object brightness of visual magnitude +8 for point sources and +4 for a typical satellite model. Effective average point spread functions for point source and extended objects after frame selection processing are almost identical across a wide range of conditions. This discovery allows the use of deconvolution techniques to sharpen images after using the frame selection technique. A new post-detection processing method, frame weighting, is investigated and may offer some improvement for dim objects during poor atmospheric seeing. Frame selection is demonstrated for the first time on actual imagery from an adaptive optics system. Data analysis indicates that signal-to-noise ratio improvements are degraded for exposure times longer than that allowed to 'freeze' individual realizations of the turbulence effects.
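
    Frame selection itself reduces to ranking frames by a quality metric and averaging only the best. The sketch below uses a mean-squared-gradient sharpness metric as an illustrative stand-in for the thesis's metric.

```python
import numpy as np

def frame_select(frames, keep_fraction=0.3):
    """Rank short-exposure frames by a sharpness metric (here, mean
    squared gradient, one common choice) and average only the best
    fraction, as in frame-selection post-processing."""
    def sharpness(f):
        gy, gx = np.gradient(f.astype(float))
        return np.mean(gx ** 2 + gy ** 2)
    scores = np.array([sharpness(f) for f in frames])
    n_keep = max(1, int(round(keep_fraction * len(frames))))
    best = np.argsort(scores)[::-1][:n_keep]      # highest-scoring frames
    return np.mean([frames[i] for i in best], axis=0), best
```

    Tightening keep_fraction trades signal-to-noise ratio (fewer frames averaged) against sharpness (only the best seeing moments retained), which is exactly the trade-off the performance limits in the thesis quantify.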

  5. Contrast Adaptation Implies Two Spatiotemporal Channels but Three Adapting Processes

    ERIC Educational Resources Information Center

    Langley, Keith; Bex, Peter J.

    2007-01-01

    The contrast gain control model of adaptation predicts that the effects of contrast adaptation correlate with contrast sensitivity. This article reports that the effects of high contrast spatiotemporal adaptors are maximum when adapting around 19 Hz, which is a factor of two or more greater than the peak in contrast sensitivity. To explain the…

  6. Command Line Image Processing System (CLIPS)

    NASA Astrophysics Data System (ADS)

    Fleagle, S. R.; Meyers, G. L.; Kulinski, R. G.

    1985-06-01

    An interactive image processing language (CLIPS) has been developed for use in an image processing environment. CLIPS uses a simple syntax with extensive on-line help to allow even the most naive user to perform complex image processing tasks. In addition, CLIPS functions as an interpretive language complete with data structures and program control statements. CLIPS statements fall into one of three categories: command, control, and utility statements. Command statements are expressions comprised of intrinsic functions and/or arithmetic operators which act directly on image or user-defined data. Some examples of CLIPS intrinsic functions are ROTATE, FILTER and EXPONENT. Control statements allow a structured programming style through the use of statements such as DO WHILE and IF-THEN-ELSE. Utility statements such as DEFINE, READ, and WRITE support I/O and user-defined data structures. Since CLIPS uses a table-driven parser, it is easily adapted to any environment. New commands may be added to CLIPS by writing the procedure in a high-level language such as Pascal or FORTRAN and inserting the syntax for that command into the table. However, CLIPS was designed by incorporating most imaging operations into the language as intrinsic functions. CLIPS allows the user to generate new procedures easily with these powerful functions in an interactive or off-line fashion using a text editor. The fact that CLIPS can be used to generate complex procedures quickly or perform basic image processing functions interactively makes it a valuable tool in any image processing environment.

  7. Image Watermarking Based on Adaptive Models of Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Khawne, Amnach; Hamamoto, Kazuhiko; Chitsobhuk, Orachat

    This paper proposes a digital image watermarking scheme based on adaptive models of human visual perception. The algorithm exploits the local activities estimated from the wavelet coefficients of each subband to adaptively control the luminance masking. The adaptive luminance is thus delicately combined with contrast masking and edge detection and adopted as a visibility threshold. With this combination of adaptive visual sensitivity parameters, the perceptual model can be better suited to the differing characteristics of various images. The weighting function is chosen such that fidelity, imperceptibility and robustness are preserved without making any perceptual difference to the image quality.
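
    A simplified view of threshold-driven embedding: each watermark bit modulates a pseudo-random pattern scaled by the local visibility threshold, so stronger marks land where the visual system tolerates them. This generic spread-spectrum sketch uses hypothetical function names and does not reproduce the paper's luminance/contrast masking model.

```python
import numpy as np

def embed_watermark(coeffs, bits, threshold, strength=0.5, seed=7):
    """Additive spread-spectrum embedding in a wavelet subband: each
    bit modulates a pseudo-random +/-1 pattern scaled by the local
    visibility threshold (scalar or per-coefficient array)."""
    rng = np.random.default_rng(seed)
    marked = coeffs.astype(float)
    flat = marked.ravel()
    chip = len(flat) // len(bits)             # coefficients per bit
    pattern = rng.choice([-1.0, 1.0], size=len(flat))
    t = np.broadcast_to(threshold, marked.shape).ravel()
    for i, b in enumerate(bits):
        sl = slice(i * chip, (i + 1) * chip)
        flat[sl] += strength * t[sl] * (1 if b else -1) * pattern[sl]
    return marked

def detect_watermark(marked, original, bits_len, seed=7):
    """Non-blind correlation detector: recover each bit from the sign
    of the correlation between the residual and the pattern."""
    rng = np.random.default_rng(seed)
    resid = (marked - original).ravel()
    pattern = rng.choice([-1.0, 1.0], size=resid.size)
    chip = resid.size // bits_len
    return [int((resid[i*chip:(i+1)*chip] @ pattern[i*chip:(i+1)*chip]) > 0)
            for i in range(bits_len)]
```

    An adaptive threshold simply replaces the scalar with a per-coefficient map derived from the perceptual model.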

  8. SWNT Imaging Using Multispectral Image Processing

    NASA Astrophysics Data System (ADS)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.

    2012-02-01

    A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built-in Bayer filter of the CCD camera was utilized, with OpenCV C++ libraries for image processing, to decompose the image generated in a high-magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels and effectively isolate the SWNT signals from the background.
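
    The Bayer-channel decomposition step can be sketched in a few lines; the RGGB layout and green-channel handling below are assumptions (real sensors, and the authors' OpenCV pipeline, may differ).

```python
import numpy as np

def split_bayer(raw, pattern="RGGB"):
    """Split a raw Bayer-mosaic frame into subsampled R, G, B channels.
    Only the RGGB layout is implemented in this sketch."""
    offsets = {"RGGB": {"R": (0, 0), "G1": (0, 1), "G2": (1, 0), "B": (1, 1)}}
    ch = {name: raw[dy::2, dx::2].astype(float)
          for name, (dy, dx) in offsets[pattern].items()}
    ch["G"] = (ch.pop("G1") + ch.pop("G2")) / 2.0   # average the two greens
    return ch
```

    With the filter's spectral response calibrated beforehand, these three channels provide the coarse spectral samples from which emission wavelengths can be estimated.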

  9. An image processing algorithm for PPCR imaging

    NASA Astrophysics Data System (ADS)

    Cowen, Arnold R.; Giles, Anthony; Davies, Andrew G.; Workman, A.

    1993-09-01

    During 1990 The UK Department of Health installed two Photostimulable Phosphor Computed Radiography (PPCR) systems in the General Infirmary at Leeds with a view to evaluating the clinical and physical performance of the technology prior to its introduction into the NHS. An issue that came to light from the outset of the projects was the radiologists reservations about the influence of the standard PPCR computerized image processing on image quality and diagnostic performance. An investigation was set up by FAXIL to develop an algorithm to produce single format high quality PPCR images that would be easy to implement and allay the concerns of radiologists.

  10. Remote sensing image subpixel mapping based on adaptive differential evolution.

    PubMed

    Zhong, Yanfei; Zhang, Liangpei

    2012-10-01

    In this paper, a novel subpixel mapping algorithm based on an adaptive differential evolution (DE) algorithm, namely, adaptive-DE subpixel mapping (ADESM), is developed to perform the subpixel mapping task for remote sensing images. Subpixel mapping may provide a fine-resolution map of class labels from coarser spectral unmixing fraction images, with the assumption of spatial dependence. In ADESM, to utilize DE, the subpixel mapping problem is transformed into an optimization problem by maximizing the spatial dependence index. The traditional DE algorithm is an efficient and powerful population-based stochastic global optimizer for continuous optimization problems, but it cannot be applied to the subpixel mapping problem in a discrete search space. In addition, it is not an easy task to properly set control parameters in DE. To avoid these problems, this paper utilizes an adaptive strategy without user-defined parameters, and a reversible conversion strategy between continuous space and discrete space, to improve the classical DE algorithm. During the process of evolution, candidate solutions are further improved by enhanced evolution operators, e.g., mutation, crossover, repair, exchange, and insertion, and an effective local search. Experimental results using different types of remote sensing images show that the ADESM algorithm consistently outperforms the previous subpixel mapping algorithms in all the experiments. Based on sensitivity analysis, ADESM, with its self-adaptive control parameter setting, is better than, or at least comparable to, the standard DE algorithm when considering the accuracy of subpixel mapping, and hence provides an effective new approach to subpixel mapping for remote sensing imagery.
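
    The classical continuous DE/rand/1/bin optimizer that ADESM builds on can be sketched as follows; the adaptive parameter control, discrete-space conversion, and enhanced operators of ADESM are not shown.

```python
import numpy as np

def differential_evolution(fitness, bounds, pop_size=20, F=0.5, CR=0.9,
                           generations=100, seed=0):
    """Minimal DE/rand/1/bin minimizer over box-constrained continuous
    variables: mutation from three distinct random members, binomial
    crossover, and greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    fit = np.array([fitness(p) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(len(lo)) < CR            # binomial crossover
            cross[rng.integers(len(lo))] = True         # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            f = fitness(trial)
            if f < fit[i]:                              # greedy selection
                pop[i], fit[i] = trial, f
    return pop[np.argmin(fit)], fit.min()
```

    ADESM replaces the fixed F and CR with self-adapted values and maps each continuous vector back to a discrete subpixel label assignment before evaluating the spatial dependence index.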

  11. Coping and adaptation process during puerperium

    PubMed Central

    Muñoz de Rodríguez, Lucy; Ruiz de Cárdenas, Carmen Helena

    2012-01-01

    Introduction: The puerperium is a stage that produces changes and adaptations in women, couples and families. Effective coping during this stage depends on the relationship between the demands of stressful or difficult situations and the resources that the puerperal woman has. Roy (2004), in her Middle Range Theory of Coping and Adaptation Processing, defines coping as the ''behavioral and cognitive efforts that a person makes to meet the demands of the environment''. For the puerperal woman, correct coping is necessary to maintain her physical and mental well-being, especially in situations that can be stressful, like breastfeeding and the return to work. According to Lazarus and Folkman (1986), one resource for coping is to have someone who provides emotional, informative and/or tangible support. Objective: To review the issue of women's coping and adaptation during the puerperium stage and the strategies that enhance this adaptation. Methods: Search and selection of articles from the databases Cochrane, Medline, Ovid, ProQuest, Scielo, and Blackwell Synergy; other sources included unpublished documents by Roy, published books on Roy's model, and websites of international health organizations. Results: The need to recognize the puerperium as a stage that requires comprehensive care is evident; nurses must be protagonists in the care offered to women and their families, considering the specific demands of this situation and the resources, from family, education and health services, that promote effective coping. PMID:24893059

  12. An adaptive filtered back-projection for photoacoustic image reconstruction

    SciTech Connect

    Huang, He; Bustamante, Gilbert; Peterson, Ralph; Ye, Jing Yong

    2015-05-15

    the correct signal strength of the absorbers. The reconstructed image of the second phantom further demonstrates the capability to form clear images of the spheres with sharp borders in the overlapping geometry. The smallest sphere is clearly visible and distinguishable, even though it is surrounded by two big spheres. In addition, image reconstructions were conducted with randomized noise added to the observed signals to mimic realistic experimental conditions. Conclusions: The authors have developed a new FBP algorithm that is capable for reconstructing high quality images with correct relative intensities and sharp borders for PAT. The results demonstrate that the weighting function serves as a precise ramp filter for processing the observed signals in the Fourier domain. In addition, this algorithm allows an adaptive determination of the cutoff frequency for the applied low pass filter.

  14. Adaptive memory: animacy processing produces mnemonic advantages.

    PubMed

    VanArsdall, Joshua E; Nairne, James S; Pandeirada, Josefa N S; Blunt, Janell R

    2013-01-01

    It is adaptive to remember animates, particularly animate agents, because they play an important role in survival and reproduction. Yet, surprisingly, the role of animacy in mnemonic processing has received little direct attention in the literature. In two experiments, participants were presented with pronounceable nonwords and properties characteristic of either living (animate) or nonliving (inanimate) things. The task was to rate the likelihood that each nonword-property pair represented a living thing or a nonliving object. In Experiment 1, a subsequent recognition memory test for the nonwords revealed a significant advantage for the nonwords paired with properties of living things. To generalize this finding, Experiment 2 replicated the animate advantage using free recall. These data demonstrate a new phenomenon in the memory literature - a possible mnemonic tuning for animacy - and add to growing data supporting adaptive memory theory. PMID:23261948

  15. Astronomical Image Processing with Hadoop

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-07-01

    In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. We will end by briefly describing the longer term goals of our work, namely detection and classification
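The coaddition step itself, independent of the Hadoop machinery, reduces to a per-pixel average over registered exposures. A minimal numpy sketch, assuming the images are already registered to a common grid with NaN marking pixels an exposure does not cover:

```python
import numpy as np

def coadd(images):
    """Average a stack of registered exposures per pixel, ignoring
    pixels that a given exposure does not cover (marked as NaN)."""
    stack = np.stack(images)                  # shape: (n_exposures, H, W)
    coverage = np.sum(~np.isnan(stack), axis=0)
    total = np.nansum(stack, axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        out = total / coverage                # NaN where nothing covers a pixel
    return out, coverage

# Two partially overlapping 4x4 exposures on a common grid.
a = np.full((4, 4), np.nan); a[:, :3] = 1.0
b = np.full((4, 4), np.nan); b[:, 1:] = 3.0
img, cov = coadd([a, b])   # overlap pixels average to 2.0
```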

  16. Adaptive near-field beamforming techniques for sound source imaging.

    PubMed

    Cho, Yong Thung; Roan, Michael J

    2009-02-01

    Phased-array signal processing techniques such as beamforming have a long history in applications such as sonar for the detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: minimizing contributions from directions other than the look direction and minimizing the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response (MVDR) and weight optimization to minimize the maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity from near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique; both use near-field beamforming weightings focused at estimated source locations, based on spherical-wave array manifold vectors with spatial windows. The sound source resolution accuracies of the near-field imaging procedures with different weighting strategies are compared using numerical simulations in both anechoic and reverberant environments with random measurement noise. Experimental results are also given for near-field sound pressure measurements of an enclosed loudspeaker.
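A minimal numpy sketch of the MVDR weighting with near-field (spherical-wave) steering described above; the array geometry, frequency, and diagonal loading are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def mvdr_weights(R, d):
    """MVDR beamformer: minimize w^H R w subject to w^H d = 1,
    giving w = R^-1 d / (d^H R^-1 d)."""
    Ri_d = np.linalg.solve(R, d)
    return Ri_d / (d.conj() @ Ri_d)

def spherical_steering(mic_pos, src_pos, k):
    """Near-field array manifold vector for a spherical wave:
    phase and 1/r amplitude from each source-to-mic distance."""
    r = np.linalg.norm(mic_pos - src_pos, axis=1)
    return np.exp(-1j * k * r) / r

# Hypothetical 8-element line array focused 0.5 m in front of it.
mics = np.c_[np.linspace(-0.35, 0.35, 8), np.zeros(8), np.zeros(8)]
src = np.array([0.0, 0.5, 0.0])
k = 2 * np.pi * 1000 / 343.0                  # wavenumber at 1 kHz in air
d = spherical_steering(mics, src, k)
R = np.outer(d, d.conj()) + 0.01 * np.eye(8)  # toy covariance + diagonal loading
w = mvdr_weights(R, d)
```

The unit-gain ("distortionless") constraint in the look direction holds by construction: `w.conj() @ d` evaluates to 1.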

  17. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many good techniques have been proposed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. However, they do not give good image quality, since they cannot avoid modifying and removing too many small wavelet coefficients simultaneously due to the threshold. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
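The neighboring-coefficient shrinkage these methods share can be sketched as follows. This is NeighShrink's rule with the standard universal threshold, for illustration only; the paper's own adaptive threshold is not reproduced here.

```python
import numpy as np

def neighshrink(coeffs, sigma, win=3):
    """Shrink each coefficient by the factor max(0, 1 - lambda^2 / S^2),
    where S^2 sums squared coefficients in a win x win neighborhood
    and lambda is the universal threshold sigma * sqrt(2 ln n)."""
    n = coeffs.size
    lam2 = 2.0 * sigma ** 2 * np.log(n)          # universal threshold, squared
    pad = win // 2
    sq = np.pad(coeffs ** 2, pad, mode="edge")
    # S^2 via a sliding-window sum of squared coefficients.
    S2 = np.zeros_like(coeffs, dtype=float)
    for dy in range(win):
        for dx in range(win):
            S2 += sq[dy:dy + coeffs.shape[0], dx:dx + coeffs.shape[1]]
    factor = np.clip(1.0 - lam2 / np.maximum(S2, 1e-12), 0.0, None)
    return coeffs * factor

# On a pure-noise subband, most coefficients are shrunk to zero.
rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 1.0, (64, 64))
den = neighshrink(noisy, sigma=1.0)
```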

  18. Acoustic image-processing software

    NASA Astrophysics Data System (ADS)

    Several algorithms that display, enhance, and analyze side-scan sonar images of the seafloor have been developed by the University of Washington, Seattle, as part of an Office of Naval Research funded program in acoustic image analysis. One of these programs, PORTAL, is a small (less than 100K) image display and enhancement program that can run on MS-DOS computers with VGA boards. This program is now available in the public domain for general use in acoustic image processing. PORTAL is designed to display side-scan sonar data stored in most standard formats, including SeaMARC I, II, 150 and GLORIA data. (See image.) In addition to the “standard” formats, PORTAL has a modular “front end” that allows the user to modify the program to accept other image formats. In addition to side-scan sonar data, the program can also display digital optical images from scanners and “framegrabbers,” gridded bathymetry data from Sea Beam and other sources, and potential field (magnetics/gravity) data. While limited in image analysis capability, the program allows image enhancement by histogram manipulation and basic filtering operations, including multistage filtering. PORTAL can print reasonably high-quality images on PostScript laser printers and lower-quality images on non-PostScript printers with HP LaserJet emulation. Images suitable only for index sheets are also possible on dot matrix printers.

  19. Mariner 9-Image processing and products

    USGS Publications Warehouse

    Levinthal, E.C.; Green, W.B.; Cutts, J.A.; Jahelka, E.D.; Johansen, R.A.; Sander, M.J.; Seidman, J.B.; Young, A.T.; Soderblom, L.A.

    1973-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis. © 1973.

  20. Optimization of exposure in panoramic radiography while maintaining image quality using adaptive filtering.

    PubMed

    Svenson, Björn; Larsson, Lars; Båth, Magnus

    2016-01-01

    Objective The purpose of the present study was to investigate the potential of using advanced external adaptive image processing to maintain image quality while reducing exposure in dental panoramic storage phosphor plate (SPP) radiography. Materials and methods Thirty-seven SPP radiographs of a skull phantom were acquired using a Scanora panoramic X-ray machine with various tube load, tube voltage, SPP sensitivity, and filtration settings. The radiographs were processed using General Operator Processor (GOP) technology. Fifteen dentists, all within the dental radiology field, compared the structural image quality of each radiograph with a reference image on a 5-point rating scale in a visual grading characteristics (VGC) study. The reference image was acquired with the acquisition parameters commonly used in daily operation (70 kVp, 150 mAs and sensitivity class 200) and processed using the standard process parameters supplied by the modality vendor. Results All GOP-processed images with a dose similar to (or higher than) that of the reference image showed higher image quality than the reference. All GOP-processed images with image quality similar to that of the reference image were acquired at a lower dose than the reference. This indicates that the external image processing improved image quality compared with the standard processing. Regarding acquisition parameters, no strong dependence of image quality on radiation quality was seen; image quality was mainly affected by dose. Conclusions The present study indicates that advanced external adaptive image processing may be beneficial in panoramic radiography for increasing the image quality of SPP radiographs or for reducing exposure while maintaining image quality. PMID:26478956

  1. Parameter adaptive estimation of random processes

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Vanlandingham, H. F.

    1975-01-01

    This paper is concerned with the parameter adaptive least squares estimation of random processes. The main result is a general representation theorem for the conditional expectation of a random variable on a product probability space. Using this theorem along with the general likelihood ratio expression, the least squares estimate of the process is found in terms of the parameter conditioned estimates. The stochastic differential for the a posteriori probability and the stochastic differential equation for the a posteriori density are found by using simple stochastic calculus on the representations obtained. The results are specialized to the case when the parameter has a discrete distribution. The results can be used to construct an implementable recursive estimator for certain types of nonlinear filtering problems. This is illustrated by some simple examples.
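For a parameter taking discrete values, the representation result described above reduces to the familiar partition form (the notation below is assumed for illustration, not copied from the paper): the overall least squares estimate is the sum of the parameter-conditioned estimates weighted by the a posteriori parameter probabilities,

```latex
\hat{x}(t) \;=\; E\{x(t)\mid Y_t\}
          \;=\; \sum_{i} E\{x(t)\mid Y_t,\ \theta=\theta_i\}\; p(\theta_i \mid Y_t),
```

where $Y_t$ denotes the observation record up to time $t$, and the paper obtains stochastic differential equations for the evolution of $p(\theta_i \mid Y_t)$ via likelihood ratios.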

  2. Adaptive SVD-Based Digital Image Watermarking

    NASA Astrophysics Data System (ADS)

    Shirvanian, Maliheh; Torkamani Azar, Farah

    Digital data utilization, along with the increasing popularity of the Internet, has facilitated information sharing and distribution. However, such applications have also raised concerns about copyright issues and the unauthorized modification and distribution of digital data. Digital watermarking techniques, proposed to solve these problems, hide some information in digital media and extract it whenever needed to identify the data owner. In this paper a new method of image watermarking based on the singular value decomposition (SVD) of images is proposed, which accounts for the human visual system prior to embedding the watermark by segmenting the original image into several blocks of different sizes, with higher density at the edges of the image. In this way the quality of the original image is preserved in the watermarked image. Additional advantages of the proposed technique are a large watermark embedding capacity and robustness against different types of image manipulation.
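The singular-value embedding idea can be sketched as below. This is a generic, non-blind SVD watermarking toy operating on the whole image rather than the paper's variable-size blocks; `alpha` and the monotone watermark are illustrative choices.

```python
import numpy as np

def embed_svd(image, watermark, alpha=0.05):
    """Embed a watermark by perturbing the host's singular values:
    A = U S V^T  ->  A' = U diag(S + alpha * w) V^T."""
    U, S, Vt = np.linalg.svd(image, full_matrices=False)
    return (U * (S + alpha * watermark)) @ Vt

def extract_svd(marked, original, alpha=0.05):
    """Non-blind extraction: recover the watermark from the
    difference of singular values (needs the original image)."""
    S_marked = np.linalg.svd(marked, compute_uv=False)
    S_orig = np.linalg.svd(original, compute_uv=False)
    return (S_marked - S_orig) / alpha

rng = np.random.default_rng(1)
host = rng.random((64, 64))
# A non-increasing watermark keeps the perturbed singular values sorted,
# so they survive re-decomposition intact (an illustrative choice).
wm = np.sort(rng.random(64))[::-1]
marked = embed_svd(host, wm)
recovered = extract_svd(marked, host)
```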

  3. Research on adaptive segmentation and activity classification method of filamentous fungi image in microbe fermentation

    NASA Astrophysics Data System (ADS)

    Cai, Xiaochun; Hu, Yihua; Wang, Peng; Sun, Dujuan; Hu, Guilan

    2009-10-01

    The paper presents an adaptive segmentation and activity classification method for filamentous fungi images. First, an adaptive structuring element (SE) construction algorithm is proposed for image background suppression, and a color-labeled segmentation of the fungi image is obtained with the watershed transform. Second, the feature space of the fungal elements is described and the feature set for hyphae activity classification is extracted. The growth rate of the fungal hyphae is evaluated using an SVM classifier. Experimental results demonstrate that the proposed method is effective for filamentous fungi image processing.

  4. Coherent Image Layout using an Adaptive Visual Vocabulary

    SciTech Connect

    Dillard, Scott E.; Henry, Michael J.; Bohn, Shawn J.; Gosink, Luke J.

    2013-03-06

    When querying a huge image database containing millions of images, the result of the query may still contain many thousands of images that need to be presented to the user. We consider the problem of arranging such a large set of images into a visually coherent layout, one that places similar images next to each other. Image similarity is determined using a bag-of-features model, and the layout is constructed from a hierarchical clustering of the image set by mapping an in-order traversal of the hierarchy tree into a space-filling curve. This layout method provides strong locality guarantees so we are able to quantitatively evaluate performance using standard image retrieval benchmarks. Performance of the bag-of-features method is best when the vocabulary is learned on the image set being clustered. Because learning a large, discriminative vocabulary is a computationally demanding task, we present a novel method for efficiently adapting a generic visual vocabulary to a particular dataset. We evaluate our clustering and vocabulary adaptation methods on a variety of image datasets and show that adapting a generic vocabulary to a particular set of images improves performance on both hierarchical clustering and image retrieval tasks.
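The hierarchy-to-curve mapping can be sketched as follows; the nested-tuple hierarchy and the boustrophedon (serpentine) curve below are simplifying assumptions standing in for the paper's hierarchical clustering and space-filling curve.

```python
def leaf_order(tree):
    """In-order traversal of a binary cluster hierarchy given as
    nested 2-tuples with image ids at the leaves."""
    if isinstance(tree, tuple):
        left, right = tree
        return leaf_order(left) + leaf_order(right)
    return [tree]

def serpentine_layout(ids, ncols):
    """Place a 1-D ordering on a grid along a boustrophedon curve, a
    simple space-filling curve: consecutive images stay adjacent."""
    grid = {}
    for i, img_id in enumerate(ids):
        row, col = divmod(i, ncols)
        if row % 2 == 1:
            col = ncols - 1 - col    # reverse direction on odd rows
        grid[(row, col)] = img_id
    return grid

# Hypothetical hierarchy over six images: similar ones share subtrees.
tree = ((("a", "b"), "c"), (("d", "e"), "f"))
order = leaf_order(tree)             # ['a', 'b', 'c', 'd', 'e', 'f']
layout = serpentine_layout(order, ncols=3)
```

Because the curve only ever moves to a neighboring grid cell, images that are close in the cluster ordering end up close in the layout, which is the locality guarantee the paper exploits.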

  5. An adaptive algorithm for low contrast infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Liu, Sheng-dong; Peng, Cheng-yuan; Wang, Ming-jia; Wu, Zhi-guo; Liu, Jia-qi

    2013-08-01

    An adaptive infrared image enhancement algorithm for low-contrast images is proposed in this paper, to deal with the problem that conventional enhancement algorithms cannot effectively identify the region of interest when the dynamic range of an image is large. The algorithm starts from the characteristics of human visual perception and combines global adaptive image enhancement with local feature boosting, so that not only is the contrast of the image raised but its texture also becomes more distinct. First, the global dynamic range of the image is adjusted: a correspondence is established between the dynamic range of the original image and the display grayscale, raising the gray level of bright objects while reducing the gray level of dark targets, to improve overall image contrast. Second, a filtering operation over each pixel and its neighborhood extracts image texture information, which is used to adjust the brightness of the current pixel and enhance the local contrast of the image. The algorithm overcomes the tendency of traditional edge detection algorithms to blur outlines, and preserves the distinctness of texture detail during enhancement. Finally, the globally adjusted and locally adjusted images are normalized and combined to ensure a smooth transition of image details. Extensive experiments compare the proposed algorithm with other conventional image enhancement algorithms on two groups of blurred IR images. They show that histogram equalization boosts the contrast of the picture but leaves details unclear, that the Retinex algorithm makes details distinguishable, and that the self-adaptive enhancement algorithm proposed in this paper yields clear details with markedly improved contrast compared with Retinex.
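The two-stage scheme described above (global range adjustment, then a neighborhood-filter detail boost) can be sketched in numpy as follows; the percentile mapping, box filter, and gain values are illustrative assumptions, not the authors' exact operators.

```python
import numpy as np

def enhance_ir(img, low_pct=1, high_pct=99, local_gain=0.5, win=5):
    """Global dynamic-range adjustment followed by a local texture
    boost, blended into one output (parameter values are illustrative)."""
    # Global step: map the central percentile range onto [0, 1].
    lo, hi = np.percentile(img, [low_pct, high_pct])
    g = np.clip((img - lo) / max(hi - lo, 1e-12), 0.0, 1.0)

    # Local step: subtract a box-filtered mean to isolate texture detail,
    # then add it back with a gain (an unsharp-masking-style boost).
    pad = win // 2
    padded = np.pad(g, pad, mode="edge")
    local_mean = np.zeros_like(g)
    for dy in range(win):
        for dx in range(win):
            local_mean += padded[dy:dy + g.shape[0], dx:dx + g.shape[1]]
    local_mean /= win * win
    detail = g - local_mean
    return np.clip(g + local_gain * detail, 0.0, 1.0)

# A flat, low-contrast synthetic IR frame.
frame = np.random.default_rng(2).normal(300.0, 2.0, (128, 128))
out = enhance_ir(frame)
```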

  6. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing with a more conventional image processing algorithm is provided, showing that the fuzzy approach yields better accuracy than conventional image processing.

  7. Edge adaptive intra field de-interlacing of video images

    NASA Astrophysics Data System (ADS)

    Lachine, Vladimir; Smith, Gregory; Lee, Louie

    2013-02-01

    Expanding an image by an arbitrary scale factor to create an enlarged image is a crucial image processing operation. De-interlacing is an example of such an operation, where a video field is enlarged in the vertical direction with a 1-to-2 scale factor. The most advanced de-interlacing algorithms use a few consecutive input fields to generate one output frame. To save hardware resources in video processors, the missing lines in each field may instead be generated without reference to the other fields. Line doubling, known as "bobbing", is the simplest intra-field de-interlacing method; however, it may generate visual artifacts. For example, interpolating an inserted line from a few neighboring lines with a vertical filter may produce visual artifacts such as "jaggies." In this work we present an edge-adaptive image up-scaling and/or enhancement algorithm that can produce "jaggies"-free output video frames. As a first step, an edge and its parameters are detected at each interpolated pixel from the gradient squared tensor, based on local signal variances. Then, according to the edge parameters, including orientation, anisotropy and variance strength, the algorithm determines the footprint and frequency response of a two-dimensional interpolation filter for the output pixel. The filter's coefficients are defined by the edge parameters, so that the quality of the output frame is controlled by local content. The proposed method may be used for image enlargement or enhancement (for example, anti-aliasing without resampling). It has been implemented in hardware in a video display processor for intra-field de-interlacing of video images.
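The first step, estimating edge parameters from a gradient squared tensor, might look like the numpy sketch below; the box smoothing window and this particular anisotropy measure are assumptions, not the paper's exact formulation.

```python
import numpy as np

def gradient_squared_tensor(img, sigma_win=3):
    """Per-pixel gradient squared (structure) tensor, smoothed over a
    small window; returns edge orientation and an anisotropy measure."""
    gy, gx = np.gradient(img.astype(float))

    def box(a, w=sigma_win):
        # Simple box-window average used to smooth the tensor components.
        pad = w // 2
        p = np.pad(a, pad, mode="edge")
        out = np.zeros_like(a)
        for dy in range(w):
            for dx in range(w):
                out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / (w * w)

    jxx, jyy, jxy = box(gx * gx), box(gy * gy), box(gx * gy)
    # Eigen-analysis of the 2x2 tensor [[jxx, jxy], [jxy, jyy]].
    trace = jxx + jyy
    diff = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    l1, l2 = (trace + diff) / 2, (trace - diff) / 2
    orientation = 0.5 * np.arctan2(2 * jxy, jxx - jyy)  # dominant gradient direction
    anisotropy = (l1 - l2) / np.maximum(l1 + l2, 1e-12)
    return orientation, anisotropy

# A vertical step edge: strong horizontal gradient, anisotropy near 1.
img = np.zeros((32, 32)); img[:, 16:] = 1.0
theta, aniso = gradient_squared_tensor(img)
```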

  8. Signal and Image Processing Operations

    1995-05-10

    VIEW is a software system for processing arbitrary multidimensional signals. It provides facilities for numerical operations, signal displays, and signal databasing. The major emphasis of the system is on the processing of time-sequences and multidimensional images. The system is designed to be both portable and extensible. It runs currently on UNIX systems, primarily SUN workstations.

  9. Towards Adaptive High-Resolution Images Retrieval Schemes

    NASA Astrophysics Data System (ADS)

    Kourgli, A.; Sebai, H.; Bouteldja, S.; Oukil, Y.

    2016-06-01

    Nowadays, content-based image-retrieval techniques constitute powerful tools for archiving and mining large remote sensing image databases. High spatial resolution images are complex and differ widely in their content, even within the same category. All images are more or less textured and structured. During the last decade, different approaches for the retrieval of this type of image have been proposed, differing mainly in the type of features extracted. As these features are supposed to represent the query image efficiently, they should be adapted to all kinds of images contained in the database. However, if the image to recognize is highly structured, a shape feature will be highly effective, while if the image is composed of a single texture, a parameter reflecting the texture of the image will prove more efficient. This motivates the use of adaptive schemes, and for this purpose we propose to adapt the retrieval scheme to the nature of the image. This is achieved through a preliminary analysis so that the indexing stage becomes supervised. First results show that, in this way, simple methods can match the performance of complex methods such as those based on bags of visual words built from SIFT (Scale Invariant Feature Transform) descriptors and those based on multi-scale feature extraction using wavelets and steerable pyramids.

  10. Body Image Distortion and Exposure to Extreme Body Types: Contingent Adaptation and Cross Adaptation for Self and Other.

    PubMed

    Brooks, Kevin R; Mond, Jonathan M; Stevenson, Richard J; Stephen, Ian D

    2016-01-01

    Body size misperception is common amongst the general public and is a core component of eating disorders and related conditions. While perennial media exposure to the "thin ideal" has been blamed for this misperception, relatively little research has examined visual adaptation as a potential mechanism. We examined the extent to which the bodies of "self" and "other" are processed by common or separate mechanisms in young women. Using a contingent adaptation paradigm, experiment 1 gave participants prolonged exposure to images both of the self and of another female that had been distorted in opposite directions (e.g., expanded other/contracted self), and assessed the aftereffects using test images both of the self and other. The directions of the resulting perceptual biases were contingent on the test stimulus, establishing at least some separation between the mechanisms encoding these body types. Experiment 2 used a cross adaptation paradigm to further investigate the extent to which these mechanisms are independent. Participants were adapted either to expanded or to contracted images of their own body or that of another female. While adaptation effects were largest when adapting and testing with the same body type, confirming the separation of mechanisms reported in experiment 1, substantial misperceptions were also demonstrated for cross adaptation conditions, demonstrating a degree of overlap in the encoding of self and other. In addition, the evidence of misperception of one's own body following exposure to "thin" and to "fat" others demonstrates the viability of visual adaptation as a model of body image disturbance both for those who underestimate and those who overestimate their own size. PMID:27471447

  13. Future trends in image processing software and hardware

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1979-01-01

    JPL image processing applications are examined, considering future trends in fields such as planetary exploration, electronics, astronomy, computers, and Landsat. Attention is given to adaptive search and interrogation of large image data bases, the display of multispectral imagery recorded in many spectral channels, merging data acquired by a variety of sensors, and developing custom large scale integrated chips for high speed intelligent image processing user stations and future pipeline production processors.

  14. Onboard Image Processing System for Hyperspectral Sensor.

    PubMed

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-09-25

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost.
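As a rough, self-contained illustration of the adaptive Golomb-Rice stage (not the flight implementation), the sketch below picks the Rice parameter k per symbol from a running mean of past magnitudes, encodes the quotient in unary and the remainder in k fixed bits, and decodes by mirroring the same adaptation rule.

```python
def rice_encode(values):
    """Golomb-Rice code a sequence of nonnegative ints, adapting the
    Rice parameter k per symbol from a running mean of past values
    (a simplified stand-in for FELICS's adaptive parameter selection)."""
    out = []
    total, count = 1, 1                     # running stats (small prior)
    for n in values:
        k = max((total // count).bit_length() - 1, 0)   # k ~ log2(mean)
        q, r = n >> k, n & ((1 << k) - 1)
        out.append("1" * q + "0")           # quotient in unary, 0-terminated
        if k:
            out.append(format(r, f"0{k}b"))  # remainder in k fixed bits
        total += n
        count += 1
    return "".join(out)

def rice_decode(bitstream, n_symbols):
    """Inverse of rice_encode; tracks k with the same update rule."""
    vals, pos = [], 0
    total, count = 1, 1
    for _ in range(n_symbols):
        k = max((total // count).bit_length() - 1, 0)
        q = 0
        while bitstream[pos] == "1":        # unary quotient
            q += 1
            pos += 1
        pos += 1                            # skip the terminating 0
        r = int(bitstream[pos:pos + k], 2) if k else 0
        pos += k
        n = (q << k) | r
        vals.append(n)
        total += n
        count += 1
    return vals

data = [0, 3, 2, 9, 4, 4, 1, 7]
code = rice_encode(data)
assert rice_decode(code, len(data)) == data   # lossless round-trip
```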

  17. Information-Adaptive Image Encoding and Restoration

    NASA Technical Reports Server (NTRS)

    Park, Stephen K.; Rahman, Zia-ur

    1998-01-01

    The multiscale retinex with color restoration (MSRCR) has shown itself to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition. A number of algorithms exist that provide one or more of these features, but not all. In this paper we compare the performance of the MSRCR with techniques that are widely used for image enhancement. Specifically, we compare the MSRCR with color adjustment methods such as gamma correction and gain/offset application, histogram modification techniques such as histogram equalization and manual histogram adjustment, and other more powerful techniques such as homomorphic filtering and 'burning and dodging'. The comparison is carried out by testing the suite of image enhancement methods on a set of diverse images. We find that though some of these techniques work well for some of these images, only the MSRCR performs universally well on the test set.
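    Histogram equalization, one of the baseline techniques the comparison above includes, can be sketched in a few lines (8-bit grayscale assumed; `equalize` is an illustrative name, not from the paper):

    ```python
    def equalize(pixels, levels=256):
        # Build the grey-level histogram.
        hist = [0] * levels
        for p in pixels:
            hist[p] += 1
        # Cumulative distribution function.
        cdf, total = [], 0
        for h in hist:
            total += h
            cdf.append(total)
        n = len(pixels)
        cdf_min = next(c for c in cdf if c > 0)
        if n == cdf_min:          # constant image: nothing to spread
            return [0] * n
        # Map each level through the normalized CDF.
        lut = [round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
               for v in range(levels)]
        return [lut[p] for p in pixels]
    ```

    The mapping stretches the occupied grey levels across the full range, which is exactly the global, content-blind behavior the MSRCR is being compared against.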

  18. Translational motion compensation in ISAR image processing.

    PubMed

    Wu, H; Grenier, D; Delisle, G Y; Fang, D G

    1995-01-01

    In inverse synthetic aperture radar (ISAR) imaging, the target rotational motion with respect to the radar line of sight contributes to the imaging ability, whereas the translational motion must be compensated out. This paper presents a novel two-step approach to translational motion compensation using an adaptive range tracking method for range bin alignment and a recursive multiple-scatterer algorithm (RMSA) for signal phase compensation. The initial step of RMSA is equivalent to the dominant-scatterer algorithm (DSA). An error-compensating point source is then recursively synthesized from the selected range bins, where each contains a prominent scatterer. Since the clutter-induced phase errors are reduced by phase averaging, the image speckle noise can be reduced significantly. Experimental data processing for a commercial aircraft and computer simulations confirm the validity of the approach.

  19. Differential morphology and image processing.

    PubMed

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
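    The discrete distance transforms mentioned above can be implemented as a two-pass min-sum recursion (a chamfer transform); this sketch uses city-block weights, a simplification of the general weighted case:

    ```python
    INF = 10**9  # stand-in for "infinite" distance

    def chamfer(binary):
        # binary: 2-D list; 1 = feature pixel, 0 = background.
        h, w = len(binary), len(binary[0])
        d = [[0 if binary[y][x] else INF for x in range(w)] for y in range(h)]
        # Forward pass: propagate distances from the top-left.
        for y in range(h):
            for x in range(w):
                if y > 0: d[y][x] = min(d[y][x], d[y - 1][x] + 1)
                if x > 0: d[y][x] = min(d[y][x], d[y][x - 1] + 1)
        # Backward pass: propagate from the bottom-right.
        for y in range(h - 1, -1, -1):
            for x in range(w - 1, -1, -1):
                if y < h - 1: d[y][x] = min(d[y][x], d[y + 1][x] + 1)
                if x < w - 1: d[y][x] = min(d[y][x], d[y][x + 1] + 1)
        return d
    ```

    Each pass is a min-sum difference equation of the kind the abstract describes; richer weight masks approximate the Euclidean (eikonal) case.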

  20. Associative architecture for image processing

    NASA Astrophysics Data System (ADS)

    Adar, Rutie; Akerib, Avidan

    1997-09-01

    This article presents a new generation of parallel-processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the Xium™-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can implement a wide range of color image processing, computer vision, and media processing functions in real time. The associative part of the chip is based on the patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. At only 33 MHz, in a 0.6 micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string search operations per second. The fully programmable nature of the Xium™-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.

  1. Digital processing of radiographic images

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

    Techniques and software documentation for the digital enhancement of radiographs are presented. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementation using fast Fourier transforms (FFTs) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
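    The claim that recursive spatial-domain filtering can beat FFT-based convolution is easy to illustrate with a running-sum box filter: each output is obtained from the previous one with one addition and one subtraction, independent of window size (1-D sketch with edge replication; names are illustrative, not from the paper):

    ```python
    def box_filter_recursive(signal, radius):
        # Recursive (running-sum) moving average: O(1) work per sample
        # regardless of the window width 2*radius + 1.
        w = 2 * radius + 1
        padded = [signal[0]] * radius + list(signal) + [signal[-1]] * radius
        out = []
        s = sum(padded[:w])       # initialize the first window sum
        out.append(s / w)
        for i in range(1, len(signal)):
            # Slide the window: add the entering sample, drop the leaving one.
            s += padded[i + w - 1] - padded[i - 1]
            out.append(s / w)
        return out
    ```

    A nonrecursive implementation would redo the full window sum (or an FFT) at every sample, which is the cost difference the abstract refers to.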

  2. Coherence-Gated Sensorless Adaptive Optics Multiphoton Retinal Imaging.

    PubMed

    Cua, Michelle; Wahl, Daniel J; Zhao, Yuan; Lee, Sujin; Bonora, Stefano; Zawadzki, Robert J; Jian, Yifan; Sarunic, Marinko V

    2016-09-07

    Multiphoton microscopy enables imaging deep into scattering tissues. The efficient generation of non-linear optical effects is related to both the pulse duration (typically on the order of femtoseconds) and the size of the focused spot. Aberrations introduced by refractive index inhomogeneity in the sample distort the wavefront and enlarge the focal spot, which reduces the multiphoton signal. Traditional approaches to adaptive optics wavefront correction are not effective in thick or multi-layered scattering media. In this report, we present sensorless adaptive optics (SAO) using low-coherence interferometric detection of the excitation light for depth-resolved aberration correction of two-photon excited fluorescence (TPEF) in biological tissue. We demonstrate coherence-gated SAO TPEF using a transmissive multi-actuator adaptive lens for in vivo imaging in a mouse retina. This configuration has significant potential for reducing the laser power required for adaptive optics multiphoton imaging, and for facilitating integration with existing systems.

  3. Coherence-Gated Sensorless Adaptive Optics Multiphoton Retinal Imaging

    NASA Astrophysics Data System (ADS)

    Cua, Michelle; Wahl, Daniel J.; Zhao, Yuan; Lee, Sujin; Bonora, Stefano; Zawadzki, Robert J.; Jian, Yifan; Sarunic, Marinko V.

    2016-09-01

    Multiphoton microscopy enables imaging deep into scattering tissues. The efficient generation of non-linear optical effects is related to both the pulse duration (typically on the order of femtoseconds) and the size of the focused spot. Aberrations introduced by refractive index inhomogeneity in the sample distort the wavefront and enlarge the focal spot, which reduces the multiphoton signal. Traditional approaches to adaptive optics wavefront correction are not effective in thick or multi-layered scattering media. In this report, we present sensorless adaptive optics (SAO) using low-coherence interferometric detection of the excitation light for depth-resolved aberration correction of two-photon excited fluorescence (TPEF) in biological tissue. We demonstrate coherence-gated SAO TPEF using a transmissive multi-actuator adaptive lens for in vivo imaging in a mouse retina. This configuration has significant potential for reducing the laser power required for adaptive optics multiphoton imaging, and for facilitating integration with existing systems.

  4. Coherence-Gated Sensorless Adaptive Optics Multiphoton Retinal Imaging.

    PubMed

    Cua, Michelle; Wahl, Daniel J; Zhao, Yuan; Lee, Sujin; Bonora, Stefano; Zawadzki, Robert J; Jian, Yifan; Sarunic, Marinko V

    2016-01-01

    Multiphoton microscopy enables imaging deep into scattering tissues. The efficient generation of non-linear optical effects is related to both the pulse duration (typically on the order of femtoseconds) and the size of the focused spot. Aberrations introduced by refractive index inhomogeneity in the sample distort the wavefront and enlarge the focal spot, which reduces the multiphoton signal. Traditional approaches to adaptive optics wavefront correction are not effective in thick or multi-layered scattering media. In this report, we present sensorless adaptive optics (SAO) using low-coherence interferometric detection of the excitation light for depth-resolved aberration correction of two-photon excited fluorescence (TPEF) in biological tissue. We demonstrate coherence-gated SAO TPEF using a transmissive multi-actuator adaptive lens for in vivo imaging in a mouse retina. This configuration has significant potential for reducing the laser power required for adaptive optics multiphoton imaging, and for facilitating integration with existing systems. PMID:27599635

  5. Coherence-Gated Sensorless Adaptive Optics Multiphoton Retinal Imaging

    PubMed Central

    Cua, Michelle; Wahl, Daniel J.; Zhao, Yuan; Lee, Sujin; Bonora, Stefano; Zawadzki, Robert J.; Jian, Yifan; Sarunic, Marinko V.

    2016-01-01

    Multiphoton microscopy enables imaging deep into scattering tissues. The efficient generation of non-linear optical effects is related to both the pulse duration (typically on the order of femtoseconds) and the size of the focused spot. Aberrations introduced by refractive index inhomogeneity in the sample distort the wavefront and enlarge the focal spot, which reduces the multiphoton signal. Traditional approaches to adaptive optics wavefront correction are not effective in thick or multi-layered scattering media. In this report, we present sensorless adaptive optics (SAO) using low-coherence interferometric detection of the excitation light for depth-resolved aberration correction of two-photon excited fluorescence (TPEF) in biological tissue. We demonstrate coherence-gated SAO TPEF using a transmissive multi-actuator adaptive lens for in vivo imaging in a mouse retina. This configuration has significant potential for reducing the laser power required for adaptive optics multiphoton imaging, and for facilitating integration with existing systems. PMID:27599635

  6. FITS Liberator: Image processing software

    NASA Astrophysics Data System (ADS)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.

  7. The Urban Adaptation and Adaptation Process of Urban Migrant Children: A Qualitative Study

    ERIC Educational Resources Information Center

    Liu, Yang; Fang, Xiaoyi; Cai, Rong; Wu, Yang; Zhang, Yaofang

    2009-01-01

    This article employs qualitative research methods to explore the urban adaptation and adaptation processes of Chinese migrant children. Through twenty-one in-depth interviews with migrant children, the researchers discovered: The participant migrant children showed a fairly high level of adaptation to the city; their process of urban adaptation…

  8. Fast-adaptive near-lossless image compression

    NASA Astrophysics Data System (ADS)

    He, Kejing

    2016-05-01

    The purpose of image compression is to store or transmit image data efficiently. However, most compression methods emphasize the compression ratio rather than the throughput. We propose an encoding process and rules, and consequently a fast-adaptive near-lossless image compression method (FAIC) with a good compression ratio. FAIC is a single-pass method, which removes bits from each codeword, then predicts the next pixel value through localized edge detection techniques, and finally uses Golomb-Rice codes to encode the residuals. FAIC uses only logical operations, bitwise operations, additions, and subtractions. Meanwhile, it eliminates the slow operations (e.g., multiplication, division, and logarithm) and the complex entropy coder, which can be a bottleneck in hardware implementations. Moreover, FAIC does not depend on any precomputed tables or parameters. Experimental results demonstrate that FAIC achieves a good balance between compression ratio and computational complexity in a certain range (e.g., peak signal-to-noise ratio > 35 dB, bits per pixel > 2). It is suitable for applications in which the amount of data is huge or the computation power is limited.
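    A hedged sketch of the kind of pipeline FAIC describes: the median edge detector below is the JPEG-LS predictor, used here as a stand-in for FAIC's (unspecified) localized edge-detection predictor, followed by standard near-lossless residual quantization with error bound `delta`:

    ```python
    def med_predict(left, above, upper_left):
        # Median edge detector (as in JPEG-LS): picks min/max near an edge,
        # otherwise makes a planar prediction.
        if upper_left >= max(left, above):
            return min(left, above)
        if upper_left <= min(left, above):
            return max(left, above)
        return left + above - upper_left

    def near_lossless_residual(actual, predicted, delta):
        # Quantize the residual so the reconstruction error is at most delta.
        r = actual - predicted
        q = (abs(r) + delta) // (2 * delta + 1)
        return q if r >= 0 else -q

    def reconstruct(predicted, q, delta):
        # Inverse of the quantization above.
        return predicted + q * (2 * delta + 1)
    ```

    With `delta = 0` the scheme is lossless; larger `delta` trades bounded per-pixel error for smaller residuals and hence shorter Golomb-Rice codewords.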

  9. Discrete adaptive zone light elements (DAZLE): a new approach to adaptive imaging

    NASA Astrophysics Data System (ADS)

    Kellogg, Robert L.; Escuti, Michael J.

    2007-09-01

    New advances in Liquid Crystal Spatial Light Modulators (LCSLM) offer opportunities for large adaptive optics in the midwave infrared spectrum. A light focusing adaptive imaging system, using the zero-order diffraction state of a polarizer-free liquid crystal polarization grating modulator to create millions of high transmittance apertures, is envisioned in a system called DAZLE (Discrete Adaptive Zone Light Elements). DAZLE adaptively selects large sets of LCSLM apertures using the principles of coded masks, embodied in a hybrid Discrete Fresnel Zone Plate (DFZP) design. Issues of system architecture, including factors of LCSLM aperture pattern and adaptive control, image resolution and focal plane array (FPA) matching, and trade-offs between filter bandwidths, background photon noise, and chromatic aberration are discussed.

  10. Seismic Imaging Processing and Migration

    2000-06-26

    Salvo is a 3D, finite difference, prestack, depth migration code for parallel computers. It is also capable of processing 2D and poststack data. The code requires as input a seismic dataset, a velocity model, and a file of parameters that allows the user to select various options. The code uses this information to produce a seismic image. Some of the options available to the user include the application of various filters and imaging conditions. The code also incorporates phase encoding (patent applied for) to process multiple shots simultaneously.

  11. Experiments with recursive estimation in astronomical image processing

    NASA Technical Reports Server (NTRS)

    Busko, I.

    1992-01-01

    Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historical reasons for applying these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even in modern days, when large computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or position in a 2-D image). Many image processing methods make underlying stationarity assumptions either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image by a processor with properties tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio, and autocorrelation function. Software was developed under IRAF, and as such will be made available to interested users.
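    The adaptive recursive processing described above can be illustrated with a scalar Kalman-style estimator whose gain adapts at every step; the variance parameters are hypothetical tuning knobs, not values from the paper:

    ```python
    def recursive_denoise(samples, process_var, noise_var):
        # Scalar recursive (Kalman-style) estimator: the gain k adapts each
        # step, so locally non-stationary signals can still be tracked.
        est, p = samples[0], 1.0
        out = [est]
        for z in samples[1:]:
            p += process_var               # predict: uncertainty grows
            k = p / (p + noise_var)        # adaptive gain
            est += k * (z - est)           # update with the innovation
            p *= (1 - k)                   # uncertainty shrinks after update
            out.append(est)
        return out
    ```

    Driving `process_var` and `noise_var` from local image statistics (intensity, SNR) is the adaptive step the abstract describes; here they are fixed for clarity.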

  12. Linearly-Constrained Adaptive Signal Processing Methods

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.

    1988-01-01

    In adaptive least-squares estimation problems, a desired signal d(n) is estimated using a linear combination of L observation samples x1(n), x2(n), ..., xL(n), denoted by the vector X(n). The estimate is formed as the inner product of this vector with a corresponding L-dimensional weight vector W. One particular weight vector of interest is Wopt, which minimizes the mean-square difference between d(n) and the estimate. In this context, the term 'mean-square difference' is a quadratic measure such as statistical expectation or time average. The specific value of W which achieves the minimum is given by the product of the inverse data covariance matrix and the cross-correlation between the data vector and the desired signal. The latter is often referred to as the P-vector. For those cases in which time samples of both the desired and data vector signals are available, a variety of adaptive methods have been proposed which will guarantee that an iterative weight vector Wa(n) converges (in some sense) to the optimal solution. Two which have been extensively studied are the recursive least-squares (RLS) method and the LMS gradient approximation approach. There are several problems of interest in the communication and radar environment in which the optimal least-squares weight set is of interest and in which time samples of the desired signal are not available. Examples can be found in array processing, in which only the direction of arrival of the desired signal is known, and in single-channel filtering, where the spectrum of the desired response is known a priori. One approach to these problems which has been suggested is the P-vector algorithm, which is an LMS-like approximate gradient method. Although it is easy to derive the mean and variance of the weights which result with this algorithm, there has never been an identification of the corresponding underlying error surface which the procedure searches. The purpose of this paper is to suggest an alternative
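    The LMS gradient approximation mentioned in the abstract can be sketched as a generic transversal-filter update (this is the unconstrained textbook form, not the paper's constrained variant):

    ```python
    def lms(x, d, order, mu):
        # LMS update: w <- w + mu * e(n) * X(n), with e(n) = d(n) - w.X(n).
        w = [0.0] * order
        outputs, errors = [], []
        for n in range(order, len(x)):
            x_vec = x[n - order:n][::-1]      # most recent sample first
            y = sum(wi * xi for wi, xi in zip(w, x_vec))
            e = d[n] - y
            w = [wi + mu * e * xi for wi, xi in zip(w, x_vec)]
            outputs.append(y)
            errors.append(e)
        return w, errors

    # Identify a one-tap system d(n) = 0.5 * x(n-1) from a constant input.
    w, errors = lms([1.0] * 50, [0.5] * 50, order=1, mu=0.5)
    ```

    When d(n) is unavailable, the P-vector algorithm the abstract discusses replaces e(n)*X(n) with an a priori cross-correlation estimate; the update structure stays the same.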

  13. Application of adaptive optics in retinal imaging: a quantitative and clinical comparison with standard cameras

    NASA Astrophysics Data System (ADS)

    Barriga, E. S.; Erry, G.; Yang, S.; Russell, S.; Raman, B.; Soliz, P.

    2005-04-01

    Aim: The objective of this project was to evaluate high resolution images from an adaptive optics retinal imager through comparisons with standard film-based and standard digital fundus imagers. Methods: A clinical prototype adaptive optics fundus imager (AOFI) was used to collect retinal images from subjects with various forms of retinopathy to determine whether improved visibility into the disease could be provided to the clinician. The AOFI achieves low-order correction of aberrations through a closed-loop wavefront sensor and an adaptive optics system. The remaining high-order aberrations are removed by direct deconvolution using the point spread function (PSF) or by blind deconvolution when the PSF is not available. An ophthalmologist compared the AOFI images with standard fundus images and provided a clinical evaluation of all the modalities and processing techniques. All images were also analyzed using a quantitative image quality index. Results: This system has been tested on three human subjects (one normal and two with retinopathy). In the diabetic patient vascular abnormalities were detected with the AOFI that cannot be resolved with the standard fundus camera. Very small features, such as the fine vascular structures on the optic disc and the individual nerve fiber bundles are easily resolved by the AOFI. Conclusion: This project demonstrated that adaptive optic images have great potential in providing clinically significant detail of anatomical and pathological structures to the ophthalmologist.

  14. Fingerprint recognition using image processing

    NASA Astrophysics Data System (ADS)

    Dholay, Surekha; Mishra, Akassh A.

    2011-06-01

    Fingerprint recognition is concerned with the difficult task of efficiently matching the image of a person's fingerprint against the fingerprints present in a database. It is used in forensic science to help find criminals and also to authenticate a particular person, since a fingerprint is unique to each individual. The present paper describes fingerprint recognition methods using various edge detection techniques, and shows how to detect a correct fingerprint from camera images. The method does not require a special device; a simple camera suffices, so the technique can also be applied in a camera mobile phone. Factors affecting the process include poor illumination, noise disturbance, viewpoint dependence, climate factors, and imaging conditions; to account for these, various image enhancement techniques are applied to increase image quality and remove noise. The paper describes the technique of applying contour tracking to the fingerprint image, then using edge detection on the contour, and finally matching the edges inside the contour.
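    One of the standard edge detection techniques the paper alludes to can be illustrated with a plain Sobel gradient-magnitude operator (pure-Python sketch on a 2-D grayscale list; border pixels are left at zero):

    ```python
    def sobel_magnitude(img):
        # img: 2-D list of grayscale values; returns the gradient magnitude
        # at interior pixels using the 3x3 Sobel kernels.
        gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
        gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
        h, w = len(img), len(img[0])
        out = [[0.0] * w for _ in range(h)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                         for j in range(3) for i in range(3))
                gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                         for j in range(3) for i in range(3))
                out[y][x] = (gx * gx + gy * gy) ** 0.5
        return out

    # A vertical step edge yields a strong horizontal gradient.
    edges = sobel_magnitude([[0, 0, 10, 10]] * 4)
    ```

    In the paper's pipeline such an operator would run after contour tracking and enhancement; thresholding the magnitude yields the ridge edges used for matching.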

  15. Adaptive color contrast enhancement for digital images

    NASA Astrophysics Data System (ADS)

    Wang, Yanfang; Luo, Yupin

    2011-11-01

    Noncanonical illumination that is too dim or with color cast induces degenerated images. To cope with this, we propose a method for color-contrast enhancement. First, intensity, chrominance, and contrast characteristics are explored and integrated in the Naka-Rushton equation to remove underexposure and color cast simultaneously. Motivated by the comparison mechanism in Retinex, the ratio of each pixel to its surroundings is utilized to improve image contrast. Finally, inspired by the two color-opponent dimensions in CIELAB space, a color-enhancement strategy is devised based on the transformation from CIEXYZ to CIELAB color space. For images that suffer from underexposure, color cast, or both problems, our algorithm produces promising results without halo artifacts and corruption of uniform areas.
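    The Naka-Rushton equation at the core of the method maps an intensity I to I/(I + sigma); the sketch below uses the mean intensity as the semi-saturation constant sigma, which is one simple choice and an assumption here (the paper integrates chrominance and contrast characteristics as well):

    ```python
    def naka_rushton(intensity, semi_saturation):
        # R = I / (I + sigma): compresses dynamic range, boosting dim
        # pixels relative to bright ones.
        return intensity / (intensity + semi_saturation)

    def enhance(pixels, max_level=255.0):
        # Use the mean intensity as the adaptive semi-saturation constant
        # (an illustrative choice, not the paper's full criterion).
        sigma = sum(pixels) / len(pixels)
        return [max_level * naka_rushton(p, sigma) for p in pixels]
    ```

    Because sigma adapts to the image, underexposed images receive a stronger lift while well-exposed ones are left nearly unchanged.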

  16. Linear Algebra and Image Processing

    ERIC Educational Resources Information Center

    Allali, Mohamed

    2010-01-01

    We use digital image processing (DIP) technology to enhance the teaching of linear algebra so as to make the course more visual and interesting. This visual approach, using technology to link linear algebra to DIP, is interesting and unexpected to students and faculty alike. (Contains 2 tables and 11 figures.)

  17. Linear algebra and image processing

    NASA Astrophysics Data System (ADS)

    Allali, Mohamed

    2010-09-01

    We use digital image processing (DIP) technology to enhance the teaching of linear algebra so as to make the course more visual and interesting. This visual approach, using technology to link linear algebra to DIP, is interesting and unexpected to students and faculty alike.
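    The link between linear algebra and image operations that such a course exploits can be shown with a one-line example: right-multiplying an image matrix by the anti-diagonal permutation matrix mirrors it left-to-right (names are illustrative):

    ```python
    def matmul(A, B):
        # Plain matrix multiplication of 2-D lists.
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    def reversal_matrix(n):
        # Anti-diagonal permutation matrix J: img @ J mirrors columns.
        return [[1 if i + j == n - 1 else 0 for j in range(n)] for i in range(n)]

    img = [[1, 2, 3],
           [4, 5, 6],
           [7, 8, 9]]
    mirrored = matmul(img, reversal_matrix(3))
    ```

    Transposes, rotations, and many filters admit the same matrix-product formulation, which is the pedagogical point.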

  18. Concept Learning through Image Processing.

    ERIC Educational Resources Information Center

    Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

    This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

  19. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) for low contrast images to enhance images adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors, which are threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928
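    As a simplified stand-in for the paper's CS-PSO search over an incomplete-Beta transform, the sketch below scores candidate transforms by histogram entropy (one factor of the paper's quality criterion) and keeps the best over a small gamma-curve grid; both the gamma family and the grid search are substitutions, not the paper's method:

    ```python
    import math

    def entropy(pixels, levels=256):
        # Shannon entropy of the grey-level histogram.
        hist = [0] * levels
        for p in pixels:
            hist[int(p)] += 1
        n = len(pixels)
        return -sum(h / n * math.log2(h / n) for h in hist if h)

    def enhance_search(pixels, gammas=(0.4, 0.6, 0.8, 1.0, 1.4, 2.0)):
        # Try each candidate transform and keep the one with maximal entropy;
        # the paper optimizes the same kind of criterion with CS-PSO instead.
        best, best_h = pixels, entropy(pixels)
        for g in gammas:
            mapped = [min(255, round(255 * (p / 255) ** g)) for p in pixels]
            h = entropy(mapped)
            if h > best_h:
                best, best_h = mapped, h
        return best, best_h
    ```

    Replacing the grid with a swarm of candidate parameter vectors, updated by CS-PSO rules, recovers the structure of the paper's optimizer.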

  20. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) for low contrast images to enhance images adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors, which are threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.

  1. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    PubMed Central

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) for low contrast images to enhance images adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors, which are threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928

  2. Probing the functions of contextual modulation by adapting images rather than observers

    PubMed Central

    Webster, Michael A.

    2014-01-01

    Countless visual aftereffects have illustrated how visual sensitivity and perception can be biased by adaptation to the recent temporal context. This contextual modulation has been proposed to serve a variety of functions, but the actual benefits of adaptation remain uncertain. We describe an approach we have recently developed for exploring these benefits by adapting images instead of observers, to simulate how images should appear under theoretically optimal states of adaptation. This allows the long-term consequences of adaptation to be evaluated in ways that are difficult to probe by adapting observers, and provides a common framework for understanding how visual coding changes when the environment or the observer changes, or for evaluating how the effects of temporal context depend on different models of visual coding or the adaptation processes. The approach is illustrated for the specific case of adaptation to color, for which the initial neural coding and adaptation processes are relatively well understood, but can in principle be applied to examine the consequences of adaptation for any stimulus dimension. A simple calibration that adjusts each neuron’s sensitivity according to the stimulus level it is exposed to is sufficient to normalize visual coding and generate a host of benefits, from increased efficiency to perceptual constancy to enhanced discrimination. This temporal normalization may also provide an important precursor for the effective operation of contextual mechanisms operating across space or feature dimensions. To the extent that the effects of adaptation can be predicted, images from new environments could be “pre-adapted” to match them to the observer, eliminating the need for observers to adapt. PMID:25281412
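
    The "simple calibration that adjusts each neuron's sensitivity according to the stimulus level it is exposed to" can be illustrated, for the color case, with a von Kries-style channel normalization (a deliberately simplified stand-in for the paper's model; the target mean and the illuminant used below are illustrative assumptions):

```python
import numpy as np

def von_kries_adapt(img, target_mean=0.5):
    """Rescale each color channel by the level it is exposed to (its
    mean), so that all channels normalize to a common operating point."""
    gains = target_mean / img.mean(axis=(0, 1))
    return np.clip(img * gains, 0.0, 1.0)

# A scene under a reddish illuminant: the red channel runs "hot".
rng = np.random.default_rng(1)
scene = rng.random((32, 32, 3)) * np.array([0.9, 0.5, 0.4])
adapted = von_kries_adapt(scene)
```

    After adaptation all three channel means sit near the common target, which is the sense in which the calibration discounts the illuminant.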

  3. Contrast-based sensorless adaptive optics for retinal imaging.

    PubMed

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T O; He, Zheng; Metha, Andrew

    2015-09-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes.
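
    A sensorless AO loop of this kind optimizes an image quality metric directly rather than measuring the wavefront. The sketch below uses mean-normalized RMS contrast as the metric (an assumed stand-in, not the metric proposed in the paper) and checks that it ranks a sharp frame above a blurred, aberration-like counterpart:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rms_contrast(img):
    """Mean-normalized RMS contrast: one candidate metric a sensorless
    AO loop can maximize over deformable-mirror shapes."""
    return img.std() / img.mean()

rng = np.random.default_rng(2)
sharp = rng.random((64, 64)) + 0.5           # frame with fine detail
blurred = gaussian_filter(sharp, sigma=2.0)  # aberrated counterpart
```

    In a real loop, the mirror shape maximizing this score over a sequence of trial frames would be retained.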

  4. Contrast-based sensorless adaptive optics for retinal imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T.O.; He, Zheng; Metha, Andrew

    2015-01-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes. PMID:26417525

  5. Patient-adaptive reconstruction and acquisition in dynamic imaging with sensitivity encoding (PARADISE).

    PubMed

    Sharif, Behzad; Derbyshire, J Andrew; Faranesh, Anthony Z; Bresler, Yoram

    2010-08-01

    MRI of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional nongated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly accelerated nongated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., subject's heart rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in-vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high-resolution nongated cardiac MRI during short breath-hold. PMID:20665794

  6. Rapid inversion of velocity map images for adaptive femtosecond control

    NASA Astrophysics Data System (ADS)

    Rallis, C.; Andrews, P.; Averin, R.; Jochim, B.; Gregerson, N.; Wells, E.; Zohrabi, M.; de, S.; Gaire, B.; Carnes, K. D.; Ben-Itzhak, I.; Bergues, B.; Kling, M. F.

    2011-05-01

    We report techniques developed to utilize three dimensional momentum information as feedback in adaptive femtosecond control of molecular systems. Velocity map imaging of the dissociating ions following interaction with an intense ultrafast laser pulse provides raw data. In order to recover momentum information, however, the two-dimensional image must be inverted to reconstruct the three-dimensional photofragment distribution. Using a variation of the onion-peeling technique, we invert 1054 × 1040 pixel images in under 1 second. This rapid inversion allows a slice of the momentum distribution to be used as feedback in a closed-loop adaptive control scheme. This work supported by National Science Foundation award PHY-0969687 and the Chemical Sciences, Geosciences, and Biosciences Division, Office of Basic Energy Science, Office of Science, US Department of Energy.

  7. Flood adaptive traits and processes: an overview.

    PubMed

    Voesenek, Laurentius A C J; Bailey-Serres, Julia

    2015-04-01

    Unanticipated flooding challenges plant growth and fitness in natural and agricultural ecosystems. Here we describe mechanisms of developmental plasticity and metabolic modulation that underpin adaptive traits and acclimation responses to waterlogging of root systems and submergence of aerial tissues. This includes insights into processes that enhance ventilation of submerged organs. At the intersection between metabolism and growth, submergence survival strategies have evolved involving an ethylene-driven and gibberellin-enhanced module that regulates growth of submerged organs. Opposing regulation of this pathway is facilitated by a subgroup of ethylene-response transcription factors (ERFs), which include members that require low O₂ or low nitric oxide (NO) conditions for their stabilization. These transcription factors control genes encoding enzymes required for anaerobic metabolism as well as proteins that fine-tune their function in transcription and turnover. Other mechanisms that control metabolism and growth at seed, seedling and mature stages under flooding conditions are reviewed, as well as findings demonstrating that true endurance of submergence includes an ability to restore growth following the deluge. Finally, we highlight molecular insights obtained from natural variation of domesticated and wild species that occupy different hydrological niches, emphasizing the value of understanding natural flooding survival strategies in efforts to stabilize crop yields in flood-prone environments.

  8. Spatially adaptive migration tomography for multistatic GPR imaging

    DOEpatents

    Paglieroni, David W; Beer, N. Reginald

    2013-08-13

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicate the presence of subsurface objects.
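
    The final step, identifying peaks in the energy levels of the post-processed frame, can be sketched with a local-maximum filter (an illustrative reconstruction, not the patented detector; the window size, threshold, and synthetic frame are assumptions):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_energy_peaks(frame, threshold):
    """Flag pixels that are local maxima of the energy image and that
    exceed a detection threshold."""
    local_max = frame == maximum_filter(frame, size=5)
    return np.argwhere(local_max & (frame > threshold))

# Synthetic post-processed frame: flat background, two buried "objects".
frame = np.zeros((40, 40))
frame[10, 12] = 5.0
frame[30, 25] = 7.0
peaks = find_energy_peaks(frame, threshold=1.0)
```

    Each returned (row, column) pair marks a candidate subsurface object for downstream confirmation.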

  9. Image processing applications in NDE

    SciTech Connect

    Morris, R.A.

    1980-01-01

    Nondestructive examination (NDE) can be defined as a technique or collection of techniques that permits one to determine some property of a material or object without damaging the object. There are a large number of such techniques and most of them use visual imaging in one form or another. They vary from holographic interferometry, where displacements under stress are measured, to the visual inspection of an object's surface to detect cracks after penetrant has been applied. The use of image processing techniques on the images produced by NDE is relatively new and can be divided into three general categories: classical image enhancement; mensuration techniques; and quantitative sensitometry. An example is discussed of how image processing techniques are used to nondestructively and destructively test the product throughout its life cycle. The product that will be followed is the microballoon target used in the laser fusion program. The laser target is a small (50 to 100 μm diameter) glass sphere with a typical wall thickness of 0.5 to 6 μm. The sphere may be used as is or may be given a number of coatings of any number of materials. The beads are mass produced by the millions and the first nondestructive test is to separate the obviously bad beads (broken or incomplete) from the good ones. After this has been done, the good beads must be inspected for sphericity and wall thickness uniformity. The microradiography of the glass, uncoated bead is performed on a specially designed low-energy x-ray machine. The beads are mounted in a special jig and placed on a Kodak high resolution plate in a vacuum chamber that contains the x-ray source. The x-ray image is made with an energy less than 2 keV and the resulting images are then inspected at a magnification of 500 to 1000X. Some typical results are presented.

  10. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule-base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
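
    The core idea, remapping the image into a lower-entropy state before coding, can be demonstrated with horizontal DPCM, one of the remapping transforms the paper tests (the ramp image below and the zeroth-order entropy bound are illustrative; a high-order arithmetic coder would approach this bound):

```python
import numpy as np

def entropy_bits(values):
    """Zeroth-order entropy in bits/sample: the bound a lossless coder
    such as arithmetic coding approaches."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def dpcm_remap(row):
    """Horizontal differential pulse code modulation: replace each
    sample with its difference from the previous one."""
    row = row.astype(np.int16)
    return np.diff(row, prepend=row[:1])

# A smooth ramp image: raw samples use many symbols, differences few.
img = np.tile(np.arange(0, 256, 2, dtype=np.uint8), (16, 1))
raw = img.ravel()
remapped = np.concatenate([dpcm_remap(r) for r in img])
```

    On this ramp the raw samples need 7 bits each, while the DPCM residuals collapse to a near-constant stream costing a small fraction of a bit per sample.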

  11. Registration of adaptive optics corrected retinal nerve fiber layer (RNFL) images.

    PubMed

    Ramaswamy, Gomathy; Lombardo, Marco; Devaney, Nicholas

    2014-06-01

    Glaucoma is the leading cause of preventable blindness in the western world. Investigation of high-resolution retinal nerve fiber layer (RNFL) images in patients may lead to new indicators of its onset. Adaptive optics (AO) can provide diffraction-limited images of the retina, providing new opportunities for earlier detection of neuroretinal pathologies. However, precise processing is required to correct for three effects in sequences of AO-assisted, flood-illumination images: uneven illumination, residual image motion and image rotation. This processing can be challenging for images of the RNFL due to their low contrast and lack of clearly noticeable features. Here we develop specific processing techniques and show that their application leads to improved image quality on the nerve fiber bundles. This in turn improves the reliability of measures of fiber texture such as the correlation of Gray-Level Co-occurrence Matrix (GLCM).

  12. Registration of adaptive optics corrected retinal nerve fiber layer (RNFL) images

    PubMed Central

    Ramaswamy, Gomathy; Lombardo, Marco; Devaney, Nicholas

    2014-01-01

    Glaucoma is the leading cause of preventable blindness in the western world. Investigation of high-resolution retinal nerve fiber layer (RNFL) images in patients may lead to new indicators of its onset. Adaptive optics (AO) can provide diffraction-limited images of the retina, providing new opportunities for earlier detection of neuroretinal pathologies. However, precise processing is required to correct for three effects in sequences of AO-assisted, flood-illumination images: uneven illumination, residual image motion and image rotation. This processing can be challenging for images of the RNFL due to their low contrast and lack of clearly noticeable features. Here we develop specific processing techniques and show that their application leads to improved image quality on the nerve fiber bundles. This in turn improves the reliability of measures of fiber texture such as the correlation of Gray-Level Co-occurrence Matrix (GLCM). PMID:24940551
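
    The GLCM correlation measure used above to quantify fiber texture can be sketched directly (a standard textbook formulation of the feature; the pixel offset, gray-level count, and striped test pattern are illustrative):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one offset: normalized counts
    of gray-level pairs (i, j) separated by (dy, dx)."""
    g = np.zeros((levels, levels))
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]
    b = img[dy:, dx:]
    for i, j in zip(a.ravel(), b.ravel()):
        g[i, j] += 1
    return g / g.sum()

def glcm_correlation(p):
    """The GLCM 'correlation' texture feature."""
    n = p.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    return ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j)

# Horizontal stripes: gray levels are perfectly correlated along rows,
# loosely analogous to texture along nerve fiber bundles.
stripes = np.tile(np.array([[0], [1], [2], [3]]), (4, 8)).astype(int)
corr = glcm_correlation(glcm(stripes, levels=4, dx=1, dy=0))
```

    Comparing this feature before and after registration is one way to verify that the processing has improved the usable texture signal.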

  13. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the laboratory LIGIV concern capture, processing, archiving and display of color images considering the trichromatic nature of the Human Visual System (HVS). Among these projects, one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Regions of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display media changes. This requires, firstly, the definition of a reference color space and of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and image post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but additionally need to consider mesopic viewing conditions.

  14. High-speed atomic force microscope imaging: Adaptive multiloop mode

    NASA Astrophysics Data System (ADS)

    Ren, Juan; Zou, Qingze; Li, Bo; Lin, Zhiqun

    2014-07-01

    In this paper, an imaging mode (called the adaptive multiloop mode) of the atomic force microscope (AFM) is proposed to substantially increase the speed of tapping mode (TM) imaging while preserving the advantages of TM imaging over contact mode (CM) imaging. Due to its superior image quality and lower sample disturbance compared with CM imaging, particularly for soft materials such as polymers, TM imaging is currently the most widely used imaging technique. The speed of TM imaging, however, is substantially (over an order of magnitude) lower than that of CM imaging, becoming the major bottleneck of this technique. Increasing the speed of TM imaging is challenging, as a stable probe tapping on the sample surface must be maintained to preserve the image quality, whereas the probe tapping is rather sensitive to the sample topography variation. As a result, an increase of imaging speed can quickly lead to loss of the probe-sample contact and/or annihilation of the probe tapping, resulting in image distortion and/or sample deformation. The proposed adaptive multiloop mode (AMLM) imaging overcomes these limitations of TM imaging through three efforts integrated together: first, it is proposed to account for the variation of the TM deflection when quantifying the sample topography; second, an inner-outer feedback control loop to regulate the TM deflection is added on top of the tapping-feedback control loop to improve the sample topography tracking; and, third, an online iterative feedforward controller is augmented to the whole control system to further enhance the topography tracking, where the next-line sample topography is predicted and utilized to reduce the tracking error. The added feedback regulation of the TM deflection ensures that the probe-sample interaction force remains near the minimum needed to maintain a stable probe-sample interaction. The proposed AMLM imaging is tested and demonstrated by imaging a poly(tert-butyl acrylate) sample in experiments.

  15. Performance of the Gemini Planet Imager's adaptive optics system.

    PubMed

    Poyneer, Lisa A; Palmer, David W; Macintosh, Bruce; Savransky, Dmitry; Sadakuni, Naru; Thomas, Sandrine; Véran, Jean-Pierre; Follette, Katherine B; Greenbaum, Alexandra Z; Ammons, S Mark; Bailey, Vanessa P; Bauman, Brian; Cardwell, Andrew; Dillon, Daren; Gavel, Donald; Hartung, Markus; Hibon, Pascale; Perrin, Marshall D; Rantakyrö, Fredrik T; Sivaramakrishnan, Anand; Wang, Jason J

    2016-01-10

    The Gemini Planet Imager's adaptive optics (AO) subsystem was designed specifically to facilitate high-contrast imaging. A definitive description of the system's algorithms and technologies as built is given. A total of 564 AO telemetry measurements from the Gemini Planet Imager Exoplanet Survey campaign are analyzed. The modal gain optimizer tracks changes in atmospheric conditions. Science observations show that image quality can be improved with the use of both the spatially filtered wavefront sensor and linear-quadratic-Gaussian control of vibration. The error budget indicates that for all targets and atmospheric conditions AO bandwidth error is the largest term.

  16. Adaptation of web pages and images for mobile applications

    NASA Astrophysics Data System (ADS)

    Kopf, Stephan; Guthier, Benjamin; Lemelson, Hendrik; Effelsberg, Wolfgang

    2009-02-01

    In this paper, we introduce our new visualization service, which presents web pages and images on arbitrary devices with differing display resolutions. We analyze the layout of a web page and simplify its structure and formatting rules. The small screen of a mobile device is used much better this way. Our new image adaptation service combines several techniques. In a first step, border regions which do not contain relevant semantic content are identified. Cropping is used to remove these regions. Attention objects are identified in a second step. We use face detection, text detection and contrast-based saliency maps to identify these objects and combine them into a region of interest. Optionally, the seam carving technique can be used to remove inner parts of an image. Additionally, we have developed a software tool to validate, add, delete, or modify all automatically extracted data. This tool also simulates different mobile devices, so that the user gets a feeling of what an adapted web page will look like. We have performed user studies to evaluate our web and image adaptation approach. Questions regarding software ergonomics, quality of the adapted content, and perceived benefit of the adaptation were asked.

  17. Limitations to adaptive optics image quality in rodent eyes.

    PubMed

    Zhou, Xiaolin; Bedggood, Phillip; Metha, Andrew

    2012-08-01

    Adaptive optics (AO) retinal image quality of rodent eyes is inferior to that of human eyes, despite the promise of greater numerical aperture. This paradox challenges several assumptions commonly made in AO imaging, assumptions which may be invalidated by the very high power and dioptric thickness of the rodent retina. We used optical modeling to compare the performance of rat and human eyes under conditions that tested the validity of these assumptions. Results showed that AO image quality in the human eye is robust to positioning errors of the AO corrector and to differences in imaging depth and wavelength compared to the wavefront beacon. In contrast, image quality in the rat eye declines sharply with each of these manipulations, especially when imaging off-axis. However, some latitude does exist to offset these manipulations against each other to produce good image quality.

  18. Mesh adaptation technique for Fourier-domain fluorescence lifetime imaging

    SciTech Connect

    Soloviev, Vadim Y.

    2006-11-15

    A novel adaptive mesh technique in the Fourier domain is introduced for problems in fluorescence lifetime imaging. A dynamical adaptation of the three-dimensional scheme based on the finite volume formulation reduces computational time and balances the ill-posed nature of the inverse problem. Light propagation in the medium is modeled by the telegraph equation, while the lifetime reconstruction algorithm is derived from the Fredholm integral equation of the first kind. Stability and computational efficiency of the method are demonstrated by image reconstruction of two spherical fluorescent objects embedded in a tissue phantom.

  19. Real-time 3D adaptive filtering for portable imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often not able to run with sufficient performance on a portable platform. In recent years, advanced multicore DSPs have been introduced that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms like 3D adaptive filtering, improving the image quality of portable medical imaging devices. In this study, the performance of a 3D adaptive filtering algorithm on a digital signal processor (DSP) is investigated. The performance is assessed by filtering a volume of 512 × 256 × 128 voxels at a rate of 10 MVoxels/s.

  20. Adaptive neural network for image enhancement

    NASA Astrophysics Data System (ADS)

    Perl, Dan; Marsland, T. A.

    1992-09-01

    ANNIE is a neural network that removes noise and sharpens edges in digital images. For noise removal, ANNIE makes a weighted average of the values of the pixels over a certain neighborhood. For edge sharpening, ANNIE detects edges and applies a correction around them. Although averaging is a simple operation and needs only a two-layer neural network, detecting edges is more complex and demands several hidden layers. Based on Marr's theory of natural vision, the edge detection method uses zero-crossings in the image filtered by the ∇²G operator (where ∇² is the Laplacian operator and G stands for a two-dimensional Gaussian distribution), and uses two channels with different spatial frequencies. Edge detectors are tuned for vertical and horizontal orientations. Lateral inhibition implemented through one-step recursion achieves both edge relaxation and correlation of the two channels. Training by means of the quickprop algorithm determines the shapes of the weighted averaging filter and the edge correction filters, and the rules for edge relaxation and channel interaction. ANNIE uses pairs of pictures as training patterns: one picture is a reference for the output of the network, and the same picture deteriorated by noise and/or blur is the input of the network.
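
    The Marr-inspired ∇²G zero-crossing step can be sketched with off-the-shelf filtering (a minimal single-channel illustration, assuming SciPy's `gaussian_laplace`; ANNIE itself learns its filters and runs two spatial-frequency channels with lateral inhibition):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_zero_crossings(img, sigma):
    """Marr-Hildreth edge detection: filter with the Laplacian-of-
    Gaussian operator, then mark sign changes between horizontal or
    vertical neighbors."""
    f = gaussian_laplace(img.astype(float), sigma=sigma)
    edges = np.zeros(img.shape, dtype=bool)
    edges[:, :-1] |= np.signbit(f[:, :-1]) != np.signbit(f[:, 1:])
    edges[:-1, :] |= np.signbit(f[:-1, :]) != np.signbit(f[1:, :])
    return edges

# A vertical step edge: zero crossings cluster around column 32.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
edges = log_zero_crossings(img, sigma=2.0)
```

    Running the same detector at two values of sigma approximates the two spatial-frequency channels the abstract describes.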

  1. Adaptive, predictive controller for optimal process control

    SciTech Connect

    Brown, S.K.; Baum, C.C.; Bowling, P.S.; Buescher, K.L.; Hanagandi, V.M.; Hinde, R.F. Jr.; Jones, R.D.; Parkinson, W.J.

    1995-12-01

    One can derive a model for use in a Model Predictive Controller (MPC) from first principles or from experimental data. Until recently, both methods failed for all but the simplest processes. First principles are almost always incomplete, and fitting to experimental data fails for dimensions greater than one as well as for non-linear cases. Several authors have suggested the use of a neural network to fit the experimental data to a multi-dimensional and/or non-linear model. Most networks, however, use simple sigmoid functions and backpropagation for fitting. Training of these networks generally requires large amounts of data and, consequently, very long training times. In 1993 we reported on the tuning and optimization of a negative ion source using a special neural network[2]. One of the properties of this network (CNLSnet), a modified radial basis function network, is that it is able to fit data with few basis functions. Another is that its training is linear, resulting in guaranteed convergence and rapid training. We found the training to be rapid enough to support real-time control. This work has been extended to incorporate this network into an MPC, using the model built by the network for predictive control. This controller has shown some remarkable capabilities in such non-linear applications as continuous stirred exothermic tank reactors and high-purity fractional distillation columns[3]. The controller is able not only to build an appropriate model from operating data but also to thin the network continuously so that the model adapts to changing plant conditions. The controller is discussed as well as its possible use in various difficult control problems that face this community.
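
    The key property claimed for CNLSnet, linear training of a radial-basis-function model with guaranteed convergence, can be illustrated with a plain Gaussian RBF fit by least squares (a sketch, not the modified network of the abstract; the centers, width, and the sine "plant" below are illustrative assumptions):

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian radial basis activations. With the centers fixed, the
    only unknowns are the linear output weights, so training reduces to
    a least-squares solve: one shot, guaranteed to converge."""
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

# Fit a nonlinear plant response y = sin(x) from noisy operating data.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 2 * np.pi, 200)
y = np.sin(x) + 0.01 * rng.standard_normal(200)

centers = np.linspace(0.0, 2 * np.pi, 15)
A = rbf_design(x, centers, width=0.8)
w, *_ = np.linalg.lstsq(A, y, rcond=None)   # linear "training"
rmse = np.sqrt(np.mean((A @ w - y) ** 2))
```

    Because the solve is linear, the model can be refit cheaply as new operating data arrive, which is what makes the adaptive, real-time use described above plausible.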

  2. Adaptive optics technology for high-resolution retinal imaging.

    PubMed

    Lombardo, Marco; Serrao, Sebastiano; Devaney, Nicholas; Parravano, Mariacristina; Lombardo, Giuseppe

    2012-12-27

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effects of optical aberrations. The direct visualization of the photoreceptor cells, capillaries and nerve fiber bundles represents the major benefit of adding AO to retinal imaging. Adaptive optics is opening a new frontier for clinical research in ophthalmology, providing new information on the early pathological changes of the retinal microstructures in various retinal diseases. We have reviewed AO technology for retinal imaging, providing information on the core components of an AO retinal camera. The most commonly used wavefront sensing and correcting elements are discussed. Furthermore, we discuss current applications of AO imaging to a population of healthy adults and to the most frequent causes of blindness, including diabetic retinopathy, age-related macular degeneration and glaucoma. We conclude our work with a discussion on future clinical prospects for AO retinal imaging.

  3. Adaptive Optics Technology for High-Resolution Retinal Imaging

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Devaney, Nicholas; Parravano, Mariacristina; Lombardo, Giuseppe

    2013-01-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effects of optical aberrations. The direct visualization of the photoreceptor cells, capillaries and nerve fiber bundles represents the major benefit of adding AO to retinal imaging. Adaptive optics is opening a new frontier for clinical research in ophthalmology, providing new information on the early pathological changes of the retinal microstructures in various retinal diseases. We have reviewed AO technology for retinal imaging, providing information on the core components of an AO retinal camera. The most commonly used wavefront sensing and correcting elements are discussed. Furthermore, we discuss current applications of AO imaging to a population of healthy adults and to the most frequent causes of blindness, including diabetic retinopathy, age-related macular degeneration and glaucoma. We conclude our work with a discussion on future clinical prospects for AO retinal imaging. PMID:23271600

  4. Locally adaptive bilateral clustering for universal image denoising

    NASA Astrophysics Data System (ADS)

    Toh, K. K. V.; Mat Isa, N. A.

    2012-12-01

    This paper presents a novel and efficient locally adaptive denoising method based on clustering of pixels into regions of similar geometric and radiometric structures. Clustering is performed by adaptively segmenting pixels in the local kernel based on their augmented variational series. Then, noise pixels are restored by selectively considering the radiometric and spatial properties of every pixel in the formed clusters. The proposed method is exceedingly robust in conveying reliable local structural information even in the presence of noise. As a result, the proposed method substantially outperforms other state-of-the-art methods in terms of image restoration and computational cost. We support our claims with ample simulated and real data experiments. The relatively fast runtime from extensive simulations also suggests that the proposed method is suitable for a variety of image-based products — either embedded in image capturing devices or applied as image enhancement software.
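The clustering idea sketched in the abstract is easy to prototype. Below is a toy stand-in, not the authors' algorithm: the 3x3 window, the largest-gap split of the sorted intensity series, and the `gap_thresh`/`min_cluster` parameters are all illustrative assumptions. Each pixel is restored from the mean of its own intensity cluster, unless that cluster is tiny (likely impulse noise), in which case the dominant cluster is used.

```python
import numpy as np

def denoise_cluster(img, gap_thresh=50, min_cluster=2):
    """Toy locally adaptive clustering denoiser (illustrative only).

    The 3x3 window around each pixel is sorted into a variational series
    and split into two clusters at the largest intensity gap. The center
    pixel is restored from the mean of its own cluster, unless that
    cluster is tiny, in which case the other cluster's mean is used.
    """
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 3, x:x + 3].ravel()
            s = np.sort(win)
            gaps = np.diff(s)
            k = int(np.argmax(gaps))
            if gaps[k] <= gap_thresh:          # no clear structure: plain mean
                out[y, x] = s.mean()
                continue
            low, high = s[:k + 1], s[k + 1:]
            center = padded[y + 1, x + 1]
            own, other = (low, high) if center <= s[k] else (high, low)
            out[y, x] = own.mean() if len(own) >= min_cluster else other.mean()
    return out
```

On a piecewise-constant image containing an isolated impulse, this toy version removes the impulse exactly while leaving the step edge intact, which illustrates why cluster-based restoration can denoise without blurring structure.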

  5. [Cyclic interactions in the processes of adaptation regulation].

    PubMed

    Vasilevskiĭ, N N; Aleksandrova, Zh G; Suvorov, N B

    1989-01-01

Human adaptation is characterized by essential changes in the biorhythms of functional systems, which appear as changes in the sequence of their components and in the dynamics of biorhythmological cycles. These regularities, described here for the human EEG, make it possible to clearly discern individual typological peculiarities in people at different stages of adaptation, as well as the adaptive shifts that occur during long-term exposure to external factors. The cyclic course of adaptive processes is regarded as a measure of adaptability. Through this multitude of biorhythms, memory is constantly replenished by the brain with discrete portions of adaptogenic information, which counteracts the natural processes of memory disintegration. PMID:2816008

  6. Modular and Adaptive Control of Sound Processing

    NASA Astrophysics Data System (ADS)

    van Nort, Douglas

    parameters. In this view, desired gestural dynamics and sonic response are achieved through modular construction of mapping layers that are themselves subject to parametric control. Complementing this view of the design process, the work concludes with an approach in which the creation of gestural control/sound dynamics are considered in the low-level of the underlying sound model. The result is an adaptive system that is specialized to noise-based transformations that are particularly relevant in an electroacoustic music context. Taken together, these different approaches to design and evaluation result in a unified framework for creation of an instrumental system. The key point is that this framework addresses the influence that mapping structure and control dynamics have on the perceived feel of the instrument. Each of the results illustrate this using either top-down or bottom-up approaches that consider musical control context, thereby pointing to the greater potential for refined sonic articulation that can be had by combining them in the design process.

  7. Quantitative genetic study of the adaptive process.

    PubMed

    Shaw, R G; Shaw, F H

    2014-01-01

The additive genetic variance with respect to absolute fitness, VA(W), divided by mean absolute fitness, W̄, sets the rate of ongoing adaptation. Fisher's key insight yielding this quantitative prediction of adaptive evolution, known as the Fundamental Theorem of Natural Selection, is well appreciated by evolutionists. Nevertheless, extremely scant information about VA(W) is available for natural populations. Consequently, the capacity for fitness increase via natural selection is unknown. Particularly in the current context of rapid environmental change, which is likely to reduce fitness directly and, consequently, the size and persistence of populations, the urgency of advancing understanding of immediate adaptive capacity is extreme. We here explore reasons for the dearth of empirical information about VA(W), despite its theoretical renown and critical evolutionary role. Of these reasons, we suggest that expectations that VA(W) is negligible, in general, together with severe statistical challenges of estimating it, may largely account for the limited empirical emphasis on it. To develop insight into the dynamics of VA(W) in a changing environment, we have conducted individual-based genetically explicit simulations. We show that, as optimizing selection on a trait changes steadily over generations, VA(W) can grow considerably, supporting more rapid adaptation than would the VA(W) of the base population. We call for direct evaluation of VA(W) and W̄ in support of predicting rates of adaptive evolution, and we advocate the use of aster modeling as a rigorous basis for achieving this goal.
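Fisher's prediction, that the per-generation gain in mean fitness equals VA(W)/W̄, can be checked numerically. In the fully heritable (clonal) limit the additive variance is simply the variance of fitness, so the identity is exact; the toy sketch below (not the authors' aster-model analysis, and all distribution parameters are illustrative) verifies it:

```python
import numpy as np

rng = np.random.default_rng(42)

# Absolute fitnesses of a large clonal population. Fitness is fully
# heritable here, so V_A(W) is simply the variance of W.
w = rng.gamma(shape=4.0, scale=0.5, size=100_000)

mean_w = w.mean()
va_w = w.var()

# One generation of selection: each type contributes offspring in
# proportion to its fitness, so the next-generation mean is E[W^2]/E[W].
mean_w_next = np.average(w, weights=w)

predicted_gain = va_w / mean_w          # Fisher: rate = V_A(W) / mean W
observed_gain = mean_w_next - mean_w
assert abs(observed_gain - predicted_gain) < 1e-9
```

The agreement is exact up to floating-point error, since for clonal selection the identity E[W²]/E[W] − E[W] = Var(W)/E[W] is an algebraic consequence of the Price equation.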

  8. High-resolution adaptive imaging with a single photodiode

    PubMed Central

    Soldevila, F.; Salvador-Balaguer, E.; Clemente, P.; Tajahuerce, E.; Lancis, J.

    2015-01-01

During the past few years, the emergence of spatial light modulators operating at tens of kHz has enabled new imaging modalities based on single-pixel photodetectors. The nature of single-pixel imaging enforces a reciprocal relationship between frame rate and image size. Compressive imaging methods allow images to be reconstructed from a number of projections that is only a fraction of the number of pixels. In microscopy, single-pixel imaging is capable of producing images with a moderate size of 128 × 128 pixels at frame rates under one Hz. Recently, there has been considerable interest in the development of advanced techniques for high-resolution real-time operation in applications such as biological microscopy. Here, we introduce an adaptive compressive technique based on wavelet trees within this framework. In our adaptive approach, the resolution of the projecting patterns remains deliberately small, which is crucial to avoid the demanding memory requirements of compressive sensing algorithms. At pattern projection rates of 22.7 kHz, our technique would make it possible to obtain 128 × 128 pixel images at frame rates around 3 Hz. In our experiments, we have demonstrated a cost-effective solution employing a commercial projection display. PMID:26382114
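The tree-refinement logic behind such adaptive single-pixel schemes can be sketched with a quadtree of block means: coarse patterns are projected first, and a block is subdivided only where the detail is significant. The code below is a toy stand-in for the authors' wavelet-tree method; `measure` plays the role of the photodiode, and the threshold is an illustrative assumption.

```python
import numpy as np

def measure(scene, mask):
    """One photodiode reading: total intensity under a binary pattern."""
    return float(scene[mask].sum())

def adaptive_reconstruct(scene, thresh=1e-6):
    """Toy coarse-to-fine single-pixel scheme in the spirit of wavelet
    trees: a block is split into its four children only when their means
    disagree with the parent mean, so flat regions cost few measurements."""
    n = scene.shape[0]                       # n must be a power of two
    recon = np.zeros((n, n))
    root = measure(scene, np.ones((n, n), dtype=bool)) / (n * n)
    count = 1
    stack = [(0, 0, n, root)]                # (row, col, size, known mean)
    while stack:
        r, c, s, mean = stack.pop()
        if s == 1:
            recon[r, c] = mean
            continue
        h = s // 2
        quads = [(r, c), (r, c + h), (r + h, c), (r + h, c + h)]
        qmeans = []
        for qr, qc in quads:
            m = np.zeros((n, n), dtype=bool)
            m[qr:qr + h, qc:qc + h] = True
            qmeans.append(measure(scene, m) / (h * h))
            count += 1
        if max(abs(q - mean) for q in qmeans) > thresh:
            for (qr, qc), q in zip(quads, qmeans):   # refine: descend
                stack.append((qr, qc, h, q))
        else:                                        # flat: stop here
            for (qr, qc), q in zip(quads, qmeans):
                recon[qr:qr + h, qc:qc + h] = q
    return recon, count
```

On a piecewise-flat scene this recovers the image exactly while taking fewer measurements than a raster scan, which is the point of driving the sampling with the data already acquired.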

  9. High-resolution adaptive imaging with a single photodiode

    NASA Astrophysics Data System (ADS)

    Soldevila, F.; Salvador-Balaguer, E.; Clemente, P.; Tajahuerce, E.; Lancis, J.

    2015-09-01

During the past few years, the emergence of spatial light modulators operating at tens of kHz has enabled new imaging modalities based on single-pixel photodetectors. The nature of single-pixel imaging enforces a reciprocal relationship between frame rate and image size. Compressive imaging methods allow images to be reconstructed from a number of projections that is only a fraction of the number of pixels. In microscopy, single-pixel imaging is capable of producing images with a moderate size of 128 × 128 pixels at frame rates under one Hz. Recently, there has been considerable interest in the development of advanced techniques for high-resolution real-time operation in applications such as biological microscopy. Here, we introduce an adaptive compressive technique based on wavelet trees within this framework. In our adaptive approach, the resolution of the projecting patterns remains deliberately small, which is crucial to avoid the demanding memory requirements of compressive sensing algorithms. At pattern projection rates of 22.7 kHz, our technique would make it possible to obtain 128 × 128 pixel images at frame rates around 3 Hz. In our experiments, we have demonstrated a cost-effective solution employing a commercial projection display.

  10. An image adaptive, wavelet-based watermarking of digital images

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

In digital management, multimedia content and data can easily be used illegally--copied, modified and redistributed. Copyright protection, protection of the intellectual and material rights of authors, owners, buyers and distributors, and the authenticity of content are crucial factors in solving this urgent and real problem. In this scenario, digital watermarking techniques are emerging as a valid solution. In this paper, we describe an algorithm--called WM2.0--for an invisible watermark: private, strong, wavelet-based and developed for the protection and authentication of digital images. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency characteristics and its close match with the human visual system. These two elements combined are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into the high-frequency DWT components of a specific sub-image, and it is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and the watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the algorithm to be resistant to geometric, filtering and StirMark attacks, with a low false-alarm rate.
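The embed/detect cycle of a wavelet-domain watermark can be illustrated with a one-level Haar DWT and a correlation detector. This is a generic sketch of the scheme's ingredients, not the WM2.0 algorithm itself: the subband choice, the strength `alpha` and the detection threshold are all illustrative assumptions.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    return ((a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2)

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d (square images)."""
    n = LL.shape[0] * 2
    a = np.empty((n // 2, n)); d = np.empty((n // 2, n))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    img = np.empty((n, n))
    img[0::2, :] = a + d; img[1::2, :] = a - d
    return img

def embed(img, key, alpha=10.0):
    """Add a key-dependent +/-1 pattern to the high-frequency HH subband."""
    LL, LH, HL, HH = haar2d(img)
    wm = np.random.default_rng(key).choice([-1.0, 1.0], size=HH.shape)
    return ihaar2d(LL, LH, HL, HH + alpha * wm)

def detect(img, key, alpha=10.0):
    """Correlate the HH subband with the key's pattern; threshold alpha/2."""
    HH = haar2d(img)[3]
    wm = np.random.default_rng(key).choice([-1.0, 1.0], size=HH.shape)
    return float((HH * wm).mean()) > alpha / 2
```

Only the holder of the key can regenerate the pattern, so detection succeeds with the right key and fails both on the wrong key and on an unmarked image.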

  11. eXtreme Adaptive Optics Planet Imager: Overview and status

    SciTech Connect

    Macintosh, B A; Bauman, B; Evans, J W; Graham, J; Lockwood, C; Poyneer, L; Dillon, D; Gavel, D; Green, J; Lloyd, J; Makidon, R; Olivier, S; Palmer, D; Perrin, M; Severson, S; Sheinis, A; Sivaramakrishnan, A; Sommargren, G; Soumer, R; Troy, M; Wallace, K; Wishnow, E

    2004-08-18

As adaptive optics (AO) matures, it becomes possible to envision AO systems oriented towards specific important scientific goals rather than general-purpose systems. One such goal for the next decade is the direct imaging detection of extrasolar planets. An 'extreme' adaptive optics (ExAO) system optimized for extrasolar planet detection will have very high actuator counts and rapid update rates - designed for observations of bright stars - and will require exquisite internal calibration at the nanometer level. In addition to extrasolar planet detection, such a system will be capable of characterizing dust disks around young or mature stars, outflows from evolved stars, and high Strehl ratio imaging even at visible wavelengths. The NSF Center for Adaptive Optics has carried out a detailed conceptual design study for such an instrument, dubbed the eXtreme Adaptive Optics Planet Imager or XAOPI. XAOPI is a 4096-actuator AO system, notionally for the Keck telescope, capable of achieving contrast ratios >10^7 at angular separations of 0.2-1'. ExAO system performance analysis is quite different than conventional AO systems - the spatial and temporal frequency content of wavefront error sources is as critical as their magnitude. We present here an overview of the XAOPI project, and an error budget highlighting the key areas determining achievable contrast. The most challenging requirement is for residual static errors to be less than 2 nm over the controlled range of spatial frequencies. If this can be achieved, direct imaging of extrasolar planets will be feasible within this decade.

  12. Multispectral image processing: the nature factor

    NASA Astrophysics Data System (ADS)

    Watkins, Wendell R.

    1998-09-01

The images processed by our brain represent our window into the world. For some animals this window is derived from a single eye; for others, including humans, two eyes provide stereo imagery; for others, like the black widow spider, several eyes are used (eight); and some insects, like the common housefly, utilize thousands of eyes (ommatidia). Still other animals, like the bat and dolphin, have eyes for regular vision but employ acoustic sonar for seeing where their regular eyes don't work, such as in pitch-black caves or turbid water. Of course, other animals have adapted to dark environments by bringing along their own lighting, such as the firefly and several creatures from the depths of the ocean floor. Animal vision is truly varied and has developed over millennia in many remarkable ways. We have learned a lot about vision processes by studying these animal systems and can still learn even more.

  13. An adaptive algorithm for motion compensated color image coding

    NASA Technical Reports Server (NTRS)

    Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming

    1987-01-01

This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate, and a variable-stage search is conducted to save computation. The adaptive algorithm is compared with the nonadaptive algorithm; with approximately 60 percent savings in computing the motion vectors and 33 percent additional compression, its performance is similar to that of the nonadaptive algorithm. The adaptive algorithm also shows an improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.
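Block-based motion estimation with a staged search is the core operation such coders save computation on. The sketch below is a generic three-step search, a standard illustration of the coarse-to-fine idea rather than the paper's exact variable-stage algorithm; the frame data and block position are made up for the demo.

```python
import numpy as np

def sad(ref, block, r, c, dy, dx):
    """Sum of absolute differences against a displaced reference block."""
    s = block.shape[0]
    return float(np.abs(block - ref[r + dy:r + dy + s,
                                    c + dx:c + dx + s]).sum())

def three_step_search(ref, cur, r, c, size=8, step=4):
    """Coarse-to-fine motion search: test 9 displacements around the
    current best, then halve the step, instead of an exhaustive search."""
    block = cur[r:r + size, c:c + size]
    best = (0, 0)
    while step >= 1:
        cands = [(best[0] + dy * step, best[1] + dx * step)
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        best = min(cands, key=lambda v: sad(ref, block, r, c, *v))
        step //= 2
    return best

# Smooth test scene: a Gaussian blob shifted by (2, 3) pixels.
y, x = np.mgrid[0:32, 0:32]
ref = np.exp(-((y - 16.0) ** 2 + (x - 13.0) ** 2) / 50.0)
cur = np.roll(ref, (2, 3), axis=(0, 1))   # cur[y, x] = ref[y - 2, x - 3]

mv = three_step_search(ref, cur, 8, 8)
assert mv == (-2, -3)   # the block in cur matches ref displaced by (-2, -3)
```

With steps 4, 2, 1 the search evaluates 27 candidates instead of the 225 of a full +/-7 search, which is the kind of saving the abstract refers to.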

  14. Adaptation with a stabilized retinal image: effect of luminance and contrast.

    PubMed

    Olson, J D; Tulunay-Keesey, U; Saleh, B E

    1994-11-01

    The addition of a uniform increment of luminance (L) to a faded retinally-stabilized target results in the subjective reappearance of the image with contrast opposite to that of the target. This phenomenon, called apparent phase reversal (APR), reveals a nonlinear gain mechanism in the adaptation process. The magnitude of the threshold increment to elicit APR (Lapr) is a measure of the state of stabilized adaptation. In the experiments reported here, Lapr was studied as a function of background luminance (Lo) and contrast (m) of the adapting stimulus. It was found that Lapr increases with increasing Lo, but does not depend on m. The data are analyzed within the context of a previously proposed model of stabilized image fading consisting of a multiplicative inverse gain followed by a subtractive process. It was found that the addition of a contrast processing stage was required to account for the relationship between Lapr and m.

  15. Review of Medical Image Classification using the Adaptive Neuro-Fuzzy Inference System

    PubMed Central

    Hosseini, Monireh Sheikh; Zekri, Maryam

    2012-01-01

Image classification is an issue that utilizes image processing, pattern recognition and classification methods. Automatic medical image classification is a progressive area in image classification, and it is expected to develop further in the future; automatic diagnosis can assist pathologists by providing second opinions and reducing their workload. This paper reviews the application of the adaptive neuro-fuzzy inference system (ANFIS) as a classifier in medical image classification during the past 16 years. ANFIS is a fuzzy inference system (FIS) implemented in the framework of an adaptive fuzzy neural network. It combines the explicit knowledge representation of an FIS with the learning power of artificial neural networks. The objective of ANFIS is to integrate the best features of fuzzy systems and neural networks. A brief comparison with other classifiers, together with the main advantages and drawbacks of ANFIS, is presented. PMID:23493054

  16. A systematic process for adaptive concept exploration

    NASA Astrophysics Data System (ADS)

    Nixon, Janel Nicole

    several common challenges to the creation of quantitative modeling and simulation environments. Namely, a greater number of alternative solutions imply a greater number of design variables as well as larger ranges on those variables. This translates to a high-dimension combinatorial problem. As the size and dimensionality of the solution space gets larger, the number of physically impossible solutions within that space greatly increases. Thus, the ratio of feasible design space to infeasible space decreases, making it much harder to not only obtain a good quantitative sample of the space, but to also make sense of that data. This is especially the case in the early stages of design, where it is not practical to dedicate a great deal of resources to performing thorough, high-fidelity analyses on all the potential solutions. To make quantitative analyses feasible in these early stages of design, a method is needed that allows for a relatively sparse set of information to be collected quickly and efficiently, and yet, that information needs to be meaningful enough with which to base a decision. The method developed to address this need uses a Systematic Process for Adaptive Concept Exploration (SPACE). In the SPACE method, design space exploration occurs in a sequential fashion; as data is acquired, the sampling scheme adapts to the specific problem at hand. Previously gathered data is used to make inferences about the nature of the problem so that future samples can be taken from the more interesting portions of the design space. Furthermore, the SPACE method identifies those analyses that have significant impacts on the relationships being modeled, so that effort can be focused on acquiring only the most pertinent information. The SPACE method uses a four-part sampling scheme to efficiently uncover the parametric relationships between the design variables and responses. 
Step 1 aims to identify the location of infeasible space within the region of interest using an initial
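The adapt-as-you-sample loop described above can be illustrated with a 1-D toy: each new sample goes where the current surrogate model is least trustworthy, so effort concentrates in the interesting part of the space. This is a generic sketch of sequential adaptive sampling, not the SPACE method's actual four-part scheme; the test function and tolerance are illustrative.

```python
import numpy as np

def adaptive_refine(f, lo, hi, tol=1e-3, max_depth=12):
    """Recursively bisect an interval until a straight line through its
    endpoints predicts the midpoint response to within tol. Sampling
    automatically concentrates where the response varies rapidly."""
    pts = {lo: f(lo), hi: f(hi)}

    def recurse(x0, x1, depth):
        xm = 0.5 * (x0 + x1)
        pts[xm] = f(xm)                     # one new "analysis run"
        if depth < max_depth and abs(pts[xm] - 0.5 * (pts[x0] + pts[x1])) > tol:
            recurse(x0, xm, depth + 1)
            recurse(xm, x1, depth + 1)

    recurse(lo, hi, 0)
    xs = np.array(sorted(pts))
    return xs, np.array([pts[x] for x in xs])

# Sharp feature near x = 0.7: samples should cluster there.
f = lambda x: np.tanh(40.0 * (x - 0.7))
xs, ys = adaptive_refine(f, 0.0, 1.0)
assert np.sum(np.abs(xs - 0.7) < 0.1) > np.sum(np.abs(xs - 0.7) >= 0.1)
```

The flat regions terminate after a handful of samples while the transition region is refined deeply, mirroring the idea of using previously gathered data to decide where to sample next.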

  17. Digital adaptive optics line-scanning confocal imaging system.

    PubMed

    Liu, Changgeng; Kim, Myung K

    2015-01-01

    A digital adaptive optics line-scanning confocal imaging (DAOLCI) system is proposed by applying digital holographic adaptive optics to a digital form of line-scanning confocal imaging system. In DAOLCI, each line scan is recorded by a digital hologram, which allows access to the complex optical field from one slice of the sample through digital holography. This complex optical field contains both the information of one slice of the sample and the optical aberration of the system, thus allowing us to compensate for the effect of the optical aberration, which can be sensed by a complex guide star hologram. After numerical aberration compensation, the corrected optical fields of a sequence of line scans are stitched into the final corrected confocal image. In DAOLCI, a numerical slit is applied to realize the confocality at the sensor end. The width of this slit can be adjusted to control the image contrast and speckle noise for scattering samples. DAOLCI dispenses with the hardware pieces, such as Shack–Hartmann wavefront sensor and deformable mirror, and the closed-loop feedbacks adopted in the conventional adaptive optics confocal imaging system, thus reducing the optomechanical complexity and cost. Numerical simulations and proof-of-principle experiments are presented that demonstrate the feasibility of this idea.
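The numerical compensation step has a simple core: if a guide-star hologram yields the system aberration as a complex field, multiplying each recorded line-scan field by the conjugate of that phase removes the aberration. A minimal 1-D sketch follows (illustrative only, not the authors' full reconstruction chain; the aberration polynomial is made up):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 256
x = np.linspace(-1, 1, n)

# Complex field from one line scan of the sample (unknown to the system).
sample_field = rng.normal(size=n) + 1j * rng.normal(size=n)

# Smooth unknown phase aberration (defocus- and coma-like terms).
phi = 3.0 * x**2 + 1.5 * x**3
aberration = np.exp(1j * phi)

# What the digital hologram of the line scan delivers: the aberrated field.
recorded = sample_field * aberration

# A guide star (point source) sees only the aberration, so its hologram
# provides the correction phase directly.
guide_star = aberration.copy()

# Numerical compensation: conjugate-phase multiplication.
corrected = recorded * np.conj(guide_star)
assert np.allclose(corrected, sample_field)
```

Because the aberration is unit-magnitude phase, the conjugate multiplication restores the sample field exactly; this is why access to the complex field, rather than intensity alone, lets the correction be done in software instead of with a deformable mirror.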

  18. Adaptive optics with pupil tracking for high resolution retinal imaging.

    PubMed

    Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

    2012-02-01

    Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics.

  19. Digital adaptive optics line-scanning confocal imaging system

    PubMed Central

    Liu, Changgeng; Kim, Myung K.

    2015-01-01

    Abstract. A digital adaptive optics line-scanning confocal imaging (DAOLCI) system is proposed by applying digital holographic adaptive optics to a digital form of line-scanning confocal imaging system. In DAOLCI, each line scan is recorded by a digital hologram, which allows access to the complex optical field from one slice of the sample through digital holography. This complex optical field contains both the information of one slice of the sample and the optical aberration of the system, thus allowing us to compensate for the effect of the optical aberration, which can be sensed by a complex guide star hologram. After numerical aberration compensation, the corrected optical fields of a sequence of line scans are stitched into the final corrected confocal image. In DAOLCI, a numerical slit is applied to realize the confocality at the sensor end. The width of this slit can be adjusted to control the image contrast and speckle noise for scattering samples. DAOLCI dispenses with the hardware pieces, such as Shack–Hartmann wavefront sensor and deformable mirror, and the closed-loop feedbacks adopted in the conventional adaptive optics confocal imaging system, thus reducing the optomechanical complexity and cost. Numerical simulations and proof-of-principle experiments are presented that demonstrate the feasibility of this idea. PMID:26140334

  1. Digital adaptive optics line-scanning confocal imaging system

    NASA Astrophysics Data System (ADS)

    Liu, Changgeng; Kim, Myung K.

    2015-11-01

    A digital adaptive optics line-scanning confocal imaging (DAOLCI) system is proposed by applying digital holographic adaptive optics to a digital form of line-scanning confocal imaging system. In DAOLCI, each line scan is recorded by a digital hologram, which allows access to the complex optical field from one slice of the sample through digital holography. This complex optical field contains both the information of one slice of the sample and the optical aberration of the system, thus allowing us to compensate for the effect of the optical aberration, which can be sensed by a complex guide star hologram. After numerical aberration compensation, the corrected optical fields of a sequence of line scans are stitched into the final corrected confocal image. In DAOLCI, a numerical slit is applied to realize the confocality at the sensor end. The width of this slit can be adjusted to control the image contrast and speckle noise for scattering samples. DAOLCI dispenses with the hardware pieces, such as Shack-Hartmann wavefront sensor and deformable mirror, and the closed-loop feedbacks adopted in the conventional adaptive optics confocal imaging system, thus reducing the optomechanical complexity and cost. Numerical simulations and proof-of-principle experiments are presented that demonstrate the feasibility of this idea.

  2. Precision Imaging with Adaptive Optics Aperture Masking Interferometry

    NASA Astrophysics Data System (ADS)

    Martinache, F.; Lloyd, J. P.; Tuthill, P.; Woodruff, H. C.; ten Brummelaar, T.; Turner, N.

    2005-12-01

Adaptive optics (AO) enables sensitive diffraction-limited imaging from the ground on large telescopes. Much of the promise of AO has yet to be fully realised, owing to the difficulties imposed by the complicated, unstable and unknown PSF. At the highest resolutions (inside the PSF), AO has yet to demonstrate its full potential for improvement over speckle techniques. The most precise astronomical speckle imaging observations have resulted from non-redundant pupil masking. We are developing a technique to solve the problem of PSF characterization in AO imaging by combining the image-reconstruction heritage of sparse-pupil astronomical interferometry with the long coherence times available after AO correction. Masking the output pupil of the AO system with a non-redundant array can provide self-calibrated imaging. Further calibration of the MTF can be provided by AO wavefront sensor telemetry data. With a precisely calibrated PSF, reliable, well-posed deconvolution is possible. High-SNR data and accurate MTF calibration, provided by the combination of non-redundant masking and AO system telemetry, allow super-resolution. AEOS provides a unique capability to explore the dynamic range and imaging precision of this technique at visible wavelengths. The NSF/AFOSR program has funded an instrument to explore these new imaging techniques at AEOS. ZOR/AO (Zero Optical Redundance with Adaptive Optics) is presently under construction, to be deployed at AEOS in 2005.

  3. Streak image denoising and segmentation using adaptive Gaussian guided filter.

    PubMed

    Jiang, Zhuocheng; Guo, Baoping

    2014-09-10

    In streak tube imaging lidar (STIL), streak images are obtained using a CCD camera. However, noise in the captured streak images can greatly affect the quality of reconstructed 3D contrast and range images. The greatest challenge for streak image denoising is reducing the noise while preserving details. In this paper, we propose an adaptive Gaussian guided filter (AGGF) for noise removal and detail enhancement of streak images. The proposed algorithm is based on a guided filter (GF) and part of an adaptive bilateral filter (ABF). In the AGGF, the details are enhanced by optimizing the offset parameter. AGGF-denoised streak images are significantly sharper than those denoised by the GF. Moreover, the AGGF is a fast linear time algorithm achieved by recursively implementing a Gaussian filter kernel. Experimentally, AGGF demonstrates its capacity to preserve edges and thin structures and outperforms the existing bilateral filter and domain transform filter in terms of both visual quality and peak signal-to-noise ratio performance.
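The offset optimization that makes the AGGF is specific to the paper, but the guided filter (GF) it builds on is standard and compact. Below is a plain numpy sketch of that baseline GF, using the image as its own guide for edge-preserving smoothing; the window radius and regularization `eps` are illustrative choices.

```python
import numpy as np

def boxsum(a, r):
    """Sum over a (2r+1)x(2r+1) window, shrinking at the borders."""
    def along(a, r):
        n = a.shape[0]
        c = np.cumsum(a, axis=0)
        out = np.empty_like(c)
        out[:r + 1] = c[r:2 * r + 1]
        out[r + 1:n - r] = c[2 * r + 1:] - c[:n - 2 * r - 1]
        out[n - r:] = c[-1] - c[n - 2 * r - 1:n - r - 1]
        return out
    return along(along(a.astype(float), r).T, r).T

def guided_filter(I, p, r=4, eps=1e-2):
    """Edge-preserving smoothing of p guided by I: within each window the
    output is the best local affine function a*I + b of the guide."""
    N = boxsum(np.ones_like(I, dtype=float), r)
    mI, mp = boxsum(I, r) / N, boxsum(p, r) / N
    cov = boxsum(I * p, r) / N - mI * mp
    var = boxsum(I * I, r) / N - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    return (boxsum(a, r) / N) * I + boxsum(b, r) / N
```

Like the AGGF described above, this runs in linear time regardless of window size, because the box sums are built from cumulative sums rather than explicit windows.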

  5. Objective assessment of image quality. IV. Application to adaptive optics

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher

    2008-01-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed. PMID:17106464
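The Hotelling observer named above is the optimal linear discriminant: its template is w = S⁻¹Δμ and its detectability is SNR² = Δμᵀ S⁻¹ Δμ. A small numerical sketch follows, on toy Gaussian image data with a made-up blob signal and correlation model, not the paper's adaptive-optics statistics:

```python
import numpy as np

def hotelling(cov, mean0, mean1):
    """Hotelling observer: optimal linear template and task detectability."""
    dmu = mean1 - mean0
    w = np.linalg.solve(cov, dmu)   # template  S^{-1} (mu1 - mu0)
    return w, float(dmu @ w)        # detectability  dmu^T S^{-1} dmu

# Toy detection task: a faint Gaussian blob in stationary correlated noise.
n = 16
y, x = np.mgrid[0:n, 0:n]
signal = 0.5 * np.exp(-((y - 8.0) ** 2 + (x - 8.0) ** 2) / 8.0).ravel()
coords = np.stack([y.ravel(), x.ravel()], axis=1).astype(float)
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
cov = np.exp(-dist / 2.0)           # exponential pixel-to-pixel correlation

w, snr2 = hotelling(cov, np.zeros(n * n), signal)

# Prewhitening can only help: the naive matched filter w = dmu is never
# better than the Hotelling template.
snr2_naive = float(signal @ signal) ** 2 / float(signal @ cov @ signal)
assert snr2 >= snr2_naive > 0
```

In the paper's setting the covariance S would carry the three contributions named in the abstract (measurement noise, random PSF, random scene); here it is a single stationary model for illustration.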

  6. Objective assessment of image quality. IV. Application to adaptive optics.

    PubMed

    Barrett, Harrison H; Myers, Kyle J; Devaney, Nicholas; Dainty, Christopher

    2006-12-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed.

  8. Adaptive mesh refinement for stochastic reaction-diffusion processes

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2011-01-01

    We present an algorithm for adaptive mesh refinement applied to mesoscopic stochastic simulations of spatially evolving reaction-diffusion processes. The transition rates for the diffusion process are derived on adaptive, locally refined structured meshes. Convergence of the diffusion process is presented and the fluctuations of the stochastic process are verified. Furthermore, a refinement criterion is proposed for the evolution of the adaptive mesh. The method is validated in simulations of reaction-diffusion processes as described by the Fisher-Kolmogorov and Gray-Scott equations.
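
    The mesh-dependent transition rates mentioned above can be sketched minimally: on a locally refined 1-D mesh, a molecule's hop propensity scales as D/h², so splitting a cell in two quadruples its jump rate. The function name and mesh values are illustrative, not the authors' code.

```python
# Illustrative sketch (not the paper's scheme): per-molecule diffusion
# jump propensities on a locally refined 1-D mesh scale as D / h^2,
# so transition rates must be recomputed wherever cells are split.

def jump_rates(widths, D=1.0):
    """Hop rate out of each cell of width widths[i]."""
    return [D / (h * h) for h in widths]

coarse = [0.1, 0.1, 0.1]
refined = [0.1, 0.05, 0.05, 0.1]   # middle cell split during refinement
rates = jump_rates(refined)        # refined cells hop ~4x faster
```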

  9. Augmenting synthetic aperture radar with space time adaptive processing

    NASA Astrophysics Data System (ADS)

    Riedl, Michael; Potter, Lee C.; Ertin, Emre

    2013-05-01

    Wide-area persistent radar video offers the ability to track moving targets. A shortcoming of the current technology is an inability to maintain track when Doppler shift places moving target returns co-located with strong clutter. Further, the high down-link data rate required for wide-area imaging presents a stringent system bottleneck. We present a multi-channel approach to augment the synthetic aperture radar (SAR) modality with space time adaptive processing (STAP) while constraining the down-link data rate to that of a single antenna SAR system. To this end, we adopt a multiple transmit, single receive (MISO) architecture. A frequency division design for orthogonal transmit waveforms is presented; the approach maintains coherence on clutter, achieves the maximal unaliased band of radial velocities, retains full resolution SAR images, and requires no increase in receiver data rate vis-a-vis the wide-area SAR modality. For Nt transmit antennas and N samples per pulse, the enhanced sensing provides a STAP capability with Nt times larger range bins than the SAR mode, at the cost of O(log N) more computations per pulse. The proposed MISO system and the associated signal processing are detailed, and the approach is numerically demonstrated via simulation of an airborne X-band system.

  10. Light sheet adaptive optics microscope for 3D live imaging

    NASA Astrophysics Data System (ADS)

    Bourgenot, C.; Taylor, J. M.; Saunter, C. D.; Girkin, J. M.; Love, G. D.

    2013-02-01

    We report on the incorporation of adaptive optics (AO) into the imaging arm of a selective plane illumination microscope (SPIM). SPIM has recently emerged as an important tool for life science research due to its ability to deliver high-speed, optically sectioned, time-lapse microscope images from deep within in vivo selected samples. SPIM provides a very interesting system for the incorporation of AO as the illumination and imaging paths are decoupled and AO may be useful in both paths. In this paper, we will report the use of AO applied to the imaging path of a SPIM, demonstrating significant improvement in image quality of a live GFP-labeled transgenic zebrafish embryo heart using a modal, wavefront sensorless approach and a heart synchronization method. These experimental results are linked to a computational model showing that significant aberrations are produced by the tube holding the sample in addition to the aberration from the biological sample itself.

  11. Imaging of retinal vasculature using adaptive optics SLO/OCT.

    PubMed

    Felberer, Franz; Rechenmacher, Matthias; Haindl, Richard; Baumann, Bernhard; Hitzenberger, Christoph K; Pircher, Michael

    2015-04-01

    We use our previously developed adaptive optics (AO) scanning laser ophthalmoscope (SLO)/optical coherence tomography (OCT) instrument to investigate its capability for imaging retinal vasculature. The system records SLO and OCT images simultaneously with a pixel-to-pixel correspondence, which allows a direct comparison between those imaging modalities. Different fields of view ranging from 0.8°×0.8° up to 4°×4° are supported by the instrument. In addition, a dynamic focus scheme was developed for the AO-SLO/OCT system in order to maintain the high transverse resolution throughout the imaging depth. The active axial eye tracking that is implemented in the OCT channel allows time resolved measurements of the retinal vasculature in the en-face imaging plane. Vessel walls and structures that we believe correspond to individual erythrocytes could be visualized with the system.

  12. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  13. Techniques for radar imaging using a wideband adaptive array

    NASA Astrophysics Data System (ADS)

    Curry, Mark Andrew

    A microwave imaging approach using a small, wideband adaptive array is simulated and validated experimentally. The experimental 12-element linear array and microwave receiver use stepped-frequency CW signals from 2-3 GHz and receive backscattered energy from short-range objects in a ±90° field of view. Discone antenna elements are used due to their wide temporal bandwidth, isotropic azimuth beam pattern, and fixed phase center. It is also shown that these antennas have very low mutual coupling, which significantly reduces the calibration requirements. The MUSIC spectrum is used as a calibration tool. Spatial resampling is used to correct the dispersion effects, which, if not compensated, cause severe reduction in detection and resolution at medium and large off-axis angles. Fourier processing provides range resolution, and the minimum variance spectral estimate is employed to resolve constant-range targets for improved angular resolution. Spatial smoothing techniques are used to generate signal-plus-interference covariance matrices at each range bin. Clutter affects the angular resolution of the array due to the increase in rank of the signal-plus-clutter covariance matrix, whereas at the same time the rank of this matrix is reduced for closely spaced scatterers due to signal coherence. A method is proposed to enhance angular resolution in the presence of clutter by an approximate signal subspace projection (ASSP) that maps the received signal space to a lower effective-rank approximation. This projection operator has a scalar control parameter that is a function of the signal and clutter amplitude estimates. These operations are accomplished without using eigendecomposition. The low sidelobe levels allow the imaging of the integrated backscattering from the absorber cones in the chamber. This creates a fairly large clutter signature for testing ASSP. We can easily resolve two dihedrals placed at about 70% of a beamwidth apart, with a signal to clutter ratio

  14. Bayer patterned high dynamic range image reconstruction using adaptive weighting function

    NASA Astrophysics Data System (ADS)

    Kang, Hee; Lee, Suk Ho; Song, Ki Sun; Kang, Moon Gi

    2014-12-01

    It is not easy to acquire a desired high dynamic range (HDR) image directly from a camera due to the limited dynamic range of most image sensors. Therefore, generally, a post-process called HDR image reconstruction is used, which reconstructs an HDR image from a set of differently exposed images to overcome the limited dynamic range. However, conventional HDR image reconstruction methods suffer from noise factors and ghost artifacts. This is due to the fact that the input images taken with a short exposure time contain much noise in the dark regions, which contributes to increased noise in the corresponding dark regions of the reconstructed HDR image. Furthermore, since input images are acquired at different times, the images contain different motion information, which results in ghost artifacts. In this paper, we propose an HDR image reconstruction method which reduces the impact of the noise factors and prevents ghost artifacts. To reduce the influence of the noise factors, the weighting function, which determines the contribution of a certain input image to the reconstructed HDR image, is designed to adapt to the exposure time and local motions. Furthermore, the weighting function is designed to exclude ghosting regions by considering the differences of the luminance and the chrominance values between several input images. Unlike conventional methods, which generally work on a color image processed by the image processing module (IPM), the proposed method works directly on the Bayer raw image. This allows for a linear camera response function and also improves the efficiency in hardware implementation. Experimental results show that the proposed method can reconstruct high-quality Bayer patterned HDR images while being robust against ghost artifacts and noise factors.
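
    The exposure-adaptive weighting idea can be sketched as a per-pixel weighted merge over an exposure stack, assuming a linear (Bayer raw) response as the abstract notes. The hat-shaped weight and the sample values are illustrative stand-ins, not the paper's actual weighting function.

```python
# Sketch of a weighted HDR merge over an exposure stack, assuming a
# linear (Bayer raw) response. The hat weight and sample values are
# illustrative, not the paper's adaptive weighting function.

def hat_weight(z, zmin=0.05, zmax=0.95):
    """Trust mid-range pixel values; zero weight near noise and saturation."""
    if z <= zmin or z >= zmax:
        return 0.0
    return 1.0 - abs(2.0 * z - 1.0)

def merge_pixel(values, exposures):
    """Weighted average of the radiance estimates z / t across exposures."""
    num = sum(hat_weight(z) * (z / t) for z, t in zip(values, exposures))
    den = sum(hat_weight(z) for z in values)
    return num / den if den > 0.0 else values[-1] / exposures[-1]

# A pixel seen at 0.2 in a 1x exposure and 0.8 in a 4x exposure
# yields consistent radiance estimates of 0.2.
radiance = merge_pixel([0.2, 0.8], [1.0, 4.0])
```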

  15. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
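
    Quantization by a per-coefficient matrix entry, as described above, reduces to element-wise division and rounding; this minimal sketch (with invented coefficient and matrix values) shows the quantize/dequantize round trip.

```python
# Minimal sketch of matrix quantization of DCT coefficients; the
# coefficient and quantization-matrix values are invented.

def quantize(coeffs, qmatrix):
    """Each DCT coefficient is divided by its matrix entry and rounded."""
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qmatrix)]

def dequantize(qcoeffs, qmatrix):
    """Decoder side: multiply back; the rounding error is the visual loss."""
    return [[v * q for v, q in zip(vrow, qrow)]
            for vrow, qrow in zip(qcoeffs, qmatrix)]

coeffs = [[16.0, 24.0], [8.0, 3.0]]
qmatrix = [[8, 16], [4, 4]]          # larger entries = coarser quantization
restored = dequantize(quantize(coeffs, qmatrix), qmatrix)
```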

  16. Incorporating Adaptive Local Information Into Fuzzy Clustering for Image Segmentation.

    PubMed

    Liu, Guoying; Zhang, Yun; Wang, Aimin

    2015-11-01

    Fuzzy c-means (FCM) clustering with spatial constraints has attracted great attention in the field of image segmentation. However, most of the popular techniques fail to resolve misclassification problems due to the inaccuracy of their spatial models. This paper presents a new unsupervised FCM-based image segmentation method by paying closer attention to the selection of local information. In this method, region-level local information is incorporated into the fuzzy clustering procedure to adaptively control the range and strength of interactive pixels. First, a novel dissimilarity function is established by combining region-based and pixel-based distance functions together, in order to enhance the relationship between pixels which have similar local characteristics. Second, a novel prior probability function is developed by integrating the differences between neighboring regions into the mean template of the fuzzy membership function, which adaptively selects local spatial constraints by a tradeoff weight depending upon whether a pixel belongs to a homogeneous region or not. Through incorporating region-based information into the spatial constraints, the proposed method strengthens the interactions between pixels within the same region and prevents over smoothing across region boundaries. Experimental results over synthetic noise images, natural color images, and synthetic aperture radar images show that the proposed method achieves more accurate segmentation results, compared with five state-of-the-art image segmentation methods.
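
    For context, the classical FCM membership update that this paper builds on can be sketched for scalar data; this is the standard baseline, not the proposed region-level variant.

```python
# The classical FCM membership update for scalar data; this is the
# baseline the paper extends with region-level local information.

def fcm_memberships(data, centers, m=2.0):
    """u[k][i] = 1 / sum_j (d_ik / d_jk)^(2/(m-1))."""
    memberships = []
    for x in data:
        dists = [abs(x - c) + 1e-12 for c in centers]   # avoid div-by-zero
        row = [1.0 / sum((di / dj) ** (2.0 / (m - 1.0)) for dj in dists)
               for di in dists]
        memberships.append(row)
    return memberships

u = fcm_memberships([0.1, 0.9, 0.5], [0.0, 1.0])
```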

  17. Image Super-Resolution via Adaptive Regularization and Sparse Representation.

    PubMed

    Cao, Feilong; Cai, Miaomiao; Tan, Yuanpeng; Zhao, Jianwei

    2016-07-01

    Previous studies have shown that image patches can be well represented as a sparse linear combination of elements from an appropriately selected over-complete dictionary. Recently, single-image super-resolution (SISR) via sparse representation using blurred and downsampled low-resolution images has attracted increasing interest, where the aim is to obtain the coefficients for sparse representation by solving an l0 or l1 norm optimization problem. The l0 optimization is a nonconvex and NP-hard problem, while the l1 optimization usually requires many more measurements and presents new challenges even when the image is of the usual size, so we propose a new approach for SISR recovery based on regularized nonconvex optimization. The proposed approach is potentially a powerful method for recovering SISR via sparse representations, and it can yield a sparser solution than the l1 regularization method. We also consider the best choice for lp regularization with all p in (0, 1), where we propose a scheme that adaptively selects the norm value for each image patch. In addition, we provide a method for estimating the best value of the regularization parameter λ adaptively, and we discuss an alternate iteration method for selecting p and λ. We perform experiments which demonstrate that the proposed regularized nonconvex optimization method can outperform the convex optimization method and generate higher-quality images.

  18. Adapting smartphones for low-cost optical medical imaging

    NASA Astrophysics Data System (ADS)

    Pratavieira, Sebastião.; Vollet-Filho, José D.; Carbinatto, Fernanda M.; Blanco, Kate; Inada, Natalia M.; Bagnato, Vanderlei S.; Kurachi, Cristina

    2015-06-01

    Optical images have been used in several medical situations to improve diagnosis of lesions or to monitor treatments. However, most systems employ expensive scientific (CCD or CMOS) cameras and need computers to display and save the images, usually resulting in a high final cost for the system. Additionally, this sort of apparatus operation usually becomes more complex, requiring more and more specialized technical knowledge from the operator. Currently, the number of people using smartphone-like devices with built-in high quality cameras is increasing, which might allow using such devices as an efficient, lower cost, portable imaging system for medical applications. Thus, we aim to develop methods of adaptation of those devices to optical medical imaging techniques, such as fluorescence. Particularly, smartphones covers were adapted to connect a smartphone-like device to widefield fluorescence imaging systems. These systems were used to detect lesions in different tissues, such as cervix and mouth/throat mucosa, and to monitor ALA-induced protoporphyrin-IX formation for photodynamic treatment of Cervical Intraepithelial Neoplasia. This approach may contribute significantly to low-cost, portable and simple clinical optical imaging collection.

  19. Optimal imaging with adaptive mesh refinement in electrical impedance tomography.

    PubMed

    Molinari, Marc; Blott, Barry H; Cox, Simon J; Daniell, Geoffrey J

    2002-02-01

    In non-linear electrical impedance tomography the goodness of fit of the trial images is assessed by the well-established statistical chi2 criterion applied to the measured and predicted datasets. Further selection from the range of images that fit the data is effected by imposing an explicit constraint on the form of the image, such as the minimization of the image gradients. In particular, the logarithm of the image gradients is chosen so that conductive and resistive deviations are treated in the same way. In this paper we introduce the idea of adaptive mesh refinement to the 2D problem so that the local scale of the mesh is always matched to the scale of the image structures. This improves the reconstruction resolution so that the image constraint adopted dominates and is not perturbed by the mesh discretization. The avoidance of unnecessary mesh elements optimizes the speed of reconstruction without degrading the resulting images. Starting with a mesh scale length of the order of the electrode separation it is shown that, for data obtained at presently achievable signal-to-noise ratios of 60 to 80 dB, one or two refinement stages are sufficient to generate high quality images.
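
    The two ingredients named in the abstract, the chi-squared data-fit criterion and a log-gradient image constraint that treats conductive and resistive deviations symmetrically, can be sketched on a 1-D conductivity profile (illustrative only, not the authors' 2D reconstruction code).

```python
import math

# Sketch of the two ingredients named in the abstract: the chi-squared
# data-fit criterion and a log-gradient image constraint that treats
# conductive and resistive deviations symmetrically (1-D, illustrative).

def chi_squared(measured, predicted, sigma):
    """Goodness of fit between measured and predicted voltage datasets."""
    return sum(((m - p) / sigma) ** 2 for m, p in zip(measured, predicted))

def log_gradient_penalty(conductivity):
    """Sum of squared gradients of log-conductivity along the profile."""
    logs = [math.log(c) for c in conductivity]
    return sum((b - a) ** 2 for a, b in zip(logs, logs[1:]))
```

    Note that doubling and halving the conductivity incur the same penalty, which is the reason the logarithm of the gradients is chosen.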

  20. Concurrent Image Processing Executive (CIPE)

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1988-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation, are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3/Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented.

  1. Optimization and application of Retinex algorithm in aerial image processing

    NASA Astrophysics Data System (ADS)

    Sun, Bo; He, Jun; Li, Hongyu

    2008-04-01

    In this paper, we provide a segmentation-based Retinex for improving the visual quality of aerial images obtained under complex weather conditions. With this method, an aerial image is first segmented into different regions, and an adaptive Gaussian based on the segmentation is then used to process it. The method addresses problems present in previously developed Retinex algorithms, such as halo artifacts and graying-out artifacts. Experimental results also demonstrate its improved visual effect.
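
    A plain single-scale Retinex step, the baseline that this segmentation-based method refines, can be sketched on a 1-D luminance profile; the paper's adaptive, per-region Gaussian is not reproduced here.

```python
import math

# Baseline single-scale Retinex on a 1-D luminance profile:
# R(x) = log I(x) - log (G_sigma * I)(x). The paper's adaptive,
# segmentation-driven Gaussian is not reproduced here.

def gaussian_kernel(sigma, radius):
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    total = sum(k)
    return [v / total for v in k]

def single_scale_retinex(signal, sigma=2.0, radius=4):
    kernel = gaussian_kernel(sigma, radius)
    n = len(signal)
    result = []
    for i in range(n):
        # replicate-pad the borders, then blur
        blur = sum(kernel[j + radius] * signal[min(max(i + j, 0), n - 1)]
                   for j in range(-radius, radius + 1))
        result.append(math.log(signal[i] + 1e-6) - math.log(blur + 1e-6))
    return result
```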

  2. Widefield multiphoton microscopy with image-based adaptive optics

    NASA Astrophysics Data System (ADS)

    Chang, C.-Y.; Cheng, L.-C.; Su, H.-W.; Yen, W.-C.; Chen, S.-J.

    2012-10-01

    Unlike conventional multiphoton microscopy based on pixel-by-pixel point scanning, a widefield multiphoton microscope based on spatiotemporal focusing has been developed to provide fast optical sectioning images at a frame rate over 100 Hz. In order to overcome the aberrations of the widefield multiphoton microscope and the wavefront distortion from turbid biospecimens, an image-based adaptive optics system (AOS) was integrated. The feedback control signal of the AOS was acquired by locally maximizing the intensity of the images provided by the widefield multiphoton excited microscope, using a hill-climbing algorithm. Then, the control signal was utilized to drive a deformable mirror in such a way as to eliminate the aberration and distortion. The two-photon excited fluorescence (TPEF) intensity of an R6G-doped PMMA thin film is also increased by 3.7-fold. Furthermore, the TPEF image quality of 1 μm fluorescent beads sealed in agarose gel at different depths is improved.

  3. New CCD imagers for adaptive optics wavefront sensors

    NASA Astrophysics Data System (ADS)

    Schuette, Daniel R.; Reich, Robert K.; Prigozhin, Ilya; Burke, Barry E.; Johnson, Robert

    2014-08-01

    We report on two recently developed charge-coupled devices (CCDs) for adaptive optics wavefront sensing, both designed to provide exceptional sensitivity (low noise and high quantum efficiency) in high-frame-rate low-latency readout applications. The first imager, the CCID75, is a back-illuminated 16-port 160×160-pixel CCD that has been demonstrated to operate at frame rates above 1,300 fps with noise of < 3 e-. We will describe the architecture of this CCD that enables this level of performance, present and discuss characterization data, and review additional design features that enable unique operating modes for adaptive optics wavefront sensing. We will also present an architectural overview and initial characterization data of a recently designed variation on the CCID75 architecture, the CCID82, which incorporates an electronic shutter to support adaptive optics using Rayleigh beacons.

  4. Adaptive Processes in Thalamus and Cortex Revealed by Silencing of Primary Visual Cortex during Contrast Adaptation.

    PubMed

    King, Jillian L; Lowe, Matthew P; Stover, Kurt R; Wong, Aimee A; Crowder, Nathan A

    2016-05-23

    Visual adaptation illusions indicate that our perception is influenced not only by the current stimulus but also by what we have seen in the recent past. Adaptation to stimulus contrast (the relative luminance created by edges or contours in a scene) induces the perception of the stimulus fading away and increases the contrast detection threshold in psychophysical tests [1, 2]. Neural correlates of contrast adaptation have been described throughout the visual system including the retina [3], dorsal lateral geniculate nucleus (dLGN) [4, 5], primary visual cortex (V1) [6], and parietal cortex [7]. The apparent ubiquity of adaptation at all stages raises the question of how this process cascades across brain regions [8]. Focusing on V1, adaptation could be inherited from pre-cortical stages, arise from synaptic depression at the thalamo-cortical synapse [9], or develop locally, but what is the weighting of these contributions? Because contrast adaptation in mouse V1 is similar to classical animal models [10, 11], we took advantage of the optogenetic tools available in mice to disentangle the processes contributing to adaptation in V1. We disrupted cortical adaptation by optogenetically silencing V1 and found that adaptation measured in V1 now resembled that observed in dLGN. Thus, the majority of adaptation seen in V1 neurons arises through local activity-dependent processes, with smaller contributions from dLGN inheritance and synaptic depression at the thalamo-cortical synapse. Furthermore, modeling indicates that divisive scaling of the weakly adapted dLGN input can predict some of the emerging features of V1 adaptation.

  5. Adaptive sigmoid function bihistogram equalization for image contrast enhancement

    NASA Astrophysics Data System (ADS)

    Arriaga-Garcia, Edgar F.; Sanchez-Yanez, Raul E.; Ruiz-Pinales, Jose; Garcia-Hernandez, Ma. de Guadalupe

    2015-09-01

    Contrast enhancement plays a key role in a wide range of applications including consumer electronic applications, such as video surveillance, digital cameras, and televisions. The main goal of contrast enhancement is to increase the quality of images. However, most state-of-the-art methods induce different types of distortion such as intensity shift, wash-out, noise, intensity burn-out, and intensity saturation. In addition, in consumer electronics, simple and fast methods are required in order to be implemented in real time. A bihistogram equalization method based on adaptive sigmoid functions is proposed. It consists of splitting the image histogram into two parts that are equalized independently by using adaptive sigmoid functions. In order to preserve the mean brightness of the input image, the parameter of the sigmoid functions is chosen to minimize the absolute mean brightness metric. Experiments on the Berkeley database have shown that the proposed method improves the quality of images and preserves their mean brightness. An application to improve the colorfulness of images is also presented.
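
    The core recipe (split the histogram at the mean, map each half through a sigmoid, and pick the steepness that best preserves mean brightness) can be sketched as follows; the candidate steepness values and the range-preserving sigmoid normalization are illustrative assumptions, not the authors' parametrization.

```python
import math

# Sketch of sigmoid bi-histogram equalization: split at the mean, map
# each half through a range-preserving sigmoid, and choose the steepness
# minimizing the change in mean brightness. The candidate steepness
# values and the normalization are illustrative assumptions.

def sigmoid_map(x, lo, hi, a):
    """Monotonic sigmoid remap of [lo, hi] onto itself with steepness a."""
    t = (x - lo) / (hi - lo + 1e-12)
    s = 1.0 / (1.0 + math.exp(-a * (t - 0.5)))
    s0 = 1.0 / (1.0 + math.exp(a * 0.5))    # value at t = 0
    s1 = 1.0 / (1.0 + math.exp(-a * 0.5))   # value at t = 1
    return lo + (hi - lo) * (s - s0) / (s1 - s0)

def enhance(pixels, candidates=(2.0, 4.0, 6.0, 8.0)):
    mean = sum(pixels) / len(pixels)
    best, best_err = None, float("inf")
    for a in candidates:
        out = [sigmoid_map(p, 0.0, mean, a) if p <= mean
               else sigmoid_map(p, mean, 255.0, a) for p in pixels]
        err = abs(sum(out) / len(out) - mean)   # AMBE-style criterion
        if err < best_err:
            best, best_err = out, err
    return best
```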

  6. Adaptive Robotic Welding Using A Rapid Image Pre-Processor

    NASA Astrophysics Data System (ADS)

    Dufour, M.; Begin, G.

    1984-02-01

    The rapid pre-processor initially developed by NRCC and Leigh Instruments Inc. as part of the visual aid system of the space shuttle arm 1 has been adapted to perform real time seam tracking of multipass butt weld and other adaptive welding functions. The weld preparation profile is first enhanced by a projected laser target formed by a line and dots. A standard TV camera is used to observe the target image at an angle. Displacement and distorsion of the target image on a monitor are simple functions of the preparation surface distance and shape respectively. Using the video signal, the pre-processor computes in real time the area and first moments of the white level figure contained within four independent rectangular windows in the field of view of the camera. The shape, size, and position of each window can be changed dynamically for each successive image at the standard 30 images/sec rate, in order to track some target image singularities. Visual sensing and welding are done simultaneously. As an example, it is shown that thin sheet metal welding can be automated using a single window for seam tracking, gap width measurement and torch height estimation. Using a second window, measurement of sheet misalignment and their orientation in space were also achieved. The system can be used at welding speed of up to 1 m/min. Simplicity, speed and effectiveness are the main advantages of this system.

  7. Adaptive Constructive Processes and the Future of Memory

    ERIC Educational Resources Information Center

    Schacter, Daniel L.

    2012-01-01

    Memory serves critical functions in everyday life but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, and illusions. The article describes several types of memory errors that are produced by adaptive constructive processes…

  8. Adaptive Noise Suppression Using Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Kozel, David; Nelson, Richard

    1996-01-01

    A signal-to-noise-ratio-dependent adaptive spectral subtraction algorithm is developed to eliminate noise from noise-corrupted speech signals. The algorithm determines the signal-to-noise ratio and adjusts the spectral subtraction proportion appropriately. After spectral subtraction, low-amplitude signals are squelched. A single microphone is used to obtain both the noise-corrupted speech and the average noise estimate. This is done by determining whether the frame of data being sampled is a voiced or unvoiced frame. During unvoiced frames, an estimate of the noise is obtained. A running average of the noise is used to approximate the expected value of the noise. Applications include the emergency egress vehicle and the crawler transporter.
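
    The SNR-dependent subtraction proportion and the low-amplitude squelch can be sketched per frame as follows; the over-subtraction schedule and floor value are illustrative assumptions, not the authors' constants.

```python
import math

# Per-frame sketch of SNR-dependent spectral subtraction on magnitude
# spectra. The over-subtraction schedule and squelch floor are
# illustrative assumptions, not the authors' constants.

def spectral_subtract(speech_mag, noise_mag, floor=0.02):
    eps = 1e-12
    speech_power = sum(s * s for s in speech_mag)
    noise_power = sum(n * n for n in noise_mag) + eps
    snr_db = 10.0 * math.log10(speech_power / noise_power + eps)
    # subtract more aggressively when the frame SNR is poor
    alpha = max(1.0, 4.0 - 0.15 * snr_db)
    out = []
    for s, n in zip(speech_mag, noise_mag):
        v = s - alpha * n
        out.append(v if v > floor * s else floor * s)   # squelch residuals
    return out
```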

  9. In vivo imaging of human photoreceptor mosaic with wavefront sensorless adaptive optics optical coherence tomography.

    PubMed

    Wong, Kevin S K; Jian, Yifan; Cua, Michelle; Bonora, Stefano; Zawadzki, Robert J; Sarunic, Marinko V

    2015-02-01

    Wavefront sensorless adaptive optics optical coherence tomography (WSAO-OCT) is a novel imaging technique for in vivo high-resolution depth-resolved imaging that mitigates some of the challenges encountered with the use of sensor-based adaptive optics designs. This technique replaces the Hartmann Shack wavefront sensor used to measure aberrations with a depth-resolved image-driven optimization algorithm, with the metric based on the OCT volumes acquired in real-time. The custom-built ultrahigh-speed GPU processing platform and fast modal optimization algorithm presented in this paper was essential in enabling real-time, in vivo imaging of human retinas with wavefront sensorless AO correction. WSAO-OCT is especially advantageous for developing a clinical high-resolution retinal imaging system as it enables the use of a compact, low-cost and robust lens-based adaptive optics design. In this report, we describe our WSAO-OCT system for imaging the human photoreceptor mosaic in vivo. We validated our system performance by imaging the retina at several eccentricities, and demonstrated the improvement in photoreceptor visibility with WSAO compensation. PMID:25780747
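
    The sensorless, metric-driven modal optimization can be sketched with a generic coordinate-wise hill climb; the toy metric below is an invented stand-in for the OCT-volume image-quality metric and is purely illustrative.

```python
# Generic coordinate-wise hill climb standing in for the sensorless,
# image-metric-driven modal optimization; the toy metric below is an
# invented stand-in for an OCT image-quality metric.

def hill_climb(metric, coeffs, step=0.1, iters=20):
    """Perturb one modal coefficient at a time; keep improving moves."""
    best = metric(coeffs)
    for _ in range(iters):
        for i in range(len(coeffs)):
            for delta in (step, -step):
                trial = list(coeffs)
                trial[i] += delta
                value = metric(trial)
                if value > best:
                    coeffs, best = trial, value
    return coeffs, best

# Toy metric peaking at coefficients (0.3, -0.2).
toy_metric = lambda c: -((c[0] - 0.3) ** 2 + (c[1] + 0.2) ** 2)
final, value = hill_climb(toy_metric, [0.0, 0.0])
```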

  11. Image enhancement based on gamma map processing

    NASA Astrophysics Data System (ADS)

    Tseng, Chen-Yu; Wang, Sheng-Jyh; Chen, Yi-An

    2010-05-01

    This paper proposes a novel image enhancement technique based on Gamma Map Processing (GMP). In this approach, a base gamma map is directly generated according to the intensity image. After that, a sequence of gamma map processing steps is performed to generate a channel-wise gamma map. By mapping through the estimated gamma maps, the image details, colorfulness, and sharpness of the original image are automatically improved. In addition, the dynamic range of the images can be virtually expanded.
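
    Applying a per-pixel gamma map, the core operation of GMP, can be sketched as follows; the base-map formula below is an invented illustration, whereas the paper derives its map from the intensity image.

```python
# Core GMP operation: per-pixel gamma mapping of a normalized intensity
# image. The base-map formula below is an invented illustration; the
# paper derives its gamma map from the intensity image itself.

def base_gamma(intensity, lo=0.5, hi=1.5):
    """Illustrative base map: gamma < 1 (brightening) for dark pixels."""
    return [lo + (hi - lo) * i for i in intensity]

def apply_gamma_map(intensity, gamma_map):
    """out = in ** gamma, per pixel, with intensities in [0, 1]."""
    return [i ** g for i, g in zip(intensity, gamma_map)]

pixels = [0.1, 0.25, 0.5, 0.9]
enhanced = apply_gamma_map(pixels, base_gamma(pixels))
```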

  12. Adaptive Memory: Is Survival Processing Special?

    ERIC Educational Resources Information Center

    Nairne, James S.; Pandeirada, Josefa N. S.

    2008-01-01

    Do the operating characteristics of memory continue to bear the imprints of ancestral selection pressures? Previous work in our laboratory has shown that human memory may be specially tuned to retain information processed in terms of its survival relevance. A few seconds of survival processing in an incidental learning context can produce recall…

  13. Integration of AdaptiSPECT, a small-animal adaptive SPECT imaging system

    PubMed Central

    Chaix, Cécile; Kovalsky, Stephen; Kosmider, Matthew; Barrett, Harrison H.; Furenlid, Lars R.

    2015-01-01

    AdaptiSPECT is a pre-clinical adaptive SPECT imaging system under final development at the Center for Gamma-ray Imaging. The system incorporates multiple adaptive features: an adaptive aperture, 16 detectors mounted on translational stages, and the ability to switch between a non-multiplexed and a multiplexed imaging configuration. In this paper, we review the design of AdaptiSPECT and its adaptive features. We then describe the on-going integration of the imaging system. PMID:26347197

  14. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  15. Multimodal Medical Image Fusion by Adaptive Manifold Filter.

    PubMed

    Geng, Peng; Liu, Shuaiqi; Zhuang, Shanna

    2015-01-01

    Medical image fusion plays an important role in the diagnosis and treatment of diseases such as image-guided radiotherapy and surgery. A modified local-contrast measure is proposed to fuse multimodal medical images. First, the adaptive manifold filter is applied to the source images to produce the low-frequency part of the modified local contrast. Second, the modified spatial frequency of the source images is adopted as the high-frequency part. Finally, at each location, the pixel with the larger modified local contrast is selected for the fused image. The presented scheme outperforms the guided-filter method in the spatial domain, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, the mutual information values obtained by the presented method are on average 55%, 41%, and 62% higher than those of the three transform-based methods, and its edge-based similarity measure values are on average 13%, 33%, and 14% higher, for the six pairs of source images. PMID:26664494
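
    The choose-max fusion rule can be sketched as follows. The local-contrast measure here is a deliberate simplification: a box-filter local mean stands in for the adaptive manifold filter, and the modified spatial-frequency term of the paper is omitted.

```python
import numpy as np

def local_contrast(img, k=3):
    """Stand-in local contrast: |pixel - local mean| / (local mean + eps).
    The paper's measure uses an adaptive manifold filter for the low-frequency
    part; a box filter is used here purely for illustration."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    mean = win.mean(axis=(2, 3))
    return np.abs(img - mean) / (mean + 1e-6)

def fuse(a, b):
    """Choose, per pixel, the source with the larger local contrast."""
    mask = local_contrast(a) >= local_contrast(b)
    return np.where(mask, a, b)

a = np.full((16, 16), 0.5)      # featureless source
b = a.copy()
b[8, 8] = 1.0                   # source with a salient detail
fused = fuse(a, b)
```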

  16. Adaptive SPECT imaging with crossed-slit apertures

    PubMed Central

    Durko, Heather L.; Furenlid, Lars R.

    2015-01-01

    Preclinical single-photon emission computed tomography (SPECT) is an essential tool for studying the progression, response to treatment, and physiological changes in small animal models of human disease. The wide range of imaging applications is often limited by the static design of many preclinical SPECT systems. We have developed a prototype imaging system that replaces the standard static pinhole aperture with two sets of movable, keel-edged copper-tungsten blades configured as crossed (skewed) slits. These apertures can be positioned independently between the object and detector, producing a continuum of imaging configurations in which the axial and transaxial magnifications are not constrained to be equal. We incorporated a megapixel silicon double-sided strip detector to permit ultrahigh-resolution imaging. We describe the configuration of the adjustable slit aperture imaging system and discuss its application toward adaptive imaging, and reconstruction techniques using an accurate imaging forward model, a novel geometric calibration technique, and a GPU-based ultra-high-resolution reconstruction code. PMID:26190884

  17. Improvement in DMSA imaging using adaptive noise reduction: an ROC analysis.

    PubMed

    Lorimer, Lisa; Gemmell, Howard G; Sharp, Peter F; McKiddie, Fergus I; Staff, Roger T

    2012-11-01

    Dimercaptosuccinic acid imaging is the 'gold standard' for the detection of cortical defects and diagnosis of scarring of the kidneys. The Siemens planar processing package, which implements adaptive noise reduction using the Pixon algorithm, is designed to allow a reduction in image noise, enabling improved image quality and reduced acquisition time/injected activity. This study aimed to establish the level of improvement in image quality achievable using this algorithm. Images were acquired of a phantom simulating a single kidney with a range of defects of varying sizes, positions and contrasts. These images were processed using the Pixon processing software and shown to 12 observers (six experienced and six novices) who were asked to rate the images on a six-point scale depending on their confidence that a defect was present. The data were analysed using a receiver operating characteristic approach. The results showed that processed images significantly improved the performance of the experienced observers in terms of their sensitivity and specificity. Although novice observers showed a significant increase in sensitivity when using the software, a significant decrease in specificity was also seen. This study concludes that the Pixon software can be used to improve the assessment of cortical defects in dimercaptosuccinic acid imaging by suitably trained observers.

  18. Assessment of vessel diameters for MR brain angiography processed images

    NASA Astrophysics Data System (ADS)

    Moraru, Luminita; Obreja, Cristian-Dragos; Moldovanu, Simona

    2015-12-01

    The motivation was to develop an assessment method to measure (in)visible differences between the original and the processed images in MR brain angiography as a way of evaluating the status of vessel segments (i.e. the existence of occlusions or damaged intracerebral vessels such as aneurysms). Because image quality is generally limited, we improve the performance of the evaluation through digital image processing. The goal is to determine the processing method that allows the most accurate assessment of patients with cerebrovascular diseases. A total of 10 MR brain angiography images were processed with the following techniques: histogram equalization, Wiener filtering, linear contrast adjustment, contrast-limited adaptive histogram equalization, bias correction and the Marr-Hildreth filter. Each original image and its processed versions were analyzed in a stacking procedure so that the same vessel and its corresponding diameter were measured. Original and processed images were evaluated by measuring the vessel diameter (in pixels) along an established direction and at a precise anatomic location. The vessel diameter was calculated using an ImageJ plugin. Mean diameter measurements differ significantly across the same segment and between processing techniques. The best results are provided by the Wiener filter and linear contrast adjustment methods, and the worst by the Marr-Hildreth filter.
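
    A generic way to measure a vessel diameter in pixels from an intensity profile taken across the vessel is the full-width-at-half-maximum criterion sketched below; the study itself used an ImageJ plugin, so this is only an illustration of the measurement principle.

```python
import numpy as np

def fwhm_diameter(profile):
    """Diameter (in pixels) of a bright vessel from a cross-sectional
    intensity profile, via the full-width-at-half-maximum criterion."""
    profile = np.asarray(profile, dtype=float)
    base = profile.min()
    half = base + (profile.max() - base) / 2.0
    above = np.where(profile >= half)[0]
    if above.size == 0:
        return 0.0
    return float(above[-1] - above[0] + 1)

# synthetic Gaussian cross-section, sigma = 2 px (FWHM ~ 2.355 * sigma)
x = np.arange(41)
prof = np.exp(-0.5 * ((x - 20) / 2.0) ** 2)
d = fwhm_diameter(prof)
```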

  19. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can benefit from the added image resolution provided by the enhancement.

  20. Adaptive color image watermarking based on the just noticeable distortion model in balanced multiwavelet domain

    NASA Astrophysics Data System (ADS)

    Zhang, Yuan; Ding, Yong

    2011-10-01

    In this paper, a novel adaptive color image watermarking scheme based on the just noticeable distortion (JND) model in balanced multiwavelet domain is proposed. The balanced multiwavelet transform can achieve orthogonality, symmetry, and high order of approximation simultaneously without requiring any input prefiltering, which makes it a good choice for image processing. According to the properties of the human visual system, a novel multiresolution JND model is proposed in balanced multiwavelet domain. This model incorporates the spatial contrast sensitivity function, the luminance adaptation effect, and the contrast masking effect via separating the sharp edge and the texture. Then, based on this model, the watermark is adaptively inserted into the most distortion tolerable locations of the luminance and chrominance components without introducing the perceivable distortions. Experimental results show that the proposed watermarking scheme is transparent and has a high robustness to various attacks such as low-pass filtering, noise attacking, JPEG and JPEG2000 compression.
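
    The idea of scaling the embedded watermark by a perceptual tolerance can be sketched in the spatial domain; the local standard deviation below is a crude stand-in for the paper's balanced-multiwavelet JND model, and the strength constant and non-blind correlation detector are assumptions of the sketch.

```python
import numpy as np

def embed(img, wm, strength=0.5, k=3):
    """Perceptually scaled additive watermarking (illustrative only):
    the local standard deviation acts as a crude masking estimate, so
    busy regions receive a stronger, less visible mark."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    jnd = win.std(axis=(2, 3)) + 0.05       # crude per-pixel tolerance map
    return img + strength * jnd * wm, jnd

def detect(marked, img, wm, jnd, strength=0.5):
    """Non-blind correlation detector using the original image and JND map."""
    resid = (marked - img) / (strength * jnd)
    return float(np.corrcoef(resid.ravel(), wm.ravel())[0, 1])

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, (32, 32))
wm = rng.choice([-1.0, 1.0], (32, 32))      # bipolar watermark
marked, jnd = embed(img, wm)
score = detect(marked, img, wm, jnd)        # genuine watermark
other = rng.choice([-1.0, 1.0], (32, 32))
score_wrong = detect(marked, img, other, jnd)
```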

  1. Fast Source Camera Identification Using Content Adaptive Guided Image Filter.

    PubMed

    Zeng, Hui; Kang, Xiangui

    2016-03-01

    Source camera identification (SCI) is an important topic in image forensics. One of the most effective fingerprints for linking an image to its source camera is the sensor pattern noise, which is estimated as the difference between the image and its denoised version. It is widely believed that the performance of sensor-based SCI relies heavily on the denoising filter used. This study proposes a novel sensor-based SCI method using a content adaptive guided image filter (CAGIF). Thanks to the low-complexity nature of the CAGIF, the proposed method is much faster than the state-of-the-art methods, which is a big advantage considering the potential real-time application of SCI. Despite the advantage in speed, experimental results also show that the proposed method can achieve comparable or better accuracy than the state-of-the-art methods. PMID:27404627
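
    The fingerprint principle, residual = image minus denoised image, can be sketched with a plain box filter standing in for the content adaptive guided image filter; the simulated sensor-noise strength and scene statistics below are arbitrary assumptions of the sketch.

```python
import numpy as np

def noise_residual(img, k=3):
    """Noise residual: image minus its denoised version. A box filter
    stands in here for the paper's content adaptive guided image filter."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return img - win.mean(axis=(2, 3))

def ncc(a, b):
    """Normalized cross-correlation between a residual and a fingerprint."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
prnu = rng.normal(0.0, 0.05, (64, 64))        # simulated sensor pattern noise
scene1 = rng.uniform(0.3, 0.7, (64, 64))
scene2 = rng.uniform(0.3, 0.7, (64, 64))
cam_img = scene1 * (1.0 + prnu)               # image carrying the fingerprint
other_img = scene2                            # image from a different "camera"
same = ncc(noise_residual(cam_img), prnu)
diff = ncc(noise_residual(other_img), prnu)
```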

  2. Applications Of Image Processing In Criminalistics

    NASA Astrophysics Data System (ADS)

    Krile, Thomas F.; Walkup, John F.; Barsallo, Adonis; Olimb, Hal; Tarng, Jaw-Horng

    1987-01-01

    A review of some basic image processing techniques for enhancement and restoration of images is given. Both digital and optical approaches are discussed. Fingerprint images are used as examples to illustrate the various processing techniques and their potential applications in criminalistics.

  3. Adaptive codebook selection schemes for image classification in correlated channels

    NASA Astrophysics Data System (ADS)

    Hu, Chia Chang; Liu, Xiang Lian; Liu, Kuan-Fu

    2015-09-01

    The multiple-input multiple-output (MIMO) system with the use of transmit and receive antenna arrays achieves diversity and array gains via transmit beamforming. In the absence of full channel state information (CSI) at the transmitter, the transmit beamforming vector can be quantized at the receiver and sent back to the transmitter over a low-rate feedback channel, called limited feedback beamforming. One of the key roles of vector quantization (VQ) is generating a good codebook such that the distortion between the original image and the reconstructed image is minimized. In this paper, a novel adaptive codebook selection scheme for image classification is proposed that takes both the spatial and the temporal correlation inherent in the channel into consideration. The new codebook selection algorithm selects two codebooks from among the discrete Fourier transform (DFT) codebook, the generalized Lloyd algorithm (GLA) codebook and the Grassmannian codebook, to be combined and used as candidates for transmitting the original and reconstructed images. The channel is estimated and divided into four regions based on its spatial and temporal correlation, and an appropriate codebook is assigned to each region. The proposed method can efficiently reduce the required feedback information under spatially and temporally correlated channels, where each region is adaptively assigned its own codebook. Simulation results show that in the case of temporally and spatially correlated channels, the bit-error-rate (BER) performance can be improved substantially by the proposed algorithm compared to the one with only a single codebook.
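
    Of the three candidate codebooks, the GLA one is the easiest to illustrate: the generalized Lloyd algorithm is a k-means-style alternation of nearest-codeword assignment and centroid update. The deterministic initialization and the toy 2-D data below are assumptions of the sketch, and the paper's per-region codebook switching is not modeled.

```python
import numpy as np

def gla_codebook(vectors, size, iters=20):
    """Generalized Lloyd algorithm codebook training (squared-error
    distortion): assign each vector to its nearest codeword, then move
    each codeword to the centroid of its assigned vectors."""
    # deterministic init: codewords spread across the training set
    code = vectors[np.linspace(0, len(vectors) - 1, size, dtype=int)].copy()
    idx = np.zeros(len(vectors), dtype=int)
    for _ in range(iters):
        d = ((vectors[:, None, :] - code[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)                   # nearest-codeword assignment
        for j in range(size):
            sel = vectors[idx == j]
            if len(sel):
                code[j] = sel.mean(0)       # centroid update
    return code, idx

rng = np.random.default_rng(1)
# two well-separated clusters of 2-D training vectors
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(3.0, 0.1, (50, 2))])
code, idx = gla_codebook(data, 2)
```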

  4. Breast image feature learning with adaptive deconvolutional networks

    NASA Astrophysics Data System (ADS)

    Jamieson, Andrew R.; Drukker, Karen; Giger, Maryellen L.

    2012-03-01

    Feature extraction is a critical component of medical image analysis. Many computer-aided diagnosis approaches employ hand-designed, heuristic lesion extracted features. An alternative approach is to learn features directly from images. In this preliminary study, we explored the use of Adaptive Deconvolutional Networks (ADN) for learning high-level features in diagnostic breast mass lesion images with potential application to computer-aided diagnosis (CADx) and content-based image retrieval (CBIR). ADNs (Zeiler et al., 2011) are recently-proposed unsupervised, generative hierarchical models that decompose images via convolutional sparse coding and max pooling. We trained the ADNs to learn multiple layers of representation for two breast image data sets on two different modalities (739 full field digital mammography (FFDM) and 2393 ultrasound images). Feature map calculations were accelerated by use of GPUs. Following Zeiler et al., we applied the Spatial Pyramid Matching (SPM) kernel (Lazebnik et al., 2006) on the inferred feature maps and combined this with a linear support vector machine (SVM) classifier for the task of binary classification between cancer and non-cancer breast mass lesions. Non-linear, local structure preserving dimension reduction, Elastic Embedding (Carreira-Perpiñán, 2010), was then used to visualize the SPM kernel output in 2D and qualitatively inspect image relationships learned. Performance was found to be competitive with current CADx schemes that use human-designed features, e.g., achieving a 0.632+ bootstrap AUC (by case) of 0.83 [0.78, 0.89] for an ultrasound image set (1125 cases).

  5. Sensory Processing Subtypes in Autism: Association with Adaptive Behavior

    ERIC Educational Resources Information Center

    Lane, Alison E.; Young, Robyn L.; Baker, Amy E. Z.; Angley, Manya T.

    2010-01-01

    Children with autism are frequently observed to experience difficulties in sensory processing. This study examined specific patterns of sensory processing in 54 children with autistic disorder and their association with adaptive behavior. Model-based cluster analysis revealed three distinct sensory processing subtypes in autism. These subtypes…

  6. An adaptive PCA fusion method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Guo, Qing; Li, An; Zhang, Hongqun; Feng, Zhongkui

    2014-10-01

    The principal component analysis (PCA) method is a popular fusion method used for its efficiency and high spatial-resolution improvement. However, spectral distortion is often found in PCA. In this paper, we propose an adaptive PCA method to enhance the spectral quality of the fused image. The amount of spatial detail of the panchromatic (PAN) image injected into each band of the multi-spectral (MS) image is appropriately determined by a weighting matrix, which is defined by the edges of the PAN image, the edges of the MS image and the proportions between MS bands. In order to prove the effectiveness of the proposed method, qualitative visual and quantitative analyses are introduced. The correlation coefficient (CC), the spectral discrepancy (SPD), and the spectral angle mapper (SAM) are used to measure the spectral quality of each fused band image. The Q index is calculated to evaluate the global spectral quality of all the fused bands as a whole. The spatial quality is evaluated by the average gradient (AG) and the standard deviation (STD). Experimental results show that the proposed method substantially improves the spectral quality compared with the original PCA method, while maintaining the high spatial quality of the original PCA.
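
    The baseline PCA fusion that the paper improves on is a component-substitution scheme, sketched below: the first principal component of the MS bands is replaced by the histogram-matched PAN image and the transform is inverted. The synthetic ramp-plus-checkerboard data are for illustration only.

```python
import numpy as np

def pca_fusion(ms, pan):
    """Classic PCA component-substitution fusion.
    ms: (bands, H, W) multispectral image, pan: (H, W) panchromatic image."""
    b, h, w = ms.shape
    X = ms.reshape(b, -1)
    mu = X.mean(1, keepdims=True)
    Xc = X - mu
    cov = Xc @ Xc.T / Xc.shape[1]           # band covariance
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]  # descending eigenvalue order
    pcs = vecs.T @ Xc                       # principal components
    p = pan.reshape(-1).astype(float)
    # match PAN statistics to the first PC before substitution
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[0].std() + pcs[0].mean()
    pcs[0] = p
    return (vecs @ pcs + mu).reshape(b, h, w)

base = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))      # smooth MS content
detail = (np.indices((32, 32)).sum(axis=0) % 2) * 2.0 - 1.0  # PAN-only detail
pan = base + 0.15 * detail
ms = np.stack([0.9 * base, 1.0 * base, 1.1 * base])
fused = pca_fusion(ms, pan)
```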

  7. Image-quality metrics for characterizing adaptive optics system performance.

    PubMed

    Brigantic, R T; Roggemann, M C; Bauer, K W; Welsh, B M

    1997-09-10

    Adaptive optics system (AOS) performance is a function of the system design, seeing conditions, and light level of the wave-front beacon. It is desirable to optimize the controllable parameters in an AOS to maximize some measure of performance. For this optimization to be useful, it is necessary that a set of image-quality metrics be developed that vary monotonically with the AOS performance under a wide variety of imaging environments. Accordingly, as conditions change, one can be confident that the computed metrics dictate appropriate system settings that will optimize performance. Three such candidate metrics are presented. The first is the Strehl ratio; the second is a novel metric that modifies the Strehl ratio by integration of the modulus of the average system optical transfer function to a noise-effective cutoff frequency at which some specified image spectrum signal-to-noise ratio level is attained; and the third is simply the cutoff frequency just mentioned. It is shown that all three metrics are correlated with the rms error (RMSE) between the measured image and the associated diffraction-limited image. Of these, the Strehl ratio and the modified Strehl ratio exhibit consistently high correlations with the RMSE across a broad range of conditions and system settings. Furthermore, under conditions that yield a constant average system optical transfer function, the modified Strehl ratio can still be used to delineate image quality, whereas the Strehl ratio cannot.
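
    The first metric, the Strehl ratio, can be computed from the pupil function by Fourier optics: the aberrated PSF peak divided by the diffraction-limited peak. The circular aperture and quadratic (defocus-like) phase below are illustrative assumptions, not the paper's imaging conditions.

```python
import numpy as np

def strehl_ratio(pupil_phase, aperture):
    """Strehl ratio: peak of the aberrated PSF divided by the peak of the
    diffraction-limited PSF, both obtained by FFT of the pupil function."""
    field = aperture * np.exp(1j * pupil_phase)
    psf = np.abs(np.fft.fft2(field)) ** 2
    ideal = np.abs(np.fft.fft2(aperture)) ** 2
    return float(psf.max() / ideal.max())

n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = (x ** 2 + y ** 2 <= (n // 4) ** 2).astype(float)
flat = strehl_ratio(np.zeros((n, n)), aperture)     # no aberration
defocus = 1.5 * (x ** 2 + y ** 2) / (n // 4) ** 2 * aperture  # ~defocus phase
aberrated = strehl_ratio(defocus, aperture)
```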

  8. Adaptive methods of two-scale edge detection in post-enhancement visual pattern processing

    NASA Astrophysics Data System (ADS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2008-04-01

    Adaptive methods are defined and experimentally studied for a two-scale edge detection process that mimics human visual perception of edges and is inspired by the parvo-cellular (P) and magno-cellular (M) physiological subsystems of natural vision. This two-channel processing consists of a high spatial acuity/coarse contrast channel (P) and a coarse acuity/fine contrast (M) channel. We perform edge detection after a very strong non-linear image enhancement that uses smart Retinex image processing. Two conditions that arise from this enhancement demand adaptiveness in edge detection. These conditions are the presence of random noise, further exacerbated by the enhancement process, and the equally random occurrence of dense textural visual information. We examine how to best deal with both phenomena with an automatic adaptive computation that treats both high noise and dense textures as too much information, and gracefully shifts from small-scale to medium-scale edge pattern priorities. This shift is accomplished by using different edge-enhancement schemes that correspond with the (P) and (M) channels of the human visual system. We also examine the case of adapting to a third image condition, namely too little visual information, and automatically adjust edge detection sensitivities when sparse feature information is encountered. When this methodology is applied to a sequence of images of the same scene but with varying exposures and lighting conditions, this edge-detection process produces pattern constancy that is very useful for several imaging applications that rely on image classification in variable imaging conditions.

  9. Adaptive geodesic transform for segmentation of vertebrae on CT images

    NASA Astrophysics Data System (ADS)

    Gaonkar, Bilwaj; Shu, Liao; Hermosillo, Gerardo; Zhan, Yiqiang

    2014-03-01

    Vertebral segmentation is a critical first step in any quantitative evaluation of vertebral pathology using CT images. This is especially challenging because bone marrow tissue has the same intensity profile as the muscle surrounding the bone. Thus simple methods such as thresholding or adaptive k-means fail to accurately segment vertebrae. While several other algorithms such as level sets may be used for segmentation, any algorithm that is clinically deployable has to work in under a few seconds. To address these dual challenges, we present here a new algorithm based on the geodesic distance transform that is capable of segmenting the spinal vertebrae in under one second. To achieve this we extend the theory of the geodesic distance transforms proposed in [1] to incorporate high-level anatomical knowledge through adaptive weighting of image gradients. Such knowledge may be provided by the user directly or may be automatically generated by another algorithm. We incorporate information 'learnt' using a previously published machine learning algorithm [2] to segment the L1 to L5 vertebrae. While we present a particular application here, the adaptive geodesic transform is a generic concept which can be applied to segmentation of other organs as well.
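
    A minimal geodesic distance transform with gradient-weighted edges can be sketched with Dijkstra's algorithm; the 4-connectivity, the 1 + alpha*|ΔI| edge cost, and the binary toy image are assumptions of this sketch, and the paper's adaptive anatomical weighting is not modeled.

```python
import heapq
import numpy as np

def geodesic_distance(img, seeds, alpha=10.0):
    """Geodesic distance transform: Dijkstra over the 4-connected pixel grid
    with edge cost 1 + alpha * |intensity difference|, so distance grows
    slowly inside homogeneous regions and sharply across strong gradients."""
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for r, c in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                        # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + 1.0 + alpha * abs(img[nr, nc] - img[r, c])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist

# bright "bone" disk on a dark background; seed placed inside the disk
img = np.zeros((21, 21))
yy, xx = np.mgrid[:21, :21]
img[(yy - 10) ** 2 + (xx - 10) ** 2 <= 25] = 1.0
dist = geodesic_distance(img, [(10, 10)])
```

Inside the disk the distance is just the path length in pixels, while leaving the disk adds an alpha-sized penalty for crossing the intensity boundary.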

  10. An adaptive-optics scanning laser ophthalmoscope for imaging murine retinal microstructure

    NASA Astrophysics Data System (ADS)

    Alt, Clemens; Biss, David P.; Tajouri, Nadja; Jakobs, Tatjana C.; Lin, Charles P.

    2010-02-01

    In vivo retinal imaging is an outstanding tool to observe biological processes unfold in real time. The ability to image microstructure in vivo can greatly enhance our understanding of function in retinal microanatomy under normal conditions and in disease. Transgenic mice are frequently used for mouse models of retinal diseases. However, commercially available retinal imaging instruments lack the optical resolution and spectral flexibility necessary to visualize detail comprehensively. We developed an adaptive optics scanning laser ophthalmoscope (AO-SLO) specifically for mouse eyes. Our SLO is a sensorless adaptive optics system (no Shack-Hartmann sensor) that employs a stochastic parallel gradient descent algorithm to modulate a deformable mirror, ultimately aiming to correct wavefront aberrations by optimizing confocal image sharpness. The resulting resolution allows detailed observation of retinal microstructure. The AO-SLO can resolve retinal microglia and their moving processes, demonstrating that microglia processes are highly motile, constantly probing their immediate environment. Similarly, retinal ganglion cells are imaged along with their axons and sprouting dendrites. Retinal blood vessels are imaged using both Evans blue fluorescence and backscattering contrast.
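
    The stochastic parallel gradient descent loop mentioned above can be sketched as follows, with a toy quadratic "sharpness" metric standing in for the confocal image-sharpness readout; the gain, perturbation size, and actuator count are arbitrary assumptions of the sketch.

```python
import numpy as np

def spgd(metric, n_act, iters=300, perturb=0.1, gain=0.5, seed=0):
    """Stochastic parallel gradient descent: perturb all mirror actuators
    at once with a random +/- pattern, measure the metric change, and step
    along the estimated gradient direction."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_act)                     # actuator commands
    for _ in range(iters):
        du = perturb * rng.choice([-1.0, 1.0], n_act)
        dj = metric(u + du) - metric(u - du)
        u = u + gain * dj * du              # ascend the metric
    return u

# toy "sharpness": maximal when commands cancel a fixed aberration vector
true_ab = np.array([0.8, -0.5, 0.3, -0.2])
sharpness = lambda u: -np.sum((u + true_ab) ** 2)
u = spgd(sharpness, len(true_ab))
```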

  11. Adaptive and Background-Aware GAL4 Expression Enhancement of Co-registered Confocal Microscopy Images.

    PubMed

    Trapp, Martin; Schulze, Florian; Novikov, Alexey A; Tirian, Laszlo; J Dickson, Barry; Bühler, Katja

    2016-04-01

    GAL4 gene expression imaging using confocal microscopy is a common and powerful technique used to study the nervous system of a model organism such as Drosophila melanogaster. Recent research projects have focused on high-throughput screenings of thousands of different driver lines, resulting in large image databases. The amount of data generated makes manual assessment tedious or even impossible. The first and most important step in any automatic image processing and data extraction pipeline is to enhance areas with relevant signal. However, data acquired via high-throughput imaging tends to be less than ideal for this task, often showing high amounts of background signal. Furthermore, neuronal structures, and in particular thin and elongated projections with a weak staining signal, are easily lost. In this paper we present a method for enhancing the relevant signal by utilizing a Hessian-based filter to augment thin and weak tube-like structures in the image. To get optimal results, we present a novel adaptive background-aware enhancement filter parametrized with the local background intensity, which is estimated based on a common background model. We also integrate recent research on adaptive image enhancement into our approach, allowing us to propose an effective solution for known problems present in confocal microscopy images. We provide an evaluation based on annotated image data and compare our results against current state-of-the-art algorithms. The results show that our algorithm clearly outperforms the existing solutions. PMID:26743993
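
    A minimal version of a Hessian-based tube-enhancement step looks like this; it uses single-scale finite-difference Hessians, whereas practical vesselness filters use Gaussian derivatives at multiple scales, and the paper additionally applies background-aware normalization. A bright ridge has one strongly negative Hessian eigenvalue across the ridge, so the negated smaller eigenvalue highlights thin structures.

```python
import numpy as np

def tube_filter(img):
    """Single-scale Hessian ridge response for bright tube-like structures:
    compute the 2x2 Hessian per pixel, take its eigenvalues via the
    trace/discriminant formula, and return max(-lambda_min, 0)."""
    gy, gx = np.gradient(img)
    hyy, _ = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    tr = hxx + hyy
    disc = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    l_min = tr / 2.0 - disc                 # more negative eigenvalue
    return np.maximum(-l_min, 0.0)

# synthetic image: one bright horizontal neurite on a dark background
img = np.zeros((32, 32))
img[16, 4:28] = 1.0
resp = tube_filter(img)
```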

  12. Processing infrared images for target detection: A literature study

    NASA Astrophysics Data System (ADS)

    Alblas, B. P.

    1988-07-01

    Methods of image processing applied to IR images to obtain better detection and/or recognition of military targets, particularly vehicles, are reviewed. The following subjects are dealt with: histogram specification, scanline degradation, correlation, clutter and noise. Only a few studies deal with the effects of image processing on human performance. Most of the literature concerns computer vision. Local adaptive and image dependent techniques appear to be the most promising methods of obtaining higher observation performance. In particular the size-contrast box filter and histogram specification methods seem to be suitable. There is a need for a generally applicable definition of image quality and clutter level to evaluate the utility of a specified algorithm. Proposals for further research are given.
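
    The size-contrast box filter mentioned as promising can be sketched as a difference of box means matched to the expected target size; the inner/outer box sizes and the synthetic warm-target scene below are assumptions made for the sketch.

```python
import numpy as np

def size_contrast_response(img, inner=3, outer=9):
    """Size-contrast box filter: mean over a target-sized inner box minus
    mean over a larger surrounding box, a simple screen for vehicle-sized
    warm targets against local background in IR imagery."""
    def box_mean(im, k):
        pad = k // 2
        p = np.pad(im, pad, mode="edge")
        win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
        return win.mean(axis=(2, 3))
    return box_mean(img, inner) - box_mean(img, outer)

rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.05, (40, 40))       # background clutter/noise
img[20:23, 20:23] += 1.0                    # 3x3 warm "target"
resp = size_contrast_response(img)
peak = np.unravel_index(resp.argmax(), resp.shape)
```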

  13. Shape-model-based adaptation of 3D deformable meshes for segmentation of medical images

    NASA Astrophysics Data System (ADS)

    Pekar, Vladimir; Kaus, Michael R.; Lorenz, Cristian; Lobregt, Steven; Truyen, Roel; Weese, Juergen

    2001-07-01

    Segmentation methods based on the adaptation of deformable models have found numerous applications in medical image analysis. Many efforts have been made in recent years to improve their robustness and reliability. In particular, an increasing number of methods use a priori information about the shape of the anatomical structure to be segmented. This reduces the risk of the model being attracted to false features in the image and, as a consequence, makes the need for close initialization, which remains the principal limitation of elastically deformable models, less critical for segmentation quality. In this paper, we present a novel segmentation approach which uses a 3D anatomical statistical shape model to initialize the adaptation process of a deformable model represented by a triangular mesh. As the first step, the anatomical shape model is parametrically fitted to the structure of interest in the image. The result of this global adaptation is used to initialize the local mesh refinement based on an energy minimization. We applied our approach to segment spine vertebrae in CT datasets. The segmentation quality was quantitatively assessed for 6 vertebrae, from 2 datasets, by computing the mean and maximum distance between the adapted mesh and a manually segmented reference shape. The results of the study show that the presented method is a promising approach for segmentation of complex anatomical structures in medical images.

  14. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for compensating certain visual defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.

  15. On adaptive robustness approach to Anti-Jam signal processing

    NASA Astrophysics Data System (ADS)

    Poberezhskiy, Y. S.; Poberezhskiy, G. Y.

    An effective approach to exploiting statistical differences between desired and jamming signals, named adaptive robustness, is proposed and analyzed in this paper. It combines conventional Bayesian, adaptive, and robust approaches that are complementary to each other. This combination strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for the realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the fastest and most intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.

  16. High dynamic range image rendering with a Retinex-based adaptive filter.

    PubMed

    Meylan, Laurence; Süsstrunk, Sabine

    2006-09-01

    We propose a new method to render high dynamic range images that models global and local adaptation of the human visual system. Our method is based on the center-surround Retinex model. The novelty of our method is, first, the use of an adaptive filter whose shape follows the image's high-contrast edges, thus reducing the halo artifacts common to other methods. Second, only the luminance channel is processed; it is defined by the first component of a principal component analysis. Principal component analysis provides orthogonality between channels and thus reduces the chromatic changes caused by the modification of luminance. We show that our method efficiently renders high dynamic range images, and we compare our results with the current state of the art. PMID:16948325
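
    The center/surround Retinex core can be sketched with a fixed box-blur surround on the luminance channel; the paper's contribution is precisely to replace this fixed surround with an adaptive, edge-following filter, so the sketch below shows only the baseline operation, and the surround size and test scene are assumptions.

```python
import numpy as np

def box_blur(img, k):
    """Box blur with edge padding, used here as the surround function."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.mean(axis=(2, 3))

def retinex(lum, k=15, eps=1e-6):
    """Center/surround Retinex on the luminance channel:
    log(center) - log(surround) removes slowly varying illumination."""
    return np.log(lum + eps) - np.log(box_blur(lum, k) + eps)

# HDR-ish scene: same reflectance pattern under a 100x illumination step
rng = np.random.default_rng(0)
pattern = 1.0 + 0.2 * rng.uniform(-1.0, 1.0, (32, 32))
scene = pattern.copy()
scene[:, 16:] *= 100.0                      # bright right half
out = retinex(scene)
```

After the operation, the bright and dark halves produce outputs on a common scale, which is the dynamic-range compression the abstract refers to.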

  17. Adaptive image contrast enhancement algorithm for point-based rendering

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.

  18. Adaptive downsampling to improve image compression at low bit rates.

    PubMed

    Lin, Weisi; Dong, Li

    2006-09-01

    At low bit rates, better coding quality can be achieved by downsampling the image prior to compression and estimating the missing portion after decompression. This paper presents a new algorithm in this paradigm, based on the adaptive selection of appropriate downsampling directions/ratios and quantization steps, in order to achieve higher coding quality at low bit rates while accounting for local visual significance. The full-resolution image can be restored from the DCT coefficients of the downsampled pixels, so the spatial interpolation otherwise required is avoided. The proposed algorithm significantly raises the critical bit rate to approximately 1.2 bpp, from 0.15-0.41 bpp in the existing downsample-prior-to-JPEG schemes, and therefore outperforms the standard JPEG method over a much wider bit-rate range. The experiments demonstrate better PSNR improvement over the existing techniques below the critical bit rate. In addition, the adaptive mode decision not only makes the critical bit rate less image-dependent, but also automates coder switching in variable bit-rate applications, since the algorithm falls back to the standard JPEG method whenever necessary at higher bit rates.

  19. Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints

    NASA Astrophysics Data System (ADS)

    Lee, Ho; Xing, Lei; Davidi, Ran; Li, Ruijiang; Qian, Jianguo; Lee, Rena

    2012-04-01

    Volumetric cone-beam CT (CBCT) images are acquired repeatedly during a course of radiation therapy and a natural question to ask is whether CBCT images obtained earlier in the process can be utilized as prior knowledge to reduce patient imaging dose in subsequent scans. The purpose of this work is to develop an adaptive prior image constrained compressed sensing (APICCS) method to solve this problem. Reconstructed images using full projections are taken on the first day of radiation therapy treatment and are used as prior images. The subsequent scans are acquired using a protocol of sparse projections. In the proposed APICCS algorithm, the prior images are utilized as an initial guess and are incorporated into the objective function in the compressed sensing (CS)-based iterative reconstruction process. Furthermore, the prior information is employed to detect any possible mismatched regions between the prior and current images for improved reconstruction. For this purpose, the prior images and the reconstructed images are classified into three anatomical regions: air, soft tissue and bone. Mismatched regions are identified by local differences of the corresponding groups in the two classified sets of images. A distance transformation is then introduced to convert the information into an adaptive voxel-dependent relaxation map. In constructing the relaxation map, the matched regions (unchanged anatomy) between the prior and current images are assigned with smaller weight values, which are translated into less influence on the CS iterative reconstruction process. On the other hand, the mismatched regions (changed anatomy) are associated with larger values and the regions are updated more by the new projection data, thus avoiding any possible adverse effects of prior images. The APICCS approach was systematically assessed by using patient data acquired under standard and low-dose protocols for qualitative and quantitative comparisons. The APICCS method provides an
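The step that converts the mismatch detection into a voxel-dependent relaxation map can be sketched roughly as follows; the exponential falloff profile and its parameter are illustrative assumptions, not the paper's exact mapping.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def relaxation_map(prior_labels, current_labels, falloff=10.0):
    """Voxel-wise relaxation weights from a prior/current label mismatch.

    Matched anatomy -> small weight (the prior is trusted); mismatched
    anatomy -> weight near 1 (new projection data dominates). The
    exponential profile and falloff constant are illustrative choices.
    """
    mismatch = prior_labels != current_labels
    # Distance (in voxels) from every matched voxel to the nearest mismatch.
    dist = distance_transform_edt(~mismatch)
    return np.exp(-dist / falloff)  # equals 1.0 inside mismatched regions

# Toy example: 3-class labels (0 = air, 1 = soft tissue, 2 = bone) with
# one region of changed anatomy in the current scan.
prior = np.zeros((64, 64), dtype=int)
prior[20:40, 20:40] = 1
current = prior.copy()
current[25:35, 25:35] = 2  # anatomy changed here
w = relaxation_map(prior, current)
```

The weights then scale how strongly each voxel is constrained toward the prior image inside the CS iterations: near-zero weight keeps the prior, weight near one lets the sparse projections update the voxel.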

  20. Focusing a NIR adaptive optics imager; experience with GSAOI

    NASA Astrophysics Data System (ADS)

    Doolan, Matthew; Bloxham, Gabe; Conroy, Peter; Jones, Damien; McGregor, Peter; Stevanovic, Dejan; Van Harmelen, Jan; Waldron, Liam E.; Waterson, Mark; Zhelem, Ross

    2006-06-01

    The Gemini South Adaptive Optics Imager (GSAOI), to be used with the Multi-Conjugate Adaptive Optics (MCAO) system at Gemini South, is currently in the final stages of assembly and testing. GSAOI uses a suite of 26 different filters, made from both BK7 and fused silica substrates. These filters, located in a non-collimated beam, act as active optical elements. The optical design was undertaken to ensure that both filter substrates focused longitudinally at the same point. During testing of the instrument, it was found that the longitudinal focus was filter dependent. The methods used to investigate this are outlined in the paper. These investigations identified several possible causes for the focal shift, including substrate material properties under cryogenic conditions and small amounts of residual filter power.

  1. Handbook on COMTAL's Image Processing System

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.

    1983-01-01

    An image processing system is the combination of an image processor with other control and display devices, plus the necessary software, to produce an interactive capability to analyze and enhance image data. Such an image processing system, installed at NASA Langley Research Center, Instrument Research Division, Acoustics and Vibration Instrumentation Section (AVIS), is described. Although much of the information contained herein can be found in other references, it is hoped that this single handbook will give the user better access, in concise form, to pertinent information on the usage of the image processing system.

  2. Robust image registration using adaptive coherent point drift method

    NASA Astrophysics Data System (ADS)

    Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong

    2016-04-01

    The coherent point drift (CPD) method is a powerful registration tool under the framework of the Gaussian mixture model (GMM). However, it considers only the global spatial structure of the point sets, without any additional attribute information. The equivalent simplification of the mixing parameters and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed to automatically determine the mixing parameters by embedding the local attribute information of features into the construction of the GMM. In addition, the weight parameter is treated as an unknown and is automatically determined within the expectation-maximization algorithm. In image registration applications, a block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. Experimental results on optical and remote sensing images show that the proposed method can significantly improve the matching performance.

  3. Integrating digital topology in image-processing libraries.

    PubMed

    Lamy, Julien

    2007-01-01

    This paper describes a method to integrate digital topology information into image-processing libraries. This additional information allows a library user to write algorithms that respect topological constraints, for example, a seed fill or a skeletonization algorithm. As digital topology is absent from most image-processing libraries, such constraints cannot otherwise be fulfilled. We describe and give code samples for all the structures necessary for this integration, and show a use case in the form of a homotopic thinning filter inside ITK. The resulting filter can be up to a hundred times as fast as ITK's thinning filter and works for any image dimension. This paper mainly deals with integration within ITK, but the approach can be adapted with only minor modifications to other image-processing libraries.

  4. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE PAGES

    Collette, R.; King, J.; Keiser, Jr., D.; Miller, B.; Madden, J.; Schulthess, J.

    2016-06-08

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating the bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.
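The Sauvola rule computes a local threshold T = m · (1 + k · (s/R − 1)) from the local mean m and standard deviation s. A minimal sketch on a synthetic micrograph follows, using conventional defaults for k, R, and the window size (the study's actual parameter values are not given here):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_threshold(img, window=25, k=0.2, R=128.0):
    """Local Sauvola threshold: T = m * (1 + k * (s / R - 1)).

    m and s are the local mean and standard deviation inside the window;
    R is the assumed dynamic range of s (128 for 8-bit gray levels).
    """
    img = img.astype(float)
    m = uniform_filter(img, window)
    m2 = uniform_filter(img * img, window)
    s = np.sqrt(np.maximum(m2 - m * m, 0.0))
    return m * (1.0 + k * (s / R - 1.0))

# Toy micrograph: one dark "fission gas void" on a brighter, noisy matrix.
rng = np.random.default_rng(0)
img = 180.0 + 5.0 * rng.standard_normal((128, 128))
img[40:48, 40:48] = 60.0
voids = img < sauvola_threshold(img)
porosity = voids.mean()                 # area fraction of detected voids
```

Because the threshold tracks local statistics, the same rule adapts between the stable bright matrix and the variable-contrast regions around voids, which is the behavior the abstract attributes to the Sauvola technique.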

  5. Adaptive regularized scheme for remote sensing image fusion

    NASA Astrophysics Data System (ADS)

    Tang, Sizhang; Shen, Chaomin; Zhang, Guixu

    2016-06-01

    We propose an adaptive regularized algorithm for remote sensing image fusion based on variational methods. In the algorithm, we integrate the inputs using a "grey world" assumption to achieve visual uniformity. We propose a fusion operator that can automatically select the total variation (TV)-L1 term for edges and L2 terms for non-edges. To implement our algorithm, we use the steepest descent method to solve the corresponding Euler-Lagrange equation. Experimental results show that the proposed algorithm performs remarkably well.

  6. Adaptive constructive processes and the future of memory

    PubMed Central

    Schacter, Daniel L.

    2013-01-01

    Memory serves critical functions in everyday life, but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, or illusions. The article describes several types of memory errors that are produced by adaptive constructive processes, and focuses in particular on the process of imagining or simulating events that might occur in one’s personal future. Simulating future events relies on many of the same cognitive and neural processes as remembering past events, which may help to explain why imagination and memory can be easily confused. The article considers both pitfalls and adaptive aspects of future event simulation in the context of research on planning, prediction, problem solving, mind-wandering, prospective and retrospective memory, coping and positivity bias, and the interconnected set of brain regions known as the default network. PMID:23163437

  7. Computers in Public Schools: Changing the Image with Image Processing.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  8. A synoptic description of coal basins via image processing

    NASA Technical Reports Server (NTRS)

    Farrell, K. W., Jr.; Wherry, D. B.

    1978-01-01

    An existing image processing system is adapted to describe the geologic attributes of a regional coal basin. This scheme handles a map as if it were a matrix, in contrast to more conventional approaches which represent map information in terms of linked polygons. The utility of the image processing approach is demonstrated by a multiattribute analysis of the Herrin No. 6 coal seam in Illinois. Findings include the location of a resource and estimation of tonnage corresponding to constraints on seam thickness, overburden, and Btu value, which are illustrative of the need for new mining technology.

  9. Adaptive stereo medical image watermarking using non-corresponding blocks.

    PubMed

    Mohaghegh, H; Karimi, N; Soroushmehr, S M R; Samavi, S; Najarian, K

    2015-08-01

    Today with the advent of technology in different medical imaging fields, the use of stereoscopic images has increased. Furthermore, with the rapid growth in telemedicine for remote diagnosis, treatment, and surgery, there is a need for watermarking. This is for copyright protection and tracking of digital media. Also, the efficient use of bandwidth for transmission of such data is another concern. In this paper an adaptive watermarking scheme is proposed that considers human visual system in depth perception. Our proposed scheme modifies maximum singular values of wavelet coefficients of stereo pair for embedding watermark bits. Experimental results show high 3D visual quality of watermarked video frames. Moreover, comparison with a compatible state of the art method shows that the proposed method is highly robust against attacks such as AWGN, salt and pepper noise, and JPEG compression. PMID:26737224

  11. Fourier transform digital holographic adaptive optics imaging system

    PubMed Central

    Liu, Changgeng; Yu, Xiao; Kim, Myung K.

    2013-01-01

    A Fourier transform digital holographic adaptive optics imaging system and its basic principles are proposed. The CCD is placed at the exact Fourier transform plane of the pupil of the eye lens. The spherical curvature introduced by the optics, except the eye lens itself, is eliminated. The CCD is also at the image plane of the target. The point-spread function of the system is directly recorded, making it easier to determine the correct guide-star hologram. Also, the light signal is stronger at the CCD, especially for phase-aberration sensing. Numerical propagation is avoided. The sensor aperture does not limit the resolution, and the possibility of using low-coherence or incoherent illumination is opened up. The system becomes more efficient and flexible. Although it is intended for ophthalmic use, it also shows potential application in microscopy. The robustness and feasibility of this compact system are demonstrated by simulations and experiments using scattering objects. PMID:23262541

  12. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising

    PubMed Central

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as a patch. Adaptive search windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. For the experiment on clinical images, the proposed AT-PCA method can suppress the noise, enhance the edges, and improve the image quality more effectively than the NLM and KSVD denoising methods. PMID:25993566
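The per-component shrinkage step can be illustrated with an ordinary (matrix, not tensor) patch PCA and the usual Wiener-style LMMSE factor. This is a simplified stand-in for the paper's tensor-based transform, on a toy group of similar patches:

```python
import numpy as np

def pca_shrink_denoise(patches, noise_var):
    """Denoise a stack of similar patches via PCA-domain Wiener shrinkage.

    patches: (n, d) array, one flattened patch per row. Ordinary matrix
    PCA stands in for the paper's tensor transform; the LMMSE factor
    var / (var + noise_var) is the usual per-component shrinkage rule.
    """
    mean = patches.mean(axis=0)
    x = patches - mean
    eigval, eigvec = np.linalg.eigh(x.T @ x / len(x))   # patch covariance
    coeff = x @ eigvec                                  # PCA coefficients
    signal_var = np.maximum(eigval - noise_var, 0.0)
    shrink = signal_var / (signal_var + noise_var)
    return (coeff * shrink) @ eigvec.T + mean

# Toy group: 200 noisy copies of the same 16-sample patch.
rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0.0, 1.0, 16), (200, 1))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = pca_shrink_denoise(noisy, noise_var=0.01)
mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

Components whose eigenvalue barely exceeds the noise variance are attenuated strongly, while strong signal components pass almost unchanged; aggregating overlapping denoised patches would then yield the full image, as in the paper.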

  13. True-Time-Delay Adaptive Array Processing Using Photorefractive Crystals

    NASA Astrophysics Data System (ADS)

    Kriehn, G. R.; Wagner, K.

    Radio frequency (RF) signal processing has proven to be a fertile application area for photorefractive-based optical processing techniques. This is due to a photorefractive material's capability to record gratings and diffract off these gratings with optically modulated beams that contain a wide RF bandwidth; applications include the bias-free time-integrating correlator [1], adaptive signal processing, and jammer excision [2, 3, 4]. Photorefractive processing of signals from RF antenna arrays is especially appropriate because of the massive parallelism that is readily achievable in a photorefractive crystal (in which many resolvable beams can be incident on a single crystal simultaneously, each coming from an optical modulator driven by a separate RF antenna element), and because a number of approaches for adaptive array processing using photorefractive crystals have been successfully investigated [5, 6]. In these applications, the adaptive weight coefficients are represented by the amplitude and phase of the holographic gratings, and many millions of such adaptive weights can be multiplexed within the volume of a photorefractive crystal. RF-modulated optical signals from each array element are diffracted from the adaptively recorded photorefractive gratings (which can be multiplexed either angularly or spatially) and are then coherently combined with the appropriate amplitude weights and phase shifts to effectively steer the angular receptivity pattern of the antenna array toward the desired arriving signal. Likewise, the antenna nulls can be rotated toward unwanted narrowband jammers for extinction, thereby optimizing the signal-to-interference-plus-noise ratio.
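The steering-and-nulling behavior described above can be illustrated numerically with the standard narrowband MVDR weight solution; this shows the underlying array mathematics, not the photorefractive implementation, and the array geometry and jammer power are illustrative.

```python
import numpy as np

def steering(theta_deg, n=8, spacing=0.5):
    """Plane-wave steering vector for an n-element uniform linear array
    (element spacing in wavelengths)."""
    k = 2.0 * np.pi * spacing * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n))

n = 8
a_sig = steering(0.0, n)        # desired arrival direction
a_jam = steering(20.0, n)       # narrowband jammer direction
# Interference-plus-noise covariance: strong jammer plus unit noise.
R = 100.0 * np.outer(a_jam, a_jam.conj()) + np.eye(n)

# MVDR weights: minimize output power with unit gain on the desired signal.
Ri_a = np.linalg.solve(R, a_sig)
w = Ri_a / (a_sig.conj() @ Ri_a)

gain_sig = abs(w.conj() @ a_sig)    # unit gain toward the signal
gain_jam = abs(w.conj() @ a_jam)    # deep null toward the jammer
```

The amplitude and phase of `w` play the role the abstract assigns to the holographic gratings: coherent recombination with these weights steers the receptivity pattern toward the signal while placing a null on the jammer.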

  14. Image processing applied to laser cladding process

    SciTech Connect

    Meriaudeau, F.; Truchetet, F.

    1996-12-31

    The laser cladding process, which consists of adding a melted powder to a substrate in order to improve or change the behavior of the material against corrosion, fatigue, and so on, involves many parameters. In order to produce good tracks, some parameters need to be controlled during the process. The authors present here a low-cost, high-performance system using two CCD matrix cameras. One camera provides surface temperature measurements while the other gives information on the powder distribution or the geometric characteristics of the tracks. The surface temperature (via the Beer-Lambert law) enables one to detect variations in the mass feed rate. Using such a system, the authors are able to detect fluctuations of 2 to 3 g/min in the mass flow rate. The other camera gives information related to the powder distribution; a simple algorithm applied to the data acquired from the CCD matrix camera allows them to see very weak fluctuations in both gas flows (carrier or shielding gas). During the process, this camera is also used to perform geometric measurements. The height and the width of the track are obtained in real time, enabling the operator to infer information related to the process parameters, such as the processing speed and the mass flow rate. The authors present the results provided by their system in order to enhance the efficiency of the laser cladding process. The conclusion is dedicated to a summary of the presented work and expectations for the future.

  15. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing broad scope in product advertising. Mixing real-world images with those rendered from virtual-space software shows a more or less visible mismatch between the corresponding image quality performance. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to image degradation factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all these image quality degradation factors can be characterized by the system point spread function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of image degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real-world image qualities. The system MTF is determined by the slanted-edge method, both in laboratory conditions and in the real picture environment, in order to compare the influence of working conditions on device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
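The matching step reduces to convolving the rendered image with the measured system PSF. A sketch with an isotropic Gaussian PSF follows; in practice the width would come from the slanted-edge MTF measurement, and the sigma used here is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def match_rendered_to_camera(rendered, sigma_psf):
    """Degrade a rendered image with the camera system's approximate PSF.

    The system PSF is modeled as an isotropic Gaussian; sigma_psf would
    be derived from the slanted-edge MTF measurement of the real camera.
    """
    return gaussian_filter(rendered, sigma_psf)

# Synthetic rendered image: a perfectly sharp step edge.
rendered = np.zeros((64, 64))
rendered[:, 32:] = 1.0
matched = match_rendered_to_camera(rendered, sigma_psf=1.5)

# Edge contrast over one pixel drops after filtering, as in a real capture.
step_before = rendered[32, 32] - rendered[32, 31]
step_after = matched[32, 32] - matched[32, 31]
```

The softened edge profile of `matched` mimics the finite-resolution response of the camera system, which is exactly the visual property that makes the composited CGI and real imagery agree.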

  16. Image Processing in Intravascular OCT

    NASA Astrophysics Data System (ADS)

    Wang, Zhao; Wilson, David L.; Bezerra, Hiram G.; Rollins, Andrew M.

    Coronary artery disease is the leading cause of death in the world. Intravascular optical coherence tomography (IVOCT) is rapidly becoming a promising imaging modality for characterization of atherosclerotic plaques and evaluation of coronary stenting. OCT has several unique advantages over alternative technologies, such as intravascular ultrasound (IVUS), due to its better resolution and contrast. For example, OCT is currently the only imaging modality that can measure the thickness of the fibrous cap of an atherosclerotic plaque in vivo. OCT also has the ability to accurately assess the coverage of individual stent struts by neointimal tissue over time. However, it is extremely time-consuming to analyze IVOCT images manually to derive quantitative diagnostic metrics. In this chapter, we introduce some computer-aided methods to automate the common IVOCT image analysis tasks.

  17. Adaptive optics scanning laser ophthalmoscope imaging: technology update.

    PubMed

    Merino, David; Loza-Alvarez, Pablo

    2016-01-01

    Adaptive optics (AO) retinal imaging has become very popular in the past few years, especially within the ophthalmic research community. Several different retinal techniques, such as fundus imaging cameras or optical coherence tomography systems, have been coupled with AO in order to produce impressive images showing individual cell mosaics over different layers of the in vivo human retina. The combination of AO with scanning laser ophthalmoscopy has been extensively used to generate impressive images of the human retina with unprecedented resolution, showing individual photoreceptor cells, retinal pigment epithelium cells, as well as microscopic capillary vessels, or the nerve fiber layer. Over the past few years, the technique has evolved to develop several different applications not only in the clinic but also in different animal models, thanks to technological developments in the field. These developments have specific applications to different fields of investigation, which are not limited to the study of retinal diseases but also to the understanding of the retinal function and vision science. This review is an attempt to summarize these developments in an understandable and brief manner in order to guide the reader into the possibilities that AO scanning laser ophthalmoscopy offers, as well as its limitations, which should be taken into account when planning on using it.

  19. Extreme learning machine and adaptive sparse representation for image classification.

    PubMed

    Cao, Jiuwen; Zhang, Kai; Luo, Minxia; Yin, Chun; Lai, Xiaoping

    2016-09-01

    Recent research has shown the speed advantage of extreme learning machine (ELM) and the accuracy advantage of sparse representation classification (SRC) in the area of image classification. Those two methods, however, have their respective drawbacks, e.g., in general, ELM is known to be less robust to noise while SRC is known to be time-consuming. Consequently, ELM and SRC complement each other in computational complexity and classification accuracy. In order to unify such mutual complementarity and thus further enhance the classification performance, we propose an efficient hybrid classifier to exploit the advantages of ELM and SRC in this paper. More precisely, the proposed classifier consists of two stages: first, an ELM network is trained by supervised learning. Second, a discriminative criterion about the reliability of the obtained ELM output is adopted to decide whether the query image can be correctly classified or not. If the output is reliable, the classification will be performed by ELM; otherwise the query image will be fed to SRC. Meanwhile, in the stage of SRC, a sub-dictionary that is adaptive to the query image instead of the entire dictionary is extracted via the ELM output. The computational burden of SRC thus can be reduced. Extensive experiments on handwritten digit classification, landmark recognition and face recognition demonstrate that the proposed hybrid classifier outperforms ELM and SRC in classification accuracy with outstanding computational efficiency.
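The two-stage gating can be sketched on toy data as follows. The reliability criterion (top-two score margin) and the ridge-regularized coding used as a stand-in for true l1 sparse coding are illustrative simplifications, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-class problem: well-separated Gaussian blobs in 2-D.
n_per, d, n_cls = 60, 2, 3
means = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([m + rng.standard_normal((n_per, d)) for m in means])
y = np.repeat(np.arange(n_cls), n_per)

# Stage 1: ELM -- fixed random hidden layer, ridge-regressed output weights.
W_in = rng.standard_normal((d, 50))
b_in = rng.standard_normal(50)
hidden = lambda Z: np.tanh(Z @ W_in + b_in)
T = np.eye(n_cls)[y]                          # one-hot targets
H = hidden(X)
beta = np.linalg.solve(H.T @ H + 0.1 * np.eye(50), H.T @ T)

def classify(x, margin=0.3, lam=1.0):
    scores = (hidden(x[None]) @ beta).ravel()
    top2 = np.argsort(scores)[-2:]
    if scores[top2[1]] - scores[top2[0]] >= margin:
        return int(top2[1])                   # reliable: trust the ELM
    # Stage 2 fallback: coding residual on a sub-dictionary restricted to
    # the classes the ELM found plausible (ridge coding stands in for l1).
    resid = []
    for c in top2:
        D = X[y == c].T                       # class-c dictionary, d x n_per
        coef = np.linalg.solve(D.T @ D + lam * np.eye(n_per), D.T @ x)
        resid.append(np.linalg.norm(D @ coef - x))
    return int(top2[int(np.argmin(resid))])

acc = np.mean([classify(X[i]) == y[i] for i in range(len(X))])
```

The key design point from the abstract survives the simplification: the expensive coding stage only runs on unreliable queries, and only over a sub-dictionary selected by the ELM output, which is where the computational savings come from.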

  1. Extended adaptive filtering for wide-angle SAR image formation

    NASA Astrophysics Data System (ADS)

    Wang, Yanwei; Roberts, William; Li, Jian

    2005-05-01

    For two-dimensional (2-D) spectral analysis, adaptive filtering based techniques such as CAPON and APES (Amplitude and Phase EStimation) were developed under the implicit assumption that the data sets are rectangular. However, in real SAR applications, especially in wide-angle cases, the collected data sets are often non-rectangular. This raises the problem of how to extend the original adaptive filtering based algorithms to such scenarios. In this paper, we propose an extended adaptive filtering (EAF) approach, comprising Extended APES (E-APES) and Extended CAPON (E-CAPON), for arbitrarily shaped 2-D data. The EAF algorithms adopt a missing-data approach in which the unavailable data samples close to the collected data set are treated as missing. Using a group of filter banks with varying sizes, these algorithms are non-iterative and do not require estimation of the unavailable samples. The improved imaging results of the proposed algorithms are demonstrated by applying them to two different SAR data sets.

  2. Shape adaptive, robust iris feature extraction from noisy iris images.

    PubMed

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise removal step is used only to detect noisy parts of the iris region, and features extracted there are excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape-adaptive wavelet transform and the shape-adaptive Gabor wavelet for feature extraction on iris recognition performance. In addition, an effective noise removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that using the shape-adaptive Gabor-wavelet technique improves the recognition rate. PMID:24696801

  3. Image Processing: A State-of-the-Art Way to Learn Science.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    Teachers participating in the Image Processing for Teaching Process, begun at the University of Arizona's Lunar and Planetary Laboratory in 1989, find this technology ideal for encouraging student discovery, promoting constructivist science or math experiences, and adapting in classrooms. Because image processing is not a computerized text, it…

  4. Adaptive Optics and Lucky Imager (AOLI): presentation and first light

    NASA Astrophysics Data System (ADS)

    Velasco, S.; Rebolo, R.; Mackay, C.; Oscoz, A.; King, D. L.; Crass, J.; Díaz-Sánchez, A.; Femenía, B.; González-Escalera, V.; Labadie, L.; López, R. L.; Pérez Garrido, A.; Puga, M.; Rodríguez-Ramos, L. F.; Zuther, J.

    2015-05-01

    In this paper we present the Adaptive Optics Lucky Imager (AOLI), a state-of-the-art instrument combining two well-proven techniques for extremely high spatial resolution with ground-based telescopes: Lucky Imaging (LI) and Adaptive Optics (AO). AOLI comprises an AO system, including a low-order non-linear curvature wavefront sensor together with a 241-actuator deformable mirror; a science array of four 1024x1024 EMCCDs, allowing fields of view from 120×120" down to 36×36"; a calibration subsystem; and powerful LI software. Thanks to the revolutionary WFS, AOLI shall be able to use faint reference stars (I˜16.5-17.5), enabling it to be used over a much wider part of the sky than common Shack-Hartmann AO systems. The instrument saw first light in September 2013 at the William Herschel Telescope. Although the instrument was not complete, this commissioning run demonstrated its feasibility, yielding a FWHM for the best PSF of 0.151±0.005" and a plate scale of 55.0±0.3 {mas} {pix}^{-1}. Those observations allowed us to establish some characteristics of the interesting multiple T Tauri system LkHα 262-263, finding it to be gravitationally bound. This multiple system combines the presence of proto-planetary discs, one shown to be double, with the first optically resolved pair LkHα 263AB (0.42" separation).

  5. Adaptation of commercial microscopes for advanced imaging applications

    NASA Astrophysics Data System (ADS)

    Brideau, Craig; Poon, Kelvin; Stys, Peter

    2015-03-01

    Today's commercially available microscopes offer a wide array of options to accommodate common imaging experiments. Occasionally, an experimental goal will require an unusual light source, filter, or even irregular sample that is not compatible with existing equipment. In these situations the ability to modify an existing microscopy platform with custom accessories can greatly extend its utility and allow for experiments not possible with stock equipment. Light source conditioning/manipulation such as polarization, beam diameter or even custom source filtering can easily be added with bulk components. Custom and after-market detectors can be added to external ports using optical construction hardware and adapters. This paper will present various examples of modifications carried out on commercial microscopes to address both atypical imaging modalities and research needs. Violet and near-ultraviolet source adaptation, custom detection filtering, and laser beam conditioning and control modifications will be demonstrated. The availability of basic `building block' parts will be discussed with respect to user safety, construction strategies, and ease of use.

  6. Adaptive Optics Imaging Survey of Luminous Infrared Galaxies

    SciTech Connect

    Laag, E A; Canalizo, G; van Breugel, W; Gates, E L; de Vries, W; Stanford, S A

    2006-03-13

    We present high resolution imaging observations of a sample of previously unidentified far-infrared galaxies at z < 0.3. The objects were selected by cross-correlating the IRAS Faint Source Catalog with the VLA FIRST catalog and the HST Guide Star Catalog to allow for adaptive optics observations. We found two new ULIGs (with L_FIR ≥ 10^12 L_⊙) and 19 new LIGs (with L_FIR ≥ 10^11 L_⊙). Twenty of the galaxies in the sample were imaged with either the Lick or Keck adaptive optics systems in H or K′. Galaxy morphologies were determined using the two-dimensional fitting program GALFIT, and the residuals were examined for interesting structure. The morphologies reveal that at least 30% are involved in tidal interactions, with 20% being clear mergers. An additional 50% show signs of possible interaction. Line ratios were used to determine the powering mechanism: of the 17 objects in the sample showing clear emission lines, four are active galactic nuclei and seven are starburst galaxies. The rest exhibit a combination of both phenomena.

  7. An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2016-01-01

    A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, implementing an adaptive-Gaussian (AG) fuzzy membership method, a compressed sensing (CS) technique, and a total variation (TV) based gradient descent reconstruction algorithm, is proposed for the fusion of infrared and visible images. Compared with wavelets, contourlets, or any other multi-resolution analysis method, NSCT has many evident advantages, such as multi-scale and multi-direction analysis and translation invariance. A fuzzy set is characterized by its membership function (MF), and the well-known Gaussian fuzzy membership degree can be introduced to establish adaptive control of the fusion processing. The compressed sensing technique sparsely samples the image information at a certain sampling rate, and the sparse signal can be recovered by solving a convex problem with gradient descent based iterative algorithms. In the proposed fusion process, the pre-enhanced infrared image and the visible image are first decomposed into low-frequency and high-frequency subbands via the NSCT. The low-frequency coefficients are fused using the adaptive regional average energy rule; the highest-frequency coefficients are fused using the maximum absolute selection rule; the remaining high-frequency coefficients are sparsely sampled, fused using the adaptive-Gaussian regional standard deviation rule, and then recovered with the total variation based gradient descent recovery algorithm. Experimental results and human visual perception illustrate the effectiveness and advantages of the proposed fusion approach. Its efficiency and robustness are also analyzed and discussed using different evaluation measures, such as standard deviation, Shannon entropy, root-mean-square error, mutual information and an edge-based similarity index.
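
    The two simplest fusion rules named in the abstract can be sketched directly on coefficient arrays. The functions below assume the subband coefficients are already available as 2-D NumPy arrays; the 3×3 window is an illustrative choice, and the adaptive-Gaussian rule is omitted.

```python
import numpy as np

def fuse_lowpass(a, b, win=3):
    """Regional-average-energy rule: pick the coefficient whose local
    window has the higher mean energy (a simplified stand-in for the
    paper's adaptive regional rule)."""
    pad = win // 2
    def energy(x):
        xp = np.pad(x, pad, mode='reflect') ** 2
        out = np.zeros_like(x, dtype=float)
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                out[i, j] = xp[i:i + win, j:j + win].mean()
        return out
    ea, eb = energy(a), energy(b)
    return np.where(ea >= eb, a, b)

def fuse_highpass(a, b):
    """Maximum-absolute-selection rule for the highest-frequency subband."""
    return np.where(np.abs(a) >= np.abs(b), a, b)
```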

  8. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  9. Applying statistical process control to the adaptive rate control problem

    NASA Astrophysics Data System (ADS)

    Manohar, Nelson R.; Willebeek-LeMair, Marc H.; Prakash, Atul

    1997-12-01

    Due to the heterogeneity and shared resource nature of today's computer network environments, the end-to-end delivery of multimedia requires adaptive mechanisms to be effective. We present a framework for the adaptive streaming of heterogeneous media. We introduce the application of online statistical process control (SPC) to the problem of dynamic rate control. In SPC, the goal is to establish (and preserve) a state of statistical quality control (i.e., controlled variability around a target mean) over a process. We consider the end-to-end streaming of multimedia content over the internet as the process to be controlled. First, at each client, we measure process performance and apply statistical quality control (SQC) with respect to application-level requirements. Then, we guide an adaptive rate control (ARC) problem at the server based on the statistical significance of trends and departures on these measurements. We show this scheme facilitates handling of heterogeneous media. Last, because SPC is designed to monitor long-term process performance, we show that our online SPC scheme could be used to adapt to various degrees of long-term (network) variability (i.e., statistically significant process shifts as opposed to short-term random fluctuations). We develop several examples and analyze its statistical behavior and guarantees.
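
    A minimal sketch of the SPC idea, assuming end-to-end delay measurements as the monitored process: a Shewhart-style mean ± kσ check decides whether a rate change is statistically warranted, so short-term random fluctuations leave the rate untouched. The window, k, and step parameters are hypothetical, not the authors' values.

```python
import statistics

def spc_rate_adapt(samples, rate, window=20, k=3.0, step=0.1):
    """Shewhart-style check on recent delay measurements. Only a point
    beyond mean +/- k*sigma (an out-of-control signal) triggers a rate
    change; this is an illustrative sketch, not the authors' exact ARC
    policy."""
    recent = samples[-window:]
    mu = statistics.mean(recent)
    sigma = statistics.stdev(recent)
    latest = samples[-1]
    if sigma > 0 and latest > mu + k * sigma:
        return rate * (1 - step)       # congestion shift: back off
    if sigma > 0 and latest < mu - k * sigma:
        return rate * (1 + step)       # sustained headroom: ramp up
    return rate                        # in statistical control: hold
```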

  10. Non-linear Post Processing Image Enhancement

    NASA Technical Reports Server (NTRS)

    Hunt, Shawn; Lopez, Alex; Torres, Angel

    1997-01-01

    A non-linear filter for image post-processing based on the feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean-square non-linear filter, examples of the high-frequency recovery, and the statistical properties of the filter are given.

  11. Theory of Adaptive Acquisition Method for Image Reconstruction from Projections and Application to EPR Imaging

    NASA Astrophysics Data System (ADS)

    Placidi, G.; Alecci, M.; Sotgiu, A.

    1995-07-01

    An adaptive method for selecting the projections to be used for image reconstruction is presented. The method starts with the acquisition of four projections at angles of 0°, 45°, 90°, 135° and selects the new angles by computing a function of the previous projections. This makes it possible to adapt the selection of projections to the arbitrary shape of the sample, thus measuring a more informative set of projections. When the sample is smooth or has internal symmetries, this technique allows a reduction in the number of projections required to reconstruct the image without loss of information. The method has been tested on simulated data at different values of signal-to-noise ratio (S/N) and on experimental data recorded by an EPR imaging apparatus.

  12. Quantitative image processing in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  13. Water surface capturing by image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  14. An efficient self-adaptive model for chaotic image encryption algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Xiaoling; Ye, Guodong

    2014-12-01

    In this paper, an efficient self-adaptive model for a chaotic image encryption algorithm is proposed. With the help of the classical permutation-diffusion structure and two simple two-dimensional chaotic systems, an efficient and fast encryption algorithm is designed. Unlike most existing methods, which are found to be insecure under chosen-plaintext or known-plaintext attack in the permutation or diffusion process, the keystream generated in both operations of our method depends on the plain-image. Therefore, different plain-images yield different keystreams in both processes, even if only one bit of the plain-image is changed. This design solves the problem of a fixed chaotic sequence being produced by the same initial conditions for different images. Moreover, the operation speed is high because complex mathematical methods, such as the Runge-Kutta method for solving high-dimensional partial differential equations, are avoided. Numerical experiments show that the proposed self-adaptive method resists chosen-plaintext and known-plaintext attacks well and has high security and efficiency.
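
    The plaintext-dependent keystream idea can be sketched with a single logistic map whose initial condition is derived from a hash of the plain-image. The paper uses two 2-D chaotic systems; the 1-D map, the SHA-256 seeding, and the XOR diffusion below are illustrative stand-ins.

```python
import hashlib

def logistic_keystream(img_bytes, n, mu=3.99):
    """Keystream from a logistic map seeded by the plain-image: a
    one-bit change in the image yields a different keystream, which is
    the self-adaptive idea, sketched with a single 1-D map."""
    digest = hashlib.sha256(img_bytes).digest()
    x = (int.from_bytes(digest[:8], 'big') % 10**8) / 10**8  # seed in [0,1)
    x = min(max(x, 1e-9), 1 - 1e-9)
    stream = bytearray()
    for _ in range(n):
        x = mu * x * (1 - x)             # logistic map iteration
        stream.append(int(x * 256) % 256)
    return bytes(stream)

def xor_diffuse(img_bytes):
    """XOR diffusion with the image-dependent keystream (its own inverse)."""
    ks = logistic_keystream(img_bytes, len(img_bytes))
    return bytes(p ^ k for p, k in zip(img_bytes, ks))
```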

  15. A fast and efficient adaptive threshold rate control scheme for remote sensing images.

    PubMed

    Chen, Xiao; Xu, Xiaoqing

    2012-01-01

    The JPEG2000 image compression standard is well suited to processing remote sensing images. However, its algorithm is complex and requires large amounts of memory, making it difficult to adapt to the limited transmission and storage resources available for remote sensing. In the present study, an improved rate control algorithm for remote sensing images is proposed. The coded blocks are sorted in descending order by their number of bit planes prior to entropy coding. An adaptive threshold, computed from the minimum number of bit planes together with the minimum rate-distortion slope and the compression ratio, is used to truncate the passes of each code block during Tier-1 encoding. This avoids encoding all coding passes and improves coding efficiency. Simulation results show that the computational cost and working buffer memory size of the proposed algorithm reach only 18.13% and 7.81%, respectively, of those of the post-compression rate-distortion algorithm, while the peak signal-to-noise ratio of the images remains almost the same. The proposed algorithm thus greatly reduces coding complexity and buffer requirements while maintaining image quality.
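
    The sort-then-truncate skeleton of such a scheme can be sketched as follows. The data layout and the byte budget standing in for the paper's adaptive threshold are assumptions made for illustration only.

```python
def truncate_blocks(blocks, budget_bytes):
    """Greedy sketch of threshold-based truncation: code blocks are
    visited in descending order of significant bit planes, and coding
    passes are kept until the byte budget is exhausted. The paper
    derives its threshold from bit planes, R-D slope and target ratio;
    this sketch keeps only the sorting-plus-budget skeleton.
    `blocks` is a list of dicts: {'planes': int, 'passes': [pass_bytes, ...]}."""
    order = sorted(range(len(blocks)), key=lambda i: -blocks[i]['planes'])
    kept = {i: 0 for i in range(len(blocks))}   # passes kept per block
    used = 0
    for i in order:
        for size in blocks[i]['passes']:
            if used + size > budget_bytes:
                return kept, used
            used += size
            kept[i] += 1
    return kept, used
```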

  16. Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images.

    PubMed

    Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina

    2016-05-01

    Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
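
    The filtering-plus-local-detection structure can be sketched as a Gaussian matched filter followed by thresholded local-maxima picking. Unlike the paper's AFLD, the filter scale below is fixed rather than adapted to the estimated cone spacing, and the parameters are illustrative.

```python
import numpy as np

def detect_cones(img, sigma=2.0, thresh=0.5):
    """Gaussian matched filtering followed by 8-neighbour local-maxima
    detection; a simplified, fixed-scale stand-in for AFLD."""
    r = int(3 * sigma)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    g /= g.sum()
    H, W = img.shape
    pad = np.pad(img.astype(float), r, mode='reflect')
    resp = np.zeros((H, W))
    for i in range(H):                        # direct 2-D correlation
        for j in range(W):
            resp[i, j] = (pad[i:i + 2*r + 1, j:j + 2*r + 1] * g).sum()
    rng = resp.max() - resp.min()
    resp = (resp - resp.min()) / (rng + 1e-12)
    peaks = []                                # thresholded local maxima
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            if resp[i, j] > thresh and resp[i, j] >= resp[i-1:i+2, j-1:j+2].max():
                peaks.append((i, j))
    return peaks
```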

  18. Adaptive automatic segmentation of Leishmaniasis parasite in Indirect Immunofluorescence images.

    PubMed

    Ouertani, F; Amiri, H; Bettaib, J; Yazidi, R; Ben Salah, A

    2014-01-01

    This paper describes the first steps toward automating the serum titration process, which requires automation of Indirect Immunofluorescence (IIF) diagnosis. We address the initial phase, the segmentation of fluorescence images. Our approach consists of three principal stages: (1) a color-based segmentation that extracts the fluorescent foreground using k-means clustering, (2) the segmentation of the fluorescent clustered image, and (3) a region-based feature segmentation intended to remove noisy fluorescent regions and locate fluorescent parasites. We evaluated the proposed method on 40 IIF images. Experimental results show that the method provides reliable and robust automatic segmentation of the fluorescent Promastigote parasite. PMID:25571049

  19. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures.
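
    The IHS fusion step can be sketched in its simplest intensity-substitution form, assuming the multispectral image has already been upsampled to the high-resolution grid; the paper's directionally-adaptive regularization and NLM filtering are omitted.

```python
import numpy as np

def ihs_fuse(rgb_lr_up, intensity_hr):
    """Intensity-substitution IHS fusion: replace the intensity of the
    upsampled colour image with the high-resolution intensity estimate
    while keeping the hue/saturation (channel differences) unchanged.
    A minimal stand-in for the IHS step of the SR pipeline."""
    I = rgb_lr_up.mean(axis=2)            # simple intensity component
    delta = intensity_hr - I
    return rgb_lr_up + delta[..., None]   # shift every channel equally
```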

  1. Adaptive HIFU noise cancellation for simultaneous therapy and imaging using an integrated HIFU/imaging transducer

    PubMed Central

    Jeong, Jong Seob; Cannata, Jonathan Matthew; Shung, K Kirk

    2010-01-01

    It was previously demonstrated that it is feasible to simultaneously perform ultrasound therapy and imaging of a coagulated lesion during treatment with an integrated transducer that is capable of high intensity focused ultrasound (HIFU) and B-mode ultrasound imaging. It was found that coded excitation and fixed notch filtering upon reception could significantly reduce interference caused by the therapeutic transducer. During HIFU sonication, the imaging signal generated with coded excitation and fixed notch filtering had a range side-lobe level of less than −40 dB, while traditional short-pulse excitation and fixed notch filtering produced a range side-lobe level of −20 dB. The shortcoming is, however, that relatively complicated electronics may be needed to utilize coded excitation in an array imaging system. It is for this reason that in this paper an adaptive noise canceling technique is proposed to improve image quality by minimizing not only the therapeutic interference, but also the remnant side-lobe ‘ripples’ when using the traditional short-pulse excitation. The performance of this technique was verified through simulation and experiments using a prototype integrated HIFU/imaging transducer. Although it is known that the remnant ripples are related to the notch attenuation value of the fixed notch filter, in reality, it is difficult to find the optimal notch attenuation value due to changes in the targets or the media resulting from motion or different acoustic properties even during one sonication pulse. In contrast, the proposed adaptive noise canceling technique is capable of optimally minimizing both the therapeutic interference and residual ripples without such constraints. The prototype integrated HIFU/imaging transducer is composed of three rectangular elements. The 6 MHz center element is used for imaging and the outer two identical 4 MHz elements work together to transmit the HIFU beam. Two HIFU elements of 14.4 mm × 20.0 mm dimensions
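
    The adaptive noise canceling described above can be illustrated with a standard LMS canceller, assuming a reference channel correlated with the therapeutic interference is available; the filter order and step size below are illustrative, not the authors' values.

```python
import numpy as np

def lms_cancel(primary, reference, order=8, mu=0.01):
    """LMS adaptive noise canceller: `primary` is the imaging signal
    plus therapeutic interference, `reference` is correlated with the
    interference alone. The filter learns to predict the interference
    from the reference; the prediction error is the cleaned output."""
    w = np.zeros(order)
    buf = np.zeros(order)
    out = np.zeros(len(primary))
    for n in range(len(primary)):
        buf[1:] = buf[:-1]            # shift the reference tap line
        buf[0] = reference[n]
        y = w @ buf                   # estimated interference
        e = primary[n] - y            # cleaned output sample
        w += 2 * mu * e * buf         # LMS weight update
        out[n] = e
    return out
```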

  2. Noise correlation-based adaptive polarimetric image representation for contrast enhancement of a polarized beacon in fog

    NASA Astrophysics Data System (ADS)

    Panigrahi, Swapnesh; Fade, Julien; Alouini, Mehdi

    2015-10-01

    We show the use of a simplified snapshot polarimetric camera, along with adaptive image processing, for optimal detection of a polarized light beacon through fog. The adaptive representation is derived from a theoretical noise analysis of the data at hand and is shown to be optimal in the maximum-likelihood sense. We report that the contrast-enhancing optimal representation, which depends on the background noise correlation, differs in general from standard representations such as the polarimetric difference image or the polarization-filtered image. Lastly, we discuss a detection strategy to reduce false-positive counts.

  3. EUV imaging experiment of an adaptive optics telescope

    NASA Astrophysics Data System (ADS)

    Kitamoto, S.; Shibata, T.; Takenaka, E.; Yoshida, M.; Murakami, H.; Shishido, Y.; Gotoh, N.; Nagasaki, K.; Takei, D.; Morii, M.

    2009-08-01

    We report an experimental result from our normal-incidence EUV telescope tuned to the 13.5 nm band, with adaptive optics. The optics consists of a spherical primary mirror and a secondary mirror, both coated with Mo/Si multilayers. The diameters of the primary and secondary mirrors are 80 mm and 55 mm, respectively. The secondary mirror is a deformable mirror with 31 bimorph-piezo electrodes. EUV light from a laser plasma source illuminated a Ni mesh with 31 μm wires, and the image of this mesh was recorded by a back-illuminated CCD. The reference wave was produced by an optical laser source with a 1 μm pinhole. We measured the wavefront of this reference wave and controlled the secondary mirror to obtain a good EUV image. Since the EUV path and the optical reference path differed from each other, we modified the target wavefront used to control the deformable mirror so that the EUV image was optimized. Higher-order Zernike components of the target wavefront, as well as tilt and focus components, were added to the simply calculated reference wavefront. We confirmed the validity of this control and achieved a resolution of 2.1 arcsec.

  4. Local adaptive approach toward segmentation of microscopic images of activated sludge flocs

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Burhan; Nisar, Humaira; Ng, Choon Aun; Lo, Po Kim; Yap, Vooi Voon

    2015-11-01

    The activated sludge process is a widely used method to treat domestic and industrial effluents. The conditions of an activated sludge wastewater treatment plant (AS-WWTP) are related to the morphological properties of flocs (microbial aggregates) and filaments, and must be monitored for normal operation of the plant. Image processing and analysis is a potentially time-efficient monitoring tool for AS-WWTPs. Local adaptive segmentation algorithms are proposed for bright-field microscopic images of activated sludge flocs. Two basic modules are suggested for Otsu-thresholding-based local adaptive algorithms with irregular-illumination compensation. The performance of the algorithms has been compared with the state-of-the-art local adaptive algorithms of Sauvola, Bradley, Feng, and c-mean. The comparisons use a number of region- and nonregion-based metrics at different microscopic magnifications and floc quantifications. The performance metrics show that the proposed algorithms performed better than, and in some cases comparably to, the state-of-the-art algorithms. The metrics were also assessed subjectively for their suitability for segmentation of activated sludge images. Region-based metrics such as the false negative ratio, sensitivity, and negative predictive value gave inconsistent results compared with the other segmentation assessment metrics.
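
    The Otsu-based local adaptive module can be sketched as tile-wise Otsu thresholding, without the paper's irregular-illumination compensation step; the tile size is an illustrative choice, and uniform tiles (a known failure mode of local thresholding) are not handled.

```python
import numpy as np

def otsu_threshold(vals):
    """Classic Otsu: choose the threshold that maximizes the
    between-class variance of the 8-bit histogram."""
    hist, _ = np.histogram(vals, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = (hist * np.arange(256)).sum()
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / (total - w0)
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def local_otsu(img, tile=32):
    """Tile-wise Otsu: one threshold per tile compensates for uneven
    illumination (a simplified version of the local adaptive modules)."""
    out = np.zeros(img.shape, dtype=bool)
    H, W = img.shape
    for i in range(0, H, tile):
        for j in range(0, W, tile):
            block = img[i:i + tile, j:j + tile]
            out[i:i + tile, j:j + tile] = block > otsu_threshold(block)
    return out
```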

  5. Super Resolution Reconstruction Based on Adaptive Detail Enhancement for ZY-3 Satellite Images

    NASA Astrophysics Data System (ADS)

    Zhu, Hong; Song, Weidong; Tan, Hai; Wang, Jingxue; Jia, Di

    2016-06-01

    Super-resolution reconstruction of sequential remote sensing images is a technique that processes multiple low-resolution satellite images carrying complementary information to obtain one or more high-resolution images. The cores of the technique are high-precision matching between images and the extraction and fusion of fine detail information. This paper puts forward a new image super-resolution framework that adaptively enhances the details of the reconstructed image at multiple scales. First, the sequence images are decomposed by a bilateral filter into a detail layer containing the fine detail information and a smooth layer containing the large-scale edge information. Then, a texture-detail enhancement function is constructed to boost the magnitude of medium and small details. Next, the non-redundant information for the super-resolution reconstruction is obtained by differential processing of the detail layer, and an initial super-resolution result is achieved by interpolating and fusing the non-redundant information with the smooth layer. Finally, the final reconstructed image is acquired by applying a local optimization model to the initial result. Experiments on ZY-3 satellite images of the same and different phases show that the proposed method improves both the information entropy and the image-detail evaluation measures compared with interpolation, the traditional TV algorithm and the MAP algorithm, indicating that our method clearly highlights image details and retains more ground texture information. A large number of experimental results reveal that the proposed method is robust and general across different kinds of ZY-3 satellite images.
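
    The smooth/detail decomposition and detail boosting can be sketched with a small brute-force bilateral filter; the enhancement step here is a single global gain rather than the paper's multi-scale enhancement function, and the filter parameters are illustrative.

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter: the edge-preserving smooth layer."""
    r = int(2 * sigma_s)
    H, W = img.shape
    pad = np.pad(img, r, mode='reflect')
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    gs = np.exp(-(x**2 + y**2) / (2 * sigma_s**2))   # spatial kernel
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            win = pad[i:i + 2*r + 1, j:j + 2*r + 1]
            w = gs * np.exp(-(win - img[i, j])**2 / (2 * sigma_r**2))
            out[i, j] = (w * win).sum() / w.sum()
    return out

def enhance_details(img, gain=1.5):
    """Boost the detail layer before recombining with the smooth layer:
    a simplified version of the multi-scale detail enhancement."""
    smooth = bilateral(img)
    detail = img - smooth
    return smooth + gain * detail
```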

  6. Image processing for drawing recognition

    NASA Astrophysics Data System (ADS)

    Feyzkhanov, Rustem; Zhelavskaya, Irina

    2014-03-01

    The task of recognizing the edges of rectangular structures is well known. Still, almost all existing approaches work with static images and place no limit on processing time. We propose applying homography estimation to the video stream obtained from a webcam, together with an algorithm that can be used successfully in this setting. One of the main use cases of such an application is the recognition of drawings made by a person on a piece of paper in front of the webcam.

  7. CT Image Processing Using Public Digital Networks

    PubMed Central

    Rhodes, Michael L.; Azzawi, Yu-Ming; Quinn, John F.; Glenn, William V.; Rothman, Stephen L.G.

    1984-01-01

    Nationwide commercial computer communication is now commonplace for those applications where digital dialogues are generally short and widely distributed, and where bandwidth does not exceed that of dial-up telephone lines. Image processing using such networks is prohibitive because of the large volume of data inherent to digital pictures. With a blend of increasing bandwidth and distributed processing, network image processing becomes possible. This paper examines characteristics of a digital image processing service for a nationwide network of CT scanner installations. Issues of image transmission, data compression, distributed processing, software maintenance, and interfacility communication are also discussed. Included are results that show the volume and type of processing experienced by a network of over 50 CT scanners for the last 32 months.

  8. Adapting the Transtheoretical Model of Change to the Bereavement Process

    ERIC Educational Resources Information Center

    Calderwood, Kimberly A.

    2011-01-01

    Theorists currently believe that bereaved people undergo some transformation of self rather than returning to their original state. To advance our understanding of this process, this article presents an adaptation of Prochaska and DiClemente's transtheoretical model of change as it could be applied to the journey that bereaved individuals…

  9. Behavioral training promotes multiple adaptive processes following acute hearing loss

    PubMed Central

    Keating, Peter; Rosenior-Patten, Onayomi; Dahmen, Johannes C; Bell, Olivia; King, Andrew J

    2016-01-01

    The brain possesses a remarkable capacity to compensate for changes in inputs resulting from a range of sensory impairments. Developmental studies of sound localization have shown that adaptation to asymmetric hearing loss can be achieved either by reinterpreting altered spatial cues or by relying more on those cues that remain intact. Adaptation to monaural deprivation in adulthood is also possible, but appears to lack such flexibility. Here we show, however, that appropriate behavioral training enables monaurally-deprived adult humans to exploit both of these adaptive processes. Moreover, cortical recordings in ferrets reared with asymmetric hearing loss suggest that these forms of plasticity have distinct neural substrates. An ability to adapt to asymmetric hearing loss using multiple adaptive processes is therefore shared by different species and may persist throughout the lifespan. This highlights the fundamental flexibility of neural systems, and may also point toward novel therapeutic strategies for treating sensory disorders. DOI: http://dx.doi.org/10.7554/eLife.12264.001 PMID:27008181

  10. Adaptive beamforming for array signal processing in aeroacoustic measurements.

    PubMed

    Huang, Xun; Bai, Long; Vinogradov, Igor; Peers, Edward

    2012-03-01

    Phased microphone arrays have become an important tool in the localization of noise sources for aeroacoustic applications. In most practical aerospace cases the conventional beamforming algorithm of the delay-and-sum type has been adopted. Conventional beamforming cannot take advantage of knowledge of the noise field, and thus has poorer resolution in the presence of noise and interference. Adaptive beamforming has been used for more than three decades to address these issues and has achieved varying degrees of success in communication and sonar. In this work an adaptive beamforming algorithm designed specifically for aeroacoustic applications is discussed and applied to practical experimental data. The results show that the adaptive beamforming method can save significant post-processing time for a deconvolution method; for example, it reduces the DAMAS computation time by at least 60% for the practical case considered in this work. Adaptive beamforming can therefore be considered a promising signal processing method for aeroacoustic measurements.
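The abstract does not name the specific adaptive algorithm used; a standard example from the adaptive-beamforming literature is the minimum-variance distortionless-response (MVDR) beamformer, sketched below in NumPy. The array geometry, signal model, and all parameter values here are illustrative assumptions, not the paper's.

```python
import numpy as np

def steering_vector(n_mics, spacing, wavelength, angle_rad):
    """Plane-wave steering vector for a uniform linear array."""
    k = 2.0 * np.pi / wavelength
    positions = np.arange(n_mics) * spacing
    return np.exp(-1j * k * positions * np.sin(angle_rad))

def mvdr_weights(R, a):
    """MVDR weights: w = R^-1 a / (a^H R^-1 a)."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Simulated data: source at broadside, strong off-axis interferer, noise.
rng = np.random.default_rng(0)
n_mics, n_snap = 8, 2000
a_src = steering_vector(n_mics, 0.5, 1.0, 0.0)
a_jam = steering_vector(n_mics, 0.5, 1.0, 0.5)
s = rng.standard_normal(n_snap)
j = 10.0 * rng.standard_normal(n_snap)          # 20 dB stronger interferer
x = (np.outer(a_src, s) + np.outer(a_jam, j)
     + 0.1 * rng.standard_normal((n_mics, n_snap)))
R = (x @ x.conj().T) / n_snap                   # sample covariance

w = mvdr_weights(R, a_src)
gain_src = abs(w.conj() @ a_src)                # distortionless: ~1
gain_jam = abs(w.conj() @ a_jam)                # deep null on the jammer
```

Unlike delay-and-sum, the weights depend on the measured covariance R, which is what lets the beamformer place a null on the interferer.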

  11. Parallel digital signal processing architectures for image processing

    NASA Astrophysics Data System (ADS)

    Kshirsagar, Shirish P.; Hartley, David A.; Harvey, David M.; Hobson, Clifford A.

    1994-10-01

    This paper describes research into a high-speed image processing system that uses parallel digital signal processors for the processing of electro-optic images. The objective of the system is to reduce the processing time of non-contact inspection problems, including industrial and medical applications. A single processor cannot deliver the processing power these applications require; hence, a MIMD system was designed and constructed to enable fast processing of electro-optic images. The Texas Instruments TMS320C40 digital signal processor is used because of its high-speed floating-point CPU and its support for parallel processing. A custom-designed VISION bus is provided to transfer images between processors. The system is being applied to solder joint inspection of high-technology printed circuit boards.

  12. Adaptive recovery of motion blur point spread function from differently exposed images

    NASA Astrophysics Data System (ADS)

    Albu, Felix; Florea, Corneliu; Drîmbarean, Alexandru; Zamfir, Adrian

    2010-01-01

    Motion due to digital camera movement during the image capture process is a major factor that degrades image quality, and many methods for camera motion removal have been developed. Central to all techniques is the correct recovery of what is known as the point spread function (PSF). A popular technique estimates the PSF using a pair of gyroscopic sensors to measure the hand motion. However, errors caused either by the loss of the translational component of the movement or by the limited precision of gyro-sensor measurements prevent a good-quality restored image. To compensate for this, we propose a method that begins with an estimate of the PSF obtained from two gyro sensors and uses an under-exposed image together with the blurred image to adaptively improve it. The luminance of the under-exposed image is equalized with that of the blurred image. An initial estimate of the PSF is generated from the output signal of the two gyro sensors. The PSF coefficients are then updated using 2D least-mean-square (LMS) algorithms with a coarse-to-fine approach on a grid of points selected from both images. This refined PSF is used to process the blurred image using known deblurring methods. Our results show that the proposed method leads to superior PSF support and coefficient estimation, and that the quality of the restored image is improved compared with the gyro-only approach and with blind image deconvolution results.
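The coarse-to-fine grid selection is beyond a short sketch, but the core 2D LMS coefficient update can be illustrated as below, refining a PSF estimate from a sharp/blurred image pair. Image size, kernel size, step size, and the zero-start initialization are illustrative assumptions.

```python
import numpy as np

def lms_refine_psf(sharp, blurred, psf0, mu=0.01, n_pass=5):
    """Refine PSF h so that correlating `sharp` with h reproduces
    `blurred`, via a 2D LMS coefficient update at each pixel."""
    h = psf0.copy()
    k = h.shape[0] // 2
    H, W = sharp.shape
    for _ in range(n_pass):
        for y in range(k, H - k):
            for x in range(k, W - k):
                patch = sharp[y - k:y + k + 1, x - k:x + k + 1]
                err = blurred[y, x] - np.sum(h * patch)  # prediction error
                h += mu * err * patch                    # LMS update
    return h

rng = np.random.default_rng(1)
sharp = rng.standard_normal((32, 32))
true_psf = np.array([[0.0, 0.2, 0.0],
                     [0.2, 0.2, 0.2],
                     [0.0, 0.2, 0.0]])
# Synthesize the blurred image with the (symmetric) true PSF.
blurred = np.zeros_like(sharp)
for y in range(1, 31):
    for x in range(1, 31):
        blurred[y, x] = np.sum(true_psf * sharp[y - 1:y + 2, x - 1:x + 2])

psf = lms_refine_psf(sharp, blurred, np.full((3, 3), 1.0 / 9.0))
```

In the noiseless synthetic case the update drives `psf` toward `true_psf`; in the paper's setting the "sharp" reference is the equalized under-exposed image.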

  13. Adaptive distance metric learning for diffusion tensor image segmentation.

    PubMed

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.

  14. Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation

    PubMed Central

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858

  15. Adaptive optics retinal imaging in the living mouse eye.

    PubMed

    Geng, Ying; Dubra, Alfredo; Yin, Lu; Merigan, William H; Sharma, Robin; Libby, Richard T; Williams, David R

    2012-04-01

    Correction of the eye's monochromatic aberrations using adaptive optics (AO) can improve the resolution of in vivo mouse retinal images [Biss et al., Opt. Lett. 32(6), 659 (2007) and Alt et al., Proc. SPIE 7550, 755019 (2010)], but previous attempts have been limited by poor spot quality in the Shack-Hartmann wavefront sensor (SHWS). Recent advances in mouse eye wavefront sensing using an adjustable focus beacon with an annular beam profile have improved the wavefront sensor spot quality [Geng et al., Biomed. Opt. Express 2(4), 717 (2011)], and we have incorporated them into a fluorescence adaptive optics scanning laser ophthalmoscope (AOSLO). The performance of the instrument was tested on the living mouse eye, and images of multiple retinal structures, including the photoreceptor mosaic, nerve fiber bundles, fine capillaries and fluorescently labeled ganglion cells were obtained. The in vivo transverse and axial resolutions of the fluorescence channel of the AOSLO were estimated from the full width half maximum (FWHM) of the line and point spread functions (LSF and PSF), and were found to be better than 0.79 μm ± 0.03 μm (STD) (45% wider than the diffraction limit) and 10.8 μm ± 0.7 μm (STD) (two times the diffraction limit), respectively. The axial positional accuracy was estimated to be 0.36 μm. This resolution and positional accuracy has allowed us to classify many ganglion cell types, such as bistratified ganglion cells, in vivo.

  16. Adaptive optics retinal imaging in the living mouse eye

    PubMed Central

    Geng, Ying; Dubra, Alfredo; Yin, Lu; Merigan, William H.; Sharma, Robin; Libby, Richard T.; Williams, David R.

    2012-01-01

    Correction of the eye’s monochromatic aberrations using adaptive optics (AO) can improve the resolution of in vivo mouse retinal images [Biss et al., Opt. Lett. 32(6), 659 (2007) and Alt et al., Proc. SPIE 7550, 755019 (2010)], but previous attempts have been limited by poor spot quality in the Shack-Hartmann wavefront sensor (SHWS). Recent advances in mouse eye wavefront sensing using an adjustable focus beacon with an annular beam profile have improved the wavefront sensor spot quality [Geng et al., Biomed. Opt. Express 2(4), 717 (2011)], and we have incorporated them into a fluorescence adaptive optics scanning laser ophthalmoscope (AOSLO). The performance of the instrument was tested on the living mouse eye, and images of multiple retinal structures, including the photoreceptor mosaic, nerve fiber bundles, fine capillaries and fluorescently labeled ganglion cells were obtained. The in vivo transverse and axial resolutions of the fluorescence channel of the AOSLO were estimated from the full width half maximum (FWHM) of the line and point spread functions (LSF and PSF), and were found to be better than 0.79 μm ± 0.03 μm (STD) (45% wider than the diffraction limit) and 10.8 μm ± 0.7 μm (STD) (two times the diffraction limit), respectively. The axial positional accuracy was estimated to be 0.36 μm. This resolution and positional accuracy has allowed us to classify many ganglion cell types, such as bistratified ganglion cells, in vivo. PMID:22574260

  17. Multiscale registration of planning CT and daily cone beam CT images for adaptive radiation therapy

    SciTech Connect

    Paquin, Dana; Levy, Doron; Xing Lei

    2009-01-15

    Adaptive radiation therapy (ART) is the incorporation of daily images in the radiotherapy treatment process so that the treatment plan can be evaluated and modified to maximize the radiation dose to the tumor while minimizing the dose delivered to healthy tissue. Registration of planning images with daily images is thus an important component of ART. In this article, the authors report their research on multiscale registration of planning computed tomography (CT) images with daily cone beam CT (CBCT) images. The multiscale algorithm is based on the hierarchical multiscale image decomposition of E. Tadmor, S. Nezzar, and L. Vese [Multiscale Model. Simul. 2(4), pp. 554-579 (2004)]. Registration is achieved by decomposing the images to be registered into a series of scales using the (BV, L{sup 2}) decomposition and initially registering the coarsest scales of the images using a landmark-based registration algorithm. The resulting transformation is then used as a starting point to deformably register the next finer scales with one another. This procedure is iterated at each stage, using the transformation computed by the previous scale registration as the starting point for the current registration. The authors present the results of studies of rectum, head-neck, and prostate CT-CBCT registration, and validate their registration method quantitatively using synthetic results in which the exact transformations are known, and qualitatively using clinical deformations in which the exact results are not known.
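The (BV, L{sup 2}) decomposition and deformable registration are beyond a short sketch, but the coarse-to-fine idea itself, registering the coarsest scale first and seeding each finer scale with the previous estimate, can be illustrated with plain downsampling and translation-only registration. Everything here is a simplified stand-in for the authors' method.

```python
import numpy as np

def downsample(img):
    """2x2 block-average downsampling."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def best_shift(fixed, moving, search, init):
    """Brute-force integer (dy, dx) minimizing SSD near an initial guess."""
    best, best_err = init, np.inf
    for dy in range(init[0] - search, init[0] + search + 1):
        for dx in range(init[1] - search, init[1] + search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.sum((fixed - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def multiscale_register(fixed, moving, levels=3):
    """Register the coarsest scale first; each finer scale starts from
    the (upscaled) estimate of the previous one."""
    pyr = [(fixed, moving)]
    for _ in range(levels - 1):
        f, m = pyr[-1]
        pyr.append((downsample(f), downsample(m)))
    shift = (0, 0)
    for f, m in reversed(pyr):                   # coarsest -> finest
        shift = (2 * shift[0], 2 * shift[1])     # upscale prior estimate
        shift = best_shift(f, m, search=2, init=shift)
    return shift

rng = np.random.default_rng(2)
fixed = rng.standard_normal((64, 64))
moving = np.roll(np.roll(fixed, -5, axis=0), 3, axis=1)
shift = multiscale_register(fixed, moving)       # recovers (5, -3)
```

The coarse search window stays small at every level because each level only refines the previous estimate, which is the practical payoff of the multiscale strategy.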

  18. Quantum state and process tomography via adaptive measurements

    NASA Astrophysics Data System (ADS)

    Wang, HengYan; Zheng, WenQiang; Yu, NengKun; Li, KeRen; Lu, DaWei; Xin, Tao; Li, Carson; Ji, ZhengFeng; Kribs, David; Zeng, Bei; Peng, XinHua; Du, JiangFeng

    2016-10-01

    We investigate quantum state tomography (QST) for pure states and quantum process tomography (QPT) for unitary channels via adaptive measurements. For a quantum system with a d-dimensional Hilbert space, we first propose an adaptive protocol in which only 2d − 1 measurement outcomes are used to accomplish QST for all pure states. This idea is then extended to QPT for unitary channels, where an adaptive unitary process tomography (AUPT) protocol of d² + d − 1 measurement outcomes is constructed for any unitary channel. We experimentally implement the AUPT protocol in a 2-qubit nuclear magnetic resonance system. We examine the performance of the AUPT protocol when applied to the Hadamard gate, the T gate (π/8 phase gate), and the controlled-NOT gate, as these gates form a universal gate set for quantum information processing. As a comparison, standard QPT is also implemented for each gate. Our experimental results show that the AUPT protocol, which reconstructs unitary channels via adaptive measurements, significantly reduces the number of experiments required by standard QPT without considerable loss of fidelity.

  19. Interactive image processing in swallowing research

    NASA Astrophysics Data System (ADS)

    Dengel, Gail A.; Robbins, JoAnne; Rosenbek, John C.

    1991-06-01

    Dynamic radiographic imaging of the mouth, larynx, pharynx, and esophagus during swallowing is used commonly in clinical diagnosis, treatment and research. Images are recorded on videotape and interpreted conventionally by visual perceptual methods, limited to specific measures in the time domain and binary decisions about the presence or absence of events. An image processing system using personal computer hardware and original software has been developed to facilitate measurement of temporal, spatial and temporospatial parameters. Digitized image sequences derived from videotape are manipulated and analyzed interactively. Animation is used to preserve context and increase efficiency of measurement. Filtering and enhancement functions heighten image clarity and contrast, improving visibility of details which are not apparent on videotape. Distortion effects and extraneous head and body motions are removed prior to analysis, and spatial scales are controlled to permit comparison among subjects. Effects of image processing on intra- and interjudge reliability and research applications are discussed.

  20. Medical image classification using spatial adjacent histogram based on adaptive local binary patterns.

    PubMed

    Liu, Dong; Wang, Shengsheng; Huang, Dezhi; Deng, Gang; Zeng, Fantao; Chen, Huiling

    2016-05-01

    Medical image recognition is an important task in both computer vision and computational biology. In the field of medical image classification, representing an image with a local binary patterns (LBP) descriptor has become popular. However, most existing LBP-based methods encode the binary patterns with a fixed neighborhood radius and ignore the spatial relationships among local patterns. Ignoring these spatial relationships leads to poor performance when capturing discriminative features for complex samples, such as medical images obtained by microscopy. To address this problem, in this paper we propose a novel method to improve local binary patterns by assigning an adaptive neighborhood radius to each pixel. Based on these adaptive local binary patterns, we further propose a spatial adjacent histogram strategy to encode the micro-structures for image representation. An extensive set of evaluations is performed on four medical datasets, showing that the proposed method significantly improves standard LBP and compares favorably with several other prevailing approaches. PMID:27058283

  1. Medical image classification using spatial adjacent histogram based on adaptive local binary patterns.

    PubMed

    Liu, Dong; Wang, Shengsheng; Huang, Dezhi; Deng, Gang; Zeng, Fantao; Chen, Huiling

    2016-05-01

    Medical image recognition is an important task in both computer vision and computational biology. In the field of medical image classification, representing an image with a local binary patterns (LBP) descriptor has become popular. However, most existing LBP-based methods encode the binary patterns with a fixed neighborhood radius and ignore the spatial relationships among local patterns. Ignoring these spatial relationships leads to poor performance when capturing discriminative features for complex samples, such as medical images obtained by microscopy. To address this problem, in this paper we propose a novel method to improve local binary patterns by assigning an adaptive neighborhood radius to each pixel. Based on these adaptive local binary patterns, we further propose a spatial adjacent histogram strategy to encode the micro-structures for image representation. An extensive set of evaluations is performed on four medical datasets, showing that the proposed method significantly improves standard LBP and compares favorably with several other prevailing approaches.
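The abstract does not spell out the radius-selection rule. The sketch below computes a standard 8-neighbor LBP code and, purely as an illustrative assumption, chooses each pixel's radius from its local 3×3 variance; the spatial-adjacent-histogram encoding step is omitted.

```python
import numpy as np

def lbp_code(img, y, x, r):
    """8-neighbor LBP code at (y, x) with integer radius r (axis-aligned
    sampling, no interpolation, for simplicity)."""
    c = img[y, x]
    offs = [(-r, -r), (-r, 0), (-r, r), (0, r),
            (r, r), (r, 0), (r, -r), (0, -r)]
    bits = [int(img[y + dy, x + dx] >= c) for dy, dx in offs]
    return sum(b << i for i, b in enumerate(bits))

def adaptive_lbp(img, r_small=1, r_large=2, var_thresh=100.0):
    """LBP map with a per-pixel radius chosen from the local 3x3
    variance (this selection rule is an illustrative assumption,
    not the paper's)."""
    H, W = img.shape
    out = np.zeros((H, W), dtype=np.uint8)
    for y in range(r_large, H - r_large):
        for x in range(r_large, W - r_large):
            local = img[y - 1:y + 2, x - 1:x + 2]
            r = r_small if local.var() > var_thresh else r_large
            out[y, x] = lbp_code(img, y, x, r)
    return out

img = np.arange(64, dtype=float).reshape(8, 8)   # smooth intensity ramp
codes = adaptive_lbp(img)                        # interior codes: 120
```

On the smooth ramp every interior pixel is low-contrast, so the larger radius is chosen and all interior codes agree; on textured regions the rule would drop to the small radius.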

  2. Adaptive image warping for hole prevention in 3D view synthesis.

    PubMed

    Plath, Nils; Knorr, Sebastian; Goldmann, Lutz; Sikora, Thomas

    2013-09-01

    The increasing popularity of 3D video calls for new methods to ease the conversion of existing monocular video to stereoscopic or multi-view video. A popular way to convert video is depth image-based rendering, in which a depth map associated with an image frame is used to generate a virtual view. Because the 3D structure of a scene and its corresponding texture are unknown, however, 2D-to-3D conversion inevitably leads to holes in the resulting 3D image where newly exposed areas appear. The conversion process can be altered so that no holes become visible in the resulting 3D view by superimposing a regular grid over the depth map and deforming it. In this paper, an adaptive image warping approach is proposed as an improvement on the regular-grid approach. The new algorithm exploits the smoothness of a typical depth map to reduce the complexity of the underlying optimization problem that must be solved to find the hole-preventing deformation. This is achieved by splitting the depth map into blocks of homogeneous depth using quadtrees and running the optimization on the resulting adaptive grid. The results show that this approach leads to a considerable reduction in computational complexity while maintaining the visual quality of the synthesized views. PMID:23782807
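The splitting step, dividing the depth map into blocks of homogeneous depth with a quadtree, can be sketched as follows. The homogeneity tolerance and minimum block size are illustrative assumptions, and the grid deformation itself is not shown.

```python
import numpy as np

def quadtree_blocks(depth, y=0, x=0, size=None, tol=2.0, min_size=4):
    """Recursively split a square depth map into blocks whose depth
    range is at most `tol`; returns (y, x, size) leaf blocks."""
    if size is None:
        size = depth.shape[0]
    block = depth[y:y + size, x:x + size]
    if size <= min_size or block.max() - block.min() <= tol:
        return [(y, x, size)]
    h = size // 2
    leaves = []
    for dy, dx in [(0, 0), (0, h), (h, 0), (h, h)]:
        leaves += quadtree_blocks(depth, y + dy, x + dx, h, tol, min_size)
    return leaves

# A depth map that is flat except for one near quadrant:
depth = np.zeros((16, 16))
depth[:8, :8] = 10.0
leaves = quadtree_blocks(depth)   # 4 homogeneous 8x8 blocks
```

Smooth depth regions collapse into a few large blocks, which is what shrinks the optimization relative to a dense regular grid.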

  3. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  4. High-accuracy wavefront control for retinal imaging with Adaptive-Influence-Matrix Adaptive Optics

    PubMed Central

    Zou, Weiyao; Burns, Stephen A.

    2010-01-01

    We present an iterative technique for improving adaptive optics (AO) wavefront correction for retinal imaging, called the Adaptive-Influence-Matrix (AIM) method. This method is based on two observations: the deflection-to-voltage relation of the deformable mirrors commonly used in AO is nonlinear, and the wavefront error of the eye can in general be considered to be composed of a static, non-zero component (such as defocus and astigmatism) and a time-varying component. The aberrated wavefront is first corrected with a generic influence matrix, providing a mirror compensation figure for the static wavefront error. Then a new influence matrix that is more accurate for the specific static wavefront error is calibrated based on the mirror compensation figure. Experimental results show that with the AIM method the AO wavefront correction accuracy can be improved significantly in comparison to the generic AO correction. The AIM method is most useful in AO modalities where there are large static contributions to the wavefront aberrations. PMID:19997241

  5. Nonlinear Optical Image Processing with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Deiss, Ron (Technical Monitor)

    1994-01-01

    The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.

  6. Adaptive Process Control with Fuzzy Logic and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  7. Adaptive process control using fuzzy logic and genetic algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  8. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.

  9. The adaptive FEM elastic model for medical image registration.

    PubMed

    Zhang, Jingya; Wang, Jiajun; Wang, Xiuying; Feng, Dagan

    2014-01-01

    This paper proposes an adaptive mesh refinement strategy for the finite element method (FEM) based elastic registration model. The signature matrix for mesh refinement takes into account the regional intensity variance and the local deformation displacement. The regional intensity variance reflects detailed information for improving registration accuracy and the deformation displacement fine-tunes the mesh refinement for a more efficient algorithm. The gradient flows of two different similarity metrics, the sum of the squared difference and the spatially encoded mutual information for the mono-modal and multi-modal registrations, are used to derive external forces to drive the model to the equilibrium state. We compared our approach to three other models: (1) the conventional multi-resolution FEM registration algorithm; (2) the FEM elastic method that uses variation information for mesh refinement; and (3) the robust block matching based registration. Comparisons among different methods in a dataset with 20 CT image pairs upon artificial deformation demonstrate that our registration method achieved significant improvement in accuracies. Experimental results in another dataset of 40 real medical image pairs for both mono-modal and multi-modal registrations also show that our model outperforms the other three models in its accuracy.

  10. Epidemic processes over adaptive state-dependent networks

    NASA Astrophysics Data System (ADS)

    Ogura, Masaki; Preciado, Victor M.

    2016-06-01

    In this paper we study the dynamics of epidemic processes taking place in adaptive networks of arbitrary topology. We focus our study on the adaptive susceptible-infected-susceptible (ASIS) model, where healthy individuals are allowed to temporarily cut edges connecting them to infected nodes in order to prevent the spread of the infection. In this paper we derive a closed-form expression for a lower bound on the epidemic threshold of the ASIS model in arbitrary networks with heterogeneous node and edge dynamics. For networks with homogeneous node and edge dynamics, we show that the resulting lower bound is proportional to the epidemic threshold of the standard SIS model over static networks, with a proportionality constant that depends on the adaptation rates. Furthermore, based on our results, we propose an efficient algorithm to optimally tune the adaptation rates in order to eradicate epidemic outbreaks in arbitrary networks. We confirm the tightness of the proposed lower bounds with several numerical simulations and compare our optimal adaptation rates with popular centrality measures.
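The paper analyzes the continuous-time ASIS model analytically; a crude discrete-time simulation of the same mechanism, healthy nodes temporarily cutting edges to infected neighbors, can be sketched as below. The time discretization, the rule restoring a node's edges on recovery, and all rates are simplifying assumptions.

```python
import numpy as np

def simulate_asis(A, beta, delta, phi, steps, rng):
    """Discrete-time toy version of the adaptive SIS (ASIS) model:
    infected nodes recover with prob. delta; a healthy node is infected
    through each *active* edge to an infected neighbor with prob. beta,
    or cuts that edge with prob. phi; a node's edges are restored when
    it recovers.  Returns the infected count at each step."""
    n = A.shape[0]
    active = A.astype(bool)
    state = np.zeros(n, dtype=bool)
    state[0] = True                          # seed one infection
    history = []
    for _ in range(steps):
        new = state.copy()
        for i in range(n):
            if state[i]:
                if rng.random() < delta:     # recovery
                    new[i] = False
                    mask = A[i].astype(bool)
                    active[i, :] = mask      # restore cut edges
                    active[:, i] = mask
            else:
                for j in range(n):
                    if active[i, j] and state[j]:
                        if rng.random() < phi:
                            active[i, j] = active[j, i] = False  # cut
                        elif rng.random() < beta:
                            new[i] = True    # infection
        state = new
        history.append(int(state.sum()))
    return history

rng = np.random.default_rng(3)
A = np.ones((20, 20)) - np.eye(20)           # complete graph, 20 nodes
hist = simulate_asis(A, beta=0.3, delta=0.2, phi=0.4, steps=50, rng=rng)
```

Raising the cutting rate `phi` relative to `beta` is the adaptation knob the paper's threshold bound and tuning algorithm operate on.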

  11. Efficient true-time-delay adaptive array processing

    NASA Astrophysics Data System (ADS)

    Wagner, Kelvin H.; Kraut, Shawn; Griffiths, Lloyd J.; Weaver, Samuel P.; Weverka, Robert T.; Sarto, Anthony W.

    1996-11-01

    We present a novel and efficient approach to true-time-delay (TTD) beamforming for large adaptive phased arrays with N elements, for application in radar, sonar, and communication. This broadband, efficient adaptive time-delay array processing method decreases the number of tapped delay lines required for an N-element array from N to only 2, producing an enormous savings in optical hardware, especially for large arrays. The new adaptive system provides the full NM degrees of freedom of a conventional N-element time-delay beamformer with M taps each, enabling it to fully and optimally adapt to an arbitrary complex spatio-temporal signal environment that can contain broadband signals, noise, and narrowband and broadband jammers, all of which can arrive from arbitrary angles onto an arbitrarily shaped array. The photonic implementation of this algorithm uses index gratings produced in the volume of photorefractive crystals as the adaptive weights in a TTD beamforming network, 1 or 2 acousto-optic devices for signal injection, and 1 or 2 time-delay-and-integrate detectors for signal extraction. This approach achieves a significant reduction in hardware complexity compared with systems employing discrete RF hardware for the weights or with alternative optical systems that typically use N-channel acousto-optic deflectors.

  12. Digital Image Processing in Private Industry.

    ERIC Educational Resources Information Center

    Moore, Connie

    1986-01-01

    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  13. Personal Computer (PC) based image processing applied to fluid mechanics

    NASA Technical Reports Server (NTRS)

    Cho, Y.-C.; Mclachlan, B. G.

    1987-01-01

    A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
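The final interpolation step, mapping randomly located particle velocities onto uniform grid points with an adaptive Gaussian window, might look like the sketch below. The width-adaptation rule used here, tying the window to the distance of the k-th nearest sample, is an assumption standing in for whatever rule the authors used.

```python
import numpy as np

def gaussian_interp(points, values, grid_pts, k=4):
    """Interpolate scattered samples onto grid points with a Gaussian
    window whose width adapts to the local sampling density (set here,
    as an assumption, to the distance of the k-th nearest sample)."""
    out = np.zeros(len(grid_pts))
    for g, gp in enumerate(grid_pts):
        d = np.linalg.norm(points - gp, axis=1)
        sigma = np.sort(d)[k - 1]            # adaptive window width
        w = np.exp(-(d / sigma) ** 2)
        out[g] = np.sum(w * values) / np.sum(w)
    return out

rng = np.random.default_rng(4)
pts = rng.random((200, 2))                   # scattered particle positions
vals = np.full(200, 3.0)                     # constant velocity component
grid = np.array([[0.25, 0.25], [0.5, 0.5], [0.75, 0.75]])
u = gaussian_interp(pts, vals, grid)         # recovers the constant field
```

Because the window widens where samples are sparse, every grid point still receives meaningful support even with irregularly seeded particles.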

  14. Edge preserved enhancement of medical images using adaptive fusion-based denoising by shearlet transform and total variation algorithm

    NASA Astrophysics Data System (ADS)

    Gupta, Deep; Anand, Radhey Shyam; Tyagi, Barjeev

    2013-10-01

    Edge preserved enhancement is of great interest in medical images. Noise present in medical images affects the quality, contrast resolution, and, most importantly, texture information, and can also make post-processing difficult. An enhancement approach using an adaptive fusion algorithm is proposed which utilizes the features of the shearlet transform (ST) and the total variation (TV) approach. In the proposed method, three different denoised images are fused adaptively: one processed with the TV method, one with shearlet denoising, and one with edge information recovered from the remnant of the TV method and processed with the ST. Images enhanced with the proposed method show improved visibility and detectability of details. For the proposed method, different weights are evaluated from the variance maps of the individual denoised images and the edge information extracted from the remnant of the TV approach. The performance of the proposed method is evaluated by conducting various experiments on both standard images and different medical images such as computed tomography, magnetic resonance, and ultrasound. Experiments show that the proposed method provides an improvement not only in noise reduction but also in the preservation of more edges and image details as compared to the others.
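
    The adaptive weighting idea — give each denoised input more say at pixels where its local variance map indicates preserved detail — can be sketched as below; the 3x3 window and the direct variance-proportional weights are simplifying assumptions, not the paper's exact scheme:

```python
import numpy as np

def local_variance(img, k=3):
    """Local variance map over a k-by-k sliding window (reflect padding)."""
    pad = k // 2
    p = np.pad(img, pad, mode='reflect')
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.var(axis=(-2, -1))

def fuse(imgs, eps=1e-8):
    """Adaptively fuse denoised images: pixels where an input retains more
    local detail (higher variance) receive a larger weight from it."""
    vmaps = np.stack([local_variance(im) for im in imgs]) + eps
    w = vmaps / vmaps.sum(axis=0, keepdims=True)   # per-pixel weights sum to 1
    return (w * np.stack(imgs)).sum(axis=0)
```
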

  15. Dual-modality brain PET-CT image segmentation based on adaptive use of functional and anatomical information.

    PubMed

    Xia, Yong; Eberl, Stefan; Wen, Lingfeng; Fulham, Michael; Feng, David Dagan

    2012-01-01

    Dual medical imaging modalities, such as PET-CT, are now a routine component of clinical practice. Medical image segmentation methods, however, have generally only been applied to single modality images. In this paper, we propose the dual-modality image segmentation model to segment brain PET-CT images into gray matter, white matter and cerebrospinal fluid. This model converts PET-CT image segmentation into an optimization process controlled simultaneously by PET and CT voxel values and spatial constraints. It is innovative in the creation and application of the modality discriminatory power (MDP) coefficient as a weighting scheme to adaptively combine the functional (PET) and anatomical (CT) information on a voxel-by-voxel basis. Our approach relies upon allowing the modality with higher discriminatory power to play a more important role in the segmentation process. We compared the proposed approach to three other image segmentation strategies, including PET-only based segmentation, combination of the results of independent PET image segmentation and CT image segmentation, and simultaneous segmentation of joint PET and CT images without an adaptive weighting scheme. Our results in 21 clinical studies showed that our approach provides the most accurate and reliable segmentation for brain PET-CT images. PMID:21719257
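
    A minimal sketch of voxel-wise modality weighting in this spirit — the MDP proxy (top-two probability margin) and the geometric combination rule below are my assumptions for illustration, not the paper's exact formulas:

```python
import numpy as np

def discriminatory_power(probs):
    """Proxy for a modality's discriminatory power per voxel: the margin by
    which the most likely class beats the runner-up (axis 0 = classes)."""
    s = np.sort(probs, axis=0)
    return s[-1] - s[-2]

def combined_score(pet_prob, ct_prob, mdp):
    """Per-voxel combination of PET and CT class probabilities; mdp in [0,1]
    shifts weight toward the modality with higher discriminatory power
    (1 -> trust PET fully, 0 -> trust CT fully)."""
    return pet_prob**mdp * ct_prob**(1.0 - mdp)
```
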

  17. Thermodynamic Costs of Information Processing in Sensory Adaptation

    PubMed Central

    Sartori, Pablo; Granger, Léo; Lee, Chiu Fan; Horowitz, Jordan M.

    2014-01-01

    Biological sensory systems react to changes in their surroundings. They are characterized by fast response and slow adaptation to varying environmental cues. Insofar as sensory adaptive systems map environmental changes to changes of their internal degrees of freedom, they can be regarded as computational devices manipulating information. Landauer established that information is ultimately physical, and its manipulation subject to the entropic and energetic bounds of thermodynamics. Thus the fundamental costs of biological sensory adaptation can be elucidated by tracking how the information the system has about its environment is altered. These bounds are particularly relevant for small organisms, which, unlike everyday computers, operate at very low energies. In this paper, we establish a general framework for the thermodynamics of information processing in sensing. With it, we quantify how during sensory adaptation information about the past is erased, while information about the present is gathered. This process produces entropy larger than the amount of old information erased and has an energetic cost bounded by the amount of new information written to memory. We apply these principles to the E. coli chemotaxis pathway during binary ligand concentration changes. In this regime, we quantify the amount of information stored by each methyl group and show that receptors consume energy in the range of the information-theoretic minimum. Our work provides a basis for further inquiries into more complex phenomena, such as gradient sensing and frequency response. PMID:25503948
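
    Schematically, the two bounds described in the abstract can be written as follows (the notation and exact form here paraphrase the abstract and are not the paper's precise statements):

```latex
% Entropy produced during adaptation exceeds the old information erased:
\Delta S_{\mathrm{tot}} \;\ge\; k_B \, I_{\mathrm{erased}}
% The energetic cost is bounded by the new information written to memory:
W \;\gtrsim\; k_B T \, I_{\mathrm{written}}
```
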

  18. Potential of hybrid adaptive filtering in inflammatory lesion detection from capsule endoscopy images

    PubMed Central

    Charisis, Vasileios S; Hadjileontiadis, Leontios J

    2016-01-01

    A new feature extraction technique is presented for the detection of lesions created by mucosal inflammation in Crohn's disease, based on wireless capsule endoscopy (WCE) image processing. More specifically, a novel filtering process, namely Hybrid Adaptive Filtering (HAF), was developed for efficient extraction of lesion-related structural/textural characteristics from WCE images, by applying genetic algorithms to the curvelet-based representation of images. Additionally, Differential Lacunarity (DLac) analysis was applied for feature extraction from the HAF-filtered images. The resulting scheme, namely HAF-DLac, incorporates support vector machines for robust lesion recognition performance. For the training and testing of HAF-DLac, an 800-image database was used, acquired from 13 patients who underwent WCE examinations, where the abnormal cases were grouped into mild and severe, according to the severity of the depicted lesion, for a more extensive evaluation of the performance. Experimental results, along with comparison with other related efforts, show that the HAF-DLac approach clearly outperforms them in the field of WCE image analysis for automated lesion detection, achieving up to 93.8% accuracy, 95.2% sensitivity, 92.4% specificity and 92.6% precision. The promising performance of HAF-DLac paves the way for a complete computer-aided diagnosis system that could support physicians' clinical practice.

  19. Image processing technique based on image understanding architecture

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2000-12-01

    The effectiveness of image applications depends directly on their ability to resolve ambiguity and uncertainty in real images. That requires tight integration of low-level image processing with high-level knowledge-based reasoning, which is the essence of the image understanding problem. This article presents a generic computational framework for the solution of the image understanding problem -- the Spatial Turing Machine. Instead of a tape of symbols, it works with hierarchical networks dually represented as discrete and continuous structures. Dual representation provides a natural transformation of continuous image information into discrete structures, making it available for analysis. Such structures are data and algorithms at the same time and are able to perform the graph and diagrammatic operations that are the basis of intelligence. They can create derivative structures that play the role of context, or 'measurement device,' giving the ability to analyze and run top-down algorithms. Symbols naturally emerge there, and symbolic operations work in combination with new simplified methods of computational intelligence. That makes images and scenes self-describing, and provides flexible ways of resolving uncertainty. Classification of images truly invariant to any transformation could be done via matching their derivative structures. The proposed architecture does not require supercomputers, opening the way to new image technologies.

  20. Coherence gated wavefront sensorless adaptive optics for two photon excited fluorescence retinal imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Jian, Yifan; Cua, Michelle; Bonora, Stefano; Pugh, Edward N.; Zawadzki, Robert J.; Sarunic, Marinko V.

    2016-03-01

    We present a novel system for adaptive optics two-photon imaging. We utilize the bandwidth of the femtosecond excitation beam to perform coherence-gated imaging (OCT) of the sample. The location of the focus is directly observable in the cross-sectional OCT images, and adjusted to the desired depth plane. Next, using real-time volumetric OCT, we perform Wavefront Sensorless Adaptive Optics (WSAO) aberration correction using a multi-element adaptive lens capable of correcting up to 4th-order Zernike polynomials. The aberration correction is performed based on an image quality metric, for example intensity. The optimization time is limited only by the OCT acquisition rate, and takes ~30 s. Following aberration correction, two-photon fluorescence images are acquired and compared to results without adaptive optics correction. This technique is promising for multiphoton imaging in multi-layered, scattering samples such as the eye and brain, in which traditional wavefront sensing and guide-star sensorless adaptive optics approaches may not be suitable.
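
    The sensorless optimization loop described — score an image-quality metric, adjust Zernike coefficients of the adaptive lens, repeat — can be sketched as a per-mode search. The candidate grid, toy metric, and function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def wsao_optimize(metric, n_modes, candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Wavefront-sensorless AO sketch: for each Zernike mode in turn, try a
    few candidate coefficients and keep the one maximizing the image-quality
    metric. `metric(coeffs)` stands in for 'apply coeffs to the adaptive
    lens, acquire an OCT volume, score it (e.g. mean intensity)'."""
    coeffs = np.zeros(n_modes)
    for m in range(n_modes):
        scores = []
        for c in candidates:
            trial = coeffs.copy()
            trial[m] = c
            scores.append(metric(trial))
        coeffs[m] = candidates[int(np.argmax(scores))]
    return coeffs
```

    With an N-mode lens and K candidates this needs N*K metric evaluations, so the total time is set by the volume acquisition rate, as the abstract notes.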

  1. Adaptive Optics for Satellite Imaging and Space Debris Ranging

    NASA Astrophysics Data System (ADS)

    Bennet, F.; D'Orgeville, C.; Price, I.; Rigaut, F.; Ritchie, I.; Smith, C.

    Earth's space environment is becoming crowded and at risk of a Kessler syndrome, and will require careful management in the future. Modern low-noise high-speed detectors allow for wavefront sensing and adaptive optics (AO) in extreme circumstances such as imaging small orbiting bodies in Low Earth Orbit (LEO). The Research School of Astronomy and Astrophysics (RSAA) at the Australian National University has been developing AO systems for telescopes between 1 and 2.5m diameter to image and range orbiting satellites and space debris. Strehl ratios in excess of 30% can be achieved for targets in LEO with an AO loop running at 2kHz, allowing the resolution of small features (<30cm) and the capability to determine object shape and spin characteristics. The AO system developed at RSAA consists of a high-speed EMCCD Shack-Hartmann wavefront sensor, a deformable mirror (DM), a real-time computer (RTC), and an imaging camera. The system works best as a laser guide star system but will also function as a natural guide star AO system, with the target itself being the guide star. In both circumstances tip-tilt is provided by the target on the imaging camera. The fast tip-tilt modes are not corrected optically, and are instead removed by taking images at a moderate speed (>30Hz) and using a shift-and-add algorithm. This algorithm can also incorporate lucky imaging to further improve the final image quality. A similar AO system for space debris ranging is also in development in collaboration with Electro Optic Systems (EOS) and the Space Environment Management Cooperative Research Centre (SERC), at the Mount Stromlo Observatory in Canberra, Australia. The system is designed for an AO-corrected upward-propagated 1064nm pulsed laser beam, from which time-of-flight information is used to precisely range the target. A 1.8m telescope is used for both propagation and collection of laser light. A laser guide star, Shack-Hartmann wavefront sensor, and DM are used for high order
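
    The shift-and-add step — register each fast-exposure frame to a reference by cross-correlation, then average — can be sketched with integer-pixel FFT registration; lucky imaging would simply drop low-quality frames before this step. This is a generic sketch, not RSAA's pipeline:

```python
import numpy as np

def shift_and_add(frames, ref=0):
    """Remove residual tip-tilt by registering each frame to a reference via
    FFT cross-correlation, then averaging (integer-pixel shifts only)."""
    out = np.zeros_like(frames[0], dtype=float)
    F_ref = np.fft.fft2(frames[ref])
    for f in frames:
        # circular cross-correlation of reference against this frame
        xc = np.fft.ifft2(F_ref * np.conj(np.fft.fft2(f))).real
        dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
        out += np.roll(f, (dy, dx), axis=(0, 1))   # undo the measured shift
    return out / len(frames)
```
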

  2. Comparison of adaptive optics scanning light ophthalmoscopic fluorescein angiography and offset pinhole imaging.

    PubMed

    Chui, Toco Y P; Dubow, Michael; Pinhas, Alexander; Shah, Nishit; Gan, Alexander; Weitz, Rishard; Sulai, Yusufu N; Dubra, Alfredo; Rosen, Richard B

    2014-04-01

    Recent advances to the adaptive optics scanning light ophthalmoscope (AOSLO) have enabled finer in vivo assessment of the human retinal microvasculature. AOSLO confocal reflectance imaging has been coupled with oral fluorescein angiography (FA), enabling simultaneous acquisition of structural and perfusion images. AOSLO offset pinhole (OP) imaging combined with motion contrast post-processing techniques is able to create a similar set of structural and perfusion images without the use of an exogenous contrast agent. In this study, we evaluate the similarities and differences of the structural and perfusion images obtained by either method, in healthy control subjects and in patients with retinal vasculopathy including hypertensive retinopathy, diabetic retinopathy, and retinal vein occlusion. Our results show that AOSLO OP motion contrast provides perfusion maps comparable to those obtained with AOSLO FA, while AOSLO OP reflectance images provide additional information, such as vessel wall fine structure, not as readily visible in AOSLO confocal reflectance images. AOSLO OP offers a non-invasive alternative to AOSLO FA without the need for any exogenous contrast agent.
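
    One common recipe for motion-contrast perfusion mapping from a registered frame sequence — temporal variation normalized by mean brightness, so moving blood cells light up while static tissue cancels out — is sketched below; the actual AOSLO OP post-processing is more involved:

```python
import numpy as np

def motion_contrast(stack):
    """Perfusion map from a registered image sequence (frames, H, W):
    temporal standard deviation divided by the temporal mean."""
    stack = np.asarray(stack, dtype=float)
    mean = stack.mean(axis=0)
    return stack.std(axis=0) / (mean + 1e-8)
```
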

  3. Adaptive optics scanning laser ophthalmoscope with integrated wide-field retinal imaging and tracking

    PubMed Central

    Ferguson, R. Daniel; Zhong, Zhangyi; Hammer, Daniel X.; Mujat, Mircea; Patel, Ankit H.; Deng, Cong; Zou, Weiyao; Burns, Stephen A.

    2010-01-01

    We have developed a new, unified implementation of the adaptive optics scanning laser ophthalmoscope (AOSLO) incorporating a wide-field line-scanning ophthalmoscope (LSO) and a closed-loop optical retinal tracker. AOSLO raster scans are deflected by the integrated tracking mirrors so that direct AOSLO stabilization is automatic during tracking. The wide-field imager and large-spherical-mirror optical interface design, as well as a large-stroke deformable mirror (DM), enable the AOSLO image field to be corrected at any retinal coordinates of interest in a field of >25 deg. AO performance was assessed by imaging individuals with a range of refractive errors. In most subjects, image contrast was measurable at spatial frequencies close to the diffraction limit. Closed-loop optical (hardware) tracking performance was assessed by comparing sequential image series with and without stabilization. Though usually better than 10 μm rms, or 0.03 deg, tracking does not yet stabilize to single cone precision but significantly improves average image quality and increases the number of frames that can be successfully aligned by software-based post-processing methods. The new optical interface allows the high-resolution imaging field to be placed anywhere within the wide field without requiring the subject to re-fixate, enabling easier retinal navigation and faster, more efficient AOSLO montage capture and stitching. PMID:21045887

  4. Image Tracking of Multiple C. Elegans Worms Using Adaptive Scanning Optical Microscope (ASOM)

    NASA Astrophysics Data System (ADS)

    Rivera, Linda; Potsaid, Benjamin; Wen, John T.

    2010-03-01

    Long-term imaging of living biological specimens is important to infer behavioral trends and correlate neural structure with behavior. Such studies are plagued by the field-of-view limitation of standard optical microscopes, as the motile specimen frequently moves out of view. A novel microscope, called the adaptive scanning optical microscope (ASOM), has recently been proposed to address this limitation. Through high-speed post-objective scanning with a steering mirror, and compensation for optical aberrations with a MEMS deformable mirror, simultaneous imaging and tracking of multiple Caenorhabditis elegans worms has been demonstrated. This article presents the image processing algorithm for tracking multiple worms. Since the steering mirror has to move based on the predicted worm motion, image processing and stable steering mirror motion need to be executed at higher than the composite mosaic video frame rate (in contrast to existing work on image-based worm tracking, which is predominantly based on post-processing). Particular care is placed on disambiguating the worms when they overlap, collide, or entangle, where a worm tracking algorithm may fail. Results from both real-time and simulated tracking are presented.

  5. Fingerprint image enhancement by differential hysteresis processing.

    PubMed

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

    A new method to enhance defective fingerprint images using digital image processing tools is presented in this work. When the fingerprints have been taken without any care, blurred and in some cases mostly illegible, as in the case presented here, their classification and comparison become nearly impossible. A combination of spatial domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve these kinds of images. This set of filtering methods proved satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from Policia Federal Argentina and the EAAF have validated these results. PMID:15062948

  7. Image-processing with augmented reality (AR)

    NASA Astrophysics Data System (ADS)

    Babaei, Hossein R.; Mohurutshe, Pagiel L.; Habibi Lashkari, Arash

    2013-03-01

    The aim of this project is to discuss and articulate the intent to create an image-based Android application. The basis of this study is real-time image detection and processing, a convenient new measure that allows users to gain information on imagery right on the spot. Past studies have revealed attempts to create image-based applications, but these have only gone as far as creating image finders that work solely with images already stored within some form of database. The Android platform is rapidly spreading around the world and provides by far the most interactive and technical platform for smartphones, which is why it was important to base the study and research on it. Augmented reality allows the user to manipulate the data and add enhanced features (video, GPS tags) to the image taken.

  8. Corn tassel detection based on image processing

    NASA Astrophysics Data System (ADS)

    Tang, Wenbing; Zhang, Yane; Zhang, Dongxing; Yang, Wei; Li, Minzan

    2012-01-01

    Machine vision has been widely applied in facility agriculture and plays an important role in obtaining environment information. In this paper, the application of image processing to recognize and locate corn tassels for a corn detasseling machine is studied. The corn tassel identification and location method was based on image processing, providing automated guidance information for the actual corn emasculation operation. According to the color characteristics of corn tassels, image processing techniques were applied to identify corn tassels in images under the HSI color space, image segmentation was applied to extract the corn tassel regions, and the features of corn tassels were analyzed and extracted. Firstly, a series of preprocessing procedures were done. Then, an image segmentation algorithm based on the HSI color space was developed to extract corn tassels from the background, and a region growing method was proposed to recognize the corn tassel. The results show that this method is effective for extracting corn tassel parts from the collected pictures and can be used for corn tassel location information, providing a theoretical basis for an intelligent corn detasseling machine.
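
    The HSI thresholding step can be sketched as below. The RGB-to-HSI conversion uses the standard formulas; the intensity/saturation thresholds are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def rgb_to_hsi(img):
    """Convert an RGB image (floats in [0,1]) to HSI components."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + 1e-8)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g)**2 + (r - b) * (g - b)) + 1e-8
    theta = np.arccos(np.clip(num / den, -1, 1))
    h = np.where(b <= g, theta, 2 * np.pi - theta)
    return h, s, i

def tassel_mask(img, i_min=0.5, s_max=0.3):
    """Threshold in HSI space: tassels are brighter and less saturated than
    the surrounding green leaves (illustrative thresholds)."""
    h, s, i = rgb_to_hsi(img)
    return (i > i_min) & (s < s_max)
```

    The resulting binary mask would then seed the region-growing step described in the abstract.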

  9. Adoption: biological and social processes linked to adaptation.

    PubMed

    Grotevant, Harold D; McDermott, Jennifer M

    2014-01-01

    Children join adoptive families through domestic adoption from the public child welfare system, infant adoption through private agencies, and international adoption. Each pathway presents distinctive developmental opportunities and challenges. Adopted children are at higher risk than the general population for problems with adaptation, especially externalizing, internalizing, and attention problems. This review moves beyond the field's emphasis on adoptee-nonadoptee differences to highlight biological and social processes that affect adaptation of adoptees across time. The experience of stress, whether prenatal, postnatal/preadoption, or during the adoption transition, can have significant impacts on the developing neuroendocrine system. These effects can contribute to problems with physical growth, brain development, and sleep, activating cascading effects on social, emotional, and cognitive development. Family processes involving contact between adoptive and birth family members, co-parenting in gay and lesbian adoptive families, and racial socialization in transracially adoptive families affect social development of adopted children into adulthood.

  10. Adaptive Sampling for Learning Gaussian Processes Using Mobile Sensor Networks

    PubMed Central

    Xu, Yunfei; Choi, Jongeun

    2011-01-01

    This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. This approach is based on a class of anisotropic covariance functions of Gaussian processes introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori. Hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize the information-theoretic cost function of the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme. PMID:22163785
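
    The covariance-estimation step can be illustrated with an anisotropic squared-exponential kernel whose two length-scales are chosen by maximizing the log posterior over a small candidate grid; the flat default prior and the grid search are simplifying assumptions standing in for the paper's MAP estimator:

```python
import numpy as np

def sq_exp(X, lx, ly, sigma2=1.0):
    """Anisotropic squared-exponential covariance on 2-D inputs X (n, 2)."""
    d = X[:, None, :] - X[None, :, :]
    return sigma2 * np.exp(-d[..., 0]**2 / (2 * lx**2)
                           - d[..., 1]**2 / (2 * ly**2))

def map_lengthscales(X, y, grid, noise=1e-2, log_prior=lambda lx, ly: 0.0):
    """Pick the (lx, ly) pair maximizing log marginal likelihood + log prior."""
    best, best_val = None, -np.inf
    for lx in grid:
        for ly in grid:
            K = sq_exp(X, lx, ly) + noise * np.eye(len(X))
            _, logdet = np.linalg.slogdet(K)
            ll = -0.5 * (y @ np.linalg.solve(K, y) + logdet) + log_prior(lx, ly)
            if ll > best_val:
                best, best_val = (lx, ly), ll
    return best
```

    In the paper the sensing agents move to improve exactly this estimate; here the grid search simply makes the MAP selection explicit.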

  11. Low-latency adaptive optics system processing electronics

    NASA Astrophysics Data System (ADS)

    Duncan, Terry S.; Voas, Joshua K.; Eager, Robert J.; Newey, Scott C.; Wynia, John L.

    2003-02-01

    Extensive system modeling and analysis clearly shows that system latency is a primary performance driver in closed-loop adaptive optical systems. With careful attention to all sensing, processing, and controlling components, system latency can be significantly reduced. Upgrades to the Starfire Optical Range (SOR) 3.5-meter telescope facility adaptive optical system have resulted in a reduction in overall latency from 660 μsec to 297 μsec. Future efforts will reduce the system latency even further, to the 170 μsec range. The changes improve system bandwidth significantly by reducing the "age" of the correction that is applied to the deformable mirror. Latency reductions have been achieved by improving the pixel readout pattern and rate on the wavefront sensor, utilizing a new high-speed field programmable gate array (FPGA) based wavefront processor, doubling the processing rate of the real-time reconstructor, and streamlining the operation of the deformable mirror drivers.

  12. Overview on METEOSAT geometrical image data processing

    NASA Technical Reports Server (NTRS)

    Diekmann, Frank J.

    1994-01-01

    Digital images acquired from the geostationary METEOSAT satellites are processed and disseminated at ESA's European Space Operations Centre (ESOC) in Darmstadt, Germany. Their scientific value is mainly dependent on their radiometric quality and geometric stability. This paper gives an overview of the image processing activities performed at ESOC, concentrating on geometrical restoration and quality evaluation. The performance of the rectification process for the various satellites over the past years is presented, and the impacts of external events, such as the Pinatubo eruption in 1991, are explained. Special developments in both hardware and software, necessary to cope with demanding tasks such as new image resampling or correcting for spacecraft anomalies, are presented as well. The rotating lens of MET-5, causing severe geometrical image distortions, is an example of the latter.

  13. High Resolution Near-Infrared Imaging with Tip-Tilt Adaptive Optics.

    NASA Astrophysics Data System (ADS)

    Close, Laird Miller

    1995-01-01

    The development and design of the first operational tip-tilt Cassegrain secondary mirror are presented. This system, FASTTRAC, samples image motion at up to 50 Hz by tracking either infrared (m_K ≤ 11) or visible (m_R ≤ 16) guide stars up to 30" and 90" away from the science target, respectively. The Steward Observatory 2.3m or 1.5m telescope secondaries act as rapid tip-tilt mirrors to stabilize image motion (≤0.1" rms; ~5 Hz -3 dB frequency) based on the motion of the guide star. FASTTRAC obtains nearly diffraction-limited resolutions in seeing conditions where D/r_0 < 4, in agreement with theoretical expectations. FASTTRAC's unique ability to guide on infrared stars has allowed the first adaptively corrected images of the heavily extincted Galactic Center to be obtained. Over a hundred excellent (0.28" < FWHM < 0.6") images have been obtained of this region. These images do not detect any long term variations in the massive black hole candidate Sgr A*'s luminosity from June 1993 to September 1995. The average infrared magnitudes observed are K = 12.1 +/- 0.3, H = 13.7 +/- 0.3 and J = 16.6 +/- 0.4, integrated over 0.5" at the position of Sgr A*. No significant rapid periodicities were observed from Sgr A* for amplitudes ≥50% of the mean flux in the period range of 3-30 minutes. It is confirmed in the latest 0.28" FWHM image that there is a 0.5" "bar" of emission running East-West at the position of Sgr A*, as was earlier seen by Eckart et al. 1993. The observed fluxes are consistent with an inclined accretion disk around a ~1 x 10^6 M_sun black hole. However, they are also explained by a line of hot luminous (integrated luminosity ~10^{3.5-4.6} L_sun) central cluster stars positionally coincident with Sgr A*, naturally explaining the observed 0.5" "bar". High-resolution images with FASTTRAC guiding on a faint (R = 16) visible guide star, combined with spectra from the MMT, have shown that IRAS FSC 10214 + 4724 (z = 2.28) gains its uniquely large

  14. Self-Adaptive Image Reconstruction Inspired by Insect Compound Eye Mechanism

    PubMed Central

    Zhang, Jiahua; Shi, Aiye; Wang, Xin; Bian, Linjie; Huang, Fengchen; Xu, Lizhong

    2012-01-01

    Inspired by the mechanism of imaging and adaptation to luminosity in insect compound eyes (ICE), we propose an ICE-based adaptive reconstruction method (ARM-ICE), which can adjust the sampling field of view according to the environment light intensity. The target scene can be compressively sampled independently over multiple channels through ARM-ICE. Meanwhile, ARM-ICE can regulate the field of view of sampling to control imaging according to the environment light intensity. Based on the compressed sensing joint sparse model (JSM-1), we establish an information processing system of ARM-ICE. The simulation of a four-channel ARM-ICE system shows that the new method improves the peak signal-to-noise ratio (PSNR) and resolution of the reconstructed target scene under two different cases of light intensity. Furthermore, there is no distinct blocking artifact in the result, and the edge of the reconstructed image is smoother than that obtained by the other two reconstruction methods in this work. PMID:23365615

  15. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services both in the field and in hospital environments, and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSPs) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of 512x256x128 voxels sampled at 10 MVoxels/sec with a 3D ultrasound probe. Relative performance and power are compared between a reference PC (quad-core CPU) and a TMS320C6678 DSP from Texas Instruments.

  16. Edge Detection on Images of Pseudoimpedance Section Supported by Context and Adaptive Transformation Model Images

    NASA Astrophysics Data System (ADS)

    Kawalec-Latała, Ewa

    2014-03-01

    Most underground hydrocarbon storage sites are located in depleted natural gas reservoirs. Seismic survey is the most economical source of detailed subsurface information, and inversion of a seismic section to obtain a pseudoacoustic impedance section makes it possible to extract that information. The seismic wavelet parameters and noise chiefly influence the resolution: low signal parameters, especially long signal duration, and the presence of noise decrease the pseudoimpedance resolution. Approximating the distribution of acoustic pseudoimpedance from measured or modelled seismic data yields visualisations and images useful for identifying stratum homogeneity. In this paper, the resolution of geologic section images is improved by applying the minimum entropy deconvolution method before inversion. The author proposes context and adaptive transformation of images and edge detection methods as a way to increase the effectiveness of correct interpretation of simulated images. Edge detection algorithms using the Sobel, Prewitt, Roberts, and Canny operators, as well as the Laplacian of Gaussian method, are emphasised. Wiener filtering of the transformed images improves interpretation of the rock section structure by mapping the pseudoimpedance matrix onto the acoustic pseudoimpedance values corresponding to the selected geologic stratum. The goal of the study is to develop applications of image transformation tools to inhomogeneity detection in salt deposits.
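
    Among the operators compared, the Sobel gradient-magnitude edge detector is the simplest to sketch; Prewitt and Roberts differ only in the kernel weights, and the threshold value here is illustrative:

```python
import numpy as np

def sobel_edges(img, thresh=0.5):
    """Binary edge map from the normalized Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
    gx = (win * kx).sum(axis=(-2, -1))   # horizontal gradient
    gy = (win * ky).sum(axis=(-2, -1))   # vertical gradient
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-12) > thresh
```
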

  17. Construction and solution of an adaptive image-restoration model for removing blur and mixed noise

    NASA Astrophysics Data System (ADS)

    Wang, Youquan; Cui, Lihong; Cen, Yigang; Sun, Jianjun

    2016-03-01

    We establish a practical regularized least-squares model with adaptive regularization for dealing with blur and mixed noise in images. This model has some advantages, such as good adaptability for edge restoration and noise suppression due to the application of a priori spatial information obtained from a polluted image. We further focus on finding an important feature of image restoration using an adaptive restoration model with different regularization parameters in polluted images. A more important observation is that the gradient of an image varies regularly from one regularization parameter to another under certain conditions. Then, a modified graduated nonconvexity approach combined with a median filter version of a spatial information indicator is proposed to seek the solution of our adaptive image-restoration model by applying variable splitting and weighted penalty techniques. Numerical experiments show that the method is robust and effective for dealing with various blur and mixed noise levels in images.
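
    The paper's adaptive model and its graduated-nonconvexity solver are not reproduced in the abstract; as a much simpler relative with the same least-squares-plus-regularization structure, a closed-form Tikhonov-regularized restoration with a single global parameter `lam` (an illustrative stand-in, not the adaptive model) can be written via the FFT, assuming circular convolution:

```python
import numpy as np

def tikhonov_deblur(blurred, psf, lam):
    """Minimize ||h * x - y||^2 + lam * ||x||^2 per Fourier coefficient;
    the closed-form solution is X = conj(H) * Y / (|H|^2 + lam)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```

    Varying `lam` trades noise suppression against edge sharpness, which is the trade-off the adaptive model resolves spatially instead of globally.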

  18. Tone reproduction for high-dynamic range imaging based on adaptive filtering

    NASA Astrophysics Data System (ADS)

    Ha, Changwoo; Lee, Joohyun; Jeong, Jechang

    2014-03-01

    A tone reproduction algorithm with enhanced contrast for displaying high-dynamic-range images on conventional low-dynamic-range display devices is presented. The proposed algorithm consists mainly of block-based parameter estimation, a characteristic-based luminance adjustment, and an adaptive Gaussian filter using minimum description length. Instead of relying only on reduction of the dynamic range, the characteristic-based luminance adjustment process modifies the luminance values. The Gaussian-filtered luminance value is obtained using an appropriate variance, and the contrast is then enhanced through a relation between the adjusted and Gaussian-filtered luminance values. In the final tone-reproduction process, the proposed algorithm combines the color and luminance components in order to preserve color consistency. The experimental results demonstrate that the proposed algorithm achieves good subjective quality while enhancing the contrast of image details.
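
    The block-based MDL algorithm itself is not given in the abstract; a much simpler global Reinhard-style operator (an illustrative assumption) shows the same overall structure of adjusting luminance and then recombining color so that hue is preserved:

```python
import numpy as np

def tone_map(hdr, a=0.18):
    """Global tone mapping sketch: normalize luminance by the scene's
    log-average "key", compress with L/(1+L), then rescale all three RGB
    channels by the same factor so hue is preserved."""
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    eps = 1e-6
    key = np.exp(np.mean(np.log(lum + eps)))    # log-average luminance
    L = a * lum / key                           # scaled luminance
    Ld = L / (1.0 + L)                          # display luminance in [0, 1)
    return hdr * (Ld / (lum + eps))[..., None]
```

    Replacing the global `L/(1+L)` curve with a locally adapted (e.g. Gaussian-filtered) denominator is the step where adaptive-filtering methods like the one above diverge from this sketch.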

  19. Real-time optical image processing techniques

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1988-01-01

    Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse-frequency-modulated halftone screens and the modification of micro-channel spatial light modulators (MSLMs). Micro-channel spatial light modulators are modified via the Fabry-Perot method to achieve the high gamma operation required for nonlinear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness of the thresholding and also the need for a higher space-bandwidth product (SBP) for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low-frequency, low-bias mode. Cascading two LCLVs should also provide enough gamma for nonlinear processing. In this case, the SBP of the LCLV is sufficient, but the uniformity of the LCLV needs improvement. Other applications pursued include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for real-time dynamic range compression of an input image using GaAs photorefractive crystals is reported. Finally, a new architecture for nonlinear multiple-sensory neural processing has been suggested.

  20. Bistatic SAR: Signal Processing and Image Formation.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.

    2014-10-01

    This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture RADAR (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the process steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and, finally, image formation. Various plots and values will be shown at most steps to illustrate the processing for a bistatic COSMO-SkyMed collection gathered on June 10, 2013, at Kirtland Air Force Base, New Mexico.

  1. Palm print image processing with PCNN

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Zhao, Xianhong

    2010-08-01

    Pulse-coupled neural networks (PCNN) are based on Eckhorn's model of the cat visual cortex and imitate mammalian visual processing, while the palm print has long been used as a personal biometric feature. This inspired us to combine the two: a novel method for palm print processing is proposed, which includes pre-processing and feature extraction of the palm print image using PCNN; the extracted features are then used for identification. Our experiments show that a verification rate of 87.5% can be achieved under ideal conditions. We also find that the verification rate decreases due to rotation or shift of the palm.
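
    The Eckhorn-style PCNN equations are standard; a minimal sketch (parameter values here are illustrative, not those of the paper) shows the feeding/linking/threshold dynamics that make brighter regions pulse earlier:

```python
import numpy as np

def pcnn(S, steps=4, beta=0.2, vF=0.5, vL=0.5, vT=20.0,
         aF=0.1, aL=0.3, aT=0.2):
    """Minimal pulse-coupled neural network: one neuron per pixel, with
    feeding (F), linking (L), internal activity (U), dynamic threshold (T)
    and binary pulses (Y) coupled through a 3x3 neighbourhood."""
    F = np.zeros_like(S, float); L = np.zeros_like(S, float)
    Y = np.zeros_like(S, float); T = np.ones_like(S, float)
    k = np.ones((3, 3)); k[1, 1] = 0            # 8-neighbour linking kernel
    h, w = S.shape
    pulses = []
    for _ in range(steps):
        p = np.pad(Y, 1)                        # previous step's pulses
        link = sum(k[i, j] * p[i:i + h, j:j + w]
                   for i in range(3) for j in range(3))
        F = np.exp(-aF) * F + vF * link + S     # feeding input
        L = np.exp(-aL) * L + vL * link         # linking input
        U = F * (1.0 + beta * L)                # internal activity
        Y = (U > T).astype(float)               # pulse where activity wins
        T = np.exp(-aT) * T + vT * Y            # raise threshold after pulse
        pulses.append(Y.copy())
    return pulses
```

    The per-step firing maps form a texture- and intensity-dependent signature of the input, which is the kind of output a PCNN front-end feeds into feature extraction.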

  2. Transaction recording in medical image processing

    NASA Astrophysics Data System (ADS)

    Riedel, Christian H.; Ploeger, Andreas; Onnasch, Dietrich G. W.; Mehdorn, Hubertus M.

    1999-07-01

    In medical image processing, original image data on archive servers must never be modified directly. On the other hand, images from read-only devices like CD-ROM cannot be changed and saved on the same storage medium. In both cases the modified data have to be stored as a second version, and large amounts of storage volume are needed. We avoid these problems by using a program which records only the transactions applied to the images. Each transaction is stored and used for further utilization and for renewed submission of the modified data. Conventionally, every time an image is viewed or printed, the modified version has to be saved in addition to the recorded data, either automatically or by the user. Compared to these approaches, which not only squander storage space but are also time consuming, our program has the following advantages: First, the original image data which may not be modified are protected against manipulation. Second, only small amounts of storage volume and network bandwidth are needed. Third, approved image operations can be automated by macros derived from transaction recordings. Finally, operations on the original data can always be controlled and traced back. As the handling of images gets easier with this concept, security for original image data is guaranteed.
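
    The program itself is proprietary, but the underlying idea is essentially the command pattern; a toy sketch (hypothetical class and method names) shows how a transaction log substitutes for saving modified copies:

```python
import numpy as np

class ImageSession:
    """Record operations applied to a read-only image instead of saving
    modified copies; the current view is re-derived by replaying the log."""
    def __init__(self, original):
        self.original = original
        self.log = []                  # (name, callable) transaction pairs
    def apply(self, name, op):
        self.log.append((name, op))    # store the transaction only
    def render(self):
        img = self.original.copy()     # the original is never modified
        for _, op in self.log:
            img = op(img)
        return img

# Usage: two recorded transactions, original stays pristine.
session = ImageSession(np.zeros((2, 2)))
session.apply("brighten", lambda im: im + 10)
session.apply("invert", lambda im: 255 - im)
```

    Saving or replaying `session.log` is what enables the macros and audit trail mentioned above, at a tiny fraction of the storage cost of a second image copy.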

  3. Adaptive optics images. III. 87 Kepler objects of interest

    SciTech Connect

    Dressing, Courtney D.; Dupree, Andrea K.; Adams, Elisabeth R.; Kulesa, Craig; McCarthy, Don

    2014-11-01

    The Kepler mission has revolutionized our understanding of exoplanets, but some of the planet candidates identified by Kepler may actually be astrophysical false positives or planets whose transit depths are diluted by the presence of another star. Adaptive optics images made with ARIES at the MMT of 87 Kepler Objects of Interest place limits on the presence of fainter stars in or near the Kepler aperture. We detected visual companions within 1'' for 5 stars, between 1'' and 2'' for 7 stars, and between 2'' and 4'' for 15 stars. For those systems, we estimate the brightness of companion stars in the Kepler bandpass and provide approximate corrections to the radii of associated planet candidates due to the extra light in the aperture. For all stars observed, we report detection limits on the presence of nearby stars. ARIES is typically sensitive to stars approximately 5.3 Ks magnitudes fainter than the target star within 1'' and approximately 5.7 Ks magnitudes fainter within 2'', but can detect stars as faint as ΔKs = 7.5 under ideal conditions.

  4. Patient-adaptive lesion metabolism analysis by dynamic PET images.

    PubMed

    Gao, Fei; Liu, Huafeng; Shi, Pengcheng

    2012-01-01

    Dynamic PET imaging provides important spatial-temporal information for metabolism analysis of organs and tissues, and generates a great reference for clinical diagnosis and pharmacokinetic analysis. Due to poor statistical properties of the measurement data in low-count dynamic PET acquisition and disturbances from surrounding tissues, identifying small lesions inside the human body is still a challenging issue. The uncertainties in estimating the arterial input function will also limit the accuracy and reliability of the metabolism analysis of lesions. Furthermore, the size of the patient and motion during PET acquisition will yield mismatches against a general-purpose reconstruction system matrix, which will also affect the quantitative accuracy of lesion metabolism analyses. In this paper, we present a dynamic PET metabolism analysis framework that defines a patient-adaptive system matrix to improve lesion metabolism analysis. Both patient size information and potential small lesions are incorporated by simulations of phantoms of different sizes and individual point source responses. The new framework improves the quantitative accuracy of lesion metabolism analysis and makes lesion identification more precise. The requirement of accurate input functions is also reduced. Experiments are conducted on Monte Carlo simulated data sets for quantitative analysis and validation, and on real patient scans for assessment of clinical potential. PMID:23286175

  5. Image Processing Application for Cognition (IPAC) - Traditional and Emerging Topics in Image Processing in Astronomy (Invited)

    NASA Astrophysics Data System (ADS)

    Pesenson, M.; Roby, W.; Helou, G.; McCollum, B.; Ly, L.; Wu, X.; Laine, S.; Hartley, B.

    2008-08-01

    A new application framework for advanced image processing for astronomy is presented. It implements standard two-dimensional operators, and recent developments in the field of non-astronomical image processing (IP), as well as original algorithms based on nonlinear partial differential equations (PDE). These algorithms are especially well suited for multi-scale astronomical images since they increase signal to noise ratio without smearing localized and diffuse objects. The visualization component is based on the extensive tools that we developed for Spitzer Space Telescope's observation planning tool Spot and archive retrieval tool Leopard. It contains many common features, combines images in new and unique ways and interfaces with many astronomy data archives. Both interactive and batch mode processing are incorporated. In the interactive mode, the user can set up simple processing pipelines, and monitor and visualize the resulting images from each step of the processing stream. The system is platform-independent and has an open architecture that allows extensibility by addition of plug-ins. This presentation addresses astronomical applications of traditional topics of IP (image enhancement, image segmentation) as well as emerging new topics like automated image quality assessment (QA) and feature extraction, which have potential for shaping future developments in the field. Our application framework embodies a novel synergistic approach based on integration of image processing, image visualization and image QA (iQA).

  6. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document constitute a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset for Adobe Photoshop CS5 Extended, but the analysis can be carried out equally well, though somewhat less conveniently, with software such as the Image Processing Toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  7. Adapting high-resolution speckle imaging to moving targets and platforms

    SciTech Connect

    Carrano, C J; Brase, J M

    2004-02-05

    High-resolution surveillance imaging with apertures greater than a few inches over horizontal or slant paths at optical or infrared wavelengths will typically be limited by atmospheric aberrations. With static targets and static platforms, we have previously demonstrated near-diffraction limited imaging of various targets including personnel and vehicles over horizontal and slant paths ranging from less than a kilometer to many tens of kilometers using adaptations to bispectral speckle imaging techniques. Nominally, these image processing methods require the target to be static with respect to its background during the data acquisition since multiple frames are required. To obtain a sufficient number of frames and also to allow the atmosphere to decorrelate between frames, data acquisition times on the order of one second are needed. Modifications to the original imaging algorithm will be needed to deal with situations where there is relative target to background motion. In this paper, we present an extension of these imaging techniques to accommodate mobile platforms and moving targets.

  8. An efficient and self-adapted approach to the sharpening of color images.

    PubMed

    Kau, Lih-Jen; Lee, Tien-Lin

    2013-01-01

    An efficient approach to the sharpening of color images is proposed in this paper. For this, the image to be sharpened is first transformed to the HSV color model, and then only the channel of Value will be used for the process of sharpening while the other channels are left unchanged. We then apply a proposed edge detector and low-pass filter to the channel of Value to pick out pixels around boundaries. After that, those pixels detected as around edges or boundaries are adjusted so that the boundary can be sharpened, and those nonedge pixels are kept unaltered. The increment or decrement magnitude that is to be added to those edge pixels is determined in an adaptive manner based on global statistics of the image and local statistics of the pixel to be sharpened. With the proposed approach, the discontinuities can be highlighted while most of the original information contained in the image can be retained. Finally, the adjusted channel of Value and that of Hue and Saturation will be integrated to get the sharpened color image. Extensive experiments on natural images will be given in this paper to highlight the effectiveness and efficiency of the proposed approach. PMID:24348136
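
    The paper's exact edge detector and adjustment rule are not given in the abstract; the following simplified sketch (using V = max(R,G,B) and a 3x3 box-blur unsharp mask, both assumptions) illustrates the sharpen-only-the-Value-channel idea. Rescaling all three channels by V'/V leaves hue and saturation untouched, so only brightness discontinuities are boosted:

```python
import numpy as np

def sharpen_value_channel(rgb, amount=0.5, thresh=0.05):
    """Sharpen only the HSV Value channel (V = max of R, G, B); scaling
    the RGB triplet by V'/V keeps hue and saturation unchanged."""
    v = rgb.max(axis=-1)
    # 3x3 box blur of V (edge-replicated borders) as the low-pass reference
    p = np.pad(v, 1, mode="edge")
    h, w = v.shape
    blur = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    detail = v - blur
    edges = np.abs(detail) > thresh            # crude edge/boundary mask
    v2 = np.clip(v + amount * detail * edges, 0.0, 1.0)
    scale = np.where(v > 0, v2 / np.maximum(v, 1e-6), 0.0)
    return rgb * scale[..., None]
```

    In the paper, `amount` is chosen adaptively per pixel from global and local statistics rather than fixed as it is here.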

  9. Adaptive processes drive ecomorphological convergent evolution in antwrens (Thamnophilidae).

    PubMed

    Bravo, Gustavo A; Remsen, J V; Brumfield, Robb T

    2014-10-01

    Phylogenetic niche conservatism (PNC) and convergence are contrasting evolutionary patterns that describe phenotypic similarity across independent lineages. Assessing whether and how adaptive processes give rise to these patterns represents a fundamental step toward understanding phenotypic evolution. Phylogenetic model-based approaches offer the opportunity not only to distinguish between PNC and convergence, but also to determine the extent to which adaptive processes explain phenotypic similarity. The Myrmotherula complex in the Neotropical family Thamnophilidae is a polyphyletic group of sexually dimorphic small insectivorous forest birds that are relatively homogeneous in size and shape. Here, we integrate a comprehensive species-level molecular phylogeny of the Myrmotherula complex with morphometric and ecological data within a comparative framework to test whether phenotypic similarity is described by a pattern of PNC or convergence, and to identify evolutionary mechanisms underlying body size and shape evolution. We show that antwrens in the Myrmotherula complex represent distantly related clades that exhibit adaptive convergent evolution in body size and divergent evolution in body shape. Phenotypic similarity in the group is primarily driven by their tendency to converge toward smaller body sizes. Differences in body size and shape across lineages are associated with ecological and behavioral factors.

  10. Image subband coding using context-based classification and adaptive quantization.

    PubMed

    Yoo, Y; Ortega, A; Yu, B

    1999-01-01

    Adaptive compression methods have been a key component of many proposed subband (or wavelet) image coding techniques. This paper deals with a particular type of adaptive subband image coding where we focus on the image coder's ability to adjust itself "on the fly" to the spatially varying statistical nature of image contents. This backward adaptation is distinguished from more frequently used forward adaptation in that forward adaptation selects the best operating parameters from a predesigned set and thus uses a considerable amount of side information in order for the encoder and the decoder to operate with the same parameters. Specifically, we present backward adaptive quantization using a new context-based classification technique which classifies each subband coefficient based on the surrounding quantized coefficients. We couple this classification with online parametric adaptation of the quantizer applied to each class. A simple uniform threshold quantizer is employed as the baseline quantizer for which adaptation is achieved. Our subband image coder based on the proposed adaptive classification quantization idea exhibits excellent rate-distortion performance, in particular at very low rates. For popular test images, it is comparable or superior to most of the state-of-the-art coders in the literature.
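
    A toy version of the backward-adaptive idea (two hypothetical context classes, a deadzone uniform threshold quantizer, and a causal left/above context; the paper's actual classifier and step adaptation are richer) shows why no side information is needed: the decoder sees exactly the same already-quantized neighbours and derives the same class.

```python
import numpy as np

def utq(x, step):
    """Deadzone uniform threshold quantizer: index 0 whenever |x| < step."""
    return np.sign(x) * np.floor(abs(x) / step)

def dequant(q, step):
    """Reconstruct at the midpoint of the selected bin (index 0 stays 0)."""
    return np.sign(q) * (abs(q) + 0.5) * step * (q != 0)

def backward_adaptive_encode(coeffs, steps=(1.0, 4.0)):
    """Classify each coefficient from already-quantized causal neighbours
    (left and above): calm contexts get the fine step, busy contexts the
    coarse one.  No class labels need to be transmitted."""
    h, w = coeffs.shape
    q = np.zeros((h, w)); cls = np.zeros((h, w), int); rec = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            act = (abs(rec[i - 1, j]) if i > 0 else 0.0) + \
                  (abs(rec[i, j - 1]) if j > 0 else 0.0)
            c = int(act > 2.0)            # 2 illustrative context classes
            cls[i, j] = c
            q[i, j] = utq(coeffs[i, j], steps[c])
            rec[i, j] = dequant(q[i, j], steps[c])
    return q, cls
```

    Note that the classifier reads `rec`, the reconstruction, never the original coefficients: that is the property that keeps encoder and decoder in lockstep.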

  11. Fundamental concepts of digital image processing

    SciTech Connect

    Twogood, R.E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has its own unique requirements, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  12. Fundamental Concepts of Digital Image Processing

    DOE R&D Accomplishments Database

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has its own unique requirements, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  13. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  14. A Pipeline Tool for CCD Image Processing

    NASA Astrophysics Data System (ADS)

    Bell, Jon F.; Young, Peter J.; Roberts, William H.; Sebo, Kim M.

    MSSSO is part of a collaboration developing a wide field imaging CCD mosaic (WFI). As part of this project, we have developed a GUI based pipeline tool that is an integrated part of MSSSO's CICADA data acquisition environment and processes CCD FITS images as they are acquired. The tool is also designed to run as a stand alone program to process previously acquired data. IRAF tasks are used as the central engine, including the new NOAO mscred package for processing multi-extension FITS files. The STScI OPUS pipeline environment may be used to manage data and process scheduling. The Motif GUI was developed using SUN Visual Workshop. C++ classes were written to facilitate launching of IRAF and OPUS tasks. While this first version implements calibration processing up to and including flat field corrections, there is scope to extend it to other processing.

  15. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector is analyzed, which is based on classical or fast projection algorithms, which estimates the background proceeding from median filtering or the method of bilateral spatial contrast.

  16. Parallel Processing of Adaptive Meshes with Load Balancing

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than that under PLUM by overlapping processing and data migration.

  17. Image processing of angiograms: A pilot study

    NASA Technical Reports Server (NTRS)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  18. MEMS Deformable Mirrors for Adaptive Optics in Astronomical Imaging

    NASA Astrophysics Data System (ADS)

    Cornelissen, S.; Bierden, P. A.; Bifano, T.

    We report on the development of micro-electromechanical (MEMS) deformable mirrors designed for ground- and space-based astronomical instruments intended for imaging extra-solar planets. Three deformable mirror designs, a 1024-element continuous membrane (32x32), a 4096-element continuous membrane (64x64), and a 331-element hexagonal segmented tip-tilt-piston device, are being produced for the Planet Imaging Concept Testbed Using a Rocket Experiment (PICTURE) program, the Gemini Planet Imager instrument, and the visible nulling coronagraph developed at JPL for NASA's TPF mission, respectively. The design of these polysilicon, surface-micromachined MEMS deformable mirrors builds on technology that was pioneered at Boston University and has been used extensively to correct for ocular aberrations in retinal imaging systems and to compensate for atmospheric turbulence in free-space laser communication. These light-weight, low-power deformable mirrors will have an active aperture of up to 25.2 mm, consisting of a thin silicon membrane mirror supported by an array of 1024 to 4096 electrostatic actuators that exhibit no hysteresis and sub-nanometer repeatability. The continuous membrane deformable mirrors, coated with a highly reflective metal film, will be capable of up to 4 μm of stroke and have a surface finish of <10 nm RMS with a fill factor of 99.8%. The segmented device will have a range of motion of 1 μm of piston and 600 arc-seconds of tip/tilt simultaneously, with a surface finish of 1 nm RMS. The individual mirror elements in this unique device are designed such that they maintain their flatness throughout the range of travel. New design features and fabrication processes are combined with a proven device architecture to achieve the desired performance and high reliability. Presented in this paper are device characteristics and performance results for these devices.

  19. Image gathering and processing - Information and fidelity

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Halyo, N.; Samms, R. W.; Stacy, K.

    1985-01-01

    In this paper we formulate and use information and fidelity criteria to assess image gathering and processing, combining optical design with image-forming and edge-detection algorithms. The optical design of the image-gathering system revolves around the relationship among sampling passband, spatial response, and signal-to-noise ratio (SNR). Our formulations of information, fidelity, and optimal (Wiener) restoration account for the insufficient sampling (i.e., aliasing) common in image gathering as well as for the blurring and noise that conventional formulations account for. Performance analyses and simulations for ordinary optical-design constraints and random scenes indicate that (1) different image-forming algorithms prefer different optical designs; (2) informationally optimized designs maximize the robustness of optimal image restorations and lead to the highest-spatial-frequency channel (relative to the sampling passband) for which edge detection is reliable (if the SNR is sufficiently high); and (3) combining the informationally optimized design with a 3 by 3 lateral-inhibitory image-plane-processing algorithm leads to a spatial-response shape that approximates the optimal edge-detection response of (Marr's model of) human vision and thus reduces the data preprocessing and transmission required for machine vision.

  20. Image processing for the Arcetri Solar Archive

    NASA Astrophysics Data System (ADS)

    Centrone, M.; Ermolli, I.; Giorgi, F.

    The modelling recently developed to "reconstruct" with high accuracy the measured Total Solar Irradiance (TSI) variations, based on semi-empirical atmosphere models and the observed distribution of solar magnetic regions, can also be applied to "construct" TSI variations back in time, making use of observations stored in several historic photographic archives. However, analysing images from these archives is not a straightforward task, because the images suffer from several defects originating from the acquisition techniques and the data storage. In this paper we summarize the processing applied to identify solar features in the images obtained by digitizing the Arcetri solar archive.

  1. CCD architecture for spacecraft SAR image processing

    NASA Technical Reports Server (NTRS)

    Arens, W. E.

    1977-01-01

    A real-time synthetic aperture radar (SAR) image processing architecture amenable to future on-board spacecraft applications is currently under development. Using state-of-the-art charge-coupled device (CCD) technology, low cost and power are inherent features. Other characteristics include the ability to reprogram correlation reference functions, correct for range migration, and compensate for antenna beam pointing errors on the spacecraft in real time. The first spaceborne demonstration is scheduled to be flown as an experiment on a 1982 Shuttle imaging radar mission (SIR-B). This paper describes the architecture and implementation characteristics of this initial spaceborne CCD SAR image processor.

  2. Closed-loop adaptive optics using a CMOS image quality metric sensor

    NASA Astrophysics Data System (ADS)

    Ting, Chueh; Rayankula, Aditya; Giles, Michael K.; Furth, Paul M.

    2006-08-01

    When compared to a Shack-Hartmann sensor, a CMOS image sharpness sensor has the advantage of reduced complexity in a closed-loop adaptive optics system. It also has the potential to be implemented as a smart sensor using VLSI technology. In this paper, we present a novel adaptive optics testbed that uses a CMOS sharpness imager built in the New Mexico State University (NMSU) Electro-Optics Research Laboratory (EORL). The adaptive optics testbed, which includes a CMOS image quality metric sensor and a 37-channel deformable mirror, has the capability to rapidly compensate higher-order phase aberrations. An experimental performance comparison of the pinhole image sharpness feedback method and the CMOS imager is presented. The experimental data shows that the CMOS sharpness imager works well in a closed-loop adaptive optics system. Its overall performance is better than that of the pinhole method, and it has a fast response time.
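    A sharpness metric of the kind such a sensor computes can be sketched as the sum of squared gradient magnitudes; in a closed AO loop, deformable-mirror actuator commands would be adjusted (e.g., by hill climbing) to maximize this value. The metric below is a generic choice, not necessarily the one implemented in the NMSU imager:

```python
import numpy as np

def sharpness(image):
    """Image-quality metric: sum of squared gradient magnitudes.
    Sharper (better-corrected) images yield larger values."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.sum(gx ** 2 + gy ** 2))
```

    A maximum-seeking loop over this scalar needs no wavefront reconstruction, which is the complexity advantage over a Shack-Hartmann sensor noted above.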

  3. Alternative method for Hamilton-Jacobi PDEs in image processing

    NASA Astrophysics Data System (ADS)

    Lagoutte, A.; Salat, H.; Vachier, C.

    2011-03-01

    Multiscale signal analysis has been used since the early 1990s as a powerful tool for image processing, notably in the linear case. However, nonlinear PDEs and associated nonlinear operators have advantages over linear operators, notably preserving important features such as edges in images. In this paper, we focus on nonlinear Hamilton-Jacobi PDEs defined with adaptive speeds or, alternatively, on adaptive morphological filters, also called semi-flat morphological operators. Semi-flat morphology was introduced by H. Heijmans and studied only in the case where the speed (or equivalently the filtering parameter) is a decreasing function of the luminance. We propose to extend the definition suggested by H. Heijmans to the case of nondecreasing speeds. We also prove that a central property for defining morphological filters, the adjunction property, is preserved under our extended definitions. Finally, experimental applications are presented on actual images, including connection of thin lines by semi-flat dilations and image filtering by semi-flat openings.

  4. Mathematical Morphology Techniques For Image Processing Applications In Biomedical Imaging

    NASA Astrophysics Data System (ADS)

    Bartoo, Grace T.; Kim, Yongmin; Haralick, Robert M.; Nochlin, David; Sumi, Shuzo M.

    1988-06-01

    Mathematical morphology operations allow object identification based on shape and are useful for grouping a cluster of small objects into one object. Because of these capabilities, we have implemented and evaluated this technique for our study of Alzheimer's disease. The microscopic hallmark of Alzheimer's disease is the presence of brain lesions known as neurofibrillary tangles and senile plaques. These lesions have distinct shapes compared to normal brain tissue. Neurofibrillary tangles appear as flame-shaped structures, whereas senile plaques appear as circular clusters of small objects. In order to quantitatively analyze the distribution of these lesions, we have developed and applied the tools of mathematical morphology on the Pixar Image Computer. As a preliminary test of the accuracy of the automatic detection algorithm, a study comparing computer and human detection of senile plaques was performed by evaluating 50 images from 5 different patients. The results of this comparison demonstrate that the computer counts correlate very well with the human counts (correlation coefficient = .81). Now that the basic algorithm has been shown to work, optimization of the software will be performed to improve its speed. Also, future improvements such as local adaptive thresholding will be made to the image analysis routine to further improve the system's accuracy.
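    The "grouping a cluster of small objects into one object" operation is typically a morphological closing (dilation followed by erosion). A self-contained sketch with a 3x3 structuring element is below; it uses np.roll, which wraps at the borders, so objects are assumed to lie away from the image edge. This is generic morphology, not the Pixar Image Computer implementation:

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a 3x3 structuring element."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def erode(mask):
    """Binary erosion with a 3x3 structuring element."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def closing(mask, iterations=1):
    """Closing = dilation then erosion; merges nearby small blobs
    (e.g., the dot-like components of a senile plaque) into one object."""
    for _ in range(iterations):
        mask = dilate(mask)
    for _ in range(iterations):
        mask = erode(mask)
    return mask
```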

  5. Industrial Holography Combined With Image Processing

    NASA Astrophysics Data System (ADS)

    Schorner, J.; Rottenkolber, H.; Roid, W.; Hinsch, K.

    1988-01-01

    Holographic test methods have become a valuable tool for the engineer in research and development. In the field of non-destructive quality control, holographic test equipment is now also accepted for tests within the production line. Producers of aircraft tyres, for example, use holographic tests to back the guarantee of their tyres. Together with image processing, the whole test cycle is automated: defects within the tyre are found automatically and listed in a printout. The power engine industry uses holographic vibration tests to optimize its designs. In the plastics industry, tanks, wheels, seats and fans are tested holographically to find the optimum shape. The automotive industry makes holography a tool for noise reduction. Instant holography and image processing techniques for quantitative analysis have led to an economic application of holographic test methods. New developments of holographic units in combination with image processing are presented.

  6. Support Routines for In Situ Image Processing

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect of these programs is that they are integrated into the large, in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: Generates a linearized, epipolar-aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlations. (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images like long-baseline or those from an articulating arm camera. (4) marscoordtrans: Translates mosaic coordinates from one form into another. (5) marsdispcompare: Checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other. (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image. For example, a right-eye image could be transformed to look like it was taken from the left eye via this program. (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the

  7. Adaptive box filters for removal of random noise from digital images

    USGS Publications Warehouse

    Eliason, E.M.; McEwen, A.S.

    1990-01-01

    We have developed adaptive box-filtering algorithms to (1) remove random bit errors (pixel values with no relation to the image scene) and (2) smooth noisy data (pixels related to the image scene but with an additive or multiplicative component of noise). For both procedures, we use the standard deviation (σ) of those pixels within a local box surrounding each pixel, hence they are adaptive filters. This technique effectively reduces speckle in radar images without eliminating fine details. -from Authors
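    The bit-error-removal idea can be sketched as follows: a pixel is replaced by its local-box mean only when it deviates from that mean by more than k local standard deviations. The box size and the k threshold below are illustrative choices, not the authors' exact parameters:

```python
import numpy as np

def adaptive_box_filter(img, box=3, k=2.0):
    """Despike an image: replace a pixel by the local-box mean only
    if it lies more than k local standard deviations from that mean.
    Pixels consistent with their neighborhood are left untouched."""
    img = img.astype(float)
    pad = box // 2
    padded = np.pad(img, pad, mode='edge')
    out = img.copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + box, j:j + box]   # local box around (i, j)
            m, s = win.mean(), win.std()
            if abs(img[i, j] - m) > k * s:
                out[i, j] = m
    return out
```

    Because the threshold scales with the local σ, textured regions (large σ) are left alone while isolated outliers in smooth regions are removed, which is why fine detail survives.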

  8. On Cognition, Structured Sequence Processing, and Adaptive Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Petersson, Karl Magnus

    2008-11-01

    Cognitive neuroscience approaches the brain as a cognitive system: a system that functionally is conceptualized in terms of information processing. We outline some aspects of this concept and consider a physical system to be an information processing device when a subclass of its physical states can be viewed as representational/cognitive and transitions between these can be conceptualized as a process operating on these states by implementing operations on the corresponding representational structures. We identify a generic and fundamental problem in cognition: sequentially organized structured processing. Structured sequence processing provides the brain, in an essential sense, with its processing logic. In an approach addressing this problem, we illustrate how to integrate levels of analysis within a framework of adaptive dynamical systems. We note that the dynamical system framework lends itself to a description of asynchronous event-driven devices, which is likely to be important in cognition because the brain appears to be an asynchronous processing system. We use the human language faculty and natural language processing as a concrete example throughout.

  9. Digital image database processing to simulate image formation in ideal lighting conditions of the human eye

    NASA Astrophysics Data System (ADS)

    Castañeda-Santos, Jessica; Santiago-Alvarado, Agustin; Cruz-Félix, Angel S.; Hernández-Méndez, Arturo

    2015-09-01

    The pupil size of the human eye has a large effect on image quality due to inherent aberrations. Several studies have calculated its size relative to luminance, as well as considering other factors, e.g., age, size of the adapting field, and monocular versus binocular vision. Moreover, ideal lighting conditions are known, but software suited to our specific requirements (low cost and low computational consumption) for simulating radiation adaptation and image formation in the retina under ideal lighting conditions has not yet been developed. In this work, a database is created consisting of 70 photographs of the same scene with a fixed target at different times of the day. Using this database, characteristics of the photographs are obtained by measuring the average-luminance initial threshold value of each photograph by means of an image histogram. We also present the implementation of a digital filter for both image processing on the threshold values of our database and generating output images with the threshold values reported for the human eye in ideal cases. A potential application of this kind of filter is in artificial vision systems.
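    The per-photograph statistic described above (an average-luminance value derived from the image histogram) can be sketched as follows for an 8-bit image; the bin count is illustrative:

```python
import numpy as np

def mean_luminance_from_histogram(img, bins=256):
    """Estimate the average luminance of an 8-bit image from its
    histogram: a weighted mean over bin centers."""
    hist, edges = np.histogram(img, bins=bins, range=(0, 256))
    centers = (edges[:-1] + edges[1:]) / 2
    return float((hist * centers).sum() / hist.sum())
```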

  10. Processing infrared images of aircraft lapjoints

    NASA Technical Reports Server (NTRS)

    Syed, Hazari; Winfree, William P.; Cramer, K. E.

    1992-01-01

    Techniques for processing IR images of aging-aircraft lapjoint data are discussed. Attention is given to a technique for detecting disbonds in aircraft lapjoints which clearly delineates the disbonded region from the bonded regions. The technique performs poorly on unpainted aircraft skin surfaces, but this limitation can be overcome by using a self-adhering contact sheet. Neural network analysis of raw temperature data has been shown to be an effective tool for visualization of images. Numerical simulation results show the above processing technique to be effective in delineating disbonds.

  11. Adaptive femtosecond control using feedback from three-dimensional momentum images

    NASA Astrophysics Data System (ADS)

    Wells, E.

    2011-05-01

    Shaping ultrafast laser pulses using adaptive feedback is a proven technique for manipulating dynamics in molecular systems with no readily apparent control mechanism. Commonly employed feedback signals include fluorescence or ion yield, which may not uniquely identify the final state. Raw velocity map images, which contain a two-dimensional representation of the full three-dimensional photofragment momentum vector, are a more specific feedback source. The raw images, however, are limited by an azimuthal ambiguity which is usually removed in offline processing. By implementing a rapid inversion procedure based upon the onion-peeling technique, we are able to incorporate three-dimensional momentum information directly into the adaptive control loop. This method enables more targeted control experiments. Two examples are used to demonstrate the utility of this feedback. First, double ionization of CO produces C+ and O+ fragments ejected both perpendicular and parallel to the laser polarization with kinetic energy release of ~6 eV. Both suppression and enhancement of the perpendicular transitions relative to the parallel transitions are demonstrated. Second, double ionization of acetylene can lead to both HCCH2+ and HHCC2+ isomers. We select between these outcomes using the angular information contained in the CH+ and CH2+ images. Supported by National Science Foundation award PHY-0969687 and the Chemical Sciences, Geosciences, and Biosciences Division, Office of Basic Energy Science, Office of Science, US Department of Energy.

  12. Molecular PET imaging for biology-guided adaptive radiotherapy of head and neck cancer.

    PubMed

    Hoeben, Bianca A W; Bussink, Johan; Troost, Esther G C; Oyen, Wim J G; Kaanders, Johannes H A M

    2013-10-01

    Integration of molecular imaging PET techniques into therapy selection strategies and radiation treatment planning for head and neck squamous cell carcinoma (HNSCC) can serve several purposes. First, pre-treatment assessments can steer decisions about radiotherapy modifications or combinations with other modalities. Second, biology-based objective functions can be introduced to the radiation treatment planning process by co-registration of molecular imaging with planning computed tomography (CT) scans. Thus, customized heterogeneous dose distributions can be generated with escalated doses to tumor areas where radiotherapy resistance mechanisms are most prevalent. Third, monitoring of temporal and spatial variations in these radiotherapy resistance mechanisms early during the course of treatment can discriminate responders from non-responders. With such information available shortly after the start of treatment, modifications can be implemented or the radiation treatment plan can be adapted tailing the biological response pattern. Currently, these strategies are in various phases of clinical testing, mostly in single-center studies. Further validation in multicenter set-up is needed. Ultimately, this should result in availability for routine clinical practice requiring stable production and accessibility of tracers, reproducibility and standardization of imaging and analysis methods, as well as general availability of knowledge and expertise. Small studies employing adaptive radiotherapy based on functional dynamics and early response mechanisms demonstrate promising results. In this context, we focus this review on the widely used PET tracer (18)F-FDG and PET tracers depicting hypoxia and proliferation; two well-known radiation resistance mechanisms.

  13. Adaptive ocean acoustic processing for a shallow ocean experiment

    SciTech Connect

    Candy, J.V.; Sullivan, E.J.

    1995-07-19

    A model-based approach is developed to solve an adaptive ocean acoustic signal processing problem. We investigate the design of a model-based identifier (MBID) for a normal-mode model developed from a shallow-water ocean experiment and then apply it to a set of experimental data, demonstrating the feasibility of the approach. We show how the processor can be structured to estimate the horizontal wave numbers directly from measured pressure-field data, thereby eliminating the need for synthetic aperture processing or a propagation-model solution. Ocean acoustic signal processing has made great strides over the past decade, necessitated by the development of quieter submarines and the recent proliferation of diesel-powered vessels.

  14. Adaptive PCA based fault diagnosis scheme in imperial smelting process.

    PubMed

    Hu, Zhikun; Chen, Zhiwen; Gui, Weihua; Jiang, Bin

    2014-09-01

    In this paper, an adaptive fault detection scheme based on recursive principal component analysis (PCA) is proposed to deal with the problem of false alarms caused by normal process changes in real processes. We further develop a fault isolation approach based on the Generalized Likelihood Ratio (GLR) test and Singular Value Decomposition (SVD), with which off-set and scaling faults can be easily isolated, with explicit off-set fault direction and scaling fault classification. Identification of off-set and scaling faults is also addressed. The complete PCA-based fault diagnosis procedure is presented. The proposed scheme is applied to the Imperial Smelting Process, and the results show that the proposed strategies can mitigate false alarms and isolate faults efficiently.
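    The detection half of such a scheme can be sketched as PCA fitted on normal-operation data plus a squared-prediction-error (SPE, or Q) statistic on new samples: a sample whose residual outside the retained principal subspace is large is flagged as a fault. This generic sketch omits the recursive model update and the GLR/SVD isolation steps:

```python
import numpy as np

def fit_pca(X, n_components):
    """Fit PCA on normal-operation data (rows = samples).
    Returns the data mean and the retained principal directions."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data: rows of Vt are principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def spe(x, mean, P):
    """Squared prediction error (Q statistic) of one sample:
    the squared norm of the part of x the model cannot explain."""
    d = x - mean
    r = d - P.T @ (P @ d)   # residual outside the principal subspace
    return float(r @ r)
```

    In practice a control limit for the SPE is derived from the discarded eigenvalues; updating `mean` and `P` recursively as new normal data arrive is what makes the scheme adaptive.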

  15. A general framework for adaptive processing of data structures.

    PubMed

    Frasconi, P; Gori, M; Sperduti, A

    1998-01-01

    A structured organization of information is typically required by symbolic processing. On the other hand, most connectionist models assume that data are organized according to relatively poor structures, like arrays or sequences. The framework described in this paper is an attempt to unify adaptive models like artificial neural nets and belief nets for the problem of processing structured information. In particular, relations between data variables are expressed by directed acyclic graphs, where both numerical and categorical values coexist. The general framework proposed in this paper can be regarded as an extension of both recurrent neural networks and hidden Markov models to the case of acyclic graphs. In particular we study the supervised learning problem as the problem of learning transductions from an input structured space to an output structured space, where transductions are assumed to admit a recursive hidden state-space representation. We introduce a graphical formalism for representing this class of adaptive transductions by means of recursive networks, i.e., cyclic graphs where nodes are labeled by variables and edges are labeled by generalized delay elements. This representation makes it possible to incorporate the symbolic and subsymbolic nature of data. Structures are processed by unfolding the recursive network into an acyclic graph called encoding network. In so doing, inference and learning algorithms can be easily inherited from the corresponding algorithms for artificial neural networks or probabilistic graphical models.
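    The unfolding idea can be illustrated with a toy fold over a DAG in which children are processed before parents, so each node's state depends on its label and its children's states. A numeric aggregation stands in here for a trained neural transduction; the function names are hypothetical:

```python
def encode(dag, labels, f, leaf_state):
    """Compute a state for every node of a DAG, children first.

    dag        : dict mapping node -> list of child nodes
    labels     : dict mapping node -> label value
    f          : f(label, child_states) -> state (the 'network' applied
                 at each node of the unfolded encoding network)
    leaf_state : initial state supplied to nodes with no children
    """
    states = {}
    def visit(n):
        if n in states:                      # shared substructure: reuse
            return states[n]
        children = dag.get(n, [])
        child_states = [visit(c) for c in children] or [leaf_state]
        states[n] = f(labels[n], child_states)
        return states[n]
    for n in dag:
        visit(n)
    return states
```

    Memoizing `states` is what distinguishes processing a DAG from processing a tree: a node shared by several parents is encoded once and its state reused.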

  16. [Super sweet corn hybrids adaptability for industrial processing. I freezing].

    PubMed

    Alfonzo, Braunnier; Camacho, Candelario; Ortiz de Bertorelli, Ligia; De Venanzi, Frank

    2002-09-01

    With the purpose of evaluating adaptability to the freezing process of super sweet corn sh2 hybrids Krispy King, Victor and 324, 100 cobs of each type were frozen at -18 degrees C. After 120 days of storage, their chemical, microbiological and sensorial characteristics were compared with a sweet corn su. Industrial quality of the process of freezing and length and number of rows in cobs were also determined. Results revealed yields above 60% in frozen corns. Length and number of rows in cobs were acceptable. Most of the chemical characteristics of super sweet hybrids were not different from the sweet corn assayed at the 5% significance level. Moisture content and soluble solids of hybrid Victor, as well as total sugars of hybrid 324 were statistically different. All sh2 corns had higher pH values. During freezing, soluble solids concentration, sugars and acids decreased whereas pH increased. Frozen cobs exhibited acceptable microbiological rank, with low activities of mesophiles and total coliforms, absence of psychrophiles and fecal coliforms, and an appreciable amount of molds. In conclusion, sh2 hybrids adapted with no problems to the freezing process, they had lower contents of soluble solids and higher contents of total sugars, which almost doubled the amount of su corn; flavor, texture, sweetness and appearance of kernels were also better. Hybrid Victor was preferred by the evaluating panel and had an outstanding performance due to its yield and sensorial characteristics. PMID:12448345

  17. Landsat ecosystem disturbance adaptive processing system (LEDAPS) algorithm description

    USGS Publications Warehouse

    Schmidt, Gail; Jenkerson, Calli; Masek, Jeffrey; Vermote, Eric; Gao, Feng

    2013-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) software was originally developed by the National Aeronautics and Space Administration–Goddard Space Flight Center and the University of Maryland to produce top-of-atmosphere reflectance from Landsat Thematic Mapper and Enhanced Thematic Mapper Plus Level 1 digital numbers and to apply atmospheric corrections to generate a surface-reflectance product. The U.S. Geological Survey (USGS) has adopted the LEDAPS algorithm for producing the Landsat Surface Reflectance Climate Data Record. This report discusses the LEDAPS algorithm, which was implemented by the USGS.

  18. Prediction and control of chaotic processes using nonlinear adaptive networks

    SciTech Connect

    Jones, R.D.; Barnes, C.W.; Flake, G.W.; Lee, K.; Lewis, P.S.; O'Rouke, M.K.; Qian, S.

    1990-01-01

    We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.

  19. FLIPS: Friendly Lisp Image Processing System

    NASA Astrophysics Data System (ADS)

    Gee, Shirley J.

    1991-08-01

    The Friendly Lisp Image Processing System (FLIPS) is the interface to Advanced Target Detection (ATD), a multi-resolutional image analysis system developed by Hughes in conjunction with the Hughes Research Laboratories. Both menu- and graphics-driven, FLIPS enhances system usability by supporting the interactive nature of research and development. Although much progress has been made, fully automated image understanding technology that is both robust and reliable is not a reality. In situations where highly accurate results are required, skilled human analysts must still verify the findings of these systems. Furthermore, the systems often require processing times several orders of magnitude greater than that needed by veteran personnel to analyze the same image. The purpose of FLIPS is to facilitate the ability of an image analyst to take statistical measurements on digital imagery in a timely fashion, a capability critical in research environments where a large percentage of time is expended in algorithm development. In many cases, this entails minor modifications or code tinkering. Without a well-developed man-machine interface, throughput is unduly constricted. FLIPS provides mechanisms which support rapid prototyping for ATD. This paper examines the ATD/FLIPS system. The philosophy of ATD in addressing image understanding problems is described, and the capabilities of FLIPS are discussed, along with a description of the interaction between ATD and FLIPS. Finally, an overview of current plans for the system is outlined.

  20. Wavelength-adaptive dehazing using histogram merging-based classification for UAV images.

    PubMed

    Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki

    2015-03-19

    Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results.
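    The intensity-transformation step builds on the standard atmospheric scattering model I = J*t + A*(1 - t), where J is the scene radiance, t the transmission, and A the atmospheric light. A minimal inversion, assuming t and A are already estimated, is shown below; supplying a separate t per color channel would give the wavelength-adaptive behavior described above:

```python
import numpy as np

def dehaze(I, t, A):
    """Invert the haze model I = J*t + A*(1 - t) to recover J.
    t is clipped away from zero so noise is not amplified in
    densely hazed regions (the 0.1 floor is an illustrative choice)."""
    t = np.clip(t, 0.1, 1.0)
    return (I - A) / t + A
```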

  1. Wavelength-Adaptive Dehazing Using Histogram Merging-Based Classification for UAV Images

    PubMed Central

    Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki

    2015-01-01

    Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results. PMID:25808767

  3. Phase sensitive adaptive optics assisted SLO/OCT for retinal imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Pircher, Michael; Felberer, Franz; Salas, Matthias; Haindl, Richard; Baumann, Bernhard; Wartak, Andreas; Hitzenberger, Christoph K.

    2016-03-01

    Adaptive optics (AO) is essential in order to visualize small structures such as cone and rod photoreceptors in the living human retina in vivo. By combining AO with optical coherence tomography (OCT) the axial resolution in the images can be further improved. OCT provides access to the phase of the light returning from the retina which allows a measurement of subtle length changes in the nanometer range. These occur for example during the renewal process of cone outer segments. We present an approach for measuring very small length changes using an extended AO scanning laser ophthalmoscope (SLO)/ OCT instrument. By adding a second OCT interferometer that shares the same sample arm as the first interferometer, phase sensitive measurements can be performed in the en-face imaging plane. Frame averaging decreases phase noise which greatly improves the precision in the measurement of associated length changes.
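    The phase-to-length conversion underlying such measurements is direct: a phase shift Δφ corresponds to a path-length change Δz = Δφ·λ/(4π·n), where the 4π reflects the double pass in reflection. The sketch below is generic, and the default refractive index is an illustrative tissue value, not a parameter from this instrument:

```python
import math

def phase_to_length(delta_phi, wavelength, n=1.38):
    """Convert an OCT phase shift (radians) to a physical length
    change (meters). The factor 4*pi accounts for the double pass
    of light in reflection; n is the medium's refractive index."""
    return delta_phi * wavelength / (4 * math.pi * n)
```

    With near-infrared wavelengths, milliradian phase precision translates to sub-nanometer length sensitivity, which is why phase noise reduction by frame averaging matters.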

  4. Processing Images of Craters for Spacecraft Navigation

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

    2009-01-01

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
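    Step 4 (fitting a conic to a group of rim-edge points) can be illustrated with an algebraic least-squares fit. For brevity this sketch fits a circle (the Kasa method) rather than a full ellipse, which suffices for near-nadir views:

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit to edge points.
    Solves x^2 + y^2 = c0*x + c1*y + c2 in the least-squares sense,
    then converts to center (cx, cy) and radius r."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = xs ** 2 + ys ** 2
    c = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = c[0] / 2, c[1] / 2
    r = np.sqrt(c[2] + cx ** 2 + cy ** 2)
    return cx, cy, r
```

    Because the problem is linear in the parameters, the fit is a single `lstsq` call; the subsequent image-domain refinement step then corrects for the bias such algebraic fits exhibit on partial arcs.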

  5. Enhanced neutron imaging detector using optical processing

    SciTech Connect

    Hutchinson, D.P.; McElhaney, S.A.

    1992-08-01

    Existing neutron imaging detectors have limited count rates due to inherent property and electronic limitations. The popular multiwire proportional counter is limited by gas recombination to a count rate of less than 10^5 n/s over the entire array, and the neutron Anger camera, even though improved with new fiber-optic encoding methods, can only achieve 10^6 cps over a limited array. We present a preliminary design for a new type of neutron imaging detector with a resolution of 2-5 mm and a count-rate capability of 10^6 cps per pixel element. We propose to combine optical and electronic processing to economically increase the throughput of advanced detector systems while simplifying computing requirements. By placing a scintillator screen ahead of an optical image processor followed by a detector array, a high-throughput imaging detector may be constructed.

  6. Simplified labeling process for medical image segmentation.

    PubMed

    Gao, Mingchen; Huang, Junzhou; Huang, Xiaolei; Zhang, Shaoting; Metaxas, Dimitris N

    2012-01-01

    Image segmentation plays a crucial role in many medical imaging applications by automatically locating the regions of interest. Typically, supervised learning based segmentation methods require a large set of accurately labeled training data. However, the labeling process is tedious, time consuming and sometimes unnecessary. We propose a robust logistic regression algorithm to handle label outliers so that doctors need not spend time precisely labeling images for the training set. To validate its effectiveness and efficiency, we conduct carefully designed experiments on cervigram image segmentation in the presence of label outliers. Experimental results show that the proposed robust logistic regression algorithm achieves superior performance compared to previous methods, which validates the benefits of the proposed algorithm. PMID:23286072
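    One common way to make logistic regression robust to label outliers is to model an assumed label-flip rate eps, which bounds the loss any single mislabeled sample can contribute. The sketch below uses that generic label-noise formulation with plain gradient descent; it is not necessarily the authors' exact algorithm, and the learning rate, iteration count, and eps are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def robust_logreg(X, y, eps=0.1, lr=0.1, iters=2000):
    """Logistic regression under an assumed label-flip rate eps:
    p(y=1|x) = eps + (1 - 2*eps) * sigmoid(w.x + b).
    Since p is bounded away from 0 and 1, a mislabeled sample
    cannot produce an unbounded loss or gradient."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(iters):
        s = sigmoid(X @ w + b)
        p = eps + (1 - 2 * eps) * s
        # gradient of the negative log-likelihood w.r.t. the logit
        g = (1 - 2 * eps) * s * (1 - s) * (-(y / p) + (1 - y) / (1 - p))
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b
```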

  7. MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING

    PubMed Central

    ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN

    2013-01-01

    In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963

  8. Feedback regulation of microscopes by image processing.

    PubMed

    Tsukada, Yuki; Hashimoto, Koichi

    2013-05-01

Computational microscope systems are becoming a major part of imaging biological phenomena, and the development of such systems requires the design of automated regulation of microscopes. An important aspect of automated regulation is feedback regulation, which is the focus of this review. As modern microscope systems become more complex, often with many independent components that must work together, computer control is inevitable, since the exact orchestration of parameters and timings for these multiple components is critical to acquire proper images. A number of techniques have been developed for biological imaging to accomplish this. Here, we summarize the basics of computational microscopy for the purpose of building automatically regulated microscopes, focusing on feedback regulation by image processing. These techniques allow high-throughput data acquisition while monitoring both short- and long-term dynamic phenomena, which cannot be achieved without an automated system.

  9. Frequency-shift low-pass filtering and least mean square adaptive filtering for ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Li, Chunyu; Ding, Mingyue; Yuchi, Ming

    2016-04-01

Ultrasound image quality enhancement is a problem of considerable interest in medical imaging and an ongoing challenge to date. This paper investigates a method based on frequency-shift low-pass filtering (FSLF) and least mean square adaptive filtering (LMSAF) for ultrasound image quality enhancement. FSLF processes the ultrasound signal in the frequency domain, while LMSAF operates in the time domain. Firstly, FSLF shifts the center frequency of the focused signal to zero. Then the real and imaginary parts of the complex data are filtered separately by a finite impulse response (FIR) low-pass filter. Thus the information around the center frequency is retained while undesired components, especially background noise, are filtered out. Secondly, LMSAF multiplies the signals with an automatically adjusted weight vector to further eliminate noise and artifacts. Through the combination of the two filters, the ultrasound image is expected to have less noise, fewer artifacts, and higher resolution and contrast. The proposed method was verified with RF data of the CIRS 055A phantom captured by a SonixTouch DAQ system. Experimental results show that background noise and artifacts can be efficiently suppressed, the wire object has a higher resolution, and the contrast ratio (CR) can be enhanced by about 12 dB to 15 dB at different image depths compared to delay-and-sum (DAS).
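The LMS adaptive filter named above updates a weight vector from the error between a desired signal and the filter output at every sample. A minimal pure-Python sketch of the standard LMS recursion (the tap count, step size, and function name are illustrative choices, not taken from the paper):

```python
def lms_filter(d, x, n_taps=4, mu=0.01):
    """Least-mean-square adaptive filter.
    d: desired signal, x: reference input.
    Returns the filter outputs and the error signal per sample."""
    w = [0.0] * n_taps
    y_out, e_out = [], []
    for n in range(len(x)):
        # current tap-input vector (most recent sample first, zero-padded)
        u = [x[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wk * uk for wk, uk in zip(w, u))        # filter output
        e = d[n] - y                                    # error signal
        w = [wk + mu * e * uk for wk, uk in zip(w, u)]  # LMS weight update
        y_out.append(y)
        e_out.append(e)
    return y_out, e_out
```

When the desired signal is a fixed linear combination of the reference, the weights converge to that combination and the error decays toward zero.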

  10. Quality evaluation of adaptive optical image based on DCT and Rényi entropy

    NASA Astrophysics Data System (ADS)

    Xu, Yuannan; Li, Junwei; Wang, Jing; Deng, Rong; Dong, Yanbing

    2015-04-01

Adaptive optical telescopes play an increasingly important role in ground-based detection systems, and they produce so many images that a suitable quality-evaluation method is needed to select good images automatically and save human effort. Adaptive optical images are well known to be no-reference images. In this paper, a new logarithmic evaluation method for adaptive optical images based on the discrete cosine transform (DCT) and Rényi entropy is proposed. Using the DCT with a one- or two-dimensional window, the statistical properties of the Rényi entropy of images are studied. Directional Rényi entropy maps of an input image, each containing different information content, are obtained, and their mean values are calculated. For quality evaluation, the directional Rényi entropy and its standard deviation over the region of interest serve as an indicator of the anisotropy of the image, and the standard deviation of the directional Rényi entropy is used as the quality score for an adaptive optical image. Experimental results show that the quality ranking produced by the proposed method matches well with visual inspection.
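As a hedged illustration of the quantities this method builds on, the sketch below computes a naive 2-D DCT of an image block and the Rényi entropy of its normalized coefficient energies. The choice alpha = 3 and the O(N^4) transform are illustrative, and the paper's directional entropy maps and windowing are not reproduced.

```python
import math

def dct2_block(block):
    """Naive orthonormal 2-D DCT-II of a square block (list of lists)."""
    N = len(block)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos(math.pi * (2 * x + 1) * u / (2 * N))
                          * math.cos(math.pi * (2 * y + 1) * v / (2 * N)))
            cu = math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
            cv = math.sqrt(1.0 / N) if v == 0 else math.sqrt(2.0 / N)
            out[u][v] = cu * cv * s
    return out

def renyi_entropy(coeffs, alpha=3.0):
    """Rényi entropy of the normalized DCT coefficient-energy distribution."""
    energies = [c * c for row in coeffs for c in row]
    total = sum(energies) or 1.0
    p = [e / total for e in energies]
    s = sum(pi ** alpha for pi in p if pi > 0)
    return math.log(s) / (1.0 - alpha)
```

A flat block concentrates all energy in the DC coefficient and scores an entropy of zero, while structured content spreads energy across coefficients and scores higher.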

  11. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images, and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. Although libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission and document usage. The DocMorph Server Web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  12. Improving Synthetic Aperture Image by Image Compounding in Beamforming Process

    NASA Astrophysics Data System (ADS)

    Martínez-Graullera, Oscar; Higuti, Ricardo T.; Martín, Carlos J.; Ullate, Luis. G.; Romero, David; Parrilla, Montserrat

    2011-06-01

In this work, signal processing techniques are used to improve the quality of images based on multi-element synthetic aperture techniques. Using several apodization functions to obtain different side-lobe distributions, a polarity function and a threshold criterion are used to develop an image compounding technique. The spatial diversity is increased using an additional array, which generates complementary information about the defects, improving the results of the proposed algorithm and producing high resolution and contrast images. The inspection of isotropic plate-like structures using linear arrays and Lamb waves is presented. Experimental results are shown for a 1-mm-thick isotropic aluminum plate with artificial defects using linear arrays formed by 30 piezoelectric elements, with the low dispersion symmetric mode S0 at the frequency of 330 kHz.

  13. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.

  14. Polymer Solidification and Stabilization: Adaptable Processes for Atypical Wastes

    SciTech Connect

    Jensen, C.

    2007-07-01

Vinyl Ester Styrene (VES) and Advanced Polymer Solidification (APS{sup TM}) processes are used to solidify, stabilize, and immobilize radioactive, pyrophoric and hazardous wastes at US Department of Energy (DOE) and Department of Defense (DOD) sites, and commercial nuclear facilities. A wide range of projects have been accomplished, including in situ immobilization of ion exchange resin and carbon filter media in decommissioned submarines; underwater solidification of zirconium and hafnium machining swarf; solidification of uranium chips; impregnation of depth filters; immobilization of mercury, lead and other hazardous wastes (including paint chips and blasting media); and in situ solidification of submerged demineralizers. Discussion of the adaptability of the VES and APS{sup TM} processes is timely, given the decommissioning work at government sites, and efforts by commercial nuclear plants to reduce inventories of one-of-a-kind wastes. The VES and APS{sup TM} media and processes are highly adaptable to a wide range of waste forms, including liquids, slurries, bead and granular media, as well as metal fines, particles and larger pieces. With the ability to solidify/stabilize liquid wastes using high-speed mixing, wet sludges and solids by low-speed mixing, or bead and granular materials through in situ processing, these polymers will produce a stable, rock-hard product that has the ability to sequester many hazardous waste components and create Class B and C stabilized waste forms for disposal. Technical assessment and approval of these solidification processes and final waste forms have been greatly simplified by exhaustive waste form testing, as well as multiple NRC and CRCPD waste form approvals. (authors)

  15. Stochastic processes, estimation theory and image enhancement

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1978-01-01

An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed. The appendices discuss the remaining mathematical background.

  16. High performance computing for deformable image registration: towards a new paradigm in adaptive radiotherapy.

    PubMed

    Samant, Sanjiv S; Xia, Junyi; Muyan-Ozcelik, Pinar; Owens, John D

    2008-08-01

The advent of readily available temporal imaging or time series volumetric (4D) imaging has become an indispensable component of treatment planning and adaptive radiotherapy (ART) at many radiotherapy centers. Deformable image registration (DIR) is also used in other areas of medical imaging, including motion corrected image reconstruction. Due to long computation time, clinical applications of DIR in radiation therapy and elsewhere have been limited and consequently relegated to offline analysis. With the recent advances in hardware and software, graphics processing unit (GPU) based computing is an emerging technology for general purpose computation, including DIR, and is suitable for highly parallelized computing. However, traditional general purpose computation on the GPU is limited by the constraints of the available programming platforms. In addition, compared to CPU programming, the GPU currently has reduced dedicated processor memory, which can limit the useful working data set for parallelized processing. We present an implementation of the demons algorithm using the NVIDIA 8800 GTX GPU and the new CUDA programming language. The GPU performance will be compared with single threading and multithreading CPU implementations on an Intel dual core 2.4 GHz CPU using the C programming language. CUDA provides a C-like language programming interface, and allows for direct access to the highly parallel compute units in the GPU. Comparisons for volumetric clinical lung images acquired using 4DCT were carried out. Computation time for 100 iterations in the range of 1.8-13.5 s was observed for the GPU with image size ranging from 2.0 x 10(6) to 14.2 x 10(6) pixels. The GPU registration was 55-61 times faster than the CPU for the single threading implementation, and 34-39 times faster for the multithreading implementation. For CPU based computing, the computational time generally has a linear dependence on image size for medical imaging data. 
Computational efficiency is
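For reference, the demons algorithm benchmarked above iterates a simple per-voxel force update. A 1-D pure-Python sketch of the classic Thirion update follows; the regularizing Gaussian smoothing between iterations is omitted, and this is not the paper's CUDA implementation.

```python
def demons_step(fixed, moving, disp):
    """One iteration of the additive demons update in 1-D.
    fixed, moving: intensity lists; disp: current displacement field.
    Returns the updated displacement field (smoothing step omitted)."""
    n = len(fixed)

    def grad(img, i):
        # central difference with clamped borders
        return (img[min(i + 1, n - 1)] - img[max(i - 1, 0)]) / 2.0

    def sample(img, pos):
        # linear interpolation with clamping at the image borders
        pos = min(max(pos, 0.0), n - 1.0)
        i0 = int(pos)
        i1 = min(i0 + 1, n - 1)
        t = pos - i0
        return (1 - t) * img[i0] + t * img[i1]

    new_disp = []
    for i in range(n):
        m = sample(moving, i + disp[i])   # warped moving image
        diff = m - fixed[i]
        g = grad(fixed, i)
        denom = g * g + diff * diff       # Thirion's stabilized denominator
        force = diff * g / denom if denom > 1e-9 else 0.0
        new_disp.append(disp[i] - force)
    return new_disp
```

Iterating this update on a ramp image shifted by half a pixel recovers that sub-pixel displacement at interior points.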

  17. Limiting liability via high resolution image processing

    SciTech Connect

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as `evidence ready`, even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed into usable evidence. Visualization scientists have taken digital photographic image processing and moved the processing of crime scene photos into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement of photographic capability helps solve a major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.

  18. Adaptation as process: the future of Darwinism and the legacy of Theodosius Dobzhansky.

    PubMed

    Depew, David J

    2011-03-01

Conceptions of adaptation have varied in the history of genetic Darwinism depending on whether what is taken to be focal is the process of adaptation, adapted states of populations, or discrete adaptations in individual organisms. I argue that Theodosius Dobzhansky's view of adaptation as a dynamical process contrasts with so-called "adaptationist" views of natural selection figured as "design-without-a-designer" of relatively discrete, enumerable adaptations. Correlated with these respectively process- and product-oriented approaches to adaptive natural selection are divergent pictures of organisms themselves as developmental wholes or as "bundles" of adaptations. While even process versions of genetical Darwinism are insufficiently sensitive to the fact that much of the variation on which adaptive selection works consists of changes in the timing, rate, or location of ontogenetic events, I argue that articulations of the Modern Synthesis influenced by Dobzhansky are more easily reconciled with the recent shift to evolutionary developmentalism than are versions that make discrete adaptations central.

  19. Visual parameter optimisation for biomedical image processing

    PubMed Central

    2015-01-01

    Background Biomedical image processing methods require users to optimise input parameters to ensure high-quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output. Results We present a visualisation method that transforms users' ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm. Conclusions The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches. PMID:26329538

  20. Subband/transform functions for image processing

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
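The original functions were written in MATLAB; as an illustration of the idea, the sketch below implements the simplest case in Python — a 2x2 (Walsh-)Hadamard block transform whose reordered coefficients form a low subband of pairwise averages and a high subband of pairwise differences, with a cascade on the low band giving the octave structure described. The function names are illustrative.

```python
def hadamard2_subbands(x):
    """One-stage two-band split via the 2x2 (Walsh-)Hadamard transform.
    Each even/odd pair is transformed and the coefficients are reordered:
    the low subband holds pairwise averages (a low-resolution version of
    the signal), the high subband holds pairwise differences (edges)."""
    low = [(x[i] + x[i + 1]) / 2.0 for i in range(0, len(x) - 1, 2)]
    high = [(x[i] - x[i + 1]) / 2.0 for i in range(0, len(x) - 1, 2)]
    return low, high

def cascade(x, levels):
    """Re-split only the low band at each stage -> octave band structure.
    Returns [high_1, high_2, ..., low_final]."""
    bands = []
    low = list(x)
    for _ in range(levels):
        low, high = hadamard2_subbands(low)
        bands.append(high)
    bands.append(low)
    return bands
```

A piecewise-constant signal yields all-zero high bands, which is what makes the subbands easy to quantize and compress.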

  1. Bitplane Image Coding With Parallel Coefficient Processing.

    PubMed

    Auli-Llinas, Francesc; Enfedaque, Pablo; Moure, Juan C; Sanchez, Victor

    2016-01-01

Image coding systems have been traditionally tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in the codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data. Most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep, synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to the inherently sequential nature of the coding task. This paper presents bitplane image coding with parallel coefficient (BPC-PaCo) processing, a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been re-formulated. The experimental results suggest that the penalization in coding performance of BPC-PaCo with respect to the traditional strategies is almost negligible.
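Bitplane coding visits the coefficients one bit position at a time, most significant plane first. A minimal sketch of the decomposition step for non-negative integer coefficients (the names and unsigned-integer assumption are illustrative; BPC-PaCo's reformulated scanning order, context formation, and arithmetic coder are not shown):

```python
def bitplanes(coeffs, n_bits=8):
    """Decompose non-negative integer coefficients into bitplanes,
    most significant plane first, each plane a list of 0/1 values."""
    planes = []
    for b in range(n_bits - 1, -1, -1):
        planes.append([(c >> b) & 1 for c in coeffs])
    return planes
```

A coder then emits the planes in order, so truncating the stream after any plane still yields a coarse (quantized) reconstruction of every coefficient.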

  2. [Digital thoracic radiology: devices, image processing, limits].

    PubMed

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

In a first part, the different techniques of digital thoracic radiography are described. Since computed radiography with phosphor plates is the most widely commercialized, it receives the most emphasis, but the other detectors are also described: the selenium-coated drum and direct digital radiography with selenium detectors, as well as indirect flat-panel detectors and a system with four high-resolution CCD cameras. In a second part the most important image processing techniques are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. In the last part the advantages and drawbacks of computed thoracic radiography are discussed; the most important are the almost consistently good image quality and the image processing possibilities. PMID:11567193

  4. ACTIVE-EYES: an adaptive pixel-by-pixel image-segmentation sensor architecture for high-dynamic-range hyperspectral imaging.

    PubMed

    Christensen, Marc P; Euliss, Gary W; McFadden, Michael J; Coyle, Kevin M; Milojkovic, Predrag; Haney, Michael W; van der Gracht, Joeseph; Athale, Ravindra A

    2002-10-10

The ACTIVE-EYES (adaptive control for thermal imagers via electro-optic elements to yield an enhanced sensor) architecture, an adaptive image-segmentation and processing architecture based on digital micromirror (DMD) array technology, is described. The concept provides efficient front-end processing of multispectral image data by adaptively segmenting and routing portions of the scene data concurrently to an imager and a spectrometer. The goal is to provide a large reduction in the amount of data required to be sensed in a multispectral imager by means of preprocessing the data to extract the most useful spatial and spectral information during detection. The DMD array provides the flexibility to perform a wide range of spatial and spectral analyses on the scene data. The spatial and spectral processing for different portions of the input scene can be tailored in real time to achieve a variety of preprocessing functions. Since the detected intensity of individual pixels may be controlled, the spatial image can be analyzed with gain varied on a pixel-by-pixel basis to enhance dynamic range. Coarse or fine spectral resolution can be achieved in the spectrometer by use of dynamically controllable or addressable dispersion elements. An experimental prototype demonstrated the segmentation between an imager and a grating spectrometer and was shown to achieve programmable pixelated intensity control. An information theoretic analysis of the dynamic-range control aspect was conducted to predict the performance enhancements that might be achieved with this architecture. The results indicate that, with a properly configured algorithm, the concept achieves the greatest relative information recovery from a detected image when the scene is made up of a relatively large area of moderate-dynamic-range pixels and a relatively smaller area of strong pixels that would tend to saturate a conventional sensor. PMID:12389978

  5. EOS image data processing system definition study

    NASA Technical Reports Server (NTRS)

    Gilbert, J.; Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

    1973-01-01

    The Image Processing System (IPS) requirements and configuration are defined for NASA-sponsored advanced technology Earth Observatory System (EOS). The scope included investigation and definition of IPS operational, functional, and product requirements considering overall system constraints and interfaces (sensor, etc.) The scope also included investigation of the technical feasibility and definition of a point design reflecting system requirements. The design phase required a survey of present and projected technology related to general and special-purpose processors, high-density digital tape recorders, and image recorders.

  6. Stokes vector analysis of adaptive optics images of the retina.

    PubMed

    Song, Hongxin; Zhao, Yanming; Qi, Xiaofeng; Chui, Yuenping Toco; Burns, Stephen A

    2008-01-15

    A high-resolution Stokes vector imaging polarimeter was developed to measure the polarization properties at the cellular level in living human eyes. The application of this cellular level polarimetric technique to in vivo retinal imaging has allowed us to measure depolarization in the retina and to improve the retinal image contrast of retinal structures based on their polarization properties. PMID:18197217

  7. Cytopathology whole slide images and adaptive tutorials for postgraduate pathology trainees: a randomized crossover trial.

    PubMed

    Van Es, Simone L; Kumar, Rakesh K; Pryor, Wendy M; Salisbury, Elizabeth L; Velan, Gary M

    2015-09-01

    To determine whether cytopathology whole slide images and virtual microscopy adaptive tutorials aid learning by postgraduate trainees, we designed a randomized crossover trial to evaluate the quantitative and qualitative impact of whole slide images and virtual microscopy adaptive tutorials compared with traditional glass slide and textbook methods of learning cytopathology. Forty-three anatomical pathology registrars were recruited from Australia, New Zealand, and Malaysia. Online assessments were used to determine efficacy, whereas user experience and perceptions of efficiency were evaluated using online Likert scales and open-ended questions. Outcomes of online assessments indicated that, with respect to performance, learning with whole slide images and virtual microscopy adaptive tutorials was equivalent to using traditional methods. High-impact learning, efficiency, and equity of learning from virtual microscopy adaptive tutorials were strong themes identified in open-ended responses. Participants raised concern about the lack of z-axis capability in the cytopathology whole slide images, suggesting that delivery of z-stacked whole slide images online may be important for future educational development. In this trial, learning cytopathology with whole slide images and virtual microscopy adaptive tutorials was found to be as effective as and perceived as more efficient than learning from glass slides and textbooks. The use of whole slide images and virtual microscopy adaptive tutorials has the potential to provide equitable access to effective learning from teaching material of consistently high quality. It also has broader implications for continuing professional development and maintenance of competence and quality assurance in specialist practice.

  8. Adaptive Wavefront Calibration and Control for the Gemini Planet Imager

    SciTech Connect

    Poyneer, L A; Veran, J

    2007-02-02

    Quasi-static errors in the science leg and internal AO flexure will be corrected. Wavefront control will adapt to current atmospheric conditions through Fourier modal gain optimization, or the prediction of atmospheric layers with Kalman filtering.

  9. Positron imaging techniques for process engineering: recent developments at Birmingham

    NASA Astrophysics Data System (ADS)

    Parker, D. J.; Leadbeater, T. W.; Fan, X.; Hausard, M. N.; Ingram, A.; Yang, Z.

    2008-09-01

    For over 20 years the University of Birmingham has been using positron-emitting radioactive tracers to study engineering processes. The imaging technique of positron emission tomography (PET), widely used for medical applications, has been adapted for these studies, and the complementary technique of positron emission particle tracking (PEPT) has been developed. The radioisotopes are produced using the Birmingham MC40 cyclotron, and a variety of techniques are employed to produce suitable tracers in a wide range of forms. Detectors originally designed for medical use have been modified for engineering applications, allowing measurements to be made on real process equipment, at laboratory or pilot plant scale. This paper briefly reviews the capability of the techniques and introduces a few of the many processes to which they have been applied.

  10. An Adaptive Digital Image Watermarking Algorithm Based on Morphological Haar Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Huang, Xiaosheng; Zhao, Sujuan

At present, most wavelet-based digital watermarking algorithms are based on linear wavelet transforms, and fewer on non-linear ones. In this paper, we propose an adaptive digital image watermarking algorithm based on a non-linear wavelet transform, the Morphological Haar Wavelet Transform. In the algorithm, the original image and the watermark image are decomposed with a multi-scale morphological wavelet transform respectively. Then the watermark information is adaptively embedded into the original image at different resolutions, combining the features of the Human Visual System (HVS). The experimental results show that our method is more robust and effective than ordinary wavelet transform algorithms.
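A morphological Haar wavelet replaces the linear averaging of the standard Haar transform with a nonlinear min (erosion) operator while keeping a difference detail signal, and remains perfectly invertible. A sketch of one decomposition level follows; the min-based analysis pair is an assumption about the specific variant, not taken from the paper.

```python
def morph_haar_forward(x):
    """One level of a morphological Haar wavelet transform:
    approximation = min (erosion) of each sample pair,
    detail = their signed difference."""
    a = [min(x[i], x[i + 1]) for i in range(0, len(x) - 1, 2)]
    d = [x[i] - x[i + 1] for i in range(0, len(x) - 1, 2)]
    return a, d

def morph_haar_inverse(a, d):
    """Exact reconstruction from the min/difference pair."""
    x = []
    for ai, di in zip(a, d):
        x.append(ai + max(di, 0))   # the larger sample of the pair, if first
        x.append(ai - min(di, 0))   # the larger sample of the pair, if second
    return x
```

Because min and difference are integer-preserving, the transform maps integer pixels to integer coefficients with no rounding, which suits watermark embedding.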

  11. AVES-IMCO: an adaptive optics visible spectrograph and imager/coronograph for NAOS

    NASA Astrophysics Data System (ADS)

    Beuzit, Jean-Luc; Lagrange, A.-M.; Mouillet, D.; Chauvin, G.; Stadler, E.; Charton, J.; Lacombe, F.; AVES-IMCO Team

    2001-05-01

    The NAOS adaptive optics system will very soon provide diffraction-limited images on the VLT, down to visible wavelengths (0.020 arcseconds at 0.83 micron, for instance). At the moment, the only instrument dedicated to NAOS is the CONICA spectro-imager, operating in the near-infrared from 1 to 5 microns. We are now proposing to ESO, in collaboration with an Italian group, the development of a visible spectrograph/imager/coronograph, AVES-IMCO (Adaptive Optics Visual Echelle Spectrograph and IMager/COronograph). We present here the general concept of the new instrument as well as its expected performance in the different modes.

  12. In vivo fluorescent imaging of the mouse retina using adaptive optics

    PubMed Central

    Biss, David P.; Sumorok, Daniel; Burns, Stephen A.; Webb, Robert H.; Zhou, Yaopeng; Bifano, Thomas G.; Côté, Daniel; Veilleux, Israel; Zamiri, Parisa; Lin, Charles P.

    2009-01-01

    In vivo imaging of the mouse retina using visible and near-infrared wavelengths does not achieve diffraction-limited resolution due to wavefront aberrations induced by the eye. Considering the pupil size and axial dimension of the eye, unaberrated imaging of the retina would be expected to have a transverse resolution of 2 μm. Higher-order aberrations in human retinal imaging can be compensated for by using adaptive optics. We demonstrate an adaptive optics system for in vivo imaging of fluorescent structures in the retina of a mouse, using a microelectromechanical system membrane mirror and a Shack–Hartmann wavefront sensor that measures the wavefront of the fluorescent light. PMID:17308593

  13. Adaptive optics OCT using 1060nm swept source and dual deformable lenses for human retinal imaging

    NASA Astrophysics Data System (ADS)

    Jian, Yifan; Lee, Sujin; Cua, Michelle; Miao, Dongkai; Bonora, Stefano; Zawadzki, Robert J.; Sarunic, Marinko V.

    2016-03-01

    Adaptive optics concepts have been applied to the advancement of biological imaging and microscopy. In particular, AO has also been very successfully applied to cellular resolution imaging of the retina, enabling visualization of the characteristic mosaic patterns of the outer retinal layers using flood illumination fundus photography, Scanning Laser Ophthalmoscopy (SLO), and Optical Coherence Tomography (OCT). Despite the high quality of the in vivo images, there has been a limited uptake of AO imaging into the clinical environment. The high resolution afforded by AO comes at the price of limited field of view and specialized equipment. The implementation of a typical adaptive optics imaging system results in a relatively large and complex optical setup. The wavefront measurement is commonly performed using a Hartmann-Shack Wavefront Sensor (HS-WFS) placed at an image plane that is optically conjugated to the eye's pupil. The deformable mirror is also placed at a conjugate plane, relaying the wavefront corrections to the pupil. Due to the sensitivity of the HS-WFS to back-reflections, the imaging system is commonly constructed from spherical mirrors. In this project, we present a novel adaptive optics OCT retinal imaging system with significant potential to overcome many of the barriers to integration with a clinical environment. We describe in detail the implementation of a compact lens based wavefront sensorless adaptive optics (WSAO) 1060nm swept source OCT human retinal imaging system with dual deformable lenses, and present retinal images acquired in vivo from research volunteers.

  14. Adaptive random renormalization group classification of multiscale dispersive processes

    NASA Astrophysics Data System (ADS)

    Cushman, John; O'Malley, Dan

    2013-04-01

    Renormalization group operators provide a detailed classification tool for dispersive processes. We begin by reviewing a two-scale renormalization group classification scheme. Repeated application of one operator is associated with long-time behavior of the process, while repeated application of the other is associated with short-time behavior. This approach is shown to be robust even in the presence of non-stationary increments and/or infinite second moments. Fixed points of the operators can be used for further sub-classification of the processes when appropriate limits exist. As an example we look at advective dispersion in an ergodic velocity field. Let X(t) be a fixed point of the long-time renormalization group operator (RGO) R X(t) = X(rt)/r^p. Scaling laws for the probability density, mean first passage times, and finite-size Lyapunov exponents of such fixed points are reviewed in anticipation of more general results. A generalized RGO, R_p, where the exponent p in R above is now a random variable, is introduced. Scaling laws associated with these random RGOs (RRGOs) are demonstrated numerically and applied to a process modeling the transition from sub-dispersion to Fickian dispersion. The scaling laws for the RRGO are not simple power laws, but instead are a weighted average of power laws. The weighting in the scaling laws can be determined adaptively via Bayes' theorem.
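
    The fixed-point relation R X(t) = X(rt)/r^p can be checked numerically for the textbook example of Brownian motion, which is a fixed point of the long-time RGO with p = 1/2. The sketch below (illustrative, not the authors' code) verifies that Var[X(rt)/r^(1/2)] matches Var[X(t)] = t.

```python
import numpy as np

# Brownian motion is a fixed point of R X(t) = X(rt)/r**p with p = 1/2:
# X(rt)/sqrt(r) has the same distribution as X(t). We check the variance.
rng = np.random.default_rng(0)
n_paths, n_steps, dt, r = 20000, 1000, 1e-3, 4
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
X = np.cumsum(increments, axis=1)            # Brownian paths on t in (0, 1]

t_idx = 200                                  # corresponds to t = 0.2
var_at_t = X[:, t_idx - 1].var()             # Var[X(t)] ~= t
var_renorm = (X[:, r * t_idx - 1] / r**0.5).var()  # Var[X(rt)/r^p] ~= t
```

    For the RRGO of the paper, the deterministic exponent p would be replaced by draws from a distribution, giving the weighted average of power laws described above.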

  15. Model control of image processing for telerobotics and biomedical instrumentation

    NASA Astrophysics Data System (ADS)

    Nguyen, An Huu

    1993-06-01

    This thesis has model control of image processing (MCIP) as its major theme. By this it is meant that there is a top-down model approach which already knows the structure of the image to be processed. This top-down image processing under model control is used further as visual feedback to control robots and as feedforward information for biomedical instrumentation. The software engineering of the bioengineering instrumentation image processing is defined in terms of the task and the tools available. Early bottom-up image processing such as thresholding occurs only within the top-down control regions of interest (ROIs) or operating windows. Moment computation is an important bottom-up procedure, as is pyramiding to attain rapid computation, among other considerations in attaining programming efficiencies. A distinction is made between initialization procedures and stripped-down run-time operations. Still more detailed engineering design considerations are addressed with respect to the ellipsoidal modeling of objects, where the major-axis orientation is an important additional piece of information beyond the centroid moments. Careful analysis of various sources of error and considerable benchmarking characterized the detailed software engineering of the image processing procedures. Image processing for robotic control involves a great deal of 3D calibration of the robot working environment (RWE). Of special interest is the idea of adapting the machine scanpath to the current task. Careful attention was paid to the hardware aspects of the control of the toy robots used to demonstrate the general methodology. It was necessary to precalibrate the open-loop gains for all motors so that, after initialization, the visual feedback, which depends on MCIP, would be able to supply enough information quickly enough to the control algorithms to govern the robots under a variety of control configurations and task operations.
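
    The centroid and major-axis orientation mentioned above follow from the first and second central image moments. A minimal sketch (illustrative names, not the thesis code), using the standard result theta = 0.5 * atan2(2*mu11, mu20 - mu02):

```python
import numpy as np

def ellipse_params(img):
    """Centroid and major-axis orientation from image moments,
    as used for ellipsoidal object modeling inside an ROI."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00
    mu20 = ((x - cx) ** 2 * img).sum() / m00   # second central moments
    mu02 = ((y - cy) ** 2 * img).sum() / m00
    mu11 = ((x - cx) * (y - cy) * img).sum() / m00
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # major-axis angle
    return (cx, cy), theta

# Synthetic object: points along the main diagonal (45 degrees in array coords).
img = np.zeros((9, 9))
for i in range(9):
    img[i, i] = 1.0
(cx, cy), theta = ellipse_params(img)
```

    Both quantities are cheap to compute inside an operating window, which is why moment computation fits the stripped-down run-time stage described above.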

  16. Architecture for web-based image processing

    NASA Astrophysics Data System (ADS)

    Srini, Vason P.; Pini, David; Armstrong, Matt D.; Alalusi, Sayf H.; Thendean, John; Ueng, Sain-Zee; Bushong, David P.; Borowski, Erek S.; Chao, Elaine; Rabaey, Jan M.

    1997-09-01

    A computer systems architecture for processing medical images and other data coming over the Web is proposed. The architecture comprises a Java engine for communicating images over the Internet, storing data in local memory, doing floating point calculations, and a coprocessor MIMD parallel DSP for doing fine-grained operations found in video, graphics, and image processing applications. The local memory is shared between the Java engine and the parallel DSP. Data coming from the Web is stored in the local memory. This approach avoids the frequent movement of image data between a host processor's memory and an image processor's memory, found in many image processing systems. A low-power and high-performance parallel DSP architecture containing many processors interconnected by a segmented hierarchical network has been developed. The instruction set of the 16-bit processor supports video, graphics, and image processing calculations. Two's complement arithmetic, saturation arithmetic, and packed instructions are supported. Higher data precision such as 32-bit and 64-bit can be achieved by cascading processors. A VLSI chip implementation of the architecture containing 64 processors organized in 16 clusters and interconnected by a statically programmable hierarchical bus is in progress. The buses are segmentable by programming switches on the bus. The instruction memory of each processor has sixteen 40-bit words. Data streaming through the processor is manipulated by the instructions. Multiple operations can be performed in a single cycle in a processor. A low-power handshake protocol is used for synchronization between the sender and the receiver of data. Temporary storage for data and filter coefficients is provided in each chip. A 256 by 16 memory unit is included in each of the 16 clusters. The memory unit can be used as a delay line, FIFO, lookup table or random access memory. The architecture is scalable with technology. Portable multimedia terminals like U

  17. Computer image processing in marine resource exploration

    NASA Technical Reports Server (NTRS)

    Paluzzi, P. R.; Normark, W. R.; Hess, G. R.; Hess, H. D.; Cruickshank, M. J.

    1976-01-01

    Pictographic data or imagery is commonly used in marine exploration. Pre-existing image processing techniques (software) similar to those used on imagery obtained from unmanned planetary exploration were used to improve marine photography and side-scan sonar imagery. Features and details not visible by conventional photo processing methods were enhanced by filtering and noise removal on selected deep-sea photographs. Information gained near the periphery of photographs allows improved interpretation and facilitates construction of bottom mosaics where overlapping frames are available. Similar processing techniques were applied to side-scan sonar imagery, including corrections for slant range distortion, and along-track scale changes. The use of digital data processing and storage techniques greatly extends the quantity of information that can be handled, stored, and processed.
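
    The slant-range correction mentioned above assumes, in its simplest flat-seafloor form, a right triangle between the towfish altitude and the slant range to each echo. A minimal sketch (the actual processing also handled along-track scale changes):

```python
import math

def slant_to_ground_range(slant_range, towfish_altitude):
    """Flat-seafloor slant-range correction for side-scan sonar:
    ground range is the horizontal leg of the right triangle formed
    by the slant range (hypotenuse) and the towfish altitude."""
    if slant_range < towfish_altitude:
        return 0.0  # echo arrives from the water-column "blind zone"
    return math.sqrt(slant_range**2 - towfish_altitude**2)

ground = slant_to_ground_range(50.0, 30.0)  # 3-4-5 triangle -> 40.0 m
```

    Applying this per-sample remapping across each scan line removes the characteristic near-nadir compression of raw side-scan imagery.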

  18. Adaptation of the communicative brain to post-lingual deafness. Evidence from functional imaging.

    PubMed

    Lazard, Diane S; Innes-Brown, Hamish; Barone, Pascal

    2014-01-01

    Not having access to one sense profoundly modifies our interactions with the environment, in turn producing changes in brain organization. Deafness and its rehabilitation by cochlear implantation offer a unique model of brain adaptation during sensory deprivation and recovery. Functional imaging allows the study of brain plasticity as a function of the times of deafness and implantation. Even long after the end of the sensitive period for auditory brain physiological maturation, some plasticity may be observed. In this way the mature brain that becomes deaf after language acquisition can adapt to its modified sensory inputs. Oral communication difficulties induced by post-lingual deafness shape cortical reorganization of brain networks already specialized for processing oral language. Left hemisphere language specialization tends to be more preserved than functions of the right hemisphere. We hypothesize that the right hemisphere offers cognitive resources re-purposed to palliate difficulties in left hemisphere speech processing due to sensory and auditory memory degradation. If cochlear implantation is considered, this reorganization during deafness may influence speech understanding outcomes positively or negatively. Understanding brain plasticity during post-lingual deafness should thus inform the development of cognitive rehabilitation, which promotes positive reorganization of the brain networks that process oral language before surgery. This article is part of a Special Issue entitled Human Auditory Neuroimaging. PMID:23973562

  20. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods

    PubMed Central

    Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
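
    The interleaving of an artifact-removal step with data-consistency gradient updates can be sketched as below. A simple magnitude soft-thresholding denoiser stands in for the paper's kernel-PCA block denoiser, and all array sizes are illustrative; note that each gradient step restores exact agreement with the acquired k-space samples.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((32, 32))                 # stand-in for a cine frame
mask = rng.random((32, 32)) < 0.4          # retrospective undersampling
y = mask * np.fft.fft2(img)                # "acquired" k-space data

def soft_threshold(x, lam):
    # Placeholder denoiser; the paper uses kernel PCA on image block arrays.
    mag = np.abs(x)
    return x * np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)

x = np.fft.ifft2(y)                        # zero-filled starting image
for _ in range(20):
    x = soft_threshold(x, 1e-3)            # artifact-removal step
    # gradient update enforcing consistency with acquired k-space data
    x = x + np.fft.ifft2(mask * (y - mask * np.fft.fft2(x)))

residual = np.linalg.norm(mask * np.fft.fft2(x) - y)
```

    Because the sampling mask is idempotent, the gradient update projects the iterate back onto the set of images consistent with the measured data, so the residual stays at floating-point level after every iteration.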

  1. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, Kevin L.; Baum, Christopher C.; Jones, Roger D.

    1997-01-01

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data.
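
    The receding-horizon idea behind model predictive control can be sketched as follows. A known linear model stands in here for the patent's trained neural network predictor, and all dynamics, names, and parameters are illustrative:

```python
import numpy as np

def plant(x, u):
    return 0.9 * x + 0.5 * u          # "true" plant dynamics

def model(x, u):
    return 0.9 * x + 0.5 * u          # predictor (the neural net's role)

def mpc_step(x, setpoint, candidates=np.linspace(-1, 1, 201)):
    # choose the control input whose one-step prediction is closest
    # to the setpoint (a one-step horizon, exhaustive-search MPC)
    costs = (model(x, candidates) - setpoint) ** 2
    return candidates[np.argmin(costs)]

x, setpoint = 0.0, 1.0
for _ in range(30):
    u = mpc_step(x, setpoint)          # plan using the model...
    x = plant(x, u)                    # ...apply to the plant, repeat
```

    The adaptive element of the patent corresponds to retraining `model` on-line so its predictions track drift in `plant`.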

  2. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  3. Non-linear, adaptive array processing for acoustic interference suppression.

    PubMed

    Hoppe, Elizabeth; Roan, Michael

    2009-06-01

    A method is introduced where blind source separation of acoustical sources is combined with spatial processing to remove non-Gaussian, broadband interferers from space-time displays such as bearing track recorder displays. This differs from most standard techniques, such as generalized sidelobe cancellers, in that the separation of signals is not done spatially. The algorithm's performance is compared to adaptive beamforming techniques such as minimum variance distortionless response beamforming. Simulations and experiments using two acoustic sources were used to verify the performance of the algorithm. Simulations were also used to determine the effectiveness of the algorithm under various signal-to-interference, signal-to-noise, and array-geometry conditions. A voice activity detection algorithm was used to benchmark the performance of the source isolation.
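
    The minimum variance distortionless response (MVDR) beamformer used as the comparison computes weights w = R^-1 a / (a^H R^-1 a), which pass the look direction with unit gain while minimizing interference-plus-noise power. A minimal narrowband sketch, assuming a uniform linear array with half-wavelength spacing (all geometry is illustrative):

```python
import numpy as np

n_sensors, d, wavelength = 8, 0.5, 1.0      # half-wavelength spacing

def steering(theta_deg):
    """Narrowband steering vector for a uniform linear array."""
    n = np.arange(n_sensors)
    phase = 2j * np.pi * d / wavelength * n * np.sin(np.radians(theta_deg))
    return np.exp(phase)

a_sig = steering(0.0)                       # look direction (broadside)
a_int = steering(40.0)                      # interferer direction
R = (10.0 * np.outer(a_int, a_int.conj())   # interference covariance
     + np.eye(n_sensors))                   # plus unit sensor noise

w = np.linalg.solve(R, a_sig)               # w = R^-1 a / (a^H R^-1 a)
w = w / (a_sig.conj() @ w)

distortionless = w.conj() @ a_sig           # equals 1 by construction
interferer_gain = abs(w.conj() @ a_int)     # strongly suppressed
```

    The blind-source-separation approach of the paper differs precisely in that it does not rely on this kind of spatial null-steering.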

  4. Analysis of physical processes via imaging vectors

    NASA Astrophysics Data System (ADS)

    Volovodenko, V.; Efremova, N.; Efremov, V.

    2016-06-01

    Practically all modeling processes are in one way or another random. The foremost theoretical foundation is the theory of Markov processes, which can be represented in different forms. A Markov process is a random process that undergoes transitions from one state to another on a state space, where the probability distribution of the next state depends only on the current state and not on the sequence of events that preceded it. In a Markov process, the model of the future therefore does not change when additional information about preceding times becomes available. Modeling physical fields generally involves processes that change in time, i.e. non-stationary processes. In this case, applying the Laplace transformation introduces unjustified complications into the description, whereas a transition to other representations yields an explicit simplification. The method of imaging vectors provides constructive mathematical models and the necessary transitions within the modeling process and the analysis itself. The flexibility of a model built on a polynomial basis allows rapid modification of the mathematical model and accelerates further analysis. It should be noted that the mathematical description permits an operator representation; conversely, operator representation of the structures, algorithms and data-processing procedures significantly improves the flexibility of the modeling process.
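
    The Markov property described above can be illustrated with a toy two-state chain: the state distribution evolves by one matrix multiplication per step and forgets its initial condition, converging to the stationary distribution pi satisfying pi = pi P.

```python
import numpy as np

# Transition matrix P: P[i, j] is the probability of moving
# from state i to state j in one step.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

dist = np.array([1.0, 0.0])        # start in state 0 with certainty
for _ in range(100):
    dist = dist @ P                # next distribution depends only on current

# dist converges to the stationary distribution pi = (5/6, 1/6),
# independent of the starting state.
```

    The rate of forgetting is governed by the second eigenvalue of P (here 0.4), so a hundred steps is far more than enough for convergence.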

  5. Low-Light Image Enhancement Using Adaptive Digital Pixel Binning

    PubMed Central

    Yoo, Yoonjong; Im, Jaehyun; Paik, Joonki

    2015-01-01

    This paper presents an image enhancement algorithm for low-light scenes in an environment with insufficient illumination. Simple amplification of intensity exhibits various undesired artifacts: noise amplification, intensity saturation, and loss of resolution. In order to enhance low-light images without undesired artifacts, a novel digital binning algorithm is proposed that considers brightness, context, noise level, and anti-saturation of a local region in the image. The proposed algorithm does not require any modification of the image sensor or additional frame-memory; it needs only two line-memories in the image signal processor (ISP). Since the proposed algorithm does not use an iterative computation, it can be easily embedded in an existing digital camera ISP pipeline containing a high-resolution image sensor. PMID:26121609
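
    The idea of trading spatial resolution for brightness without saturating can be sketched with a plain 2x2 binning step that respects the two-line-memory constraint; the adaptive weighting by local context and noise level in the published algorithm is omitted here, and all values are illustrative.

```python
import numpy as np

def binned_gain(row_above, row_current, gain):
    """Toy 2x2 digital binning: brighten by averaging a 2x2 neighborhood
    (suppressing noise) instead of amplifying single pixels, then clip
    to the 8-bit range to avoid saturation artifacts. Only the current
    and previous image rows are needed, matching a two-line-memory ISP."""
    binned = (row_above[0::2].astype(np.float64) + row_above[1::2]
              + row_current[0::2] + row_current[1::2]) / 4.0
    return np.clip(gain * binned, 0, 255).astype(np.uint8)

above = np.array([10, 14, 200, 240], dtype=np.uint8)
current = np.array([12, 16, 220, 250], dtype=np.uint8)
out = binned_gain(above, current, gain=4.0)   # dark pixels boosted, bright clipped
```

    The averaging divides pixel noise by roughly the square root of the bin size before the gain is applied, which is the basic mechanism the adaptive algorithm refines per region.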

  7. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Blankenhorn, D. H.; Beckenbach, E. S.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    A computer image processing technique was developed to estimate the degree of atherosclerosis in the human femoral artery. With an angiographic film of the vessel as input, the computer was programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements were combined into an atherosclerosis index, which was found to correlate well with both visual and chemical estimates of atherosclerotic disease.

  8. Novel image processing approach to detect malaria

    NASA Astrophysics Data System (ADS)

    Mas, David; Ferrer, Belen; Cojoc, Dan; Finaurini, Sara; Mico, Vicente; Garcia, Javier; Zalevsky, Zeev

    2015-09-01

    In this paper we present a novel image processing algorithm providing good preliminary capabilities for in vitro detection of malaria. The proposed concept is based upon analysis of the temporal variation of each pixel. Changes in dark pixels indicate that intracellular activity has occurred, signaling the presence of the malaria parasite inside the cell. Preliminary experimental results, involving analysis of red blood cells that were either healthy or infected with malaria parasites, validated the potential benefit of the proposed numerical approach.
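
    The per-pixel temporal analysis can be sketched as computing the temporal standard deviation over a frame stack and thresholding it; the noise levels, patch location, and threshold below are illustrative, not the paper's calibrated values.

```python
import numpy as np

# Pixels inside an infected cell flicker over the frame stack, while
# healthy, static regions stay nearly constant over time.
rng = np.random.default_rng(0)
frames = np.full((50, 16, 16), 40.0)                 # static background
frames += rng.normal(0.0, 0.5, frames.shape)         # sensor noise everywhere
frames[:, 6:10, 6:10] += rng.normal(0.0, 6.0, (50, 4, 4))  # "active" patch

temporal_std = frames.std(axis=0)                    # variation per pixel
active = temporal_std > 3.0                          # candidate parasite pixels
```

    Thresholding the temporal variation map flags exactly the flickering patch while leaving the static background untouched.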

  9. IPLIB (Image processing library) user's manual

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.; Monteith, J. H.; Miller, K.

    1985-01-01

    IPLIB is a collection of HP FORTRAN 77 subroutines and functions that facilitate the use of a COMTAL image processing system driven by an HP-1000 computer. It is intended for programmers who want to use the HP 1000 to drive the COMTAL Vision One/20 system. It is assumed that the programmer knows HP 1000 FORTRAN 77 or at least one FORTRAN dialect. It is also assumed that the programmer has some familiarity with the COMTAL Vision One/20 system.

  10. Detecting content adaptive scaling of images for forensic applications

    NASA Astrophysics Data System (ADS)

    Fillion, Claude; Sharma, Gaurav

    2010-01-01

    Content-aware resizing methods have recently been developed, among which, seam-carving has achieved the most widespread use. Seam-carving's versatility enables deliberate object removal and benign image resizing, in which perceptually important content is preserved. Both types of modifications compromise the utility and validity of the modified images as evidence in legal and journalistic applications. It is therefore desirable that image forensic techniques detect the presence of seam-carving. In this paper we address detection of seam-carving for forensic purposes. As in other forensic applications, we pose the problem of seam-carving detection as the problem of classifying a test image in either of two classes: a) seam-carved or b) non-seam-carved. We adopt a pattern recognition approach in which a set of features is extracted from the test image and then a Support Vector Machine based classifier, trained over a set of images, is utilized to estimate which of the two classes the test image lies in. Based on our study of the seam-carving algorithm, we propose a set of intuitively motivated features for the detection of seam-carving. Our methodology for detection of seam-carving is then evaluated over a test database of images. We demonstrate that the proposed method provides the capability for detecting seam-carving with high accuracy. For images which have been reduced 30% by benign seam-carving, our method provides a classification accuracy of 91%.
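
    The detection features are motivated by the seam-carving algorithm itself, whose core is a dynamic program that finds the 8-connected top-to-bottom path of minimum cumulative energy. A minimal sketch of that step (the energy map would normally be a gradient magnitude):

```python
import numpy as np

def min_vertical_seam(energy):
    """Dynamic-programming core of seam-carving: find the 8-connected
    top-to-bottom path of least total energy, one column index per row."""
    h, w = energy.shape
    cost = energy.astype(np.float64).copy()
    for i in range(1, h):
        left = np.r_[np.inf, cost[i - 1, :-1]]    # upper-left neighbor
        right = np.r_[cost[i - 1, 1:], np.inf]    # upper-right neighbor
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    # backtrack from the cheapest bottom-row cell
    seam = [int(np.argmin(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(np.argmin(cost[i, lo:hi])))
    return seam[::-1]

energy = np.array([[9, 1, 9],
                   [9, 9, 1],
                   [9, 1, 9]])
seam = min_vertical_seam(energy)   # threads through the low-energy cells
```

    Removing such seams preferentially deletes low-energy pixels, and it is the statistical traces of this selective removal that the proposed features are designed to detect.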

  11. Sorting Olive Batches for the Milling Process Using Image Processing

    PubMed Central

    Puerto, Daniel Aguilera; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan

    2015-01-01

    The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, and to classify automatically different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different species have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained on the basis of the olive image histograms. Moreover, different image preprocessing has been employed, and two classification techniques have been used: these are discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results. PMID:26147729

  13. Color Image Processing and Object Tracking System

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high resolution digital camera mounted on a x-y-z micro-positioning stage, an S-VHS tapedeck, an Hi8 tapedeck, video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

  14. Multi-focus image fusion algorithm based on adaptive PCNN and wavelet transform

    NASA Astrophysics Data System (ADS)

    Wu, Zhi-guo; Wang, Ming-jia; Han, Guang-liang

    2011-08-01

    Being an efficient method of information fusion, image fusion has been used in many fields such as machine vision, medical diagnosis, military applications and remote sensing. In this paper, the Pulse Coupled Neural Network (PCNN) is introduced to this research field for its interesting properties in image processing, including segmentation and target recognition, and a novel algorithm based on the PCNN and the wavelet transform for multi-focus image fusion is proposed. First, the two original images are decomposed by the wavelet transform. Then, based on the PCNN, a fusion rule in the wavelet domain is given. The algorithm uses the wavelet coefficients in each frequency band as the linking strength, so that their values can be chosen adaptively. The wavelet coefficients are mapped to the image gray-scale range, and the output threshold function attenuates toward the minimum gray level over time, so that every coefficient eventually fires. The output of the PCNN at each iteration is thus the set of wavelet coefficients firing at that threshold level, and the firing-time sequence of each neuron, mapped back to the gray-scale range, forms a firing-time map that indicates whether a neuron corresponds to a salient feature. The fusion coefficients are decided by a compare-selection operator applied to the firing-time gradient maps, and the fused image is reconstructed by the inverse wavelet transform. To sufficiently reflect the order of firing times, the threshold adjusting constant αΘ is estimated from a specified iteration number, so that every wavelet coefficient has fired once the iterations complete. Experiments on multi-focus images verify the effectiveness of the proposed rules.
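
    A common simple baseline for wavelet-domain fusion, shown here as a stand-in for the paper's PCNN firing-time rule, averages the approximation coefficients and keeps whichever detail coefficient has larger magnitude (one Haar level, numpy only):

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar transform: approximation plus
    horizontal, vertical and diagonal detail subbands."""
    p, q = img[0::2, 0::2], img[0::2, 1::2]
    r, s = img[1::2, 0::2], img[1::2, 1::2]
    return (p + q + r + s) / 4, (p - q + r - s) / 4, \
           (p + q - r - s) / 4, (p - q - r + s) / 4

def ihaar2d(a, h, v, d):
    """Exact inverse of haar2d."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse(img1, img2):
    c1, c2 = haar2d(img1), haar2d(img2)
    a = (c1[0] + c2[0]) / 2                           # average approximations
    details = [np.where(np.abs(x) >= np.abs(y), x, y) # keep sharper detail
               for x, y in zip(c1[1:], c2[1:])]
    return ihaar2d(a, *details)
```

    In-focus regions yield larger detail coefficients, so this max-absolute rule (like the PCNN rule it approximates) copies the sharper region into the fused result.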

  15. The effect of retinal image error update rate on human vestibulo-ocular reflex gain adaptation.

    PubMed

    Fadaee, Shannon B; Migliaccio, Americo A

    2016-04-01

    The primary function of the angular vestibulo-ocular reflex (VOR) is to stabilise images on the retina during head movements. Retinal image movement is the likely feedback signal that drives VOR modification/adaptation for different viewing contexts. However, it is not clear whether a retinal image position or velocity error is used primarily as the feedback signal. Recent studies examining this signal are limited because they used near viewing to modify the VOR. However, it is not known whether near viewing drives VOR adaptation or is a pre-programmed contextual cue that modifies the VOR. Our study is based on analysis of the VOR evoked by horizontal head impulses during an established adaptation task. Fourteen human subjects underwent incremental unilateral VOR adaptation training and were tested using the scleral search coil technique over three separate sessions. The update rate of the laser target position (source of the retinal image error signal) used to drive VOR adaptation was different for each session [50 (once every 20 ms), 20 and 15/35 Hz]. Our results show unilateral VOR adaptation occurred at 50 and 20 Hz for both the active (23.0 ± 9.6 and 11.9 ± 9.1% increase on adapting side, respectively) and passive VOR (13.5 ± 14.9, 10.4 ± 12.2%). At 15 Hz, unilateral adaptation no longer occurred in the subject group for both the active and passive VOR, whereas individually, 4/9 subjects tested at 15 Hz had significant adaptation. Our findings suggest that 1-2 retinal image position error signals every 100 ms (i.e. target position update rate 15-20 Hz) are sufficient to drive VOR adaptation. PMID:26715411

  17. Automated synthesis of image processing procedures using AI planning techniques

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.
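The planning idea can be illustrated with a toy forward-search planner over image-processing operators; the operator names, preconditions and effects below are invented for illustration and are not MVP's actual operator models:

```python
from collections import deque

# Each operator: name -> (preconditions, effects), both sets of state facts.
OPERATORS = {
    "radiometric_correction": ({"raw"}, {"radiometrically_corrected"}),
    "geometric_correction":   ({"radiometrically_corrected"}, {"geometrically_corrected"}),
    "map_projection":         ({"geometrically_corrected"}, {"projected"}),
}

def plan(initial, goal):
    """Breadth-first search for a shortest operator sequence reaching the goal facts."""
    start = frozenset(initial)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        for name, (pre, eff) in OPERATORS.items():
            if pre <= state:
                nxt = frozenset(state | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None
```

Given only the fact `raw` and the goal `projected`, the search derives the two unspecified intermediate steps and returns the three operators in dependency order.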

  18. High-resolution adaptive imaging of a single atom

    NASA Astrophysics Data System (ADS)

    Wong-Campos, J. D.; Johnson, K. G.; Neyenhuis, B.; Mizrahi, J.; Monroe, C.

    2016-09-01

    Optical imaging systems are used extensively in the life and physical sciences because of their ability to non-invasively capture details on the microscopic and nanoscopic scales. Such systems are often limited by source or detector noise, image distortions and human operator misjudgement. Here, we report a general, quantitative method to analyse and correct these errors. We use this method to identify and correct optical aberrations in an imaging system for single atoms and realize an atomic position sensitivity of ~0.5 nm Hz^(-1/2) with a minimum uncertainty of 1.7 nm, allowing the direct imaging of atomic motion. This is the highest position sensitivity ever measured for an isolated atom and opens up the possibility of performing out-of-focus three-dimensional particle tracking, imaging of atoms in three-dimensional optical lattices or sensing forces at the yoctonewton (10^(-24) N) scale.

  20. Fast and adaptive method for SAR superresolution imaging based on point scattering model and optimal basis selection.

    PubMed

    Wang, Zheng-ming; Wang, Wei-wei

    2009-07-01

    A novel fast and adaptive method for synthetic aperture radar (SAR) superresolution imaging is developed. Based on the point scattering model in the phase history domain, a dictionary is constructed so that the superresolution imaging process can be converted to a problem of sparse parameter estimation. The approximate orthogonality of this dictionary is established by theoretical derivation and experimental verification. Based on the orthogonality of the dictionary, we propose a fast algorithm for basis selection. Meanwhile, a threshold for obtaining the number and positions of the scattering centers is determined automatically from the inner-product curves of the bases and the observed data. Furthermore, the sensitivity of the estimation performance to this threshold is analyzed. To reduce the computational and memory burden, a simplified superresolution imaging process is designed according to the characteristics of the imaging parameters. The experimental results on simulated images and an MSTAR image illustrate the validity of this method and its robustness at high noise levels. Compared with the traditional regularization method with a sparsity constraint, the proposed method has lower computational complexity and better adaptability.
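The basis-selection idea, that near-orthogonality of the dictionary lets scattering centres show up directly as large inner products between atoms and the data, can be sketched as follows (the unitary-DFT dictionary and the threshold value here are illustrative stand-ins, not the paper's phase-history dictionary):

```python
import numpy as np

def build_dictionary(n):
    """Unitary DFT dictionary: columns are orthonormal complex exponentials."""
    k = np.arange(n)
    return np.exp(2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)

def select_bases(y, D, threshold):
    """Pick atoms whose correlation with the data exceeds the threshold.

    For an (approximately) orthonormal dictionary the inner products D^H y
    are (approximately) the atom amplitudes, so one pass of correlations
    suffices instead of an iterative sparse solver."""
    corr = np.abs(D.conj().T @ y)
    idx = np.flatnonzero(corr > threshold)
    amps = D[:, idx].conj().T @ y          # amplitudes of the selected atoms
    return idx, amps

n = 64
D = build_dictionary(n)
y = 3.0 * D[:, 5] + 2.0 * D[:, 12]        # two simulated "scattering centres"
idx, amps = select_bases(y, D, threshold=1.0)
```

Here `idx` recovers the two atom positions and `amps` their amplitudes in a single correlation pass.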

  1. Determining the imaging plane of a retinal capillary layer in adaptive optical imaging

    NASA Astrophysics Data System (ADS)

    Yang, Le-Bao; Hu, Li-Fa; Li, Da-Yu; Cao, Zhao-Liang; Mu, Quan-Quan; Ma, Ji; Xuan, Li

    2016-09-01

    Even in the early stage, endocrine metabolism disease may lead to micro aneurysms in retinal capillaries whose diameters are less than 10 μm. However, the fundus cameras used in clinic diagnosis can only obtain images of vessels larger than 20 μm in diameter. The human retina is a thin and multiple layer tissue, and the layer of capillaries less than 10 μm in diameter only exists in the inner nuclear layer. The layer thickness of capillaries less than 10 μm in diameter is about 40 μm and the distance range to rod&cone cell surface is tens of micrometers, which varies from person to person. Therefore, determining reasonable capillary layer (CL) position in different human eyes is very difficult. In this paper, we propose a method to determine the position of retinal CL based on the rod&cone cell layer. The public positions of CL are recognized with 15 subjects from 40 to 59 years old, and the imaging planes of CL are calculated by the effective focal length of the human eye. High resolution retinal capillary imaging results obtained from 17 subjects with a liquid crystal adaptive optics system (LCAOS) validate our method. All of the subjects’ CLs have public positions from 127 μm to 147 μm from the rod&cone cell layer, which is influenced by the depth of focus. Project supported by the National Natural Science Foundation of China (Grant Nos. 11174274, 11174279, 61205021, 11204299, 61475152, and 61405194).

  3. FITSH- a software package for image processing

    NASA Astrophysics Data System (ADS)

    Pál, András

    2012-04-01

    In this paper we describe the main features of the software package named FITSH, intended to provide a standalone environment for analysis of data acquired by imaging astronomical detectors. The package both provides utilities for the full pipeline of subsequent related data-processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple-image combinations, spatial transformations and interpolations) and aids the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The set of utilities found in this package is built on top of the commonly used UNIX/POSIX shells (hence the name of the package); therefore, both frequently used and well-documented tools for such environments can be exploited and managing a massive amount of data is rather convenient.

  4. Darkfield adapter for whole slide imaging: adapting a darkfield internal reflection illumination system to extend WSI applications.

    PubMed

    Kawano, Yoshihiro; Higgins, Christopher; Yamamoto, Yasuhito; Nyhus, Julie; Bernard, Amy; Dong, Hong-Wei; Karten, Harvey J; Schilling, Tobias

    2013-01-01

    We present a new method for whole slide darkfield imaging. Whole Slide Imaging (WSI), also sometimes called virtual slide or virtual microscopy technology, produces images that simultaneously provide high resolution and a wide field of observation that can encompass the entire section, extending far beyond any single field of view. For example, a brain slice can be imaged so that both overall morphology and individual neuronal detail can be seen. We extended the capabilities of traditional whole slide systems and developed a prototype system for darkfield internal reflection illumination (DIRI). Our darkfield system uses an ultra-thin light-emitting diode (LED) light source to illuminate slide specimens from the edge of the slide. We used a new type of side illumination, a variation on the internal reflection method, to illuminate the specimen and create a darkfield image. This system has four main advantages over traditional darkfield: (1) no oil condenser is required for high-resolution imaging; (2) there is less scatter from dust and dirt on the slide specimen; (3) there is less halo, providing a more natural darkfield contrast image; and (4) the motorized system produces darkfield, brightfield and fluorescence images. The WSI method sometimes allows us to image using fewer stains. For instance, diaminobenzidine (DAB) and fluorescent staining are helpful tools for observing protein localization and volume in tissues. However, these methods usually require counter-staining in order to visualize tissue structure, limiting the accuracy of localization of labeled cells within the complex multiple regions of typical neurohistological preparations. Darkfield imaging works on the basis of light scattering from refractive index mismatches in the sample. It is a label-free method of producing contrast in a sample. We propose that adapting darkfield imaging to WSI is very useful, particularly when researchers require additional structural information without the use of

  5. Automated Coronal Loop Identification Using Digital Image Processing Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Jong K.; Gary, G. Allen; Newman, Timothy S.

    2003-01-01

    The results of a master's thesis project on a study of computer algorithms for automatic identification of optically-thin, 3-dimensional solar coronal loop centers from extreme ultraviolet and X-ray 2-dimensional images will be presented. These center splines are proxies of associated magnetic field lines. The project is a pattern recognition problem in which there are no unique shapes or edges and in which photon and detector noise heavily influence the images. The study explores extraction techniques using: (1) linear feature recognition of local patterns (related to the inertia-tensor concept), (2) parametric space via the Hough transform, and (3) topological adaptive contours (snakes) that constrain curvature and continuity, as possible candidates for digital loop detection schemes. We have developed synthesized images of coronal loops to test the various loop identification algorithms. Since the topology of these solar features is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information for the identification process. Results from both synthesized and solar images will be presented.
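Of the three techniques listed, the Hough transform is the easiest to sketch. The minimal line detector below (pure NumPy; grid sizes and the test line are illustrative choices) accumulates votes in (theta, rho) space, where a line is parameterised as rho = x·cos(theta) + y·sin(theta):

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_max=64):
    """Vote each feature point into a (theta, rho) accumulator.

    Collinear points pile their votes into the same accumulator cell, so
    line candidates appear as peaks."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, 2 * rho_max + 1), dtype=int)
    for x, y in points:
        rho = np.rint(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[np.arange(n_theta), rho + rho_max] += 1
    return acc, thetas

# Points on the vertical line x = 5 should all vote for (theta = 0, rho = 5).
pts = [(5, y) for y in range(20)]
acc, thetas = hough_lines(pts)
t_idx, r_idx = np.unravel_index(np.argmax(acc), acc.shape)
```

The peak of the accumulator recovers the line's parameters; real loop detection would first extract edge or ridge points and use a curved parameterisation rather than straight lines.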

  6. Recasting Hope: a process of adaptation following fetal anomaly diagnosis.

    PubMed

    Lalor, Joan; Begley, Cecily M; Galavan, Eoin

    2009-02-01

    Recent decades have seen ultrasound revolutionise the management of pregnancy and its possible complications. However, somewhat less consideration has been given to the psychosocial consequences of mass screening resulting in fetal anomaly detection in low-risk populations, particularly in contexts where termination of pregnancy services are not readily accessible. A grounded theory study was conducted exploring forty-one women's experiences of ultrasound diagnosis of fetal abnormality up to and beyond the birth in the Republic of Ireland. Thirty-one women chose to continue the pregnancy and ten women accessed termination of pregnancy services outside the state. Data were collected using repeated in-depth individual interviews pre- and post-birth and analysed using the constant comparative method. Recasting Hope, the process of adaptation following diagnosis is represented temporally as four phases: 'Assume Normal', 'Shock', 'Gaining Meaning' and 'Rebuilding'. Some mothers expressed a sense of incredulity when informed of the anomaly and the 'Assume Normal' phase provides an improved understanding as to why women remain unprepared for an adverse diagnosis. Transition to phase 2, 'Shock,' is characterised by receiving the diagnosis and makes explicit women's initial reactions. Once the diagnosis is confirmed, a process of 'Gaining Meaning' commences, whereby an attempt to make sense of this ostensibly negative event begins. 'Rebuilding', the final stage in the process, is concerned with the extent to which women recover from the loss and resolve the inconsistency between their experience and their previous expectations of pregnancy in particular and beliefs in the world in general. This theory contributes to the theoretical field of thanatology as applied to the process of grieving associated with the loss of an ideal child. The framework of Recasting Hope is intended for use as a tool to assist health professionals through offering simple yet effective

  7. Vector processing enhancements for real-time image analysis.

    SciTech Connect

    Shoaf, S.; APS Engineering Support Division

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
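The kind of speed-up involved can be illustrated in NumPy (standing in here for the G5's vector hardware): a per-pixel Python loop replaced by a single whole-array expression, with bit-identical results. The thresholding operation is an invented stand-in for the beam-diagnostic routines:

```python
import numpy as np

def threshold_loop(img, lo):
    """Scalar reference implementation: visit every pixel one at a time."""
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if img[i, j] > lo:
                out[i, j] = img[i, j]
    return out

def threshold_vec(img, lo):
    """Vectorised equivalent: one elementwise operation over the whole frame."""
    return np.where(img > lo, img, 0)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64))
assert np.array_equal(threshold_loop(frame, 128), threshold_vec(frame, 128))
```

The vectorised form processes the frame in a handful of SIMD-friendly passes instead of one Python-level branch per pixel, which is the same trade the original system made with AltiVec routines.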

  8. Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.

    PubMed

    Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael

    2016-07-01

    'Visibility' is a fundamental optical property that represents the observable, by users, proportion of the voxels in a volume during interactive volume rendering. The manipulation of this 'visibility' improves the volume rendering processes; for instance by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering view-point. The construction of visibility histograms (VHs), which represent the distribution of the visibility of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume rendered medical images have been a primary beneficiary of VHs given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of VHs to medical images, which have large intensity ranges and volume dimensions and therefore require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins are used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of modern graphics processing units (GPUs) and this enables efficient computation of the histogram.
We show the application of our method to single-modality computed tomography (CT), magnetic resonance
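The adaptive binning can be sketched with a one-dimensional k-means over voxel intensities; each cluster becomes one histogram bin and the per-voxel visibilities are accumulated into it. This is an illustrative CPU version of the idea (the paper's cluster analysis runs on the GPU, and the function names here are invented):

```python
import numpy as np

def adaptive_bins(intensity, k, iters=20):
    """1-D k-means: cluster voxel intensities into k adaptive bins."""
    centres = np.linspace(intensity.min(), intensity.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(intensity[:, None] - centres[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):        # keep empty clusters where they are
                centres[c] = intensity[labels == c].mean()
    return labels, centres

def binned_visibility(intensity, visibility, k):
    """Accumulate per-voxel visibility into the adaptive bins (the AB-VH)."""
    labels, centres = adaptive_bins(intensity, k)
    hist = np.bincount(labels, weights=visibility, minlength=k)
    return centres, hist
```

A useful invariant: the binned histogram redistributes but never loses visibility, so its sum equals the total per-voxel visibility.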

  9. Portable EDITOR (PEDITOR): A portable image processing system. [satellite images

    NASA Technical Reports Server (NTRS)

    Angelici, G.; Slye, R.; Ozga, M.; Ritter, P.

    1986-01-01

    The PEDITOR image processing system was created to be readily transferable from one type of computer system to another. While nearly identical in function and operation to its predecessor, EDITOR, PEDITOR employs additional techniques which greatly enhance its portability. These cover system structure and processing. In order to confirm the portability of the software system, two different types of computer systems running greatly differing operating systems were used as target machines. A DEC-20 computer running the TOPS-20 operating system and using a Pascal Compiler was utilized for initial code development. The remaining programmers used a Motorola Corporation 68000-based Forward Technology FT-3000 supermicrocomputer running the UNIX-based XENIX operating system and using the Silicon Valley Software Pascal compiler and the XENIX C compiler for their initial code development.

  10. The Airborne Ocean Color Imager - System description and image processing

    NASA Technical Reports Server (NTRS)

    Wrigley, Robert C.; Slye, Robert E.; Klooster, Steven A.; Freedman, Richard S.; Carle, Mark; Mcgregor, Lloyd F.

    1992-01-01

    The Airborne Ocean Color Imager was developed as an aircraft instrument to simulate the spectral and radiometric characteristics of the next generation of satellite ocean color instrumentation. Data processing programs have been developed as extensions of the Coastal Zone Color Scanner algorithms for atmospheric correction and bio-optical output products. The latter include several bio-optical algorithms for estimating phytoplankton pigment concentration, as well as one for the diffuse attenuation coefficient of the water. Additional programs have been developed to geolocate these products and remap them into a georeferenced data base, using data from the aircraft's inertial navigation system. Examples illustrate the sequential data products generated by the processing system, using data from flightlines near the mouth of the Mississippi River: from raw data to atmospherically corrected data, to bio-optical data, to geolocated data, and, finally, to georeferenced data.

  11. Image processing on MPP-like arrays

    SciTech Connect

    Coletti, N.B.

    1983-01-01

    The desirability and suitability of using very large arrays of processors such as the Massively Parallel Processor (MPP) for processing remotely sensed images is investigated. The dissertation can be broken into two areas. The first area is the mathematical analysis of emulating the Bitonic Sorting Network on an array of processors. This sort is useful in histogramming images that have a very large number of pixel values (or gray levels). The optimal number of routing steps required to emulate an N = 2^k x 2^k element network on a 2^n x 2^n array (k ≤ n ≤ 7), provided each processor contains one element before and after every merge sequence, is proved to be 14√N - 4log₂N - 14. Several already-existing emulations achieve this lower bound. The number of elements sorted dictates a particular sorting network, and hence the number of routing steps. It is established that the cardinality N = (3/4)·2^(2n) elements uses the absolute minimum of routing steps, 8√3·√N - 4log₂N - (20 - 4log₂3). An algorithm achieving this bound is presented. The second area covers the implementations of the image processing tasks. In particular, the histogramming of large numbers of gray levels, geometric distortion determination and its efficient correction, fast Fourier transforms, and statistical clustering are investigated.
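The two routing-step bounds can be checked numerically; for a 2^n x 2^n array both expressions reduce to integers, which is a quick consistency check on the transcribed formulas:

```python
import math

def routing_full(n):
    """14*sqrt(N) - 4*log2(N) - 14 with N = 2**(2n) elements (full array)."""
    N = 2 ** (2 * n)
    return 14 * math.sqrt(N) - 4 * math.log2(N) - 14

def routing_three_quarters(n):
    """8*sqrt(3)*sqrt(N) - 4*log2(N) - (20 - 4*log2(3)) with N = (3/4)*2**(2n)."""
    N = 0.75 * 2 ** (2 * n)
    return (8 * math.sqrt(3) * math.sqrt(N) - 4 * math.log2(N)
            - (20 - 4 * math.log2(3)))
```

For n = 4 these give 178 and 148 routing steps respectively (symbolically, 14·2^n - 8n - 14 versus 12·2^n - 8n - 12), so the 3/4-full array needs fewer steps than the full case, as the dissertation claims.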

  12. Adaptive non-local means method for speckle reduction in ultrasound images

    NASA Astrophysics Data System (ADS)

    Ai, Ling; Ding, Mingyue; Zhang, Xuming

    2016-03-01

    Noise removal is a crucial step to enhance the quality of ultrasound images. However, some existing despeckling methods cannot ensure satisfactory restoration performance. In this paper, an adaptive non-local means (ANLM) filter is proposed for speckle noise reduction in ultrasound images. The distinctive property of the proposed method lies in that the decay parameter will not take the fixed value for the whole image but adapt itself to the variation of the local features in the ultrasound images. In the proposed method, the pre-filtered image will be obtained using the traditional NLM method. Based on the pre-filtered result, the local gradient will be computed and it will be utilized to determine the decay parameter adaptively for each image pixel. The final restored image will be produced by the ANLM method using the obtained decay parameters. Simulations on the synthetic image show that the proposed method can deliver sufficient speckle reduction while preserving image details very well and it outperforms the state-of-the-art despeckling filters in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Experiments on the clinical ultrasound image further demonstrate the practicality and advantage of the proposed method over the compared filtering methods.
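The adaptive decay parameter can be sketched in three steps matching the abstract: a plain NLM pass, a gradient map computed from the pre-filtered result, and a second NLM pass whose decay parameter varies per pixel. This is an illustrative small-window version; window sizes and the h-scaling rule are assumptions, not the paper's exact choices:

```python
import numpy as np

def nlm(img, h_map, patch=1, search=3):
    """Non-local means with a per-pixel decay parameter h_map."""
    pad = patch + search
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + pad, j + pad
            ref = p[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            wsum = vsum = 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    cand = p[ci + di - patch:ci + di + patch + 1,
                             cj + dj - patch:cj + dj + patch + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / (h_map[i, j] ** 2))
                    wsum += w
                    vsum += w * p[ci + di, cj + dj]
            out[i, j] = vsum / wsum
    return out

def adaptive_nlm(img, h0=10.0):
    """Two-pass ANLM sketch: shrink the decay parameter where the
    pre-filtered gradient is large, to smooth less near edges."""
    pre = nlm(img, np.full(img.shape, h0))
    gy, gx = np.gradient(pre)
    grad = np.hypot(gx, gy)
    h_map = h0 / (1.0 + grad / (grad.mean() + 1e-12))
    return nlm(img, h_map)
```

A basic correctness property: a constant (noise-free) image passes through unchanged, since every patch distance is zero and the weighted average returns the constant.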

  13. Development of the SOFIA Image Processing Tool

    NASA Technical Reports Server (NTRS)

    Adams, Alexander N.

    2011-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5 meter infrared telescope capable of operating at altitudes between twelve and fourteen kilometers, which is above more than 99 percent of the water vapor in the atmosphere. The ability to make observations above most of the water vapor, coupled with the ability to make observations from anywhere at any time, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible-light CCD imagers to assist in pointing the telescope. The data from these imagers are stored in archive files, as is housekeeping data containing information such as boresight and area-of-interest locations. A tool that could both extract and process data from the archive files was developed.

  14. Image processing and the Arithmetic Fourier Transform

    SciTech Connect

    Tufts, D.W.; Fan, Z.; Cao, Z.

    1989-01-01

    A new Fourier technique, the Arithmetic Fourier Transform (AFT) was recently developed for signal processing. This approach is based on the number-theoretic method of Mobius inversion. The AFT needs only additions except for a small amount of multiplications by prescribed scale factors. This new algorithm is also well suited to parallel processing. And there is no accumulation of rounding errors in the AFT algorithm. In this reprint, the AFT is used to compute the discrete cosine transform and is also extended to 2-D cases for image processing. A 2-D Mobius inversion formula is proved. It is then applied to the computation of Fourier coefficients of a periodic 2-D function. It is shown that the output of an array of delay-line (or transversal) filters is the Mobius transform of the input harmonic terms. The 2-D Fourier coefficients can therefore be obtained through Mobius inversion of the output of the filter array.
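The number-theoretic identity underlying the AFT is Möbius inversion: if g(n) = Σ_{d|n} f(d), then f(n) = Σ_{d|n} μ(n/d) g(d). A direct sketch of that identity (illustrative only; the AFT's delay-line/transversal-filter formulation and its 2-D extension are not reproduced here):

```python
def mobius(n):
    """Moebius function mu(n) via trial factorisation."""
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:        # squared prime factor => mu = 0
                return 0
            mu = -mu
        p += 1
    return -mu if n > 1 else mu   # one remaining prime flips the sign

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def summatory(f, n):
    """g(n) = sum of f over the divisors of n."""
    return sum(f[d] for d in divisors(n))

def invert(g, n):
    """Recover f(n) from g by Moebius inversion: additions and sign flips only."""
    return sum(mobius(n // d) * g[d] for d in divisors(n))
```

Note that the inversion uses only additions, subtractions and table lookups, which is exactly the property that makes the AFT nearly multiplication-free.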

  15. HYMOSS signal processing for pushbroom spectral imaging

    NASA Technical Reports Server (NTRS)

    Ludwig, David E.

    1991-01-01

    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two point calibration algorithm on focal plane which allows for offset and linear gain correction. The key on focal plane features which made this technique feasible was the use of a high quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate and dump TIA. Offset correction is performed by storing offsets in a special on focal plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program which proved that nonuniformity compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated the following innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future IC's because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems. These imaging systems may be used for: security monitoring systems, manufacturing process monitoring, robotics, and for spectral imaging when used in analytical instrumentation.

  16. HYMOSS signal processing for pushbroom spectral imaging

    NASA Astrophysics Data System (ADS)

    Ludwig, David E.

    1991-06-01

    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two point calibration algorithm on focal plane which allows for offset and linear gain correction. The key on focal plane features which made this technique feasible was the use of a high quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate and dump TIA. Offset correction is performed by storing offsets in a special on focal plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program which proved that nonuniformity compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated the following innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future IC's because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems. These imaging systems may be used for: security monitoring systems, manufacturing process monitoring, robotics, and for spectral imaging when used in analytical instrumentation.
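The two-point calibration described above can be sketched per detector channel: measure the response at a dark and a bright uniform reference, then correct each readout with the stored offset and gain. This is an illustrative floating-point version of what the focal-plane IC does in hardware (the array size and reference flux levels are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# A small "detector array" with per-channel gain and offset non-uniformity.
gain_true = rng.uniform(0.8, 1.2, size=(4, 4))
offset_true = rng.uniform(-5.0, 5.0, size=(4, 4))
readout = lambda flux: gain_true * flux + offset_true

# Two-point calibration: a dark frame and a uniform bright reference.
dark = readout(0.0)
bright = readout(100.0)
gain_corr = 100.0 / (bright - dark)      # per-channel gain correction
offset_corr = dark                       # stored per-channel offset

def correct(raw):
    """Apply stored offset subtraction and gain scaling to a raw readout."""
    return (raw - offset_corr) * gain_corr
```

After calibration, a uniform scene produces a uniform corrected readout regardless of the per-channel non-uniformities, which is the linear (offset plus gain) correction the abstract describes.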

  17. In vivo imaging of human retinal microvasculature using adaptive optics scanning light ophthalmoscope fluorescein angiography

    PubMed Central

    Pinhas, Alexander; Dubow, Michael; Shah, Nishit; Chui, Toco Y.; Scoles, Drew; Sulai, Yusufu N.; Weitz, Rishard; Walsh, Joseph B.; Carroll, Joseph; Dubra, Alfredo; Rosen, Richard B.

    2013-01-01

    The adaptive optics scanning light ophthalmoscope (AOSLO) allows visualization of microscopic structures of the human retina in vivo. In this work, we demonstrate its application in combination with oral and intravenous (IV) fluorescein angiography (FA) to the in vivo visualization of the human retinal microvasculature. Ten healthy subjects aged 20 to 38 years were imaged using oral (7 and/or 20 mg/kg) and/or IV (500 mg) fluorescein. In agreement with current literature, there were no adverse effects among the patients receiving oral fluorescein, while one patient receiving IV fluorescein experienced some nausea and heaving. We determined that all retinal capillary beds can be imaged using clinically accepted fluorescein dosages and safe light levels according to the ANSI Z136.1-2000 maximum permissible exposure. As expected, the 20 mg/kg oral dose showed higher image intensity for a longer period of time than did the 7 mg/kg oral and the 500 mg IV doses. The increased resolution of AOSLO FA, compared to conventional FA, offers great opportunity for studying physiological and pathological vascular processes. PMID:24009994

  18. Adaptive technique for three-dimensional MR imaging of moving structures.

    PubMed

    Korin, H W; Felmlee, J P; Ehman, R L; Riederer, S J

    1990-10-01

    The authors describe an adaptive motion correction method for three-dimensional magnetic resonance (MR) imaging. Three-dimensional imaging offers many advantages over two-dimensional multisection imaging but is susceptible to image corruption due to motion. Thus, it has been of limited use in the imaging of mobile structures, and the relatively long imaging times required have hindered its use in patients who tend to move during imaging. The authors' technique uses interleaved "navigator" echoes to provide a measure of displacement for each image echo in the acquisition and then uses this information to allow correction of the image data. The theory for signal corruption due to motion and the correction scheme that follows from it are presented. This method can produce excellent results when the motion is correctly modeled.
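
The navigator-echo idea above, i.e. measure a per-view displacement and then remove the corresponding linear phase from each image echo in k-space, can be sketched in one dimension. The function names and the cross-correlation shift estimator are illustrative assumptions; the abstract does not specify the authors' exact estimator:

```python
import numpy as np

def estimate_shift(navigator, reference):
    """Estimate a 1D displacement (in samples) between a navigator echo
    profile and a reference profile via the peak of their circular
    cross-correlation."""
    n = len(reference)
    xcorr = np.fft.ifft(np.fft.fft(navigator) * np.conj(np.fft.fft(reference)))
    shift = int(np.argmax(np.abs(xcorr)))
    return shift if shift <= n // 2 else shift - n   # wrap to a signed shift

def correct_echo(echo_kspace, shift):
    """Remove the linear k-space phase ramp produced by a rigid shift,
    realigning the echo with the rest of the acquisition."""
    n = len(echo_kspace)
    k = np.fft.fftfreq(n)                  # spatial frequency, cycles/sample
    return echo_kspace * np.exp(2j * np.pi * k * shift)
```

A rigid shift of the object by `shift` samples multiplies each k-space sample by a phase ramp; multiplying by the conjugate ramp undoes the corruption for that view, which is why a per-view displacement measurement suffices for correction.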

  19. 3D segmentation of masses in DCE-MRI images using FCM and adaptive MRF

    NASA Astrophysics Data System (ADS)

    Zhang, Chengjie; Li, Lihua

    2014-03-01

    Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) is a sensitive imaging modality for the detection of breast cancer. Automated segmentation of breast lesions in DCE-MRI images is challenging due to low signal-to-noise ratios and high inter-patient variability. A novel 3D segmentation method based on FCM and MRF is proposed in this study. In this method, an MRI image is first segmented by spatial FCM, and MRF segmentation is then conducted to refine the result. We incorporated 3D lesion information into the MRF segmentation by using the segmentation results of contiguous slices to constrain the segmentation of each slice. At the same time, the membership matrix from the FCM result is used to adaptively adjust the Markov parameters during MRF segmentation. The proposed method was applied for lesion segmentation on 145 breast DCE-MRI examinations (86 malignant and 59 benign cases). Segmentation was evaluated using the traditional overlap rate between the segmented region and a hand-drawn ground truth. The average overlap rates for benign and malignant lesions are 0.764 and 0.755, respectively. We then extracted five features from the segmented region and used an artificial neural network (ANN) to classify between malignant and benign cases. The ANN achieved an area under the ROC curve of AUC = 0.73, with positive and negative predictive values of 0.86 and 0.58, respectively. The results demonstrate that the proposed method not only achieves good segmentation accuracy but also yields reasonable classification performance.
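
The core FCM step this method builds on, alternating membership and centroid updates, can be sketched for a 1D intensity vector. This is plain FCM only: the spatial term, the 3D slice constraint, and the membership-driven MRF parameter adjustment described above are omitted, and the quantile initialization is an assumption.

```python
import numpy as np

def fcm(x, c=2, m=2.0, n_iter=50):
    """Plain fuzzy c-means on a 1D intensity vector.

    Returns the c x n membership matrix u (the quantity the paper feeds into
    its adaptive MRF step) and the c cluster centroids v.
    """
    v = np.quantile(x, np.linspace(0, 1, c))        # spread initial centroids
    for _ in range(n_iter):
        # Distances of every sample to every centroid, shape (c, n).
        d = np.abs(x[None, :] - v[:, None]) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)),
                         axis=1)
        # Centroid update: fuzzified-membership-weighted mean of the samples.
        um = u ** m
        v = (um @ x) / um.sum(axis=1)
    return u, v

x = np.array([0.0, 0.1, 0.2, 9.9, 10.0, 10.1])
u, v = fcm(x)
print(np.sort(v))   # centroids settle near the two intensity modes
```

Because `u` is soft rather than a hard label map, it carries the per-pixel uncertainty that the proposed method exploits when tuning the Markov parameters.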

  20. Polarization information processing and software system design for simultaneously imaging polarimetry

    NASA Astrophysics Data System (ADS)

    Wang, Yahui; Liu, Jing; Jin, Weiqi; Wen, Renjie

    2015-08-01

    Simultaneous imaging polarimetry can realize real-time polarization imaging of a dynamic scene, which gives it wide application prospects. This paper first briefly describes the design of a double separate Wollaston prism simultaneous imaging polarimeter, and then focuses on the polarization information processing methods and the software system designed for it. The polarization information processing consists of adaptive image segmentation, high-accuracy image registration, and instrument matrix calibration. Morphological image processing (dilation) was used for image segmentation; image registration based on spatial- and frequency-domain cross-correlation achieves an accuracy of 0.1 pixel; and instrument matrix calibration adopts a four-point calibration method. The software system was implemented under Windows in C++ and realizes synchronous polarization image acquisition and storage, image processing, and polarization information extraction and display. Polarization data obtained with the designed polarimeter show that the processing methods and software system effectively perform real-time measurement of the four Stokes parameters of a scene, and that the processing methods effectively improve the polarization detection accuracy.
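
After segmentation and registration, the instrument-matrix step reduces, per pixel, to inverting a linear model I = A·S. A minimal sketch for a four-channel polarimeter is below, restricted to the linear Stokes components for simplicity; the ideal matrix values and function name are illustrative assumptions, whereas the paper's system measures A via its four-point calibration.

```python
import numpy as np

# Ideal instrument matrix for four linear analyzers at 0, 45, 90, 135 degrees.
# Each row maps a Stokes vector (S0, S1, S2) to one channel's intensity; in
# the real instrument this matrix would come from calibration, not from the
# ideal values assumed here.
A = 0.5 * np.array([
    [1.0,  1.0,  0.0],   # analyzer at 0 deg
    [1.0,  0.0,  1.0],   # analyzer at 45 deg
    [1.0, -1.0,  0.0],   # analyzer at 90 deg
    [1.0,  0.0, -1.0],   # analyzer at 135 deg
])

def stokes_from_intensities(i4):
    """Recover (S0, S1, S2) from four registered channel intensities by
    least-squares inversion of the instrument matrix; works on a length-4
    vector or a (4, H, W) image stack."""
    pinv = np.linalg.pinv(A)                           # shape (3, 4)
    return np.tensordot(pinv, i4, axes=([1], [0]))

# Fully horizontally polarized light has Stokes vector (1, 1, 0).
i4 = A @ np.array([1.0, 1.0, 0.0])
print(stokes_from_intensities(i4))   # -> approximately [1. 1. 0.]
```

Using the pseudoinverse rather than a hard-coded sum/difference formula means the same code works unchanged once the calibrated (non-ideal) instrument matrix is substituted for A.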