Science.gov

Sample records for adaptive image processing

  1. The role of the microprocessor in onboard image processing for the information adaptive system

    NASA Technical Reports Server (NTRS)

    Kelly, W. L., IV; Meredith, B. D.

    1980-01-01

    The preliminary design of the Information Adaptive System is presented. The role of the microprocessor in the implementation of the individual processing elements is discussed. Particular emphasis is placed on multispectral image data processing.

  2. Consortium for Adaptive Optics and Image Post-Processing

    DTIC Science & Technology

    2008-06-12

    The adaptive optics bench laboratory is located in Kula, Maui, and is called "The Space Surveillance Simulator" (S-Cube). S-Cube is designed to simulate both the ... Personnel from the Center for Adaptive Optics contributed to the DURIP Maui Adaptive Optics Laboratory (S-Cube) setup in Kula. A move to the ...for Astronomy's buildings in Kula, Maui, also caused a change in the scientists directly involved in the simulator.

  3. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines

    PubMed Central

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming when many image processing parameters are involved. An expert can tune parameters sequentially to get the desired results, but this may not be productive for applications with difficult image analysis tasks, e.g., when high noise and shading levels are present or images vary in their characteristics due to different acquisition conditions; the parameters then need to be tuned simultaneously. We propose a framework that improves standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is evaluated on a benchmark data set containing image distortions of increasing severity, which enables us to compare standard image segmentation algorithms in feedback vs. feedforward implementations in terms of segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present: the framework evaluates the robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213
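
The feedback loop described above can be sketched in miniature: a controller adjusts a segmentation parameter until a scalar measure derived from abstract ground truth (here, an expected foreground area fraction) is satisfied. This is an illustrative sketch, not the authors' implementation; the synthetic image, the bisection controller, and the target fraction are all invented for the example.

```python
import numpy as np

def adapt_threshold(img, target_fraction, tol=0.01, max_iter=50):
    """Feedback loop: bisect a global threshold until the segmented
    foreground fraction matches the abstract ground truth."""
    lo, hi = float(img.min()), float(img.max())
    t = 0.5 * (lo + hi)
    for _ in range(max_iter):
        t = 0.5 * (lo + hi)
        frac = float((img > t).mean())
        if abs(frac - target_fraction) < tol:
            break
        if frac > target_fraction:
            lo = t   # too much foreground -> raise the threshold
        else:
            hi = t   # too little foreground -> lower the threshold
    return t, frac

# Synthetic scene: a bright 50x50 object (25% of pixels) on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(50.0, 5.0, size=(100, 100))
img[25:75, 25:75] += 100.0
t, frac = adapt_threshold(img, target_fraction=0.25)
```

Because the foreground fraction decreases monotonically as the threshold rises, bisection converges in a few iterations; any monotone quality measure could drive the same loop.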

  5. Adaptive sidelobe reduction in SAR and INSAR COSMO-SkyMed image processing

    NASA Astrophysics Data System (ADS)

    Lorusso, Rino; Lombardi, Nunzia; Milillo, Giovanni

    2016-10-01

    The main lobe and side lobes of strong scatterers are sometimes clearly visible in SAR images. Sidelobe reduction is of particular importance when imaged scenes contain objects such as ships and buildings with very large radar cross sections. Amplitude weighting is usually used to suppress image sidelobes, at the expense of mainlobe broadening, loss of resolution, and degradation of the SAR images. Spatially Variant Apodization (SVA) is an Adaptive SideLobe Reduction (ASLR) technique that provides highly effective suppression of sidelobes without broadening the mainlobe. In this paper, we apply SVA to process COSMO-SkyMed (CSK) StripMap and Spotlight X-band data and compare the images with the standard products obtained via Hamming window processing. Different test sites with installed corner reflectors have been selected in Italy, Argentina, California, and Germany. Experimental results clearly show a resolution improvement of 20%, while sidelobes are kept to a low level when SVA processing is applied, compared with Hamming windowing. The SVA technique is then applied to interferometric SAR (INSAR) image processing using a CSK StripMap interferometric tandem-like data pair acquired over eastern California. The interferometric coherence of image pairs obtained without sidelobe reduction (SCS_U), with sidelobe reduction via Hamming windowing, and with sidelobe reduction via SVA are compared. High-resolution interferometric products have been obtained with only a small variation of mean coherence when using ASLR products with respect to the Hamming-windowed and unwindowed ones.
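
The SVA idea is easiest to see in one dimension: for each Nyquist-spaced sample, a raised-cosine weight w in [0, 0.5] is chosen per sample to minimize the output magnitude, which nulls sinc sidelobes while leaving the mainlobe untouched. The real-valued sketch below shows only this core step; actual SAR processing applies it to complex I/Q data in both image dimensions, and the 10x-oversampled sinc and stride here are invented for the demo.

```python
import numpy as np

def sva_1d(x, stride=1):
    """Spatially variant apodization for a real, Nyquist-spaced signal.
    For each sample, pick w in [0, 0.5] minimizing |x[m] + w*(x[m-s]+x[m+s])|."""
    y = x.copy()
    for m in range(stride, len(x) - stride):
        s = x[m - stride] + x[m + stride]
        if s != 0.0:
            # Unconstrained minimizer of the output magnitude, clipped to the
            # raised-cosine family between uniform (w=0) and Hanning (w=0.5).
            w = np.clip(-x[m] / s, 0.0, 0.5)
            y[m] = x[m] + w * s
    return y

# Ideal point-target response: a sinc sampled 10x above Nyquist.
t = np.linspace(-10, 10, 201)
x = np.sinc(t)
y = sva_1d(x, stride=10)   # stride = oversampling factor
```

For an ideal sinc the unconstrained minimizer equals (t^2 - 1)/(2 t^2), which always lies inside [0, 0.5] outside the mainlobe, so every sidelobe sample is driven to zero, while mainlobe samples clip to w = 0 and pass unchanged.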

  6. Functional magnetic resonance imaging adaptation reveals the cortical networks for processing grasp-relevant object properties.

    PubMed

    Monaco, Simona; Chen, Ying; Medendorp, W P; Crawford, J D; Fiehler, Katja; Henriques, Denise Y P

    2014-06-01

    Grasping behaviors require the selection of grasp-relevant object dimensions, independent of overall object size. Previous neuroimaging studies found that the intraparietal cortex processes object size, but it is unknown whether the graspable dimension (i.e., grasp axis between selected points on the object) or the overall size of objects triggers activation in that region. We used functional magnetic resonance imaging adaptation to investigate human brain areas involved in processing the grasp-relevant dimension of real 3-dimensional objects in grasping and viewing tasks. Trials consisted of 2 sequential stimuli in which the object's grasp-relevant dimension, its global size, or both were novel or repeated. We found that calcarine and extrastriate visual areas adapted to object size regardless of the grasp-relevant dimension during viewing tasks. In contrast, the superior parietal occipital cortex (SPOC) and lateral occipital complex of the left hemisphere adapted to the grasp-relevant dimension regardless of object size and task. Finally, the dorsal premotor cortex adapted to the grasp-relevant dimension in grasping, but not in viewing, tasks, suggesting that motor processing was complete at this stage. Taken together, our results provide a complete cortical circuit for progressive transformation of general object properties into grasp-related responses.

  7. An adaptive threshold based image processing technique for improved glaucoma detection and classification.

    PubMed

    Issac, Ashish; Partha Sarathi, M; Dutta, Malay Kishore

    2015-11-01

    Glaucoma is an optic neuropathy and one of the main causes of permanent blindness worldwide. This paper presents an automatic image-processing-based method for detecting glaucoma from digital fundus images. In the proposed work, discriminatory parameters of glaucoma infection, such as the cup-to-disc ratio (CDR), the neuro-retinal rim (NRR) area, and blood vessels in different regions of the optic disc, are used as features and fed as inputs to learning algorithms for glaucoma diagnosis. These features, which change in a discriminatory way with the occurrence of glaucoma, are strategically used to train the classifiers and improve the accuracy of identification. The segmentation of the optic disc and cup is based on an adaptive threshold of the pixel intensities lying in the optic nerve head region. Unlike existing methods, the proposed algorithm uses an adaptive threshold derived from local features of the fundus image for segmentation of the optic cup and optic disc, making it invariant to image quality and noise content, which may lead to wider acceptability. The experimental results indicate that such features are more significant than the statistical or textural features considered in existing works. The proposed work achieves an accuracy of 94.11% with a sensitivity of 100%. A comparison with existing methods indicates that the proposed approach improves the accuracy of classifying glaucoma from a digital fundus image, which may be considered clinically significant.
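
The CDR feature can be illustrated on a synthetic fundus-like image: one threshold adapted to the global intensity statistics segments the disc, a second threshold adapted to the statistics inside the disc segments the cup, and the ratio of vertical extents gives a CDR. All intensities, radii, and the 0.5*std offsets below are invented for illustration; real fundus images additionally need localization, channel selection, and vessel removal, and the paper's exact thresholding rule may differ.

```python
import numpy as np

def vertical_extent(mask):
    """Height in pixels of the bounding box of a boolean mask."""
    rows = np.where(mask.any(axis=1))[0]
    return rows.max() - rows.min() + 1

# Synthetic "fundus": dark background, bright optic disc, brighter cup.
h = w = 200
yy, xx = np.mgrid[:h, :w]
r2 = (yy - 100) ** 2 + (xx - 100) ** 2
img = np.full((h, w), 80.0)
img[r2 <= 40 ** 2] = 180.0    # optic disc, radius 40
img[r2 <= 16 ** 2] = 230.0    # optic cup, radius 16

# Disc: threshold adapted to the global statistics of the region of interest.
t_disc = img.mean() + 0.5 * img.std()
disc = img > t_disc
# Cup: threshold adapted to the statistics *within* the segmented disc.
t_cup = img[disc].mean() + 0.5 * img[disc].std()
cup = (img > t_cup) & disc

cdr = vertical_extent(cup) / vertical_extent(disc)  # vertical cup-to-disc ratio
```

On this synthetic scene the vertical CDR comes out near the true radius ratio 16/40 = 0.4.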

  8. Automatic ultrasonic imaging system with adaptive-learning-network signal-processing techniques

    SciTech Connect

    O'Brien, L.J.; Aravanis, N.A.; Gouge, J.R. Jr.; Mucciardi, A.N.; Lemon, D.K.; Skorpik, J.R.

    1982-04-01

    A conventional pulse-echo imaging system has been modified to operate with a linear ultrasonic array and associated digital electronics to collect data from a series of defects fabricated in aircraft quality steel blocks. A thorough analysis of the defect responses recorded with this modified system has shown that considerable improvements over conventional imaging approaches can be obtained in the crucial areas of defect detection and characterization. A combination of advanced signal processing concepts with the Adaptive Learning Network (ALN) methodology forms the basis for these improvements. Use of established signal processing algorithms such as temporal and spatial beam-forming in concert with a sophisticated detector has provided a reliable defect detection scheme which can be implemented in a microprocessor-based system to operate in an automatic mode.

  9. Multispectral image sharpening using a shift-invariant wavelet transform and adaptive processing of multiresolution edges

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.

    2002-01-01

    Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.

  10. Ultrasound Nondestructive Evaluation (NDE) Imaging with Transducer Arrays and Adaptive Processing

    PubMed Central

    Li, Minghui; Hayward, Gordon

    2012-01-01

    This paper addresses the challenging problem of ultrasonic non-destructive evaluation (NDE) imaging with adaptive transducer arrays. In NDE applications, most materials used extensively in industry and civil engineering, such as concrete, stainless steel, and carbon-reinforced composites, exhibit heterogeneous internal structure. When inspected using ultrasound, the signals from defects are significantly corrupted by echoes from randomly distributed scatterers; even defects much larger than these random reflectors are difficult to detect with the conventional delay-and-sum operation. We propose to apply adaptive beamforming to the received data samples to reduce interference and clutter noise. Beamforming manipulates the array beam pattern by appropriately weighting the per-element delayed data samples prior to summing them; the adaptive weights are computed from a statistical analysis of the data samples. This delay-weight-and-sum process can be viewed as applying a lateral spatial filter to the signals across the probe aperture. Simulations show that clutter noise is reduced by more than 30 dB and the lateral resolution is simultaneously enhanced when adaptive beamforming is applied. Experiments inspecting a steel block with side-drilled holes show good quantitative agreement with the simulation results. PMID:22368457
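
One common adaptive weighting rule that realizes this delay-weight-and-sum idea is the minimum-variance (Capon/MVDR) beamformer, sketched below; the paper's exact adaptation rule may differ. The weights w = R^-1 a / (a^H R^-1 a) keep unit gain toward the look direction while steering a null onto strong coherent clutter. The 8-element array, clutter direction, and power levels are invented for the example.

```python
import numpy as np

def steering(theta_deg, n=8, d=0.5):
    """Far-field steering vector of an n-element uniform linear array,
    element spacing d in wavelengths."""
    k = np.arange(n)
    return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

n = 8
a0 = steering(0.0, n)     # look direction (toward the defect)
ai = steering(17.5, n)    # direction of a strong coherent clutter source

# Covariance of clutter plus white noise (written analytically for clarity;
# in practice R is estimated from the received data samples).
R = 0.01 * np.eye(n) + 10.0 * np.outer(ai, ai.conj())

# MVDR weights: distortionless toward a0, minimum output power elsewhere.
w = np.linalg.solve(R, a0)
w = w / (a0.conj() @ w)

resp_look = np.abs(w.conj() @ a0)              # unit gain toward the look direction
resp_clutter_mvdr = np.abs(w.conj() @ ai)      # adaptive: deep null on the clutter
resp_clutter_das = np.abs(a0.conj() @ ai) / n  # conventional delay-and-sum response
```

The delay-and-sum response toward the clutter is only its ordinary sidelobe level (roughly -16 dB here), whereas the adaptive weights null it by several more orders of magnitude.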

  12. A NOISE ADAPTIVE FUZZY EQUALIZATION METHOD FOR PROCESSING SOLAR EXTREME ULTRAVIOLET IMAGES

    SciTech Connect

    Druckmueller, M.

    2013-08-15

    A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.

  13. Stereoscopic adapter based system using HMD and image processing software for supporting inner ear operations performed using operating microscope

    NASA Astrophysics Data System (ADS)

    Leśniewski, Marcin; Kujawińska, Malgorzata; Kucharski, Tomasz; Niemczyk, Kazimierz

    2006-02-01

    Modern surgery requires extensive support from imaging technologies in order to increase the effectiveness and safety of operations. One important task is to enhance the visualization of quasi-phase (transparent) 3D structures. In this paper, the authors present several practical hardware solutions that use an operational stereoscopic microscope with two image-acquisition channels, a stereoscopic adapter, and a Helmet Mounted Display (HMD) for stereoscopic visualization of the operational field in real time. Special attention is paid to the development of the opto-mechanical unit, and the authors focus on finding inexpensive, accurate, and ergonomic solutions. Several proposals are analyzed: a typical stereoscopic adapter with two image-acquisition channels, equipped with software developed for low-contrast image enhancement, for stereoscopic observation of the operational field in an HMD; and a visual-picture adapter (a real operational view through the microscope channels, or observation of processed operational-field images in real time).

  14. Adaptive passive fathometer processing.

    PubMed

    Siderius, Martin; Song, Heechun; Gerstoft, Peter; Hodgkiss, William S; Hursky, Paul; Harrison, Chris

    2010-04-01

    Recently, a technique has been developed to image seabed layers using the ocean ambient noise field as the sound source. This so-called passive fathometer technique exploits naturally occurring acoustic sounds generated at the sea surface, primarily by breaking waves. The method is based on the cross-correlation of noise from the ocean surface with its echo from the seabed, which recovers travel times to significant seabed reflectors. To limit averaging time and make this practical, beamforming with a vertical array of hydrophones is used to reduce interference from horizontally propagating noise. The initial development used conventional beamforming, but significant improvements have been realized using adaptive techniques. In this paper, adaptive methods for this process are described and applied to several data sets to demonstrate the improvements possible compared to conventional processing.
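
The core correlation step can be demonstrated with a one-channel toy: surface noise plus its scaled bottom echo autocorrelates to a peak at the two-way travel time. The delay, reflection coefficient, and record length below are invented, and a real system would first beamform the vertical array to suppress horizontally propagating noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samp, delay, refl = 4000, 50, 0.5    # echo lag in samples, bottom reflectivity

noise = rng.standard_normal(n_samp + delay)
direct = noise[delay:]                 # downward-going surface noise
echo = refl * noise[:n_samp]           # same noise, delayed by the two-way travel time
x = direct + echo                      # what the hydrophone records

# Autocorrelation of the recording peaks at the two-way travel time.
corr = np.correlate(x, x, mode="full")
lags = np.arange(-n_samp + 1, n_samp)
search = (lags >= 10) & (lags <= 500)  # exclude the large zero-lag peak
est_delay = int(lags[search][np.argmax(corr[search])])
```

Averaging over long records (or many noise realizations) is what beats the correlation noise floor down in practice; here a single 4000-sample record already makes the echo lag the clear maximum.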

  15. Adaptive Image Processing Methods for Improving Contaminant Detection Accuracy on Poultry Carcasses

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Technical Abstract: A real-time multispectral imaging system has been demonstrated as a science-based tool for fecal and ingesta contaminant detection during poultry processing. In order to implement this imaging system in the commercial poultry processing industry, the false positives must be removed. ...

  16. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate the characterization, development, and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated both by artifacts that can vary from scan line to scan line and by planar-like trends due to sample-tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method, which smooths out additive noise while preserving edge-like features. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical.
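
The leveling step can be sketched as an iteratively reweighted plane fit that downweights feature and outlier pixels; the paper fits a smooth local-regression trend rather than a single global plane, which is used here only to keep the sketch short. The tilt coefficients, noise level, and particle "features" below are synthetic.

```python
import numpy as np

def robust_level(img, n_iter=10):
    """Fit a plane a + b*x + c*y by iteratively reweighted least squares,
    downweighting pixels that deviate from the trend (features/outliers)."""
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    A = np.column_stack([np.ones(h * w), xx.ravel(), yy.ravel()])
    z = img.ravel()
    wts = np.ones(h * w)
    for _ in range(n_iter):
        # Weighted least squares: rows scaled by the current robust weights.
        coef, *_ = np.linalg.lstsq(A * wts[:, None], z * wts, rcond=None)
        resid = z - A @ coef
        # Robust scale via the median absolute deviation (MAD).
        scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))
        wts = 1.0 / (1.0 + (resid / (3.0 * scale + 1e-12)) ** 2)
    return img - (A @ coef).reshape(h, w), coef

rng = np.random.default_rng(2)
h = w = 64
yy, xx = np.mgrid[:h, :w]
img = 3.0 + 0.05 * xx + 0.02 * yy + 0.01 * rng.standard_normal((h, w))
img[rng.random((h, w)) < 0.05] += 2.0        # bright "particles" (outliers)

leveled, coef = robust_level(img)
```

After a few reweighting passes the particle pixels carry near-zero weight, so the recovered tilt coefficients track the true background trend rather than the features.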

  17. Adaptation Duration Dissociates Category-, Image-, and Person-Specific Processes on Face-Evoked Event-Related Potentials

    PubMed Central

    Zimmer, Márta; Zbanţ, Adriana; Németh, Kornél; Kovács, Gyula

    2015-01-01

    Several studies have demonstrated that face perception is biased by the prior presentation of another face, a phenomenon termed the face-related after-effect (FAE). The FAE is linked to a neural signal reduction in occipito-temporal areas and can be observed in the amplitude modulation of early event-related potential (ERP) components. Recently, macaque single-cell recording studies suggested that manipulating the duration of the adaptor makes the selective adaptation of different visual motion processing steps possible. To date, however, only a few studies have directly tested the effects of adaptor duration on the electrophysiological correlates of human face processing. The goal of the current study was to test the effect of adaptor duration on the image-, identity-, and generic category-specific face processing steps. To this end, in a two-alternative forced-choice familiarity decision task we used five adaptor durations (ranging from 200 to 5000 ms) and four adaptor categories: adaptor and test were identical images (Repetition Suppression, RS); adaptor and test were different images of the Same Identity (SameID); adaptor and test images depicted Different Identities (DiffID); or the adaptor was a Fourier phase-randomized image (No). Behaviorally, a strong priming effect was observed in both accuracy and response times for RS compared with both DiffID and No. The electrophysiological results suggest that rapid adaptation leads to a category-specific modulation of P100, N170, and N250. In addition, both identity- and image-specific processes affected the N250 component during rapid adaptation. On the other hand, prolonged (5000 ms) adaptation enhanced and extended category-specific adaptation processes over all tested ERP components. Additionally, prolonged adaptation led to the emergence of image- and identity-specific modulations on the N170 and P2 components as well. In other words, there was a clear dissociation among category-, identity-, and image-specific processes.

  18. Adaptive multispectral image processing for the detection of targets in terrain clutter

    NASA Astrophysics Data System (ADS)

    Hoff, Lawrence E.; Zeidler, James R.; Yerkes, Christopher R.

    1992-08-01

    In the passive detection of small infrared targets in image data, we are faced with the difficult task of enhancing some characteristic of the target or signal while suppressing the clutter or background image noise. We have reported that an effective means of identifying targets is to exploit characteristics which exist between scenes measured in different bands in the long-wave infrared region of the electromagnetic spectrum. These methods are broadly termed multispectral techniques. In this paper we present a method in which a two-dimensional least-mean-square adaptive filter is used to distinguish between target and clutter using multispectral techniques.
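
Reduced to one dimension and a scalar weight, the two-band scheme is an LMS filter that predicts one band from the other: clutter common to both bands cancels in the prediction error, while a target present in only one band survives as a spike. The band gain, target position, and step size below are invented; the paper's filter is two-dimensional.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
# Spatially correlated clutter shared by both spectral bands.
clutter = np.convolve(rng.standard_normal(n + 20), np.ones(21) / 21, mode="valid")
band_a = clutter.copy()
band_a[4000] += 5.0 * clutter.std()     # small target present in band A only
band_b = 0.8 * clutter                  # same clutter, different gain, no target

# Scalar LMS: adapt w so that w*band_b predicts band_a; the error is the residual.
w, mu = 0.0, 0.05
err = np.empty(n)
for t in range(n):
    err[t] = band_a[t] - w * band_b[t]
    w += 2 * mu * err[t] * band_b[t]    # stochastic gradient step

# After convergence the clutter cancels; the target dominates the residual.
detected = int(np.argmax(np.abs(err[2500:]))) + 2500
```

With a two-dimensional filter the same cancellation happens per pixel neighborhood, which is what lets correlated terrain clutter be suppressed while the band-selective target is enhanced.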

  19. Adaptive optics microscopy enhances image quality in deep layers of CLARITY processed brains of YFP-H mice

    NASA Astrophysics Data System (ADS)

    Reinig, Marc R.; Novack, Samuel W.; Tao, Xiaodong; Ermini, Florian; Bentolila, Laurent A.; Roberts, Dustin G.; MacKenzie-Graham, Allan; Godshalk, S. E.; Raven, M. A.; Kubby, Joel

    2016-03-01

    Optical sectioning of biological tissues has become the method of choice for three-dimensional histological analyses. This is particularly important in the brain, where neurons can extend processes over large distances and whole-brain tracing of neuronal processes is often desirable. To allow deeper optical penetration, which in fixed tissue is limited by scattering and refractive index mismatching, tissue-clearing procedures such as CLARITY have been developed. CLARITY processed brains have a nearly uniform refractive index, and three-dimensional reconstructions at cellular resolution have been published. However, when imaging deep layers at submicron resolution, limitations caused by residual refractive index mismatching become apparent, as the resulting wavefront aberrations distort the microscopic image. The wavefront can be corrected with adaptive optics. Here, we investigate the wavefront aberrations at different depths in CLARITY processed mouse brains and demonstrate the potential of adaptive optics to enable higher resolution and a better signal-to-noise ratio. Our adaptive optics system achieves high-speed measurement and correction of the wavefront with open-loop control using a wavefront sensor and a deformable mirror. Using adaptive optics enhanced microscopy, we demonstrate improved wavefront quality, point spread function, and signal-to-noise ratio in the cortex of YFP-H mice.

  20. Adaptive Sensor Optimization and Cognitive Image Processing Using Autonomous Optical Neuroprocessors

    SciTech Connect

    CAMERON, STEWART M.

    2001-10-01

    Measurement and signal intelligence demands have created new requirements for information management and interoperability as they affect surveillance and situational awareness. Integration of on-board autonomous learning and adaptive control structures within a remote sensing platform architecture would substantially improve the utility of intelligence collection by facilitating real-time optimization of measurement parameters for variable field conditions. A problem faced by conventional digital implementations of intelligent systems is the conflict between a distributed parallel structure and a sequential serial interface, which functionally degrades bandwidth and response time. In contrast, optically designed networks exhibit the massive parallelism and interconnect density needed to perform complex cognitive functions within a dynamic asynchronous environment. Recently, all-optical self-organizing neural networks exhibiting emergent collective behavior that mimics perception, recognition, association, and contemplative learning have been realized using photorefractive holography in combination with sensory systems for feature maps, threshold decomposition, image enhancement, and nonlinear matched filters. Such hybrid information processors depart from the classical computational paradigm based on analytic rules-based algorithms and instead utilize unsupervised generalization and perceptron-like exploratory or improvisational behaviors to evolve toward optimized solutions. These systems are robust to instrumental systematics or corrupting noise and can enrich knowledge structures by allowing competition between multiple hypotheses. This property enables them to rapidly adapt or self-compensate for dynamic or imprecise conditions that would be unstable under conventional linear control models. By incorporating an intelligent optical neuroprocessor in the back plane of an imaging sensor, a broad class of high-level cognitive image analysis problems, including geometric ...

  1. Adaptive Image Denoising by Mixture Adaptation

    NASA Astrophysics Data System (ADS)

    Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad-hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper: First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.

  3. Passive adaptive imaging through turbulence

    NASA Astrophysics Data System (ADS)

    Tofsted, David

    2016-05-01

    Standard methods for improved imaging system performance under degrading optical turbulence conditions typically involve active adaptive techniques or post-capture image processing. Here, passive adaptive methods are considered where active sources are disallowed, a priori. Theoretical analyses of short-exposure turbulence impacts indicate that varying aperture sizes experience different degrees of turbulence impacts. Smaller apertures often outperform larger aperture systems as turbulence strength increases. This suggests a controllable aperture system is advantageous. In addition, sub-aperture sampling of a set of training images permits the system to sense tilts in different sub-aperture regions through image acquisition and image cross-correlation calculations. A four sub-aperture pattern supports corrections involving five realizable operating modes (beyond tip and tilt) for removing aberrations over an annular pattern. Progress to date will be discussed regarding development and field trials of a prototype system.
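
The sub-aperture tilt sensing step amounts to estimating the image shift each sub-aperture sees relative to a reference, which the described image cross-correlation provides. Below is a minimal FFT phase-correlation sketch with integer-pixel shifts only; the random test scene and the shift are invented, and real systems interpolate to sub-pixel accuracy.

```python
import numpy as np

def shift_by_correlation(ref, img):
    """Estimate the (row, col) shift of img relative to ref via FFT
    phase correlation (integer-pixel accuracy)."""
    xpow = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    xpow /= np.abs(xpow) + 1e-12          # keep the phase only
    peak = np.unravel_index(np.argmax(np.fft.ifft2(xpow).real), ref.shape)
    # Unwrap the circular peak coordinates into signed shifts.
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, ref.shape))

rng = np.random.default_rng(4)
ref = rng.standard_normal((64, 64))
img = np.roll(ref, shift=(3, -5), axis=(0, 1))   # tilt-induced image motion

dy, dx = shift_by_correlation(ref, img)
```

Applied per sub-aperture against a shared reference, the recovered shifts give the local wavefront tilts that drive the correction modes.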

  4. Cameron - Optimized Compilation of Visual Programs for Image Processing on Adaptive Computing Systems (ACS)

    DTIC Science & Technology

    2002-01-01

    the Cameron project. The goal of the Cameron project is to make FPGAs and other adaptive computing systems available to more applications programmers. It happens that for SA-C programs, the host executable off-loads the processing of loops onto an FPGA, but this is invisible. SA-C therefore makes reconfigurable processors accessible to applications programmers with no hardware ...

  5. Multilevel adaptive process control of acquisition and post-processing of computed radiographic images in picture archiving and communication system environment.

    PubMed

    Zhang, J; Huang, H K

    1998-01-01

    Computed radiography (CR) has become a widely used imaging modality, replacing the conventional screen/film procedure in diagnostic radiology. After a latent image is captured on a CR imaging plate, seven key processes are required before a CR image can be reliably archived and displayed in a picture archiving and communication system (PACS) environment. Human error, computational bottlenecks, software bugs, and CR system errors often crash the CR acquisition and post-processing computers, which delays the transmission of CR images for proper viewing at the workstation. In this paper, we present a control theory and a fault-tolerance algorithm, as well as their implementation in the PACS environment, to circumvent such problems. The software implementation of the control theory and the algorithm is based on an event-driven, multilevel adaptive processing structure. The automated software has been used to provide real-time monitoring and control of CR image acquisition and post-processing in the intensive care unit module of the PACS operation at the University of California, San Francisco. Results demonstrate that the multilevel adaptive process control structure improves CR post-processing time, increases the reliability of CR image delivery, minimizes user intervention, and speeds up the previously time-consuming quality assurance procedure.

  6. NASA End-to-End Data System /NEEDS/ information adaptive system - Performing image processing onboard the spacecraft

    NASA Technical Reports Server (NTRS)

    Kelly, W. L.; Howle, W. M.; Meredith, B. D.

    1980-01-01

    The Information Adaptive System (IAS) is an element of the NASA End-to-End Data System (NEEDS) Phase II and is focused toward onboard image processing. Since the IAS is a data preprocessing system which is closely coupled to the sensor system, it serves as a first step in providing a 'Smart' imaging sensor. Some of the functions planned for the IAS include sensor response nonuniformity correction, geometric correction, data set selection, data formatting, packetization, and adaptive system control. The inclusion of these sensor data preprocessing functions onboard the spacecraft will significantly improve the extraction of information from the sensor data in a timely and cost-effective manner and provide the opportunity to design sensor systems which can be reconfigured in near real time for optimum performance. The purpose of this paper is to present the preliminary design of the IAS and the plans for its development.

  7. Imaging an Adapted Dentoalveolar Complex

    PubMed Central

    Herber, Ralf-Peter; Fong, Justine; Lucas, Seth A.; Ho, Sunita P.

    2012-01-01

    Adaptation of a rat dentoalveolar complex was illustrated using various imaging modalities. Micro-X-ray computed tomography for 3D modeling, combined with complementary techniques including image processing, scanning electron microscopy, fluorochrome labeling, conventional histology (H&E, TRAP), and immunohistochemistry (RANKL, OPN), elucidated the dynamic nature of bone, the periodontal ligament space, and cementum in the rat periodontium. Tomography and electron microscopy illustrated structural adaptation of calcified tissues at a higher resolution. Ongoing biomineralization was analyzed using fluorochrome labeling and by evaluating attenuation profiles using virtual sections from 3D tomographies. Osteoclastic distribution as a function of anatomical location was illustrated by combining histology, immunohistochemistry, and tomography. While tomography and SEM revealed past resorption-related events, future adaptive changes were deduced by identifying matrix biomolecules using immunohistochemistry. Thus, a dynamic picture of the dentoalveolar complex in rats was illustrated. PMID:22567314

  8. Modeling of imaging fiber bundles and adapted signal processing for fringe projection

    NASA Astrophysics Data System (ADS)

    Matthias, Steffen; Kästner, Markus; Reithmeier, Eduard

    2016-12-01

    Fringe projection profilometry is an established technique for capturing three-dimensional (3-D) geometry data with high point densities in a short time. By combining fringe projection with endoscopy techniques, it is possible to perform inline inspection of industrial manufacturing processes. A new fringe projection system is presented, which uses flexible image fiber bundles to achieve versatile positioning of a compact sensor head. When measuring specimens with highly varying reflectivity, such as technical surfaces on tool geometries, measurement errors increase, especially due to the crosstalk between individual fibers in the bundle. A detailed analysis of the transmission properties of the utilized fiber bundles is presented. It is shown that aliasing is avoided due to the non-regular grid structure of a bundle. Different techniques are demonstrated to reduce the effect of crosstalk on the phase evaluation. Measurements of highly reflective technical surfaces with different geometrical properties are shown.

  9. MITRE Adaptive Processing Capability

    DTIC Science & Technology

    1994-06-01

    gathering, transfer, processing, and interpretation of data are provided. A strong state-of... Federally Funded Research and Development Center (FFRDC) under the primary... Since 1988: Unisys Reston Technology Center, Reston, VA. Dr. Bronez was a Member of the Technical Staff. He performed research on signal processing and... processing, mathematical research, and sensor array processing. He was Project Leader and Principal Investigator for projects in adaptive beamforming

  10. Image Processing

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Images are prepared from data acquired by the multispectral scanner aboard Landsat, which views Earth in four ranges of the electromagnetic spectrum, two visible bands and two infrared. Scanner picks up radiation from ground objects and converts the radiation signatures to digital signals, which are relayed to Earth and recorded on tape. Each tape contains "pixels" or picture elements covering a ground area; computerized equipment processes the tapes and plots each pixel, line by line, to produce the basic image. Image can be further processed to correct sensor errors, to heighten contrast for feature emphasis or to enhance the end product in other ways. Key factor in conversion of digital data to visual form is precision of processing equipment. Jet Propulsion Laboratory prepared a digital mosaic that was plotted and enhanced by Optronics International, Inc. by use of the company's C-4300 Colorwrite, a high precision, high speed system which manipulates and analyzes digital data and presents it in visual form on film. Optronics manufactures a complete family of image enhancement processing systems to meet all users' needs. Enhanced imagery is useful to geologists, hydrologists, land use planners, agricultural specialists, geographers and others.

  11. Adaptive wiener image restoration kernel

    SciTech Connect

    Yuan, Ding

    2007-06-05

    A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins by constructing the imaging system optical transfer function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image with a Wiener restoration kernel.
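    The frequency-domain form of this restoration can be sketched in a few lines of NumPy. This is a generic Wiener-deconvolution illustration, not the patented kernel construction; the box PSF and the noise-to-signal ratio `nsr` are assumptions made for the demo:

    ```python
    import numpy as np

    def wiener_restore(blurred, psf, nsr=0.01):
        """Restore an image degraded by a known PSF with a Wiener kernel.

        H is the optical transfer function (FFT of the PSF); the Wiener
        restoration kernel is conj(H) / (|H|^2 + NSR), applied in the
        frequency domain (equivalent to a spatial convolution).
        """
        H = np.fft.fft2(psf, s=blurred.shape)
        G = np.fft.fft2(blurred)
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener restoration kernel
        return np.real(np.fft.ifft2(W * G))

    # Demo: blur a test image with a 5x5 box PSF, then restore it.
    img = np.zeros((64, 64))
    img[24:40, 24:40] = 1.0                       # bright square
    psf = np.ones((5, 5)) / 25.0
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img)
                                   * np.fft.fft2(psf, s=img.shape)))
    restored = wiener_restore(blurred, psf, nsr=1e-3)
    ```

    The `nsr` term regularizes frequencies where the OTF is nearly zero, which is what makes the filter usable on noisy data.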

  12. Experimental adaptive process tomography

    NASA Astrophysics Data System (ADS)

    Pogorelov, I. A.; Struchalin, G. I.; Straupe, S. S.; Radchenko, I. V.; Kravtsov, K. S.; Kulik, S. P.

    2017-01-01

    Adaptive measurements were recently shown to significantly improve the performance of quantum state tomography. Utilizing information about the system for the online choice of optimal measurements allows one to reach the ultimate bounds of precision for state reconstruction. In this article we generalize an adaptive Bayesian approach to the case of process tomography and experimentally show its superiority in the task of learning unknown quantum operations. Our experiments with photonic polarization qubits cover all types of single-qubit channels. We also discuss instrumental errors and the criteria for evaluation of the ultimate achievable precision in an experiment. It turns out that adaptive tomography provides a lower noise floor in the presence of strong technical noise.

  13. Adaptive Signal Processing Testbed

    NASA Astrophysics Data System (ADS)

    Parliament, Hugh A.

    1991-09-01

    The design and implementation of a system for the acquisition, processing, and analysis of signal data is described. The initial application for the system is the development and analysis of algorithms for excision of interfering tones from direct sequence spread spectrum communication systems. The system is called the Adaptive Signal Processing Testbed (ASPT) and is an integrated hardware and software system built around the TMS320C30 chip. The hardware consists of a radio frequency data source, digital receiver, and an adaptive signal processor implemented on a Sun workstation. The software components of the ASPT consist of a number of packages including the Sun driver package; UNIX programs that support software development on the TMS320C30 boards; UNIX programs that provide the control, user interaction, and display capabilities for the data acquisition, processing, and analysis components of the ASPT; and programs that perform the ASPT functions including data acquisition, despreading, and adaptive filtering. The performance of the ASPT system is evaluated by comparing actual data rates against their desired values. A number of system limitations are identified and recommendations are made for improvements.

  14. Adaptive Iterative Dose Reduction Using Three Dimensional Processing (AIDR3D) Improves Chest CT Image Quality and Reduces Radiation Exposure

    PubMed Central

    Yamashiro, Tsuneo; Miyara, Tetsuhiro; Honda, Osamu; Kamiya, Hisashi; Murata, Kiyoshi; Ohno, Yoshiharu; Tomiyama, Noriyuki; Moriya, Hiroshi; Koyama, Mitsuhiro; Noma, Satoshi; Kamiya, Ayano; Tanaka, Yuko; Murayama, Sadayuki

    2014-01-01

    Objective To assess the advantages of Adaptive Iterative Dose Reduction using Three Dimensional Processing (AIDR3D) for image quality improvement and dose reduction for chest computed tomography (CT). Methods Institutional Review Boards approved this study and informed consent was obtained. Eighty-eight subjects underwent chest CT at five institutions using identical scanners and protocols. During a single visit, each subject was scanned using different tube currents: 240, 120, and 60 mA. Scan data were converted to images using AIDR3D and a conventional reconstruction mode (without AIDR3D). Using a 5-point scale from 1 (non-diagnostic) to 5 (excellent), three blinded observers independently evaluated image quality for three lung zones, four patterns of lung disease (nodule/mass, emphysema, bronchiolitis, and diffuse lung disease), and three mediastinal measurements (small structure visibility, streak artifacts, and shoulder artifacts). Differences in these scores were assessed by Scheffe's test. Results At each tube current, scans using AIDR3D had higher scores than those without AIDR3D, which were significant for lung zones (p<0.0001) and all mediastinal measurements (p<0.01). For lung diseases, significant improvements with AIDR3D were frequently observed at 120 and 60 mA. Scans with AIDR3D at 120 mA had significantly higher scores than those without AIDR3D at 240 mA for lung zones and mediastinal streak artifacts (p<0.0001), and slightly higher or equal scores for all other measurements. Scans with AIDR3D at 60 mA were also judged superior or equivalent to those without AIDR3D at 120 mA. Conclusion For chest CT, AIDR3D provides better image quality and can reduce radiation exposure by 50%. PMID:25153797

  15. Retinal Imaging: Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Goncharov, A. S.; Iroshnikov, N. G.; Larichev, Andrey V.

    This chapter describes several factors influencing the performance of ophthalmic diagnostic systems with adaptive optics compensation of human eye aberration. Particular attention is paid to speckle modulation, temporal behavior of aberrations, and anisoplanatic effects. The implementation of a fundus camera with adaptive optics is considered.

  16. Processing Of Binary Images

    NASA Astrophysics Data System (ADS)

    Hou, H. S.

    1985-07-01

    An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than descriptions of a particular system. Recent technology advances and research in this field are also mentioned.

  17. ALISA: adaptive learning image and signal analysis

    NASA Astrophysics Data System (ADS)

    Bock, Peter

    1999-01-01

    ALISA (Adaptive Learning Image and Signal Analysis) is an adaptive statistical learning engine that may be used to detect and classify the surfaces and boundaries of objects in images. The engine has been designed, implemented, and tested at both the George Washington University and the Research Institute for Applied Knowledge Processing in Ulm, Germany over the last nine years with major funding from Robert Bosch GmbH and Lockheed-Martin Corporation. The design of ALISA was inspired by the multi-path cortical-column architecture and adaptive functions of the mammalian visual cortex.

  18. Adaptive discrete cosine transform based image coding

    NASA Astrophysics Data System (ADS)

    Hu, Neng-Chung; Luoh, Shyan-Wen

    1996-04-01

    In this discrete cosine transform (DCT) based image coding, the DCT kernel matrix is decomposed into a product of two matrices. The first matrix is called the discrete cosine preprocessing transform (DCPT), whose kernels are plus or minus 1 or plus or minus one-half. The second matrix is the postprocessing stage treated as a correction stage that converts the DCPT to the DCT. On applying the DCPT to image coding, image blocks are processed by the DCPT, then a decision is made to determine whether the processed image blocks are inactive or active in the DCPT domain. If the processed image blocks are inactive, then the compactness of the processed image blocks is the same as that of the image blocks processed by the DCT. However, if the processed image blocks are active, a correction process is required; this is achieved by multiplying the processed image block by the postprocessing stage. As a result, this adaptive image coding achieves the same performance as the DCT image coding, and both the overall computation and the round-off error are reduced, because both the DCPT and the postprocessing stage can be implemented by distributed arithmetic or fast computation algorithms.
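    The DCT kernel matrix that the DCPT factorizes can be illustrated with a standard 8x8 block transform. This sketch shows only the baseline DCT coding step (forward transform, coefficient selection, inverse); the DCPT/postprocessing factorization itself is not reproduced here:

    ```python
    import numpy as np

    def dct_matrix(n=8):
        """Orthonormal DCT-II kernel matrix C, so that Y = C @ X @ C.T."""
        k = np.arange(n)
        C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C[0, :] *= 1.0 / np.sqrt(2.0)
        return C * np.sqrt(2.0 / n)

    def block_dct_code(block, keep=8):
        """2-D DCT of a block, keep the `keep` largest coefficients, invert."""
        C = dct_matrix(block.shape[0])
        Y = C @ block @ C.T                       # forward 2-D DCT
        thresh = np.sort(np.abs(Y), axis=None)[-keep]
        Yq = np.where(np.abs(Y) >= thresh, Y, 0.0)
        return C.T @ Yq @ C                       # inverse 2-D DCT

    # Demo: a smooth 8x8 block survives aggressive coefficient pruning.
    b = np.outer(np.linspace(0.0, 1.0, 8), np.linspace(0.0, 1.0, 8))
    rec = block_dct_code(b, keep=8)
    ```

    Because the kernel matrix is orthonormal, keeping all coefficients reconstructs the block exactly; energy compaction is what makes keeping only a few coefficients acceptable for smooth blocks.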

  19. Digital image processing.

    PubMed

    Seeram, Euclid

    2004-01-01

    Digital image processing is now commonplace in radiology, nuclear medicine and sonography. This article outlines underlying principles and concepts of digital image processing. After completing this article, readers should be able to: List the limitations of film-based imaging. Identify major components of a digital imaging system. Describe the history and application areas of digital image processing. Discuss image representation and the fundamentals of digital image processing. Outline digital image processing techniques and processing operations used in selected imaging modalities. Explain the basic concepts and visualization tools used in 3-D and virtual reality imaging. Recognize medical imaging informatics as a new area of specialization for radiologic technologists.

  20. Adaptive processing for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Crane, R. B.; Reyer, J. F.

    1975-01-01

    Analytical and test results on the use of adaptive processing on LANDSAT data are presented. The Kalman filter was used as a framework to contain different adapting techniques. When LANDSAT MSS data were used, all of the modifications made to the Kalman filter performed the functions for which they were designed. It was found that adaptive processing could provide compensation for incorrect signature means, within limits. However, if the data were such that poor classification accuracy would be obtained when the correct means were used, then adaptive processing would not improve the accuracy and might well lower it even further.

  1. Color image diffusion using adaptive bilateral filter.

    PubMed

    Xie, Jun; Ann Heng, Pheng

    2005-01-01

    In this paper, we propose an approach to diffuse color images based on the bilateral filter. Real image data has a level of uncertainty that is manifested in the variability of measures assigned to pixels. This uncertainty is usually interpreted as noise and considered an undesirable component of the image data. Image diffusion can smooth away small-scale structures and noise while retaining important features, thus improving the performances for many image processing algorithms such as image compression, segmentation and recognition. The bilateral filter is noniterative, simple and fast. It has been shown to give similar and possibly better filtering results than iterative approaches. However, the performance of this filter is greatly affected by the choice of the parameters of the filtering kernels. In order to remove noise and maintain the significant features on images, we extend the bilateral filter by introducing an adaptive domain spread into the nonlinear diffusion scheme. For color images, we employ the CIE-Lab color system to describe input images and the filtering process is operated using three channels together. Our analysis shows that the proposed method is more suitable for preserving strong edges on noisy images than the original bilateral filter. Empirical results on both natural images and color medical images confirm the novel method's advantages, and show it can diffuse various kinds of color images correctly and efficiently.
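    The underlying bilateral filter can be sketched as follows. This is the classical fixed-parameter grayscale form, not the adaptive domain-spread extension the paper proposes; `sigma_s` and `sigma_r` are manually chosen here, which is exactly the parameter sensitivity the authors address:

    ```python
    import numpy as np

    def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
        """Noniterative bilateral filter for a grayscale image.

        Each output pixel is a weighted mean of its neighbors; the weight
        combines spatial closeness (sigma_s) with photometric similarity
        (sigma_r), so strong edges are preserved while flat regions smooth.
        """
        pad = np.pad(img, radius, mode='edge')
        out = np.zeros_like(img, dtype=float)
        norm = np.zeros_like(img, dtype=float)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = pad[radius + dy: radius + dy + img.shape[0],
                              radius + dx: radius + dx + img.shape[1]]
                w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                w_r = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))
                out += w_s * w_r * shifted
                norm += w_s * w_r
        return out / norm
    ```

    With a small `sigma_r`, pixels across a strong edge receive near-zero weight, so the edge is left almost untouched.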

  2. Image processing in astronomy

    NASA Astrophysics Data System (ADS)

    Berry, Richard

    1994-04-01

    Today's personal computers are more powerful than the mainframes that processed images during the early days of space exploration. We have entered an age in which anyone can do image processing. Topics covering the following aspects of image processing are discussed: digital-imaging basics, image calibration, image analysis, scaling, spatial enhancements, and compositing.

  3. Filter for biomedical imaging and image processing.

    PubMed

    Mondal, Partha P; Rajan, K; Ahmad, Imteyaz

    2006-07-01

    Image filtering techniques have numerous potential applications in biomedical imaging and image processing. The design of filters largely depends on a priori knowledge about the type of noise corrupting the image. This makes the standard filters application-specific. Widely used filters such as average, Gaussian, and Wiener reduce noisy artifacts by smoothing. However, this operation normally results in smoothing of the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image nonsmooth. An integrated general approach to design a finite impulse response filter based on Hebbian learning is proposed for optimal image filtering. This algorithm exploits the interpixel correlation by updating the filter coefficients using Hebbian learning. The algorithm is made iterative for achieving efficient learning from the neighborhood pixels. This algorithm performs optimal smoothing of the noisy image by preserving high-frequency as well as low-frequency features. Evaluation results show that the proposed finite impulse response filter is robust under various noise distributions such as Gaussian noise, salt-and-pepper noise, and speckle noise. Furthermore, the proposed approach does not require any a priori knowledge about the type of noise. The number of unknown parameters is few, and most of these parameters are adaptively obtained from the processed image. The proposed filter is successfully applied for image reconstruction in a positron emission tomography imaging modality. The images reconstructed by the proposed algorithm are found to be superior in quality compared with those reconstructed by existing PET image reconstruction methodologies.
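    The core idea, updating FIR filter coefficients from neighborhood pixels with a Hebbian rule, can be illustrated with a toy sketch. This uses Oja's stabilized variant of the Hebbian update and is only loosely modeled on the paper's algorithm; the learning rate, window size, and epoch count are arbitrary assumptions:

    ```python
    import numpy as np

    def hebbian_fir_weights(img, radius=1, eta=0.01, epochs=5):
        """Learn FIR filter coefficients from interpixel correlation.

        Oja's rule (a normalized Hebbian update) is applied to image
        patches: w += eta * y * (x - y * w), with y = w . x.  Because
        neighboring pixels are positively correlated, the learned kernel
        converges toward an all-positive smoothing (low-pass) filter.
        """
        size = 2 * radius + 1
        w = np.ones(size * size) / (size * size)
        h, wd = img.shape
        for _ in range(epochs):
            for i in range(radius, h - radius):
                for j in range(radius, wd - radius):
                    x = img[i - radius:i + radius + 1,
                            j - radius:j + radius + 1].ravel()
                    y = w @ x
                    w += eta * y * (x - y * w)   # Oja-stabilized Hebbian step
        return w.reshape(size, size)
    ```

    The learned kernel can then be applied as an ordinary FIR smoothing filter; the point of the sketch is that the coefficients come from the image statistics rather than from a fixed design.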

  4. Image-Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1986-01-01

    Apple Image-Processing Educator (AIPE) explores ability of microcomputers to provide personalized computer-assisted instruction (CAI) in digital image processing of remotely sensed images. AIPE is "proof-of-concept" system, not polished production system. User-friendly prompts provide access to explanations of common features of digital image processing and of sample programs that implement these features.

  5. Adaptive marginal median filter for colour images.

    PubMed

    Morillas, Samuel; Gregori, Valentín; Sapena, Almanzor

    2011-01-01

    This paper describes a new filter for impulse noise reduction in colour images which is aimed at improving the noise reduction capability of the classical vector median filter. The filter is inspired by the application of a vector marginal median filtering process over a selected group of pixels in each filtering window. This selection, which is based on the vector median, along with the application of the marginal median operation constitutes an adaptive process that leads to a more robust filter design. Also, the proposed method is able to process colour images without introducing colour artifacts. Experimental results show that the images filtered with the proposed method contain less noisy pixels than those obtained through the vector median filter.
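    The selection-plus-marginal-median idea can be sketched as follows. This is a simplified reading of the method: the fixed `keep` parameter stands in for the paper's adaptive selection rule, and L1 distances are assumed:

    ```python
    import numpy as np

    def adaptive_marginal_median(img, radius=1, keep=5):
        """Impulse-noise filter for colour images (sketch of the idea).

        In each window, the vector median (the pixel minimising the summed
        distance to all others) anchors a selection of the `keep` most
        central vectors; the output is their channel-wise (marginal) median,
        so impulses, which rank last, are excluded before the median.
        """
        h, w, c = img.shape
        out = img.astype(float).copy()
        pad = np.pad(img.astype(float),
                     ((radius, radius), (radius, radius), (0, 0)), mode='edge')
        n = 2 * radius + 1
        for i in range(h):
            for j in range(w):
                win = pad[i:i + n, j:j + n].reshape(-1, c)
                # summed L1 distance of each vector to all others
                d = np.abs(win[:, None, :] - win[None, :, :]).sum(axis=(1, 2))
                order = np.argsort(d)             # vector median first
                out[i, j] = np.median(win[order[:keep]], axis=0)
        return out
    ```

    Because the marginal median is taken only over vectors close to the vector median, the filter avoids the colour artifacts a plain channel-wise median can introduce.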

  6. Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation

    SciTech Connect

    Keller, Brad M.; Nathan, Diane L.; Wang Yan; Zheng Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina

    2012-08-15

    Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., 'FOR PROCESSING') and vendor postprocessed (i.e., 'FOR PRESENTATION'), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. 
Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then
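    The fuzzy c-means step at the heart of the algorithm can be illustrated on 1-D gray-level samples. This is the textbook FCM update, not the paper's adaptively optimized version; the fuzzifier `m`, iteration count, and cluster count are fixed assumptions here:

    ```python
    import numpy as np

    def fuzzy_cmeans(x, c=3, m=2.0, iters=100, seed=0):
        """Fuzzy c-means on 1-D samples (e.g. gray-level intensities).

        Memberships u[i, k] in [0, 1] (summing to 1 over the clusters) and
        the cluster centers are alternately updated, as in the standard
        FCM objective with fuzzifier m.
        """
        rng = np.random.default_rng(seed)
        centers = rng.choice(x, size=c, replace=False).astype(float)
        for _ in range(iters):
            d = np.abs(x[:, None] - centers[None, :]) + 1e-12
            u = d ** (-2.0 / (m - 1.0))
            u /= u.sum(axis=1, keepdims=True)     # memberships sum to 1
            centers = ((u ** m).T @ x) / (u ** m).sum(axis=0)
        return centers, u
    ```

    In the paper's pipeline the number of clusters is itself derived from image properties; here it is simply passed in.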

  7. Multi-scale Adaptive Computational Ghost Imaging

    PubMed Central

    Sun, Shuai; Liu, Wei-Tao; Lin, Hui-Zu; Zhang, Er-Feng; Liu, Ji-Ying; Li, Quan; Chen, Ping-Xing

    2016-01-01

    In some cases of imaging, wide spatial range and high spatial resolution are both required, which requires high-performance detection devices and huge resource consumption for data processing. We propose and demonstrate a multi-scale adaptive imaging method based on the idea of computational ghost imaging, which can obtain a rough outline of the whole scene over a wide range, then identify the parts of interest and achieve high-resolution details of those parts, by controlling the field of view and the transverse coherence width of the pseudo-thermal field illuminating the scene with a spatial light modulator. Compared to typical ghost imaging, the resource consumption can be dramatically reduced using our scheme. PMID:27841339
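    The correlation reconstruction underlying computational ghost imaging can be sketched as follows. This is a basic single-scale illustration; the multi-scale adaptive control of field of view and coherence width described above is not modeled:

    ```python
    import numpy as np

    def ghost_image(obj, n_patterns=4000, seed=0):
        """Computational ghost imaging sketch.

        Known random illumination patterns I_k are projected onto the
        object; a single-pixel (bucket) detector records B_k = sum(I_k * obj).
        The image is recovered from the correlation
        G = <B_k I_k> - <B_k><I_k>.
        """
        rng = np.random.default_rng(seed)
        patterns = rng.random((n_patterns,) + obj.shape)
        bucket = (patterns * obj).sum(axis=(1, 2))    # bucket signals
        G = (bucket[:, None, None] * patterns).mean(axis=0)
        return G - bucket.mean() * patterns.mean(axis=0)
    ```

    Increasing the number of patterns raises the signal-to-noise ratio of the recovered image, which is why resource consumption scales with range and resolution in the non-adaptive scheme.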

  8. Multi-scale Adaptive Computational Ghost Imaging

    NASA Astrophysics Data System (ADS)

    Sun, Shuai; Liu, Wei-Tao; Lin, Hui-Zu; Zhang, Er-Feng; Liu, Ji-Ying; Li, Quan; Chen, Ping-Xing

    2016-11-01

    In some cases of imaging, wide spatial range and high spatial resolution are both required, which requires high-performance detection devices and huge resource consumption for data processing. We propose and demonstrate a multi-scale adaptive imaging method based on the idea of computational ghost imaging, which can obtain a rough outline of the whole scene over a wide range, then identify the parts of interest and achieve high-resolution details of those parts, by controlling the field of view and the transverse coherence width of the pseudo-thermal field illuminating the scene with a spatial light modulator. Compared to typical ghost imaging, the resource consumption can be dramatically reduced using our scheme.

  9. Multispectral imaging and image processing

    NASA Astrophysics Data System (ADS)

    Klein, Julie

    2014-02-01

    The color accuracy of conventional RGB cameras is not sufficient for many color-critical applications. One of these applications, namely the measurement of color defects in yarns, is why Prof. Til Aach and the Institute of Image Processing and Computer Vision (RWTH Aachen University, Germany) started off with multispectral imaging. The first acquisition device was a camera using a monochrome sensor and seven bandpass color filters positioned sequentially in front of it. The camera allowed sampling the visible wavelength range more accurately and reconstructing the spectra for each acquired image position. An overview will be given over several optical and imaging aspects of the multispectral camera that have been investigated. For instance, optical aberrations caused by filters and camera lens deteriorate the quality of captured multispectral images. The different aberrations were analyzed thoroughly and compensated based on models for the optical elements and the imaging chain by utilizing image processing. With this compensation, geometrical distortions disappear and sharpness is enhanced, without reducing the color accuracy of multispectral images. Strong foundations in multispectral imaging were laid and a fruitful cooperation was initiated with Prof. Bernhard Hill. Current research topics like stereo multispectral imaging and goniometric multispectral measurements that are further explored with his expertise will also be presented in this work.

  10. Efficient adaptive thresholding with image masks

    NASA Astrophysics Data System (ADS)

    Oh, Young-Taek; Hwang, Youngkyoo; Kim, Jung-Bae; Bang, Won-Chul

    2014-03-01

    Adaptive thresholding is a useful technique for document analysis. In medical image processing, it is also helpful for segmenting structures, such as diaphragms or blood vessels. This technique sets a threshold using local information around a pixel, then binarizes the pixel according to the value. Although this technique is robust to changes in illumination, it takes a significant amount of time to compute thresholds because it requires adding all of the neighboring pixels. Integral images can alleviate this overhead; however, medical images, such as ultrasound, often come with image masks, and ordinary algorithms often cause artifacts. The main problem is that the shape of the summing area is not rectangular near the boundaries of the image mask. For example, the threshold at the boundary of the mask is incorrect because pixels on the mask image are also counted. Our key idea to cope with this problem is computing the integral image for the image mask to count the valid number of pixels. Our method is implemented on a GPU using CUDA, and experimental results show that our algorithm is 164 times faster than a naïve CPU algorithm for averaging.
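    The key trick, a second integral image over the mask to count only valid pixels, can be sketched in NumPy. This is a CPU illustration; the paper's CUDA implementation and its exact thresholding rule are not reproduced:

    ```python
    import numpy as np

    def masked_adaptive_threshold(img, mask, radius=7, bias=0.0):
        """Adaptive thresholding restricted to an image mask.

        Two integral images are built: one over the masked intensities and
        one over the mask itself.  The local mean divides by the count of
        *valid* pixels only, so windows straddling the mask boundary are
        not biased by pixels outside the mask.
        """
        m = mask.astype(float)
        ii_img = np.pad(np.cumsum(np.cumsum(img * m, axis=0), axis=1),
                        ((1, 0), (1, 0)))
        ii_cnt = np.pad(np.cumsum(np.cumsum(m, axis=0), axis=1),
                        ((1, 0), (1, 0)))

        h, w = img.shape
        ys = np.clip(np.arange(h) - radius, 0, h)
        ye = np.clip(np.arange(h) + radius + 1, 0, h)
        xs = np.clip(np.arange(w) - radius, 0, w)
        xe = np.clip(np.arange(w) + radius + 1, 0, w)

        def box(ii):
            # windowed sum from the integral image, per output pixel
            return ii[ye][:, xe] - ii[ye][:, xs] - ii[ys][:, xe] + ii[ys][:, xs]

        count = np.maximum(box(ii_cnt), 1.0)      # avoid division by zero
        local_mean = box(ii_img) / count
        return (img > local_mean + bias) & mask
    ```

    Without the mask integral image, the local mean near the mask boundary would be dragged toward the (typically dark) masked-out region, producing the boundary artifacts described above.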

  11. An adaptive filter for smoothing noisy radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Stiles, J. A.; Shanmugam, K. S.; Holtzman, J. C.; Smith, S. A.

    1981-01-01

    A spatial domain adaptive Wiener filter for smoothing radar images corrupted by multiplicative noise is presented. The filter is optimum in a minimum mean squared error sense, computationally efficient, and preserves edges in the image better than other filters. The proposed algorithm can also be used for processing optical images with illumination variations that have a multiplicative effect.
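    A minimum-mean-squared-error adaptive filter for multiplicative noise can be sketched with a Lee-type formulation, which is closely related to the filter described above but not identical to it (the published filter's weighting differs; `noise_cv` is an assumed speckle coefficient of variation):

    ```python
    import numpy as np

    def lee_filter(img, radius=2, noise_cv=0.25):
        """Adaptive (Lee-type) smoothing for multiplicative speckle noise.

        The output is m + k * (img - m), where m is the local mean and the
        gain k adapts to local statistics: k -> 0 in homogeneous regions
        (full smoothing), k -> 1 near edges (detail preserved).
        """
        n = 2 * radius + 1
        pad = np.pad(img.astype(float), radius, mode='edge')
        # local mean and variance over sliding windows
        windows = np.lib.stride_tricks.sliding_window_view(pad, (n, n))
        m = windows.mean(axis=(2, 3))
        v = windows.var(axis=(2, 3))
        noise_var = (noise_cv * m) ** 2           # multiplicative noise model
        k = np.clip((v - noise_var) / np.maximum(v, 1e-12), 0.0, 1.0)
        return m + k * (img - m)
    ```

    In flat regions the local variance is explained entirely by the noise model, so the gain collapses to zero and the pixel is replaced by the local mean; at edges the excess variance keeps the gain near one.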

  12. Hyperspectral image processing methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  13. Biomedical image processing.

    PubMed

    Huang, H K

    1981-01-01

    Biomedical image processing is a very broad field; it covers everything from biomedical signal gathering, image forming, picture processing, and image display to medical diagnosis based on features extracted from images. This article reviews this topic in both its fundamentals and applications. In its fundamentals, some basic image processing techniques including outlining, deblurring, noise cleaning, filtering, search, classical analysis and texture analysis have been reviewed together with examples. The state-of-the-art image processing systems have been introduced and discussed in two categories: general purpose image processing systems and image analyzers. In order for these systems to be effective for biomedical applications, special biomedical image processing languages have to be developed. The combination of both hardware and software leads to clinical imaging devices. Two different types of clinical imaging devices have been discussed. There are radiological imaging modalities, which include radiography, thermography, ultrasound, nuclear medicine and CT. Among these, thermography is the most noninvasive but is limited in application due to the low energy of its source. X-ray CT is excellent for static anatomical images and is moving toward the measurement of dynamic function, whereas nuclear imaging is moving toward organ metabolism and ultrasound is toward tissue physical characteristics. Heart imaging is one of the most interesting and challenging research topics in biomedical image processing; current methods including the invasive-technique cineangiography, and noninvasive ultrasound, nuclear medicine, transmission, and emission CT methodologies have been reviewed. Two current federally funded research projects in heart imaging, the dynamic spatial reconstructor and the dynamic cardiac three-dimensional densitometer, should bring some fruitful results in the near future. Microscopic imaging technique is very different from the radiological imaging technique in the sense that

  14. Apple Image Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1981-01-01

    A software system design is proposed and demonstrated with pilot-project software. The system permits the Apple II microcomputer to be used for personalized computer-assisted instruction in the digital image processing of LANDSAT images. The programs provide data input, menu selection, graphic and hard-copy displays, and both general and detailed instructions. The pilot-project results are considered to be successful indicators of the capabilities and limits of microcomputers for digital image processing education.

  15. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  16. Image processing mini manual

    NASA Technical Reports Server (NTRS)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  17. Adaptive fingerprint image enhancement with emphasis on preprocessing of data.

    PubMed

    Bartůnek, Josef Ström; Nilsson, Mikael; Sällberg, Benny; Claesson, Ingvar

    2013-02-01

    This article proposes several improvements to an adaptive fingerprint enhancement method based on contextual filtering. The term adaptive implies that the parameters of the method are automatically adjusted based on the input fingerprint image. The method comprises five processing blocks, four of which are updated in the proposed system; hence, the overall system is novel. The four updated processing blocks are: 1) preprocessing; 2) global analysis; 3) local analysis; and 4) matched filtering. In the preprocessing and local analysis blocks, a nonlinear dynamic range adjustment method is used. In the global analysis and matched filtering blocks, different forms of order-statistical filters are applied. Together, these blocks yield a new and improved adaptive fingerprint image enhancement method. The performance of the updated processing blocks is presented in the evaluation part of the paper, where the algorithm is evaluated against the NIST-developed NBIS software for fingerprint recognition on the FVC databases.
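    The nonlinear dynamic range adjustment mentioned above can take many forms; the following is a minimal sketch assuming a generic local mean/variance normalization followed by a sigmoid. The function name, window size and gain are illustrative, not the authors' exact formulation.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_dynamic_range_adjust(img, win=15, gain=4.0):
        """Normalize by local mean/std, then squash with a sigmoid.

        A generic illustration of nonlinear dynamic range adjustment;
        the block sizes and exact nonlinearity in the paper may differ.
        """
        img = img.astype(np.float64)
        mean = uniform_filter(img, win)
        sq_mean = uniform_filter(img ** 2, win)
        std = np.sqrt(np.maximum(sq_mean - mean ** 2, 1e-12))
        z = (img - mean) / std                   # locally zero-mean, unit-variance
        return 1.0 / (1.0 + np.exp(-gain * z))   # map into (0, 1)
    ```

    The sigmoid compresses outliers while the local normalization equalizes ridge contrast across light and dark regions of the print.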

  18. Image Processing System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.

  19. Adaptation Aftereffects in the Perception of Radiological Images

    PubMed Central

    Kompaniez, Elysse; Abbey, Craig K.; Boone, John M.; Webster, Michael A.

    2013-01-01

    Radiologists must classify and interpret medical images on the basis of visual inspection. We examined how the perception of radiological scans might be affected by common processes of adaptation in the visual system. Adaptation selectively adjusts sensitivity to the properties of the stimulus in current view, inducing an aftereffect in the appearance of stimuli viewed subsequently. These perceptual changes have been found to affect many visual attributes, but whether they are relevant to medical image perception is not well understood. To examine this we tested whether aftereffects could be generated by the characteristic spatial structure of radiological scans, and whether this could bias their appearance along dimensions that are routinely used to classify them. Measurements were focused on the effects of adaptation to images of normal mammograms, and were tested in observers who were not radiologists. Tissue density in mammograms is evaluated visually and ranges from "dense" to "fatty." Arrays of images varying in intermediate levels between these categories were created by blending dense and fatty images with different weights. Observers first adapted by viewing image samples of dense or fatty tissue, and then judged the appearance of the intermediate images by using a texture matching task. This revealed pronounced perceptual aftereffects – prior exposure to dense images caused an intermediate image to appear more fatty and vice versa. Moreover, the appearance of the adapting images themselves changed with prolonged viewing, so that they became less distinctive as textures. These aftereffects could not be accounted for by the contrast differences or power spectra of the images, and instead tended to follow from the phase spectrum. Our results suggest that observers can selectively adapt to the properties of radiological images, and that this selectivity could strongly impact the perceived textural characteristics of the images. PMID:24146833
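    The intermediate stimuli described above are weighted blends of two source images; a minimal sketch of that construction follows (the function `blend_images` is hypothetical, not the study's stimulus-generation code).

    ```python
    import numpy as np

    def blend_images(dense, fatty, w):
        """Linear blend: w=1 gives the fully 'dense' image, w=0 the
        fully 'fatty' one; intermediate w gives the morph continuum.

        A generic sketch of building intermediates between two texture
        classes, not the study's actual stimulus pipeline.
        """
        dense = dense.astype(np.float64)
        fatty = fatty.astype(np.float64)
        return w * dense + (1.0 - w) * fatty

    # An array of intermediates at evenly spaced blend weights:
    # levels = [blend_images(d, f, w) for w in np.linspace(0, 1, 11)]
    ```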

  20. Image Processing Software

    NASA Astrophysics Data System (ADS)

    Bosio, M. A.

    1990-11-01

    ABSTRACT: A brief description of astronomical image processing software is presented. The software was developed on a Digital MicroVAX II computer system. DATA ANALYSIS - IMAGE PROCESSING

  1. Adapted polarization state contrast image.

    PubMed

    Richert, Michael; Orlik, Xavier; De Martino, Antonello

    2009-08-03

    We propose a general method to maximize the polarimetric contrast between an object and its background using a predetermined illumination polarization state. After a first estimation of the polarimetric properties of the scene by classical Mueller imaging, we evaluate the incident polarized field that induces scattered polarization states from the object and the background that are as opposite as possible on the Poincaré sphere. With a detection method optimized for a 2-channel imaging system, Monte Carlo simulations of low-flux coherent imaging are performed with various objects and backgrounds having different properties of retardance, dichroism and depolarization. With respect to classical Mueller imaging, possibly combined with the polar decomposition, our results show a noticeable increase in the Bhattacharyya distance used as our contrast parameter.
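    The Bhattacharyya distance used as the contrast parameter has a closed form when the two channel distributions are modeled as univariate Gaussians; a hedged sketch under that assumption (the paper's exact estimator may differ):

    ```python
    import numpy as np

    def bhattacharyya_gaussian(m1, s1, m2, s2):
        """Bhattacharyya distance between two 1-D Gaussians N(m, s^2).

        Used here as a generic separability measure between object and
        background intensity distributions.
        """
        v1, v2 = s1 ** 2, s2 ** 2
        return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2.0))
                + 0.25 * (m1 - m2) ** 2 / (v1 + v2))
    ```

    The distance is zero for identical distributions and grows with both mean separation and variance mismatch, which is what makes it a natural contrast criterion.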

  2. Methods in Astronomical Image Processing

    NASA Astrophysics Data System (ADS)

    Jörsäter, S.

    A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future

  3. Extreme Adaptive Optics Planet Imager

    NASA Astrophysics Data System (ADS)

    Macintosh, B.; Graham, J. R.; Ghez, A.; Kalas, P.; Lloyd, J.; Makidon, R.; Olivier, S.; Patience, J.; Perrin, M.; Poyneer, L.; Severson, S.; Sheinis, A.; Sivaramakrishnan, A.; Troy, M.; Wallace, J.; Wilhelmsen, J.

    2002-12-01

    Direct detection of photons emitted or reflected by extrasolar planets is the next major step in extrasolar planet studies. Current adaptive optics (AO) systems, with <300 subapertures and Strehl ratios of 0.4-0.7, can achieve contrast levels of 10^6 at 2" separations; this is sufficient to see very young planets in wide orbits but insufficient to detect solar systems more like our own. Contrast levels of 10^7-10^8 in the near-IR are needed to probe a significant part of the extrasolar planet phase space. The NSF Center for Adaptive Optics is carrying out a design study for a dedicated ultra-high-contrast "Extreme" adaptive optics system for an 8-10 m telescope. With 3000 controlled subapertures it should achieve Strehl ratios >0.9 in the near-IR. Using a spatially filtered wavefront sensor, the system will be optimized to control scattered light over a large radius and suppress artifacts caused by static errors. We predict that it will achieve contrast levels of 10^7-10^8 around a large sample of stars (R<7-10), sufficient to detect Jupiter-like planets through their near-IR emission over a wide range of ages and masses. The system will be capable of a variety of high-contrast science, including studying circumstellar dust disks at densities a factor of 10-100 lower than currently feasible and a systematic inventory of other solar systems on 10-100 AU scales. This work was supported by the NSF Science and Technology Center for Adaptive Optics, managed by UC Santa Cruz under AST-9876783. Portions of this work were performed under the auspices of the U.S. Department of Energy under contract No. W-7405-Eng-48.

  4. Adaptive optics and phase diversity imaging for responsive space applications.

    SciTech Connect

    Smith, Mark William; Wick, David Victor

    2004-11-01

    The combination of phase diversity and adaptive optics offers great flexibility. Phase diverse images can be used to diagnose aberrations and then provide feedback control to the optics to correct the aberrations. Alternatively, phase diversity can be used to partially compensate for aberrations during post-detection image processing. The adaptive optic can produce simple defocus or more complex types of phase diversity. This report presents an analysis, based on numerical simulations, of the efficiency of different modes of phase diversity with respect to compensating for specific aberrations during post-processing. It also comments on the efficiency of post-processing versus direct aberration correction. The construction of a bench top optical system that uses a membrane mirror as an active optic is described. The results of characterization tests performed on the bench top optical system are presented. The work described in this report was conducted to explore the use of adaptive optics and phase diversity imaging for responsive space applications.

  5. Image-Specific Prior Adaptation for Denoising.

    PubMed

    Lu, Xin; Lin, Zhe; Jin, Hailin; Yang, Jianchao; Wang, James Z

    2015-12-01

    Image priors are essential to many image restoration applications, including denoising, deblurring, and inpainting. Existing methods use either priors from the given image (internal) or priors from a separate collection of images (external). We find through statistical analysis that unifying the internal and external patch priors may yield a better patch prior. We propose a novel prior learning algorithm that combines the strength of both internal and external priors. In particular, we first learn a generic Gaussian mixture model from a collection of training images and then adapt the model to the given image by simultaneously adding additional components and refining the component parameters. We apply this image-specific prior to image denoising. The experimental results show that our approach yields better or competitive denoising results in terms of both the peak signal-to-noise ratio and structural similarity.
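    The external-prior stage described above, a generic GMM learned from a collection of training patches, can be sketched as follows, assuming scikit-learn's `GaussianMixture`; the image-specific adaptation step (adding components and refining their parameters) is not reproduced here, and the patch sampling details are illustrative.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def learn_patch_prior(images, patch=8, n_components=5, seed=0):
        """Fit a generic Gaussian mixture over flattened image patches.

        A minimal sketch of the 'external prior' stage only; the paper
        then adapts this model to each given image.
        """
        rng = np.random.default_rng(seed)
        patches = []
        for img in images:
            for _ in range(200):  # sample random patch locations
                y = rng.integers(0, img.shape[0] - patch)
                x = rng.integers(0, img.shape[1] - patch)
                p = img[y:y + patch, x:x + patch].astype(np.float64).ravel()
                patches.append(p - p.mean())  # remove the DC component
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type='full', random_state=seed)
        gmm.fit(np.asarray(patches))
        return gmm
    ```

    A denoiser would then score noisy patches against this mixture and shrink them toward the most likely component, which is where the internal (image-specific) refinement pays off.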

  6. Adaptive optics imaging of the retina

    PubMed Central

    Battu, Rajani; Dabir, Supriya; Khanna, Anjani; Kumar, Anupama Kiran; Roy, Abhijit Sinha

    2014-01-01

    Adaptive optics is a relatively new tool that is available to ophthalmologists for study of cellular level details. In addition to the axial resolution provided by the spectral-domain optical coherence tomography, adaptive optics provides an excellent lateral resolution, enabling visualization of the photoreceptors, blood vessels and details of the optic nerve head. We attempt a mini review of the current role of adaptive optics in retinal imaging. PubMed search was performed with key words Adaptive optics OR Retina OR Retinal imaging. Conference abstracts were searched from the Association for Research in Vision and Ophthalmology (ARVO) and American Academy of Ophthalmology (AAO) meetings. In total, 261 relevant publications and 389 conference abstracts were identified. PMID:24492503

  7. Adaptive optics imaging of the retina.

    PubMed

    Battu, Rajani; Dabir, Supriya; Khanna, Anjani; Kumar, Anupama Kiran; Roy, Abhijit Sinha

    2014-01-01

    Adaptive optics is a relatively new tool that is available to ophthalmologists for study of cellular level details. In addition to the axial resolution provided by the spectral-domain optical coherence tomography, adaptive optics provides an excellent lateral resolution, enabling visualization of the photoreceptors, blood vessels and details of the optic nerve head. We attempt a mini review of the current role of adaptive optics in retinal imaging. PubMed search was performed with key words Adaptive optics OR Retina OR Retinal imaging. Conference abstracts were searched from the Association for Research in Vision and Ophthalmology (ARVO) and American Academy of Ophthalmology (AAO) meetings. In total, 261 relevant publications and 389 conference abstracts were identified.

  8. Image processing occupancy sensor

    DOEpatents

    Brackney, Larry J.

    2016-09-27

    A system and method of detecting occupants in a building automation system environment using image-based occupancy detection and position determination. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the position and location of the occupants, the system can finely control the elements to optimize conditions for the occupants and optimize energy usage, among other advantages.

  9. Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS)

    NASA Technical Reports Server (NTRS)

    Masek, Jeffrey G.

    2006-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) project is creating a record of forest disturbance and regrowth for North America from the Landsat satellite record, in support of the carbon modeling activities. LEDAPS relies on the decadal Landsat GeoCover data set supplemented by dense image time series for selected locations. Imagery is first atmospherically corrected to surface reflectance, and then change detection algorithms are used to extract disturbance area, type, and frequency. Reuse of the MODIS Land processing system (MODAPS) architecture allows rapid throughput of over 2200 MSS, TM, and ETM+ scenes. Initial ("Beta") surface reflectance products are currently available for testing, and initial continental disturbance products will be available by the middle of 2006.

  10. Quantum image processing?

    NASA Astrophysics Data System (ADS)

    Mastriani, Mario

    2017-01-01

    This paper presents a number of problems concerning the practical (real) implementation of the techniques known as quantum image processing. The most serious problem is the recovery of the outcomes after the quantum measurement, which, as this work demonstrates, is equivalent to a noise measurement and is not considered in the literature on the subject. This is due to several factors: (1) a classical algorithm that uses Dirac notation and is then coded in MATLAB does not constitute a quantum algorithm; (2) the literature emphasizes the internal representation of the image but says nothing about the classical-to-quantum and quantum-to-classical interfaces and how these are affected by decoherence; (3) the literature does not mention how to implement these proposed internal representations in a practical way in the laboratory; and (4) given that quantum image processing works with generic qubits, it requires measurements along all axes of the Bloch sphere; among other factors. In return, the technique known as quantum Boolean image processing is mentioned, which works exclusively with computational basis states (CBS). This methodology avoids the problem of quantum measurement, which alters the measured results except in the case of CBS. The above extends to quantum algorithms outside image processing as well.

  11. Local image registration by adaptive filtering.

    PubMed

    Caner, Gulcin; Tekalp, A Murat; Sharma, Gaurav; Heinzelman, Wendi

    2006-10-01

    We propose a new adaptive filtering framework for local image registration, which compensates for the effect of local distortions/displacements without explicitly estimating a distortion/displacement field. To this effect, we formulate local image registration as a two-dimensional (2-D) system identification problem with spatially varying system parameters. We utilize a 2-D adaptive filtering framework to identify the locally varying system parameters, where a new block adaptive filtering scheme is introduced. We discuss the conditions under which the adaptive filter coefficients conform to a local displacement vector at each pixel. Experimental results demonstrate that the proposed 2-D adaptive filtering framework is very successful in modeling and compensation of both local distortions, such as Stirmark attacks, and local motion, such as in the presence of a parallax field. In particular, we show that the proposed method can provide image registration to: a) enable reliable detection of watermarks following a Stirmark attack in nonblind detection scenarios, b) compensate for lens distortions, and c) align multiview images with nonparametric local motion.
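    The core idea, identifying spatially varying 2-D filter coefficients by adaptive filtering, can be illustrated with a toy least-mean-squares (LMS) loop; this is a simplified global variant for intuition only, not the paper's block-adaptive scheme, and the function name and step size are illustrative.

    ```python
    import numpy as np

    def lms_identify(ref, obs, taps=3, mu=0.05):
        """Scan the image and adapt a small 2-D FIR filter by LMS so
        that the filtered `ref` tracks `obs`.

        A toy, global variant of the adaptive-filter idea; the paper
        updates coefficients block-wise and relates them to local
        displacement vectors.
        """
        h = taps // 2
        w = np.zeros((taps, taps))
        errs = []
        for y in range(h, ref.shape[0] - h):
            for x in range(h, ref.shape[1] - h):
                window = ref[y - h:y + h + 1, x - h:x + h + 1]
                e = obs[y, x] - np.sum(w * window)   # prediction error
                w += mu * e * window                 # LMS update
                errs.append(e * e)
        return w, np.asarray(errs)
    ```

    When `obs` is a locally displaced copy of `ref`, the converged coefficients concentrate at the tap corresponding to the displacement, which is the property the paper exploits for registration.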

  12. Acousto-Optic Adaptive Processing (AOAP).

    DTIC Science & Technology

    1983-12-01

    Phase Report, December 1983. ACOUSTO-OPTIC ADAPTIVE PROCESSING (AOAP). General Electric Company; W. A. Penn, D. R. Morgan, A. Aridgides and M. L... Keywords: optical signal processing; acousto-optical modulators; adaptive signal processing; adaptive sidelobe cancellation. ABSTRACT (continued): ...the required operations of multiplication and time delay are provided by acousto-optical (AO) delay lines. The required time integration is provided by...

  13. Block adaptive rate controlled image data compression

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E.; Lee, J.-J.; Schlutsmeyer, A.

    1979-01-01

    A block adaptive rate controlled (BARC) image data compression algorithm is described. It is noted that in the algorithm's principal rate controlled mode, image lines can be coded at selected rates by combining practical universal noiseless coding techniques with block adaptive adjustments in linear quantization. Compression of any source data at chosen rates of 3.0 bits/sample and above can be expected to yield visual image quality with imperceptible degradation. Exact reconstruction will be obtained if the one-dimensional difference entropy is below the selected compression rate. It is noted that the compressor can also be operated as a floating rate noiseless coder by simply not altering the input data quantization. Here, the universal noiseless coder ensures that the code rate is always close to the entropy. Application of BARC image data compression to the Galileo orbiter mission of Jupiter is considered.
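    The rate-control logic, code losslessly when the one-dimensional difference entropy is below the selected rate and otherwise coarsen the quantization, can be sketched as follows. The function names and the step-doubling schedule are illustrative, not the flight algorithm.

    ```python
    import numpy as np

    def difference_entropy(line):
        """Empirical entropy (bits/sample) of the first differences of a line."""
        d = np.diff(line.astype(np.int64))
        _, counts = np.unique(d, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def choose_quantization(line, target_rate):
        """Pick a quantization step: 1 (exact reconstruction) if the
        difference entropy already fits under the target rate, else
        coarsen until it does.

        A simplified stand-in for BARC's block-adaptive adjustment of
        linear quantization.
        """
        step = 1
        while difference_entropy(line // step) > target_rate and step < 256:
            step *= 2
        return step
    ```

    With step 1 the universal noiseless coder reproduces the line exactly, matching the abstract's condition that exact reconstruction is obtained when the difference entropy is below the selected rate.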

  14. Edge-preserving image compression using adaptive lifting wavelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Libao; Qiu, Bingchang

    2015-07-01

    In this paper, a novel 2-D adaptive lifting wavelet transform is presented. The proposed algorithm is designed to further reduce the high-frequency energy of the wavelet transform, improve image compression efficiency, and preserve the edges and textures of the original images more effectively. A new optional direction set, covering the surrounding integer pixels and sub-pixels, is designed; hence, the algorithm adapts far better to the orientation features in local image blocks. To achieve computational efficiency and good coding performance, the complete process of the 2-D adaptive lifting wavelet transform is introduced and implemented. Compared with the traditional lifting-based wavelet transform, adaptive directional lifting, and the direction-adaptive discrete wavelet transform, the new structure reduces the high-frequency wavelet coefficients more effectively, and the texture structures of the reconstructed images are more refined and clear than those of the other methods. The peak signal-to-noise ratio and the subjective quality of the reconstructed images are significantly improved.
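    For reference, a plain (non-adaptive) lifting step, here the LeGall 5/3 predict/update pair in 1-D with periodic boundaries, looks like this; the paper's contribution is choosing the prediction *direction* per block, which this sketch does not model.

    ```python
    import numpy as np

    def lift_53(x):
        """One level of the LeGall 5/3 lifting transform on an
        even-length 1-D signal: split, predict (detail), update
        (approximation). Boundaries are handled periodically.
        """
        x = x.astype(np.float64)
        even, odd = x[0::2].copy(), x[1::2].copy()
        # Predict: each odd sample from its two even neighbours.
        odd -= 0.5 * (even + np.roll(even, -1))
        # Update: push a correction back so the even band keeps the mean.
        even += 0.25 * (odd + np.roll(odd, 1))
        return even, odd

    def unlift_53(even, odd):
        """Exact inverse of lift_53 (lifting steps invert trivially)."""
        even = even - 0.25 * (odd + np.roll(odd, 1))
        odd = odd + 0.5 * (even + np.roll(even, -1))
        x = np.empty(even.size + odd.size)
        x[0::2], x[1::2] = even, odd
        return x
    ```

    Because each lifting step is undone by negating it, the transform is exactly invertible regardless of the predictor, which is what makes direction-adaptive prediction safe to add.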

  15. Image Processing for Teaching.

    ERIC Educational Resources Information Center

    Greenberg, R.; And Others

    1993-01-01

    The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…

  16. Image processing and reconstruction

    SciTech Connect

    Chartrand, Rick

    2012-06-15

    This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

  17. Image-Processing Program

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Hull, D. R.

    1994-01-01

    IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines; within the subroutines are sub-subroutines, also selected via the keyboard. The algorithm has possible scientific, industrial, and biomedical applications in the study of flows in materials, the analysis of steels and ores, and pathology, respectively.

  18. Image processing software for imaging spectrometry

    NASA Technical Reports Server (NTRS)

    Mazer, Alan S.; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    The paper presents a software system, Spectral Analysis Manager (SPAM), which has been specifically designed and implemented to provide the exploratory analysis tools necessary for imaging spectrometer data, using only modest computational resources. The basic design objectives are described as well as the major algorithms designed or adapted for high-dimensional images. Included in a discussion of system implementation are interactive data display, statistical analysis, image segmentation and spectral matching, and mixture analysis.

  19. Image Processing Research

    DTIC Science & Technology

    1975-09-30

    Technical Journal, Vol. 36, pp. 653-709, May 1957. 4. Image Restoration and Enhancement Projects. Image restoration and image enhancement are... where sigma_n^2 is the noise energy and I is an identity matrix. Color Image Scanner Calibration: a common problem in the... line of the image. The statistics of the process N(k) can now be given in terms of the statistics of m, s^2, and the sequence W.

  20. Image processing techniques for acoustic images

    NASA Astrophysics Data System (ADS)

    Murphy, Brian P.

    1991-06-01

    The primary goal of this research is to test the effectiveness of various image processing techniques applied to acoustic images generated in MATLAB. The simulated acoustic images have the same characteristics as those generated by a computer model of a high resolution imaging sonar. Edge detection and segmentation are the two image processing techniques discussed in this study. The two methods tested are a modified version of the Kalman filtering and median filtering.
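    A minimal denoise-then-detect pipeline of the kind evaluated in such studies, median filtering followed by Sobel edge detection, can be sketched with SciPy; the modified Kalman filter compared in the thesis is not reproduced, and the window size and threshold are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter, sobel

    def clean_and_edge(img, size=3, thresh=0.5):
        """Median-filter speckle-like noise, then mark edges where the
        Sobel gradient magnitude exceeds a fraction of its maximum.

        A generic denoise-then-detect pipeline for intuition only.
        """
        smoothed = median_filter(img.astype(np.float64), size=size)
        gx = sobel(smoothed, axis=1)   # horizontal gradient
        gy = sobel(smoothed, axis=0)   # vertical gradient
        mag = np.hypot(gx, gy)
        return mag > thresh * mag.max()
    ```

    Median filtering suits sonar imagery because it suppresses impulsive speckle without smearing the sharp returns that the edge detector needs.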

  1. Adaptive MOEMS mirrors for medical imaging

    NASA Astrophysics Data System (ADS)

    Fayek, Reda; Ibrahim, Hany

    2007-03-01

    This paper presents micro-electro-mechanical-systems (MEMS) optical elements with high angular deflection, arranged in arrays to perform dynamic laser beam focusing and scanning. Each element selectively addresses a portion of the laser beam. These devices are useful in medical and research applications including laser-scanning microscopy, confocal microscopy, and laser capture micro-dissection. Such laser-based imaging and diagnostic instruments involve complex laser beam manipulations, which often require compound lenses and mirrors that introduce misalignment, attenuation, distortion and light scatter. Instead of using expensive spherical and aspherical lenses and/or mirrors for sophisticated laser beam manipulations, we propose scalable adaptive micro-opto-electro-mechanical-systems (MOEMS) arrays to recapture optical performance and compensate for the aberrations, distortions and imperfections introduced by inexpensive optics. A high-density array of small, individually addressable MOEMS elements is similar to a Fresnel mirror. A scalable 2D array of micro-mirrors approximates spherical or arbitrary-surface mirrors of different apertures. A proof-of-concept prototype was built using the PolyMUMPs process because of its reliability, low cost and limited post-processing requirements. Low-density arrays (2x2 arrays of square elements, 250x250 μm each) were designed, fabricated, and tested. Electrostatic comb fingers actuate the edges of the square mirrors with a low actuation voltage of 20-50 V. CoventorWare was used for the design, 3D modeling and motion simulations. Initial results are encouraging. The array is adaptive, configurable and scalable, with low actuation voltage and a large tuning range. Individual element addressability allows versatile uses. Future research will increase deflection angles and maximize reflective area.

  2. Adaptive fuzzy segmentation of magnetic resonance images.

    PubMed

    Pham, D L; Prince, J L

    1999-09-01

    An algorithm is presented for the fuzzy segmentation of two-dimensional (2-D) and three-dimensional (3-D) multispectral magnetic resonance (MR) images that have been corrupted by intensity inhomogeneities, also known as shading artifacts. The algorithm is an extension of the 2-D adaptive fuzzy C-means algorithm (2-D AFCM) presented in previous work by the authors. This algorithm models the intensity inhomogeneities as a gain field that causes image intensities to smoothly and slowly vary through the image space. It iteratively adapts to the intensity inhomogeneities and is completely automated. In this paper, we fully generalize 2-D AFCM to three-dimensional (3-D) multispectral images. Because of the potential size of 3-D image data, we also describe a new faster multigrid-based algorithm for its implementation. We show, using simulated MR data, that 3-D AFCM yields lower error rates than both the standard fuzzy C-means (FCM) algorithm and two other competing methods, when segmenting corrupted images. Its efficacy is further demonstrated using real 3-D scalar and multispectral MR brain images.
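    AFCM extends the standard fuzzy C-means iteration; for orientation, the plain FCM membership/centroid updates on 1-D intensities look like this. The gain field that models intensity inhomogeneity is omitted, and the quantile initialization is an assumption for the sketch.

    ```python
    import numpy as np

    def fcm(x, c=2, m=2.0, iters=50):
        """Plain fuzzy C-means on a 1-D array of intensities.

        This is the standard FCM that AFCM extends with a smoothly
        varying gain field; that extension is not modeled here.
        """
        centers = np.quantile(x, np.linspace(0, 1, c))
        for _ in range(iters):
            d = np.abs(x[:, None] - centers[None, :]) + 1e-12
            u = d ** (-2.0 / (m - 1.0))
            u /= u.sum(axis=1, keepdims=True)    # memberships sum to 1
            um = u ** m
            centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        return centers, u
    ```

    AFCM inserts a multiplicative gain term into the distance `d`, so the centroids stay fixed while the gain field absorbs the slow shading variation.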

  3. Retinomorphic image processing.

    PubMed

    Ghosh, Kuntal; Bhaumik, Kamales; Sarkar, Sandip

    2008-01-01

    The present work is aimed at understanding and explaining some aspects of visual signal processing at the retinal level, while exploiting the same towards the development of some simple techniques in the domain of digital image processing. Classical studies of retinal physiology revealed the contrast sensitivity of the receptive fields of bipolar and ganglion cells, which lie in the outer and inner plexiform layers of the retina. To explain these observations, a difference-of-Gaussians (DOG) filter was suggested, which was subsequently modified to a Laplacian-of-Gaussian (LOG) filter for computational ease in handling two-dimensional retinal inputs. To date, almost all image processing algorithms used in various branches of science and engineering have followed LOG or one of its variants. Recent observations in retinal physiology, however, indicate that retinal ganglion cells receive input from a larger area than the classical receptive field. We propose an isotropic model for the non-classical receptive field of the retinal ganglion cells, corroborated by these recent observations, by introducing higher-order derivatives of Gaussians expressed as linear combinations of Gaussians only. In digital image processing, this provides a new mechanism of edge detection on one hand and image half-toning on the other. It has also been found that living systems may sometimes prefer to "perceive" the external scenario by adding noise to the received signals at the pre-processing level, arriving at better information on light and shade in the edge map. The proposed model also explains many brightness-contrast illusions hitherto unexplained not only by the classical isotropic model but also by some other Gestalt and Constructivist models and by non-isotropic multi-scale models. The proposed model is easy to implement in both the analog and digital domains. A scheme for implementation in the analog domain generates a new silicon retina
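    The centre-surround DOG filter discussed above is simply the difference of two Gaussian blurs; a minimal sketch follows (the sigmas are illustrative; physiological centre/surround ratios vary, and the paper's non-classical model adds higher-order Gaussian terms beyond this).

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog(img, sigma_center=1.0, sigma_surround=2.0):
        """Difference-of-Gaussians: the classical centre-surround model
        of retinal receptive fields.
        """
        img = img.astype(np.float64)
        return (gaussian_filter(img, sigma_center)
                - gaussian_filter(img, sigma_surround))
    ```

    The response is near zero over uniform regions and peaks at intensity edges, which is why DOG (and its LOG approximation) underlies so many edge detectors.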

  4. Image processing technology

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Balick, L.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The primary objective of this project was to advance image processing and visualization technologies for environmental characterization. This was effected by developing and implementing analyses of remote sensing data from satellite and airborne platforms, and demonstrating their effectiveness in visualization of environmental problems. Many sources of information were integrated as appropriate using geographic information systems.

  5. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

  6. scikit-image: image processing in Python.

    PubMed

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  7. Adaptive Optics Imaging in Laser Pointer Maculopathy.

    PubMed

    Sheyman, Alan T; Nesper, Peter L; Fawzi, Amani A; Jampol, Lee M

    2016-08-01

    The authors report multimodal imaging including adaptive optics scanning laser ophthalmoscopy (AOSLO) (Apaeros retinal image system AOSLO prototype; Boston Micromachines Corporation, Boston, MA) in a case of previously diagnosed unilateral acute idiopathic maculopathy (UAIM) that demonstrated features of laser pointer maculopathy. The authors also show the adaptive optics images of a laser pointer maculopathy case previously reported. A 15-year-old girl was referred for the evaluation of a maculopathy suspected to be UAIM. The authors reviewed the patient's history and obtained fluorescein angiography, autofluorescence, optical coherence tomography, infrared reflectance, and AOSLO. The time course of disease and clinical examination did not fit with UAIM, but the linear pattern of lesions was suspicious for self-inflicted laser pointer injury. This was confirmed on subsequent questioning of the patient. The presence of linear lesions in the macula that are best highlighted with multimodal imaging techniques should alert the physician to the possibility of laser pointer injury. AOSLO further characterizes photoreceptor damage in this condition. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:782-785.].

  8. An adaptive nonlocal means scheme for medical image denoising

    NASA Astrophysics Data System (ADS)

    Thaipanich, Tanaphol; Kuo, C.-C. Jay

    2010-03-01

    Medical images often consist of low-contrast objects corrupted by random noise arising in the image acquisition process. Thus, image denoising is one of the fundamental tasks required by medical imaging analysis. In this work, we investigate an adaptive denoising scheme based on the nonlocal (NL)-means algorithm for medical imaging applications. In contrast with the traditional NL-means algorithm, the proposed adaptive NL-means (ANL-means) denoising scheme has three unique features. First, it employs the singular value decomposition (SVD) method and the K-means clustering (K-means) technique for robust classification of blocks in noisy images. Second, the local window is adaptively adjusted to match the local property of a block. Finally, a rotated block matching algorithm is adopted for better similarity matching. Experimental results from both additive white Gaussian noise (AWGN) and Rician noise are given to demonstrate the superior performance of the proposed ANL denoising technique over various image denoising benchmarks in terms of both PSNR and perceptual quality.
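The classical NL-means baseline that the ANL-means scheme extends can be sketched with scikit-image's implementation; the patch sizes and filtering strength `h` below are fixed illustrative values, whereas the paper adapts the window and matching per block:

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma

img = img_as_float(data.camera())
rng = np.random.default_rng(0)
noisy = np.clip(img + rng.normal(0, 0.08, img.shape), 0, 1)  # simulated AWGN

sigma_est = estimate_sigma(noisy)                  # estimated noise level
den = denoise_nl_means(noisy, patch_size=5, patch_distance=6,
                       h=0.8 * sigma_est, fast_mode=True)
```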

  9. Image processing in planetology

    NASA Astrophysics Data System (ADS)

    Fulchignoni, M.; Picchiotti, A.

    The authors summarize the state of the art in the field of planetary image processing in terms of available data, required procedures and possible improvements. More than a technical description of the adopted algorithms, which are considered the normal background of any research activity dealing with the interpretation of planetary data, the authors outline the advances in planetology achieved as a consequence of the availability of better data and more sophisticated hardware. An overview of the available data base and of the organizational efforts to make the data accessible and updated constitutes a valuable reference for those interested in obtaining this information. A short description of the processing sequence, illustrated by an example which shows the quality of the obtained products and the improvement in each successive step of the processing procedure, gives an idea of the possible use of this kind of information.

  10. Optical Profilometers Using Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Hall, Gregory A.; Youngquist, Robert; Mikhael, Wasfy

    2006-01-01

    A method of adaptive signal processing has been proposed as the basis of a new generation of interferometric optical profilometers for measuring surfaces. The proposed profilometers would be portable, hand-held units. Sizes could be thus reduced because the adaptive-signal-processing method would make it possible to substitute lower-power coherent light sources (e.g., laser diodes) for white light sources and would eliminate the need for most of the optical components of current white-light profilometers. The adaptive-signal-processing method would make it possible to attain scanning ranges of the order of decimeters in the proposed profilometers.

  11. Adaptive image segmentation applied to plant reproduction by tissue culture

    NASA Astrophysics Data System (ADS)

    Vazquez Rueda, Martin G.; Hahn, Federico; Zapata, Jose L.

    1997-04-01

    This paper presents the experimental results obtained on indoor tissue culture using the adaptive image segmentation system. The performance of the adaptive technique is contrasted with different non-adaptive techniques commonly used in the computer vision field to demonstrate the improvement provided by the adaptive image segmentation system.
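The benefit of adaptive over non-adaptive segmentation can be sketched on a synthetic image with uneven illumination; the scene, block size, and offset below are hypothetical, not the paper's tissue-culture data:

```python
import numpy as np
from skimage.filters import threshold_otsu, threshold_local

# synthetic scene: small bright "tissue" spots under a strong lighting gradient
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, (100, 100))
img += np.linspace(0.0, 0.4, 100)[None, :]         # uneven illumination, left to right
for r, c in [(25, 25), (50, 50), (75, 75)]:
    img[r - 1:r + 2, c - 1:c + 2] += 0.4           # 3x3 bright spots

global_mask = img > threshold_otsu(img)            # one threshold for the whole image
local_mask = img > threshold_local(img, block_size=35, offset=-0.05)
# the global threshold flags much of the brightly lit background, while the
# local (adaptive) threshold keeps only pixels well above their neighbourhood mean
```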

  12. Image Processing Diagnostics: Emphysema

    NASA Astrophysics Data System (ADS)

    McKenzie, Alex

    2009-10-01

    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images show clearly if a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, as it appears merely as subtle, barely distinct dark spots on the lung. Our goal is to create a software plug-in to interface with existing open source medical imaging software, to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods which involve looking at percentages of radiodensities in air passages of the lung.
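The skewness-based assessment can be sketched on hypothetical Hounsfield-unit samples; the distributions below are invented for illustration, and real parameters would come from patient CT data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical lung-parenchyma intensities in Hounsfield units (HU):
# healthy tissue modelled as roughly normal around -860 HU
healthy = rng.normal(-860, 40, 10000)
# emphysema adds a tail of very low-density voxels approaching air (-1000 HU)
emphysema = np.concatenate([rng.normal(-860, 40, 8000),
                            rng.normal(-980, 15, 2000)])

skew_healthy = stats.skew(healthy)        # close to zero for a symmetric histogram
skew_emphysema = stats.skew(emphysema)    # clearly negative: low-density tail
```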

  13. Computer image processing and recognition

    NASA Technical Reports Server (NTRS)

    Hall, E. L.

    1979-01-01

    A systematic introduction to the concepts and techniques of computer image processing and recognition is presented. Consideration is given to such topics as image formation and perception; computer representation of images; image enhancement and restoration; reconstruction from projections; digital television, encoding, and data compression; scene understanding; scene matching and recognition; and processing techniques for linear systems.

  14. Implementation of Multispectral Image Classification on a Remote Adaptive Computer

    NASA Technical Reports Server (NTRS)

    Figueiredo, Marco A.; Gloster, Clay S.; Stephens, Mark; Graves, Corey A.; Nakkar, Mouna

    1999-01-01

    As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays enable the implementation of algorithms at the hardware gate level, leading to orders of magnitude performance increase over microprocessor based systems. The automatic classification of spaceborne multispectral images is an example of a computation intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm implemented on a typical general-purpose computer.
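A probabilistic neural network is essentially a Parzen-window classifier: each class score is the average of Gaussian kernels centred on that class's training samples. A minimal NumPy sketch on invented two-band "pixels" (class means, kernel width, and band values are all hypothetical, not LANDSAT data):

```python
import numpy as np

def pnn_classify(train_x, train_y, x, sigma=0.1):
    """Parzen-window PNN: return the class with the largest mean kernel response."""
    scores = {}
    for c in np.unique(train_y):
        pts = train_x[train_y == c]
        d2 = np.sum((pts - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

# toy "multispectral pixels": two bands, two classes
rng = np.random.default_rng(0)
water = rng.normal([0.2, 0.1], 0.05, (50, 2))
veg = rng.normal([0.4, 0.7], 0.05, (50, 2))
X = np.vstack([water, veg])
y = np.array([0] * 50 + [1] * 50)

pred_veg = pnn_classify(X, y, np.array([0.45, 0.68]))
pred_water = pnn_classify(X, y, np.array([0.18, 0.12]))
```

Because every training sample acts as a kernel centre, the pattern layer maps naturally onto parallel hardware such as the FPGA described above.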

  15. Image processing and recognition for biological images

    PubMed Central

    Uchida, Seiichi

    2013-01-01

    This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area to improve the visibility of an input image and acquire some valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique to classify an input image into one of the predefined classes and also has a large research area. This paper overviews its two main modules, that is, feature extraction module and classification module. Throughout the paper, it will be emphasized that bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noises, deformations, etc. This paper is expected to be one tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target. PMID:23560739

  16. Image processing and recognition for biological images.

    PubMed

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area to improve the visibility of an input image and acquire some valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique to classify an input image into one of the predefined classes and also has a large research area. This paper overviews its two main modules, that is, feature extraction module and classification module. Throughout the paper, it will be emphasized that bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noises, deformations, etc. This paper is expected to be one tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target.

  17. Adaptive Process Control in Rubber Industry.

    PubMed

    Brause, Rüdiger W; Pietruschka, Ulf

    1998-01-01

    This paper describes the problems and an adaptive solution for process control in the rubber industry. We show that the human and economical benefits of an adaptive solution for the approximation of process parameters are very attractive. The modeling of the industrial problem is done by means of artificial neural networks. For the example of the extrusion of a rubber profile in tire production, our method shows good results even using only a few training samples.
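The approximation idea can be sketched with a small one-hidden-layer network trained by gradient descent on an invented process curve; the curve, network width, and learning rate are illustrative, and the paper's extrusion data are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 20)[:, None]          # "few training samples"
t = 0.5 * x ** 2 + 0.2 * x                   # hypothetical process response

# one hidden layer of 8 tanh units, trained by full-batch gradient descent
W1 = rng.normal(0, 1, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(20000):
    h = np.tanh(x @ W1 + b1)
    y = h @ W2 + b2
    e = y - t                                 # error driving the update
    gW2 = h.T @ e / len(x); gb2 = e.mean(0)
    dh = (e @ W2.T) * (1 - h ** 2)            # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

h = np.tanh(x @ W1 + b1)
mse = float(np.mean((h @ W2 + b2 - t) ** 2))  # fit quality on the samples
```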

  18. Smart Image Enhancement Process

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

    2012-01-01

    Contrast and lightness measures are used to first classify the image as being one of non-turbid and turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.

  19. Curvature adaptive optics and low light imaging

    NASA Astrophysics Data System (ADS)

    Ftaclas, C.; Chun, M.; Kuhn, J.; Ritter, J.

    We review the basic approach of curvature adaptive optics (AO) and show how its many advantages arise. A curvature wave front sensor (WFS) measures exactly what a curvature deformable mirror (DM) generates. This leads to the computational and operational simplicity of a nearly diagonal control matrix. The DM automatically reconstructs the wave front based on WFS curvature measurements. Thus, there is no formal wave front reconstruction. This poses an interesting challenge to post-processing of AO images. Physical continuity of the DM and the reconstruction of phase from wave front curvature data assure that each actuated region of the DM corrects local phase, tip-tilt and focus. This gain in per-channel correction efficiency, combined with the need for only one detector pixel read per channel in the WFS, allows the use of photon counting detectors for wave front sensing. We note that the use of photon counting detectors implies penalty-free combination of correction channels either in the WFS or on the DM. This effectively decouples bright and faint source performance in that one no longer predicts the other. We describe the application of curvature AO to the low light moving target detection problem and explore the resulting challenges to components and control systems. Rapidly moving targets impose high-speed operation, posing new requirements unique to curvature components. On the plus side, curvature wave front sensors, unlike their Shack-Hartmann counterparts, are tunable for optimum sensitivity to seeing, and we are examining autonomous optimization of the WFS to respond to rapid changes in seeing.

  20. Local adaptive contrast enhancement for color images

    NASA Astrophysics Data System (ADS)

    Dijk, Judith; den Hollander, Richard J. M.; Schavemaker, John G. M.; Schutte, Klamer

    2007-04-01

    A camera or display usually has a smaller dynamic range than the human eye. For this reason, objects that can be detected by the naked eye may not be visible in recorded images. Lighting is an important factor here; improper local lighting impairs the visibility of details or even entire objects. When a human is observing a scene with different kinds of lighting, such as shadows, he will need to see details in both the dark and light parts of the scene. For grey value images such as IR imagery, algorithms have been developed in which the local contrast of the image is enhanced using local adaptive techniques. In this paper, we present how such algorithms can be adapted so that details in color images are enhanced while color information is retained. We propose to apply contrast enhancement to color images by applying a grey value contrast enhancement algorithm to the luminance channel of the color signal. The color coordinates of the signal remain the same. Care is taken that the saturation change is not too high. Gamut mapping is performed so that the output can be displayed on a monitor. The proposed technique can, for instance, be used by operators monitoring movements of people in order to detect suspicious behavior. To do this effectively, specific individuals should be both easy to recognize and track. This requires optimal local contrast, and is sometimes much helped by color when tracking a person with colored clothes. In such applications, enhanced local contrast in color images leads to more effective monitoring.
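The luminance-only strategy can be sketched with a global histogram equalization stand-in; the paper's method is locally adaptive, so this simplification only shows how colour coordinates are preserved while luminance is remapped:

```python
import numpy as np

def enhance_luminance(rgb):
    """Equalize the luminance of an RGB image (values in [0, 1]) and rescale
    the colour channels by the same gain so chromaticity is preserved."""
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # histogram equalization of luminance via its empirical CDF
    hist, bins = np.histogram(y, bins=256, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    y_eq = np.interp(y, bins[:-1], cdf)
    gain = y_eq / np.maximum(y, 1e-6)
    # clipping bounds the effective saturation change, as the paper requires
    return np.clip(rgb * gain[..., None], 0.0, 1.0)
```

Replacing the global equalization with a locally adaptive grey-value algorithm, as the paper does, leaves the colour-handling unchanged.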

  1. Pattern matching and adaptive image segmentation applied to plant reproduction by tissue culture

    NASA Astrophysics Data System (ADS)

    Vazquez Rueda, Martin G.; Hahn, Federico

    1999-03-01

    This paper shows the results obtained with a vision system applied to plant reproduction by tissue culture using adaptive image segmentation and pattern matching algorithms. This analysis improves the number of tissues obtained and minimizes errors; the image features of the tissue are combined with statistical analysis to determine the best match. Tests made on potato plants are used to present comparative results between original images processed with the adaptive segmentation algorithm, non-adaptive algorithms, and pattern matching.

  2. Adaptive dispersion compensation for guided wave imaging

    NASA Astrophysics Data System (ADS)

    Hall, James S.; Michaels, Jennifer E.

    2012-05-01

    Ultrasonic guided waves offer the promise of fast and reliable methods for interrogating large, plate-like structures. Distributed arrays of permanently attached, inexpensive piezoelectric transducers have thus been proposed as a cost-effective means to excite and measure ultrasonic guided waves for structural health monitoring (SHM) applications. Guided wave data recorded from a distributed array of transducers are often analyzed and interpreted through the use of guided wave imaging algorithms, such as conventional delay-and-sum imaging or the more recently applied minimum variance imaging. Both imaging algorithms perform reasonably well using signal envelopes, but can exhibit significant performance improvements when phase information is used. However, the use of phase information inherently requires knowledge of the dispersion relations, which are often not known to a sufficient degree of accuracy for high quality imaging since they are very sensitive to environmental conditions such as temperature, pressure, and loading. This work seeks to perform improved imaging with phase information by leveraging adaptive dispersion estimates obtained from in situ measurements. Experimentally obtained data from a distributed array is used to validate the proposed approach.
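Conventional delay-and-sum imaging with envelope data can be sketched in NumPy on a simulated plate; the geometry, group velocity, and pulse shape are all invented, and the paper's contribution, adaptive dispersion estimation, is not modelled here:

```python
import numpy as np

c = 5.0                                     # assumed group velocity, mm/us
fs = 20.0                                   # sampling rate, samples/us
pairs = [((0.0, 0.0), (40.0, 0.0)),
         ((0.0, 0.0), (0.0, 40.0)),
         ((40.0, 0.0), (0.0, 40.0))]        # transmit/receive transducer pairs (mm)
scatterer = np.array([15.0, 20.0])          # true scatterer location (mm)

t = np.arange(0.0, 40.0, 1.0 / fs)

def echo(t0):                               # Gaussian-envelope echo centred at t0
    return np.exp(-(t - t0) ** 2 / 0.5)

signals = []
for tx, rx in pairs:
    tx, rx = np.array(tx), np.array(rx)
    d = np.linalg.norm(scatterer - tx) + np.linalg.norm(scatterer - rx)
    signals.append(echo(d / c))

# delay-and-sum: each pixel accumulates the envelope value at its predicted delay
xs, ys = np.arange(41.0), np.arange(41.0)
img = np.zeros((ys.size, xs.size))
for sig, (tx, rx) in zip(signals, pairs):
    tx, rx = np.array(tx), np.array(rx)
    for j, y in enumerate(ys):
        for i, x in enumerate(xs):
            p = np.array([x, y])
            delay = (np.linalg.norm(p - tx) + np.linalg.norm(p - rx)) / c
            k = int(round(delay * fs))
            if k < t.size:
                img[j, i] += sig[k]

peak_j, peak_i = np.unravel_index(np.argmax(img), img.shape)
```

The pixel-wise delays depend directly on the assumed velocity, which is why an error in the dispersion model (e.g. from temperature changes) degrades the image.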

  3. Adaptive two-scale edge detection for visual pattern processing

    NASA Astrophysics Data System (ADS)

    Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.

    2009-09-01

    Adaptive methods are defined and experimentally studied for a two-scale edge detection process that mimics human visual perception of edges and is inspired by the parvocellular (P) and magnocellular (M) physiological subsystems of natural vision. This two-channel processing consists of a high spatial acuity/coarse contrast channel (P) and a coarse acuity/fine contrast (M) channel. We perform edge detection after a very strong nonlinear image enhancement that uses smart Retinex image processing. Two conditions that arise from this enhancement demand adaptiveness in edge detection. These conditions are the presence of random noise further exacerbated by the enhancement process and the equally random occurrence of dense textural visual information. We examine how to best deal with both phenomena with an automatic adaptive computation that treats both high noise and dense textures as too much information and gracefully shifts from small-scale to medium-scale edge pattern priorities. This shift is accomplished by using different edge-enhancement schemes that correspond with the P- and M-channels of the human visual system. We also examine the case of adapting to a third image condition, namely too little visual information, and automatically adjusting edge-detection sensitivities when sparse feature information is encountered. When this methodology is applied to a sequence of images of the same scene but with varying exposures and lighting conditions, this edge-detection process produces pattern constancy that is very useful for several imaging applications that rely on image classification in variable imaging conditions.
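A crude stand-in for the two-channel idea can be built from fine- and coarse-scale gradient magnitudes with a local edge-density switch; all scales and thresholds below are invented, and the actual P/M processing in the paper is more elaborate:

```python
import numpy as np
from scipy import ndimage

def grad_mag(img, sigma):
    """Gradient magnitude after Gaussian smoothing at the given scale."""
    sm = ndimage.gaussian_filter(img, sigma)
    return np.hypot(ndimage.sobel(sm, axis=1), ndimage.sobel(sm, axis=0))

rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.2, (128, 128))      # noisy background
img[32:96, 32:96] += 1.0                    # one strong square edge

fine = grad_mag(img, 1.0)                   # "P channel": high acuity
coarse = grad_mag(img, 4.0)                 # "M channel": coarse acuity
# where strong fine-scale responses are dense (noise or heavy texture),
# fall back to the coarse scale; elsewhere keep the fine scale
strong = fine > fine.mean() + fine.std()
density = ndimage.uniform_filter(strong.astype(float), size=15)
edges = np.where(density > 0.5, coarse, fine)
```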

  4. IMAGES: An interactive image processing system

    NASA Technical Reports Server (NTRS)

    Jensen, J. R.

    1981-01-01

    The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.

  5. Wavelet domain image restoration with adaptive edge-preserving regularization.

    PubMed

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.
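The flavour of wavelet-domain shrinkage with edge preservation can be sketched with a single-level Haar transform and soft thresholding; the paper uses an adaptive, spatially varying regularizer, so the fixed threshold here is only illustrative:

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform (assumes even dimensions)."""
    a = (img[0::2] + img[1::2]) / 2.0        # row averages
    d = (img[0::2] - img[1::2]) / 2.0        # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2] = a + d; out[1::2] = a - d
    return out

def soft(x, t):                              # soft thresholding (shrinkage)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(1)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0   # a sharp vertical edge
noisy = clean + rng.normal(0, 0.1, clean.shape)

ll, lh, hl, hh = haar2(noisy)
t = 0.15  # fixed threshold; the paper instead adapts regularization locally
restored = ihaar2(ll, soft(lh, t), soft(hl, t), soft(hh, t))
```

Because the edge's energy concentrates in a few large wavelet coefficients, shrinkage suppresses the noise-dominated small coefficients while the edge survives, which is the property the paper's regularizer exploits and refines.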

  6. Adaptive Optics Imaging of Solar System Objects

    NASA Technical Reports Server (NTRS)

    Roddier, Francois; Owen, Toby

    1997-01-01

    Most solar system objects have never been observed at wavelengths longer than the R band with an angular resolution better than 1 arcsec. The Hubble Space Telescope itself has only recently been equipped to observe in the infrared. However, because of its small diameter, the angular resolution is lower than that which can now be achieved from the ground with adaptive optics, and time allocated to planetary science is limited. We have been using adaptive optics (AO) on a 4-m class telescope to obtain 0.1 arcsec resolution images of solar system objects at far red and near infrared wavelengths (0.7-2.5 micron), which best discriminate their spectral signatures. Our effort has been put into areas of research for which high angular resolution is essential, such as the mapping of Titan and of large asteroids, the dynamics and composition of Neptune's stratospheric clouds, and the infrared photometry of Pluto, Charon, and close satellites previously undetected from the ground.

  7. Speckle image reconstruction of the adaptive optics solar images.

    PubMed

    Zhong, Libo; Tian, Yu; Rao, Changhui

    2014-11-17

    Speckle image reconstruction, in which the speckle transfer function (STF) is modeled as an annular distribution according to the angular dependence of adaptive optics (AO) compensation and the individual STF in each annulus is obtained from the corresponding Fried parameter calculated by the traditional spectral ratio method, is used in this paper to restore solar images corrected by an AO system. The reconstructions of solar images acquired by a 37-element AO system validate this method, and the image quality is improved evidently. Moreover, we found the photometric accuracy of the reconstruction is field dependent due to the influence of AO correction. With increasing angular separation of the object from the AO lockpoint, the relative improvement becomes progressively more effective and tends to be identical in the regions far away from the central field of view. The simulation results show this phenomenon is mainly due to the disparity between the calculated STF and the real AO STF as a function of field angle.

  8. Processing Visual Images

    SciTech Connect

    Litke, Alan

    2006-03-27

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  9. Adaptive optics retinal imaging: emerging clinical applications.

    PubMed

    Godara, Pooja; Dubis, Adam M; Roorda, Austin; Duncan, Jacque L; Carroll, Joseph

    2010-12-01

    The human retina is a uniquely accessible tissue. Tools like scanning laser ophthalmoscopy and spectral domain-optical coherence tomography provide clinicians with remarkably clear pictures of the living retina. Although the anterior optics of the eye permit such non-invasive visualization of the retina and associated pathology, the same optics induce significant aberrations that obviate cellular-resolution imaging in most cases. Adaptive optics (AO) imaging systems use active optical elements to compensate for aberrations in the optical path between the object and the camera. When applied to the human eye, AO allows direct visualization of individual rod and cone photoreceptor cells, retinal pigment epithelium cells, and white blood cells. AO imaging has changed the way vision scientists and ophthalmologists see the retina, helping to clarify our understanding of retinal structure, function, and the etiology of various retinal pathologies. Here, we review some of the advances that were made possible with AO imaging of the human retina and discuss applications and future prospects for clinical imaging.

  10. Adaptive system for eye-fundus imaging

    SciTech Connect

    Larichev, A V; Ivanov, P V; Iroshnikov, N G; Shmalgauzen, V I; Otten, L J

    2002-10-31

    A compact adaptive system capable of imaging a human-eye retina with a spatial resolution as high as 6 µm and a field of view of 15° is developed. It is shown that a modal bimorph corrector with nonlocalised response functions provides the efficient suppression of dynamic aberrations of a human eye. The residual root-mean-square error in correction of aberrations of a real eye with nonparalysed accommodation lies in the range of 0.1-0.15 µm.

  11. Wavelet-Based Signal and Image Processing for Target Recognition

    DTIC Science & Technology

    2002-01-01

    in target recognition applications. Classical spatial and frequency domain image processing algorithms were generalized to process discrete wavelet ... transform (DWT) data. Results include adaptation of classical filtering, smoothing and interpolation techniques to DWT. From 2003 the research

  12. Imaging Radio Galaxies with Adaptive Optics

    NASA Astrophysics Data System (ADS)

    de Vries, W. H.; van Breugel, W. J. M.; Quirrenbach, A.; Roberts, J.; Fidkowski, K.

    2000-12-01

    We present 42 milli-arcsecond resolution Adaptive Optics near-infrared images of 3C 452 and 3C 294, two powerful radio galaxies at z=0.081 and z=1.79 respectively, obtained with the NIRSPEC/SCAM+AO instrument on the Keck telescope. The observations provide unprecedented morphological detail of radio galaxy components like nuclear dust-lanes, off-centered or binary nuclei, and merger induced starforming structures; all of which are key features in understanding galaxy formation and the onset of powerful radio emission. Complementary optical HST imaging data are used to construct high resolution color images, which, for the first time, have matching optical and near-IR resolutions. Based on these maps, the extra-nuclear structural morphologies and compositions of both galaxies are discussed. Furthermore, detailed brightness profile analysis of 3C 452 allows a direct comparison to a large literature sample of nearby ellipticals, all of which have been observed in the optical and near-IR by HST. Both the imaging data and the profile information on 3C 452 are consistent with it being a relatively diminutive and well-evolved elliptical, in stark contrast to 3C 294, which seems to be in its initial formation throes with an active AGN off-centered from the main body of the galaxy. These results are discussed further within the framework of radio galaxy triggering and the formation of massive ellipticals. The work of WdV and WvB was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. The work at UCSD has been supported by the NSF Science and Technology Center for Adaptive Optics, under agreement No. AST-98-76783.

  13. Adaptive ladar receiver for multispectral imaging

    NASA Astrophysics Data System (ADS)

    Johnson, Kenneth; Vaidyanathan, Mohan; Xue, Song; Tennant, William E.; Kozlowski, Lester J.; Hughes, Gary W.; Smith, Duane D.

    2001-09-01

    We are developing a novel 2D focal plane array (FPA) with read-out integrated circuit (ROIC) on a single chip for 3D laser radar imaging. The ladar will provide high-resolution range and range-resolved intensity images for detection and identification of difficult targets. The initial full imaging-camera-on-a-chip system will be a 64 by 64 element, 100-micrometers pixel-size detector array that is directly bump bonded to a low-noise 64 by 64 array silicon CMOS-based ROIC. The architecture is scalable to 256 by 256 or higher arrays depending on the system application. The system will provide all the required electronic processing at pixel level and the smart FPA enables directly producing the 3D or 4D format data to be captured with a single laser pulse. The detector arrays are made of uncooled InGaAs PIN device for SWIR imaging at 1.5 micrometers wavelength and cooled HgCdTe PIN device for MWIR imaging at 3.8 micrometers wavelength. We are also investigating concepts using multi-color detector arrays for simultaneous imaging at multiple wavelengths that would provide additional spectral dimension capability for enhanced detection and identification of deep-hide targets. The system is suited for flash ladar imaging, for combat identification of ground targets from airborne platforms, flash-ladar imaging seekers, and autonomous robotic/automotive vehicle navigation and collision avoidance applications.

  14. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  15. Retinal imaging using adaptive optics technology☆

    PubMed Central

    Kozak, Igor

    2014-01-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wave front distortions. Retinal imaging using AO aims to compensate for higher order aberrations originating from the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel wall and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the availability of the first commercially available instruments, the transformation of AO technology from a research tool into a diagnostic instrument is under way. The current challenges include imaging eyes with less than perfect optical media, formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. The opportunities for AO will include more detailed diagnosis with description of some new findings in retinal diseases and glaucoma as well as expansion of AO into clinical trials, which has already started. PMID:24843304

  16. Retinal imaging using adaptive optics technology.

    PubMed

    Kozak, Igor

    2014-04-01

Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wavefront distortions. Retinal imaging with AO aims to compensate for higher-order aberrations originating in the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel walls, and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the first commercially available instruments now on the market, AO technology is transitioning from a research tool to a diagnostic instrument. The current challenges include imaging eyes with less-than-perfect optical media, the formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. Opportunities for AO include more detailed diagnosis, the description of new findings in retinal diseases and glaucoma, and expansion into clinical trials, which has already begun.

  17. Extreme Adaptive Optics Planet Imager: XAOPI

    SciTech Connect

    Macintosh, B A; Graham, J; Poyneer, L; Sommargren, G; Wilhelmsen, J; Gavel, D; Jones, S; Kalas, P; Lloyd, J; Makidon, R; Olivier, S; Palmer, D; Patience, J; Perrin, M; Severson, S; Sheinis, A; Sivaramakrishnan, A; Troy, M; Wallace, K

    2003-09-17

Ground-based adaptive optics is a potentially powerful technique for direct imaging detection of extrasolar planets. Turbulence in the Earth's atmosphere imposes some fundamental limits, but the large size of ground-based telescopes compared to spacecraft can work to mitigate this. We are carrying out a design study for a dedicated ultra-high-contrast system, the eXtreme Adaptive Optics Planet Imager (XAOPI), which could be deployed on an 8-10 m telescope in 2007. With a 4096-actuator MEMS deformable mirror it should achieve a Strehl ratio >0.9 in the near-IR. Using an innovative spatially filtered wavefront sensor, the system will be optimized to control scattered light over a large radius and suppress artifacts caused by static errors. We predict that it will achieve contrast levels of 10^7-10^8 at angular separations of 0.2-0.8 arcseconds around a large sample of stars (R<7-10), sufficient to detect Jupiter-like planets through their near-IR emission over a wide range of ages and masses. We are constructing a high-contrast AO testbed to verify key concepts of our system, and present preliminary results here, showing an RMS wavefront error of <1.3 nm with a flat mirror.

  18. Adaptive wavelet transform algorithm for image compression applications

    NASA Astrophysics Data System (ADS)

    Pogrebnyak, Oleksiy B.; Manrique Ramirez, Pablo

    2003-11-01

A new algorithm for a locally adaptive wavelet transform is presented. The algorithm implements an integer-to-integer lifting scheme and adapts the wavelet function at the prediction stage to the local image activity. It is based on the generalized framework for the lifting scheme, which makes it easy to obtain different wavelet coefficients in the case of (N~, N) lifting. Hard switching between the (2,4) and (4,4) lifting filter outputs is performed according to an estimate of the local data activity: when the activity is high, i.e., in the vicinity of edges, the (4,4) lifting is used; otherwise, in flat areas, the (2,4) decomposition coefficients are calculated. The computations are simple enough to permit implementation of the algorithm on fixed-point DSP processors. The proposed adaptive transform provides perfect reconstruction of the processed data and good energy compaction. The algorithm was tested on various images and can be used for lossless image/signal compression.
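The switching rule described above can be sketched as a toy 1-D integer lifting transform. The tap values, rounding offsets, threshold, and boundary clamping below are illustrative assumptions, not the paper's exact filters; the key property preserved is that the activity estimate uses only the even samples, so the inverse transform can repeat the same switching decisions and reconstruction stays perfect:

```python
import numpy as np

T = 8  # activity threshold (assumed; the paper estimates activity from local data)

def _predict(s, i, four_tap):
    """2-tap or 4-tap prediction of odd sample i from even samples s."""
    n = len(s)
    a, b = int(s[i]), int(s[min(i + 1, n - 1)])
    if four_tap:
        am, bp = int(s[max(i - 1, 0)]), int(s[min(i + 2, n - 1)])
        return (-am + 9 * a + 9 * b - bp + 8) >> 4   # floor((.)/16)
    return (a + b) >> 1

def _active(s, i):
    """Local activity measured on even samples only, so the decision is invertible."""
    lo, hi = max(i - 1, 0), min(i + 2, len(s) - 1)
    return int(np.abs(np.diff(s[lo:hi + 1])).max()) > T

def forward(x):
    x = np.asarray(x, dtype=np.int64)
    s, d = x[0::2].copy(), x[1::2].copy()
    for i in range(len(d)):                      # adaptive predict stage
        d[i] -= _predict(s, i, _active(s, i))
    for i in range(len(s)):                      # fixed 2-tap update stage
        s[i] += (int(d[max(i - 1, 0)]) + int(d[i]) + 2) >> 2
    return s, d

def inverse(s, d):
    s, d = s.copy(), d.copy()
    for i in range(len(s)):                      # undo update -> original evens
        s[i] -= (int(d[max(i - 1, 0)]) + int(d[i]) + 2) >> 2
    for i in range(len(d)):                      # same switching decisions as forward
        d[i] += _predict(s, i, _active(s, i))
    x = np.empty(len(s) + len(d), dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x
```

A round trip (`inverse(*forward(x))`) reproduces the input exactly, which is the lossless-compression property the abstract claims.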

  19. A model for radar images and its application to adaptive digital filtering of multiplicative noise

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Stiles, J. A.; Shanmugan, K. S.; Holtzman, J. C.

    1982-01-01

    Standard image processing techniques which are used to enhance noncoherent optically produced images are not applicable to radar images due to the coherent nature of the radar imaging process. A model for the radar imaging process is derived in this paper and a method for smoothing noisy radar images is also presented. The imaging model shows that the radar image is corrupted by multiplicative noise. The model leads to the functional form of an optimum (minimum MSE) filter for smoothing radar images. By using locally estimated parameter values the filter is made adaptive so that it provides minimum MSE estimates inside homogeneous areas of an image while preserving the edge structure. It is shown that the filter can be easily implemented in the spatial domain and is computationally efficient. The performance of the adaptive filter is compared (qualitatively and quantitatively) with several standard filters using real and simulated radar images.
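A minimal local-statistics MMSE filter in the same spirit can be sketched as follows. This is the Lee-style variant often used for speckle; the Frost filter itself uses an exponentially shaped impulse response, and the window size and noise coefficient of variation here are illustrative assumptions:

```python
import numpy as np

def lee_mmse(img, win=7, noise_cv=0.25):
    """Adaptive local-statistics MMSE filter for multiplicative noise.

    In homogeneous areas the gain k -> 0 and the output is the local mean;
    near edges the local variance rises, k -> 1, and the pixel is kept.
    """
    img = np.asarray(img, dtype=np.float64)
    r = win // 2
    p = np.pad(img, r, mode='reflect')
    out = np.empty_like(img)
    cn2 = noise_cv ** 2                      # noise coefficient of variation, squared
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = p[i:i + win, j:j + win]
            m, v = w.mean(), w.var()
            cy2 = v / (m * m + 1e-12)        # observed coefficient of variation^2
            k = max(0.0, cy2 - cn2) / (cy2 * (1.0 + cn2) + 1e-12)
            out[i, j] = m + min(k, 1.0) * (img[i, j] - m)
    return out
```

On a homogeneous region the filter collapses toward the local mean, sharply reducing the speckle variance, which is the minimum-MSE behavior the abstract describes.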

  20. The APL image processing laboratory

    NASA Technical Reports Server (NTRS)

    Jenkins, J. O.; Randolph, J. P.; Tilley, D. G.; Waters, C. A.

    1984-01-01

    The present and proposed capabilities of the Central Image Processing Laboratory, which provides a powerful resource for the advancement of programs in missile technology, space science, oceanography, and biomedical image analysis, are discussed. The use of image digitizing, digital image processing, and digital image output permits a variety of functional capabilities, including: enhancement, pseudocolor, convolution, computer output microfilm, presentation graphics, animations, transforms, geometric corrections, and feature extractions. The hardware and software of the Image Processing Laboratory, consisting of digitizing and processing equipment, software packages, and display equipment, is described. Attention is given to applications for imaging systems, map geometric correction, raster movie display of Seasat ocean data, Seasat and Skylab scenes of Nantucket Island, Space Shuttle imaging radar, differential radiography, and a computerized tomographic scan of the brain.

  1. Surface estimation methods with phased-arrays for adaptive ultrasonic imaging in complex components

    NASA Astrophysics Data System (ADS)

    Robert, S.; Calmon, P.; Calvo, M.; Le Jeune, L.; Iakovleva, E.

    2015-03-01

Immersion ultrasonic testing of structures with complex geometries may be significantly improved by using phased arrays and specific adaptive algorithms that make it possible to image flaws under a complex and unknown interface. In this context, this paper presents a comparative study of the different Surface Estimation Methods (SEM) available in the CIVA software and used for adaptive imaging. These methods are based either on time-of-flight measurements or on image processing. We also introduce a generalized adaptive method in which flaws may be fully imaged with half-skip modes: both the surface and the back wall of a complex structure are estimated before the flaws are imaged.
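The simplest time-of-flight surface estimate can be sketched as below. This is a generic illustration, not the CIVA implementation; the threshold rule, sampling rate, and water sound speed are assumed values:

```python
import numpy as np

def surface_profile(ascans, fs, c=1480.0, thresh=0.5):
    """Depth of the front surface under each array element, estimated from
    the time of flight of the first echo exceeding a relative threshold."""
    depths = []
    for a in ascans:                         # one A-scan per array element
        env = np.abs(a)
        idx = int(np.argmax(env > thresh * env.max()))  # first threshold crossing
        t = idx / fs                         # two-way travel time (s)
        depths.append(c * t / 2.0)           # one-way depth in water (m)
    return np.asarray(depths)
```

The resulting depth profile can then be interpolated into a surface model used to refocus the array through the interface.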

  2. Adaptive Intuitionistic Fuzzy Enhancement of Brain Tumor MR Images

    NASA Astrophysics Data System (ADS)

    Deng, He; Deng, Wankai; Sun, Xianping; Ye, Chaohui; Zhou, Xin

    2016-10-01

Image enhancement techniques can improve the contrast and visual quality of magnetic resonance (MR) images. However, conventional methods cannot compensate for the deficiencies of the individual MR imaging modes used for brain tumors. In this paper, we propose an adaptive intuitionistic fuzzy sets-based scheme, called AIFE, which takes information provided by different MR acquisitions and enhances the normal and abnormal structural regions of the brain while displaying the enhanced results as a single image. The AIFE scheme first separates an input image into several sub-images, then divides each sub-image into object and background areas. After that, novel fuzzification, hyperbolization, and defuzzification operations are applied to each object/background area, and an enhanced result is finally obtained via nonlinear fusion operators. The fuzzy operations can be executed in parallel. Experiments on real data demonstrate that the AIFE scheme not only effectively fuses information from images acquired with different MR sequences into a single image, but also outperforms conventional baseline algorithms in enhancement quality. This indicates that the proposed AIFE scheme has potential for improving the detection and diagnosis of brain tumors.
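The fuzzification / defuzzification pipeline can be illustrated with the classical fuzzy contrast-intensification (INT) operator. This is a textbook sketch only, not the paper's intuitionistic AIFE operators (which additionally model a hesitation degree and fuse several acquisitions):

```python
import numpy as np

def fuzzy_intensify(img, passes=1):
    """Fuzzify intensities to [0,1], intensify contrast, defuzzify back."""
    g = np.asarray(img, dtype=np.float64)
    lo, span = g.min(), np.ptp(g) + 1e-12
    mu = (g - lo) / span                                   # fuzzification
    for _ in range(passes):                                # INT operator
        mu = np.where(mu <= 0.5, 2.0 * mu**2, 1.0 - 2.0 * (1.0 - mu)**2)
    return lo + mu * span                                  # defuzzification
```

The operator pushes memberships below 0.5 down and those above 0.5 up, stretching contrast around the crossover while leaving the extremes fixed.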

  3. Adaptive Intuitionistic Fuzzy Enhancement of Brain Tumor MR Images

    PubMed Central

    Deng, He; Deng, Wankai; Sun, Xianping; Ye, Chaohui; Zhou, Xin

    2016-01-01

Image enhancement techniques can improve the contrast and visual quality of magnetic resonance (MR) images. However, conventional methods cannot compensate for the deficiencies of the individual MR imaging modes used for brain tumors. In this paper, we propose an adaptive intuitionistic fuzzy sets-based scheme, called AIFE, which takes information provided by different MR acquisitions and enhances the normal and abnormal structural regions of the brain while displaying the enhanced results as a single image. The AIFE scheme first separates an input image into several sub-images, then divides each sub-image into object and background areas. After that, novel fuzzification, hyperbolization, and defuzzification operations are applied to each object/background area, and an enhanced result is finally obtained via nonlinear fusion operators. The fuzzy operations can be executed in parallel. Experiments on real data demonstrate that the AIFE scheme not only effectively fuses information from images acquired with different MR sequences into a single image, but also outperforms conventional baseline algorithms in enhancement quality. This indicates that the proposed AIFE scheme has potential for improving the detection and diagnosis of brain tumors. PMID:27786240

  4. Adaptive Intuitionistic Fuzzy Enhancement of Brain Tumor MR Images.

    PubMed

    Deng, He; Deng, Wankai; Sun, Xianping; Ye, Chaohui; Zhou, Xin

    2016-10-27

Image enhancement techniques can improve the contrast and visual quality of magnetic resonance (MR) images. However, conventional methods cannot compensate for the deficiencies of the individual MR imaging modes used for brain tumors. In this paper, we propose an adaptive intuitionistic fuzzy sets-based scheme, called AIFE, which takes information provided by different MR acquisitions and enhances the normal and abnormal structural regions of the brain while displaying the enhanced results as a single image. The AIFE scheme first separates an input image into several sub-images, then divides each sub-image into object and background areas. After that, novel fuzzification, hyperbolization, and defuzzification operations are applied to each object/background area, and an enhanced result is finally obtained via nonlinear fusion operators. The fuzzy operations can be executed in parallel. Experiments on real data demonstrate that the AIFE scheme not only effectively fuses information from images acquired with different MR sequences into a single image, but also outperforms conventional baseline algorithms in enhancement quality. This indicates that the proposed AIFE scheme has potential for improving the detection and diagnosis of brain tumors.

  5. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    PubMed

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

Graphics processing units are increasingly being used for scientific computing because of their powerful parallel processing abilities and their moderate price compared to supercomputers and computing grids. In this paper we use a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.

  6. An adaptive optics imaging system designed for clinical use.

    PubMed

    Zhang, Jie; Yang, Qiang; Saito, Kenichi; Nozato, Koji; Williams, David R; Rossi, Ethan A

    2015-06-01

Here we demonstrate a new imaging system that addresses several major problems limiting the clinical utility of conventional adaptive optics scanning light ophthalmoscopy (AOSLO), including its small field of view (FOV), reliance on patient fixation for targeting imaging, and substantial post-processing time. We previously showed an efficient image-based eye tracking method for real-time optical stabilization and image registration in AOSLO. However, in patients with poor fixation, eye motion causes the FOV to drift substantially, causing this approach to fail. We solve that problem here by tracking eye motion at multiple spatial scales simultaneously by optically and electronically integrating a wide FOV SLO (WFSLO) with an AOSLO. This multi-scale approach, implemented with fast tip/tilt mirrors, has a large stabilization range of ± 5.6°. Our method consists of three stages implemented in parallel: 1) coarse optical stabilization driven by a WFSLO image, 2) fine optical stabilization driven by an AOSLO image, and 3) sub-pixel digital registration of the AOSLO image. We evaluated system performance in normal eyes and diseased eyes with poor fixation. Residual image motion with incremental compensation after each stage was: 1) ~2-3 arc minutes (arcmin), 2) ~0.5-0.8 arcmin, and 3) ~0.05-0.07 arcmin, for normal eyes. Performance in eyes with poor fixation was: 1) ~3-5 arcmin, 2) ~0.7-1.1 arcmin, and 3) ~0.07-0.14 arcmin. We demonstrate that this system is capable of reducing image motion by a factor of ~400, on average. This new optical design provides additional benefits for clinical imaging, including a steering subsystem for AOSLO that can be guided by the WFSLO to target specific regions of interest such as retinal pathology and real-time averaging of registered images to eliminate image post-processing.
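The digital registration stage is typically built on cross-correlation of image content. An integer-pixel phase-correlation sketch (a generic method, not the authors' strip-based sub-pixel registration) looks like this:

```python
import numpy as np

def phase_corr_shift(ref, moving):
    """Integer-pixel translation (dy, dx) by which `moving` is shifted
    relative to `ref`, estimated by phase correlation."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moving)
    r = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real   # normalized cross-power
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    # map wrapped peak positions to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

Sub-pixel precision, as reported in the abstract, is usually obtained by interpolating around the correlation peak; this sketch stops at whole pixels.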

  7. An adaptive optics imaging system designed for clinical use

    PubMed Central

    Zhang, Jie; Yang, Qiang; Saito, Kenichi; Nozato, Koji; Williams, David R.; Rossi, Ethan A.

    2015-01-01

Here we demonstrate a new imaging system that addresses several major problems limiting the clinical utility of conventional adaptive optics scanning light ophthalmoscopy (AOSLO), including its small field of view (FOV), reliance on patient fixation for targeting imaging, and substantial post-processing time. We previously showed an efficient image-based eye tracking method for real-time optical stabilization and image registration in AOSLO. However, in patients with poor fixation, eye motion causes the FOV to drift substantially, causing this approach to fail. We solve that problem here by tracking eye motion at multiple spatial scales simultaneously by optically and electronically integrating a wide FOV SLO (WFSLO) with an AOSLO. This multi-scale approach, implemented with fast tip/tilt mirrors, has a large stabilization range of ± 5.6°. Our method consists of three stages implemented in parallel: 1) coarse optical stabilization driven by a WFSLO image, 2) fine optical stabilization driven by an AOSLO image, and 3) sub-pixel digital registration of the AOSLO image. We evaluated system performance in normal eyes and diseased eyes with poor fixation. Residual image motion with incremental compensation after each stage was: 1) ~2–3 arc minutes (arcmin), 2) ~0.5–0.8 arcmin, and 3) ~0.05–0.07 arcmin, for normal eyes. Performance in eyes with poor fixation was: 1) ~3–5 arcmin, 2) ~0.7–1.1 arcmin, and 3) ~0.07–0.14 arcmin. We demonstrate that this system is capable of reducing image motion by a factor of ~400, on average. This new optical design provides additional benefits for clinical imaging, including a steering subsystem for AOSLO that can be guided by the WFSLO to target specific regions of interest such as retinal pathology and real-time averaging of registered images to eliminate image post-processing. PMID:26114033

  8. Adaptive registration of diffusion tensor images on lie groups

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Chen, LeiTing; Cai, HongBin; Qiu, Hang; Fei, Nanxi

    2016-08-01

Diffusion tensor imaging (DTI) provides exquisite information on tissue microstructure for medical image processing. In this paper, we present a locally adaptive, topology-preserving method for DTI registration on Lie groups. The method aims to obtain more plausible diffeomorphisms for spatial transformations via an accurate approximation of the local tangent space on the Lie group manifold. In order to capture the exact geometric structure of the Lie group, the local linear approximation is efficiently optimized by adaptively selecting the local neighborhood sizes on the given set of data points. Furthermore, numerical comparative experiments are conducted on both synthetic data and real DTI data to demonstrate that the proposed method yields a higher degree of topology preservation on a dense deformation tensor field while improving the registration accuracy.
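The tangent-space approximation on which such Lie-group methods rest can be illustrated by averaging symmetric positive-definite (SPD) diffusion tensors in the matrix-logarithm domain (the log-Euclidean mean). This is a generic illustration of the tangent-space idea, not the paper's adaptive neighborhood-size selection:

```python
import numpy as np

def _eig_fun(t, f):
    """Apply a scalar function f to the eigenvalues of a symmetric matrix t."""
    w, v = np.linalg.eigh(t)
    return v @ np.diag(f(w)) @ v.T

def log_euclidean_mean(tensors):
    """Average SPD tensors in the tangent (matrix-log) space of the
    manifold, then map the mean back with the matrix exponential."""
    logs = [_eig_fun(t, np.log) for t in tensors]
    return _eig_fun(sum(logs) / len(logs), np.exp)
```

Averaging in the log domain keeps the result SPD, whereas a naive Euclidean mean of tensors can inflate determinants and distort the local geometry.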

  9. Cooperative processes in image segmentation

    NASA Technical Reports Server (NTRS)

    Davis, L. S.

    1982-01-01

    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.

  10. Adaptive image steganography using contourlet transform

    NASA Astrophysics Data System (ADS)

    Fakhredanesh, Mohammad; Rahmati, Mohammad; Safabakhsh, Reza

    2013-10-01

This work presents adaptive image steganography methods that locate suitable regions for embedding by means of the contourlet transform, while the embedded message bits are carried in discrete cosine transform coefficients. The first proposed method utilizes contourlet transform coefficients to select contour regions of the image. In the embedding procedure, some of the contourlet transform coefficients may change, which may cause errors at the message extraction phase; we propose a novel iterative procedure to resolve such problems. In addition, we propose an improved version of the first method that uses an advanced embedding operation to boost security. Experimental results show that the proposed base method is an imperceptible image steganography method with a zero retrieval error rate. Comparisons with other steganography methods that utilize the contourlet transform show that our proposed method retrieves all messages perfectly, whereas the others fail. Moreover, the proposed method outperforms the ContSteg method in terms of PSNR and against the higher-order statistics steganalysis method. Experimental evaluations of our methods against well-known DCT-based steganography algorithms demonstrate that our improved method has superior performance in terms of PSNR and SSIM, and is more secure against the steganalysis attack.
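The carrier side (embedding message bits in DCT coefficients) can be sketched with a parity rule on one rounded coefficient per 8×8 block. The coefficient position and parity scheme here are illustrative, not the authors' "advanced embedding operation", and the contourlet-based region selection is omitted:

```python
import numpy as np

def dct_mat(n=8):
    """Orthonormal DCT-II matrix (rows = frequencies, columns = samples)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

C = dct_mat()

def embed(block, bit, pos=(2, 1)):
    """Hide one bit in the parity of a rounded mid-frequency DCT coefficient."""
    d = np.rint(C @ block @ C.T)              # 2-D block DCT, rounded to integers
    c = int(d[pos])
    if (c & 1) != bit:                        # force parity to the message bit
        c += 1 if c >= 0 else -1
    d[pos] = c
    return C.T @ d @ C                        # inverse 2-D DCT

def extract(block, pos=(2, 1)):
    d = np.rint(C @ block @ C.T)
    return int(d[pos]) & 1
```

Because the forward and inverse transforms are exact orthonormal pairs, re-rounding after the inverse recovers the same integers, so extraction is error-free on the unmodified carrier.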

  11. Neural Adaptation Effects in Conceptual Processing.

    PubMed

    Marino, Barbara F M; Borghi, Anna M; Gemmi, Luca; Cacciari, Cristina; Riggio, Lucia

    2015-07-31

    We investigated the conceptual processing of nouns referring to objects characterized by a highly typical color and orientation. We used a go/no-go task in which we asked participants to categorize each noun as referring or not to natural entities (e.g., animals) after a selective adaptation of color-edge neurons in the posterior LV4 region of the visual cortex was induced by means of a McCollough effect procedure. This manipulation affected categorization: the green-vertical adaptation led to slower responses than the green-horizontal adaptation, regardless of the specific color and orientation of the to-be-categorized noun. This result suggests that the conceptual processing of natural entities may entail the activation of modality-specific neural channels with weights proportional to the reliability of the signals produced by these channels during actual perception. This finding is discussed with reference to the debate about the grounded cognition view.

  12. Neural Adaptation Effects in Conceptual Processing

    PubMed Central

    Marino, Barbara F. M.; Borghi, Anna M.; Gemmi, Luca; Cacciari, Cristina; Riggio, Lucia

    2015-01-01

    We investigated the conceptual processing of nouns referring to objects characterized by a highly typical color and orientation. We used a go/no-go task in which we asked participants to categorize each noun as referring or not to natural entities (e.g., animals) after a selective adaptation of color-edge neurons in the posterior LV4 region of the visual cortex was induced by means of a McCollough effect procedure. This manipulation affected categorization: the green-vertical adaptation led to slower responses than the green-horizontal adaptation, regardless of the specific color and orientation of the to-be-categorized noun. This result suggests that the conceptual processing of natural entities may entail the activation of modality-specific neural channels with weights proportional to the reliability of the signals produced by these channels during actual perception. This finding is discussed with reference to the debate about the grounded cognition view. PMID:26264031

  13. Adaptive textural segmentation of medical images

    NASA Astrophysics Data System (ADS)

    Kuklinski, Walter S.; Frost, Gordon S.; MacLaughlin, Thomas

    1992-06-01

A number of important problems in medical imaging can be described as segmentation problems. Previous fractal-based image segmentation algorithms have used either the local fractal dimension alone, or the local fractal dimension together with the corresponding image intensity, as features for subsequent pattern recognition algorithms. An image segmentation algorithm has also been reported that utilized the local fractal dimension, the image intensity, and the correlation coefficient of the local fractal dimension regression computation to produce a three-dimensional feature space, which was partitioned to identify specific pixels of dental radiographs as bone, teeth, or a boundary between bone and teeth. In this work we formulate the segmentation process as a configurational optimization problem and discuss the application of simulated annealing optimization methods to the solution of this specific problem. The configurational optimization method allows information about both the degree of correspondence between a candidate segment and an assumed textural model, and the morphology of the candidate segment, to be used in the segmentation process. Applying this configurational optimization technique with a fractal textural model, however, requires estimating the fractal dimension of an irregularly shaped candidate segment. The potential utility of a discrete Gerchberg-Papoulis bandlimited extrapolation algorithm for this estimation is also discussed.
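Fractal-dimension features of this kind are commonly estimated by box counting followed by a log-log regression, whose slope is the dimension estimate and whose correlation coefficient is the kind of goodness-of-fit feature the abstract mentions. A sketch for a square binary image (box sizes and the regression range are illustrative choices):

```python
import numpy as np

def box_count_dim(mask):
    """Box-counting fractal dimension estimate of a square binary image."""
    n = mask.shape[0]
    sizes = [s for s in (1, 2, 4, 8, 16) if s <= n // 4]
    counts = []
    for s in sizes:
        m = mask[:n - n % s, :n - n % s]                  # crop to a multiple of s
        blocks = m.reshape(m.shape[0] // s, s, m.shape[1] // s, s)
        counts.append(int(blocks.any(axis=(1, 3)).sum())) # occupied s-by-s boxes
    # regression of log N(s) on log(1/s); the slope is the dimension estimate
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A filled square yields a dimension near 2 and a one-pixel-wide line a dimension near 1, the expected limiting cases.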

  14. Short-Term Neural Adaptation to Simultaneous Bifocal Images

    PubMed Central

    Radhakrishnan, Aiswaryah; Dorronsoro, Carlos; Sawides, Lucie; Marcos, Susana

    2014-01-01

    Simultaneous vision is an increasingly used solution for the correction of presbyopia (the age-related loss of ability to focus near images). Simultaneous Vision corrections, normally delivered in the form of contact or intraocular lenses, project on the patient's retina a focused image for near vision superimposed with a degraded image for far vision, or a focused image for far vision superimposed with the defocused image of the near scene. It is expected that patients with these corrections are able to adapt to the complex Simultaneous Vision retinal images, although the mechanisms or the extent to which this happens is not known. We studied the neural adaptation to simultaneous vision by studying changes in the Natural Perceived Focus and in the Perceptual Score of image quality in subjects after exposure to Simultaneous Vision. We show that Natural Perceived Focus shifts after a brief period of adaptation to a Simultaneous Vision blur, similar to adaptation to Pure Defocus. This shift strongly correlates with the magnitude and proportion of defocus in the adapting image. The magnitude of defocus affects perceived quality of Simultaneous Vision images, with 0.5 D defocus scored lowest and beyond 1.5 D scored “sharp”. Adaptation to Simultaneous Vision shifts the Perceptual Score of these images towards higher rankings. Larger improvements occurred when testing simultaneous images with the same magnitude of defocus as the adapting images, indicating that wearing a particular bifocal correction improves the perception of images provided by that correction. PMID:24664087

  15. Adaptive Optics Imaging and Spectroscopy of Neptune

    NASA Technical Reports Server (NTRS)

    Johnson, Lindley (Technical Monitor); Sromovsky, Lawrence A.

    2005-01-01

OBJECTIVES: We proposed to use high-spectral-resolution imaging and spectroscopy of Neptune in the visible and near-IR spectral ranges to advance our understanding of Neptune's cloud structure. We intended to use the adaptive optics (AO) system at Mt. Wilson at visible wavelengths to try to obtain the first ground-based observations of dark spots on Neptune; to use AO observations at the IRTF to obtain near-IR R=2000 spatially resolved spectra; and to use near-IR AO observations at the Keck observatory to obtain the highest-spatial-resolution studies of cloud feature dynamics and atmospheric motions. Vertical structure of cloud features was to be inferred from the wavelength-dependent absorption of methane and hydrogen.

  16. Command Line Image Processing System (CLIPS)

    NASA Astrophysics Data System (ADS)

    Fleagle, S. R.; Meyers, G. L.; Kulinski, R. G.

    1985-06-01

An interactive image processing language (CLIPS) has been developed for use in an image processing environment. CLIPS uses a simple syntax with extensive on-line help to allow even the most naive user to perform complex image processing tasks. In addition, CLIPS functions as an interpretive language complete with data structures and program control statements. CLIPS statements fall into one of three categories: command, control, and utility statements. Command statements are expressions comprised of intrinsic functions and/or arithmetic operators which act directly on image or user-defined data. Some examples of CLIPS intrinsic functions are ROTATE, FILTER, and EXPONENT. Control statements allow a structured programming style through the use of statements such as DO-WHILE and IF-THEN-ELSE. Utility statements such as DEFINE, READ, and WRITE support I/O and user-defined data structures. Since CLIPS uses a table-driven parser, it is easily adapted to any environment. New commands may be added to CLIPS by writing the procedure in a high-level language such as Pascal or FORTRAN and inserting the syntax for that command into the table. However, CLIPS was designed by incorporating most imaging operations into the language as intrinsic functions. CLIPS allows the user to generate new procedures easily with these powerful functions, either interactively or off line using a text editor. The fact that CLIPS can be used to generate complex procedures quickly or perform basic image processing functions interactively makes it a valuable tool in any image processing environment.
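The table-driven extensibility described above, where adding a command means adding one table entry, can be illustrated with a minimal dispatch table. The command names and argument conventions here are hypothetical stand-ins for CLIPS's intrinsic functions:

```python
import numpy as np

# hypothetical command table: adding a new command = adding one entry
COMMANDS = {
    "ROTATE":   lambda img, k="1":   np.rot90(img, int(k)),
    "EXPONENT": lambda img, p="2.0": img ** float(p),
}

def run(line, img):
    """Parse one command statement and apply it to the image."""
    name, *args = line.split()
    return COMMANDS[name](img, *args)
```

In the real system the table would also carry per-command syntax so the parser can validate arguments before dispatching.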

  17. Adaptive schemes for incomplete quantum process tomography

    SciTech Connect

    Teo, Yong Siah; Englert, Berthold-Georg; Rehacek, Jaroslav; Hradil, Zdenek

    2011-12-15

    We propose an iterative algorithm for incomplete quantum process tomography with the help of quantum state estimation. The algorithm, which is based on the combined principles of maximum likelihood and maximum entropy, yields a unique estimator for an unknown quantum process when one has less than a complete set of linearly independent measurement data to specify the quantum process uniquely. We apply this iterative algorithm adaptively in various situations and so optimize the amount of resources required to estimate a quantum process with incomplete data.

  18. Image denoising using a directional adaptive diffusion filter

    NASA Astrophysics Data System (ADS)

    Zhao, Cuifang; Shi, Caicheng; He, Peikun

    2006-11-01

Partial differential equation (PDE) methods are well known for their good processing results: they not only smooth noise but also preserve edges. Their shortcomings, however, have come to be noticed. In some sense a PDE filter is called a "cartoon model", because it produces an approximation of the input image; it applies the same diffusion model and parameters to noise and signal, since it cannot differentiate between them, and the image is therefore modified toward piecewise-constant functions. A new method, called a directional adaptive diffusion filter, is proposed in this paper; it combines the PDE model with the wavelet transform. The undecimated discrete wavelet transform (UDWT) is carried out to obtain different frequency bands, which have obvious directional selectivity and more redundant detail. Experimental results show that the proposed method better preserves textures, small details, and global information.
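The PDE ("cartoon model") component that such filters build on is classical Perona-Malik anisotropic diffusion, which can be sketched as follows. The conductance function and parameters are the usual textbook choices, not the paper's directional, wavelet-domain variant, and the wrap-around boundary handling via `np.roll` is a simplification:

```python
import numpy as np

def perona_malik(img, niter=20, kappa=15.0, step=0.2):
    """Edge-preserving diffusion: small gradients diffuse, large ones are kept."""
    u = np.asarray(img, dtype=np.float64).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)      # conductance: ~1 flat, ~0 at edges
    for _ in range(niter):
        dN = np.roll(u, -1, 0) - u               # differences to the 4 neighbors
        dS = np.roll(u, 1, 0) - u
        dE = np.roll(u, -1, 1) - u
        dW = np.roll(u, 1, 1) - u
        u += step * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u
```

With `step * 4 < 1` the explicit scheme stays stable; in flat regions the update reduces to heat diffusion, which is exactly the noise-flattening behavior the abstract criticizes as the "cartoon" effect.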

  19. Industrial Applications of Image Processing

    NASA Astrophysics Data System (ADS)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review the existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial setting. Then an overview of image processing techniques, feature extraction, object recognition, and industrial robotic guidance is presented. Moreover, examples of implementations of such techniques in industry are given; these include automated visual inspection, process control, part identification, and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  20. [Imaging center - optimization of the imaging process].

    PubMed

    Busch, H-P

    2013-04-01

Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success, but also of the costs, of treatment. In routine work an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of capacity, without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient; they are useless and threaten the financial situation, and even the existence, of the hospital. In recent years the focus of process optimization has been exclusively on the quality and efficiency of single examinations. In the future, critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided only if, in addition to the optimization of single exams (efficiency), there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new organizational structures (Imaging Center), and a new way of thinking on the part of the medical staff. Motivation has to be shifted from the gratification of performed exams to the gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams.

  1. Adaptive image denoising by targeted databases.

    PubMed

    Luo, Enming; Chan, Stanley H; Nguyen, Truong Q

    2015-07-01

    We propose a data-dependent denoising procedure to restore noisy images. Different from existing denoising algorithms which search for patches from either the noisy image or a generic database, the new algorithm finds patches from a database that contains relevant patches. We formulate the denoising problem as an optimal filter design problem and make two contributions. First, we determine the basis function of the denoising filter by solving a group sparsity minimization problem. The optimization formulation generalizes existing denoising algorithms and offers systematic analysis of the performance. Improvement methods are proposed to enhance the patch search process. Second, we determine the spectral coefficients of the denoising filter by considering a localized Bayesian prior. The localized prior leverages the similarity of the targeted database, alleviates the intensive Bayesian computation, and links the new method to the classical linear minimum mean squared error estimation. We demonstrate applications of the proposed method in a variety of scenarios, including text images, multiview images, and face images. Experimental results show the superiority of the new algorithm over existing methods.
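    The targeted-database idea can be illustrated with a much-simplified sketch. This is not the authors' optimal-filter formulation: it merely estimates a noisy patch as a similarity-weighted average of its nearest neighbours in an external database of relevant patches. The database contents, patch size and bandwidth `h` below are illustrative assumptions.

```python
import numpy as np

def denoise_patch(noisy_patch, database, k=8, h=0.1):
    """Estimate a clean patch as a similarity-weighted average of the
    k nearest patches drawn from a targeted external database."""
    # Squared Euclidean distance from the noisy patch to every database patch.
    d2 = np.sum((database - noisy_patch) ** 2, axis=1)
    nearest = np.argsort(d2)[:k]
    # Gaussian weights: closer database patches contribute more.
    w = np.exp(-d2[nearest] / (h ** 2))
    w /= w.sum()
    return w @ database[nearest]

# Toy usage: a database of constant patches, one noisy observation.
rng = np.random.default_rng(0)
database = np.tile(np.linspace(0.0, 1.0, 50)[:, None], (1, 16))  # 50 patches, 16 px each
noisy = 0.5 + 0.2 * rng.standard_normal(16)
estimate = denoise_patch(noisy, database)
```

    Because the toy database contains only constant patches, the estimate collapses to a constant near the noisy patch's mean; with a real targeted database the same weighting recovers structure instead.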

  2. Modular on-board adaptive imaging

    NASA Technical Reports Server (NTRS)

    Eskenazi, R.; Williams, D. S.

    1978-01-01

    Feature extraction involves the transformation of a raw video image into a more compact representation of the scene in which relevant information about objects of interest is retained. The task of the low-level processor is to extract object outlines and pass the data to the high-level process in a format that facilitates pattern recognition tasks. Due to the immense computational load of processing a 256x256 image, even a fast minicomputer requires a few seconds to complete this low-level processing. It is, therefore, necessary to consider hardware implementation of these low-level functions to achieve real-time processing speeds. The objective of the project was to implement a system in which the continuous feature extraction process is not affected by dynamic changes in the scene, varying lighting conditions, or object motion relative to the cameras. Due to the high bandwidth (3.5 MHz) and serial nature of the TV data, a pipeline processing scheme was adopted as the overall architecture of this system. Modularity is achieved by designing circuits that are generic within the overall system.

  3. SWNT Imaging Using Multispectral Image Processing

    NASA Astrophysics Data System (ADS)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.

    2012-02-01

    A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built in Bayer filter of the CCD camera was utilized, using OpenCV C++ libraries for image processing, to decompose the image generated in a high magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels, and effectively isolate the SWNT signals from the background.
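    The channel-splitting step can be sketched as follows. This is a simplified stand-in for the paper's calibrated OpenCV pipeline: it only slices an RGGB Bayer mosaic into its three raw colour planes without interpolation; the RGGB layout and the even-sized toy frame are assumptions.

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split a raw RGGB Bayer mosaic into three pseudo-colour channel
    images (no demosaicing: each channel keeps its own sample grid).
    Assumes an even-sized frame with RGGB site layout."""
    r = raw[0::2, 0::2]                              # red sites
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0    # average of the two green sites
    b = raw[1::2, 1::2]                              # blue sites
    return r, g, b

# Toy mosaic: a 4x4 raw frame with known values at each Bayer site.
raw = np.array([[10, 20, 10, 20],
                [30, 40, 30, 40],
                [10, 20, 10, 20],
                [30, 40, 30, 40]], dtype=float)
r, g, b = split_bayer_rggb(raw)
```

    Spectral calibration, as described in the abstract, would then map these three raw channels to estimated wavelength bands.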

  4. Image processing with COSMOS

    NASA Astrophysics Data System (ADS)

    Stobie, R. S.; Dodd, R. J.; MacGillivray, H. T.

    1981-12-01

    It is noted that astronomers have for some time been fascinated by the possibility of automatic plate measurement and that measuring engines have been constructed with an ever increasing degree of automation. A description is given of the COSMOS (CoOrdinates, Sizes, Magnitudes, Orientations, and Shapes) system at the Royal Observatory in Edinburgh. An automatic high-speed microdensitometer controlled by a minicomputer is linked to a very fast microcomputer that performs immediate image analysis. The movable carriage, whose position in two coordinates is controlled digitally to an accuracy of 0.5 micron (0.0005 mm) will take plates as large as 356 mm on a side. It is noted that currently the machine operates primarily in the Image Analysis Mode, in which COSMOS must first detect the presence of an image. It does this by scanning and digitizing the photograph in 'raster' fashion and then searching for local enhancements in the density of the exposed emulsion.

  5. Selected annotated bibliographies for adaptive filtering of digital image data

    USGS Publications Warehouse

    Mayers, Margaret; Wood, Lynnette

    1988-01-01

    Digital spatial filtering is an important tool both for enhancing the information content of satellite image data and for implementing cosmetic effects which make the imagery more interpretable and appealing to the eye. Spatial filtering is a context-dependent operation that alters the gray level of a pixel by computing a weighted average formed from the gray level values of other pixels in the immediate vicinity. Traditional spatial filtering involves passing a particular filter or set of filters over an entire image. This assumes that the filter parameter values are appropriate for the entire image, which in turn is based on the assumption that the statistics of the image are constant over the image. However, the statistics of an image may vary widely over the image, requiring an adaptive or "smart" filter whose parameters change as a function of the local statistical properties of the image. Then a pixel would be averaged only with more typical members of the same population. This annotated bibliography cites some of the work done in the area of adaptive filtering. The methods usually fall into two categories: (a) those that segment the image into subregions, each assumed to have stationary statistics, and use a different filter on each subregion, and (b) those that use a two-dimensional "sliding window" to continuously estimate the filter in either the spatial or frequency domain, or may utilize both domains. They may be used to deal with images degraded by space-variant noise, to suppress undesirable local radiometric statistics while enforcing desirable (user-defined) statistics, to treat problems where space-variant point spread functions are involved, to segment images into regions of constant value for classification, or to "tune" images in order to remove (nonstationary) variations in illumination, noise, contrast, shadows, or haze. Since adaptive filtering, like nonadaptive filtering, is used in image processing to accomplish various goals, this bibliography
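    A minimal example of category (b), a sliding-window adaptive filter in the spirit of the classic Lee filter (not drawn from any specific entry in the bibliography); the window size and assumed noise variance are illustrative parameters.

```python
import numpy as np

def lee_filter(img, win=3, noise_var=0.01):
    """Adaptive (Lee-style) filter: each pixel is pulled toward the local
    mean by an amount that depends on the local variance, so edges
    (high local variance) are smoothed less than flat regions."""
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            mean, var = w.mean(), w.var()
            k = max(var - noise_var, 0.0) / (var + 1e-12)  # gain in [0, 1)
            out[i, j] = mean + k * (img[i, j] - mean)
    return out

# A perfectly flat region has zero local variance, so the filter
# replaces every pixel with the local mean (full smoothing).
flat = lee_filter(np.full((5, 5), 0.5))
```

    The per-pixel gain `k` is exactly the "parameters change as a function of the local statistical properties" behaviour the bibliography describes.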

  6. Statistical Image Processing.

    DTIC Science & Technology

    1982-11-16

    Keywords: spectral analysis, texture image analysis and classification, image software package, automatic spatial clustering.

  7. A Novel Approach for Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Chen, Ya-Chin; Juang, Jer-Nan

    1998-01-01

    Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second-order statistics. On this basis, the well-known normal equations can be formulated. If higher-order statistical stationarity is assumed, then the equivalent normal equations involve higher-order signal moments. In either case, the cross moments (second order or higher) are needed, which renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed, and considerable implementation work has been carried out in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property, and it departs significantly from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
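    For context, a conventional (non-blind) adaptive linear predictor of the kind the report departs from can be sketched with the standard LMS update; the blind variants discussed above replace the explicit prediction error below with a constant-modulus-style criterion. The predictor order and step size are arbitrary illustrative choices.

```python
import numpy as np

def lms_predict(signal, order=4, mu=0.05):
    """One-step-ahead adaptive linear predictor trained with the LMS rule."""
    w = np.zeros(order)
    preds, errs = [], []
    for n in range(order, len(signal)):
        x = signal[n - order:n][::-1]   # most recent samples first
        y = w @ x                       # predicted sample
        e = signal[n] - y               # prediction error (non-blind: uses the true sample)
        w += mu * e * x                 # LMS weight update
        preds.append(y)
        errs.append(e)
    return np.array(preds), np.array(errs)

# A predictable sinusoid: the prediction error shrinks as the weights adapt.
t = np.arange(2000)
s = np.sin(2 * np.pi * 0.01 * t)
preds, errs = lms_predict(s)
```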

  8. Adaptive fusion of infrared and visible images in dynamic scene

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi

    2011-11-01

    Multiple-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered as potential target locations. After that, the region surrounding the target area is segmented as the background region. Image fusion is then locally applied on the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. A variance-ratio measure based on Linear Discriminant Analysis (LDA) is employed to sort the feature set, and the most discriminative feature is selected for the whole-image fusion. As the feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments are conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
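    The weight-selection idea can be caricatured in a few lines. This sketch is not the paper's LDA-based feature selection over histogram features: it simply scans a grid of IR/visible mixing weights and keeps the one that maximizes a between-region/within-region variance ratio on synthetic target and background samples.

```python
import numpy as np

def best_fusion_weight(vis_t, ir_t, vis_b, ir_b, alphas=np.linspace(0, 1, 11)):
    """Pick the IR/visible mixing weight whose fused image best separates
    the target region from the background (a variance-ratio criterion
    loosely in the spirit of the LDA-based measure in the abstract)."""
    best_alpha, best_score = None, -np.inf
    for a in alphas:
        ft = a * ir_t + (1 - a) * vis_t      # fused target region
        fb = a * ir_b + (1 - a) * vis_b      # fused background region
        between = (ft.mean() - fb.mean()) ** 2
        within = ft.var() + fb.var() + 1e-12
        score = between / within
        if score > best_score:
            best_alpha, best_score = a, score
    return best_alpha

# Toy scene: the target is hot in the infrared but invisible in the visible band,
# so the search should favour a heavily IR-weighted fusion.
rng = np.random.default_rng(1)
vis_t = 0.5 + 0.05 * rng.standard_normal(100)   # target, visible
vis_b = 0.5 + 0.05 * rng.standard_normal(100)   # background, visible
ir_t  = 0.9 + 0.05 * rng.standard_normal(100)   # target, infrared (hot)
ir_b  = 0.1 + 0.05 * rng.standard_normal(100)   # background, infrared
alpha = best_fusion_weight(vis_t, ir_t, vis_b, ir_b)
```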

  9. The new adaptive enhancement algorithm on the degraded color images

    NASA Astrophysics Data System (ADS)

    Xue, Rong Kun; He, Wei; Li, Yufeng

    2016-10-01

    Based on the frequency-distribution characteristics of scenes in degraded color images, the MSRCR method and the wavelet transform are first introduced separately to enhance color images, and their advantages and disadvantages are analyzed experimentally. A combination of an improved MSRCR method and the wavelet transform is then proposed. The wavelet transform is used to decompose the color images, increasing the coefficients of the low-level details and reducing those of the top-level details in order to highlight scene information; meanwhile, the improved MSRCR method is used to enhance the low-frequency components of the degraded images after wavelet processing. Adaptive equalization is then applied to further enhance the images, and finally the enhanced color images are obtained by reconstructing all the coefficients produced by the wavelet transform. Evaluation of the experimental results and data analysis show that the proposed method is better than the separate use of the wavelet transform or the MSRCR method.
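    As a point of reference for the retinex component, a single-scale retinex (the building block that MSRCR extends with multiple scales and colour restoration) can be sketched as follows; the Gaussian scale is an arbitrary illustrative choice, and the blur is a plain separable convolution.

```python
import numpy as np

def single_scale_retinex(img, sigma=3.0):
    """Single-scale retinex: subtract a log-domain estimate of the
    illumination, obtained here with a separable Gaussian blur,
    from the log image."""
    # Build a normalized 1-D Gaussian kernel and blur with two separable passes.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    def blur1d(a, axis):
        return np.apply_along_axis(
            lambda v: np.convolve(np.pad(v, radius, mode='edge'), g, mode='valid'),
            axis, a)
    illumination = blur1d(blur1d(img, 0), 1)
    return np.log1p(img) - np.log1p(illumination)

# A uniformly lit region carries no reflectance detail, so its retinex
# output is (numerically) zero everywhere.
flat = single_scale_retinex(np.full((8, 8), 0.5))
```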

  10. Industrial applications of process imaging and image processing

    NASA Astrophysics Data System (ADS)

    Scott, David M.; Sunshine, Gregg; Rosen, Lou; Jochen, Ed

    2001-02-01

    Process imaging is the art of visualizing events inside closed industrial processes. Image processing is the art of mathematically manipulating digitized images to extract quantitative information about such processes. Ongoing advances in camera and computer technology have made it feasible to apply these abilities to measurement needs in the chemical industry. To illustrate the point, this paper describes several applications developed at DuPont, where a variety of measurements are based on in-line, at-line, and off-line imaging. Application areas include compounding, melt extrusion, crystallization, granulation, media milling, and particle characterization. Polymer compounded with glass fiber is evaluated by a patented radioscopic (real-time X-ray imaging) technique to measure concentration and dispersion uniformity of the glass. Contamination detection in molten polymer (important for extruder operations) is provided by both proprietary and commercial on-line systems. Crystallization in production reactors is monitored using in-line probes and flow cells. Granulation is controlled by at-line measurements of granule size obtained from image processing. Tomographic imaging provides feedback for improved operation of media mills. Finally, particle characterization is provided by a robotic system that measures individual size and shape for thousands of particles without human supervision. Most of these measurements could not be accomplished with other (non-imaging) techniques.

  11. Photorefractive processing for large adaptive phased arrays

    NASA Astrophysics Data System (ADS)

    Weverka, Robert T.; Wagner, Kelvin; Sarto, Anthony

    1996-03-01

    An adaptive null-steering phased-array optical processor that utilizes a photorefractive crystal to time integrate the adaptive weights and null out correlated jammers is described. This is a beam-steering processor in which the temporal waveform of the desired signal is known but the look direction is not. The processor computes the angle(s) of arrival of the desired signal and steers the array to look in that direction while rotating the nulls of the antenna pattern toward any narrow-band jammers that may be present. We have experimentally demonstrated a simplified version of this adaptive phased-array-radar processor that nulls out the narrow-band jammers by using feedback-correlation detection. In this processor it is assumed that we know a priori only that the signal is broadband and the jammers are narrow band. These are examples of a class of optical processors that use the angular selectivity of volume holograms to form the nulls and look directions in an adaptive phased-array-radar pattern and thereby to harness the computational abilities of three-dimensional parallelism in the volume of photorefractive crystals. The development of this volume-holographic processing system has led to a new algorithm for phased-array-radar processing that uses fewer tapped-delay lines than does the classic time-domain beam former. The optical implementation of the new algorithm has the further advantage of utilizing a single photorefractive crystal to implement as many as a million adaptive weights, allowing the radar system to scale to large size with no increase in processing hardware.

  12. Radiological image presentation requires consideration of human adaptation characteristics

    NASA Astrophysics Data System (ADS)

    O'Connell, N. M.; Toomey, R. J.; McEntee, M.; Ryan, J.; Stowe, J.; Adams, A.; Brennan, P. C.

    2008-03-01

    Visualisation of anatomical or pathological image data is highly dependent on the eye's ability to discriminate between image brightnesses, and this is best achieved when these data are presented to the viewer at luminance levels to which the eye is adapted. Current ambient light recommendations are often linked to overall monitor luminance, but this relies on specific regions of interest matching overall monitor brightness. The current work investigates the luminances of specific regions of interest within three image types: postero-anterior (PA) chest, PA wrist, and computerised tomography (CT) of the head. Luminance levels were measured within the hilar region and peripheral lung, the distal radius, and the supra-ventricular grey matter. For each image type, average monitor luminances were calculated with a calibrated photometer at ambient light levels of 0, 100 and 400 lux. Thirty samples of each image type were employed, resulting in a total of over 6,000 measurements. Results demonstrate that average monitor luminances varied from clinically-significant values by up to a factor of 4, 2 and 6 for chest, wrist and CT head images respectively. Values for the thoracic hilum and wrist were higher, and those for the peripheral lung and CT brain lower, than overall monitor levels. The ambient light level had no impact on the results. The results demonstrate that clinically important radiological information for common radiological examinations is not being presented to the viewer in a way that facilitates optimised visual adaptation and subsequent interpretation. The importance of image-processing algorithms focussing on clinically-significant anatomical regions instead of radiographic projections is highlighted.

  13. Adaptive photoacoustic imaging quality optimization with EMD and reconstruction

    NASA Astrophysics Data System (ADS)

    Guo, Chengwen; Ding, Yao; Yuan, Jie; Xu, Guan; Wang, Xueding; Carson, Paul L.

    2016-10-01

    Biomedical photoacoustic (PA) signals are characterized by extremely low signal-to-noise ratios, which yield significant artifacts in photoacoustic tomography (PAT) images. Since PA signals acquired by ultrasound transducers are non-linear and non-stationary, traditional data analysis methods such as Fourier and wavelet methods cannot give useful information for further research. In this paper, we introduce an adaptive method to improve the quality of PA imaging based on empirical mode decomposition (EMD) and reconstruction. Data acquired by ultrasound transducers are adaptively decomposed into several intrinsic mode functions (IMFs) after a sifting pre-process. Since noise is randomly distributed among the different IMFs, suppressing the IMFs with more noise while enhancing the IMFs with less noise can effectively enhance the quality of reconstructed PAT images. However, searching for optimal parameters by brute-force algorithms takes too much time, which prevents this method from practical use. To find parameters within a reasonable time, heuristic algorithms, which are designed to find good solutions more efficiently when traditional methods are too slow, are adopted in our method. Two heuristic algorithms are selected to search for the optimal parameters of the IMFs: the Simulated Annealing Algorithm, a probabilistic method to approximate the global optimal solution, and the Artificial Bee Colony Algorithm, an optimization method inspired by the foraging behavior of bee swarms. The effectiveness of our proposed method is demonstrated both on simulated data and on PA signals from real biomedical tissue, which might bear potential for future clinical PA imaging de-noising.
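    The IMF-weighting search can be illustrated on a toy signal. The "IMFs" below are hand-made stand-ins (a real pipeline would obtain them from EMD), the fitness is the MSE against a known clean signal (unavailable in practice, where an image-quality metric would be used), and the exhaustive grid search stands in for the paper's heuristic search (simulated annealing, artificial bee colony) over a much larger parameter space.

```python
import itertools
import numpy as np

t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)
rng = np.random.default_rng(2)
noise = 0.5 * rng.standard_normal(500)
# Stand-ins for IMFs: noise-dominated mode, signal mode, slow trend.
imfs = np.stack([noise, clean, 0.1 * t])

def recombine(weights):
    """Re-synthesise the signal as a weighted sum of its IMFs."""
    return np.tensordot(weights, imfs, axes=1)

# Exhaustive search over a small weight grid; too slow for realistic
# problems, which is why heuristic search is needed.
grid = [0.0, 0.5, 1.0]
best = min(itertools.product(grid, repeat=3),
           key=lambda w: np.mean((recombine(w) - clean) ** 2))
```

    The search correctly suppresses the noise and trend modes while keeping the signal mode at full weight.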

  14. Contrast Adaptation Implies Two Spatiotemporal Channels but Three Adapting Processes

    ERIC Educational Resources Information Center

    Langley, Keith; Bex, Peter J.

    2007-01-01

    The contrast gain control model of adaptation predicts that the effects of contrast adaptation correlate with contrast sensitivity. This article reports that the effects of high contrast spatiotemporal adaptors are maximum when adapting around 19 Hz, which is a factor of two or more greater than the peak in contrast sensitivity. To explain the…

  15. Advanced communications technologies for image processing

    NASA Technical Reports Server (NTRS)

    Likens, W. C.; Jones, H. W.; Shameson, L.

    1984-01-01

    It is essential for image analysts to have the capability to link to remote facilities as a means of accessing both data bases and high-speed processors. This can increase productivity through enhanced data access and minimization of delays. New technology is emerging to provide the high communication data rates needed in image processing. These developments include multi-user sharing of high bandwidth (60 megabits per second) Time Division Multiple Access (TDMA) satellite links, low-cost satellite ground stations, and high speed adaptive quadrature modems that allow 9600 bit per second communications over voice-grade telephone lines.

  16. Motor adaptation as a process of reoptimization.

    PubMed

    Izawa, Jun; Rane, Tushar; Donchin, Opher; Shadmehr, Reza

    2008-03-12

    Adaptation is sometimes viewed as a process in which the nervous system learns to predict and cancel effects of a novel environment, returning movements to near baseline (unperturbed) conditions. An alternate view is that cancellation is not the goal of adaptation. Rather, the goal is to maximize performance in that environment. If performance criteria are well defined, theory allows one to predict the reoptimized trajectory. For example, if velocity-dependent forces perturb the hand perpendicular to the direction of a reaching movement, the best reach plan is not a straight line but a curved path that appears to overcompensate for the forces. If this environment is stochastic (changing from trial to trial), the reoptimized plan should take into account this uncertainty, removing the overcompensation. If the stochastic environment is zero-mean, peak velocities should increase to allow for more time to approach the target. Finally, if one is reaching through a via-point, the optimum plan in a zero-mean deterministic environment is a smooth movement but in a zero-mean stochastic environment is a segmented movement. We observed all of these tendencies in how people adapt to novel environments. Therefore, motor control in a novel environment is not a process of perturbation cancellation. Rather, the process resembles reoptimization: through practice in the novel environment, we learn internal models that predict sensory consequences of motor commands. Through reward-based optimization, we use the internal model to search for a better movement plan to minimize implicit motor costs and maximize rewards.

  17. Coping and adaptation process during puerperium

    PubMed Central

    Muñoz de Rodríguez, Lucy; Ruiz de Cárdenas, Carmen Helena

    2012-01-01

    Introduction: The puerperium is a stage that produces changes and adaptations in women, couples and families. Effective coping during this stage depends on the relationship between the demands of stressful or difficult situations and the resources that the puerperal woman has. Roy (2004), in her middle-range theory of Coping and Adaptation Processing, defines coping as the "behavioral and cognitive efforts that a person makes to meet the environment demands". For the puerperal woman, correct coping is necessary to maintain her physical and mental well-being, especially in situations that can be stressful, such as breastfeeding and the return to work. According to Lazarus and Folkman (1986), a resource for coping is to have someone who provides emotional, informative and/or tangible support. Objective: To review the issue of women's coping and adaptation during the puerperium stage and the strategies that enhance this adaptation. Methods: search and selection of articles from the databases Cochrane, Medline, Ovid, ProQuest, Scielo, and Blackwell Synergy. Other sources: unpublished documents by Roy, published books on Roy's Model, and websites of international health organizations. Results: the need to recognize the puerperium as a stage that requires comprehensive care is evident. Nurses must play a leading role in the care offered to women and their families, considering the specific demands of this situation and the resources that promote effective coping: the family, education and the health services. PMID:24893059

  18. Image processing: some challenging problems.

    PubMed Central

    Huang, T S; Aizawa, K

    1993-01-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing. PMID:8234312

  19. Image Processing: Some Challenging Problems

    NASA Astrophysics Data System (ADS)

    Huang, T. S.; Aizawa, K.

    1993-11-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

  20. An Adaptive Framework for Image and Video Sensing

    DTIC Science & Technology

    2005-03-01

    ...bandwidth on the camera transmission or memory is not optimally utilized. In this paper we outline a framework for an adaptive sensor where the spatial and temporal sampling rates adapt to image content, so that a representation of the scene can be realized with small distortion. Keywords: Adaptive Imaging, Varying Sampling Rate, Image Content Measure, Scene Adaptive, Camera.

  1. Image processing for optical mapping.

    PubMed

    Ravindran, Prabu; Gupta, Aditya

    2015-01-01

    Optical Mapping is an established single-molecule, whole-genome analysis system, which has been used to gain a comprehensive understanding of genomic structure and to study structural variation of complex genomes. A critical component of the Optical Mapping system is the image processing module, which extracts single-molecule restriction maps from image datasets of immobilized, restriction-digested and fluorescently stained large DNA molecules. In this review, we describe robust and efficient image processing techniques to process these massive datasets and extract accurate restriction maps in the presence of noise, ambiguity and confounding artifacts. We also highlight a few applications of the Optical Mapping system.

  2. Image Processing REST Web Services

    DTIC Science & Technology

    2013-03-01

    ...collections, deblurring, contrast enhancement, and super resolution. Image 1 ("Chaining the image processing algorithms") shows: (1) the original image with a target chip to super-resolve; (2) the unenhanced extracted target chip; (3) the super-resolved target chip; (4) the super-resolved, deblurred target chip; (5) the super-resolved, deblurred and contrast-enhanced target chip. Resources: there are two types of resources associated with these

  3. “Lucky Averaging”: Quality improvement on Adaptive Optics Scanning Laser Ophthalmoscope Images

    PubMed Central

    Huang, Gang; Zhong, Zhangyi; Zou, Weiyao; Burns, Stephen A.

    2012-01-01

    Adaptive optics (AO) has greatly improved retinal image resolution. However, even with AO, temporal and spatial variations in image quality still occur due to wavefront fluctuations, intra-frame focus shifts and other factors. As a result, aligning and averaging images can produce a mean image that has lower resolution or contrast than the best images within a sequence. To address this, we propose an image post-processing scheme called "lucky averaging", analogous to lucky imaging (Fried, 1978), based on computing the best local contrast over time. Results from eye data demonstrate improvements in image quality. PMID:21964097
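    A minimal sketch of the block-wise selection idea (not the authors' exact contrast measure): for each image block, only the frames with the highest local contrast contribute to the average. Block size and the kept fraction are illustrative parameters.

```python
import numpy as np

def lucky_average(frames, block=8, keep=0.5):
    """'Lucky averaging': for each image block, average only the frames
    whose local contrast (standard deviation) in that block is highest."""
    n, h, w = frames.shape
    out = np.zeros((h, w))
    k = max(1, int(keep * n))
    for i in range(0, h, block):
        for j in range(0, w, block):
            tiles = frames[:, i:i + block, j:j + block]
            contrast = tiles.std(axis=(1, 2))
            best = np.argsort(contrast)[-k:]   # sharpest frames for this block
            out[i:i + block, j:j + block] = tiles[best].mean(axis=0)
    return out

# Toy sequence: only frame 0 carries structure (a sparse grid of bright
# pixels); keeping the top 25% of 4 frames selects exactly that frame.
frames = np.zeros((4, 8, 8))
frames[0, ::2, ::2] = 1.0
sharp = lucky_average(frames, block=8, keep=0.25)
```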

  4. SOFT-1: Imaging Processing Software

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Five levels of image processing software are enumerated and discussed: (1) logging and formatting; (2) radiometric correction; (3) correction for geometric camera distortion; (4) geometric/navigational corrections; and (5) general software tools. Specific concerns about access to and analysis of digital imaging data within the Planetary Data System are listed.

  5. Mariner 9-Image processing and products

    USGS Publications Warehouse

    Levinthal, E.C.; Green, W.B.; Cutts, J.A.; Jahelka, E.D.; Johansen, R.A.; Sander, M.J.; Seidman, J.B.; Young, A.T.; Soderblom, L.A.

    1973-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the, different levels of decalibration and analysis. ?? 1973.

  6. Photographic image enhancement and processing

    NASA Technical Reports Server (NTRS)

    Lockwood, H. E.

    1975-01-01

    Image processing techniques (computer and photographic) are described which are used within the JSC Photographic Technology Division. Two purely photographic techniques used for specific subject isolation are discussed in detail. Sample imagery is included.

  7. An adaptive filtered back-projection for photoacoustic image reconstruction

    SciTech Connect

    Huang, He; Bustamante, Gilbert; Peterson, Ralph; Ye, Jing Yong

    2015-05-15

    the correct signal strength of the absorbers. The reconstructed image of the second phantom further demonstrates the capability to form clear images of the spheres with sharp borders in the overlapping geometry. The smallest sphere is clearly visible and distinguishable, even though it is surrounded by two big spheres. In addition, image reconstructions were conducted with randomized noise added to the observed signals to mimic realistic experimental conditions. Conclusions: The authors have developed a new FBP algorithm that is capable of reconstructing high-quality images with correct relative intensities and sharp borders for PAT. The results demonstrate that the weighting function serves as a precise ramp filter for processing the observed signals in the Fourier domain. In addition, this algorithm allows an adaptive determination of the cutoff frequency for the applied low-pass filter.
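    The ramp-plus-low-pass filtering step described in the conclusions can be sketched for a single 1-D projection. The hard cutoff below is a fixed illustrative parameter, whereas the paper determines it adaptively from the data.

```python
import numpy as np

def filter_projection(proj, cutoff=0.5):
    """Apply a ramp filter with a hard low-pass cutoff (given as a fraction
    of the Nyquist frequency) to one projection, in the Fourier domain."""
    n = len(proj)
    freqs = np.fft.fftfreq(n)                  # cycles/sample in [-0.5, 0.5)
    ramp = np.abs(freqs)                       # ideal ramp filter
    ramp[np.abs(freqs) > cutoff * 0.5] = 0.0   # low-pass: zero above cutoff
    return np.real(np.fft.ifft(np.fft.fft(proj) * ramp))

# The ramp filter is zero at DC, so a constant projection filters to zero.
filtered = filter_projection(np.ones(64))
```

    In a full FBP reconstruction, each filtered projection would then be back-projected across the image grid.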

  8. An adaptive filtered back-projection for photoacoustic image reconstruction

    PubMed Central

    Huang, He; Bustamante, Gilbert; Peterson, Ralph; Ye, Jing Yong

    2015-01-01

    the correct signal strength of the absorbers. The reconstructed image of the second phantom further demonstrates the capability to form clear images of the spheres with sharp borders in the overlapping geometry. The smallest sphere is clearly visible and distinguishable, even though it is surrounded by two big spheres. In addition, image reconstructions were conducted with randomized noise added to the observed signals to mimic realistic experimental conditions. Conclusions: The authors have developed a new FBP algorithm that is capable of reconstructing high quality images with correct relative intensities and sharp borders for PAT. The results demonstrate that the weighting function serves as a precise ramp filter for processing the observed signals in the Fourier domain. In addition, this algorithm allows an adaptive determination of the cutoff frequency for the applied low pass filter. PMID:25979011

  9. Sgraffito simulation through image processing

    NASA Astrophysics Data System (ADS)

    Guerrero, Roberto A.; Serón Arbeloa, Francisco J.

    2011-10-01

    This paper presents a tool for simulating the traditional Sgraffito technique through digital image processing. The tool is based on a digital image pile and a set of attributes recovered from the image at the bottom of the pile using the Streit and Buchanan multiresolution image pyramid. This technique tries to preserve the principles of artistic composition by means of the attributes of color, luminance and shape recovered from the foundation image. A pair of simulated scratching objects establishes how the recovered attributes are to be painted. Different attributes can be painted by using different scratching primitives. The resulting image will be a colorimetric composition reached from the image on the top of the pile, the color of the images revealed by scratching and the inner characteristics of each scratching primitive. The technique combines elements of image processing, art and computer graphics, allowing users to make their own free compositions and providing a means for the development of visual communication skills within the user-observer relationship. The technique enables the application of the given concepts in non-artistic fields with specific subject tools.

  10. Adaptive feature-specific imaging: a face recognition example.

    PubMed

    Baheti, Pawan K; Neifeld, Mark A

    2008-04-01

    We present an adaptive feature-specific imaging (AFSI) system and consider its application to a face recognition task. The proposed system makes use of previous measurements to adapt the projection basis at each step. Using sequential hypothesis testing, we compare AFSI with static-FSI (SFSI) and static or adaptive conventional imaging in terms of the number of measurements required to achieve a specified probability of misclassification (Pe). The AFSI system exhibits significant improvement compared to SFSI and conventional imaging at low signal-to-noise ratio (SNR). It is shown that for M = 4 hypotheses and desired Pe = 10⁻², AFSI requires 100 times fewer measurements than the adaptive conventional imager at SNR = -20 dB. We also show a trade-off, in terms of average detection time, between measurement SNR and adaptation advantage, resulting in an optimal value of integration time (equivalent to SNR) per measurement.
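Sequential hypothesis testing of the kind used above to compare imagers can be illustrated with Wald's sequential probability ratio test for two Gaussian hypotheses. This is a generic sketch, not the AFSI measurement model; the thresholds and parameters are illustrative assumptions.

```python
def sprt(samples, mu0, mu1, sigma, a=-4.6, b=4.6):
    # Wald sequential probability ratio test for i.i.d. Gaussian data:
    # accumulate the log-likelihood ratio sample by sample and stop at
    # the first threshold crossing. Returns (decision, samples used).
    llr = 0.0
    for n, x in enumerate(samples, 1):
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= b:
            return "H1", n
        if llr <= a:
            return "H0", n
    return "undecided", len(samples)
```

The fewer measurements needed to cross a threshold, the shorter the average detection time, which is the quantity traded off against SNR in the abstract.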

  11. Robust adaptive digital watermark for still images using hybrid modulation

    NASA Astrophysics Data System (ADS)

    Alturki, Faisal T.; Mersereau, Russell M.

    2001-08-01

    A digital watermark is a short sequence of information containing an owner identity or copyright information embedded in a way that is difficult to erase. We present a new oblivious digital watermarking technique for copyright protection of still images. The technique embeds the watermark in a subset of low to mid frequency coefficients. A key is used to randomly select a group of coefficients from that subset for watermark embedding. The original phases of the selected coefficients are removed and the new phases are set in accordance with the embedded watermark. Since the coefficients are selected at random, the powers of the low magnitude coefficients are increased to enhance their immunity against image attacks. To cope with small geometric attacks, a replica of the watermark is embedded by dividing the image into sub-blocks and taking the DCT of these blocks. The watermark is embedded in the DC component of some of these blocks selected in an adaptive way using quantization techniques. A major advantage of this technique is its complete suppression of the noise due to the host image. The robustness of the technique to a number of standard image processing attacks is demonstrated using the criteria of the latest Stirmark benchmark test.
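Embedding a watermark bit in a block's DC level via quantization, as in the sub-block replica described above, can be sketched as a simple quantization-index-modulation scheme. This is an illustrative stand-in operating on the block mean (proportional to the DCT DC coefficient); the `step` size and block handling are assumptions, not the authors' parameters.

```python
import math

def embed_bit(block, bit, step=8.0):
    # Shift the whole block so its mean (DC level) lands on a quantizer
    # lattice: offset step/4 encodes bit 0, offset 3*step/4 encodes bit 1.
    m = sum(sum(row) for row in block) / (len(block) * len(block[0]))
    target = math.floor(m / step) * step + (0.75 if bit else 0.25) * step
    delta = target - m
    return [[v + delta for v in row] for row in block]

def extract_bit(block, step=8.0):
    # Decode by checking which half of the quantization cell the mean is in.
    m = sum(sum(row) for row in block) / (len(block) * len(block[0]))
    return 1 if (m % step) / step >= 0.5 else 0
```

Because each embedded level sits step/4 away from the decision boundary, the bit survives any perturbation of the block mean smaller than step/4.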

  12. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing with a more conventional image processing algorithm is provided and shows that fuzzy image processing yields better accuracy than conventional image processing.
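A fuzzy approach to centroiding, of the kind a sun sensor uses to locate the sun image, can be illustrated by weighting pixels with a membership function rather than a hard threshold. This is a minimal sketch; the ramp membership and the 20% noise floor are assumptions, not the instrument's algorithm.

```python
def fuzzy_centroid(row):
    # Assign each pixel a fuzzy membership (a ramp above a noise floor)
    # instead of a binary "lit / not lit" decision, then return the
    # membership-weighted centroid along the row.
    floor = max(row) * 0.2
    weights = [max(0.0, v - floor) for v in row]
    total = sum(weights)
    if total == 0:
        return None
    return sum(i * w for i, w in enumerate(weights)) / total
```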

  13. Optimization of exposure in panoramic radiography while maintaining image quality using adaptive filtering.

    PubMed

    Svenson, Björn; Larsson, Lars; Båth, Magnus

    2016-01-01

    Objective: The purpose of the present study was to investigate the potential of using advanced external adaptive image processing for maintaining image quality while reducing exposure in dental panoramic storage phosphor plate (SPP) radiography. Materials and methods: Thirty-seven SPP radiographs of a skull phantom were acquired using a Scanora panoramic X-ray machine with various tube load, tube voltage, SPP sensitivity and filtration settings. The radiographs were processed using General Operator Processor (GOP) technology. Fifteen dentists, all within the dental radiology field, compared the structural image quality of each radiograph with a reference image on a 5-point rating scale in a visual grading characteristics (VGC) study. The reference image was acquired with the acquisition parameters commonly used in daily operation (70 kVp, 150 mAs and sensitivity class 200) and processed using the standard process parameters supplied by the modality vendor. Results: All GOP-processed images with similar (or higher) dose as the reference image resulted in higher image quality than the reference. All GOP-processed images with similar image quality as the reference image were acquired at a lower dose than the reference. This indicates that the external image processing improved the image quality compared with the standard processing. Regarding acquisition parameters, no strong dependency of the image quality on the radiation quality was seen and the image quality was mainly affected by the dose. Conclusions: The present study indicates that advanced external adaptive image processing may be beneficial in panoramic radiography for increasing the image quality of SPP radiographs or for reducing the exposure while maintaining image quality.

  14. Image processing using reconfigurable FPGAs

    NASA Astrophysics Data System (ADS)

    Ferguson, Lee

    1996-10-01

    The use of reconfigurable field-programmable gate arrays (FPGAs) for imaging applications shows considerable promise to fill the gap that often occurs when digital signal processor chips fail to meet performance specifications. Single-chip DSPs do not have the overall performance to meet the needs of many imaging applications, particularly in real-time designs. Using multiple DSPs to boost performance often presents major design challenges in maintaining data alignment and process synchronization. These challenges can impose serious cost, power consumption and board space penalties. Image processing requires manipulating massive amounts of data at high speed. Although DSP chips can process data at high speed, their architectures can inhibit overall system performance in real-time imaging. The rate of operations can be increased when they are performed in dedicated hardware, such as special-purpose imaging devices and FPGAs, which provides the horsepower necessary to implement real-time image processing products successfully and cost-effectively. For many fixed applications, non-SRAM-based (antifuse or flash-based) FPGAs provide the raw speed to accomplish standard high-speed functions. However, in applications where algorithms are continuously changing and compute operations must be modified, only SRAM-based FPGAs give enough flexibility. The addition of reconfigurable FPGAs as a flexible hardware facility enables DSP chips to perform optimally. The benefits primarily stem from optimizing the hardware for the algorithms or the use of reconfigurable hardware to enhance the product architecture. And with SRAM-based FPGAs that are capable of partial dynamic reconfiguration, such as the Cache-Logic FPGAs from Atmel, continuous modification of data and logic is not only possible, it is practical as well. First we review the particular demands of image processing. Then we present various applications and discuss strategies for exploiting the capabilities of

  15. Integrating image processing in PACS.

    PubMed

    Faggioni, Lorenzo; Neri, Emanuele; Cerri, Francesca; Turini, Francesca; Bartolozzi, Carlo

    2011-05-01

    Integration of RIS and PACS services into a single solution has become a widespread reality in daily radiological practice, allowing substantial acceleration of workflow with greater ease of work compared with older generation film-based radiological activity. In particular, the fast and spectacular recent evolution of digital radiology (with special reference to cross-sectional imaging modalities, such as CT and MRI) has been paralleled by the development of integrated RIS-PACS systems with advanced image processing tools (two- and/or three-dimensional) that were an exclusive task of costly dedicated workstations until a few years ago. This new scenario is likely to further improve productivity in the radiology department with reduction of the time needed for image interpretation and reporting, as well as to cut costs for the purchase of dedicated standalone image processing workstations. In this paper, a general description of typical integrated RIS-PACS architecture with image processing capabilities will be provided, and the main available image processing tools will be illustrated.

  16. Onboard Image Processing System for Hyperspectral Sensor.

    PubMed

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-09-25

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which lead to reducing onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost.
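Adaptive Golomb-Rice coding of prediction residuals, one of the two FELICS improvements named above, can be sketched as follows. This is a generic illustration of Rice coding with a simple running-mean rule for choosing the parameter k; it is an assumption for exposition, not the flight implementation.

```python
def rice_encode(value, k):
    # Rice code for a non-negative residual: unary-coded quotient,
    # a '0' separator, then the k-bit binary remainder.
    bits = "1" * (value >> k) + "0"
    if k:
        bits += format(value & ((1 << k) - 1), "0{}b".format(k))
    return bits

def adaptive_k(history):
    # Pick k so that 2**k tracks the running mean of recent residual
    # magnitudes; larger residuals then get shorter unary prefixes.
    mean = sum(history) / len(history) if history else 1.0
    k = 0
    while (1 << (k + 1)) <= mean + 1:
        k += 1
    return k
```

For example, `rice_encode(9, 2)` produces `"11001"`: quotient 2 as `"11"`, separator `"0"`, remainder 1 as `"01"`. Adapting k per sample keeps code lengths near optimal as the residual statistics drift across the image.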

  17. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS’s performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which lead to reducing onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281

  18. Enhanced imaging process for xeroradiography

    NASA Astrophysics Data System (ADS)

    Fender, William D.; Zanrosso, Eddie M.

    1993-09-01

    An enhanced mammographic imaging process has been developed which is based on the conventional powder-toner selenium technology used in the Xerox 125/126 x-ray imaging system. The process is derived from improvements in the amorphous selenium x-ray photoconductor, the blue powder toner and the aerosol powder dispersion process. Comparisons of image quality and x-ray dose using the Xerox aluminum-wedge breast phantom and the Radiation Measurements Model 152D breast phantom have been made between the new Enhanced Process, the standard Xerox 125/126 System and screen-film at mammographic x-ray exposure parameters typical for each modality. When comparing the Enhanced Xeromammographic Process with the standard 125/126 System, a distinct advantage is seen for the Enhanced Process in equivalent mass detection and superior fiber and speck detection. The broader imaging latitude of enhanced and standard Xeroradiography, in comparison to film, is illustrated in images made using the aluminum-wedge breast phantom.

  19. Differential morphology and image processing.

    PubMed

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
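The max/min-sum difference equations that compute distance transforms, as described above, can be illustrated with the classic two-pass min-sum recursion for the city-block metric. This is a minimal sketch of the general idea, not the paper's slope-transform machinery.

```python
def distance_transform(binary):
    # Two-pass city-block distance transform: a min-sum difference
    # equation run forward (top-left to bottom-right) and then backward.
    INF = 10 ** 9
    h, w = len(binary), len(binary[0])
    d = [[0 if binary[y][x] else INF for x in range(w)] for y in range(h)]
    for y in range(h):                      # forward pass
        for x in range(w):
            if y:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):          # backward pass
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d
```

Each entry ends up holding the length of the shortest 4-connected path to a feature pixel, the discrete analogue of solving the eikonal equation mentioned in the abstract.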

  20. Translational motion compensation in ISAR image processing.

    PubMed

    Wu, H; Grenier, D; Delisle, G Y; Fang, D G

    1995-01-01

    In inverse synthetic aperture radar (ISAR) imaging, the target rotational motion with respect to the radar line of sight contributes to the imaging ability, whereas the translational motion must be compensated out. This paper presents a novel two-step approach to translational motion compensation using an adaptive range tracking method for range bin alignment and a recursive multiple-scatterer algorithm (RMSA) for signal phase compensation. The initial step of RMSA is equivalent to the dominant-scatterer algorithm (DSA). An error-compensating point source is then recursively synthesized from the selected range bins, where each contains a prominent scatterer. Since the clutter-induced phase errors are reduced by phase averaging, the image speckle noise can be reduced significantly. Experimental data processing for a commercial aircraft and computer simulations confirm the validity of the approach.
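Range bin alignment, the first of the two compensation steps above, amounts to finding the shift that best correlates each range profile with a reference. A brute-force integer-shift version can be sketched as follows; the paper's adaptive tracking loop and sub-bin interpolation are not reproduced.

```python
def best_shift(ref, prof, max_shift=4):
    # Return the integer shift s that maximizes the correlation between
    # the reference range profile and prof shifted by s bins.
    best_c, best_s = None, 0
    for s in range(-max_shift, max_shift + 1):
        c = sum(ref[i] * prof[i - s]
                for i in range(len(ref)) if 0 <= i - s < len(prof))
        if best_c is None or c > best_c:
            best_c, best_s = c, s
    return best_s
```

Applying the recovered shift to every profile aligns the scatterers into consistent range bins before the phase-compensation step (RMSA) is run.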

  1. Satellite Imaging with Adaptive Optics on a 1 M Telescope

    NASA Astrophysics Data System (ADS)

    Bennet, F.; Price, I.; Rigaut, F.; Copeland, M.

    2016-09-01

    The Research School of Astronomy and Astrophysics at the Mount Stromlo Observatory in Canberra, Australia, has been developing adaptive optics (AO) systems for space situational awareness applications. We report on the development and demonstration of an AO system for satellite imaging using a 1 m telescope. The system uses the orbiting object as a natural guide star to measure atmospheric turbulence, and a deformable mirror to provide an optical correction. The AO system utilised modern, high-speed and low-noise EMCCD technology on both the wavefront sensor and imaging camera to achieve high performance, achieving a Strehl ratio in excess of 30% at 870 nm. Images are post-processed with lucky imaging algorithms to further improve the final image quality. We demonstrate the AO system on stellar targets and Iridium satellites, achieving a near diffraction limited full width at half maximum. A specialised real-time controller allows our system to achieve a bandwidth above 100 Hz, with the wavefront sensor and control loop running at 2 kHz. The AO systems we are developing show how ground-based optical sensors can be used to manage the space environment. AO imaging systems can be used for satellite surveillance, while laser ranging can be used to determine precise orbital data used in the critical conjunction analysis required to maintain a safe space environment. We have focused on making this system compact, expandable, and versatile. We are continuing to develop this platform for other space situational awareness applications such as geosynchronous satellite astrometry, space debris characterisation, satellite imaging, and ground-to-space laser communication.

  2. Image Processing Language. Phase 2.

    DTIC Science & Technology

    1988-11-01

    knowledge engineering of coherent collections of methodological tools as they appear in the literature, and the implementation of expert knowledge in...knowledge representation becomes even more desirable. The role of morphology (Reference 30) as a knowledge formalization tool is another area which is...sets of image processing algorithms. These analyses are to be carried out in several modes including a complete translation to image algebra machine

  3. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  4. Digital processing of radiographic images

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

    Some techniques, together with the software documentation for the digital enhancement of radiographs, are presented. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of format of data from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operation. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal to noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
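The speed advantage of recursive (IIR) filtering over FFT-based convolution comes from its constant per-sample cost, independent of the effective kernel length. A first-order recursive smoother run forward and then backward (for zero phase) is a minimal illustration of the idea, not the matched filters of the paper.

```python
def recursive_smooth(signal, alpha=0.5):
    # First-order recursive (IIR) low-pass filter: each output depends on
    # the previous output, so one multiply-add per sample suffices.
    # Running it forward and then backward cancels the phase delay.
    fwd = []
    prev = signal[0]
    for x in signal:
        prev = alpha * x + (1 - alpha) * prev
        fwd.append(prev)
    out = []
    prev = fwd[-1]
    for x in reversed(fwd):
        prev = alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out[::-1]
```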

  5. Speckle statistics in adaptive optics images at visible wavelengths

    NASA Astrophysics Data System (ADS)

    Stangalini, Marco; Pedichini, Fernando; Ambrosino, Filippo; Centrone, Mauro; Del Moro, Dario

    2016-07-01

    Residual speckles in adaptive optics (AO) images represent a well-known limitation to achieving the contrast needed for the detection of faint stellar companions. Speckles in AO imagery can be the result of either residual atmospheric aberrations, not corrected by the AO, or slowly evolving aberrations induced by the optical system. In this work we take advantage of new high temporal cadence (1 ms) data acquired by the SHARK forerunner experiment at the Large Binocular Telescope (LBT) to characterize the AO residual speckles at visible wavelengths. By means of an automatic identification of speckles, we study the main statistical properties of AO residuals. In addition, we also study the memory of the process, and thus the clearance time of the atmospheric aberrations, by using information theory. This information is useful for increasing the realism of numerical simulations aimed at assessing instrumental performance, and for the application of post-processing techniques to AO imagery.

  6. Image processing of galaxy photographs

    NASA Technical Reports Server (NTRS)

    Arp, H.; Lorre, J.

    1976-01-01

    New computer techniques for analyzing and processing photographic images of galaxies are presented, with interesting scientific findings gleaned from the processed photographic data. Discovery and enhancement of very faint and low-contrast nebulous features, improved resolution of near-limit detail in nebulous and stellar images, and relative colors of a group of nebulosities in the field are attained by the methods. Digital algorithms, nonlinear pattern-recognition filters, linear convolution filters, plate averaging and contrast enhancement techniques, and an atmospheric deconvolution technique are described. New detail is revealed in images of NGC 7331, Stephan's Quintet, Seyfert's Sextet, and the jet in M87, via processes of addition of plates, star removal, contrast enhancement, standard deviation filtering, and computer ratioing to bring out qualitative color differences.

  7. FITS Liberator: Image processing software

    NASA Astrophysics Data System (ADS)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.

  8. Phase in Optical Image Processing

    NASA Astrophysics Data System (ADS)

    Naughton, Thomas J.

    2010-04-01

    The use of phase has a long standing history in optical image processing, with early milestones being in the field of pattern recognition, such as VanderLugt's practical construction technique for matched filters, and (implicitly) Goodman's joint Fourier transform correlator. In recent years, the flexibility afforded by phase-only spatial light modulators and digital holography, for example, has enabled many processing techniques based on the explicit encoding and decoding of phase. One application area concerns efficient numerical computations. Pushing phase measurement to its physical limits, designs employing the physical properties of phase have ranged from the sensible to the wonderful, in some cases making computationally easy problems easier to solve and in other cases addressing mathematics' most challenging computationally hard problems. Another application area is optical image encryption, in which, typically, a phase mask modulates the fractional Fourier transformed coefficients of a perturbed input image, and the phase of the inverse transform is then sensed as the encrypted image. The inherent linearity that makes the system so elegant mitigates against its use as an effective encryption technique, but we show how a combination of optical and digital techniques can restore confidence in that security. We conclude with the concept of digital hologram image processing, and applications of same that are uniquely suited to optical implementation, where the processing, recognition, or encryption step operates on full field information, such as that emanating from a coherently illuminated real-world three-dimensional object.

  9. Self-adaptive image denoising based on bidimensional empirical mode decomposition (BEMD).

    PubMed

    Guo, Song; Luan, Fangjun; Song, Xiaoyu; Li, Changyou

    2014-01-01

    To better analyze images with Gaussian white noise, it is necessary to remove the noise before image processing. In this paper, we propose a self-adaptive image denoising method based on bidimensional empirical mode decomposition (BEMD). Firstly, a normal probability plot confirms that the 2D-IMFs of Gaussian white noise images decomposed by BEMD follow the normal distribution. Secondly, an energy estimation equation for the ith 2D-IMF (i = 2, 3, 4, …) is proposed, referencing that of the ith IMF (i = 2, 3, 4, …) obtained by empirical mode decomposition (EMD). Thirdly, the self-adaptive threshold of each 2D-IMF is calculated. Finally, the algorithm of the self-adaptive image denoising method based on BEMD is described. From a practical perspective, the method is applied to denoising magnetic resonance images (MRI) of the brain, and the results show it has better denoising performance compared with other methods.
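Threshold-based shrinkage of decomposition coefficients, the per-2D-IMF step described above, can be illustrated with the classical universal threshold and soft shrinkage. This is a generic stand-in for the paper's self-adaptive per-IMF threshold, not its energy-estimation rule.

```python
import math

def soft_threshold(coeffs, sigma):
    # Universal threshold T = sigma * sqrt(2 ln N) with soft shrinkage:
    # coefficients below T are assumed to be noise and zeroed, the rest
    # are pulled toward zero by T to suppress the noise they carry.
    t = sigma * math.sqrt(2.0 * math.log(len(coeffs)))
    return [0.0 if abs(c) <= t else math.copysign(abs(c) - t, c)
            for c in coeffs]
```

In a BEMD pipeline the same shrinkage would be applied to each noisy 2D-IMF, with sigma estimated per mode, before summing the modes back into the denoised image.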

  10. An adaptive optics biomicroscope for mouse retinal imaging

    NASA Astrophysics Data System (ADS)

    Biss, David P.; Webb, Robert H.; Zhou, Yaopeng; Bifano, Thomas G.; Zamiri, Parisa; Lin, Charles P.

    2007-02-01

    In studying retinal disease on a microscopic level, in vivo imaging has allowed researchers to track disease progression in a single animal over time without sacrificing large numbers of animals for statistical studies. Historically, a drawback of in vivo retinal imaging, when compared to ex vivo imaging, is decreased image resolution due to aberrations present in the mouse eye. Adaptive optics has successfully corrected phase aberrations introduced by the eye in ophthalmic imaging in humans. We are using adaptive optics to correct for aberrations introduced by the mouse eye in hopes of achieving cellular-resolution retinal images of mice in vivo. In addition to using a wavefront sensor to drive the adaptive optic element, we explore using image data to correct for wavefront aberrations introduced by the mouse eye. Image data, in the form of the confocal detection pinhole intensity, are used as the feedback mechanism to control the MEMS deformable mirror in the adaptive optics system. Correction results for both wavefront-sensing and sensorless adaptive optics systems are presented.

  11. Adaptive SVD-Based Digital Image Watermarking

    NASA Astrophysics Data System (ADS)

    Shirvanian, Maliheh; Torkamani Azar, Farah

    Digital data utilization, along with the increasing popularity of the Internet, has facilitated information sharing and distribution. However, such applications have also raised concerns about copyright issues and unauthorized modification and distribution of digital data. Digital watermarking techniques, which are proposed to solve these problems, hide some information in digital media and extract it whenever needed to indicate the data owner. In this paper a new method of image watermarking based on singular value decomposition (SVD) of images is proposed which considers the human visual system prior to embedding the watermark, by segmenting the original image into several blocks of different sizes, with more density in the edges of the image. In this way the original image quality is preserved in the watermarked image. Additional advantages of the proposed technique are a large watermark embedding capacity and robustness of the method against different types of image manipulation techniques.

  12. Coherent Image Layout using an Adaptive Visual Vocabulary

    SciTech Connect

    Dillard, Scott E.; Henry, Michael J.; Bohn, Shawn J.; Gosink, Luke J.

    2013-03-06

    When querying a huge image database containing millions of images, the result of the query may still contain many thousands of images that need to be presented to the user. We consider the problem of arranging such a large set of images into a visually coherent layout, one that places similar images next to each other. Image similarity is determined using a bag-of-features model, and the layout is constructed from a hierarchical clustering of the image set by mapping an in-order traversal of the hierarchy tree into a space-filling curve. This layout method provides strong locality guarantees so we are able to quantitatively evaluate performance using standard image retrieval benchmarks. Performance of the bag-of-features method is best when the vocabulary is learned on the image set being clustered. Because learning a large, discriminative vocabulary is a computationally demanding task, we present a novel method for efficiently adapting a generic visual vocabulary to a particular dataset. We evaluate our clustering and vocabulary adaptation methods on a variety of image datasets and show that adapting a generic vocabulary to a particular set of images improves performance on both hierarchical clustering and image retrieval tasks.
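Mapping a 1-D similarity ordering (such as an in-order traversal of the cluster hierarchy) onto grid positions can be illustrated with the simplest locality-preserving curve, a serpentine (boustrophedon) scan. This is a sketch of the layout idea only; the paper's actual space-filling curve and clustering are not reproduced.

```python
def serpentine_layout(order, width):
    # Place items along a serpentine curve: consecutive items in the
    # ordering land in adjacent grid cells, so similar images stay close.
    pos = {}
    for rank, item in enumerate(order):
        row, col = divmod(rank, width)
        if row % 2:            # reverse odd rows so row ends stay adjacent
            col = width - 1 - col
        pos[item] = (row, col)
    return pos
```

With `width=3`, the ordering a, b, c, d, e, f fills row 0 left-to-right and row 1 right-to-left, so c and d (adjacent in the ordering) also end up in adjacent cells.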

  13. Adaptive and compressive matched field processing.

    PubMed

    Gemba, Kay L; Hodgkiss, William S; Gerstoft, Peter

    2017-01-01

    Matched field processing is a generalized beamforming method that matches received array data to a dictionary of replica vectors in order to locate one or more sources. Its solution set is sparse since there are considerably fewer sources than replicas. Using compressive sensing (CS) implemented using basis pursuit, the matched field problem is reformulated as an underdetermined, convex optimization problem. CS estimates the unknown source amplitudes using the replica dictionary to best explain the data, subject to a row-sparsity constraint. This constraint selects the best matching replicas within the dictionary when using multiple observations and/or frequencies. For a single source, theory and simulations show that the performance of CS and the Bartlett processor are equivalent for any number of snapshots. Contrary to most adaptive processors, CS also can accommodate coherent sources. For a single and multiple incoherent sources, simulations indicate that CS offers modest localization performance improvement over the adaptive white noise constraint processor. SWellEx-96 experiment data results show comparable performance for both processors when localizing a weaker source in the presence of a stronger source. Moreover, CS often displays less ambiguity, demonstrating it is robust to data-replica mismatch.
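
    The Bartlett processor the paper uses as a baseline reduces to correlating the normalized data vector against each normalized replica. A minimal sketch, with a randomly generated replica dictionary standing in for the acoustic propagation models a real matched field processor would use:

```python
import numpy as np

def bartlett(data, replicas):
    """Bartlett ambiguity surface: |w_r^H d|^2 for unit-norm replicas w_r."""
    R = replicas / np.linalg.norm(replicas, axis=0, keepdims=True)
    d = data / np.linalg.norm(data)
    return np.abs(R.conj().T @ d) ** 2

rng = np.random.default_rng(1)
# 16-element array, 50 candidate source locations (hypothetical numbers)
replicas = rng.standard_normal((16, 50)) + 1j * rng.standard_normal((16, 50))
true_idx = 23
data = 2.0 * replicas[:, true_idx]          # noise-free single source
surface = bartlett(data, replicas)
print(int(np.argmax(surface)))              # → 23, the true replica index
```

    The compressive-sensing formulation in the paper replaces this exhaustive correlation with a sparsity-constrained optimization over the same replica dictionary.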

  14. Fingerprint recognition using image processing

    NASA Astrophysics Data System (ADS)

    Dholay, Surekha; Mishra, Akassh A.

    2011-06-01

    Fingerprint recognition is concerned with the difficult task of efficiently matching the image of a person's fingerprint against the fingerprints stored in a database. It is used in forensic science to help identify criminals and in the authentication of a particular person, since a fingerprint is unique to each individual. The present paper describes fingerprint recognition methods using various edge detection techniques and shows how to recognize a fingerprint correctly from camera images. The described method does not require a special device: a simple camera suffices, so the technique can also be used with a basic camera phone. Factors affecting the process include poor illumination, noise, viewpoint dependence, climate, and imaging conditions, so various image enhancement techniques must be applied to increase image quality and remove noise. The present paper describes the technique of tracking a contour on the fingerprint image, applying edge detection to the contour, and then matching the edges inside the contour.
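
    Edge detection of the kind applied here can be sketched with a 3x3 Sobel operator; this is the textbook version, not the authors' specific pipeline:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

ridge = np.zeros((8, 8))
ridge[:, 4:] = 255.0                 # a step edge, like a ridge boundary
mag = sobel_magnitude(ridge)
print(mag[0, 2], mag[0, 0])          # → 1020.0 0.0: strong at the step, zero in flat areas
```

    In a real pipeline the enhancement steps mentioned above (illumination and noise correction) would run before this stage, since gradient operators amplify noise.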

  15. Single Cell Imaging of the Chick Retina with Adaptive Optics

    PubMed Central

    Headington, Kenneth; Choi, Stacey S.; Nickla, Debora; Doble, Nathan

    2012-01-01

    Purpose The chick eye is extensively used as a model in the study of myopia and its progression; however, analysis of the photoreceptor mosaic has required the use of excised retina due to the uncorrected optical aberrations in the lens and cornea. This study implemented high resolution adaptive optics (AO) retinal imaging to visualize the chick cone mosaic in vivo. Methods The New England College of Optometry (NECO) AO fundus camera was modified to allow high resolution in vivo imaging on 2 six-week-old White Leghorn chicks (Gallus gallus domesticus) – labeled chick A and chick B. Multiple, adjacent images, each with a 2.5° field of view, were taken and subsequently montaged together. This process was repeated at varying retinal locations measured from the tip of the pecten. Automated software was used to determine the cone spacing and density at each location. Voronoi analysis was applied to determine the packing arrangement of the cones. Results In both chicks, cone photoreceptors were clearly visible at all retinal locations imaged. Cone densities measured at 36° nasal-12° superior retina from the pecten tip for chick A and 40° nasal-12° superior retina for chick B were 21,714±543 and 26,105±653 cones/mm2 respectively. For chick B, a further 11 locations immediately surrounding the pecten were imaged, with cone densities ranging from 20,980±524 to 25,148±629 cones/mm2. Conclusion In vivo analysis of the cone density and its packing characteristics are now possible in the chick eye through AO imaging, which has important implications for future studies of myopia and ocular disease research. PMID:21950701

  16. A dual-modal retinal imaging system with adaptive optics

    PubMed Central

    Meadway, Alexander; Girkin, Christopher A.; Zhang, Yuhua

    2013-01-01

    An adaptive optics scanning laser ophthalmoscope (AO-SLO) is adapted to provide optical coherence tomography (OCT) imaging. The AO-SLO function is unchanged. The system uses the same light source, scanning optics, and adaptive optics in both imaging modes. The result is a dual-modal system that can acquire retinal images in both en face and cross-section planes at the single cell level. A new spectral shaping method is developed to reduce the large sidelobes in the coherence profile of the OCT imaging when a non-ideal source is used with a minimal introduction of noise. The technique uses a combination of two existing digital techniques. The thickness and position of the traditionally named inner segment/outer segment junction are measured from individual photoreceptors. In-vivo images of healthy and diseased human retinas are demonstrated. PMID:24514529

  17. Next generation high resolution adaptive optics fundus imager

    NASA Astrophysics Data System (ADS)

    Fournier, P.; Erry, G. R. G.; Otten, L. J.; Larichev, A.; Irochnikov, N.

    2005-12-01

    The spatial resolution of retinal images is limited by the presence of static and time-varying aberrations present within the eye. An updated High Resolution Adaptive Optics Fundus Imager (HRAOFI) has been built based on the development from the first prototype unit. This entirely new unit was designed and fabricated to increase opto-mechanical integration and ease-of-use through a new user interface. Improved camera systems for the Shack-Hartmann sensor and for the scene image were implemented to enhance the image quality and the frequency of the Adaptive Optics (AO) control loop. An optimized illumination system that uses specific wavelength bands was applied to increase the specificity of the images. Sample images of clinical trials of retinas, taken with and without the system, are shown. Data on the performance of this system will be presented, demonstrating the ability to calculate near diffraction-limited images.

  18. Spectrally Adaptable Compressive Sensing Imaging System

    DTIC Science & Technology

    2014-05-01

    2D coded projections. The underlying spectral 3D data cube is then recovered using compressed sensing (CS) reconstruction algorithms which assume... introduced in [?], is a remarkable imaging architecture that allows capturing spectral imaging information of a 3D cube with just a single 2D measurement of the coded and spectrally dispersed source field

  19. Towards Adaptive High-Resolution Images Retrieval Schemes

    NASA Astrophysics Data System (ADS)

    Kourgli, A.; Sebai, H.; Bouteldja, S.; Oukil, Y.

    2016-10-01

    Nowadays, content-based image-retrieval techniques constitute powerful tools for archiving and mining large remote sensing image databases. High spatial resolution images are complex and differ widely in their content, even within the same category. All images are more or less textured and structured. During the last decade, different approaches for the retrieval of this type of image have been proposed, differing mainly in the type of features extracted. As these features are supposed to represent the query image efficiently, they should be adapted to all kinds of images contained in the database. However, if the image to recognize is more or less structured, a shape feature will be correspondingly effective, while if the image is composed of a single texture, a parameter reflecting its texture will prove more efficient. This calls for adaptive schemes. We therefore propose to adapt the retrieval scheme to the nature of the image by performing a preliminary analysis so that the indexing stage becomes supervised. First results show that, in this way, simple methods can equal the performance of complex methods such as those based on bags of visual words built from SIFT (Scale Invariant Feature Transform) descriptors and those based on multiscale feature extraction using wavelets and steerable pyramids.
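
    The preliminary-analysis idea can be caricatured with a toy rule that inspects how concentrated the strong gradients are: a few strong, coherent edges suggest structure (favoring shape features), while strong gradients spread over many pixels suggest texture. The gradient measure and the threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def choose_descriptor(img, sparsity_thresh=0.15):
    """Toy preliminary analysis: pick a feature family per image.
    Strong gradients concentrated on few pixels -> structured scene (shape);
    strong gradients spread everywhere -> texture. Threshold is illustrative."""
    gx = np.abs(np.diff(img.astype(float), axis=1))
    if gx.max() == 0:
        return "texture"
    strong_fraction = (gx > 0.5 * gx.max()).mean()
    return "shape" if strong_fraction < sparsity_thresh else "texture"

rng = np.random.default_rng(2)
texture = rng.uniform(size=(32, 32))            # noise-like texture patch
structured = np.zeros((32, 32))
structured[:, 16:] = 1.0                        # a single strong boundary
print(choose_descriptor(texture), choose_descriptor(structured))   # → texture shape
```

    Once such a rule labels the image, the indexing stage can dispatch to the appropriate descriptor, which is what makes the scheme "supervised" in the sense the abstract describes.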

  1. Spatially adaptive regularized iterative high-resolution image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Lim, Won Bae; Park, Min K.; Kang, Moon Gi

    2000-12-01

    High resolution images are often required in applications such as remote sensing, frame freeze in video, military and medical imaging. Digital image sensor arrays, which are used for image acquisition in many imaging systems, are not dense enough to prevent aliasing, so the acquired images will be degraded by aliasing effects. To prevent aliasing without loss of resolution, a dense detector array is required. But it may be very costly or unavailable, thus, many imaging systems are designed to allow some level of aliasing during image acquisition. The purpose of our work is to reconstruct an unaliased high resolution image from the acquired aliased image sequence. In this paper, we propose a spatially adaptive regularized iterative high resolution image reconstruction algorithm for blurred, noisy and down-sampled image sequences. The proposed approach is based on a Constrained Least Squares (CLS) high resolution reconstruction algorithm, with spatially adaptive regularization operators and parameters. These regularization terms are shown to improve the reconstructed image quality by forcing smoothness, while preserving edges in the reconstructed high resolution image. Accurate sub-pixel motion registration is the key to the success of the high resolution image reconstruction algorithm. However, sub-pixel motion registration may have some level of registration error. Therefore, a reconstruction algorithm which is robust against the registration error is required. The registration algorithm uses a gradient based sub-pixel motion estimator which provides shift information for each of the recorded frames. The proposed algorithm is based on a technique of high resolution image reconstruction, and it solves spatially adaptive regularized constrained least square minimization functionals. In this paper, we show that the reconstruction algorithm gives dramatic improvements in the resolution of the reconstructed image and is effective in handling the aliased information.
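
    The constrained-least-squares formulation can be illustrated on a 1-D toy problem: recover a high-resolution signal from a blurred, decimated observation under a smoothness penalty. The operators and the regularization weight below are illustrative; the paper's algorithm is iterative, spatially adaptive, and includes sub-pixel motion registration, none of which is shown here.

```python
import numpy as np

# Toy 1-D constrained least squares (CLS): recover a high-resolution signal
# x from a blurred, down-sampled observation y under a smoothness penalty.
n = 16
x_true = np.zeros(n)
x_true[6:10] = 1.0

A = np.zeros((n // 2, n))                 # blur by a 2-tap average, then decimate
for i in range(n // 2):
    A[i, 2 * i] = A[i, 2 * i + 1] = 0.5

D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)   # circular first-difference regularizer
y = A @ x_true

lam = 0.01                                # regularization weight (illustrative)
# minimize ||y - A x||^2 + lam * ||D x||^2, solved in closed form here
x_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
# the data term is fit tightly; the penalty only smooths the edges a little
print(float(np.linalg.norm(A @ x_hat - y)))
```

    The spatially adaptive variant in the paper varies `lam` and the regularization operator per pixel, relaxing the smoothness penalty across detected edges.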

  2. Body Image Distortion and Exposure to Extreme Body Types: Contingent Adaptation and Cross Adaptation for Self and Other.

    PubMed

    Brooks, Kevin R; Mond, Jonathan M; Stevenson, Richard J; Stephen, Ian D

    2016-01-01

    Body size misperception is common amongst the general public and is a core component of eating disorders and related conditions. While perennial media exposure to the "thin ideal" has been blamed for this misperception, relatively little research has examined visual adaptation as a potential mechanism. We examined the extent to which the bodies of "self" and "other" are processed by common or separate mechanisms in young women. Using a contingent adaptation paradigm, experiment 1 gave participants prolonged exposure to images both of the self and of another female that had been distorted in opposite directions (e.g., expanded other/contracted self), and assessed the aftereffects using test images both of the self and other. The directions of the resulting perceptual biases were contingent on the test stimulus, establishing at least some separation between the mechanisms encoding these body types. Experiment 2 used a cross adaptation paradigm to further investigate the extent to which these mechanisms are independent. Participants were adapted either to expanded or to contracted images of their own body or that of another female. While adaptation effects were largest when adapting and testing with the same body type, confirming the separation of mechanisms reported in experiment 1, substantial misperceptions were also demonstrated for cross adaptation conditions, demonstrating a degree of overlap in the encoding of self and other. In addition, the evidence of misperception of one's own body following exposure to "thin" and to "fat" others demonstrates the viability of visual adaptation as a model of body image disturbance both for those who underestimate and those who overestimate their own size.

  3. Body Image Distortion and Exposure to Extreme Body Types: Contingent Adaptation and Cross Adaptation for Self and Other

    PubMed Central

    Brooks, Kevin R.; Mond, Jonathan M.; Stevenson, Richard J.; Stephen, Ian D.

    2016-01-01

    Body size misperception is common amongst the general public and is a core component of eating disorders and related conditions. While perennial media exposure to the “thin ideal” has been blamed for this misperception, relatively little research has examined visual adaptation as a potential mechanism. We examined the extent to which the bodies of “self” and “other” are processed by common or separate mechanisms in young women. Using a contingent adaptation paradigm, experiment 1 gave participants prolonged exposure to images both of the self and of another female that had been distorted in opposite directions (e.g., expanded other/contracted self), and assessed the aftereffects using test images both of the self and other. The directions of the resulting perceptual biases were contingent on the test stimulus, establishing at least some separation between the mechanisms encoding these body types. Experiment 2 used a cross adaptation paradigm to further investigate the extent to which these mechanisms are independent. Participants were adapted either to expanded or to contracted images of their own body or that of another female. While adaptation effects were largest when adapting and testing with the same body type, confirming the separation of mechanisms reported in experiment 1, substantial misperceptions were also demonstrated for cross adaptation conditions, demonstrating a degree of overlap in the encoding of self and other. In addition, the evidence of misperception of one's own body following exposure to “thin” and to “fat” others demonstrates the viability of visual adaptation as a model of body image disturbance both for those who underestimate and those who overestimate their own size. PMID:27471447

  4. Block-based adaptive lifting schemes for multiband image compression

    NASA Astrophysics Data System (ADS)

    Masmoudi, Hela; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe

    2004-02-01

    In this paper, we are interested in designing lifting schemes adapted to the statistics of the wavelet coefficients of multiband images for compression applications. More precisely, nonseparable vector lifting schemes are used in order to capture the spatial and the spectral redundancies simultaneously. The underlying operators are then computed in order to minimize the entropy of the resulting multiresolution representation. To this end, we have developed a new iterative block-based classification algorithm. Simulation tests carried out on remotely sensed multispectral images indicate that a substantial gain in terms of bit rate is achieved by the proposed adaptive coding method with respect to the non-adaptive one.
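
    The generic predict/update structure that such lifting schemes specialize looks like this. The example below is a fixed Haar-like lifting step, not the learned, nonseparable vector operators of the paper:

```python
import numpy as np

def lifting_forward(x):
    """One level of a simple lifting wavelet: predict odd samples from even
    ones, then update the evens so the approximation preserves the mean."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even              # predict step: prediction residual
    approx = even + detail / 2       # update step: keeps the running average
    return approx, detail

def lifting_inverse(approx, detail):
    """Lifting steps are trivially invertible: undo them in reverse order."""
    even = approx - detail / 2
    odd = detail + even
    x = np.empty(2 * len(approx))
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([2.0, 4.0, 6.0, 8.0])
a, d = lifting_forward(x)
print(a, d)                          # → [3. 7.] [2. 2.]
assert np.allclose(lifting_inverse(a, d), x)   # perfect reconstruction
```

    Adaptive schemes like the one in the paper replace the fixed predict operator with one estimated per block to minimize the entropy of the detail coefficients, while the ladder structure guarantees reversibility regardless of the operators chosen.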

  5. Linear algebra and image processing

    NASA Astrophysics Data System (ADS)

    Allali, Mohamed

    2010-09-01

    We use the computing technology of digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach of using technology to link linear algebra to DIP is interesting and unexpected to students as well as many faculty.

  6. Concept Learning through Image Processing.

    ERIC Educational Resources Information Center

    Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

    This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

  7. Linear Algebra and Image Processing

    ERIC Educational Resources Information Center

    Allali, Mohamed

    2010-01-01

    We use the computing technology of digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach of using technology to link linear algebra to DIP is interesting and unexpected to students as well as many faculty. (Contains 2 tables and 11 figures.)

  8. Information-Adaptive Image Encoding and Restoration

    NASA Technical Reports Server (NTRS)

    Park, Stephen K.; Rahman, Zia-ur

    1998-01-01

    The multiscale retinex with color restoration (MSRCR) has shown itself to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition. A number of algorithms exist that provide one or more of these features, but not all. In this paper we compare the performance of the MSRCR with techniques that are widely used for image enhancement. Specifically, we compare the MSRCR with color adjustment methods such as gamma correction and gain/offset application, histogram modification techniques such as histogram equalization and manual histogram adjustment, and other more powerful techniques such as homomorphic filtering and 'burning and dodging'. The comparison is carried out by testing the suite of image enhancement methods on a set of diverse images. We find that though some of these techniques work well for some of these images, only the MSRCR performs universally well on the test set.
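
    The single-scale building block underlying the multiscale retinex is the log ratio of a pixel to its local surround. A crude sketch, with a box surround standing in for the Gaussian surrounds MSRCR averages over several scales, and without the color restoration step:

```python
import numpy as np

def single_scale_retinex(img, k=3):
    """R = log(I) - log(surround * I), with a crude k x k box surround.
    A stand-in for the Gaussian surrounds the MSRCR averages over scales."""
    img = img.astype(float) + 1.0            # offset to avoid log(0)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    surround = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            surround[i, j] = padded[i:i + k, j:j + k].mean()
    return np.log(img) - np.log(surround)

img = np.outer(np.linspace(10, 200, 16), np.ones(16))   # smooth illumination ramp
r = single_scale_retinex(img)
print(float(np.abs(r).max()))   # small: the smooth ramp is largely cancelled
```

    Dividing out (in log space) a smoothed copy of the image is what gives retinex its dynamic range compression: slowly varying illumination cancels, while local contrast survives.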

  9. Photometric Calibration of the Gemini South Adaptive Optics Imager

    NASA Astrophysics Data System (ADS)

    Stevenson, Sarah Anne; Rodrigo Carrasco Damele, Eleazar; Thomas-Osip, Joanna

    2017-01-01

    The Gemini South Adaptive Optics Imager (GSAOI) is an instrument available on the Gemini South telescope at Cerro Pachon, Chile, utilizing the Gemini Multi-Conjugate Adaptive Optics System (GeMS). In order to allow users to easily perform photometry with this instrument and to monitor any changes in the instrument in the future, we seek to set up a process for performing photometric calibration with standard star observations taken across the time of the instrument’s operation. We construct a Python-based pipeline that includes IRAF wrappers for reduction and combines the AstroPy photutils package and original Python scripts with the IRAF apphot and photcal packages to carry out photometry and linear regression fitting. Using the pipeline, we examine standard star observations made with GSAOI on 68 nights between 2013 and 2015 in order to determine the nightly photometric zero points in the J, H, Kshort, and K bands. This work is based on observations obtained at the Gemini Observatory, processed using the Gemini IRAF and gemini_python packages, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina), and Ministério da Ciência, Tecnologia e Inovação (Brazil).
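
    Zero-point fitting of this kind reduces, in its simplest form, to averaging the offset between catalog and instrumental magnitudes of standard stars. The numbers below are hypothetical, not GSAOI data:

```python
import numpy as np

# Hypothetical standard-star observations: catalog magnitudes and counts.
catalog_mag = np.array([12.1, 13.4, 14.0, 15.2])
counts = np.array([50000.0, 15000.0, 8600.0, 2850.0])

inst_mag = -2.5 * np.log10(counts)           # instrumental magnitude
zp = np.mean(catalog_mag - inst_mag)         # nightly zero point (least squares)
calibrated = inst_mag + zp
print(round(zp, 2), np.abs(calibrated - catalog_mag).max())  # residuals are small
```

    A production pipeline like the one described would additionally fit extinction and color terms and reject outliers before averaging, but the zero point remains the intercept of this same linear relation.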

  10. The Urban Adaptation and Adaptation Process of Urban Migrant Children: A Qualitative Study

    ERIC Educational Resources Information Center

    Liu, Yang; Fang, Xiaoyi; Cai, Rong; Wu, Yang; Zhang, Yaofang

    2009-01-01

    This article employs qualitative research methods to explore the urban adaptation and adaptation processes of Chinese migrant children. Through twenty-one in-depth interviews with migrant children, the researchers discovered: The participant migrant children showed a fairly high level of adaptation to the city; their process of urban adaptation…

  11. Rapid adaptation in visual cortex to the structure of images.

    PubMed

    Müller, J R; Metha, A B; Krauskopf, J; Lennie, P

    1999-08-27

    Complex cells in striate cortex of macaque showed a rapid pattern-specific adaptation. Adaptation made cells more sensitive to orientation change near the adapting orientation. It reduced correlations among the responses of populations of cells, thereby increasing the information transmitted by each action potential. These changes were brought about by brief exposures to stationary patterns, on the time scale of a single fixation. Thus, if successive fixations expose neurons' receptive fields to images with similar but not identical structure, adaptation will remove correlations and improve discriminability.

  12. Coherence-Gated Sensorless Adaptive Optics Multiphoton Retinal Imaging.

    PubMed

    Cua, Michelle; Wahl, Daniel J; Zhao, Yuan; Lee, Sujin; Bonora, Stefano; Zawadzki, Robert J; Jian, Yifan; Sarunic, Marinko V

    2016-09-07

    Multiphoton microscopy enables imaging deep into scattering tissues. The efficient generation of non-linear optical effects is related to both the pulse duration (typically on the order of femtoseconds) and the size of the focused spot. Aberrations introduced by refractive index inhomogeneity in the sample distort the wavefront and enlarge the focal spot, which reduces the multiphoton signal. Traditional approaches to adaptive optics wavefront correction are not effective in thick or multi-layered scattering media. In this report, we present sensorless adaptive optics (SAO) using low-coherence interferometric detection of the excitation light for depth-resolved aberration correction of two-photon excited fluorescence (TPEF) in biological tissue. We demonstrate coherence-gated SAO TPEF using a transmissive multi-actuator adaptive lens for in vivo imaging in a mouse retina. This configuration has significant potential for reducing the laser power required for adaptive optics multiphoton imaging, and for facilitating integration with existing systems.

  13. Coherence-Gated Sensorless Adaptive Optics Multiphoton Retinal Imaging

    PubMed Central

    Cua, Michelle; Wahl, Daniel J.; Zhao, Yuan; Lee, Sujin; Bonora, Stefano; Zawadzki, Robert J.; Jian, Yifan; Sarunic, Marinko V.

    2016-01-01

    Multiphoton microscopy enables imaging deep into scattering tissues. The efficient generation of non-linear optical effects is related to both the pulse duration (typically on the order of femtoseconds) and the size of the focused spot. Aberrations introduced by refractive index inhomogeneity in the sample distort the wavefront and enlarge the focal spot, which reduces the multiphoton signal. Traditional approaches to adaptive optics wavefront correction are not effective in thick or multi-layered scattering media. In this report, we present sensorless adaptive optics (SAO) using low-coherence interferometric detection of the excitation light for depth-resolved aberration correction of two-photon excited fluorescence (TPEF) in biological tissue. We demonstrate coherence-gated SAO TPEF using a transmissive multi-actuator adaptive lens for in vivo imaging in a mouse retina. This configuration has significant potential for reducing the laser power required for adaptive optics multiphoton imaging, and for facilitating integration with existing systems. PMID:27599635

  14. Coherence-Gated Sensorless Adaptive Optics Multiphoton Retinal Imaging

    NASA Astrophysics Data System (ADS)

    Cua, Michelle; Wahl, Daniel J.; Zhao, Yuan; Lee, Sujin; Bonora, Stefano; Zawadzki, Robert J.; Jian, Yifan; Sarunic, Marinko V.

    2016-09-01

    Multiphoton microscopy enables imaging deep into scattering tissues. The efficient generation of non-linear optical effects is related to both the pulse duration (typically on the order of femtoseconds) and the size of the focused spot. Aberrations introduced by refractive index inhomogeneity in the sample distort the wavefront and enlarge the focal spot, which reduces the multiphoton signal. Traditional approaches to adaptive optics wavefront correction are not effective in thick or multi-layered scattering media. In this report, we present sensorless adaptive optics (SAO) using low-coherence interferometric detection of the excitation light for depth-resolved aberration correction of two-photon excited fluorescence (TPEF) in biological tissue. We demonstrate coherence-gated SAO TPEF using a transmissive multi-actuator adaptive lens for in vivo imaging in a mouse retina. This configuration has significant potential for reducing the laser power required for adaptive optics multiphoton imaging, and for facilitating integration with existing systems.

  15. Image denoising via adaptive eigenvectors of graph Laplacian

    NASA Astrophysics Data System (ADS)

    Chen, Ying; Tang, Yibin; Xu, Ning; Zhou, Lin; Zhao, Li

    2016-07-01

    An image denoising method via adaptive eigenvectors of graph Laplacian (EGL) is proposed. Unlike the trivial parameter setting of the eigenvectors used in the traditional EGL method, in our method the eigenvectors are adaptively selected throughout the denoising procedure. In detail, a rough image is first built with the eigenvectors from the noisy image, where the eigenvectors are selected using the deviation estimate of the clean image. Subsequently, a guided image is effectively restored with a weighted average of the noisy and rough images. In this operation, the averaging coefficient is adaptively obtained to set the deviation of the guided image to approximately that of the clean image. Finally, the denoised image is achieved by a group-sparse model with the pattern from the guided image, where the eigenvectors are chosen under the error control of the noise deviation. Moreover, a modified group orthogonal matching pursuit algorithm is developed to efficiently solve the above group-sparse model. The experiments show that our method not only improves the practicality of EGL methods by reducing their dependence on parameter settings, but also outperforms some well-developed denoising methods, especially for noise with large deviations.

  16. The magic of image processing

    NASA Astrophysics Data System (ADS)

    Sulentic, J. W.

    1984-05-01

    Digital technology has been used to improve enhancement techniques in astronomical image processing. Continuous tone variations in photographs are assigned density number (DN) values which are arranged in an array. DN locations are processed by computer and turned into pixels which form a reconstruction of the original scene on a television monitor. Digitized data can be manipulated to enhance contrast and filter out gross patterns of light and dark which obscure small scale features. Separate black and white frames exposed at different wavelengths can be digitized and processed individually, then recombined to produce a final image in color. Several examples of the use of the technique are provided, including photographs of spiral galaxy M33; four galaxies in Coma Berenices (NGC 4169, 4173, 4174, and 4175); and Stephens Quintet.
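
    The DN manipulation described here can be illustrated with a percentile contrast stretch, one of the simplest enhancement operations on digitized densities (a generic sketch, not the article's exact procedure):

```python
import numpy as np

def stretch(dn, lo_pct=2, hi_pct=98):
    """Percentile contrast stretch of digitized density numbers (DN):
    map the chosen percentile range onto the full 0-255 display range."""
    lo, hi = np.percentile(dn, [lo_pct, hi_pct])
    out = (dn.astype(float) - lo) / (hi - lo)
    return np.clip(out, 0, 1) * 255

# A tiny DN array: mostly faint background with one bright pixel.
dn = np.array([[10, 12, 11], [13, 60, 12], [11, 12, 14]], float)
enhanced = stretch(dn)
print(enhanced.min(), enhanced.max())   # → 0.0 255.0
```

    Clipping the extreme percentiles is what lets faint, small-scale features use the full display range instead of being crushed by a few very bright or very dark pixels.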

  17. SPECKLE NOISE SUBTRACTION AND SUPPRESSION WITH ADAPTIVE OPTICS CORONAGRAPHIC IMAGING

    SciTech Connect

    Ren Deqing; Dou Jiangpei; Zhang Xi; Zhu Yongtian

    2012-07-10

    Future ground-based direct imaging of exoplanets depends critically on high-contrast coronagraphy and wave-front manipulation. A coronagraph is designed to remove most of the unaberrated starlight. Because of the wave-front error inherited from atmospheric turbulence in ground observations, a coronagraph cannot deliver its theoretical performance, and speckle noise will limit the high-contrast imaging performance. Recently, extreme adaptive optics, which can deliver an extremely high Strehl ratio, is being developed for such a challenging mission. In this publication, we show that merely taking a long-exposure image does not provide much gain for coronagraphic imaging with adaptive optics. We further discuss a speckle subtraction and suppression technique that takes full advantage of the high contrast provided by the coronagraph, as well as the wave front corrected by the adaptive optics. This technique works well for coronagraphic imaging with conventional adaptive optics with a moderate Strehl ratio, as well as for extreme adaptive optics with a high Strehl ratio. We show how to subtract and suppress speckle noise efficiently up to the third order, which is critical for future ground-based high-contrast imaging. Numerical simulations are conducted to fully demonstrate this technique.

  18. Fast-adaptive near-lossless image compression

    NASA Astrophysics Data System (ADS)

    He, Kejing

    2016-05-01

    The purpose of image compression is to store or transmit image data efficiently. However, most compression methods emphasize the compression ratio rather than the throughput. We propose an encoding process and rules, and consequently a fast-adaptive near-lossless image compression method (FAIC) with a good compression ratio. FAIC is a single-pass method, which removes bits from each codeword, then predicts the next pixel value through localized edge detection techniques, and finally uses Golomb-Rice codes to encode the residuals. FAIC uses only logical operations, bitwise operations, additions, and subtractions. Meanwhile, it eliminates the slow operations (e.g., multiplication, division, and logarithm) and the complex entropy coder, which can be a bottleneck in hardware implementations. Moreover, FAIC does not depend on any precomputed tables or parameters. Experimental results demonstrate that FAIC achieves a good balance between compression ratio and computational complexity in a certain range (e.g., peak signal-to-noise ratio > 35 dB, bits per pixel > 2). It is suitable for applications in which the amount of data is huge or the computation power is limited.
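
    The Golomb-Rice stage of such a coder maps a nonnegative residual to a unary quotient plus a k-bit binary remainder, using only shifts and masks. A minimal sketch of the code family (the signed-to-unsigned residual mapping and FAIC's adaptive choice of k are omitted):

```python
def rice_encode(value, k):
    """Golomb-Rice code: unary-coded quotient, then k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    """Invert rice_encode: count leading 1s, then read k remainder bits."""
    q = bits.index("0")
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r

for v in (0, 5, 19):
    assert rice_decode(rice_encode(v, 2), 2) == v
print(rice_encode(5, 2))   # → '1001': quotient 1, remainder 01
```

    Small residuals, which dominate after good prediction, get short codes; the parameter k trades the unary and binary parts off against each other, which is why adaptive coders tune it to the local residual statistics.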

  19. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE)

    PubMed Central

    Sharif, Behzad; Derbyshire, J. Andrew; Faranesh, Anthony Z.; Bresler, Yoram

    2010-01-01

    MR imaging of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional non-gated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly-accelerated non-gated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically-driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient-adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., subject’s heart-rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in-vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high resolution non-gated cardiac MRI during a short breath-hold. PMID:20665794

  20. Multi-modal automatic montaging of adaptive optics retinal images

    PubMed Central

    Chen, Min; Cooper, Robert F.; Han, Grace K.; Gee, James; Brainard, David H.; Morgan, Jessica I. W.

    2016-01-01

    We present a fully automated adaptive optics (AO) retinal image montaging algorithm using classic scale invariant feature transform with random sample consensus for outlier removal. Our approach is capable of using information from multiple AO modalities (confocal, split detection, and dark field) and can accurately detect discontinuities in the montage. The algorithm output is compared to manual montaging by evaluating the similarity of the overlapping regions after montaging, and calculating the detection rate of discontinuities in the montage. Our results show that the proposed algorithm has high alignment accuracy and a discontinuity detection rate that is comparable (and often superior) to manual montaging. In addition, we analyze and show the benefits of using multiple modalities in the montaging process. We provide the algorithm presented in this paper as open-source and freely available to download. PMID:28018714

  1. Multi-modal automatic montaging of adaptive optics retinal images.

    PubMed

    Chen, Min; Cooper, Robert F; Han, Grace K; Gee, James; Brainard, David H; Morgan, Jessica I W

    2016-12-01

    We present a fully automated adaptive optics (AO) retinal image montaging algorithm using the classic scale invariant feature transform with random sample consensus for outlier removal. Our approach is capable of using information from multiple AO modalities (confocal, split detection, and dark field) and can accurately detect discontinuities in the montage. The algorithm output is compared to manual montaging by evaluating the similarity of the overlapping regions after montaging, and calculating the detection rate of discontinuities in the montage. Our results show that the proposed algorithm has high alignment accuracy and a discontinuity detection rate that is comparable (and often superior) to manual montaging. In addition, we analyze and show the benefits of using multiple modalities in the montaging process. We provide the algorithm presented in this paper as open-source and freely available to download.
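    The outlier-removal half of this pipeline can be illustrated with a minimal RANSAC that estimates the offset between matched feature locations. This is a sketch under simplifying assumptions: the SIFT detection step is omitted, and a pure translation model is assumed (the paper's registration model may be richer).

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=2.0, seed=0):
    """Estimate a 2-D translation t with dst ~ src + t from matched points
    containing outliers. One match fully determines a candidate translation."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))           # sample a single match
        t = dst[i] - src[i]
        mask = np.linalg.norm(src + t - dst, axis=1) < tol
        if mask.sum() > best_mask.sum():     # keep the largest consensus set
            best_mask = mask
    t = (dst[best_mask] - src[best_mask]).mean(axis=0)  # refit on inliers
    return t, best_mask
```

    In a montaging context, `t` aligns one AO frame to its neighbor and `best_mask` discards spurious feature matches before stitching.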

  2. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  3. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the laboratory LIGIV concerns capture, processing, archiving and display of color images considering the trichromatic nature of the Human Vision System (HSV). Among these projects one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for the post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of user's choice independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. Main focus is on Regions of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if display device or display media changes. This requires firstly the definition of a reference color space and the definition of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the aimed appearance, all kind of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated to the film material. Metadata and content build together rich content. The author is assumed to specify conditions as known from digital graphics arts. To control image pre-processing and image post-processing, these specifications should be contained in the film's metadata. The specifications are related to the ICC profiles but need additionally consider mesopic viewing conditions.

  4. Discrete adaptive zone light elements (DAZLE): a new approach to adaptive imaging

    NASA Astrophysics Data System (ADS)

    Kellogg, Robert L.; Escuti, Michael J.

    2007-09-01

    New advances in Liquid Crystal Spatial Light Modulators (LCSLM) offer opportunities for large adaptive optics in the midwave infrared spectrum. A light focusing adaptive imaging system, using the zero-order diffraction state of a polarizer-free liquid crystal polarization grating modulator to create millions of high transmittance apertures, is envisioned in a system called DAZLE (Discrete Adaptive Zone Light Elements). DAZLE adaptively selects large sets of LCSLM apertures using the principles of coded masks, embodied in a hybrid Discrete Fresnel Zone Plate (DFZP) design. Issues of system architecture, including factors of LCSLM aperture pattern and adaptive control, image resolution and focal plane array (FPA) matching, and trade-offs between filter bandwidths, background photon noise, and chromatic aberration are discussed.

  5. A cost-effective line-based light-balancing technique using adaptive processing.

    PubMed

    Hsia, Shih-Chang; Chen, Ming-Huei; Chen, Yu-Min

    2006-09-01

    Camera imaging systems are widely used; however, the displayed image can exhibit an unequal light distribution. This paper presents novel light-balancing techniques to compensate for uneven illumination based on adaptive signal processing. For text images, we first estimate the background level and then process each pixel with a nonuniform gain. This algorithm balances the light distribution while keeping a high contrast in the image. For graph images, adaptive section control using a piecewise nonlinear gain is proposed to equalize the histogram. Simulations show that the light-balancing performance is better than that of other methods. Moreover, we employ line-based processing to efficiently reduce the memory requirement and the computational cost, making the approach applicable to real-time systems.
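    The text-image branch can be sketched line by line: each row's background is estimated with a sliding local maximum (text is darker than paper), and each pixel gets a nonuniform gain that pushes the background toward a common level. The window size, target level, and local-maximum background estimator are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def balance_text_image(img, win=15, target=230.0):
    """Row-wise light balancing: estimate the paper background with a
    sliding local maximum, then apply a per-pixel gain toward `target`."""
    h, w = img.shape
    pad = win // 2
    padded = np.pad(img.astype(float), ((0, 0), (pad, pad)), mode="edge")
    out = np.empty((h, w))
    for y in range(h):                       # line-based: one row at a time
        row = padded[y]
        bg = np.array([row[x:x + win].max() for x in range(w)])
        out[y] = img[y] * (target / np.maximum(bg, 1.0))
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

    Because the gain is computed per pixel from the local background, dark text stays dark relative to its surroundings while unevenly lit paper is flattened to a uniform level.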

  6. Adaptive two-pass rank order filter to remove impulse noise in highly corrupted images.

    PubMed

    Xu, Xiaoyin; Miller, Eric L; Chen, Dongbin; Sarhadi, Mansoor

    2004-02-01

    In this paper, we present an adaptive two-pass rank order filter to remove impulse noise in highly corrupted images. When the noise ratio is high, rank order filters, such as the median filter for example, can produce unsatisfactory results. Better results can be obtained by applying the filter twice, which we call two-pass filtering. To further improve the performance, we develop an adaptive two-pass rank order filter. Between the passes of filtering, an adaptive process is used to detect irregularities in the spatial distribution of the estimated impulse noise. The adaptive process then selectively replaces some pixels changed by the first pass of filtering with their original observed pixel values. These pixels are then kept unchanged during the second filtering. In combination, the adaptive process and the second filter eliminate more impulse noise and restore some pixels that are mistakenly altered by the first filtering. As a final result, the reconstructed image maintains a higher degree of fidelity and has a smaller amount of noise. The idea of adaptive two-pass processing can be applied to many rank order filters, such as a center-weighted median filter (CWMF), adaptive CWMF, lower-upper-middle filter, and soft-decision rank-order-mean filter. Results from computer simulations are used to demonstrate the performance of this type of adaptation using a number of basic rank order filters.
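    The scheme can be sketched with the median filter as the base rank order filter. The irregularity test used here — restoring changed pixels that have too few changed neighbors, on the view that at high noise ratios genuine impulses rarely occur in isolation — is one plausible reading of the adaptive step, not the paper's exact rule; the thresholds are illustrative.

```python
import numpy as np

def median3(img):
    """3x3 median filter with edge replication."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    stack = np.stack([p[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    return np.median(stack, axis=0).astype(img.dtype)

def adaptive_two_pass_median(img, noise_thresh=60, min_neighbors=2):
    first = median3(img)
    # pixels the first pass altered strongly = estimated impulse noise
    changed = np.abs(first.astype(int) - img.astype(int)) > noise_thresh
    h, w = img.shape
    p = np.pad(changed.astype(int), 1)
    neighbors = sum(p[dy:dy + h, dx:dx + w]
                    for dy in range(3) for dx in range(3)) - changed
    # isolated detections are likely detail, not noise: restore them
    restore = changed & (neighbors < min_neighbors)
    first[restore] = img[restore]
    second = median3(first)
    second[restore] = img[restore]   # restored pixels kept unchanged in pass 2
    return second
```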

  7. Distributed relaxation processes in sensory adaptation.

    PubMed

    Thorson, J; Biederman-Thorson, M

    1974-01-18

    Dynamic description of most receptors, even in their near-linear ranges, has not led to understanding of the underlying physical events, in many instances because their curious transfer functions are not found in the usual repertoire of integral-order control-system analysis. We have described some methods, borrowed from other fields, which allow one to map any linear frequency response onto a putative weighting over an ensemble of simpler relaxation processes. One can then ask whether the resultant weighting of such processes suggests a corresponding plausible distribution of values for an appropriate physical variable within the sensory transducer. To illustrate this approach, we have chosen the fractional-order low-frequency response of Limulus lateral-eye photoreceptors. We show first that the current "adapting-bump" hypothesis for the generator potential can be formulated in terms of local first-order relaxation processes in which local light flux, the cross section of rhodopsin for photon capture, and restoration rate of local conductance-changing capability play specific roles. A representative spatial distribution for one of these parameters, which just accounts for the low-frequency response of the receptor, is then derived and its relation to cellular properties and recent experiments is examined. Finally, we show that for such a system, nonintegral-order dynamics are equivalent to nonhyperbolic statics, and that the efficacy distribution derived to account for the small-signal dynamics in fact predicts several decades of near-logarithmic response in the steady state. Encouraged by the result that one plausible proposal can account approximately for both the low-frequency dynamics (the transfer function s(k)) and the range-compressing statics (the Weber-Fechner relationship) measured in this photoreceptor, we have described some formally similar applications of these distributed effects to the vertebrate retina and to analogous properties of

  8. Image super-resolution via adaptive filtering and regularization

    NASA Astrophysics Data System (ADS)

    Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming

    2014-11-01

    Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from a low-resolution image coupled with some prior knowledge as a regularization term. One classic approach regularizes the image by total variation (TV) and/or a wavelet or other transform, which can introduce artifacts. To overcome these shortcomings, a new framework for single-image SR is proposed that applies an adaptive filter before regularization. The key of our model is that the adaptive filter is first used to remove the spatial relevance among pixels, so that only the high-frequency (HF) part, which is sparser in the TV and transform domains, is considered as the regularization term. Concretely, by transforming the original model, the SR problem can be solved by two alternating iterative sub-problems. Before each iteration, the adaptive filter is updated to estimate the initial HF. A high-quality HF part and HR image are obtained by solving the first and second sub-problems, respectively. In the experiments, a set of remote sensing images captured by Landsat satellites is tested to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with state-of-the-art methods.

  9. Adaptive optics image restoration algorithm based on wavefront reconstruction and adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen

    2016-11-01

    To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method. First, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). Then, we develop iterative solutions for AO image restoration, addressing the joint deconvolution problem. Image restoration experiments were performed to verify the restoration effect of our proposed algorithm. The experimental results show that, compared with the RL-IBD and Wiener-IBD algorithms, the GMG measures (for a real AO image) from our algorithm are increased by 36.92% and 27.44%, respectively, the computation time is decreased by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.

  10. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities; the method employs the incomplete beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail of an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods, including the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered.
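    The transformation function itself is easy to sketch: the regularized incomplete beta function I_x(a, b) gives an S-shaped (or gamma-like) intensity mapping, and its shape parameters (a, b) are what an optimizer such as CS-PSO would search over to maximize the fitness criterion. The trapezoidal quadrature below is an illustrative implementation detail and assumes a, b >= 1; the optimizer itself is omitted.

```python
import numpy as np

def incomplete_beta_transform(img, a=2.0, b=2.0, n=2048):
    """Map intensities in [0, 255] through the regularized incomplete beta
    function I_x(a, b), computed by trapezoidal integration of the beta
    density (valid for a, b >= 1, where the density has no singularities)."""
    t = np.linspace(0.0, 1.0, n)
    pdf = t ** (a - 1) * (1 - t) ** (b - 1)
    cdf = np.concatenate([[0.0], np.cumsum((pdf[1:] + pdf[:-1]) / 2)])
    cdf /= cdf[-1]                      # normalize so I_1(a, b) = 1
    x = img.astype(float) / 255.0
    return np.rint(np.interp(x, t, cdf) * 255).astype(np.uint8)
```

    With a = b = 2 the curve is I_x(2, 2) = 3x² − 2x³, a symmetric S-curve that stretches mid-tone contrast while fixing black and white.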

  11. Probing the functions of contextual modulation by adapting images rather than observers

    PubMed Central

    Webster, Michael A.

    2014-01-01

    Countless visual aftereffects have illustrated how visual sensitivity and perception can be biased by adaptation to the recent temporal context. This contextual modulation has been proposed to serve a variety of functions, but the actual benefits of adaptation remain uncertain. We describe an approach we have recently developed for exploring these benefits by adapting images instead of observers, to simulate how images should appear under theoretically optimal states of adaptation. This allows the long-term consequences of adaptation to be evaluated in ways that are difficult to probe by adapting observers, and provides a common framework for understanding how visual coding changes when the environment or the observer changes, or for evaluating how the effects of temporal context depend on different models of visual coding or the adaptation processes. The approach is illustrated for the specific case of adaptation to color, for which the initial neural coding and adaptation processes are relatively well understood, but can in principle be applied to examine the consequences of adaptation for any stimulus dimension. A simple calibration that adjusts each neuron’s sensitivity according to the stimulus level it is exposed to is sufficient to normalize visual coding and generate a host of benefits, from increased efficiency to perceptual constancy to enhanced discrimination. This temporal normalization may also provide an important precursor for the effective operation of contextual mechanisms operating across space or feature dimensions. To the extent that the effects of adaptation can be predicted, images from new environments could be “pre-adapted” to match them to the observer, eliminating the need for observers to adapt. PMID:25281412

  12. Image post-processing in dental practice.

    PubMed

    Gormez, Ozlem; Yilmaz, Hasan Huseyin

    2009-10-01

    Image post-processing of dental digital radiographs, a function commonly used in dental practice, is presented in this article. Digital radiography has been available in dentistry for more than 25 years, and its use by dental practitioners is steadily increasing. Digital acquisition of radiographs enables computer-based image post-processing to enhance image quality and increase the accuracy of interpretation. Image post-processing can easily be practiced in the dental office with a computer and image-processing programs. In this article, image post-processing operations such as image restoration, image enhancement, image analysis, image synthesis, and image compression, and their diagnostic efficacy, are described. In addition, this article provides general dental practitioners with a broad overview of the benefits of the different image post-processing operations to help them understand the role that the technology can play in their practices.

  13. Image segmentation by EM-based adaptive pulse coupled neural networks in brain magnetic resonance imaging.

    PubMed

    Fu, J C; Chen, C C; Chai, J W; Wong, S T C; Li, I C

    2010-06-01

    We propose an automatic hybrid image segmentation model that integrates the statistical expectation maximization (EM) model and the spatial pulse coupled neural network (PCNN) for brain magnetic resonance imaging (MRI) segmentation. In addition, an adaptive mechanism is developed to fine-tune the PCNN parameters. The EM model serves two functions: evaluation of the PCNN image segmentation and adaptive adjustment of the PCNN parameters for optimal segmentation. To evaluate the performance of the adaptive EM-PCNN, we use it to segment MR brain images into gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). The performance of the adaptive EM-PCNN is compared with that of the non-adaptive EM-PCNN, EM, and Bias Corrected Fuzzy C-Means (BCFCM) algorithms. The result is four sets of boundaries for the GM and the brain parenchyma (GM+WM), the two regions of most interest in medical research and clinical applications. Each set of boundaries is compared with the gold standard to evaluate segmentation performance. The adaptive EM-PCNN significantly outperforms the non-adaptive EM-PCNN, EM, and BCFCM algorithms in gray matter segmentation. In brain parenchyma segmentation, the adaptive EM-PCNN significantly outperforms the BCFCM only. However, the adaptive EM-PCNN is better than the non-adaptive EM-PCNN and EM on average. We conclude that of the three approaches, the adaptive EM-PCNN yields the best results for gray matter and brain parenchyma segmentation.
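    The EM half of the model can be sketched for 1-D intensities: fit a k-class Gaussian mixture to voxel values, which yields both tissue-class parameters (e.g. CSF/GM/WM intensity modes) and a likelihood that could score a candidate segmentation. This is a generic EM sketch; the PCNN and the parameter-adaptation loop from the paper are omitted, and the quantile initialization is an assumption.

```python
import numpy as np

def em_gmm_1d(x, k=3, n_iter=60):
    """EM for a k-component 1-D Gaussian mixture on intensities x."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread initial means
    sigma = np.full(k, x.std() / k + 1e-9)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample
        d = (x[:, None] - mu) / sigma
        r = pi * np.exp(-0.5 * d ** 2) / (sigma * np.sqrt(2 * np.pi))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return pi, mu, sigma
```

    Hard labels follow by assigning each voxel to the component with the highest responsibility.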

  14. Contrast-based sensorless adaptive optics for retinal imaging.

    PubMed

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T O; He, Zheng; Metha, Andrew

    2015-09-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes.

  15. An improved adaptive IHS method for image fusion

    NASA Astrophysics Data System (ADS)

    Wang, Ting

    2015-12-01

    An improved adaptive intensity-hue-saturation (IHS) method is proposed for image fusion in this paper, based on the adaptive IHS (AIHS) method and its improved version (IAIHS). In the improved method, the weighting matrix, which decides how much spatial detail from the panchromatic (Pan) image should be injected into the multispectral (MS) image, is defined on the basis of the linear relationship between the edges of the Pan and MS images. At the same time, a modulation parameter t is used to balance the spatial and spectral resolution of the fused image. Experiments showed that the improved method achieves better spectral quality while maintaining spatial resolution compared with the AIHS and IAIHS methods.
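    The idea can be sketched as follows, with the edge-based weighting matrix reduced to a normalized Pan gradient magnitude. This is a simplified stand-in: the actual AIHS/IAIHS weight definitions (which also involve the MS edges) are more elaborate, and t = 0.8 is an illustrative default.

```python
import numpy as np

def aihs_fuse(ms, pan, t=0.8):
    """Inject Pan spatial detail into each MS band (ms shape: bands x H x W),
    weighted per pixel by Pan edge strength and scaled by the
    spatial/spectral trade-off parameter t."""
    intensity = ms.mean(axis=0)             # IHS-style intensity component
    gy, gx = np.gradient(pan)
    edges = np.hypot(gx, gy)
    w = edges / (edges.max() + 1e-12)       # edge-adaptive weighting matrix
    return ms + t * w * (pan - intensity)   # detail injection, per band
```

    In flat regions w ≈ 0, so the MS spectra pass through unchanged; near Pan edges the detail term sharpens the fused image, with t trading sharpness against spectral fidelity.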

  16. Contrast-based sensorless adaptive optics for retinal imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T.O.; He, Zheng; Metha, Andrew

    2015-01-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes. PMID:26417525

  17. Adaptive Compression of Multisensor Image Data

    DTIC Science & Technology

    1992-03-01

    upsample and reconstruct the subimages which are then added together to form the reconstructed image. In order to prevent distortions resulting from...smooth surfaces such as metallic or painted objects have predominantly path A reflections and that rougher surfaces such as soils and vegetation support

  18. Spatially adaptive migration tomography for multistatic GPR imaging

    DOEpatents

    Paglieroni, David W; Beer, N. Reginald

    2013-08-13

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
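    The final peak-identification step can be sketched as a thresholded local-maximum search over the post-processed energy image. The neighborhood size and threshold are illustrative assumptions; the patent's actual detection logic is not specified here.

```python
import numpy as np

def detect_peaks(energy, thresh, win=1):
    """Return (row, col) indices of local maxima above `thresh` in an
    energy map, flagging candidate subsurface objects."""
    h, w = energy.shape
    p = np.pad(energy, win, mode="constant", constant_values=-np.inf)
    is_max = np.ones((h, w), dtype=bool)
    for dy in range(2 * win + 1):
        for dx in range(2 * win + 1):
            if dy == win and dx == win:
                continue                     # skip the center pixel itself
            is_max &= energy >= p[dy:dy + h, dx:dx + w]
    return np.argwhere(is_max & (energy > thresh))
```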

  19. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach that applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over conventional compression techniques alone. Image remapping is used to transform the original image into a lower-entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique that maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques on regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost-effectively and at acceptable performance rates with a combination of techniques selected on the basis of image contextual information.
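    The remapping idea is easy to demonstrate: a reversible predictive transform such as left-neighbor differential pulse code modulation (one of the remappings the paper tests) concentrates a smooth image's values near zero, lowering the first-order entropy the subsequent coder must pay for. The segmentation and arithmetic-coding stages are omitted from this sketch.

```python
import numpy as np

def entropy_bits(a):
    """First-order entropy of an array's values, in bits per pixel."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def dpcm_remap(img):
    """Left-neighbor DPCM along rows; invertible via cumulative sum,
    so the remapping is lossless."""
    return np.diff(img.astype(int), axis=1, prepend=0)
```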

  20. Registration of adaptive optics corrected retinal nerve fiber layer (RNFL) images

    PubMed Central

    Ramaswamy, Gomathy; Lombardo, Marco; Devaney, Nicholas

    2014-01-01

    Glaucoma is the leading cause of preventable blindness in the western world. Investigation of high-resolution retinal nerve fiber layer (RNFL) images in patients may lead to new indicators of its onset. Adaptive optics (AO) can provide diffraction-limited images of the retina, providing new opportunities for earlier detection of neuroretinal pathologies. However, precise processing is required to correct for three effects in sequences of AO-assisted, flood-illumination images: uneven illumination, residual image motion and image rotation. This processing can be challenging for images of the RNFL due to their low contrast and lack of clearly noticeable features. Here we develop specific processing techniques and show that their application leads to improved image quality on the nerve fiber bundles. This in turn improves the reliability of measures of fiber texture such as the correlation of Gray-Level Co-occurrence Matrix (GLCM). PMID:24940551

  1. Registration of adaptive optics corrected retinal nerve fiber layer (RNFL) images.

    PubMed

    Ramaswamy, Gomathy; Lombardo, Marco; Devaney, Nicholas

    2014-06-01

    Glaucoma is the leading cause of preventable blindness in the western world. Investigation of high-resolution retinal nerve fiber layer (RNFL) images in patients may lead to new indicators of its onset. Adaptive optics (AO) can provide diffraction-limited images of the retina, providing new opportunities for earlier detection of neuroretinal pathologies. However, precise processing is required to correct for three effects in sequences of AO-assisted, flood-illumination images: uneven illumination, residual image motion and image rotation. This processing can be challenging for images of the RNFL due to their low contrast and lack of clearly noticeable features. Here we develop specific processing techniques and show that their application leads to improved image quality on the nerve fiber bundles. This in turn improves the reliability of measures of fiber texture such as the correlation of Gray-Level Co-occurrence Matrix (GLCM).
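    The GLCM correlation measure used to quantify fiber texture can be sketched for a single horizontal offset; the quantization level count and the (0, 1) offset are illustrative choices.

```python
import numpy as np

def glcm_correlation(img, levels=8):
    """Correlation feature of the Gray-Level Co-occurrence Matrix for
    horizontally adjacent pixel pairs (offset (0, 1))."""
    q = img.astype(int) * levels // 256          # quantize to `levels` bins
    i, j = q[:, :-1].ravel(), q[:, 1:].ravel()   # neighbor-pair gray levels
    m = np.zeros((levels, levels))
    np.add.at(m, (i, j), 1)                      # accumulate co-occurrences
    p = m / m.sum()
    idx = np.arange(levels)
    mu_i = (p.sum(axis=1) * idx).sum()
    mu_j = (p.sum(axis=0) * idx).sum()
    sd_i = np.sqrt((p.sum(axis=1) * (idx - mu_i) ** 2).sum())
    sd_j = np.sqrt((p.sum(axis=0) * (idx - mu_j) ** 2).sum())
    cov = (p * (idx[:, None] - mu_i) * (idx[None, :] - mu_j)).sum()
    return cov / (sd_i * sd_j + 1e-12)
```

    Oriented structures such as fiber bundles yield high correlation along the bundle direction, which is why registration quality directly affects this measure.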

  2. The Atmosphere of Uranus as Imaged with Keck Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Hammel, H. B.; de Pater, I.; Gibbard, S. G.; Lockwood, G. W.; Rages, K.

    2004-12-01

    Adaptive optics imaging of Uranus was obtained with NIRC2 on the Keck II 10-meter telescope in October 2003 and July 2004 through J, H, and K' filters. Dozens of discrete features were detected in the atmosphere of Uranus. We report the first measurements of winds northward of +43 deg, the first direct measurement of equatorial winds, and the highest wind velocity seen yet on Uranus. At northern mid-latitudes, the winds may have accelerated when compared to earlier HST and Keck observations; southern wind speeds have not changed since Voyager measurements in 1986. The equator of Uranus exhibits a subtle wave structure, with diffuse patches roughly every 30 degrees in longitude. There is no sign of a northern "polar collar" as is seen in the south, but a number of discrete features seen at the "expected" latitudes may signal its early stages of development. The largest cloud features on Uranus show complex structure extending over tens of degrees. On 4 July 2004, we detected a southern hemispheric cloud feature on Uranus at K', the first detection of a southern feature at or longward of 2 microns. H images showed an extended structure whose condensed core was co-located with the K'-bright feature. The core exhibited marked brightness variation, fading within just a few days. The initial brightness at K' indicates that the core's scattering particles reached altitudes above the 1-bar level, with the extended H feature residing below 1.1 bars. The core's rapid disappearance at K' indicates dynamical processes in the local vertical aerosol structure. HBH acknowledges support from NASA grants NAG5-11961 and NAG5-10451. IdP acknowledges support from NSF and the Technology Center for Adaptive Optics, managed by UCSC under cooperative agreement No. AST-9876783. SGG's work was performed under the auspices of the U.S. DoE National Nuclear Security Administration by the UC, LLNL under contract No. W-7405-Eng-48.

  3. Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.

    PubMed

    Ganasala, Padma; Kumar, Vinod

    2016-02-01

    Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information from the source images that is required for better localization and definition of different organs and lesions. In the state-of-the-art image fusion methods based on the nonsubsampled shearlet transform (NSST) and pulse-coupled neural network (PCNN), authors have used the normalized coefficient value to motivate the PCNN processing of both low-frequency (LF) and high-frequency (HF) sub-bands. This makes the fused image blurred and decreases its contrast. The main objective of this work is to design an image fusion method that gives a fused image with better contrast and more detailed information, and is suitable for clinical use. We propose a novel image fusion method utilizing a feature-motivated adaptive PCNN in the NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and an adaptive linking strength is used. Different features are used to motivate the PCNN processing of the LF and HF sub-bands. The proposed method is extended to fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of the experimental results showed that the proposed method provides satisfactory fusion outcomes compared to other image fusion methods.
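
    The simplified PCNN at the heart of such fusion methods can be illustrated with a minimal sketch. The model below is a generic simplified PCNN, not the authors' exact formulation: each pixel neuron fires when its internal activity, boosted by a per-pixel linking strength `beta`, exceeds a decaying threshold, and the accumulated firing counts serve as fusion weights. All names and parameter values are illustrative.

```python
import numpy as np

def simplified_pcnn(stimulus, beta, n_iter=10, alpha=0.2, vt=20.0):
    """Minimal simplified PCNN: each pixel neuron fires when its internal
    activity exceeds a decaying threshold; the per-pixel linking strength
    `beta` couples a neuron to the previous firings of its 3x3 neighbours."""
    shape = stimulus.shape
    Y = np.zeros(shape)                 # firing map
    T = np.ones(shape)                  # dynamic threshold
    fire_count = np.zeros(shape)
    kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    for _ in range(n_iter):
        pad = np.pad(Y, 1)
        L = sum(kernel[i, j] * pad[i:i + shape[0], j:j + shape[1]]
                for i in range(3) for j in range(3))      # linking input
        U = stimulus * (1.0 + beta * L)                   # internal activity
        Y = (U > T).astype(float)
        T = np.exp(-alpha) * T + vt * Y   # threshold decays, jumps on firing
        fire_count += Y
    return fire_count

# fusion rule: keep the coefficient whose neuron fired more often
a, b = np.random.rand(8, 8), np.random.rand(8, 8)
beta = 0.3 * np.ones((8, 8))            # per-pixel linking strength
fa, fb = simplified_pcnn(a, beta), simplified_pcnn(b, beta)
fused = np.where(fa >= fb, a, b)
```

In the paper's adaptive variant, `beta` would be derived from image features rather than held constant as here.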

  4. Performance of the Gemini Planet Imager's adaptive optics system.

    PubMed

    Poyneer, Lisa A; Palmer, David W; Macintosh, Bruce; Savransky, Dmitry; Sadakuni, Naru; Thomas, Sandrine; Véran, Jean-Pierre; Follette, Katherine B; Greenbaum, Alexandra Z; Ammons, S Mark; Bailey, Vanessa P; Bauman, Brian; Cardwell, Andrew; Dillon, Daren; Gavel, Donald; Hartung, Markus; Hibon, Pascale; Perrin, Marshall D; Rantakyrö, Fredrik T; Sivaramakrishnan, Anand; Wang, Jason J

    2016-01-10

    The Gemini Planet Imager's adaptive optics (AO) subsystem was designed specifically to facilitate high-contrast imaging. A definitive description of the system's algorithms and technologies as built is given. Telemetry from 564 AO measurements taken during the Gemini Planet Imager Exoplanet Survey campaign is analyzed. The modal gain optimizer tracks changes in atmospheric conditions. Science observations show that image quality can be improved with the use of both the spatially filtered wavefront sensor and linear-quadratic-Gaussian control of vibration. The error budget indicates that for all targets and atmospheric conditions the AO bandwidth error is the largest term.

  5. Adaptation of web pages and images for mobile applications

    NASA Astrophysics Data System (ADS)

    Kopf, Stephan; Guthier, Benjamin; Lemelson, Hendrik; Effelsberg, Wolfgang

    2009-02-01

    In this paper, we introduce our new visualization service, which presents web pages and images on arbitrary devices with differing display resolutions. We analyze the layout of a web page and simplify its structure and formatting rules. The small screen of a mobile device is used much better this way. Our new image adaptation service combines several techniques. In a first step, border regions which do not contain relevant semantic content are identified and removed by cropping. Attention objects are identified in a second step. We use face detection, text detection and contrast-based saliency maps to identify these objects and combine them into a region of interest. Optionally, the seam carving technique can be used to remove inner parts of an image. Additionally, we have developed a software tool to validate, add, delete, or modify all automatically extracted data. This tool also simulates different mobile devices, so that the user gets a feeling of what an adapted web page will look like. We have performed user studies to evaluate our web and image adaptation approach. Questions regarding software ergonomics, quality of the adapted content, and perceived benefit of the adaptation were asked.
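
    The second step, combining detected attention objects into a single region of interest, can be sketched as below. The detectors themselves (face, text, saliency) are assumed to have already produced bounding boxes; the function name, margin parameter, and box values are illustrative, not the paper's API.

```python
def combine_attention_objects(boxes, img_w, img_h, margin=0.05):
    """Union the bounding boxes of detected attention objects (faces, text,
    salient regions) into one region of interest, then add a margin clamped
    to the image borders. Boxes are (x0, y0, x1, y1) tuples in pixels."""
    x0 = min(b[0] for b in boxes)
    y0 = min(b[1] for b in boxes)
    x1 = max(b[2] for b in boxes)
    y1 = max(b[3] for b in boxes)
    mx, my = int(margin * img_w), int(margin * img_h)
    return (max(0, x0 - mx), max(0, y0 - my),
            min(img_w, x1 + mx), min(img_h, y1 + my))

# two hypothetical detections on a 640x480 image: a face and a text block
roi = combine_attention_objects([(120, 80, 200, 160), (60, 300, 340, 360)],
                                img_w=640, img_h=480)
```

Cropping the image to `roi` then discards the semantically empty borders while keeping every attention object visible.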

  6. Real-time 3D adaptive filtering for portable imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable imaging devices have proven valuable for emergency medical services both in the field and in hospital environments, and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but it is computationally very demanding and hence often unable to run with sufficient performance on a portable platform. In recent years, advanced multicore DSPs have been introduced that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms like 3D adaptive filtering, improving the image quality of portable medical imaging devices. In this study, the performance of a 3D adaptive filtering algorithm on a digital signal processor (DSP) is investigated. The performance is assessed by filtering a volume of 512x256x128 voxels sampled at a rate of 10 MVoxels/s.

  7. Adaptive dictionary learning in sparse gradient domain for image recovery.

    PubMed

    Liu, Qiegen; Wang, Shanshan; Ying, Leslie; Peng, Xi; Zhu, Yanjie; Liang, Dong

    2013-12-01

    Image recovery from undersampled data has always been challenging due to its implicit ill-posed nature, but becomes fascinating with the emerging compressed sensing (CS) theory. This paper proposes a novel gradient-based dictionary learning method for image recovery, which effectively integrates the popular total variation (TV) and dictionary learning techniques into the same framework. Specifically, we first train dictionaries from the horizontal and vertical gradients of the image and then reconstruct the desired image using the sparse representations of both derivatives. The proposed method enables local features in the gradient images to be captured effectively, and can be viewed as an adaptive extension of TV regularization. The results of various experiments on MR images consistently demonstrate that the proposed algorithm efficiently recovers images and presents advantages over the current leading CS reconstruction approaches.

  8. Adaptive, predictive controller for optimal process control

    SciTech Connect

    Brown, S.K.; Baum, C.C.; Bowling, P.S.; Buescher, K.L.; Hanagandi, V.M.; Hinde, R.F. Jr.; Jones, R.D.; Parkinson, W.J.

    1995-12-01

    One can derive a model for use in a Model Predictive Controller (MPC) from first principles or from experimental data. Until recently, both methods failed for all but the simplest processes. First principles are almost always incomplete, and fitting to experimental data fails for dimensions greater than one as well as for non-linear cases. Several authors have suggested the use of a neural network to fit the experimental data to a multi-dimensional and/or non-linear model. Most networks, however, use simple sigmoid functions and backpropagation for fitting. Training of these networks generally requires large amounts of data and, consequently, very long training times. In 1993 we reported on the tuning and optimization of a negative ion source using a special neural network [2]. One of the properties of this network (CNLSnet), a modified radial basis function network, is that it is able to fit data with few basis functions. Another is that its training is linear, resulting in guaranteed convergence and rapid training. We found the training to be rapid enough to support real-time control. This work has been extended to incorporate this network into an MPC, using the model built by the network for predictive control. This controller has shown some remarkable capabilities in such non-linear applications as continuous stirred exothermic tank reactors and high-purity fractional distillation columns [3]. The controller is able not only to build an appropriate model from operating data but also to thin the network continuously so that the model adapts to changing plant conditions. The controller is discussed, as well as its possible use in various difficult control problems that face this community.
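
    The key property claimed above, that training a radial basis function network with fixed basis functions is linear, can be sketched as follows. This is a generic Gaussian RBF least-squares fit, not the CNLSnet algorithm itself; the data, centers, and width are illustrative.

```python
import numpy as np

def fit_rbf(x, y, centers, width):
    """Fit y ~ Phi(x) @ w with Gaussian radial basis functions. With fixed
    centers and width, training is a single linear least-squares solve:
    guaranteed convergence, no backpropagation."""
    phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return w, phi

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x)                  # stand-in for plant operating data
centers = np.linspace(0.0, 1.0, 10)
w, phi = fit_rbf(x, y, centers, width=0.1)
residual = float(np.max(np.abs(phi @ w - y)))
```

Because the solve is a closed-form linear operation, refitting as new operating data arrives is cheap enough for real-time use, which is the property the controller exploits.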

  9. Adaptive conductance filtering for spatially varying noise in PET images

    NASA Astrophysics Data System (ADS)

    Padfield, Dirk R.; Manjeshwar, Ravindra

    2006-03-01

    PET images that have been reconstructed with unregularized algorithms are commonly smoothed with linear Gaussian filters to control noise. Since these filters are spatially invariant, they degrade feature contrast in the image, compromising lesion detectability. Edge-preserving smoothing filters can differentially preserve edges and features while smoothing noise. These filters assume spatially uniform noise models. However, the noise in PET images is spatially variant, approximately following a Poisson behavior. Therefore, different regions of a PET image need smoothing by different amounts. In this work, we introduce an adaptive filter, based on anisotropic diffusion, designed specifically to overcome this problem. In this algorithm, the diffusion is varied according to a local estimate of the noise using either the local median or the grayscale image opening to weight the conductance parameter. The algorithm is thus tailored to the task of smoothing PET images, or any image with Poisson-like noise characteristics, by adapting itself to varying noise while preserving significant features in the image. This filter was compared with Gaussian smoothing and a representative anisotropic diffusion method using three quantitative task-relevant metrics calculated on simulated PET images with lesions in the lung and liver. The contrast gain and noise ratio metrics were used to measure the ability to do accurate quantitation; the Channelized Hotelling Observer lesion detectability index was used to quantify lesion detectability. The adaptive filter improved the signal-to-noise ratio by more than 45% and lesion detectability by more than 55% over the Gaussian filter while producing "natural" looking images and consistent image quality across different anatomical regions.
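
    The idea of weighting the conductance by a local noise estimate can be sketched with a Perona-Malik-style diffusion in which the conductance parameter scales with the local median, so brighter (noisier, under Poisson statistics) regions are smoothed more. This is a minimal illustration of the principle, not the paper's exact filter; parameters, boundary handling, and the synthetic phantom are assumptions.

```python
import numpy as np

def adaptive_diffusion(img, n_iter=20, dt=0.2, k=1.0):
    """Anisotropic diffusion whose conductance is scaled by a local noise
    estimate (the 3x3 local median), so regions with higher Poisson-like
    noise are smoothed more aggressively while strong edges are preserved."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # local median as a crude estimate of the local (Poisson) noise level
        pad = np.pad(u, 1, mode='edge')
        stack = np.stack([pad[i:i + u.shape[0], j:j + u.shape[1]]
                          for i in range(3) for j in range(3)])
        kk = k * np.maximum(np.median(stack, axis=0), 1e-6)
        # differences to the four neighbours (periodic boundaries via roll)
        n = np.roll(u, -1, 0) - u
        s = np.roll(u, 1, 0) - u
        e = np.roll(u, -1, 1) - u
        w = np.roll(u, 1, 1) - u
        c = lambda g: np.exp(-(g / kk) ** 2)   # conductance falls on edges
        u = u + dt * (c(n) * n + c(s) * s + c(e) * e + c(w) * w)
    return u

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 4.0   # synthetic "lesion"
noisy = rng.poisson(clean + 1.0).astype(float)
smoothed = adaptive_diffusion(noisy)
```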

  10. Modular and Adaptive Control of Sound Processing

    NASA Astrophysics Data System (ADS)

    van Nort, Douglas

    parameters. In this view, desired gestural dynamics and sonic response are achieved through modular construction of mapping layers that are themselves subject to parametric control. Complementing this view of the design process, the work concludes with an approach in which the creation of gestural control/sound dynamics are considered in the low-level of the underlying sound model. The result is an adaptive system that is specialized to noise-based transformations that are particularly relevant in an electroacoustic music context. Taken together, these different approaches to design and evaluation result in a unified framework for creation of an instrumental system. The key point is that this framework addresses the influence that mapping structure and control dynamics have on the perceived feel of the instrument. Each of the results illustrate this using either top-down or bottom-up approaches that consider musical control context, thereby pointing to the greater potential for refined sonic articulation that can be had by combining them in the design process.

  11. Adaptive optics technology for high-resolution retinal imaging.

    PubMed

    Lombardo, Marco; Serrao, Sebastiano; Devaney, Nicholas; Parravano, Mariacristina; Lombardo, Giuseppe

    2012-12-27

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effects of optical aberrations. The direct visualization of the photoreceptor cells, capillaries and nerve fiber bundles represents the major benefit of adding AO to retinal imaging. Adaptive optics is opening a new frontier for clinical research in ophthalmology, providing new information on the early pathological changes of the retinal microstructures in various retinal diseases. We have reviewed AO technology for retinal imaging, providing information on the core components of an AO retinal camera. The most commonly used wavefront sensing and correcting elements are discussed. Furthermore, we discuss current applications of AO imaging to a population of healthy adults and to the most frequent causes of blindness, including diabetic retinopathy, age-related macular degeneration and glaucoma. We conclude our work with a discussion on future clinical prospects for AO retinal imaging.

  12. Adaptive Optics Technology for High-Resolution Retinal Imaging

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Devaney, Nicholas; Parravano, Mariacristina; Lombardo, Giuseppe

    2013-01-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effects of optical aberrations. The direct visualization of the photoreceptor cells, capillaries and nerve fiber bundles represents the major benefit of adding AO to retinal imaging. Adaptive optics is opening a new frontier for clinical research in ophthalmology, providing new information on the early pathological changes of the retinal microstructures in various retinal diseases. We have reviewed AO technology for retinal imaging, providing information on the core components of an AO retinal camera. The most commonly used wavefront sensing and correcting elements are discussed. Furthermore, we discuss current applications of AO imaging to a population of healthy adults and to the most frequent causes of blindness, including diabetic retinopathy, age-related macular degeneration and glaucoma. We conclude our work with a discussion on future clinical prospects for AO retinal imaging. PMID:23271600

  13. Fast HDR image upscaling using locally adapted linear filters

    NASA Astrophysics Data System (ADS)

    Talebi, Hossein; Su, Guan-Ming; Yin, Peng

    2015-02-01

    A new method for upscaling high dynamic range (HDR) images is introduced in this paper. Overshooting artifacts are a common problem when using linear filters such as bicubic interpolation. This problem is visually more noticeable in HDR images, which contain more transitions from dark to bright. Our proposed method handles these artifacts by computing a simple gradient map which enables the filter to be locally adapted to the image content. This adaptation consists of, first, clustering pixels into regions with similar edge structures and, second, learning the shape and length of our symmetric linear filter for each of these pixel groups. The new filter can be implemented in a separable fashion, which suits hardware implementations well. Our experimental results show that training our filter with HDR images can effectively reduce overshooting artifacts and improve upon the visual quality of existing linear upscaling approaches.
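
    The gradient-adapted filtering idea can be sketched in one dimension: midpoints in smooth regions use a sharp 4-tap filter, while midpoints near a strong gradient fall back to 2-tap linear interpolation, which cannot overshoot. This is a hand-rolled illustration of local filter adaptation, not the paper's learned filters; the threshold and filters are assumptions.

```python
import numpy as np

def adaptive_upscale_1d(x, grad_thresh=0.5):
    """Upscale a 1-D signal by 2. Midpoints in smooth regions use a sharp
    4-tap cubic-style filter; midpoints near strong edges (large local
    gradient) fall back to 2-tap linear interpolation, which stays within
    the range of the neighbouring sample values."""
    xp = np.pad(x, 2, mode='edge')
    out = np.empty(2 * len(x) - 1)
    out[::2] = x
    for i in range(len(x) - 1):
        a, b, c, d = xp[i + 1], xp[i + 2], xp[i + 3], xp[i + 4]
        if max(abs(b - a), abs(c - b), abs(d - c)) > grad_thresh:
            out[2 * i + 1] = 0.5 * (b + c)                     # edge: no overshoot
        else:
            out[2 * i + 1] = (-a + 9 * b + 9 * c - d) / 16.0   # smooth: sharp
    return out

edge = np.array([0., 0., 0., 1., 1., 1.])   # HDR-style dark-to-bright step
up = adaptive_upscale_1d(edge)
```

With a plain 4-tap filter everywhere, the samples beside the step would dip below 0 and rise above 1; the gradient test suppresses exactly those overshoots.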

  14. Adaptive colour transformation of retinal images for stroke prediction.

    PubMed

    Unnikrishnan, Premith; Aliahmad, Behzad; Kawasaki, Ryo; Kumar, Dinesh

    2013-01-01

    Identifying lesions in the retinal vasculature using retinal imaging is most often done on the green channel. However, the effect of colour and single-channel analysis on feature extraction has not yet been studied. In this paper an adaptive colour transformation has been investigated and validated on retinal images associated with 10-year stroke prediction, using principal component analysis (PCA). Histogram analysis indicated that while each colour channel image had a uni-modal distribution, the second component of the PCA had a bimodal distribution and showed significantly improved separation between the retinal vasculature and the background. The experiments showed that using the adaptive colour transformation, the sensitivity and specificity were both higher (AUC 0.73) compared with when the single green channel was used (AUC 0.63) for the same database and image features.
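
    The adaptive colour transformation amounts to projecting each pixel's RGB values onto the image's own principal components and analysing the second component. A minimal numpy sketch, with a synthetic stand-in for a retinal image (the correlation structure and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
# toy stand-in for a retinal image: 1000 pixels with correlated RGB channels
base = rng.normal(size=(1000, 1))
rgb = (np.hstack([0.9 * base, 1.0 * base, 0.4 * base])
       + 0.1 * rng.normal(size=(1000, 3)))

# adaptive colour transformation: project each pixel onto the image's own
# principal components rather than analysing a fixed single channel
centered = rgb - rgb.mean(axis=0)
cov = centered.T @ centered / (len(rgb) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues ascending
order = np.argsort(eigvals)[::-1]
components = centered @ eigvecs[:, order]       # PC1, PC2, PC3 per pixel
second_component = components[:, 1]             # channel used for separation
```

Because the projection is recomputed per image, the transform adapts to each image's colour statistics, unlike a fixed green-channel analysis.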

  15. Locally adaptive bilateral clustering for universal image denoising

    NASA Astrophysics Data System (ADS)

    Toh, K. K. V.; Mat Isa, N. A.

    2012-12-01

    This paper presents a novel and efficient locally adaptive denoising method based on clustering of pixels into regions of similar geometric and radiometric structures. Clustering is performed by adaptively segmenting pixels in the local kernel based on their augmented variational series. Then, noise pixels are restored by selectively considering the radiometric and spatial properties of every pixel in the formed clusters. The proposed method is exceedingly robust in conveying reliable local structural information even in the presence of noise. As a result, the proposed method substantially outperforms other state-of-the-art methods in terms of image restoration and computational cost. We support our claims with ample simulated and real data experiments. The relatively fast runtime from extensive simulations also suggests that the proposed method is suitable for a variety of image-based products — either embedded in image capturing devices or applied as image enhancement software.

  16. Biomedical signal and image processing.

    PubMed

    Cerutti, Sergio; Baselli, Giuseppe; Bianchi, Anna; Caiani, Enrico; Contini, Davide; Cubeddu, Rinaldo; Dercole, Fabio; Rienzo, Luca; Liberati, Diego; Mainardi, Luca; Ravazzani, Paolo; Rinaldi, Sergio; Signorini, Maria; Torricelli, Alessandro

    2011-01-01

    Generally, physiological modeling and biomedical signal processing constitute two important paradigms of biomedical engineering (BME): their fundamental concepts are taught starting from undergraduate studies and are more completely dealt with in the last years of graduate curricula, as well as in Ph.D. courses. Traditionally, these two cultural aspects were separated, with the first one more oriented to physiological issues and how to model them and the second one more dedicated to the development of processing tools or algorithms to enhance useful information from clinical data. A practical consequence was that those who did models did not do signal processing and vice versa. However, in recent years, the need for closer integration between signal processing and modeling of the relevant biological systems emerged very clearly [1], [2]. This is not only true for training purposes (i.e., to properly prepare the new professional members of BME) but also for the development of newly conceived research projects in which the integration between biomedical signal and image processing (BSIP) and modeling plays a crucial role. Just to give simple examples, topics such as brain-computer interfaces, neuroengineering, nonlinear dynamical analysis of the cardiovascular (CV) system, integration of sensory-motor characteristics aimed at the building of advanced prostheses and rehabilitation tools, and wearable devices for vital sign monitoring, among others, do require an intelligent fusion of modeling and signal processing competences that are certainly peculiar to our discipline of BME.

  17. Image processing technique for arbitrary image positioning in holographic stereogram

    NASA Astrophysics Data System (ADS)

    Kang, Der-Kuan; Yamaguchi, Masahiro; Honda, Toshio; Ohyama, Nagaaki

    1990-12-01

    In a one-step holographic stereogram, if the series of original images are used just as they are taken from perspective views, three-dimensional images are usually reconstructed in back of the hologram plane. In order to enhance the sense of perspective of the reconstructed images and minimize blur of the interesting portions, we introduce an image processing technique for making a one-step flat format holographic stereogram in which three-dimensional images can be observed at an arbitrary specified position. Experimental results show the effect of the image processing. Further, we show results of a medical application using this image processing.

  18. A novel adaptive multi-focus image fusion algorithm based on PCNN and sharpness

    NASA Astrophysics Data System (ADS)

    Miao, Qiguang; Wang, Baoshu

    2005-05-01

    A novel adaptive multi-focus image fusion algorithm is given in this paper, based on an improved pulse coupled neural network (PCNN) model, the fundamental characteristics of multi-focus images, and the properties of visual imaging. In the traditional algorithm, the linking strength βij of each neuron in the PCNN model is the same and its value is chosen through experimentation; this algorithm instead uses the clarity of each pixel of the image as its value, so that the linking strength of each pixel is chosen adaptively. A fused image is produced by applying a compare-select operator to the firing maps of the images taking part in the fusion, deciding which image contains the clearer parts, and choosing those parts during the fusion process. With this algorithm, other parameters, for example Δ, the threshold adjusting constant, have only a slight effect on the fused image. It therefore overcomes the difficulty of adjusting parameters in the PCNN. Experiments show that the proposed algorithm preserves edge and texture information better than the wavelet transform method and the Laplacian pyramid method in multi-focus image fusion.
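
    The clarity-driven compare-select idea can be sketched without the full PCNN machinery: compute a local sharpness measure per pixel and pick, for each pixel, the source image whose neighbourhood is sharper. This is a simplification of the paper's method (a Laplacian-based clarity map in place of the PCNN firing maps), with illustrative names and parameters.

```python
import numpy as np

def clarity(img):
    """Per-pixel clarity, usable as an adaptive linking strength: magnitude
    of the discrete Laplacian, averaged over a 3x3 window."""
    pad = np.pad(img.astype(float), 1, mode='edge')
    lap = np.abs(4 * pad[1:-1, 1:-1] - pad[:-2, 1:-1] - pad[2:, 1:-1]
                 - pad[1:-1, :-2] - pad[1:-1, 2:])
    lp = np.pad(lap, 1, mode='edge')
    return sum(lp[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def fuse_multifocus(a, b):
    """Compare-select: take each pixel from the source whose local clarity
    is larger."""
    return np.where(clarity(a) >= clarity(b), a, b)

# left half sharp in `a`, right half sharp in `b` (blur = local mean)
rng = np.random.default_rng(2)
sharp = rng.random((16, 16))
blur = (np.roll(sharp, 1, 1) + sharp + np.roll(sharp, -1, 1)) / 3.0
a = sharp.copy(); a[:, 8:] = blur[:, 8:]
b = sharp.copy(); b[:, :8] = blur[:, :8]
fused = fuse_multifocus(a, b)
```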

  19. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  20. Framelet lifting in image processing

    NASA Astrophysics Data System (ADS)

    Lu, Da-Yong; Feng, Tie-Yong

    2010-08-01

    To obtain appropriate framelets in image processing, we often need to lift existing framelets. For this purpose the paper presents some methods which allow us to modify existing framelets or filters to construct new ones. The relationships between the matrices used in the lifting schemes and their eigenvalues show that the frame bounds of the lifted wavelet frames are optimal. Moreover, the examples given in Section 4 indicate that the lifted framelets can play the roles of operators such as the weighted average operator, the Sobel operator and the Laplacian operator, which are often used in edge detection and motion estimation applications.

  1. Processing of medical images using Maple

    NASA Astrophysics Data System (ADS)

    Toro Betancur, V.

    2013-05-01

    Maple's Image Tools package was used to process medical images. The results showed clearer images and provided records of their intensities and entropy. The medical images of a rhinocerebral mucormycosis patient, who had not been diagnosed early, were processed and analyzed using Maple's tools, which showed, in a clearer way, the affected parts of the perinasal cavities.

  2. Adaptive lifting scheme of wavelet transforms for image compression

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Wang, Guoyin; Nie, Neng

    2001-03-01

    To meet the demand for adaptive wavelet transforms via lifting, a three-stage lifting scheme (predict-update-adapt) is proposed in this paper, extending the common two-stage scheme (predict-update). The second stage is the updating stage; the third is the adaptive predicting stage. Our scheme is an update-then-predict scheme that can detect jumps in the image from the updated data, and it needs no additional information. The first stage is the key of our scheme: it is an interim updating step whose coefficient can be adjusted to adapt to the data and achieve a better result. In the adaptive predicting stage, we use symmetric prediction filters in smooth areas of the image and asymmetric prediction filters at the edges of jumps to reduce prediction errors. We design these filters directly with a spatial method. The inherent relationships between the coefficients of the first stage and those of the other stages are found and presented as equations. Thus, the design result is a class of filters whose coefficients are no longer invariant. Simulation results of image coding with our scheme are good.
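
    The conventional two-stage (predict-update) lifting step that the paper extends looks like this minimal sketch of the (2,2) lifting wavelet (CDF 5/3). Perfect reconstruction follows from simply reversing the steps; names are illustrative and the input length is assumed even, with periodic boundary handling.

```python
import numpy as np

def lift_forward(x):
    """One level of the classic (2,2) lifting wavelet: predict odd samples
    from their even neighbours, then update evens with the detail signal."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    # predict: each odd sample from the average of its two even neighbours
    d = odd - 0.5 * (even + np.roll(even, -1))
    # update: evens corrected so the coarse signal keeps the running average
    s = even + 0.25 * (d + np.roll(d, 1))
    return s, d

def lift_inverse(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    even = s - 0.25 * (d + np.roll(d, 1))
    odd = d + 0.5 * (even + np.roll(even, -1))
    x = np.empty(2 * len(s))
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([3., 1., 4., 1., 5., 9., 2., 6.])
s, d = lift_forward(x)
rec = lift_inverse(s, d)       # perfect reconstruction by construction
```

An adaptive scheme, as in the paper, would switch the prediction filter per sample based on the updated data; invertibility is preserved as long as the decoder can make the same switching decision.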

  3. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
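
    The Poisson maximum-likelihood principle behind such restoration is commonly realized as the Richardson-Lucy multiplicative EM iteration. The sketch below shows that baseline iteration only, not the paper's multi-frame blind method; the circular-convolution blur model, PSF, and sizes are illustrative.

```python
import numpy as np

def richardson_lucy(obs, psf, n_iter=50):
    """Richardson-Lucy: the multiplicative EM iteration that maximizes the
    Poisson log-likelihood. The PSF is a full-size array with its center
    wrapped to index (0, 0), so FFT convolution introduces no shift."""
    Hf = np.fft.rfft2(psf)
    conv = lambda a, f: np.fft.irfft2(np.fft.rfft2(a) * f, s=a.shape)
    est = np.full(obs.shape, obs.mean())
    for _ in range(n_iter):
        ratio = obs / np.maximum(conv(est, Hf), 1e-12)
        est = est * conv(ratio, np.conj(Hf))   # adjoint blur = correlation
    return est

# simple 3x3 box blur kernel embedded with its center at (0, 0)
psf = np.zeros((32, 32))
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        psf[dy % 32, dx % 32] = 1.0 / 9.0
truth = np.zeros((32, 32)); truth[10:14, 10:14] = 10.0
blurred = np.fft.irfft2(np.fft.rfft2(truth) * np.fft.rfft2(psf), s=truth.shape)
restored = richardson_lucy(blurred, psf)
```

The paper's algorithm additionally estimates the PSF, regularizes the likelihood, and pools frames selected by image variance; the multiplicative update above is the core each of those builds on.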

  4. Concurrent Image Processing Executive (CIPE)

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1988-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation, are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented.

  5. Adaptive wavelet transform algorithm for lossy image compression

    NASA Astrophysics Data System (ADS)

    Pogrebnyak, Oleksiy B.; Ramirez, Pablo M.; Acevedo Mosqueda, Marco Antonio

    2004-11-01

    A new algorithm of locally adaptive wavelet transform based on the modified lifting scheme is presented. It adapts the wavelet high-pass filter at the prediction stage to the local image data activity. The proposed algorithm uses the generalized framework for the lifting scheme, which makes it easy to obtain different wavelet filter coefficients in the case of the (~N, N) lifting. By changing the wavelet filter order and different control parameters, one can obtain the desired filter frequency response. It is proposed to switch hard between different wavelet lifting filter outputs according to the local data activity estimate. The proposed adaptive transform possesses good energy compaction. The designed algorithm was tested on different images. The obtained simulation results show that the visual and quantitative quality of the restored images is high. Distortions are smaller in the vicinity of details with high spatial activity than with the non-adaptive transform, which introduces ringing artifacts. The designed algorithm can be used for lossy image compression and in noise suppression applications.

  6. eXtreme Adaptive Optics Planet Imager: Overview and status

    SciTech Connect

    Macintosh, B A; Bauman, B; Evans, J W; Graham, J; Lockwood, C; Poyneer, L; Dillon, D; Gavel, D; Green, J; Lloyd, J; Makidon, R; Olivier, S; Palmer, D; Perrin, M; Severson, S; Sheinis, A; Sivaramakrishnan, A; Sommargren, G; Soumer, R; Troy, M; Wallace, K; Wishnow, E

    2004-08-18

    As adaptive optics (AO) matures, it becomes possible to envision AO systems oriented towards specific important scientific goals rather than general-purpose systems. One such goal for the next decade is the direct imaging detection of extrasolar planets. An 'extreme' adaptive optics (ExAO) system optimized for extrasolar planet detection will have very high actuator counts and rapid update rates - designed for observations of bright stars - and will require exquisite internal calibration at the nanometer level. In addition to extrasolar planet detection, such a system will be capable of characterizing dust disks around young or mature stars, outflows from evolved stars, and high Strehl ratio imaging even at visible wavelengths. The NSF Center for Adaptive Optics has carried out a detailed conceptual design study for such an instrument, dubbed the eXtreme Adaptive Optics Planet Imager or XAOPI. XAOPI is a 4096-actuator AO system, notionally for the Keck telescope, capable of achieving contrast ratios >10^7 at angular separations of 0.2-1'. ExAO system performance analysis is quite different than conventional AO systems - the spatial and temporal frequency content of wavefront error sources is as critical as their magnitude. We present here an overview of the XAOPI project, and an error budget highlighting the key areas determining achievable contrast. The most challenging requirement is for residual static errors to be less than 2 nm over the controlled range of spatial frequencies. If this can be achieved, direct imaging of extrasolar planets will be feasible within this decade.

  7. Adaptive mesh refinement for stochastic reaction-diffusion processes

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2011-01-01

    We present an algorithm for adaptive mesh refinement applied to mesoscopic stochastic simulations of spatially evolving reaction-diffusion processes. The transition rates for the diffusion process are derived on adaptive, locally refined structured meshes. Convergence of the diffusion process is presented and the fluctuations of the stochastic process are verified. Furthermore, a refinement criterion is proposed for the evolution of the adaptive mesh. The method is validated in simulations of reaction-diffusion processes as described by the Fisher-Kolmogorov and Gray-Scott equations.

  8. The simulation of adaptive optical image even and pulse noise and research of image quality evaluation

    NASA Astrophysics Data System (ADS)

    Wen, Changli; Xu, Yuannan; Xu, Rong; Liu, Changhai; Men, Tao; Niu, Wei

    2013-09-01

    Optical imaging is increasingly important in adaptive optics, and ground-based adaptive optical telescopes play a growing role in detection systems. Because these telescopes produce far more images than can be screened manually, automatic methods for selecting good-quality images are needed, so image quality evaluation methods and their characteristics have attracted increasing attention. The applicability of a given evaluation method depends on the image degradation model, and most research to date has focused on improving existing evaluation methods or devising new ones. Here we instead study the degradation models themselves: the causes of image degradation and the relations between different kinds of degraded images and different quality evaluation methods. In this paper, we build models of uniform ("even") noise and pulse (impulse) noise from their definitions and use them to generate degraded images. We then examine six common image quality evaluation methods: the square error method, the sum of multi-power of grey scale method, the entropy method, the Fisher function method, the Sobel method, and the sum-of-gradients method, and we implement them in software so that arbitrary input images can be evaluated easily. We evaluate the degraded images with each method, analyse the results, and draw several conclusions: the characteristics of each method when evaluating images degraded by uniform noise, the characteristics of each method when evaluating images degraded by pulse noise, and the method best suited to images affected by both kinds of noise, together with its characteristics. These results support automatic image selection and will help manage the large volumes of images acquired with ground-based adaptive optical telescopes.
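
    Several of the evaluation methods named in this abstract are standard enough to sketch. The minimal example below (function names and the ramp test image are invented for illustration) computes mean squared error, histogram entropy, and a sum-of-gradients sharpness measure on an image degraded with uniform ("even") noise.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram (more spread = higher)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255), density=True)
    p = hist[hist > 0]
    p = p / p.sum()                 # renormalize to probabilities
    return -np.sum(p * np.log2(p))

def sum_of_gradients(img):
    """Sum of absolute horizontal and vertical differences (sharpness/noise)."""
    f = img.astype(float)
    return np.abs(np.diff(f, axis=1)).sum() + np.abs(np.diff(f, axis=0)).sum()

def mse(img, ref):
    """Square error method: mean squared error against a reference image."""
    return np.mean((img.astype(float) - ref.astype(float)) ** 2)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))          # smooth ramp
noisy = np.clip(clean + rng.uniform(-40, 40, clean.shape), 0, 255)
# Uniform noise raises the gradient measure, the entropy, and the MSE.
```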

  9. A systematic process for adaptive concept exploration

    NASA Astrophysics Data System (ADS)

    Nixon, Janel Nicole

    several common challenges to the creation of quantitative modeling and simulation environments. Namely, a greater number of alternative solutions implies a greater number of design variables as well as larger ranges on those variables. This translates to a high-dimensional combinatorial problem. As the size and dimensionality of the solution space grow, the number of physically impossible solutions within that space greatly increases. Thus, the ratio of feasible design space to infeasible space decreases, making it much harder not only to obtain a good quantitative sample of the space, but also to make sense of that data. This is especially the case in the early stages of design, where it is not practical to dedicate a great deal of resources to performing thorough, high-fidelity analyses on all the potential solutions. To make quantitative analyses feasible in these early stages of design, a method is needed that allows a relatively sparse set of information to be collected quickly and efficiently, yet that information must be meaningful enough on which to base a decision. The method developed to address this need is the Systematic Process for Adaptive Concept Exploration (SPACE). In the SPACE method, design space exploration occurs in a sequential fashion; as data are acquired, the sampling scheme adapts to the specific problem at hand. Previously gathered data are used to make inferences about the nature of the problem so that future samples can be taken from the more interesting portions of the design space. Furthermore, the SPACE method identifies those analyses that have significant impacts on the relationships being modeled, so that effort can be focused on acquiring only the most pertinent information. The SPACE method uses a four-part sampling scheme to efficiently uncover the parametric relationships between the design variables and responses.
Step 1 aims to identify the location of infeasible space within the region of interest using an initial
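
    The sequential, adapt-as-you-sample idea described above can be illustrated with a toy 1-D sketch: take a few initial samples, then repeatedly subdivide the region where a linear surrogate fits worst. This is an invented minimal example (the curvature criterion, the stand-in model, and all names are assumptions), not the four-part SPACE scheme itself.

```python
import numpy as np

def expensive_model(x):
    # Stand-in for a costly design analysis (hypothetical response surface).
    return np.sin(3.0 * x) + 0.5 * x

def adaptive_sample(f, lo, hi, n_init=5, n_total=15):
    """Sequential sampling: refine around the sample point where a
    piecewise-linear surrogate deviates most from the data (curvature proxy)."""
    x = list(np.linspace(lo, hi, n_init))
    y = [f(v) for v in x]
    while len(x) < n_total:
        idx = np.argsort(x)
        xs, ys = np.asarray(x)[idx], np.asarray(y)[idx]
        # Deviation of each interior point from the line joining its neighbours.
        dev = np.abs(ys[1:-1] - 0.5 * (ys[:-2] + ys[2:]))
        k = int(np.argmax(dev)) + 1          # interior index with worst fit
        for a, b in ((k - 1, k), (k, k + 1)):
            xm = 0.5 * (xs[a] + xs[b])       # bisect both adjacent intervals
            x.append(xm)
            y.append(f(xm))
    return np.array(sorted(x))

pts = adaptive_sample(expensive_model, 0.0, 2.0)
```

    The samples end up clustered where the response curves most, so a sparse budget still captures the interesting structure.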

  10. Review of Medical Image Classification using the Adaptive Neuro-Fuzzy Inference System

    PubMed Central

    Hosseini, Monireh Sheikh; Zekri, Maryam

    2012-01-01

    Image classification is an issue that utilizes image processing, pattern recognition and classification methods. Automatic medical image classification is a progressive area in image classification, and it is expected to be more developed in the future. Because of this fact, automatic diagnosis can assist pathologists by providing second opinions and reducing their workload. This paper reviews the application of the adaptive neuro-fuzzy inference system (ANFIS) as a classifier in medical image classification during the past 16 years. ANFIS is a fuzzy inference system (FIS) implemented in the framework of an adaptive fuzzy neural network. It combines the explicit knowledge representation of an FIS with the learning power of artificial neural networks. The objective of ANFIS is to integrate the best features of fuzzy systems and neural networks. A brief comparison with other classifiers is provided, and the main advantages and drawbacks of this classifier are investigated. PMID:23493054

  11. Review of Medical Image Classification using the Adaptive Neuro-Fuzzy Inference System.

    PubMed

    Hosseini, Monireh Sheikh; Zekri, Maryam

    2012-01-01

    Image classification is an issue that utilizes image processing, pattern recognition and classification methods. Automatic medical image classification is a progressive area in image classification, and it is expected to be more developed in the future. Because of this fact, automatic diagnosis can assist pathologists by providing second opinions and reducing their workload. This paper reviews the application of the adaptive neuro-fuzzy inference system (ANFIS) as a classifier in medical image classification during the past 16 years. ANFIS is a fuzzy inference system (FIS) implemented in the framework of an adaptive fuzzy neural network. It combines the explicit knowledge representation of an FIS with the learning power of artificial neural networks. The objective of ANFIS is to integrate the best features of fuzzy systems and neural networks. A brief comparison with other classifiers is provided, and the main advantages and drawbacks of this classifier are investigated.

  12. Adaptive processing of fractions--evidence from eye-tracking.

    PubMed

    Huber, S; Moeller, K; Nuerk, H-C

    2014-05-01

    Recent evidence indicates that fraction pair type determines whether a particular fraction is processed holistically, componentially or in a hybrid manner. Going beyond previous studies, we investigated how participants adapt their processing of fractions not only to fraction type, but also to experimental context. To examine adaptation in fraction processing, we recorded participants' eye-fixation behaviour in a fraction magnitude comparison task. Participants' eye-fixation behaviour indicated componential processing of fraction pairs with common components for which the decision-relevant components are easy to identify. Importantly, we observed that fraction processing was adapted to experimental context: evidence for componential processing was stronger when experimental context allowed valid expectations about which components are decision-relevant. Taken together, we conclude that fraction processing is adaptive beyond the comparison of different fraction types, because participants continuously adjust to the experimental context in which fractions are processed.

  13. Digital adaptive optics line-scanning confocal imaging system.

    PubMed

    Liu, Changgeng; Kim, Myung K

    2015-01-01

    A digital adaptive optics line-scanning confocal imaging (DAOLCI) system is proposed by applying digital holographic adaptive optics to a digital form of line-scanning confocal imaging system. In DAOLCI, each line scan is recorded by a digital hologram, which allows access to the complex optical field from one slice of the sample through digital holography. This complex optical field contains both the information of one slice of the sample and the optical aberration of the system, thus allowing us to compensate for the effect of the optical aberration, which can be sensed by a complex guide star hologram. After numerical aberration compensation, the corrected optical fields of a sequence of line scans are stitched into the final corrected confocal image. In DAOLCI, a numerical slit is applied to realize the confocality at the sensor end. The width of this slit can be adjusted to control the image contrast and speckle noise for scattering samples. DAOLCI dispenses with the hardware pieces, such as Shack–Hartmann wavefront sensor and deformable mirror, and the closed-loop feedbacks adopted in the conventional adaptive optics confocal imaging system, thus reducing the optomechanical complexity and cost. Numerical simulations and proof-of-principle experiments are presented that demonstrate the feasibility of this idea.

  14. Adaptive optics with pupil tracking for high resolution retinal imaging.

    PubMed

    Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

    2012-02-01

    Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics.

  15. Adaptive optics with pupil tracking for high resolution retinal imaging

    PubMed Central

    Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

    2012-01-01

    Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics. PMID:22312577

  16. Adaptive directional lifting-based wavelet transform for image coding.

    PubMed

    Ding, Wenpeng; Wu, Feng; Wu, Xiaolin; Li, Shipeng; Li, Houqiang

    2007-02-01

    We present a novel 2-D wavelet transform scheme of adaptive directional lifting (ADL) in image coding. Instead of alternately applying horizontal and vertical lifting, as in present practice, ADL performs lifting-based prediction in local windows in the direction of high pixel correlation. Hence, it adapts far better to the image orientation features in local windows. The ADL transform is achieved by existing 1-D wavelets and is seamlessly integrated into the global wavelet transform. The predicting and updating signals of ADL can be derived even at the fractional pixel precision level to achieve high directional resolution, while still maintaining perfect reconstruction. To enhance the ADL performance, a rate-distortion optimized directional segmentation scheme is also proposed to form and code a hierarchical image partition adapting to local features. Experimental results show that the proposed ADL-based image coding technique outperforms JPEG 2000 in both PSNR and visual quality, with the improvement up to 2.0 dB on images with rich orientation features.
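
    The lifting mechanics underlying ADL can be shown with the simplest case, a Haar-style predict/update pair in 1-D. ADL's contribution, choosing the prediction direction per local window at fractional-pixel precision, is omitted; this sketch (all names invented) only demonstrates the predict/update structure and the perfect-reconstruction property the abstract refers to.

```python
import numpy as np

def lift_forward(x):
    """One level of a Haar-style lifting transform (predict then update)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even          # predict: detail = prediction residual
    s = even + 0.5 * d      # update: approximation keeps the running mean
    return s, d

def lift_inverse(s, d):
    """Undo the update, then the predict step -- exact by construction."""
    even = s - 0.5 * d
    odd = d + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.arange(8, dtype=float)       # even-length signal assumed
s, d = lift_forward(x)
assert np.allclose(lift_inverse(s, d), x)   # perfect reconstruction
```

    Any prediction rule (including a direction-adaptive one) keeps perfect reconstruction, because the inverse simply replays the same steps in reverse order with opposite signs.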

  17. Enhancing image quality in cleared tissue with adaptive optics

    NASA Astrophysics Data System (ADS)

    Reinig, Marc R.; Novak, Samuel W.; Tao, Xiaodong; Bentolila, Laurent A.; Roberts, Dustin G.; MacKenzie-Graham, Allan; Godshalk, Sirie E.; Raven, Mary A.; Knowles, David W.; Kubby, Joel

    2016-12-01

    Our ability to see fine detail at depth in tissues is limited by scattering and other refractive characteristics of the tissue. For fixed tissue, we can limit scattering with a variety of clearing protocols. This allows us to see deeper but not necessarily clearer. Refractive aberrations caused by the bulk index of refraction of the tissue and its variations continue to limit our ability to see fine detail. Refractive aberrations are made up of spherical and other Zernike modes, which can be significant at depth. Spherical aberration that is common across the imaging field can be corrected using an objective correcting collar, although this can require manual intervention. Other aberrations may vary across the imaging field and can only be effectively corrected using adaptive optics. Adaptive optics can also correct other aberrations simultaneously with the spherical aberration, eliminating manual intervention and speeding imaging. We use an adaptive optics two-photon microscope to examine the impact of the spherical and higher order aberrations on imaging and contrast the effect of compensating only for spherical aberration against compensating for the first 22 Zernike aberrations in two tissue types. Increase in image intensity by 1.6× and reduction of root mean square error by 3× are demonstrated.

  18. Digital adaptive optics line-scanning confocal imaging system

    PubMed Central

    Liu, Changgeng; Kim, Myung K.

    2015-01-01

    A digital adaptive optics line-scanning confocal imaging (DAOLCI) system is proposed by applying digital holographic adaptive optics to a digital form of line-scanning confocal imaging system. In DAOLCI, each line scan is recorded by a digital hologram, which allows access to the complex optical field from one slice of the sample through digital holography. This complex optical field contains both the information of one slice of the sample and the optical aberration of the system, thus allowing us to compensate for the effect of the optical aberration, which can be sensed by a complex guide star hologram. After numerical aberration compensation, the corrected optical fields of a sequence of line scans are stitched into the final corrected confocal image. In DAOLCI, a numerical slit is applied to realize the confocality at the sensor end. The width of this slit can be adjusted to control the image contrast and speckle noise for scattering samples. DAOLCI dispenses with the hardware pieces, such as Shack–Hartmann wavefront sensor and deformable mirror, and the closed-loop feedbacks adopted in the conventional adaptive optics confocal imaging system, thus reducing the optomechanical complexity and cost. Numerical simulations and proof-of-principle experiments are presented that demonstrate the feasibility of this idea. PMID:26140334

  19. Streak image denoising and segmentation using adaptive Gaussian guided filter.

    PubMed

    Jiang, Zhuocheng; Guo, Baoping

    2014-09-10

    In streak tube imaging lidar (STIL), streak images are obtained using a CCD camera. However, noise in the captured streak images can greatly affect the quality of reconstructed 3D contrast and range images. The greatest challenge for streak image denoising is reducing the noise while preserving details. In this paper, we propose an adaptive Gaussian guided filter (AGGF) for noise removal and detail enhancement of streak images. The proposed algorithm is based on a guided filter (GF) and part of an adaptive bilateral filter (ABF). In the AGGF, the details are enhanced by optimizing the offset parameter. AGGF-denoised streak images are significantly sharper than those denoised by the GF. Moreover, the AGGF is a fast linear time algorithm achieved by recursively implementing a Gaussian filter kernel. Experimentally, AGGF demonstrates its capacity to preserve edges and thin structures and outperforms the existing bilateral filter and domain transform filter in terms of both visual quality and peak signal-to-noise ratio performance.
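
    A baseline grey-scale guided filter (GF), the starting point the AGGF builds on, can be sketched in a few lines. The AGGF's recursive Gaussian kernel and optimized offset parameter are omitted here; the step-edge test image and all names are invented for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=4, eps=1e-2):
    """Basic guided filter: q = mean(a)*I + mean(b), with a, b fit locally
    so that q approximates p while following edges of the guide I."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_I = uniform_filter(I * I, size)
    corr_Ip = uniform_filter(I * p, size)
    var_I = corr_I - mean_I ** 2          # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p    # local covariance guide/input
    a = cov_Ip / (var_I + eps)            # edge-aware local gain
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0      # vertical step edge
noisy = clean + rng.normal(0, 0.1, clean.shape)
out = guided_filter(noisy, noisy)    # self-guided: denoise while keeping the edge
```

    In flat regions a ≈ 0, so the filter averages (removing noise); near the edge a ≈ 1, so the edge passes through largely untouched.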

  20. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been investigated that at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate for the sampling-based scheme to outperform the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on the cubic-spline interpolation (CSI) is proposed. This proposed algorithm adaptively selects the image coding method from CSI-based modified JPEG and standard JPEG under a given target bit rate utilizing the so-called ρ-domain analysis. The experimental results indicate that compared with the standard JPEG, the proposed algorithm can show better performance at low bit rates and maintain the same performance at high bit rates.
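
    The downsample-before-coding idea can be illustrated with cubic-spline resampling alone, leaving out JPEG and the ρ-domain rate selection that the paper actually relies on. The sketch below (test image and names invented) measures the PSNR cost of a 2x cubic-spline down/up round trip, which is the distortion floor such a scheme adds on top of the codec.

```python
import numpy as np
from scipy.ndimage import zoom

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    err = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / err)

# Smooth test image: cubic-spline resampling loses little information here.
yy, xx = np.mgrid[0:64, 0:64]
img = 127.5 * (1.0 + np.sin(xx / 10.0) * np.cos(yy / 12.0))

down = zoom(img, 0.5, order=3, mode='mirror')   # cubic-spline downsampling
up = zoom(down, 2.0, order=3, mode='mirror')    # cubic-spline upsampling
```

    On detailed images the round-trip PSNR drops sharply, which is why the adaptive scheme falls back to standard coding above an image-dependent bit rate.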

  1. Objective assessment of image quality. IV. Application to adaptive optics

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher

    2008-01-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed. PMID:17106464
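
    The Hotelling observer mentioned in the abstract reduces, for a detection task, to a linear template w = K⁻¹Δs, with detectability SNR² = Δsᵀ K⁻¹ Δs. A minimal sketch on synthetic Gaussian image data (sample sizes, dimensions, and the signal are all invented):

```python
import numpy as np

def hotelling_snr(g0, g1):
    """Hotelling observer detectability from two classes of sample images.
    Rows are flattened images; returns the linear-template SNR."""
    s = g1.mean(axis=0) - g0.mean(axis=0)              # mean signal difference
    K = 0.5 * (np.cov(g0, rowvar=False) + np.cov(g1, rowvar=False))
    w = np.linalg.solve(K, s)                          # Hotelling template
    return float(np.sqrt(s @ w))

rng = np.random.default_rng(2)
n, d = 500, 16
signal = np.full(d, 0.5)
g0 = rng.normal(0.0, 1.0, (n, d))            # background only
g1 = rng.normal(0.0, 1.0, (n, d)) + signal   # background + known signal
snr = hotelling_snr(g0, g1)                  # true value here is sqrt(d*0.25) = 2
```

    In the paper's setting K is further decomposed into measurement-noise, random-PSF, and random-scene terms; here it is simply estimated from samples.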

  2. An adaptive multi-feature segmentation model for infrared image

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa

    2016-04-01

    Active contour models (ACM) have been extensively applied to image segmentation, conventional region-based active contour models only utilize global or local single feature information to minimize the energy functional to drive the contour evolution. Considering the limitations of original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistic features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, we draw upon the adaptive weight coefficient to modify the level set formulation, which is formed by integrating MFSPF with local statistic features and signed pressure function with global information. Experimental results demonstrate that the proposed method can make up for the inadequacy of the original method and get desirable results in segmenting infrared images.

  3. Objective assessment of image quality. IV. Application to adaptive optics.

    PubMed

    Barrett, Harrison H; Myers, Kyle J; Devaney, Nicholas; Dainty, Christopher

    2006-12-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed.

  4. Objective assessment of image quality. IV. Application to adaptive optics

    NASA Astrophysics Data System (ADS)

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher

    2006-12-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed.

  5. Normalized iterative denoising ghost imaging based on the adaptive threshold

    NASA Astrophysics Data System (ADS)

    Li, Gaoliang; Yang, Zhaohua; Zhao, Yan; Yan, Ruitao; Liu, Xia; Liu, Baolei

    2017-02-01

    An approach for improving ghost imaging (GI) quality is proposed. In this paper, an iteration model based on normalized GI is built through theoretical analysis. An adaptive threshold value is selected in the iteration model. The initial value of the iteration model is estimated as a step to remove the correlated noise. The simulation and experimental results reveal that the proposed strategy reconstructs a better image than traditional and normalized GI, without adding complexity. The normalized iterative denoising GI with adaptive threshold (NIDGI-AT) scheme does not require prior information regarding the object, and can also choose the threshold adaptively. More importantly, the signal-to-noise ratio (SNR) of the reconstructed image is greatly improved. Therefore, this methodology represents another step towards practical real-world applications.
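
    Conventional correlation GI, the baseline this paper improves on, is straightforward to simulate: correlate a single-pixel "bucket" signal with the known illumination patterns. The sketch below is an invented toy setup (object, pattern count, and the per-pixel normalization are all illustrative choices, not the paper's iterative scheme).

```python
import numpy as np

rng = np.random.default_rng(3)
obj = np.zeros((16, 16)); obj[4:12, 4:12] = 1.0      # transmissive square

n = 20000
patterns = rng.random((n, 16, 16))                   # random illumination
bucket = (patterns * obj).sum(axis=(1, 2))           # single-pixel detector

# Conventional correlation GI: G = <B*I> - <B><I>
mean_I = patterns.mean(axis=0)
G = np.tensordot(bucket, patterns, axes=1) / n - bucket.mean() * mean_I

# One common normalization divides by the mean reference intensity per pixel,
# suppressing correlated illumination noise (a simpler relative of the
# normalized GI the paper starts from).
G_norm = G / mean_I
```

    The iterative step in the paper would then threshold and re-estimate this reconstruction; even without it, G already correlates strongly with the object.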

  6. Neuronal Adaptive Mechanisms Underlying Intelligent Information Processing

    DTIC Science & Technology

    1982-05-01

    Computer Program: The program consists of three functional units: stimulus presentation and data collection, histogram generation and display, and behavioral ... sequence for ten-second trials of adaptation, conditioning, extinction, or delayed HS paradigms. Timing of stimuli can be generated ... Histograms are generated from the data and displayed four each on Mime 100 and VT105 video terminals. The histograms are averages of three trials and are

  7. Imaging of retinal vasculature using adaptive optics SLO/OCT

    PubMed Central

    Felberer, Franz; Rechenmacher, Matthias; Haindl, Richard; Baumann, Bernhard; Hitzenberger, Christoph K.; Pircher, Michael

    2015-01-01

    We use our previously developed adaptive optics (AO) scanning laser ophthalmoscope (SLO)/ optical coherence tomography (OCT) instrument to investigate its capability for imaging retinal vasculature. The system records SLO and OCT images simultaneously with a pixel to pixel correspondence which allows a direct comparison between those imaging modalities. Different fields of view ranging from 0.8°x0.8° up to 4°x4° are supported by the instrument. In addition a dynamic focus scheme was developed for the AO-SLO/OCT system in order to maintain the high transverse resolution throughout imaging depth. The active axial eye tracking that is implemented in the OCT channel allows time-resolved measurements of the retinal vasculature in the en-face imaging plane. Vessel walls and structures that we believe correspond to individual erythrocytes could be visualized with the system. PMID:25909024

  8. Imaging of retinal vasculature using adaptive optics SLO/OCT.

    PubMed

    Felberer, Franz; Rechenmacher, Matthias; Haindl, Richard; Baumann, Bernhard; Hitzenberger, Christoph K; Pircher, Michael

    2015-04-01

    We use our previously developed adaptive optics (AO) scanning laser ophthalmoscope (SLO)/ optical coherence tomography (OCT) instrument to investigate its capability for imaging retinal vasculature. The system records SLO and OCT images simultaneously with a pixel to pixel correspondence which allows a direct comparison between those imaging modalities. Different fields of view ranging from 0.8°x0.8° up to 4°x4° are supported by the instrument. In addition a dynamic focus scheme was developed for the AO-SLO/OCT system in order to maintain the high transverse resolution throughout imaging depth. The active axial eye tracking that is implemented in the OCT channel allows time-resolved measurements of the retinal vasculature in the en-face imaging plane. Vessel walls and structures that we believe correspond to individual erythrocytes could be visualized with the system.

  9. Technical Note: DIRART- A software suite for deformable image registration and adaptive radiotherapy research

    SciTech Connect

    Yang Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu Yu; Murty Goddu, S.; Mutic, Sasa; Deasy, Joseph O.; Low, Daniel A.

    2011-01-15

    Purpose: Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). Methods: DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms to provide consistent displacement vector field in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. Results: DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a good amount of options for DIR results visualization, evaluation, and validation. Conclusions: By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research.

  10. Bayer patterned high dynamic range image reconstruction using adaptive weighting function

    NASA Astrophysics Data System (ADS)

    Kang, Hee; Lee, Suk Ho; Song, Ki Sun; Kang, Moon Gi

    2014-12-01

    It is not easy to acquire a desired high dynamic range (HDR) image directly from a camera due to the limited dynamic range of most image sensors. Therefore, generally, a post-process called HDR image reconstruction is used, which reconstructs an HDR image from a set of differently exposed images to overcome the limited dynamic range. However, conventional HDR image reconstruction methods suffer from noise factors and ghost artifacts. This is due to the fact that the input images taken with a short exposure time contain much noise in the dark regions, which contributes to increased noise in the corresponding dark regions of the reconstructed HDR image. Furthermore, since input images are acquired at different times, the images contain different motion information, which results in ghost artifacts. In this paper, we propose an HDR image reconstruction method which reduces the impact of the noise factors and prevents ghost artifacts. To reduce the influence of the noise factors, the weighting function, which determines the contribution of a certain input image to the reconstructed HDR image, is designed to adapt to the exposure time and local motions. Furthermore, the weighting function is designed to exclude ghosting regions by considering the differences of the luminance and the chrominance values between several input images. Unlike conventional methods, which generally work on a color image processed by the image processing module (IPM), the proposed method works directly on the Bayer raw image. This allows for a linear camera response function and also improves the efficiency in hardware implementation. Experimental results show that the proposed method can reconstruct high-quality Bayer patterned HDR images while being robust against ghost artifacts and noise factors.
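
    The core weighted-average reconstruction can be sketched with a classic hat-shaped weight on an idealized linear (non-Bayer, noise-free) sensor model; the paper's motion- and chrominance-aware weighting is omitted. All names, the radiance map, and the two exposure times are invented for this example.

```python
import numpy as np

def hat_weight(z):
    """Triangle weighting: trust mid-range pixels, distrust values near 0/255."""
    return 1.0 - np.abs(z / 255.0 - 0.5) * 2.0

def merge_hdr(images, exposures):
    """Weighted average of per-exposure radiance estimates z/t."""
    num = np.zeros(images[0].shape, dtype=float)
    den = np.zeros_like(num)
    for z, t in zip(images, exposures):
        w = hat_weight(z.astype(float)) + 1e-8   # epsilon avoids divide-by-zero
        num += w * (z.astype(float) / t)
        den += w
    return num / den

# Two simulated exposures of the same linear radiance map; the long
# exposure saturates (clips) the bright pixels.
radiance = np.linspace(10.0, 1000.0, 64).reshape(8, 8)
imgs = [np.clip(radiance * t, 0, 255) for t in (0.1, 1.0)]
hdr = merge_hdr(imgs, [0.1, 1.0])
```

    Saturated pixels get near-zero weight, so the bright end of the scene is recovered from the short exposure; the paper's contribution is shaping this weight to also suppress noise and ghosting.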

  11. Natural language processing and visualization in the molecular imaging domain.

    PubMed

    Tulipano, P Karina; Tao, Ying; Millar, William S; Zanzonico, Pat; Kolbert, Katherine; Xu, Hua; Yu, Hong; Chen, Lifeng; Lussier, Yves A; Friedman, Carol

    2007-06-01

    Molecular imaging is at the crossroads of genomic sciences and medical imaging. Information within the molecular imaging literature could be used to link to genomic and imaging information resources and to organize and index images in a way that is potentially useful to researchers. A number of natural language processing (NLP) systems are available to automatically extract information from genomic literature. One existing NLP system, known as BioMedLEE, automatically extracts biological information consisting of biomolecular substances and phenotypic data. This paper focuses on the adaptation, evaluation, and application of BioMedLEE to the molecular imaging domain. In order to adapt BioMedLEE for this domain, we extend an existing molecular imaging terminology and incorporate it into BioMedLEE. BioMedLEE's performance is assessed with a formal evaluation study. The system's performance, measured as recall and precision, is 0.74 (95% CI: [.70-.76]) and 0.70 (95% CI [.63-.76]), respectively. We adapt a JAVA viewer known as PGviewer for the simultaneous visualization of images with NLP extracted information.

  12. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts, or customizes, the quantization matrix to the image being compressed. The quantization matrix is computed using visual masking by luminance and contrast techniques and by an error-pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
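    The coefficient-by-matrix quantization step that the invention builds on can be sketched as follows. This is a minimal illustration of the standard DCT-plus-quantization-matrix scheme; the perceptually adapted computation of the matrix itself is the invention's contribution and is not shown:

```python
import numpy as np

def dct_matrix(N=8):
    """Orthonormal DCT-II matrix (the transform applied to each 8x8 block)."""
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0] /= np.sqrt(2.0)
    return C

def quantize_block(block, qmatrix):
    """DCT an NxN block, then divide each coefficient by its matrix entry."""
    C = dct_matrix(block.shape[0])
    return np.round((C @ block @ C.T) / qmatrix)

def dequantize_block(q, qmatrix):
    """Invert: rescale by the matrix and apply the inverse (transposed) DCT."""
    C = dct_matrix(q.shape[0])
    return C.T @ (q * qmatrix) @ C
```

    Larger quantization-matrix entries discard more of the corresponding coefficient, which is exactly where the perceptual model decides what is invisible.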

  13. Adaptive regularization of the NL-means: application to image and video denoising.

    PubMed

    Sutour, Camille; Deledalle, Charles-Alban; Aujol, Jean-François

    2014-08-01

    Image denoising is a central problem in image processing, and it is often a necessary step prior to higher-level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) denoise by exploiting the natural redundancy of patterns inside an image: they compute a weighted average of pixels whose neighborhoods (patches) are close to each other. This significantly reduces the noise while preserving most of the image content. While the method performs well on flat areas and textures, it suffers from two opposite drawbacks: it may over-smooth low-contrast areas or leave residual noise around edges and singular structures. Denoising can also be performed by total variation minimization (the Rudin, Osher, and Fatemi model), which tends to restore regular images but is prone to over-smoothing textures, staircasing effects, and contrast losses. In this paper we introduce a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective defects by minimizing an adaptive total variation with a nonlocal data-fidelity term. Moreover, this model adapts to different noise statistics, and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and adapt it to video denoising with 3D patches.
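    The core NL-means average that the paper regularizes can be sketched per pixel as follows. This is a minimal, unoptimized illustration; `patch`, `search`, and `h` are illustrative parameters, not values from the paper:

```python
import numpy as np

def nl_means_pixel(img, i, j, patch=1, search=5, h=0.1):
    """Denoise one pixel as a weighted average of pixels whose surrounding
    patches resemble its own (patch distance -> Gaussian weight)."""
    H, W = img.shape

    def get_patch(y, x):
        return img[y - patch:y + patch + 1, x - patch:x + patch + 1]

    p0 = get_patch(i, j)
    num = den = 0.0
    for y in range(max(patch, i - search), min(H - patch, i + search + 1)):
        for x in range(max(patch, j - search), min(W - patch, j + search + 1)):
            d2 = np.mean((get_patch(y, x) - p0) ** 2)  # patch similarity
            w = np.exp(-d2 / h ** 2)
            num += w * img[y, x]
            den += w
    return num / den
```

    On a flat region every patch matches, so the estimate collapses to the plain local average, which is where the over-smoothing the paper addresses comes from.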

  14. Infrared image gray adaptive adjusting enhancement algorithm based on gray redundancy histogram-dealing technique

    NASA Astrophysics Data System (ADS)

    Hao, Zi-long; Liu, Yong; Chen, Ruo-wang

    2016-11-01

    In view of the histogram equalization algorithm used to enhance images in digital image processing, an infrared image gray-level adaptive adjusting enhancement algorithm based on a gray-redundancy histogram-dealing technique is proposed. The algorithm first determines the overall gray level of the image, raises or lowers it by inserting appropriate gray points, and then uses the gray-level-redundancy HE method to compress the gray scale of the image. The algorithm can enhance image detail information. Through MATLAB simulation, this paper compares the algorithm with the histogram equalization method and with the algorithm based on the gray-redundancy histogram-dealing technique, and verifies the effectiveness of the proposed algorithm.
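    For reference, the classic histogram equalization step that the algorithm modifies can be sketched as:

```python
import numpy as np

def hist_equalize(img):
    """Classic HE: map gray levels through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)          # gray-level lookup table
    return lut[img]
```
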

  15. Adaptive filtering for reduction of speckle in ultrasonic pulse-echo images.

    PubMed

    Bamber, J C; Daft, C

    1986-01-01

    Current medical ultrasonic scanning instrumentation permits the display of fine image detail (speckle) which does not transfer useful information but degrades the apparent low contrast resolution in the image. An adaptive two-dimensional filter has been developed which uses local features of image texture to recognize and maximally low-pass filter those parts of the image which correspond to fully developed speckle, while substantially preserving information associated with resolved-object structure. A first implementation of the filter is described which uses the ratio of the local variance and the local mean as the speckle recognition feature. Preliminary results of applying this form of display processing to medical ultrasound images are very encouraging; it appears that the visual perception of features such as small discrete structures, subtle fluctuations in mean echo level and changes in image texture may be enhanced relative to that for unprocessed images.
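    The variance-over-mean speckle cue described above can be sketched as follows. This is a simplified illustration, not the authors' exact filter; the threshold is an assumed parameter:

```python
import numpy as np

def speckle_filter(img, win=3, ratio_thresh=0.05):
    """Where the local variance/mean ratio (speckle cue) is low, replace the
    pixel by the local mean (maximal smoothing); elsewhere keep it (structure)."""
    H, W = img.shape
    r = win // 2
    out = img.astype(np.float64).copy()
    for i in range(r, H - r):
        for j in range(r, W - r):
            block = img[i - r:i + r + 1, j - r:j + r + 1]
            m = block.mean()
            if m > 0 and block.var() / m < ratio_thresh:
                out[i, j] = m   # looks like fully developed speckle: smooth
    return out
```

    A bright resolved structure raises the local variance/mean ratio, so it passes through unfiltered, which is the adaptive behavior the abstract describes.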

  16. Adapting smartphones for low-cost optical medical imaging

    NASA Astrophysics Data System (ADS)

    Pratavieira, Sebastião.; Vollet-Filho, José D.; Carbinatto, Fernanda M.; Blanco, Kate; Inada, Natalia M.; Bagnato, Vanderlei S.; Kurachi, Cristina

    2015-06-01

    Optical images have been used in several medical situations to improve diagnosis of lesions or to monitor treatments. However, most systems employ expensive scientific (CCD or CMOS) cameras and need computers to display and save the images, usually resulting in a high final cost for the system. Additionally, operating this sort of apparatus usually becomes complex, requiring increasingly specialized technical knowledge from the operator. Currently, the number of people using smartphone-like devices with built-in high-quality cameras is increasing, which might allow using such devices as efficient, lower-cost, portable imaging systems for medical applications. Thus, we aim to develop methods of adapting those devices to optical medical imaging techniques, such as fluorescence imaging. In particular, smartphone covers were adapted to connect a smartphone-like device to widefield fluorescence imaging systems. These systems were used to detect lesions in different tissues, such as the cervix and mouth/throat mucosa, and to monitor ALA-induced protoporphyrin-IX formation for photodynamic treatment of cervical intraepithelial neoplasia. This approach may contribute significantly to low-cost, portable, and simple clinical optical image collection.

  17. Raster image adaptation for mobile devices using profiles

    NASA Astrophysics Data System (ADS)

    Rosenbaum, René; Hamann, Bernd

    2012-02-01

    Focusing on digital imagery, this paper introduces a strategy to handle heterogeneous hardware in mobile environments. Constrained system resources of most mobile viewing devices require contents that are tailored to the requirements of the user and the capabilities of the device. Appropriate image adaptation is still an unsolved research question. Due to the complexity of the problem, available solutions are either too resource-intensive or inflexible to be more generally applicable. The proposed approach is based on scalable image compression and progressive refinement as well as data and user profiles. A scalable image is created once and used multiple times for different kinds of devices and user requirements. Profiles available on the server side allow for an image representation that is adapted to the most important resources in mobile computing: screen space, computing power, and the volume of the transmitted data. Options for progressively refining content thereby allow for a fluent viewing experience during adaptation. Due to its flexibility and low complexity, the proposed solution is much more general compared to related approaches. To document the advantages of our approach we provide empirical results obtained in experiments with an implementation of the method.

  18. Assessment of vessel diameters for MR brain angiography processed images

    NASA Astrophysics Data System (ADS)

    Moraru, Luminita; Obreja, Cristian-Dragos; Moldovanu, Simona

    2015-12-01

    The motivation was to develop an assessment method to measure (in)visible differences between the original and the processed images in MR brain angiography as a way to evaluate the status of vessel segments (i.e., the existence of occlusions or intracerebral vessel damage such as aneurysms). In general, the image quality is limited, so we improve the performance of the evaluation through digital image processing. The goal is to determine the processing method that allows the most accurate assessment of patients with cerebrovascular diseases. A total of 10 MR brain angiography images were processed by the following techniques: histogram equalization, Wiener filtering, linear contrast adjustment, contrast-limited adaptive histogram equalization, bias correction, and the Marr-Hildreth filter. Each original image and its processed versions were analyzed with a stacking procedure so that the same vessel and its corresponding diameter were measured in each. Original and processed images were evaluated by measuring the vessel diameter (in pixels) along an established direction and at a precise anatomic location. The vessel diameter was calculated using an ImageJ plugin. Mean diameter measurements differ significantly across the same segment and for different processing techniques. The best results are provided by the Wiener filter and linear contrast adjustment, and the worst by the Marr-Hildreth filter.

  19. Eliminating "Hotspots" in Digital Image Processing

    NASA Technical Reports Server (NTRS)

    Salomon, P. M.

    1984-01-01

    Signals from defective picture elements rejected. Image processing program for use with charge-coupled device (CCD) or other mosaic imager augmented with algorithm that compensates for common type of electronic defect. Algorithm prevents false interpretation of "hotspots". Used for robotics, image enhancement, image analysis and digital television.
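    A simple median-based hotspot suppression of the kind described can be sketched as follows; this is an assumed illustration, not the flight algorithm, and the rejection threshold is a hypothetical parameter:

```python
import numpy as np

def suppress_hotspots(img, thresh=5.0):
    """Replace pixels that deviate strongly from their 8-neighbor median
    (candidate defective CCD elements) with that median."""
    out = img.astype(np.float64).copy()
    H, W = img.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            neigh = np.delete(img[i - 1:i + 2, j - 1:j + 2].ravel(), 4)  # drop center
            med = np.median(neigh)
            if abs(img[i, j] - med) > thresh:
                out[i, j] = med   # reject the hotspot's signal
    return out
```

    Using the neighbor median rather than the mean keeps a single defective element from biasing its own replacement value.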

  20. Halftoning and Image Processing Algorithms

    DTIC Science & Technology

    1999-02-01

    screening techniques with the quality advantages of error diffusion in the halftoning of color maps, and on color image enhancement for halftone ... image quality. Our goals in this research were to advance the understanding in image science for our new halftone algorithm and to contribute to ... image retrieval and noise theory for such imagery. In the field of color halftone printing, research was conducted on deriving a theoretical model of our

  1. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can gain from the added image resolution via the enhancement.

  2. Adaptive Constructive Processes and the Future of Memory

    ERIC Educational Resources Information Center

    Schacter, Daniel L.

    2012-01-01

    Memory serves critical functions in everyday life but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, and illusions. The article describes several types of memory errors that are produced by adaptive constructive processes…

  3. Adaptation to Work: An Exploration of Processes and Outcomes.

    ERIC Educational Resources Information Center

    Ashley, William L.; And Others

    A study of adaptation to work as both a process and an outcome was conducted. The study was conducted by personal interview that probed adaptation with respect to work's organizational, performance, interpersonal, responsibility, and affective aspects; and by questionnaire using the same aspects. The population studied consisted of persons without…

  4. Image segmentation on adaptive edge-preserving smoothing

    NASA Astrophysics Data System (ADS)

    He, Kun; Wang, Dan; Zheng, Xiuqing

    2016-09-01

    Nowadays, typical active contour models are widely applied in image segmentation. However, they perform badly on real images with inhomogeneous subregions. In order to overcome this drawback, this paper proposes an edge-preserving smoothing image segmentation algorithm. First, this paper analyzes the edge-preserving smoothing conditions for image segmentation and constructs an edge-preserving smoothing model inspired by total variation. The proposed model has the ability to smooth inhomogeneous subregions and preserve edges. Then, a clustering algorithm, which reasonably trades off edge preservation and subregion smoothing according to local information, is employed to learn the edge-preserving parameter adaptively. Finally, according to the confidence level of segmentation subregions, this paper constructs a smoothing convergence condition to avoid over-smoothing. Experiments indicate that the proposed algorithm has superior performance in precision, recall, and F-measure compared with other segmentation algorithms, and that it is insensitive to noise and inhomogeneous regions.

  5. Multiwavelength adaptive optical fundus camera and continuous retinal imaging

    NASA Astrophysics Data System (ADS)

    Yang, Han-sheng; Li, Min; Dai, Yun; Zhang, Yu-dong

    2009-08-01

    We have constructed a new version of a retinal imaging system that takes chromatic aberration into account; the corresponding optical design presented in this article is based on the adaptive optics fundus camera modality. In our system, three typical wavelengths of 550 nm, 650 nm, and 480 nm were selected. Longitudinal chromatic aberration (LCA) was minimized using the ZEMAX program. The whole setup was evaluated on human subjects, and retinal imaging was performed at continuous frame rates of up to 20 Hz. Raw videos at parafoveal locations were collected, and cone mosaics as well as retinal vasculature were clearly observed in a single clip. In addition, comparisons under different illumination conditions were made to confirm our design. Image contrast and the Strehl ratio were effectively increased after dynamic correction of high-order aberrations. This system is expected to enable new applications in functional imaging of the human retina.

  6. Studies of an Adaptive Kaczmarz Method for Electrical Impedance Imaging

    NASA Astrophysics Data System (ADS)

    Li, Taoran; Isaacson, David; Newell, Jonathan C.; Saulnier, Gary J.

    2013-04-01

    We present an adaptive Kaczmarz method for solving the inverse problem in electrical impedance tomography and determining the conductivity distribution inside an object from electrical measurements made on the surface. To best characterize an unknown conductivity distribution and avoid inverting the Jacobian-related term JTJ which could be expensive in terms of memory storage in large scale problems, we propose to solve the inverse problem by adaptively updating both the optimal current pattern with improved distinguishability and the conductivity estimate at each iteration. With a novel subset scheme, the memory-efficient reconstruction algorithm which appropriately combines the optimal current pattern generation and the Kaczmarz method can produce accurate and stable solutions adaptively compared to traditional Kaczmarz and Gauss-Newton type methods. Several reconstruction image metrics are used to quantitatively evaluate the performance of the simulation results.

  7. Image compression with QM-AYA adaptive binary arithmetic coder

    NASA Astrophysics Data System (ADS)

    Cheng, Joe-Ming; Langdon, Glen G., Jr.

    1993-01-01

    The Q-coder has been reported in the literature, and is a renorm-driven binary adaptive arithmetic coder. A similar renorm-driven coder, the QM coder, uses the same approach with an initial attack to more rapidly estimate the statistics in the beginning, and with a different state table. The QM coder is the adaptive binary arithmetic coder employed in the JBIG and JPEG image compression algorithms. The QM-AYA arithmetic coder is similar to the QM coder, with a different state table, that offers balanced improvements to the QM probability estimation for the less skewed distributions. The QM-AYA performs better when the probability estimate is near 0.5 for each binary symbol. An approach for constructing effective index change tables for Q-coder type adaptation is discussed.

  8. Shape-adaptable hyperlens for acoustic magnifying imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Hongkuan; Zhou, Xiaoming; Hu, Gengkai

    2016-11-01

    Previous prototypes of the acoustic hyperlens consist of rigid channels, which are unable to adapt in shape to the object under detection. We propose to overcome this limitation by employing soft plastic tubes that can guide acoustic waves robustly under bending deformation. Based on this idea of soft-tube acoustics, an acoustic magnifying hyperlens with planar input and output surfaces has been fabricated and validated experimentally. The shape-adaptation capability of the soft-tube hyperlens is demonstrated by a controlled experiment in which the magnifying super-resolution images remain stable when the lens input surface is curved. Our study suggests a feasible route toward constructing flexible channel-structured acoustic metamaterials with shape-adaptation capability, thus opening an additional degree of freedom for full control of sound.

  9. Adaptive Noise Suppression Using Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Kozel, David; Nelson, Richard

    1996-01-01

    A signal-to-noise-ratio-dependent adaptive spectral subtraction algorithm is developed to eliminate noise from noise-corrupted speech signals. The algorithm determines the signal-to-noise ratio and adjusts the spectral subtraction proportion appropriately. After spectral subtraction, low-amplitude signals are squelched. A single microphone is used to obtain both the noise-corrupted speech and the average noise estimate. This is done by determining whether the frame of data being sampled is a voiced or unvoiced frame. During unvoiced frames, an estimate of the noise is obtained. A running average of the noise is used to approximate the expected value of the noise. Applications include the emergency egress vehicle and the crawler transporter.
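    The magnitude-domain spectral subtraction at the heart of the method can be sketched for one frame as follows. This is a minimal illustration; the SNR-dependent subtraction proportion and the voiced/unvoiced frame logic from the abstract are omitted, and `alpha` and `floor` are assumed parameters:

```python
import numpy as np

def spectral_subtract(frame, noise_mag, alpha=1.0, floor=0.01):
    """Subtract an average noise magnitude spectrum from the frame spectrum,
    flooring the result to avoid negative magnitudes ("musical noise")."""
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    clean_mag = np.maximum(mag - alpha * noise_mag, floor * mag)
    # Reconstruct with the original (noisy) phase, as is standard.
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(frame))
```
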

  10. A novel adaptive noise filtering method for SAR images

    NASA Astrophysics Data System (ADS)

    Li, Weibin; He, Mingyi

    2009-08-01

    In most applications, a signal or image is corrupted by additive noise. As a result, there are many methods to remove additive noise, while few approaches work well for multiplicative noise. This paper presents an improved MAP-based filter for multiplicative noise using an adaptive-window denoising technique. A Gamma noise model is discussed, and a preprocessing technique that differentiates matured from un-matured pixels is applied to obtain an accurate estimate of the equivalent number of looks. In addition, adaptive local window growth and three different denoising strategies are applied to smooth noise while preserving subtle information according to local statistical features. Simulation results show that the performance is better than that of existing filters. Several image experiments demonstrate its theoretical performance.
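    The equivalent number of looks (ENL) mentioned above is conventionally estimated from a homogeneous image region; a minimal sketch under the standard Gamma speckle model:

```python
import numpy as np

def equivalent_number_of_looks(homog_region):
    """ENL = (mean / std)^2 over a homogeneous region; for multiplicative
    Gamma speckle this estimates the number of looks L."""
    m = homog_region.mean()
    s = homog_region.std()
    return (m / s) ** 2
```

    For L-look Gamma speckle with unit mean, the variance is 1/L, so the estimator recovers L on a large enough homogeneous sample.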

  11. Adaptive noise Wiener filter for scanning electron microscope imaging system.

    PubMed

    Sim, K S; Teh, V; Nia, M E

    2016-01-01

    Noise in scanning electron microscope (SEM) images is studied. Gaussian noise is the most common type of noise in SEM images. We developed a new noise reduction filter based on the Wiener filter. We compared the performance of this new filter, namely the adaptive noise Wiener (ANW) filter, with four common existing filters: the average filter, the median filter, the Gaussian smoothing filter, and the Wiener filter. Based on the experimental results, the proposed filter performs better across different noise variances than the other noise removal filters in the experiments.
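    The local-statistics Wiener rule that such filters build on can be sketched as follows. This is the textbook form, not necessarily the authors' exact ANW variant, and `noise_var` is assumed known:

```python
import numpy as np

def adaptive_wiener(img, win=3, noise_var=0.01):
    """Local Wiener rule: out = m + max(v - nv, 0)/v * (x - m),
    where m, v are the local mean and variance in a win x win window."""
    H, W = img.shape
    r = win // 2
    out = img.astype(np.float64).copy()
    for i in range(r, H - r):
        for j in range(r, W - r):
            block = img[i - r:i + r + 1, j - r:j + r + 1]
            m, v = block.mean(), block.var()
            gain = max(v - noise_var, 0.0) / v if v > 0 else 0.0
            out[i, j] = m + gain * (img[i, j] - m)
    return out
```

    Flat regions (local variance near the noise variance) get gain near zero and collapse to the local mean, while strong edges keep gain near one and pass through.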

  12. Adaptive sigmoid function bihistogram equalization for image contrast enhancement

    NASA Astrophysics Data System (ADS)

    Arriaga-Garcia, Edgar F.; Sanchez-Yanez, Raul E.; Ruiz-Pinales, Jose; Garcia-Hernandez, Ma. de Guadalupe

    2015-09-01

    Contrast enhancement plays a key role in a wide range of applications including consumer electronic applications, such as video surveillance, digital cameras, and televisions. The main goal of contrast enhancement is to increase the quality of images. However, most state-of-the-art methods induce different types of distortion such as intensity shift, wash-out, noise, intensity burn-out, and intensity saturation. In addition, in consumer electronics, simple and fast methods are required in order to be implemented in real time. A bihistogram equalization method based on adaptive sigmoid functions is proposed. It consists of splitting the image histogram into two parts that are equalized independently by using adaptive sigmoid functions. In order to preserve the mean brightness of the input image, the parameter of the sigmoid functions is chosen to minimize the absolute mean brightness metric. Experiments on the Berkeley database have shown that the proposed method improves the quality of images and preserves their mean brightness. An application to improve the colorfulness of images is also presented.

  13. In vivo imaging of human photoreceptor mosaic with wavefront sensorless adaptive optics optical coherence tomography.

    PubMed

    Wong, Kevin S K; Jian, Yifan; Cua, Michelle; Bonora, Stefano; Zawadzki, Robert J; Sarunic, Marinko V

    2015-02-01

    Wavefront sensorless adaptive optics optical coherence tomography (WSAO-OCT) is a novel imaging technique for in vivo high-resolution depth-resolved imaging that mitigates some of the challenges encountered with the use of sensor-based adaptive optics designs. This technique replaces the Hartmann Shack wavefront sensor used to measure aberrations with a depth-resolved image-driven optimization algorithm, with the metric based on the OCT volumes acquired in real-time. The custom-built ultrahigh-speed GPU processing platform and fast modal optimization algorithm presented in this paper was essential in enabling real-time, in vivo imaging of human retinas with wavefront sensorless AO correction. WSAO-OCT is especially advantageous for developing a clinical high-resolution retinal imaging system as it enables the use of a compact, low-cost and robust lens-based adaptive optics design. In this report, we describe our WSAO-OCT system for imaging the human photoreceptor mosaic in vivo. We validated our system performance by imaging the retina at several eccentricities, and demonstrated the improvement in photoreceptor visibility with WSAO compensation.

  14. In vivo imaging of human photoreceptor mosaic with wavefront sensorless adaptive optics optical coherence tomography

    PubMed Central

    Wong, Kevin S. K.; Jian, Yifan; Cua, Michelle; Bonora, Stefano; Zawadzki, Robert J.; Sarunic, Marinko V.

    2015-01-01

    Wavefront sensorless adaptive optics optical coherence tomography (WSAO-OCT) is a novel imaging technique for in vivo high-resolution depth-resolved imaging that mitigates some of the challenges encountered with the use of sensor-based adaptive optics designs. This technique replaces the Hartmann Shack wavefront sensor used to measure aberrations with a depth-resolved image-driven optimization algorithm, with the metric based on the OCT volumes acquired in real-time. The custom-built ultrahigh-speed GPU processing platform and fast modal optimization algorithm presented in this paper was essential in enabling real-time, in vivo imaging of human retinas with wavefront sensorless AO correction. WSAO-OCT is especially advantageous for developing a clinical high-resolution retinal imaging system as it enables the use of a compact, low-cost and robust lens-based adaptive optics design. In this report, we describe our WSAO-OCT system for imaging the human photoreceptor mosaic in vivo. We validated our system performance by imaging the retina at several eccentricities, and demonstrated the improvement in photoreceptor visibility with WSAO compensation. PMID:25780747

  15. Adaptive zero-tree structure for curved wavelet image coding

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Wang, Demin; Vincent, André

    2006-02-01

    We investigate the issue of efficient data organization and representation for the coefficients of the curved wavelet transform (curved WT). We present an adaptive zero-tree structure that exploits the cross-subband similarity of the curved wavelet transform. Whereas in the embedded zero-tree wavelet (EZW) coder and in set partitioning in hierarchical trees (SPIHT) the parent-child relationship is defined such that a parent has four children restricted to a square of 2×2 pixels, the parent-child relationship in the adaptive zero-tree structure varies according to the curves along which the curved WT is performed. Five child patterns were determined based on different combinations of curve orientation. A new image coder was then developed based on this adaptive zero-tree structure and the set-partitioning technique. Experimental results using synthetic and natural images showed the effectiveness of the proposed adaptive zero-tree structure for encoding curved wavelet coefficients. The coding gain of the proposed coder can be up to 1.2 dB in terms of peak SNR (PSNR) compared to the SPIHT coder. Subjective evaluation shows that the proposed coder preserves lines and edges better than the SPIHT coder.

  16. Adaptive Memory: Is Survival Processing Special?

    ERIC Educational Resources Information Center

    Nairne, James S.; Pandeirada, Josefa N. S.

    2008-01-01

    Do the operating characteristics of memory continue to bear the imprints of ancestral selection pressures? Previous work in our laboratory has shown that human memory may be specially tuned to retain information processed in terms of its survival relevance. A few seconds of survival processing in an incidental learning context can produce recall…

  17. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic-range tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost in bright and dark areas. Thus, an enhanced image-mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  18. Integration of AdaptiSPECT, a small-animal adaptive SPECT imaging system

    PubMed Central

    Chaix, Cécile; Kovalsky, Stephen; Kosmider, Matthew; Barrett, Harrison H.; Furenlid, Lars R.

    2015-01-01

    AdaptiSPECT is a pre-clinical adaptive SPECT imaging system under final development at the Center for Gamma-ray Imaging. The system incorporates multiple adaptive features: an adaptive aperture, 16 detectors mounted on translational stages, and the ability to switch between a non-multiplexed and a multiplexed imaging configuration. In this paper, we review the design of AdaptiSPECT and its adaptive features. We then describe the on-going integration of the imaging system. PMID:26347197

  19. Image processing for medical diagnosis using CNN

    NASA Astrophysics Data System (ADS)

    Arena, Paolo; Basile, Adriano; Bucolo, Maide; Fortuna, Luigi

    2003-01-01

    Medical diagnosis is one of the most important areas in which image processing procedures are usefully applied. Image processing is an important phase for improving accuracy both in diagnostic procedures and in surgical operations. One of these fields is tumor/cancer detection using microarray analysis. The research studies in the Cancer Genetics Branch are mainly involved in a range of experiments, including the identification of inherited mutations predisposing family members to malignant melanoma, prostate cancer, and breast cancer. In the biomedical field, real-time processing is very important, but image processing is often a time-consuming phase. Therefore, techniques able to speed up the processing play an important role. From this point of view, a novel approach to image processing has been developed in this work. The new idea is to use cellular neural networks to investigate diagnostic images such as magnetic resonance imaging, computed tomography, and fluorescent cDNA microarray images.

  20. Amplitude image processing by diffractive optics.

    PubMed

    Cagigal, Manuel P; Valle, Pedro J; Canales, V F

    2016-02-22

    In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, such as low-pass or high-pass filtering, is carried out using diffractive optical elements (DOEs), since they allow operating on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we analyze the performance of amplitude image processing. In particular, a DOE Laplacian filter is applied to simulated astronomical images to detect two stars one Airy ring apart. We also verify by numerical simulation that a Laplacian amplitude filter produces less noisy images than standard digital image processing.
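    For comparison, the standard digital (intensity-domain) Laplacian that the amplitude filter is contrasted with can be sketched as:

```python
import numpy as np

def laplacian(img):
    """4-neighbor discrete Laplacian on the detected intensity image
    (borders left at zero for simplicity)."""
    out = np.zeros_like(img, dtype=np.float64)
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]
                       - 4 * img[1:-1, 1:-1])
    return out
```

    As expected of a second-derivative operator, it vanishes on linear intensity ramps and responds only to curvature such as point sources and edges.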

  1. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.

  2. Adaptive SPECT imaging with crossed-slit apertures

    PubMed Central

    Durko, Heather L.; Furenlid, Lars R.

    2015-01-01

    Preclinical single-photon emission computed tomography (SPECT) is an essential tool for studying the progression, response to treatment, and physiological changes in small animal models of human disease. The wide range of imaging applications is often limited by the static design of many preclinical SPECT systems. We have developed a prototype imaging system that replaces the standard static pinhole aperture with two sets of movable, keel-edged copper-tungsten blades configured as crossed (skewed) slits. These apertures can be positioned independently between the object and detector, producing a continuum of imaging configurations in which the axial and transaxial magnifications are not constrained to be equal. We incorporated a megapixel silicon double-sided strip detector to permit ultrahigh-resolution imaging. We describe the configuration of the adjustable slit aperture imaging system and discuss its application toward adaptive imaging, and reconstruction techniques using an accurate imaging forward model, a novel geometric calibration technique, and a GPU-based ultra-high-resolution reconstruction code. PMID:26190884

  3. An adaptive Gaussian model for satellite image deblurring.

    PubMed

    Jalobeanu, André; Blanc-Féraud, Laure; Zerubia, Josiane

    2004-04-01

    The deconvolution of blurred and noisy satellite images is an ill-posed inverse problem, which can be regularized within a Bayesian context by using an a priori model of the reconstructed solution. Since real satellite data show spatially variant characteristics, we propose here to use an inhomogeneous model. We use the maximum likelihood estimator (MLE) to estimate its parameters and we show that the MLE computed on the corrupted image is not suitable for image deconvolution because it is not robust to noise. We then show that the estimation is correct only if it is made from the original image. Since this image is unknown, we need to compute an approximation of sufficiently good quality to provide useful estimation results. Such an approximation is provided by a wavelet-based deconvolution algorithm. Thus, a hybrid method is first used to estimate the space-variant parameters from this image and then to compute the regularized solution. The obtained results on high resolution satellite images simultaneously exhibit sharp edges, correctly restored textures, and a high SNR in homogeneous areas, since the proposed technique adapts to the local characteristics of the data.
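
    The regularization idea can be illustrated with a 1-D toy: Tikhonov-style deconvolution by gradient descent in which the regularization weight `lam` is a per-sample list, mirroring the space-variant (inhomogeneous) prior. The MLE parameter estimation and the wavelet-based pilot deconvolution are not shown, and all values are illustrative.

```python
# 1-D sketch of regularized deconvolution with a space-variant weight list.

def convolve(x, h):
    """'Same'-size convolution with a centered odd-length kernel h."""
    n, k = len(x), len(h) // 2
    return [sum(h[j] * x[i + j - k] for j in range(len(h))
                if 0 <= i + j - k < n) for i in range(n)]

def deconvolve(y, h, lam, steps=500, lr=0.3):
    """Minimize ||h*x - y||^2 + sum_i lam[i]*x[i]^2 by gradient descent;
    lam is a per-sample regularization weight (the space-variant idea)."""
    x = list(y)
    hr = h[::-1]                                          # correlation kernel
    for _ in range(steps):
        r = [a - b for a, b in zip(convolve(x, h), y)]    # residual h*x - y
        g = convolve(r, hr)                               # data-term gradient
        x = [xi - lr * (gi + li * xi) for xi, gi, li in zip(x, g, lam)]
    return x

blur = [0.25, 0.5, 0.25]
y = convolve([0.0, 0.0, 1.0, 0.0, 0.0], blur)   # blurred point source
x = deconvolve(y, blur, lam=[0.01] * 5)         # peak recovered near index 2
```

    Raising `lam[i]` in smooth regions and lowering it near edges is the 1-D analogue of the paper's locally adapted prior.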

  4. Multimodal Medical Image Fusion by Adaptive Manifold Filter.

    PubMed

    Geng, Peng; Liu, Shuaiqi; Zhuang, Shanna

    2015-01-01

    Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. A modified local contrast measure is proposed to fuse multimodal medical images. Firstly, the adaptive manifold filter is applied to the source images to obtain the low-frequency part of the modified local contrast. Secondly, the modified spatial frequency of the source images is adopted as the high-frequency part of the modified local contrast. Finally, the pixel with the larger modified local contrast is selected for the fused image. The presented scheme outperforms the guided filter method in the spatial domain, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, the mutual information values of the presented method are on average 55%, 41%, and 62% higher than those of the three comparison methods, and its edge-based similarity measure values are on average 13%, 33%, and 14% higher for the six pairs of source images.
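
    The selection rule can be sketched as follows; a plain 3x3 box mean stands in for the paper's adaptive-manifold-filtered low-frequency part and modified spatial frequency, so this is only the choose-the-larger-contrast skeleton.

```python
# Fuse two registered images by keeping, per pixel, the sample with larger
# local contrast (a stand-in for the paper's modified local contrast).

def local_mean(img, x, y):
    h, w = len(img), len(img[0])
    vals = [img[j][i] for j in range(y - 1, y + 2) for i in range(x - 1, x + 2)
            if 0 <= i < w and 0 <= j < h]
    return sum(vals) / len(vals)

def contrast(img, x, y, eps=1e-6):
    """|pixel - local mean| / local mean: crude local contrast."""
    m = local_mean(img, x, y)
    return abs(img[y][x] - m) / (m + eps)

def fuse(a, b):
    h, w = len(a), len(a[0])
    return [[a[y][x] if contrast(a, x, y) >= contrast(b, x, y) else b[y][x]
             for x in range(w)] for y in range(h)]

a = [[10, 10, 10], [10, 80, 10], [10, 10, 10]]   # feature in modality A
b = [[10, 10, 10], [10, 10, 10], [10, 10, 90]]   # feature in modality B
fused = fuse(a, b)   # keeps 80 from A and 90 from B
```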

  5. Adaptive SPECT imaging with crossed-slit apertures

    NASA Astrophysics Data System (ADS)

    Durko, Heather L.; Furenlid, Lars R.

    2014-09-01

    Preclinical single-photon emission computed tomography (SPECT) is an essential tool for studying the progression, response to treatment, and physiological changes in small animal models of human disease. The wide range of imaging applications is often limited by the static design of many preclinical SPECT systems. We have developed a prototype imaging system that replaces the standard static pinhole aperture with two sets of movable, keel-edged copper-tungsten blades configured as crossed (skewed) slits. These apertures can be positioned independently between the object and detector, producing a continuum of imaging configurations in which the axial and transaxial magnifications are not constrained to be equal. We incorporated a megapixel silicon double-sided strip detector to permit ultrahigh-resolution imaging. We describe the configuration of the adjustable slit aperture imaging system and discuss its application to adaptive imaging, as well as reconstruction techniques that use an accurate imaging forward model, a novel geometric calibration technique, and a GPU-based ultra-high-resolution reconstruction code.

  6. Adaptive compression of remote sensing stereo image pairs

    NASA Astrophysics Data System (ADS)

    Li, Yunsong; Yan, Ruomei; Wu, Chengke; Wang, Keyan; Li, Shizhong; Wang, Yu

    2010-09-01

    According to the data characteristics of remote sensing stereo image pairs, a novel adaptive compression algorithm based on the combination of feature-based image matching (FBM), area-based image matching (ABM), and region-based disparity estimation is proposed. First, the Scale Invariant Feature Transform (SIFT) and the Sobel operator are used for texture classification. Second, an improved ABM is used in the flat area, while disparity estimation is used in the alpine area. Radiation compensation is applied to further improve the performance. Finally, the residual image and the reference image are compressed by JPEG2000 independently. The new algorithm provides a reasonable prediction in different areas according to the image textures, which improves the precision of the sensed image. The experimental results show that the proposed algorithm achieves a PSNR gain of up to about 3 dB over the traditional algorithm at low and medium bitrates, and the DTM and subjective quality are also clearly improved.

  7. Handbook on COMTAL's Image Processing System

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.

    1983-01-01

    An image processing system is the combination of an image processor with other control and display devices plus the necessary software needed to produce an interactive capability to analyze and enhance image data. Such an image processing system installed at NASA Langley Research Center, Instrument Research Division, Acoustics and Vibration Instrumentation Section (AVIS) is described. Although much of the information contained herein can be found in the other references, it is hoped that this single handbook will give the user better access, in concise form, to pertinent information and usage of the image processing system.

  8. NASA Regional Planetary Image Facility image retrieval and processing system

    NASA Technical Reports Server (NTRS)

    Slavney, Susan

    1986-01-01

    The general design and analysis functions of the NASA Regional Planetary Image Facility (RPIF) image workstation prototype are described. The main functions of the MicroVAX II based workstation will be database searching, digital image retrieval, and image processing and display. The uses of the Transportable Applications Executive (TAE) in the system are described. File access and image processing programs use TAE tutor screens to receive parameters from the user and TAE subroutines are used to pass parameters to applications programs. Interface menus are also provided by TAE.

  9. Coordination in serial-parallel image processing

    NASA Astrophysics Data System (ADS)

    Wójcik, Waldemar; Dubovoi, Vladymyr M.; Duda, Marina E.; Romaniuk, Ryszard S.; Yesmakhanova, Laura; Kozbakova, Ainur

    2015-12-01

    Serial-parallel systems are used to convert images, and controlling their operation requires solving a coordination problem. The paper summarizes a model for coordinating resource allocation in relation to the task of synchronizing parallel processes; a genetic coordination algorithm is developed, and its adequacy is verified on the process of parallel image processing.

  10. Sensory Processing Subtypes in Autism: Association with Adaptive Behavior

    ERIC Educational Resources Information Center

    Lane, Alison E.; Young, Robyn L.; Baker, Amy E. Z.; Angley, Manya T.

    2010-01-01

    Children with autism are frequently observed to experience difficulties in sensory processing. This study examined specific patterns of sensory processing in 54 children with autistic disorder and their association with adaptive behavior. Model-based cluster analysis revealed three distinct sensory processing subtypes in autism. These subtypes…

  11. Integrating digital topology in image-processing libraries.

    PubMed

    Lamy, Julien

    2007-01-01

    This paper describes a method to integrate digital topology information in image-processing libraries. This additional information allows a library user to write algorithms respecting topological constraints, for example, a seed fill or a skeletonization algorithm. As digital topology is absent from most image-processing libraries, such constraints cannot otherwise be fulfilled. We describe and give code samples for all the structures necessary for this integration, and show a use case in the form of a homotopic thinning filter inside ITK. The obtained filter can be up to a hundred times as fast as ITK's thinning filter and works for any image dimension. This paper deals mainly with integration within ITK, but the approach can be adapted with only minor modifications to other image-processing libraries.

  12. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE PAGES

    Collette, R.; King, J.; Keiser, Jr., D.; ...

    2016-06-08

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person-to-person or sample-to-sample. This study presents several MATLAB based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.
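
    Sauvola's local threshold, one standard form of the adaptive technique named above, can be sketched directly; the window size, k, and R below are generic defaults, not the study's settings.

```python
# Sauvola adaptive threshold: T = m * (1 + k * (s/R - 1)), with m and s the
# mean and standard deviation in a window around each pixel.

def sauvola_mask(img, win=3, k=0.2, R=128.0):
    """Binary mask: 1 where the pixel exceeds its Sauvola local threshold."""
    h, w = len(img), len(img[0])
    r = win // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i] for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            m = sum(vals) / len(vals)
            s = (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
            out[y][x] = 1 if img[y][x] > m * (1 + k * (s / R - 1)) else 0
    return out

img = [[200, 200, 200],
       [200,  20, 200],
       [200, 200, 200]]          # one dark "void" in a bright matrix
mask = sauvola_mask(img)
print(mask)   # [[1, 1, 1], [1, 0, 1], [1, 1, 1]]
```

    Because the threshold follows the local mean and spread, the void stays segmented even when the matrix brightness varies across the micrograph.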

  13. Fission gas bubble identification using MATLAB's image processing toolbox

    SciTech Connect

    Collette, R.; King, J.; Keiser, Jr., D.; Miller, B.; Madden, J.; Schulthess, J.

    2016-06-08

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person-to-person or sample-to-sample. This study presents several MATLAB based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.

  14. A Software Package For Biomedical Image Processing And Analysis

    NASA Astrophysics Data System (ADS)

    Goncalves, Joao G. M.; Mealha, Oscar

    1988-06-01

    The decreasing cost of computing power and the introduction of low-cost imaging boards justify the increasing number of applications of digital image processing techniques in the area of biomedicine. There is, however, a large software gap to be filled between the application and the equipment. The requirements to bridge this gap are twofold: good knowledge of the hardware provided and its interface to the host computer, and expertise in digital image processing and analysis techniques. A software package incorporating these two requirements was developed using the C programming language in order to create a user-friendly image processing programming environment. The software package can be considered in two different ways: as a data structure adapted to image processing and analysis, which acts as the backbone and the standard of communication for all the software; and as a set of routines implementing the basic algorithms used in image processing and analysis. Hardware dependency is restricted to a single module upon which all hardware calls are based. The data structure that was built has four main features: it is hierarchical, open, and object oriented, and it supports object-dependent dimensions. Considering the vast amount of memory needed by imaging applications and the memory available in small imaging systems, an effective image memory management scheme was implemented. This software package has been used for more than a year and a half by users with different applications. It proved to be an efficient tool for helping people adapt to the system and for standardizing and exchanging software, while preserving the flexibility that allows for users' specific implementations. The philosophy of the software package is discussed and the data structure that was built is described in detail.

  15. Semi-automated Image Processing for Preclinical Bioluminescent Imaging

    PubMed Central

    Slavine, Nikolai V; McColl, Roderick W

    2015-01-01

    Objective Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy behind automated methods for bioluminescence image processing, from data acquisition to obtaining 3D images. Methods In order to optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used an MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium; after determining an initial-order approximation for the photon fluence, we applied a novel iterative deconvolution method to obtain the final reconstruction result. Results We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time of the volumetric imaging and quantitative assessment. Conclusion The data obtained from light phantom and lung mouse tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach for the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment. PMID:26618187
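
    The study's own iterative deconvolution method is not detailed in the abstract; as a stand-in, a classical Richardson-Lucy iteration on a 1-D toy signal illustrates the general family of multiplicative iterative deconvolution schemes.

```python
# Richardson-Lucy iteration (a standard stand-in, not the paper's method).

def convolve(x, h):
    """'Same'-size convolution with a centered odd-length kernel h."""
    n, k = len(x), len(h) // 2
    return [sum(h[j] * x[i + j - k] for j in range(len(h))
                if 0 <= i + j - k < n) for i in range(n)]

def richardson_lucy(y, h, iters=100):
    """Multiplicative updates keep the estimate nonnegative throughout."""
    x = [1.0] * len(y)                  # flat nonnegative start
    hr = h[::-1]                        # correlation kernel
    for _ in range(iters):
        est = convolve(x, h)
        ratio = [yi / max(ei, 1e-12) for yi, ei in zip(y, est)]
        x = [xi * ci for xi, ci in zip(x, convolve(ratio, hr))]
    return x

blur = [0.25, 0.5, 0.25]
y = convolve([0.0, 0.0, 1.0, 0.0, 0.0], blur)   # blurred point source
x = richardson_lucy(y, blur)                    # sharpens back toward the source
```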

  16. Image processing on the IBM personal computer

    NASA Technical Reports Server (NTRS)

    Myers, H. J.; Bernstein, R.

    1985-01-01

    An experimental, personal computer image processing system has been developed which provides a variety of processing functions in an environment that connects programs by means of a 'menu' for both casual and experienced users. The system is implemented by a compiled BASIC program that is coupled to assembly language subroutines. Image processing functions encompass subimage extraction, image coloring, area classification, histogramming, contrast enhancement, filtering, and pixel extraction.

  17. Common formalism for adaptive identification in signal processing and control

    NASA Astrophysics Data System (ADS)

    Macchi, O.

    1991-08-01

    The transversal and recursive approaches to adaptive identification are compared. ARMA modeling in signal processing and identification in the indirect approach to control are developed in parallel. For transversal identification, adaptivity succeeds because the estimate is a linear function of the variable parameters. Control and signal processing can be embedded in a unified, well-established formalism that guarantees convergence of the adaptive parameters. For recursive identification, the estimate is a nonlinear function of the parameters, possibly resulting in nonuniqueness of the solution, in wandering, and even in instability of adaptive algorithms. The requirement for recursivity originates in the structure of the signal (MA part) in signal processing; in control, it is caused by the output measurement noise.

  18. Fast Source Camera Identification Using Content Adaptive Guided Image Filter.

    PubMed

    Zeng, Hui; Kang, Xiangui

    2016-03-01

    Source camera identification (SCI) is an important topic in image forensics. One of the most effective fingerprints for linking an image to its source camera is the sensor pattern noise, which is estimated as the difference between the content and its denoised version. It is widely believed that the performance of the sensor-based SCI heavily relies on the denoising filter used. This study proposes a novel sensor-based SCI method using content adaptive guided image filter (CAGIF). Thanks to the low complexity nature of the CAGIF, the proposed method is much faster than the state-of-the-art methods, which is a big advantage considering the potential real-time application of SCI. Despite the advantage of speed, experimental results also show that the proposed method can achieve comparable or better performance than the state-of-the-art methods in terms of accuracy.
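
    The fingerprinting principle, a noise residual (image minus its denoised version) matched against a camera fingerprint by normalized correlation, can be sketched as follows. A 3x3 mean filter stands in for the paper's content adaptive guided image filter (CAGIF), and the patterns are synthetic.

```python
import random

def denoise(img):
    """3x3 mean filter (stand-in for the paper's CAGIF denoiser)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i] for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def residual(img):
    """Noise residual: image minus its denoised version, flattened."""
    den = denoise(img)
    return [v - d for row, drow in zip(img, den) for v, d in zip(row, drow)]

def ncc(a, b):
    """Normalized cross-correlation of two flattened residuals."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

N = 8
rng_a, rng_b = random.Random(7), random.Random(8)
pat_a = [[rng_a.uniform(-10, 10) for _ in range(N)] for _ in range(N)]  # camera A "PRNU"
pat_b = [[rng_b.uniform(-10, 10) for _ in range(N)] for _ in range(N)]  # camera B "PRNU"
scene = [[2.0 * (x + y) for x in range(N)] for y in range(N)]           # smooth content

fp = residual([[128 + pat_a[y][x] for x in range(N)] for y in range(N)])      # A's fingerprint
img_same  = [[scene[y][x] + pat_a[y][x] for x in range(N)] for y in range(N)]  # shot by A
img_other = [[scene[y][x] + pat_b[y][x] for x in range(N)] for y in range(N)]  # shot by B
# ncc(residual(img), fp) serves as the camera-match statistic
```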

  19. Adaptively wavelet-based image denoising algorithm with edge preserving

    NASA Astrophysics Data System (ADS)

    Tan, Yihua; Tian, Jinwen; Liu, Jian

    2006-02-01

    A new wavelet-based image denoising algorithm, which exploits the edge information hidden in the corrupted image, is presented. Firstly, a Canny-like edge detector identifies the edges in each subband. Secondly, the wavelet coefficients in neighboring scales are multiplied to suppress the noise while magnifying the edge information, and the result is used to exclude fake edges. Isolated edge pixels are also identified as noise. Unlike thresholding methods, we then use a local window filter in the wavelet domain to remove noise, in which the variance estimation is elaborated to utilize the edge information. This method is adaptive to local image details and can achieve better performance than state-of-the-art methods.

  20. Adaptive Image Enhancement for Tracing 3D Morphologies of Neurons and Brain Vasculatures.

    PubMed

    Zhou, Zhi; Sorensen, Staci; Zeng, Hongkui; Hawrylycz, Michael; Peng, Hanchuan

    2015-04-01

    It is important to digitally reconstruct the 3D morphology of neurons and brain vasculatures. A number of previous methods have been proposed to automate the reconstruction process. However, in many cases, noise and low signal contrast with respect to the image background still hamper our ability to use automation methods directly. Here, we propose an adaptive image enhancement method specifically designed to improve the signal-to-noise ratio of several types of individual neuron and brain vasculature images. Our method is based on detecting the salient features of fibrous structures, e.g., axons and dendrites, combined with adaptive estimation of the optimal context windows in which such saliency would be detected. We tested this method for a range of brain image datasets and imaging modalities, including bright-field, confocal and multiphoton fluorescent images of neurons, and magnetic resonance angiograms. Applying our adaptive enhancement to these datasets led to improved accuracy and speed in automated tracing of the complicated morphology of neurons and vasculatures.

  1. Computers in Public Schools: Changing the Image with Image Processing.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  2. A synoptic description of coal basins via image processing

    NASA Technical Reports Server (NTRS)

    Farrell, K. W., Jr.; Wherry, D. B.

    1978-01-01

    An existing image processing system is adapted to describe the geologic attributes of a regional coal basin. This scheme handles a map as if it were a matrix, in contrast to more conventional approaches which represent map information in terms of linked polygons. The utility of the image processing approach is demonstrated by a multiattribute analysis of the Herrin No. 6 coal seam in Illinois. Findings include the location of a resource and estimation of tonnage corresponding to constraints on seam thickness, overburden, and Btu value, which are illustrative of the need for new mining technology.
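
    The map-as-matrix idea can be sketched directly: each attribute is a grid, and a resource query is an element-wise combination of constraint masks. All values, thresholds, and conversion constants below are made up for illustration.

```python
# Attribute grids for a 2x2 patch of a basin (hypothetical values).
thickness  = [[1.2, 2.0], [0.5, 1.8]]         # seam thickness, m
overburden = [[30, 80], [40, 20]]             # overburden depth, m
btu        = [[11000, 12000], [9000, 11500]]  # heating value, Btu/lb

CELL_AREA = 1.0e6   # m^2 per grid cell (hypothetical)
DENSITY = 1.3       # tonnes of coal per m^3 (hypothetical)

# Constraint masks combined element-wise, as the matrix representation allows.
mask = [[thickness[y][x] >= 1.0 and overburden[y][x] <= 60 and btu[y][x] >= 10500
         for x in range(2)] for y in range(2)]
tonnage = sum(thickness[y][x] * CELL_AREA * DENSITY
              for y in range(2) for x in range(2) if mask[y][x])
print(mask, round(tonnage))   # [[True, False], [False, True]] 3900000
```

    A polygon-based representation would need geometric intersection for the same query; on matrices it is a per-cell boolean test.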

  3. An adaptive-optics scanning laser ophthalmoscope for imaging murine retinal microstructure

    NASA Astrophysics Data System (ADS)

    Alt, Clemens; Biss, David P.; Tajouri, Nadja; Jakobs, Tatjana C.; Lin, Charles P.

    2010-02-01

    In vivo retinal imaging is an outstanding tool to observe biological processes unfold in real time. The ability to image microstructure in vivo can greatly enhance our understanding of function in retinal microanatomy under normal conditions and in disease. Transgenic mice are frequently used for mouse models of retinal diseases. However, commercially available retinal imaging instruments lack the optical resolution and spectral flexibility necessary to visualize detail comprehensively. We developed an adaptive optics scanning laser ophthalmoscope (AO-SLO) specifically for mouse eyes. Our SLO is a sensor-less adaptive optics system (no Shack-Hartmann sensor) that employs a stochastic parallel gradient descent algorithm to modulate a deformable mirror, ultimately aiming to correct wavefront aberrations by optimizing confocal image sharpness. The resulting resolution allows detailed observation of retinal microstructure. The AO-SLO can resolve retinal microglia and their moving processes, demonstrating that microglia processes are highly motile, constantly probing their immediate environment. Similarly, retinal ganglion cells are imaged along with their axons and sprouting dendrites. Retinal blood vessels are imaged using both Evans blue fluorescence and backscattering contrast.
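
    The sensor-less correction loop can be sketched as a toy stochastic parallel gradient descent (SPGD) iteration; the quadratic "sharpness" metric, gains, and actuator count are illustrative stand-ins, not the instrument's parameters.

```python
import random

def spgd_maximize(metric, n_act, steps=2000, delta=0.05, gain=5.0, seed=0):
    """Two-sided SPGD: perturb all actuators at once, measure the metric,
    and step the command vector along the estimated gradient."""
    rng = random.Random(seed)
    u = [0.0] * n_act                                  # actuator commands
    for _ in range(steps):
        d = [delta * rng.choice((-1.0, 1.0)) for _ in range(n_act)]
        dj = metric([a + b for a, b in zip(u, d)]) \
           - metric([a - b for a, b in zip(u, d)])     # metric change
        u = [a + gain * dj * b for a, b in zip(u, d)]  # ascend the metric
    return u

# Toy "sharpness": maximal when the commands cancel a fixed aberration.
target = [0.3, -0.2, 0.5]
metric = lambda u: -sum((a - t) ** 2 for a, t in zip(u, target))
u = spgd_maximize(metric, 3)   # converges toward target
```

    The key property is that only two scalar metric evaluations are needed per step, regardless of how many mirror actuators are perturbed in parallel, which is why no wavefront sensor is required.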

  4. Breast image feature learning with adaptive deconvolutional networks

    NASA Astrophysics Data System (ADS)

    Jamieson, Andrew R.; Drukker, Karen; Giger, Maryellen L.

    2012-03-01

    Feature extraction is a critical component of medical image analysis. Many computer-aided diagnosis approaches employ hand-designed, heuristic lesion extracted features. An alternative approach is to learn features directly from images. In this preliminary study, we explored the use of Adaptive Deconvolutional Networks (ADN) for learning high-level features in diagnostic breast mass lesion images with potential application to computer-aided diagnosis (CADx) and content-based image retrieval (CBIR). ADNs (Zeiler, et. al., 2011), are recently-proposed unsupervised, generative hierarchical models that decompose images via convolution sparse coding and max pooling. We trained the ADNs to learn multiple layers of representation for two breast image data sets on two different modalities (739 full field digital mammography (FFDM) and 2393 ultrasound images). Feature map calculations were accelerated by use of GPUs. Following Zeiler et. al., we applied the Spatial Pyramid Matching (SPM) kernel (Lazebnik, et. al., 2006) on the inferred feature maps and combined this with a linear support vector machine (SVM) classifier for the task of binary classification between cancer and non-cancer breast mass lesions. Non-linear, local structure preserving dimension reduction, Elastic Embedding (Carreira-Perpiñán, 2010), was then used to visualize the SPM kernel output in 2D and qualitatively inspect image relationships learned. Performance was found to be competitive with current CADx schemes that use human-designed features, e.g., achieving a 0.632+ bootstrap AUC (by case) of 0.83 [0.78, 0.89] for an ultrasound image set (1125 cases).

  5. Adaptive coded aperture imaging: progress and potential future applications

    NASA Astrophysics Data System (ADS)

    Gottesman, Stephen R.; Isser, Abraham; Gigioli, George W., Jr.

    2011-09-01

    Interest in Adaptive Coded Aperture Imaging (ACAI) continues to grow as the optical and systems engineering community becomes increasingly aware of ACAI's potential benefits in the design and performance of both imaging and non-imaging systems , such as good angular resolution (IFOV), wide distortion-free field of view (FOV), excellent image quality, and light weight construct. In this presentation we first review the accomplishments made over the past five years, then expand on previously published work to show how replacement of conventional imaging optics with coded apertures can lead to a reduction in system size and weight. We also present a trade space analysis of key design parameters of coded apertures and review potential applications as replacement for traditional imaging optics. Results will be presented, based on last year's work of our investigation into the trade space of IFOV, resolution, effective focal length, and wavelength of incident radiation for coded aperture architectures. Finally we discuss the potential application of coded apertures for replacing objective lenses of night vision goggles (NVGs).

  6. On adaptive robustness approach to Anti-Jam signal processing

    NASA Astrophysics Data System (ADS)

    Poberezhskiy, Y. S.; Poberezhskiy, G. Y.

    An effective approach, named adaptive robustness, to exploiting statistical differences between desired and jamming signals is proposed and analyzed in this paper. It combines conventional Bayesian, adaptive, and robust approaches that are complementary to each other. This combining strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the fastest and most intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.

  7. Adaptive geodesic transform for segmentation of vertebrae on CT images

    NASA Astrophysics Data System (ADS)

    Gaonkar, Bilwaj; Shu, Liao; Hermosillo, Gerardo; Zhan, Yiqiang

    2014-03-01

    Vertebral segmentation is a critical first step in any quantitative evaluation of vertebral pathology using CT images. This is especially challenging because bone marrow tissue has the same intensity profile as the muscle surrounding the bone. Thus simple methods such as thresholding or adaptive k-means fail to accurately segment vertebrae. While several other algorithms, such as level sets, may be used for segmentation, any algorithm that is clinically deployable has to work in under a few seconds. To address these dual challenges we present here a new algorithm based on the geodesic distance transform that is capable of segmenting the spinal vertebrae in under one second. To achieve this we extend the theory of geodesic distance transforms proposed in [1] to incorporate high-level anatomical knowledge through adaptive weighting of image gradients. Such knowledge may be provided by the user directly or may be automatically generated by another algorithm. We incorporate information 'learnt' using a previously published machine learning algorithm [2] to segment the L1 to L5 vertebrae. While we present a particular application here, the adaptive geodesic transform is a generic concept which can be applied to segmentation of other organs as well.
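
    The generic concept, a geodesic distance transform with image-dependent edge weights, can be sketched with Dijkstra's algorithm on the pixel grid; the paper's learned anatomical weighting is replaced here by a simple intensity-difference penalty.

```python
import heapq

def geodesic_labels(img, seeds, alpha=1.0):
    """seeds: {(x, y): label}. Returns, per pixel, the label of the
    geodesically nearest seed with step cost 1 + alpha*|intensity diff|."""
    h, w = len(img), len(img[0])
    dist = [[float("inf")] * w for _ in range(h)]
    lab = [[None] * w for _ in range(h)]
    pq = []
    for (x, y), l in seeds.items():
        dist[y][x] = 0.0
        lab[y][x] = l
        heapq.heappush(pq, (0.0, x, y, l))
    while pq:
        d, x, y, l = heapq.heappop(pq)
        if d > dist[y][x]:
            continue                      # stale queue entry
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h:
                nd = d + 1 + alpha * abs(img[ny][nx] - img[y][x])
                if nd < dist[ny][nx]:
                    dist[ny][nx] = nd
                    lab[ny][nx] = l
                    heapq.heappush(pq, (nd, nx, ny, l))
    return lab

img = [[0, 0, 9, 0, 0] for _ in range(3)]   # bright band splits two flat regions
seeds = {(0, 1): 0, (4, 1): 1}              # one labeled seed on each side
lab = geodesic_labels(img, seeds)           # labels stop at the costly band
```

    Raising the penalty adaptively along known anatomical boundaries (instead of a constant `alpha`) is where the high-level knowledge enters in the paper's formulation.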

  8. Image Processing in Intravascular OCT

    NASA Astrophysics Data System (ADS)

    Wang, Zhao; Wilson, David L.; Bezerra, Hiram G.; Rollins, Andrew M.

    Coronary artery disease is the leading cause of death in the world. Intravascular optical coherence tomography (IVOCT) is rapidly becoming a promising imaging modality for characterization of atherosclerotic plaques and evaluation of coronary stenting. OCT has several unique advantages over alternative technologies, such as intravascular ultrasound (IVUS), due to its better resolution and contrast. For example, OCT is currently the only imaging modality that can measure the thickness of the fibrous cap of an atherosclerotic plaque in vivo. OCT also has the ability to accurately assess the coverage of individual stent struts by neointimal tissue over time. However, it is extremely time-consuming to analyze IVOCT images manually to derive quantitative diagnostic metrics. In this chapter, we introduce some computer-aided methods to automate the common IVOCT image analysis tasks.

  9. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated images (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real world images with those rendered from virtual space software reveals a more or less visible mismatch between the corresponding image quality performances. Rendered images are produced by software whose quality is limited only by the output resolution. Real world images are taken with cameras subject to image degradation factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all those image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object by the system PSF, its characterization shows the amount of image degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real world image qualities. The system MTF is determined by the slanted edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
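
    The degradation step can be sketched as a separable Gaussian blur whose sigma would come from the measured PSF; the derivation of sigma from the slanted-edge MTF is not shown, and the sigma used below is a hypothetical measured value.

```python
import math

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian kernel truncated at ~3 sigma."""
    r = radius if radius is not None else max(1, int(3 * sigma))
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-r, r + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(img, sigma):
    """Separable Gaussian blur with edge clamping (the PSF stand-in)."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    h, w = len(img), len(img[0])

    def conv_rows(im, hh, ww):
        return [[sum(k[j + r] * im[y][min(max(x + j, 0), ww - 1)]
                     for j in range(-r, r + 1)) for x in range(ww)]
                for y in range(hh)]

    tmp = conv_rows(img, h, w)                  # horizontal pass
    tmp_t = [list(col) for col in zip(*tmp)]    # transpose
    out_t = conv_rows(tmp_t, w, h)              # vertical pass
    return [list(col) for col in zip(*out_t)]

img = [[0.0, 0.0, 0.0, 100.0, 100.0, 100.0] for _ in range(4)]  # hard rendered edge
b = blur(img, 1.0)   # softened edge, as a real capture would show
```

    Matching sigma to the measured PSF width makes the rendered edge profile track the slanted-edge profile of the real camera.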

  10. Combining advanced imaging processing and low cost remote imaging capabilities

    NASA Astrophysics Data System (ADS)

    Rohrer, Matthew J.; McQuiddy, Brian

    2008-04-01

    Target images are very important for evaluating the situation when Unattended Ground Sensors (UGS) are deployed. These images add a significant amount of information for determining the difference between hostile and non-hostile activities, the number of targets in an area, the difference between animals and people, the movement dynamics of targets, and when specific activities of interest are taking place. The imaging capability of a UGS system should capture only target activity, not images without targets in the field of view. Current UGS remote imaging systems are neither optimized for target processing nor low cost. In this paper, McQ describes an architectural and technological approach for significantly improving the processing of images to provide target information while reducing the cost of the intelligent remote imaging capability.

  11. Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints

    NASA Astrophysics Data System (ADS)

    Lee, Ho; Xing, Lei; Davidi, Ran; Li, Ruijiang; Qian, Jianguo; Lee, Rena

    2012-04-01

    Volumetric cone-beam CT (CBCT) images are acquired repeatedly during a course of radiation therapy and a natural question to ask is whether CBCT images obtained earlier in the process can be utilized as prior knowledge to reduce patient imaging dose in subsequent scans. The purpose of this work is to develop an adaptive prior image constrained compressed sensing (APICCS) method to solve this problem. Reconstructed images using full projections are taken on the first day of radiation therapy treatment and are used as prior images. The subsequent scans are acquired using a protocol of sparse projections. In the proposed APICCS algorithm, the prior images are utilized as an initial guess and are incorporated into the objective function in the compressed sensing (CS)-based iterative reconstruction process. Furthermore, the prior information is employed to detect any possible mismatched regions between the prior and current images for improved reconstruction. For this purpose, the prior images and the reconstructed images are classified into three anatomical regions: air, soft tissue and bone. Mismatched regions are identified by local differences of the corresponding groups in the two classified sets of images. A distance transformation is then introduced to convert the information into an adaptive voxel-dependent relaxation map. In constructing the relaxation map, the matched regions (unchanged anatomy) between the prior and current images are assigned smaller weight values, which are translated into less influence on the CS iterative reconstruction process. On the other hand, the mismatched regions (changed anatomy) are associated with larger values and the regions are updated more by the new projection data, thus avoiding any possible adverse effects of prior images. The APICCS approach was systematically assessed by using patient data acquired under standard and low-dose protocols for qualitative and quantitative comparisons.
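    The relaxation-map construction can be sketched numerically. The exponential decay and its scale `tau` are illustrative assumptions; the abstract only specifies that matched regions receive smaller weights than mismatched ones:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy mismatch mask: True where prior and current classifications disagree.
mismatch = np.zeros((32, 32), bool)
mismatch[10:20, 10:20] = True

# Distance (in voxels) from each voxel to the nearest mismatched region.
dist = distance_transform_edt(~mismatch)

# Map distance to a relaxation weight in (0, 1]: mismatched anatomy gets
# weight 1 (updated mostly by new projection data), matched anatomy decays
# toward a small weight (leans on the prior). The decay scale is a guess.
tau = 5.0
relax = np.exp(-dist / tau)
```

    A 3-D version would apply the same transform to the volume's classified mismatch mask.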

  12. Image processing utilizing an APL interface

    NASA Astrophysics Data System (ADS)

    Zmola, Carl; Kapp, Oscar H.

    1991-03-01

    The past few years have seen the growing use of digital techniques in the analysis of electron microscope image data. This trend is driven by the need to maximize the information extracted from the electron micrograph by submitting its digital representation to the broad spectrum of analytical techniques made available by the digital computer. We are developing an image processing system for the analysis of digital images obtained with a scanning transmission electron microscope (STEM) and a scanning electron microscope (SEM). This system, run on an IBM PS/2 model 70/A21, uses menu-based image processing and an interactive APL interface which permits the direct manipulation of image data.

  13. Image Processing: A State-of-the-Art Way to Learn Science.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    Teachers participating in the Image Processing for Teaching project, begun at the University of Arizona's Lunar and Planetary Laboratory in 1989, find this technology ideal for encouraging student discovery, promoting constructivist science or math experiences, and adapting in classrooms. Because image processing is not a computerized text, it…

  14. Parallel processing considerations for image recognition tasks

    NASA Astrophysics Data System (ADS)

    Simske, Steven J.

    2011-01-01

    Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows, as diverse as optical character recognition (OCR), document classification, and barcode reading, to parallel pipelines. This can substantially decrease time to completion for the document tasks. For this approach, each parallel pipeline is generally performing a different task. Parallel processing by image region allows a larger imaging task to be sub-divided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
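    Parallel processing by image region can be sketched as a split/map/reduce over strips. The per-region task and the strip count here are illustrative:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def analyze_region(region):
    # Stand-in per-region task; any analysis applied identically
    # to every region would fit here.
    return float(region.mean())

def process_by_region(image, n_strips=4):
    """Split the image into strips, map the same task over each strip,
    then reduce the partial results. A thread pool is used for brevity;
    a process pool or a map-reduce framework would give true parallelism
    for CPU-bound tasks."""
    strips = np.array_split(image, n_strips, axis=0)
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(analyze_region, strips))
    return sum(partials) / len(partials)

img = np.arange(64, dtype=float).reshape(8, 8)
result = process_by_region(img)
```

    Because the strips are equal-sized, the reduced result equals the whole-image mean here; unequal strips would need a weighted reduction.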

  15. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  16. Hybrid regularizers-based adaptive anisotropic diffusion for image denoising.

    PubMed

    Liu, Kui; Tan, Jieqing; Ai, Liefu

    2016-01-01

    To eliminate the staircasing effect of the total variation filter and simultaneously avoid the edge blurring of the fourth-order PDE filter, a hybrid regularizers-based adaptive anisotropic diffusion is proposed for image denoising. In the proposed model, the [Formula: see text]-norm is used as the fidelity term and the regularization term is composed of a total variation regularization and a fourth-order filter. The two filters are selected adaptively according to the diffusion function. When pixels lie on edges, the total variation filter is selected, which preserves the edges. When pixels belong to flat regions, the fourth-order filter is adopted to smooth the image, which eliminates staircase artifacts. In addition, the split Bregman and relaxation approaches are employed in our numerical algorithm to speed up the computation. Experimental results demonstrate that our proposed model outperforms the state-of-the-art models cited in the paper in both qualitative and quantitative evaluations.
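    A 1-D toy version of the adaptive blend (not the paper's split Bregman scheme) might look like the following. The edge-indicator function `g` and the constants `dt` and `k` are illustrative choices:

```python
import numpy as np

def hybrid_step(u, dt=0.05, k=0.5):
    """One explicit step of a toy 1-D hybrid diffusion (periodic bounds):
    an edge indicator g selects second-order (TV-like) smoothing near
    edges and fourth-order smoothing in flat regions."""
    lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)           # u_xx
    bilap = np.roll(lap, -1) - 2 * lap + np.roll(lap, 1)   # u_xxxx
    grad = 0.5 * (np.roll(u, -1) - np.roll(u, 1))
    g = 1.0 / (1.0 + (grad / k) ** 2)   # ~1 in flat regions, ~0 at edges
    # Flat regions evolve by -u_xxxx (no staircasing); edges by u_xx.
    return u + dt * (g * (-bilap) + (1 - g) * lap)

rng = np.random.default_rng(0)
flat_noisy = 0.1 * rng.standard_normal(64)   # a noisy flat region
smoothed = hybrid_step(flat_noisy)
```

    In a flat noisy region `g` stays near 1, so the step behaves like fourth-order smoothing and reduces the noise.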

  17. Adaptive clutter rejection for ultrasound color Doppler imaging

    NASA Astrophysics Data System (ADS)

    Yoo, Yang Mo; Managuli, Ravi; Kim, Yongmin

    2005-04-01

    We have developed a new adaptive clutter rejection technique where an optimum clutter filter is dynamically selected according to the varying clutter characteristics in ultrasound color Doppler imaging. The selection criteria have been established based on the underlying clutter characteristics (i.e., the maximum instantaneous clutter velocity and the clutter power) and the properties of various candidate clutter filters (e.g., projection-initialized infinite impulse response and polynomial regression). We obtained an average improvement of 3.97 dB and 3.27 dB in flow signal-to-clutter-ratio (SCR) compared to the conventional and down-mixing methods, respectively. These preliminary results indicate that the proposed adaptive clutter rejection method could improve the sensitivity and accuracy in flow velocity estimation for ultrasound color Doppler imaging. For a 192 x 256 color Doppler image with an ensemble size of 10, the proposed method takes only 57.2 ms, which is less than the acquisition time. Thus, the proposed method could be implemented in modern ultrasound systems, while providing improved clutter rejection and more accurate velocity estimation in real time.
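    A minimal sketch of one of the candidate filters, a polynomial regression clutter filter, is shown below. The ensemble size, polynomial order, and test signals are illustrative, not the paper's configuration:

```python
import numpy as np

def poly_clutter_filter(ensemble, order=2):
    """Project out a low-order polynomial (the slowly varying clutter)
    along slow time. ensemble: complex slow-time samples at one pixel."""
    t = np.arange(len(ensemble))
    V = np.polynomial.polynomial.polyvander(t, order)
    Q, _ = np.linalg.qr(V)                 # orthonormal clutter basis
    clutter = Q @ (Q.conj().T @ ensemble)  # projection onto that basis
    return ensemble - clutter

n = 10                                      # ensemble size
t = np.arange(n)
clutter = (50.0 * (1 + 0.1 * t)).astype(complex)  # strong, slow clutter
flow = np.exp(2j * np.pi * 0.3 * t)               # fast Doppler flow signal
filtered = poly_clutter_filter(clutter + flow)
```

    The slowly varying clutter lies inside the polynomial subspace and is removed exactly, while most of the fast flow signal survives the projection.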

  18. Adaptive Kaczmarz Method for Image Reconstruction in Electrical Impedance Tomography

    PubMed Central

    Li, Taoran; Kao, Tzu-Jen; Isaacson, David; Newell, Jonathan C.; Saulnier, Gary J.

    2013-01-01

    We present an adaptive Kaczmarz method for solving the inverse problem in electrical impedance tomography and determining the conductivity distribution inside an object from electrical measurements made on the surface. To best characterize an unknown conductivity distribution and avoid inverting the Jacobian-related term JTJ which could be expensive in terms of computation cost and memory in large scale problems, we propose solving the inverse problem by applying the optimal current patterns for distinguishing the actual conductivity from the conductivity estimate between each iteration of the block Kaczmarz algorithm. With a novel subset scheme, the memory-efficient reconstruction algorithm which appropriately combines the optimal current pattern generation with the Kaczmarz method can produce more accurate and stable solutions adaptively as compared to traditional Kaczmarz and Gauss-Newton type methods. Choices of initial current pattern estimates are discussed in the paper. Several reconstruction image metrics are used to quantitatively evaluate the performance of the simulation results. PMID:23718952
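    The classic (non-adaptive) Kaczmarz iteration underlying the method can be sketched on a tiny linear system; the adaptive current-pattern selection of the paper is omitted here:

```python
import numpy as np

def kaczmarz(A, b, sweeps=50):
    """Cyclic Kaczmarz: successively project the iterate onto the
    hyperplane of each equation a_i . x = b_i."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(m):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = kaczmarz(A, b)        # converges to the solution [2, 3]
```

    The block variant in the paper projects onto groups of equations at once, which is more memory- and cache-friendly for large EIT systems.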

  19. Non-linear Post Processing Image Enhancement

    NASA Technical Reports Server (NTRS)

    Hunt, Shawn; Lopez, Alex; Torres, Angel

    1997-01-01

    A non-linear filter for image post processing based on the feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean-square non-linear filter, examples of the high-frequency recovery, and the statistical properties of the filter are given.

  20. Real-time video image processing

    NASA Astrophysics Data System (ADS)

    Smedley, Kirk G.; Yool, Stephen R.

    1990-11-01

    Lockheed has designed and implemented a prototype real-time Video Enhancement Workbench (VEW) using commercial off-the-shelf hardware and custom software. The hardware components include a Sun workstation, Aspex PIPE image processor, time base corrector, VCR, video camera, and real-time disk subsystem. A comprehensive set of image processing functions can be invoked by the analyst at any time during processing, enabling interactive enhancement and exploitation of video sequences. Processed images can be transmitted and stored within the system in digital or video form. VEW also provides image output to a laser printer and to Interleaf technical publishing software.

  1. How Digital Image Processing Became Really Easy

    NASA Astrophysics Data System (ADS)

    Cannon, Michael

    1988-02-01

    In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of, or analyzing the contents of, images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic and the rapid growth of commercial companies marketing digital image processing software and hardware.

  2. Motion correction of magnetic resonance imaging data by using adaptive moving least squares method.

    PubMed

    Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Park, Hae-Jeong; Yoon, Jungho

    2015-06-01

    Image artifacts caused by subject motion during the imaging sequence are one of the most common problems in magnetic resonance imaging (MRI) and often degrade image quality. In this study, we develop a motion correction algorithm for interleaved MR acquisition. An advantage of the proposed method is that it requires neither additional equipment nor redundant over-sampling. The general framework of this study is similar to that of Rohlfing et al. [1], except for the following fundamental modification: a three-dimensional (3-D) scattered data approximation method is used to correct the motion-corrupted data as a post-processing step. In order to obtain a better match to the local structures of the given image, we use the data-adapted moving least squares (MLS) method, which improves the performance of the classical method. Numerical results are provided to demonstrate the advantages of the proposed algorithm.
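    A 1-D moving least squares sketch shows the local-fit idea; the Gaussian weight, its width `h`, and the linear basis are illustrative choices, not the paper's data-adapted variant. MLS with a linear basis reproduces linear data exactly:

```python
import numpy as np

def mls_eval(x_data, y_data, x_query, h=0.5):
    """Moving least squares: at each query point, fit a local linear
    polynomial with weights centered on the query, then evaluate it."""
    out = np.empty_like(x_query, dtype=float)
    for k, xq in enumerate(x_query):
        w = np.exp(-((x_data - xq) ** 2) / (2 * h ** 2))
        # Local basis {1, x - xq}; weighted normal equations for the fit.
        V = np.vstack([np.ones_like(x_data), x_data - xq]).T
        A = V.T @ (w[:, None] * V)
        rhs = V.T @ (w * y_data)
        coef = np.linalg.solve(A, rhs)
        out[k] = coef[0]          # value of the local fit at xq
    return out

x_data = np.linspace(0.0, 1.0, 21)
y_data = 2.0 * x_data + 1.0                 # exactly linear data
x_query = np.array([0.25, 0.5, 0.75])
y_hat = mls_eval(x_data, y_data, x_query)
```

    The data-adapted version in the paper additionally tunes the weights to the local image structure.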

  3. Quantitative image processing in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  4. Feasibility studies of optical processing of image bandwidth compression schemes

    NASA Astrophysics Data System (ADS)

    Hunt, B. R.

    1987-05-01

    The two research activities are reported in two separate divisions of this research report. The research activities are as follows: 1. Adaptive Recursive Interpolated DPCM for image data compression (ARIDPCM). A consistent theme in the research supported under Grant AFOSR-81-0170 has been novel methods of image data compression that are suitable for implementation by optical processing. Initial investigation led to the IDPCM method of image data compression. 2. Deblurring images through the turbulent atmosphere. A common problem in astronomy is imaging through the fluctuations of the atmosphere. The microscale fluctuations limit the resolution of any object observed by a ground-based telescope, the twinkling of stars being the most commonly observed form of this degradation. This problem also has military significance in limiting the ground-based observation of satellites in earth orbit. As concerns about SDI arise, the observation of Soviet satellites becomes more important, and this observation is limited by atmospheric turbulence.

  5. Robust image registration using adaptive coherent point drift method

    NASA Astrophysics Data System (ADS)

    Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong

    2016-04-01

    The coherent point drift (CPD) method is a powerful registration tool under the framework of the Gaussian mixture model (GMM). However, it considers only the global spatial structure of point sets, without other forms of additional attribute information. The equivalent simplification of the mixing parameters and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed that automatically determines the mixing parameters by embedding the local attribute information of features into the construction of the GMM. In addition, the weight parameter is treated as an unknown and is automatically determined in the expectation-maximization algorithm. In image registration applications, a block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. Experimental results on optical and remote sensing images show that the proposed method significantly improves matching performance.

  6. Sparse diffraction imaging method using an adaptive reweighting homotopy algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Caixia; Zhao, Jingtao; Wang, Yanfei; Qiu, Zhen

    2017-02-01

    Seismic diffractions carry valuable information about subsurface small-scale geologic discontinuities, such as faults, cavities and other features associated with hydrocarbon reservoirs. However, seismic imaging methods mainly use reflection theory for constructing imaging models, which imposes a smoothness constraint on imaging conditions. In fact, diffractors account for only a small part of an imaging model and possess discontinuous characteristics. In mathematics, this kind of phenomenon can be described by sparse optimization theory. Therefore, we propose a diffraction imaging method based on a sparsity-constrained model for studying diffractors. A reweighted L2-norm and L1-norm minimization model is investigated, where the L2 term requires a least-squares error between modeled diffractions and observed diffractions and the L1 term imposes sparsity on the solution. In order to solve this model efficiently, we use an adaptive reweighting homotopy algorithm that updates the solutions by tracking a path along inexpensive homotopy steps. Numerical examples and a field data application demonstrate the feasibility of the proposed method and show its significance for detecting small-scale discontinuities in a seismic section. The proposed method has the advantage of improving the focusing ability of diffractions and reducing migration artifacts.

  7. Water surface capturing by image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  8. Digital image processing in cephalometric analysis.

    PubMed

    Jäger, A; Döler, W; Schormann, T

    1989-01-01

    Digital image processing methods were applied to improve the practicability of cephalometric analysis. The individual X-ray film was digitized with the aid of a high-resolution microscope-photometer. Digital processing was done using a VAX 8600 computer system. An improvement in image quality was achieved by means of various digital enhancement and filtering techniques.

  9. True-Time-Delay Adaptive Array Processing Using Photorefractive Crystals

    NASA Astrophysics Data System (ADS)

    Kriehn, G. R.; Wagner, K.

    Radio frequency (RF) signal processing has proven to be a fertile application area for photorefractive-based optical processing techniques. This is due to a photorefractive material's capability to record gratings, and to diffract off these gratings, with optically modulated beams that contain a wide RF bandwidth; applications include the bias-free time-integrating correlator [1], adaptive signal processing, and jammer excision [2, 3, 4]. Photorefractive processing of signals from RF antenna arrays is especially appropriate because of the massive parallelism that is readily achievable in a photorefractive crystal (in which many resolvable beams can be incident on a single crystal simultaneously, each coming from an optical modulator driven by a separate RF antenna element), and because a number of approaches for adaptive array processing using photorefractive crystals have been successfully investigated [5, 6]. In these applications, the adaptive weight coefficients are represented by the amplitude and phase of the holographic gratings, and many millions of such adaptive weights can be multiplexed within the volume of a photorefractive crystal. RF-modulated optical signals from each array element are diffracted from the adaptively recorded photorefractive gratings (which can be multiplexed either angularly or spatially), and are then coherently combined with the appropriate amplitude weights and phase shifts to effectively steer the angular receptivity pattern of the antenna array toward the desired arriving signal. Likewise, the antenna nulls can be rotated toward unwanted narrowband jammers for extinction, thereby optimizing the signal-to-interference-plus-noise ratio.

  10. Low-Rank Decomposition Based Restoration of Compressed Images via Adaptive Noise Estimation.

    PubMed

    Zhang, Xinfeng; Lin, Weisi; Xiong, Ruiqin; Liu, Xianming; Ma, Siwei; Gao, Wen

    2016-07-07

    Images coded at low bit rates in real-world applications usually suffer from significant compression noise, which severely degrades visual quality. Traditional denoising methods, which usually assume that noise is independent and identically distributed, are not suitable for content-dependent compression noise. In this paper, we propose a unified framework for content-adaptive estimation and reduction of compression noise via low-rank decomposition of similar image patches. We first formulate the framework of compression noise reduction based upon low-rank decomposition. Compression noise is removed by soft-thresholding the singular values in the singular value decomposition (SVD) of every group of similar image patches. For each group of similar patches, the thresholds are adaptively determined according to the compression noise level and the singular values. We analyze the relationship of image statistical characteristics in the spatial and transform domains, and estimate the compression noise level for every group of similar patches from statistics in both domains jointly with the quantization steps. Finally, a quantization constraint is applied to the estimated images to avoid over-smoothing. Extensive experimental results show that the proposed method not only noticeably improves the quality of compressed images for post-processing, but is also helpful for computer vision tasks as a pre-processing method.
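    The core SVD soft-thresholding step can be sketched as follows. The fixed threshold `tau` stands in for the paper's adaptively estimated, noise-dependent thresholds, and the patch group is toy data:

```python
import numpy as np

def lowrank_denoise(patch_group, tau):
    """Soft-threshold the singular values of a (n_patches x patch_dim)
    matrix of vectorized similar patches, keeping the low-rank part."""
    U, s, Vt = np.linalg.svd(patch_group, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

# A rank-1 group (identical patches) corrupted by weak noise.
rng = np.random.default_rng(0)
base = rng.standard_normal(16)
group = np.tile(base, (8, 1))
noisy = group + 0.1 * rng.standard_normal(group.shape)
clean = lowrank_denoise(noisy, tau=1.0)
```

    The noise singular values fall below `tau` and are zeroed, so the result is again (numerically) rank one, while the strong signal component survives the shrinkage.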

  11. The new image segmentation algorithm using adaptive evolutionary programming and fuzzy c-means clustering

    NASA Astrophysics Data System (ADS)

    Liu, Fang

    2011-06-01

    Image segmentation remains one of the major challenges in image analysis and computer vision. Fuzzy clustering, as a soft segmentation method, has been widely studied and successfully applied in image clustering and segmentation. The fuzzy c-means (FCM) algorithm is the most popular method used in image segmentation. However, most clustering algorithms, such as the k-means and FCM algorithms, search for the final cluster values based on predetermined initial centers. The FCM algorithm also does not consider the spatial information of pixels and is sensitive to noise. This paper presents a new fuzzy c-means algorithm with adaptive evolutionary programming for image clustering. The features of this algorithm are: first, it does not require predetermined initial centers; evolutionary programming helps FCM search for better centers and escape bad centers at local minima. Second, both the spatial distance and the Euclidean distance are considered in the FCM clustering, so the algorithm is more robust to noise. Third, an adaptive evolutionary programming scheme is proposed, in which the mutation rule is adaptively changed by learning useful knowledge during the evolving process. Experimental results show that the new image segmentation algorithm is effective and robust on noisy images.
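    A plain fuzzy c-means loop (without the paper's evolutionary programming or spatial term) can be sketched as follows; the deterministic endpoint initialization is an illustrative choice:

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=100):
    """Plain fuzzy c-means: alternate the membership update
    u_ik ∝ d_ik^(-2/(m-1)) and the weighted center update."""
    # Illustrative deterministic init: spread centers over the data range.
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)].astype(float)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return centers, U

# Two well-separated 1-D intensity clusters.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centers, U = fcm(X)
```

    The paper replaces the fixed initialization with an evolutionary search over centers, which is what lets it escape poor local minima.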

  12. Dynamic optical aberration correction with adaptive coded apertures techniques in conformal imaging

    NASA Astrophysics Data System (ADS)

    Li, Yan; Hu, Bin; Zhang, Pengbin; Zhang, Binglong

    2015-02-01

    Conformal imaging systems are confronted with dynamic aberrations during optical design. In classical optical designs, meeting combined high requirements on field of view, optical speed, environmental adaptation and imaging quality can be achieved only by introducing an increasingly complex aberration corrector. In recent years of computational imaging, adaptive coded aperture techniques, which have several potential advantages over more traditional optical systems, have proven particularly suitable for military infrared imaging systems. The merits of this new concept include low mass, volume and moments of inertia, potentially lower costs, graceful failure modes, and steerable fields of regard with no macroscopic moving parts. An example application to conformal imaging system design, in which a set of binary coded aperture masks is optimized, is presented in this paper. Simulation results show that the optical performance is closely related to the mask design and to the optimization of the reconstruction algorithm. As a dynamic aberration corrector, a binary-amplitude mask located at the aperture stop is optimized to mitigate dynamic optical aberrations when the field of regard changes, while allowing sufficient information to be recorded by the detector for the recovery of a sharp image using digital image restoration in the conformal optical system.

  13. Development of an adaptive bilateral filter for evaluating color image difference

    NASA Astrophysics Data System (ADS)

    Wang, Zhaohui; Hardeberg, Jon Yngve

    2012-04-01

    Spatial filtering, which aims to mimic the contrast sensitivity function (CSF) of the human visual system (HVS), has previously been combined with color difference formulae for measuring color image reproduction errors. These spatial filters attenuate imperceptible information in images, unfortunately including high frequency edges, which are believed to be crucial in the process of scene analysis by the HVS. The adaptive bilateral filter represents a novel approach, which avoids the undesirable loss of edge information introduced by CSF-based filtering. The bilateral filter employs two Gaussian smoothing filters in different domains, i.e., the spatial domain and the intensity domain. We propose a method to determine the parameters, which are designed to adapt to the corresponding viewing conditions and to the quantity and homogeneity of information contained in an image. Experiments and discussions are given to support the proposal. A series of perceptual experiments was conducted to evaluate the performance of our approach. The experimental sample images were reproduced with variations in six image attributes: lightness, chroma, hue, compression, noise, and sharpness/blurriness. Pearson correlation values between the model-predicted image difference and the observed difference were employed to evaluate the performance, and to compare it with that of spatial CIELAB and an image appearance model.
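    The underlying (non-adaptive) bilateral filter can be sketched in 1-D. The two Gaussian widths `sigma_s` and `sigma_r` are the spatial- and intensity-domain parameters that the paper proposes to set adaptively; the values below are illustrative:

```python
import numpy as np

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.2, radius=5):
    """Bilateral filter: a spatial-domain Gaussian times an
    intensity-domain Gaussian, so smoothing stops at edges."""
    out = np.empty_like(signal)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        window = signal[lo:hi]
        pos = np.arange(lo, hi)
        w = (np.exp(-((pos - i) ** 2) / (2 * sigma_s ** 2))
             * np.exp(-((window - signal[i]) ** 2) / (2 * sigma_r ** 2)))
        out[i] = np.sum(w * window) / np.sum(w)
    return out

# Noisy step edge: noise is smoothed but the step is preserved.
rng = np.random.default_rng(0)
step = np.where(np.arange(100) < 50, 0.0, 1.0)
noisy = step + 0.05 * rng.standard_normal(100)
out = bilateral_1d(noisy)
```

    Samples across the step differ by about 1 intensity unit, so their range weight is essentially zero and the edge is not averaged away.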

  14. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising.

    PubMed

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as patches. Adaptive search windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. Experimental results on a standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. For experiments on clinical images, the proposed AT-PCA method can suppress noise, enhance edges, and improve image quality more effectively than the NLM and KSVD denoising methods.

  15. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising

    PubMed Central

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as patches. Adaptive search windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. Experimental results on a standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. For experiments on clinical images, the proposed AT-PCA method can suppress noise, enhance edges, and improve image quality more effectively than the NLM and KSVD denoising methods. PMID:25993566
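    The coefficient-shrinkage idea can be sketched with ordinary matrix PCA and a linear MMSE factor var/(var + noise_var); plain PCA stands in here for the paper's tensor-based transform, and the noise variance is assumed known:

```python
import numpy as np

def pca_lmmse_denoise(patches, noise_var):
    """PCA-domain linear MMSE shrinkage for a group of similar patches
    (one vectorized patch per row)."""
    mean = patches.mean(axis=0)
    Y = patches - mean
    cov = Y.T @ Y / len(Y)
    eigval, eigvec = np.linalg.eigh(cov)
    coef = Y @ eigvec
    # Estimated signal variance per component; Wiener-style shrink factor.
    sig_var = np.maximum(eigval - noise_var, 0.0)
    shrink = sig_var / (sig_var + noise_var)
    return (coef * shrink) @ eigvec.T + mean

# A group of identical patches corrupted by Gaussian noise.
rng = np.random.default_rng(2)
base = rng.standard_normal(25)
clean = np.tile(base, (20, 1))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
den = pca_lmmse_denoise(noisy, noise_var=0.09)
```

    Components whose variance is at the noise floor are shrunk toward zero, so the denoised group is closer to the clean one than the noisy input is.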

  16. Adaptive stereo medical image watermarking using non-corresponding blocks.

    PubMed

    Mohaghegh, H; Karimi, N; Soroushmehr, S M R; Samavi, S; Najarian, K

    2015-01-01

    Today, with the advent of technology in different medical imaging fields, the use of stereoscopic images has increased. Furthermore, with the rapid growth of telemedicine for remote diagnosis, treatment, and surgery, there is a need for watermarking for copyright protection and tracking of digital media. The efficient use of bandwidth for the transmission of such data is another concern. In this paper, an adaptive watermarking scheme is proposed that takes the human visual system's depth perception into account. Our proposed scheme modifies the maximum singular values of the wavelet coefficients of the stereo pair to embed the watermark bits. Experimental results show high 3D visual quality of the watermarked video frames. Moreover, comparison with a compatible state-of-the-art method shows that the proposed method is highly robust against attacks such as AWGN, salt-and-pepper noise, and JPEG compression.

  17. Fourier transform digital holographic adaptive optics imaging system

    PubMed Central

    Liu, Changgeng; Yu, Xiao; Kim, Myung K.

    2013-01-01

    A Fourier transform digital holographic adaptive optics imaging system and its basic principles are proposed. The CCD is placed at the exact Fourier transform plane of the pupil of the eye lens, so the spherical curvature introduced by the optics other than the eye lens itself is eliminated. The CCD is also at the image plane of the target. The point-spread function of the system is directly recorded, making it easier to determine the correct guide-star hologram. The light signal is also stronger at the CCD, especially for phase-aberration sensing, and numerical propagation is avoided. The sensor aperture no longer limits the resolution, which opens the possibility of using low-coherence or incoherent illumination. The system thus becomes more efficient and flexible. Although it is intended for ophthalmic use, it also shows potential application in microscopy. The robustness and feasibility of this compact system are demonstrated by simulations and experiments using scattering objects. PMID:23262541

  18. Image processing for cameras with fiber bundle image relay.

    PubMed

    Olivas, Stephen J; Arianpour, Ashkan; Stamenov, Igor; Morrison, Rick; Stack, Ron A; Johnson, Adam R; Agurok, Ilya P; Ford, Joseph E

    2015-02-10

    Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems suffer from artifacts due to image sampling and incoherent light transfer by the fiber bundle as well as resampling by the focal plane, resulting in a fixed obscuration pattern. Here, we describe digital image processing techniques to improve image quality in a compact 126° field-of-view, 30 megapixel panoramic imager, where a 12 mm focal length F/1.35 lens made of concentric glass surfaces forms a spherical image surface, which is fiber-coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image formation onto the 2.5 μm pitch fiber bundle, image transfer by the fiber bundle, and sensing by a 1.75 μm pitch backside illuminated color focal plane. We demonstrate methods to mitigate moiré artifacts and local obscuration, correct for sphere to plane mapping distortion and vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with a 10× larger commercial camera with comparable field-of-view and light collection.

  19. Strategy for adaptive process control for a column flotation unit

    SciTech Connect

    Karr, C.L.; Ferguson, C.R.

    1994-12-31

    Researchers at the U.S. Bureau of Mines (USBM) have developed adaptive process control systems in which genetic algorithms (GAs) are used to augment fuzzy logic controllers (FLCs). Together, GAs and FLCs possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. In this paper, the details of an ongoing research effort to develop and implement an adaptive process control system for a column flotation unit are discussed. Column flotation units are used extensively in the mineral processing industry to recover valuable minerals from their ores.

  20. Development of digital processing method of microfocus X-ray images

    NASA Astrophysics Data System (ADS)

    Staroverov, N. E.; Kholopova, E. D.; Gryaznov, A. Yu; Zhamova, K. K.

    2017-02-01

    The article describes the basic methods of digital processing for X-ray images. It also proposes a method for background alignment based on modeling the distorting function and subtracting it from the image. Finally, an improved locally adaptive median filtering algorithm is proposed, and its effectiveness is verified experimentally.
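
    The two steps described, background alignment by subtracting a modeled distorting function and locally adaptive median filtering, can be sketched as below. The polynomial background model, the 3x3 window, and the threshold factor `k` are illustrative assumptions, not the authors' exact choices.

```python
import numpy as np

def flatten_background(image, order=2):
    """Model the distorting (shading) function as a low-order 2-D polynomial
    fitted by least squares, then subtract it while keeping the mean level."""
    H, W = image.shape
    yy, xx = np.mgrid[0:H, 0:W]
    x = xx.ravel() / W
    y = yy.ravel() / H
    terms = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack(terms, axis=1)
    coef, *_ = np.linalg.lstsq(A, image.ravel().astype(float), rcond=None)
    background = (A @ coef).reshape(H, W)
    return image - background + background.mean()

def _median3(img):
    """3x3 median computed by stacking the nine edge-padded shifts."""
    p = np.pad(img, 1, mode='edge')
    H, W = img.shape
    shifts = [p[i:i + H, j:j + W] for i in range(3) for j in range(3)]
    return np.median(np.stack(shifts), axis=0)

def adaptive_median(image, k=3.0):
    """Locally adaptive median filter: replace a pixel only where it deviates
    from the local median by more than k times a local deviation scale."""
    img = image.astype(float)
    med = _median3(img)
    dev = np.abs(img - med)
    scale = _median3(dev) + 1e-6
    return np.where(dev > k * scale, med, img)
```

    Filtering only where the deviation is locally abnormal removes impulse noise while leaving genuine fine detail untouched, which is the point of making the median filter locally adaptive.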

  1. CT Image Processing Using Public Digital Networks

    PubMed Central

    Rhodes, Michael L.; Azzawi, Yu-Ming; Quinn, John F.; Glenn, William V.; Rothman, Stephen L.G.

    1984-01-01

    Nationwide commercial computer communication is now commonplace for those applications where digital dialogues are generally short and widely distributed, and where bandwidth does not exceed that of dial-up telephone lines. Image processing using such networks is prohibitive because of the large volume of data inherent to digital pictures. With a blend of increasing bandwidth and distributed processing, network image processing becomes possible. This paper examines characteristics of a digital image processing service for a nationwide network of CT scanner installations. Issues of image transmission, data compression, distributed processing, software maintenance, and interfacility communication are also discussed. Included are results that show the volume and type of processing experienced by a network of over 50 CT scanners for the last 32 months.

  2. Image processing of digital chest ionograms.

    PubMed

    Yarwood, J R; Moores, B M

    1988-10-01

    A number of image-processing techniques have been applied to a digital ionographic chest image in order to evaluate their possible effects on this type of image. In order to quantify any effect, a simulated lesion was superimposed on the image at a variety of locations representing different types of structural detail. Visualization of these lesions was evaluated by a number of observers both pre- and post-processing operations. The operations employed included grey-scale transformations, histogram operations, edge-enhancement and smoothing functions. The resulting effects of these operations on the visualization of the simulated lesions are discussed.

  3. Fast unsupervised Bayesian image segmentation with adaptive spatial regularisation.

    PubMed

    Pereyra, Marcelo; McLaughlin, Stephen

    2017-03-15

    This paper presents a new Bayesian estimation technique for hidden Potts-Markov random fields with unknown regularisation parameters, with application to fast unsupervised K-class image segmentation. The technique is derived by first removing the regularisation parameter from the Bayesian model by marginalisation, followed by a small-variance-asymptotic (SVA) analysis in which the spatial regularisation and the integer-constrained terms of the Potts model are decoupled. The evaluation of this SVA Bayesian estimator is then relaxed into a problem that can be computed efficiently by iteratively solving a convex total-variation denoising problem and a least-squares clustering (K-means) problem, both of which can be solved straightforwardly, even in high dimensions, and with parallel computing techniques. This leads to a fast, fully unsupervised Bayesian image segmentation methodology in which the strength of the spatial regularisation is adapted automatically to the observed image during the inference procedure, and which can be easily applied in large 2D and 3D scenarios or in applications requiring low computing times. Experimental results on synthetic and real images, as well as extensive comparisons with state-of-the-art algorithms, confirm that the proposed methodology offers extremely fast convergence and produces accurate segmentation results, with the important additional advantage of self-adjusting regularisation parameters.
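
    The core iteration, alternating a convex TV denoising step with a K-means clustering step, can be sketched as follows. The gradient-descent TV solver, the step sizes, and the number of rounds are simplifying assumptions; the paper uses a proper convex solver and adapts the regularisation automatically.

```python
import numpy as np

def tv_denoise(img, lam=0.5, n_iter=60, step=0.2):
    """Smoothed-TV denoising by explicit gradient descent; a simple stand-in
    for the convex TV solver used in the paper."""
    u = img.astype(float).copy()
    eps = 1e-6
    for _ in range(n_iter):
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - img) - lam * div)
    return u

def kmeans_1d(values, k, n_iter=20):
    """Plain K-means on scalar intensities."""
    centers = np.linspace(values.min(), values.max(), k)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return centers, labels

def segment(img, k=2, rounds=3):
    """Alternate TV denoising and K-means quantisation, in the spirit of the
    SVA relaxation described above."""
    u = img.astype(float)
    labels = np.zeros(img.size, dtype=int)
    for _ in range(rounds):
        u = tv_denoise(u)
        centers, labels = kmeans_1d(u.ravel(), k)
        u = centers[labels].reshape(img.shape)  # re-quantise before next round
    return labels.reshape(img.shape)
```

    Each round smooths spatially (TV) and then snaps intensities to K classes (K-means), so the label map becomes both piecewise-constant and spatially regular.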

  4. Sensorless adaptive optics system based on image second moment measurements

    NASA Astrophysics Data System (ADS)

    Agbana, Temitope E.; Yang, Huizhen; Soloviev, Oleg; Vdovin, Gleb; Verhaegen, Michel

    2016-04-01

    This paper presents experimental results of a static aberration control algorithm based on the linear relation between the mean square of the aberration gradient and the second moment of the point spread function, used to generate the control signal input for a deformable mirror (DM). Results presented in the work of Yang et al.1 suggested good feasibility of the method for correction of static aberration for point and extended sources; however, a practical realisation of the algorithm had not been demonstrated. The goal of this article is to check the method experimentally under the real conditions of noise, the finite dynamic range of the imaging camera, and system misalignments. The experiments have shown a strong dependence of the linearity of the relationship on image noise and overall image intensity, which depends on the aberration level. The restoration capability and the rate of convergence of the AO system for aberrations generated by the deformable mirror are also experimentally investigated. The presented approach, as well as the experimental results, finds practical application in the compensation of static aberration in adaptive microscopic imaging systems.
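
    The metric driving such a sensorless scheme, the second moment of the recorded intensity distribution about its centroid, is straightforward to compute; a minimal sketch:

```python
import numpy as np

def second_moment(psf):
    """Second moment of an intensity distribution about its centroid,
    the sharpness metric a sensorless AO loop seeks to minimise."""
    psf = np.asarray(psf, dtype=float)
    total = psf.sum()
    y, x = np.mgrid[0:psf.shape[0], 0:psf.shape[1]]
    cx = (x * psf).sum() / total
    cy = (y * psf).sum() / total
    return (((x - cx) ** 2 + (y - cy) ** 2) * psf).sum() / total
```

    A diffraction-limited spot yields a small second moment, while an aberrated, spread-out spot yields a large one, so minimising this scalar over DM commands drives the correction.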

  5. Extreme learning machine and adaptive sparse representation for image classification.

    PubMed

    Cao, Jiuwen; Zhang, Kai; Luo, Minxia; Yin, Chun; Lai, Xiaoping

    2016-09-01

    Recent research has shown the speed advantage of extreme learning machine (ELM) and the accuracy advantage of sparse representation classification (SRC) in the area of image classification. Those two methods, however, have their respective drawbacks, e.g., in general, ELM is known to be less robust to noise while SRC is known to be time-consuming. Consequently, ELM and SRC complement each other in computational complexity and classification accuracy. In order to unify such mutual complementarity and thus further enhance the classification performance, we propose an efficient hybrid classifier to exploit the advantages of ELM and SRC in this paper. More precisely, the proposed classifier consists of two stages: first, an ELM network is trained by supervised learning. Second, a discriminative criterion about the reliability of the obtained ELM output is adopted to decide whether the query image can be correctly classified or not. If the output is reliable, the classification will be performed by ELM; otherwise the query image will be fed to SRC. Meanwhile, in the stage of SRC, a sub-dictionary that is adaptive to the query image instead of the entire dictionary is extracted via the ELM output. The computational burden of SRC thus can be reduced. Extensive experiments on handwritten digit classification, landmark recognition and face recognition demonstrate that the proposed hybrid classifier outperforms ELM and SRC in classification accuracy with outstanding computational efficiency.
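
    The two-stage gating described above can be sketched as follows. The minimal ELM and the top-two-score margin criterion are illustrative assumptions; the paper's reliability criterion and SRC stage (here a caller-supplied fallback) are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Minimal extreme learning machine: fixed random hidden layer with a
    least-squares readout."""
    def __init__(self, n_hidden=64):
        self.n_hidden = n_hidden
    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        T = np.eye(n_classes)[y]                       # one-hot targets
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)
        return self
    def scores(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

def hybrid_predict(elm, X, src_fallback, margin=0.2):
    """If the gap between the top two ELM scores is small, the output is
    deemed unreliable and the query is routed to the (slower) SRC stage."""
    s = elm.scores(X)
    order = np.sort(s, axis=1)
    reliable = (order[:, -1] - order[:, -2]) >= margin
    pred = s.argmax(axis=1)
    for i in np.where(~reliable)[0]:
        pred[i] = src_fallback(X[i])
    return pred
```

    Because most queries are decided by the fast ELM pass, the expensive sparse-coding stage runs only on the ambiguous minority, which is how the hybrid keeps SRC-level accuracy at near-ELM cost.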

  6. Adaptive optics scanning laser ophthalmoscope imaging: technology update

    PubMed Central

    Merino, David; Loza-Alvarez, Pablo

    2016-01-01

    Adaptive optics (AO) retinal imaging has become very popular in the past few years, especially within the ophthalmic research community. Several different retinal techniques, such as fundus imaging cameras or optical coherence tomography systems, have been coupled with AO in order to produce impressive images showing individual cell mosaics over different layers of the in vivo human retina. The combination of AO with scanning laser ophthalmoscopy has been extensively used to generate impressive images of the human retina with unprecedented resolution, showing individual photoreceptor cells, retinal pigment epithelium cells, as well as microscopic capillary vessels, or the nerve fiber layer. Over the past few years, the technique has evolved to develop several different applications not only in the clinic but also in different animal models, thanks to technological developments in the field. These developments have specific applications to different fields of investigation, which are not limited to the study of retinal diseases but also to the understanding of the retinal function and vision science. This review is an attempt to summarize these developments in an understandable and brief manner in order to guide the reader into the possibilities that AO scanning laser ophthalmoscopy offers, as well as its limitations, which should be taken into account when planning on using it. PMID:27175057

  7. An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2016-01-01

    A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, implementing an adaptive-Gaussian (AG) fuzzy membership method, a compressed sensing (CS) technique, and a total variation (TV) based gradient descent reconstruction algorithm, is proposed for the fusion of infrared and visible images. Compared with the wavelet, contourlet, or any other multi-resolution analysis method, NSCT has many evident advantages, such as multi-scale and multi-direction representation and translation invariance. A fuzzy set is characterized by its membership function (MF), and the commonly known Gaussian fuzzy membership degree can be introduced to establish adaptive control of the fusion processing. The compressed sensing technique can sparsely sample the image information at a certain sampling rate, and the sparse signal can be recovered by solving a convex problem with a gradient-descent-based iterative algorithm. In the proposed fusion process, the pre-enhanced infrared image and the visible image are first decomposed into low-frequency and high-frequency subbands via the NSCT. The low-frequency coefficients are fused using the adaptive regional average energy rule; the highest-frequency coefficients are fused using the maximum absolute selection rule; the other high-frequency coefficients are sparsely sampled, fused using the adaptive-Gaussian regional standard deviation rule, and then recovered by the total variation based gradient descent recovery algorithm. Experimental results and human visual perception illustrate the effectiveness and advantages of the proposed fusion approach. The efficiency and robustness are also analyzed and discussed through different evaluation measures, such as the standard deviation, Shannon entropy, root-mean-square error, mutual information, and the edge-based similarity index.

  8. Shape adaptive, robust iris feature extraction from noisy iris images.

    PubMed

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise-removal step only detects noisy parts of the iris region, and features extracted from them are excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape adaptive wavelet transform and the shape adaptive Gabor-wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds via a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image, which decreases the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that using the shape adaptive Gabor-wavelet technique improves the recognition accuracy.

  9. Extended adaptive filtering for wide-angle SAR image formation

    NASA Astrophysics Data System (ADS)

    Wang, Yanwei; Roberts, William; Li, Jian

    2005-05-01

    For two-dimensional (2-D) spectral analysis, adaptive filtering based technologies, such as CAPON and APES (Amplitude and Phase EStimation), were developed under the implicit assumption that the data sets are rectangular. However, in real SAR applications, especially in wide-angle cases, the collected data sets are generally non-rectangular. This raises the problem of how to extend the original adaptive filtering based algorithms to such scenarios. In this paper, we propose an extended adaptive filtering (EAF) approach, which includes Extended APES (E-APES) and Extended CAPON (E-CAPON), for arbitrarily shaped 2-D data. The EAF algorithms adopt a missing-data approach in which the unavailable data samples close to the collected data set are assumed missing. Using a group of filter banks with varying sizes, these algorithms are non-iterative and do not require the estimation of the unavailable samples. The improved imaging results of the proposed algorithms are demonstrated by applying them to two different SAR data sets.

  10. Adaptation of commercial microscopes for advanced imaging applications

    NASA Astrophysics Data System (ADS)

    Brideau, Craig; Poon, Kelvin; Stys, Peter

    2015-03-01

    Today's commercially available microscopes offer a wide array of options to accommodate common imaging experiments. Occasionally, an experimental goal will require an unusual light source, filter, or even irregular sample that is not compatible with existing equipment. In these situations the ability to modify an existing microscopy platform with custom accessories can greatly extend its utility and allow for experiments not possible with stock equipment. Light source conditioning/manipulation such as polarization, beam diameter or even custom source filtering can easily be added with bulk components. Custom and after-market detectors can be added to external ports using optical construction hardware and adapters. This paper will present various examples of modifications carried out on commercial microscopes to address both atypical imaging modalities and research needs. Violet and near-ultraviolet source adaptation, custom detection filtering, and laser beam conditioning and control modifications will be demonstrated. The availability of basic `building block' parts will be discussed with respect to user safety, construction strategies, and ease of use.

  11. Adaptive Optics Imaging Survey of Luminous Infrared Galaxies

    SciTech Connect

    Laag, E A; Canalizo, G; van Breugel, W; Gates, E L; de Vries, W; Stanford, S A

    2006-03-13

    We present high resolution imaging observations of a sample of previously unidentified far-infrared galaxies at z < 0.3. The objects were selected by cross-correlating the IRAS Faint Source Catalog with the VLA FIRST catalog and the HST Guide Star Catalog to allow for adaptive optics observations. We found two new ULIGs (with L_FIR ≥ 10^12 L_⊙) and 19 new LIGs (with L_FIR ≥ 10^11 L_⊙). Twenty of the galaxies in the sample were imaged with either the Lick or Keck adaptive optics systems in H or K′. Galaxy morphologies were determined using the two-dimensional fitting program GALFIT and the residuals examined to look for interesting structure. The morphologies reveal that at least 30% are involved in tidal interactions, with 20% being clear mergers. An additional 50% show signs of possible interaction. Line ratios were used to determine the powering mechanism; of the 17 objects in the sample showing clear emission lines, four are active galactic nuclei and seven are starburst galaxies. The rest exhibit a combination of both phenomena.

  12. UAV multiple image dense matching based on self-adaptive patch

    NASA Astrophysics Data System (ADS)

    Zhu, Jin; Ding, Yazhou; Xiao, Xiongwu; Guo, Bingxuan; Li, Deren; Yang, Nan; Zhang, Weilong; Huang, Xiangxiang; Li, Linhui; Peng, Zhe; Pan, Fei

    2015-12-01

    Drawing on state-of-the-art multi-view dense matching methods, this article proposes a UAV multiple-image dense matching algorithm based on self-adaptive patches (UAV-AP), designed for the special characteristics of UAV images. The main idea of matching propagation based on self-adaptive patches is to build patches centered on seed points that are already matched. The extent and shape of the patches adapt to the terrain relief automatically: when the surface is smooth, the patch grows to cover the whole smooth terrain; when the terrain is rough, the patch shrinks to describe the details of the surface. With this approach, the UAV image sequences and the given or previously triangulated orientation elements are taken as inputs. The main processing procedures are as follows: (1) multi-view initial feature matching, (2) matching propagation based on self-adaptive patches, (3) filtering of erroneous matching points. Finally, the algorithm outputs a dense colored point cloud. Experiments indicate that this method surpasses existing related algorithms in efficiency, and the matching precision is also quite good.

  13. Process perspective on image quality evaluation

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape, and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations of the test image content, but not of the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of an easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  14. On Processing Hexagonally Sampled Images

    DTIC Science & Technology

    2011-07-01

    Defines distance measures between two points p1 = (a1, r1, c1) and p2 = (a2, r2, c2) on the hexagonally sampled image plane, including a Euclidean-type distance and a "City-Block" distance. Neuromorphic Infrared Sensor (NIFS). DISTRIBUTION A. Approved for public release, distribution unlimited. (96ABW-2011-0325)

  15. Image processing technology for enhanced situational awareness

    NASA Astrophysics Data System (ADS)

    Page, S. F.; Smith, M. I.; Hickman, D.

    2009-09-01

    This paper discusses the integration of a number of advanced image and data processing technologies in support of the development of next-generation Situational Awareness systems for counter-terrorism and crime fighting applications. In particular, the paper discusses the European Union Framework 7 'SAMURAI' project, which is investigating novel approaches to interactive Situational Awareness using cooperative networks of heterogeneous imaging sensors. Specific focus is given to novel Data Fusion aspects of the research, which aim to improve system performance through intelligently fusing both image and non-image data sources, resolving human-machine conflicts, and refining the Situational Awareness picture. In addition, the paper highlights some recent advances in supporting image processing technologies. Finally, future trends in image-based Situational Awareness are identified, such as Post-Event Analysis (also known as 'Back-Tracking'), and the associated technical challenges are discussed.

  16. Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images

    PubMed Central

    Cunefare, David; Cooper, Robert F.; Higgins, Brian; Katz, David F.; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina

    2016-01-01

    Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice’s coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice’s coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images. PMID:27231641

  17. Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images.

    PubMed

    Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina

    2016-05-01

    Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
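
    The agreement measure used in these validations, Dice's coefficient, is simple to compute; a minimal sketch on binary detection masks (matching detected cone positions to an expert's marks is a separate step):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice's coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom
```

    A value of 0.95 against an expert grader, matching the 0.94 inter-observer figure, means the algorithm disagrees with one expert about as often as two experts disagree with each other.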

  18. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744

  19. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures.
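
    The IHS fusion step mentioned above can be illustrated with a minimal sketch: the intensity of the upsampled low-resolution color image is replaced by a high-resolution intensity channel while the per-pixel color offsets (a crude stand-in for hue and saturation) are preserved. Function and parameter names are illustrative; the paper additionally applies directionally-adaptive regularization and NLM filtering.

```python
import numpy as np

def ihs_fusion(rgb_up, intensity_hr):
    """Inject a high-resolution intensity channel into an upsampled
    low-resolution RGB image (values assumed in [0, 1])."""
    intensity_lr = rgb_up.mean(axis=2)          # crude intensity component
    detail = intensity_hr - intensity_lr        # high-frequency detail to add
    return np.clip(rgb_up + detail[..., None], 0.0, 1.0)
```

    When the supplied intensity equals the image's own, the operation is the identity; otherwise the spatial detail of the HR channel is transferred to all three bands.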

  20. Energy preserving QMF for image processing.

    PubMed

    Lian, Jian-ao; Wang, Yonghui

    2014-07-01

    Implementation of new biorthogonal filter banks (BFB) for image compression and denoising is performed, using test images with diversified characteristics. These new BFBs are linear-phase, have odd lengths, and possess a critical feature: the filters preserve signal energy very well. Experimental results show that the proposed filter banks demonstrate promising performance improvement over filter banks widely used in the image processing area, such as the CDF 9/7.

  1. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  2. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.

  3. Nonlinear Optical Image Processing with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Deiss, Ron (Technical Monitor)

    1994-01-01

    The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
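
    The film's logarithmic response followed by Fourier-plane filtering is, in software terms, homomorphic filtering: the log converts multiplicative noise to additive noise, which a linear Fourier-domain filter can then remove. A minimal numerical sketch (the `mask` that zeroes the noise peaks is assumed supplied by the user):

```python
import numpy as np

def homomorphic_notch(img, mask):
    """Log transform -> Fourier-plane notch filter -> exponentiate back,
    a software analog of the film's logarithmic amplitude response."""
    logim = np.log(np.clip(img, 1e-6, None))   # multiplicative -> additive
    F = np.fft.fft2(logim)
    F *= mask                                   # zero out the noise peaks
    return np.exp(np.real(np.fft.ifft2(F)))
```

    For a deterministic sinusoidal fringe, notching the fringe frequency and its first log-domain harmonic removes most of the multiplicative distortion.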

  4. Adaptive HIFU noise cancellation for simultaneous therapy and imaging using an integrated HIFU/imaging transducer

    NASA Astrophysics Data System (ADS)

    Jeong, Jong Seob; Cannata, Jonathan Matthew; Shung, K. Kirk

    2010-04-01

    It was previously demonstrated that it is feasible to simultaneously perform ultrasound therapy and imaging of a coagulated lesion during treatment with an integrated transducer that is capable of high intensity focused ultrasound (HIFU) and B-mode ultrasound imaging. It was found that coded excitation and fixed notch filtering upon reception could significantly reduce interference caused by the therapeutic transducer. During HIFU sonication, the imaging signal generated with coded excitation and fixed notch filtering had a range side-lobe level of less than -40 dB, while traditional short-pulse excitation and fixed notch filtering produced a range side-lobe level of -20 dB. The shortcoming is, however, that relatively complicated electronics may be needed to utilize coded excitation in an array imaging system. It is for this reason that in this paper an adaptive noise canceling technique is proposed to improve image quality by minimizing not only the therapeutic interference, but also the remnant side-lobe 'ripples' when using traditional short-pulse excitation. The performance of this technique was verified through simulation and experiments using a prototype integrated HIFU/imaging transducer. Although it is known that the remnant ripples are related to the notch attenuation value of the fixed notch filter, in reality it is difficult to find the optimal notch attenuation value due to changes in the target or the medium resulting from motion or differing acoustic properties, even during a single sonication pulse. In contrast, the proposed adaptive noise canceling technique is capable of optimally minimizing both the therapeutic interference and residual ripples without such constraints. The prototype integrated HIFU/imaging transducer is composed of three rectangular elements. The 6 MHz center element is used for imaging and the outer two identical 4 MHz elements work together to transmit the HIFU beam. Two HIFU elements of 14.4 mm × 20.0 mm dimensions could
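
    The paper's adaptive canceller is specific to the authors' system, but the generic building block behind adaptive noise canceling is the LMS canceller: an FIR filter on a reference channel adapts so that its output matches the interference in the primary channel, and the error signal is the cleaned output. A hedged sketch (the tap count, step size, and synthetic signals below are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def lms_cancel(primary, reference, taps=8, mu=0.01):
    """LMS adaptive noise canceller: adapt FIR weights on the reference
    channel so its output tracks the interference in the primary channel;
    the residual (error) is the interference-free estimate."""
    w = np.zeros(taps)
    out = np.zeros_like(primary)
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]     # most recent sample first
        y = w @ x                           # interference estimate
        e = primary[n] - y                  # cleaned output sample
        w += 2 * mu * e * x                 # LMS weight update
        out[n] = e
    return out
```

    With a narrowband interferer and a correlated reference, the FIR filter converges to the phase/amplitude correction needed for near-complete cancellation while leaving uncorrelated signal content intact.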

  5. Wavefront sensorless adaptive optics optical coherence tomography for in vivo retinal imaging in mice

    PubMed Central

    Jian, Yifan; Xu, Jing; Gradowski, Martin A.; Bonora, Stefano; Zawadzki, Robert J.; Sarunic, Marinko V.

    2014-01-01

    We present wavefront sensorless adaptive optics (WSAO) Fourier domain optical coherence tomography (FD-OCT) for in vivo small animal retinal imaging. WSAO is attractive especially for mouse retinal imaging because it simplifies optical design and eliminates the need for wavefront sensing, which is difficult in the small animal eye. GPU accelerated processing of the OCT data permitted real-time extraction of image quality metrics (intensity) for arbitrarily selected retinal layers to be optimized. Modal control of a commercially available segmented deformable mirror (IrisAO Inc.) provided rapid convergence using a sequential search algorithm. Image quality improvements with WSAO OCT are presented for both pigmented and albino mouse retinal data, acquired in vivo. PMID:24575347

  6. Wavefront sensorless adaptive optics optical coherence tomography for in vivo retinal imaging in mice.

    PubMed

    Jian, Yifan; Xu, Jing; Gradowski, Martin A; Bonora, Stefano; Zawadzki, Robert J; Sarunic, Marinko V

    2014-02-01

    We present wavefront sensorless adaptive optics (WSAO) Fourier domain optical coherence tomography (FD-OCT) for in vivo small animal retinal imaging. WSAO is attractive especially for mouse retinal imaging because it simplifies optical design and eliminates the need for wavefront sensing, which is difficult in the small animal eye. GPU accelerated processing of the OCT data permitted real-time extraction of image quality metrics (intensity) for arbitrarily selected retinal layers to be optimized. Modal control of a commercially available segmented deformable mirror (IrisAO Inc.) provided rapid convergence using a sequential search algorithm. Image quality improvements with WSAO OCT are presented for both pigmented and albino mouse retinal data, acquired in vivo.
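
    A sequential (mode-by-mode) search of the kind used for modal WSAO control can be sketched as follows; the `metric` callback standing in for the image-quality readout, and the candidate coefficient grid, are illustrative assumptions:

```python
import numpy as np

def sequential_search(metric, n_modes, candidates, passes=2):
    """Wavefront-sensorless modal optimization: sweep each mirror mode in
    turn over a set of candidate coefficients, keeping the value that
    maximizes the image-quality metric while the other modes are held."""
    coeffs = np.zeros(n_modes)
    for _ in range(passes):
        for m in range(n_modes):
            scores = []
            for c in candidates:
                trial = coeffs.copy()
                trial[m] = c             # perturb one mode, hold the rest
                scores.append(metric(trial))
            coeffs[m] = candidates[int(np.argmax(scores))]
    return coeffs
```

    In a real system each call to `metric` costs one image acquisition, so the candidate grid and number of passes trade convergence speed against exposure time.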

  7. Adaptive Imaging Methods using a Rotating Modulation Collimator

    DTIC Science & Technology

    2011-03-01

    …an example of two mask designs with similar pitch. The pitch is measured from the left edge of one slit to the left edge of the next slit. … This block diagram traces the RMC data acquisition process, which includes the pulse processing … a comparison technique developed by Zhou Wang used to measure the relative performance of reconstructed images [5]. The third chapter includes a …

  8. Local adaptive approach toward segmentation of microscopic images of activated sludge flocs

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Burhan; Nisar, Humaira; Ng, Choon Aun; Lo, Po Kim; Yap, Vooi Voon

    2015-11-01

    Activated sludge process is a widely used method to treat domestic and industrial effluents. The conditions of an activated sludge wastewater treatment plant (AS-WWTP) are related to the morphological properties of flocs (microbial aggregates) and filaments, and are required to be monitored for normal operation of the plant. Image processing and analysis is a potentially time-efficient monitoring tool for AS-WWTPs. Local adaptive segmentation algorithms are proposed for bright-field microscopic images of activated sludge flocs. Two basic modules are suggested for Otsu thresholding-based local adaptive algorithms with irregular illumination compensation. The performance of the algorithms has been compared with the state-of-the-art local adaptive algorithms of Sauvola, Bradley, Feng, and c-mean. The comparisons are done using a number of region- and nonregion-based metrics at different microscopic magnifications and quantifications of flocs. The performance metrics show that the proposed algorithms performed better than, and in some cases comparably to, the state-of-the-art algorithms. The performance metrics were also assessed subjectively for their suitability for segmentation of activated sludge images. The region-based metrics such as false negative ratio, sensitivity, and negative predictive value gave inconsistent results compared to other segmentation assessment metrics.
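
    A tile-wise Otsu threshold is one simple way to make segmentation locally adaptive, so that a slowly varying background does not bias a single global threshold. The sketch below is a generic stand-in for this family of methods, not the authors' algorithm:

```python
import numpy as np

def otsu_threshold(vals):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, edges = np.histogram(vals, bins=64)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()
    best_t, best_var = centers[0], -1.0
    for k in range(1, len(centers)):
        w0, w1 = w[:k].sum(), w[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (w[:k] * centers[:k]).sum() / w0
        m1 = (w[k:] * centers[k:]).sum() / w1
        var_b = w0 * w1 * (m0 - m1) ** 2
        if var_b > best_var:
            best_var, best_t = var_b, centers[k]
    return best_t

def local_otsu(img, tile=32):
    """Apply Otsu per tile so a smooth illumination gradient cannot bias
    one global threshold (a crude form of illumination compensation)."""
    out = np.zeros(img.shape, dtype=bool)
    for i in range(0, img.shape[0], tile):
        for j in range(0, img.shape[1], tile):
            block = img[i:i + tile, j:j + tile]
            out[i:i + tile, j:j + tile] = block > otsu_threshold(block.ravel())
    return out
```

    The tile size is the key parameter: each tile must be small enough that illumination is roughly constant within it, yet large enough to contain both foreground and background pixels.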

  9. Digital Image Processing in Private Industry.

    ERIC Educational Resources Information Center

    Moore, Connie

    1986-01-01

    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  10. Adapting the Transtheoretical Model of Change to the Bereavement Process

    ERIC Educational Resources Information Center

    Calderwood, Kimberly A.

    2011-01-01

    Theorists currently believe that bereaved people undergo some transformation of self rather than returning to their original state. To advance our understanding of this process, this article presents an adaptation of Prochaska and DiClemente's transtheoretical model of change as it could be applied to the journey that bereaved individuals…

  11. Examining Teacher Thinking: Constructing a Process to Design Curricular Adaptations.

    ERIC Educational Resources Information Center

    Udvari-Solner, Alice

    1996-01-01

    This description of a curricular adaptation decision-making process focuses on tenets of reflective practice as teachers design instruction for students in heterogeneous classrooms. A case example illustrates how an elementary teaching team transformed lessons to accommodate a wide range of learners in a multiage first- and second-grade classroom.…

  12. Behavioral training promotes multiple adaptive processes following acute hearing loss

    PubMed Central

    Keating, Peter; Rosenior-Patten, Onayomi; Dahmen, Johannes C; Bell, Olivia; King, Andrew J

    2016-01-01

    The brain possesses a remarkable capacity to compensate for changes in inputs resulting from a range of sensory impairments. Developmental studies of sound localization have shown that adaptation to asymmetric hearing loss can be achieved either by reinterpreting altered spatial cues or by relying more on those cues that remain intact. Adaptation to monaural deprivation in adulthood is also possible, but appears to lack such flexibility. Here we show, however, that appropriate behavioral training enables monaurally-deprived adult humans to exploit both of these adaptive processes. Moreover, cortical recordings in ferrets reared with asymmetric hearing loss suggest that these forms of plasticity have distinct neural substrates. An ability to adapt to asymmetric hearing loss using multiple adaptive processes is therefore shared by different species and may persist throughout the lifespan. This highlights the fundamental flexibility of neural systems, and may also point toward novel therapeutic strategies for treating sensory disorders. DOI: http://dx.doi.org/10.7554/eLife.12264.001 PMID:27008181

  13. Adaptive beamforming for array signal processing in aeroacoustic measurements.

    PubMed

    Huang, Xun; Bai, Long; Vinogradov, Igor; Peers, Edward

    2012-03-01

    Phased microphone arrays have become an important tool in the localization of noise sources for aeroacoustic applications. In most practical aerospace cases the conventional beamforming algorithm of the delay-and-sum type has been adopted. Conventional beamforming cannot take advantage of knowledge of the noise field, and thus has poorer resolution in the presence of noise and interference. Adaptive beamforming has been used for more than three decades to address these issues and has already achieved various degrees of success in areas of communication and sonar. In this work an adaptive beamforming algorithm designed specifically for aeroacoustic applications is discussed and applied to practical experimental data. The results show that adaptive beamforming can save significant post-processing time for a deconvolution method. For example, the adaptive beamforming method is able to reduce the DAMAS computation time by at least 60% for the practical case considered in this work. Therefore, adaptive beamforming can be considered as a promising signal processing method for aeroacoustic measurements.
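
    The contrast between delay-and-sum and adaptive beamforming can be illustrated with the standard MVDR (minimum-variance distortionless response) weights w = R⁻¹a / (aᴴR⁻¹a) for a narrowband uniform line array. The scenario below is synthetic and not from the paper:

```python
import numpy as np

def steering(n, theta, d=0.5):
    """Steering vector of an n-element uniform line array (d in wavelengths)."""
    return np.exp(2j * np.pi * d * np.arange(n) * np.sin(theta))

def mvdr_weights(R, a):
    """MVDR: minimize output power subject to unit gain toward direction a."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)
```

    Unlike delay-and-sum (whose weights are just a/n and ignore the noise field), MVDR uses the measured covariance R to place a deep null on a strong interferer while keeping unit gain in the look direction.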

  14. Personal Computer (PC) based image processing applied to fluid mechanics

    NASA Astrophysics Data System (ADS)

    Cho, Y.-C.; McLachlan, B. G.

    1987-10-01

    A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
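
    The convolution interpolation with an adaptive Gaussian window can be sketched as follows, with the window width set per grid node from the distance to the k-th nearest sample so that sparse regions receive wider smoothing. The adaptation rule is an assumption for illustration; the abstract does not specify one:

```python
import numpy as np

def gaussian_interp(points, values, grid_x, grid_y, k=3):
    """Interpolate scattered samples onto a regular grid by Gaussian-weighted
    averaging, with the window width adapted to the local sample density."""
    out = np.zeros((len(grid_y), len(grid_x)))
    for iy, gy in enumerate(grid_y):
        for ix, gx in enumerate(grid_x):
            d2 = (points[:, 0] - gx) ** 2 + (points[:, 1] - gy) ** 2
            sigma2 = np.sort(d2)[k - 1] + 1e-12   # adaptive window width
            w = np.exp(-d2 / (2 * sigma2))
            out[iy, ix] = (w @ values) / w.sum()
    return out
```

    The normalized weighted average reproduces smooth fields well in densely sampled regions and degrades gracefully to broad averaging where the streaks are sparse.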

  15. Personal Computer (PC) based image processing applied to fluid mechanics

    NASA Technical Reports Server (NTRS)

    Cho, Y.-C.; Mclachlan, B. G.

    1987-01-01

    A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.

  16. ADAPTIVE OPTICS IMAGES OF KEPLER OBJECTS OF INTEREST

    SciTech Connect

    Adams, E. R.; Dupree, A. K.; Ciardi, D. R.; Gautier, T. N. III; Kulesa, C.; McCarthy, D.

    2012-08-15

    All transiting planets are at risk of contamination by blends with nearby, unresolved stars. Blends dilute the transit signal, causing the planet to appear smaller than it really is, or produce a false-positive detection when the target star is blended with eclipsing binary stars. This paper reports on high spatial-resolution adaptive optics images of 90 Kepler planetary candidates. Companion stars are detected as close as 0.''1 from the target star. Images were taken in the near-infrared (J and Ks bands) with ARIES on the MMT and PHARO on the Palomar Hale 200 inch telescope. Most objects (60%) have at least one star within 6'' separation and a magnitude difference of 9. Eighteen objects (20%) have at least one companion within 2'' of the target star; six companions (7%) are closer than 0.''5. Most of these companions were previously unknown, and the associated planetary candidates should receive additional scrutiny. Limits are placed on the presence of additional companions for every system observed, which can be used to validate planets statistically using the BLENDER method. Validation is particularly critical for low-mass, potentially Earth-like worlds, which are not detectable with current-generation radial velocity techniques. High-resolution images are thus a crucial component of any transit follow-up program.

  17. Adaptive Optics Images of Kepler Objects of Interest

    NASA Astrophysics Data System (ADS)

    Adams, E. R.; Ciardi, D. R.; Dupree, A. K.; Gautier, T. N., III; Kulesa, C.; McCarthy, D.

    2012-08-01

    All transiting planets are at risk of contamination by blends with nearby, unresolved stars. Blends dilute the transit signal, causing the planet to appear smaller than it really is, or produce a false-positive detection when the target star is blended with eclipsing binary stars. This paper reports on high spatial-resolution adaptive optics images of 90 Kepler planetary candidates. Companion stars are detected as close as 0.''1 from the target star. Images were taken in the near-infrared (J and Ks bands) with ARIES on the MMT and PHARO on the Palomar Hale 200 inch telescope. Most objects (60%) have at least one star within 6'' separation and a magnitude difference of 9. Eighteen objects (20%) have at least one companion within 2'' of the target star; six companions (7%) are closer than 0.''5. Most of these companions were previously unknown, and the associated planetary candidates should receive additional scrutiny. Limits are placed on the presence of additional companions for every system observed, which can be used to validate planets statistically using the BLENDER method. Validation is particularly critical for low-mass, potentially Earth-like worlds, which are not detectable with current-generation radial velocity techniques. High-resolution images are thus a crucial component of any transit follow-up program. Based on observations obtained at the MMT Observatory, a joint facility of the Smithsonian Institution and the University of Arizona.

  18. Keck Adaptive Optics Images of Uranus and Its Rings

    NASA Astrophysics Data System (ADS)

    de Pater, Imke; Gibbard, S. G.; Macintosh, B. A.; Roe, H. G.; Gavel, D. T.; Max, C. E.

    2002-12-01

    We present adaptive optics images of Uranus obtained with the 10-m W. M. Keck II telescope in June 2000, at wavelengths between 1 and 2.4 μm. The angular resolution of the images is ˜0.06-0.09″. We identified eight small cloud features on Uranus's disk, four of which were in the northern hemisphere. The latter features are ˜1000-2000 km in extent and located in the upper troposphere, above the methane cloud, at pressures between 0.5 and 1 bar. Our data have been combined with HST data by Hammel et al. (2001, Icarus 153, 229-235); the combination of Keck and HST data allowed derivation of an accurate wind velocity profile. Our images further show Uranus's entire ring system: the asymmetric ɛ ring, as well as the three groups of inner rings (outward from Uranus): the rings 6+5+4, α+β, and the η+γ+δ rings. We derived the equivalent I/F width and ring particle reflectivity for each group of rings. Typical particle albedos are ˜0.04-0.05, in good agreement with HST data at 0.9 μm.

  19. Adaptive Optics Retinal Imaging – Clinical Opportunities and Challenges

    PubMed Central

    Carroll, Joseph; Kay, David B.; Scoles, Drew; Dubra, Alfredo; Lombardo, Marco

    2014-01-01

    The array of therapeutic options available to clinicians for treating retinal disease is expanding. With these advances comes the need for better understanding of the etiology of these diseases on a cellular level as well as improved non-invasive tools for identifying the best candidates for given therapies and monitoring the efficacy of those therapies. While spectral domain optical coherence tomography (SD-OCT) offers a widely available tool for clinicians to assay the living retina, it suffers from poor lateral resolution due to the eye’s monochromatic aberrations. Adaptive optics (AO) is a technique to compensate for the eye’s aberrations and provide nearly diffraction-limited resolution. The result is the ability to visualize the living retina with cellular resolution. While AO is unquestionably a powerful research tool, many clinicians remain undecided on the clinical potential of AO imaging – putting many at a crossroads with respect to adoption of this technology. This review will briefly summarize the current state of AO retinal imaging, discuss current as well as future clinical applications of AO retinal imaging, and finally provide some discussion of research needs to facilitate more widespread clinical use. PMID:23621343

  20. Adaptive control of surface finish in automated turning processes

    NASA Astrophysics Data System (ADS)

    García-Plaza, E.; Núñez, P. J.; Martín, A. R.; Sanz, A.

    2012-04-01

    The primary aim of this study was to design and develop an on-line control system for finished surfaces in automated machining processes by CNC turning. The control system consisted of two basic phases: during the first phase, surface roughness was monitored through cutting force signals; the second phase involved a closed-loop adaptive control system based on data obtained during the monitoring of the cutting process. The system ensures that surface roughness is maintained at optimum values by adjusting the feed rate through communication with the PLC of the CNC machine. A monitoring and adaptive control system has been developed that enables the real-time monitoring of surface roughness during CNC turning operations. The system detects and prevents faults in automated turning processes, and applies corrective measures during the cutting process that raise quality and reliability, reducing the need for quality control.
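
    A closed-loop feed-rate adjustment of this kind can be sketched from the ideal-turning relation Ra ∝ f² (roughness grows with the square of the feed per revolution): each control cycle, scale the feed by the square root of the target-to-measured roughness ratio. The plant model and numbers below are illustrative assumptions, not the authors' controller:

```python
def adapt_feed(f0, ra_target, measure, n_iter=10):
    """Closed-loop roughness control: since ideal turning roughness scales
    roughly as Ra ~ f^2, multiply the feed by sqrt(target/measured) each
    cycle; `measure` stands in for the force-signal roughness estimate."""
    f = f0
    for _ in range(n_iter):
        ra = measure(f)                   # measured surface roughness
        f *= (ra_target / ra) ** 0.5      # multiplicative feed correction
    return f
```

    Because the correction is multiplicative, the loop still converges quickly even when the real process deviates from the ideal square law, as the test's f^1.8 plant shows.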

  1. Super Resolution Reconstruction Based on Adaptive Detail Enhancement for ZY-3 Satellite Images

    NASA Astrophysics Data System (ADS)

    Zhu, Hong; Song, Weidong; Tan, Hai; Wang, Jingxue; Jia, Di

    2016-06-01

    Super-resolution reconstruction of sequence remote sensing images is a technology that combines multiple low-resolution satellite remote sensing images with complementary information to obtain one or more high-resolution images. The core of the technology is high-precision matching between images together with the extraction and fusion of fine detail information. This paper puts forward a new image super-resolution model framework that can adaptively enhance the multi-scale details of the reconstructed image. First, the sequence images were decomposed by a bilateral filter into a detail layer containing the detail information and a smooth layer containing the large-scale edge information. Then, a texture detail enhancement function was constructed to promote the magnitude of the medium and small details. Next, the non-redundant information of the super-resolution reconstruction was obtained by differential processing of the detail layer, and the initial super-resolution result was achieved by interpolating and fusing the non-redundant information with the smooth layer. At last, the final reconstructed image was acquired by executing a local optimization model on the initial constructed image. Experiments on ZY-3 satellite images of the same and different phases show that the proposed method improves both the information entropy and the image-detail evaluation metrics compared with the interpolation method, the traditional TV algorithm, and the MAP algorithm, indicating that our method clearly highlights image details and retains more ground texture information. A large number of experimental results reveal that the proposed method is robust and universal for different kinds of ZY-3 satellite images.
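
    The decomposition step — bilateral filtering into a smooth layer plus a detail layer, then amplifying the details — can be sketched as follows. This uses a brute-force bilateral filter and a single uniform gain, rather than the paper's scale-dependent enhancement function:

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.2, radius=4):
    """Brute-force bilateral filter: spatial Gaussian times range Gaussian,
    so smoothing stops at strong intensity edges."""
    H, W = img.shape
    pad = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    gs = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            gr = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            w = gs * gr
            out[i, j] = (w * patch).sum() / w.sum()
    return out

def enhance_details(img, gain=2.0):
    """Split into smooth + detail layers and amplify the detail layer."""
    smooth = bilateral(img)
    return smooth + gain * (img - smooth)
```

    Because the range kernel keeps large edges in the smooth layer, only fine texture lands in the detail layer and gets boosted, which is the property the paper exploits.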

  2. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in communication within the body, the distinguishing of stimuli, the avoidance of overload, and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationship to welfare. In complex animals, feed-forward control is widely used: individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism that has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms, including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  3. Graphics processing unit-based quantitative second-harmonic generation imaging

    NASA Astrophysics Data System (ADS)

    Kabir, Mohammad Mahfuzul; Jonayat, ASM; Patel, Sanjay; Toussaint, Kimani C., Jr.

    2014-09-01

    We adapt a graphics processing unit (GPU) to dynamic quantitative second-harmonic generation imaging. We demonstrate the temporal advantage of the GPU-based approach by computing the number of frames analyzed per second from SHG image videos showing varying fiber orientations. In comparison to our previously reported CPU-based approach, our GPU-based image analysis results in ˜10× improvement in computational time. This work can be adapted to other quantitative, nonlinear imaging techniques and provides a significant step toward obtaining quantitative information from fast in vivo biological processes.

  4. Adaptive Signal Processing Testbed signal excision software: User's manual

    NASA Astrophysics Data System (ADS)

    Parliament, Hugh A.

    1992-05-01

    The Adaptive Signal Processing Testbed (ASPT) signal excision software is a set of programs that provide real-time processing functions for the excision of interfering tones from a live spread-spectrum signal as well as off-line functions for the analysis of the effectiveness of the excision technique. The processing functions provided by the ASPT signal excision software are real-time adaptive filtering of live data, storage to disk, and file sorting and conversion. The main off-line analysis function is bit error determination. The purpose of the software is to measure the effectiveness of an adaptive filtering algorithm in suppressing interfering or jamming signals in a spread spectrum signal environment. A user manual for the software is provided, containing information on the different software components available to perform signal excision experiments: the real-time excision software, the excision host program, file processing utilities, and despreading and bit error rate determination software. In addition, information is presented describing the excision algorithm implemented, the real-time processing framework, the steps required to add algorithms to the system, the processing functions used in despreading, and a description of the command sequences for post-run analysis of the data.

  5. Fingerprint image enhancement by differential hysteresis processing.

    PubMed

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

    A new method to enhance defective fingerprints images through image digital processing tools is presented in this work. When the fingerprints have been taken without any care, blurred and in some cases mostly illegible, as in the case presented here, their classification and comparison becomes nearly impossible. A combination of spatial domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve these kind of images. This set of filtering methods proved to be satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from Policia Federal Argentina and the EAAF have validated these results.

  6. Image processing for HTS SQUID probe microscope

    NASA Astrophysics Data System (ADS)

    Hayashi, T.; Koetitz, R.; Itozaki, H.; Ishikawa, T.; Kawabe, U.

    2005-10-01

    An HTS SQUID probe microscope has been developed using a high-permeability needle to enable high spatial resolution measurement of samples in air even at room temperature. Image processing techniques have also been developed to improve the magnetic field images obtained from the microscope. Artifacts in the data occur due to electromagnetic interference from electric power lines, line drift and flux trapping. The electromagnetic interference could successfully be removed by eliminating the noise peaks from the power spectrum of fast Fourier transforms of line scans of the image. The drift between lines was removed by interpolating the mean field value of each scan line. Artifacts in line scans occurring due to flux trapping or unexpected noise were removed by the detection of a sharp drift and interpolation using the line data of neighboring lines. Highly detailed magnetic field images were obtained from the HTS SQUID probe microscope by the application of these image processing techniques.
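
    Two of the corrections described — removal of line-to-line drift by equalizing scan-line means, and suppression of narrowband pickup by zeroing noise peaks in a line's Fourier transform — can be sketched directly:

```python
import numpy as np

def destripe(img):
    """Remove line-to-line drift by equalizing each scan line's mean,
    preserving the overall mean field level."""
    row_means = img.mean(axis=1, keepdims=True)
    return img - row_means + row_means.mean()

def notch_fft(line, bad_bins):
    """Suppress narrowband pickup (e.g. power-line interference) by zeroing
    the corresponding peaks in a scan line's FFT."""
    F = np.fft.rfft(line)
    F[list(bad_bins)] = 0.0
    return np.fft.irfft(F, n=len(line))
```

    Note that mean equalization assumes the true field's row means vary slowly; in the paper the interpolation is done between neighboring lines rather than globally.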

  7. Multiscale registration of planning CT and daily cone beam CT images for adaptive radiation therapy

    SciTech Connect

    Paquin, Dana; Levy, Doron; Xing Lei

    2009-01-15

    Adaptive radiation therapy (ART) is the incorporation of daily images in the radiotherapy treatment process so that the treatment plan can be evaluated and modified to maximize the amount of radiation dose to the tumor while minimizing the amount of radiation delivered to healthy tissue. Registration of planning images with daily images is thus an important component of ART. In this article, the authors report their research on multiscale registration of planning computed tomography (CT) images with daily cone beam CT (CBCT) images. The multiscale algorithm is based on the hierarchical multiscale image decomposition of E. Tadmor, S. Nezzar, and L. Vese [Multiscale Model. Simul. 2(4), pp. 554-579 (2004)]. Registration is achieved by decomposing the images to be registered into a series of scales using the (BV, L{sup 2}) decomposition and initially registering the coarsest scales of the image using a landmark-based registration algorithm. The resulting transformation is then used as a starting point to deformably register the next coarse scales with one another. This procedure is iterated at each stage using the transformation computed by the previous scale registration as the starting point for the current registration. The authors present the results of studies of rectum, head-neck, and prostate CT-CBCT registration, and validate their registration method quantitatively using synthetic results in which the exact transformations are known, and qualitatively using clinical deformations in which the exact results are not known.
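
    The coarse-to-fine strategy — register the coarsest scales first, then use each result as the starting point at the next finer scale — can be illustrated with a translation-only toy version. The paper uses the (BV, L²) decomposition and deformable registration; plain 2× downsampling and exhaustive local search stand in here:

```python
import numpy as np

def downsample(img):
    """2x2 block average (a stand-in for the paper's multiscale decomposition)."""
    return 0.25 * (img[::2, ::2] + img[1::2, ::2] + img[::2, 1::2] + img[1::2, 1::2])

def register_translation(fixed, moving, levels=3, search=2):
    """Coarse-to-fine translation registration: estimate the shift at the
    coarsest scale by local exhaustive search, then refine at each finer
    scale starting from the doubled coarse estimate."""
    pyr_f, pyr_m = [fixed], [moving]
    for _ in range(levels - 1):
        pyr_f.append(downsample(pyr_f[-1]))
        pyr_m.append(downsample(pyr_m[-1]))
    dy = dx = 0
    for f, m in zip(pyr_f[::-1], pyr_m[::-1]):   # coarsest level first
        dy, dx = 2 * dy, 2 * dx                  # propagate the estimate
        best = None
        for ddy in range(-search, search + 1):
            for ddx in range(-search, search + 1):
                shifted = np.roll(np.roll(m, dy + ddy, axis=0), dx + ddx, axis=1)
                err = np.mean((f - shifted) ** 2)
                if best is None or err < best[0]:
                    best = (err, dy + ddy, dx + ddx)
        _, dy, dx = best
    return dy, dx
```

    The key property is the same as in the paper: each coarse estimate confines the finer search to a small window, so the total cost stays far below a full-resolution exhaustive search.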

  8. Image-processing with augmented reality (AR)

    NASA Astrophysics Data System (ADS)

    Babaei, Hossein R.; Mohurutshe, Pagiel L.; Habibi Lashkari, Arash

    2013-03-01

    In this project, the aim is to discuss and articulate the intent to create an image-based Android application. The basis of this study is real-time image detection and processing, a convenient approach that allows users to obtain information about imagery on the spot. Past studies reveal attempts to create image-based applications, but these have only gone as far as image finders that work with images already stored in some form of database. The Android platform is rapidly spreading around the world and provides by far the most interactive and technical platform for smartphones, which is why it was important to base the study and research on it. Augmented reality allows the user to manipulate the data and add enhanced features (video, GPS tags) to the image taken.

  9. Image processing via ultrasonics - Status and promise

    NASA Technical Reports Server (NTRS)

    Kornreich, P. G.; Kowel, S. T.; Mahapatra, A.; Nouhi, A.

    1979-01-01

    Acousto-electric devices for electronic imaging of light are discussed. These devices are more versatile than the line scan imaging devices in current use, and have the capability of presenting the image information in a variety of modes. The image can be read out in the conventional line scan mode. It can be read out in the form of the Fourier, Hadamard, or other transform. One can take the transform along one direction of the image and line scan in the other direction, or perform other combinations of image processing functions. This is accomplished by applying the appropriate electrical input signals to the device. Since the electrical output signal of these devices can be detected in a synchronous mode, substantial noise reduction is possible.
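
The readout flexibility described, transforming along one axis while line-scanning the other, has a direct software analogue. In this sketch a naive DFT stands in for the device's optically computed transform:

```python
import cmath

# Software analogue of the mixed readout mode: one discrete Fourier
# transform per scanned line (transform axis = within-line, scan axis = rows).

def dft(row):
    n = len(row)
    return [sum(row[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def transform_scan(image):
    # Line scan over rows, transform along columns of each row.
    return [dft(line) for line in image]
```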

  10. Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation

    PubMed Central

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858
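
The graph-based semi-supervised step can be illustrated with a standard label-propagation iteration. This is a generic scheme in the spirit of the paper's model, not its actual optimizer; the affinity matrix `w` stands in for similarities derived from the combined geometry-orientation distance:

```python
# Label propagation over a voxel affinity graph (Zhou-style normalized
# iteration): a few labeled seeds spread their labels to unlabeled voxels.

def propagate(w, seeds, n_labels, alpha=0.9, iters=200):
    n = len(w)
    deg = [sum(row) for row in w]
    # One score vector per label; seeds are clamped via the (1 - alpha) term.
    y = [[1.0 if seeds.get(i) == c else 0.0 for i in range(n)]
         for c in range(n_labels)]
    f = [row[:] for row in y]
    for _ in range(iters):
        for c in range(n_labels):
            f[c] = [alpha * sum(w[i][j] * f[c][j] /
                                (deg[i] ** 0.5 * deg[j] ** 0.5)
                                for j in range(n)) + (1 - alpha) * y[c][i]
                    for i in range(n)]
    return [max(range(n_labels), key=lambda c: f[c][i]) for i in range(n)]
```

With two tightly coupled clusters and one seed in each, the iteration labels every voxel by its cluster.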

  11. Adaptive process control using fuzzy logic and genetic algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  12. Adaptive Process Control with Fuzzy Logic and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.
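
The GA-plus-controller loop can be conveyed with a toy sketch. This is not the Bureau of Mines system: the fuzzy rule base is collapsed to a single proportional gain, and the pH process is replaced by hypothetical first-order dynamics with a setpoint of 7.

```python
import random

# Toy GA tuning a controller gain against a simulated process.

def simulate(gain, setpoint=7.0, steps=50):
    x, err = 0.0, 0.0
    for _ in range(steps):
        u = gain * (setpoint - x)        # "controller" action
        x += 0.1 * (u - 0.5 * x)         # simple process dynamics
        err += (setpoint - x) ** 2       # accumulated tracking error
    return err

def ga_tune(pop_size=20, gens=30, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=simulate)           # lower error = fitter
        elite = pop[: pop_size // 2]     # truncation selection
        pop = elite + [max(0.0, g + rng.gauss(0, 0.5)) for g in elite]
    return min(pop, key=simulate)
```

Unfit gains that let the process drift or oscillate accumulate large error and are selected away, which is the learning element the abstract describes.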

  13. Adaptive optics retinal imaging in the living mouse eye.

    PubMed

    Geng, Ying; Dubra, Alfredo; Yin, Lu; Merigan, William H; Sharma, Robin; Libby, Richard T; Williams, David R

    2012-04-01

    Correction of the eye's monochromatic aberrations using adaptive optics (AO) can improve the resolution of in vivo mouse retinal images [Biss et al., Opt. Lett. 32(6), 659 (2007) and Alt et al., Proc. SPIE 7550, 755019 (2010)], but previous attempts have been limited by poor spot quality in the Shack-Hartmann wavefront sensor (SHWS). Recent advances in mouse eye wavefront sensing using an adjustable focus beacon with an annular beam profile have improved the wavefront sensor spot quality [Geng et al., Biomed. Opt. Express 2(4), 717 (2011)], and we have incorporated them into a fluorescence adaptive optics scanning laser ophthalmoscope (AOSLO). The performance of the instrument was tested on the living mouse eye, and images of multiple retinal structures, including the photoreceptor mosaic, nerve fiber bundles, fine capillaries, and fluorescently labeled ganglion cells, were obtained. The in vivo transverse and axial resolutions of the fluorescence channel of the AOSLO were estimated from the full width at half maximum (FWHM) of the line and point spread functions (LSF and PSF), and were found to be better than 0.79 μm ± 0.03 μm (STD) (45% wider than the diffraction limit) and 10.8 μm ± 0.7 μm (STD) (two times the diffraction limit), respectively. The axial positional accuracy was estimated to be 0.36 μm. This resolution and positional accuracy have allowed us to classify many ganglion cell types, such as bistratified ganglion cells, in vivo.

  14. Region-based retrieval of remote sensing image patches with adaptive image segmentation

    NASA Astrophysics Data System (ADS)

    Li, Shijin; Zhu, Jiali; Zhu, Yuelong; Feng, Jun

    2012-06-01

    Over the past four decades, satellite imaging sensors have acquired huge quantities of Earth-observation data. Content-based image retrieval allows for fast and effective queries of remote sensing images. Here, we take the following two issues into consideration. Firstly, different features, and combinations of them, should be chosen for different land covers. Secondly, owing to the block-dividing strategy and the complexity of remote sensing images, small target areas scattered across multiple non-target blocks cannot be retrieved effectively. Aiming at these two issues, a new region-based retrieval method with adaptive image segmentation is proposed. In order to improve the accuracy of remote sensing image segmentation, feature selection and weighting are performed by two-stage clustering, and image segmentation is accomplished based on the chosen features and a mean shift procedure. Meanwhile, given the homogeneous characteristics of remote sensing land covers, a new regional representation and matching scheme is adopted to perform image retrieval. Experimental results on retrieving various land covers show that the method avoids the impact of traditional blocking strategies and achieves, for small target areas, an average precision 19% higher than the relevance feedback method at the same level of recall.
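
The mean shift procedure at the heart of the segmentation step can be sketched in one dimension (flat kernel; the paper's two-stage feature weighting is omitted): every point climbs to the mode of its local density, and points that share a mode form one region.

```python
# Minimal 1-D mean shift with a flat kernel.

def mean_shift(points, bandwidth=1.0, iters=50):
    modes = []
    for p in points:
        x = p
        for _ in range(iters):
            # Shift x to the mean of all points inside its window.
            window = [q for q in points if abs(q - x) <= bandwidth]
            x = sum(window) / len(window)
        modes.append(round(x, 3))
    return modes
```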

  15. Medical image classification using spatial adjacent histogram based on adaptive local binary patterns.

    PubMed

    Liu, Dong; Wang, Shengsheng; Huang, Dezhi; Deng, Gang; Zeng, Fantao; Chen, Huiling

    2016-05-01

    Medical image recognition is an important task in both computer vision and computational biology. In the field of medical image classification, representing an image with a local binary patterns (LBP) descriptor has become popular. However, most existing LBP-based methods encode the binary patterns in a fixed neighborhood radius and ignore the spatial relationships among local patterns. Ignoring these spatial relationships causes poor performance when capturing discriminative features for complex samples, such as medical images obtained by microscope. To address this problem, in this paper we propose a novel method that improves local binary patterns by assigning an adaptive neighborhood radius to each pixel. Based on these adaptive local binary patterns, we further propose a spatial adjacent histogram strategy to encode the micro-structures for image representation. An extensive set of evaluations is performed on four medical datasets, showing that the proposed method significantly improves standard LBP and compares favorably with several other prevailing approaches.
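
The core idea, an LBP code whose sampling radius varies per pixel, can be sketched as follows. The variance-based radius rule here is a hypothetical stand-in for the paper's selection criterion:

```python
# 8-neighbor LBP code with a per-pixel radius choice.

def lbp_code(img, r, c, radius):
    center = img[r][c]
    offsets = [(-radius, -radius), (-radius, 0), (-radius, radius),
               (0, radius), (radius, radius), (radius, 0),
               (radius, -radius), (0, -radius)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:   # threshold neighbor vs. center
            code |= 1 << bit
    return code

def adaptive_radius(img, r, c):
    # Hypothetical rule: high local variance -> tight radius, else wide.
    patch = [img[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    mean = sum(patch) / 9
    var = sum((v - mean) ** 2 for v in patch) / 9
    return 1 if var > 100 else 2
```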

  16. Overview on METEOSAT geometrical image data processing

    NASA Technical Reports Server (NTRS)

    Diekmann, Frank J.

    1994-01-01

    Digital images acquired from the geostationary METEOSAT satellites are processed and disseminated at ESA's European Space Operations Centre (ESOC) in Darmstadt, Germany. Their scientific value depends mainly on their radiometric quality and geometric stability. This paper gives an overview of the image processing activities performed at ESOC, concentrating on geometrical restoration and quality evaluation. The performance of the rectification process for the various satellites over the past years is presented, and the impacts of external events, for instance the Pinatubo eruption in 1991, are explained. Special developments in both hardware and software, necessary to cope with demanding tasks such as new image resampling or correcting for spacecraft anomalies, are presented as well. The rotating lens of MET-5, which caused severe geometrical image distortions, is an example of the latter.

  17. Use of imaging to assess normal and adaptive muscle function.

    PubMed

    Segal, Richard L

    2007-06-01

    Physical therapists must be able to determine the activity and passive properties of the musculoskeletal system in order to accurately plan and evaluate therapeutic measures. Discussed in this article are imaging methods that not only allow for the measurement of muscle activity but also allow for the measurement of cellular processes and passive mechanical properties noninvasively and in vivo. The techniques reviewed are T1- and T2-weighted magnetic resonance (MR) imaging, MR spectroscopy, cine-phase-contrast MR imaging, MR elastography, and ultrasonography. At present, many of these approaches are expensive and not readily available in physical therapy clinics but can be found at medical centers. However, there are ways of using these techniques to provide important knowledge about muscle function. This article proposes creative ways in which to use these techniques as evaluative tools.

  18. Frequency Adaptability and Waveform Design for OFDM Radar Space-Time Adaptive Processing

    SciTech Connect

    Sen, Satyabrata; Glover, Charles Wayne

    2012-01-01

    We propose an adaptive waveform design technique for an orthogonal frequency division multiplexing (OFDM) radar signal employing a space-time adaptive processing (STAP) technique. We observe that there are inherent variabilities of the target and interference responses in the frequency domain. Therefore, the use of an OFDM signal can not only increase the frequency diversity of our system, but also improve the target detectability by adaptively modifying the OFDM coefficients in order to exploit the frequency-variabilities of the scenario. First, we formulate a realistic OFDM-STAP measurement model considering the sparse nature of the target and interference spectra in the spatio-temporal domain. Then, we show that the optimal STAP-filter weight-vector is equal to the generalized eigenvector corresponding to the minimum generalized eigenvalue of the interference and target covariance matrices. With numerical examples we demonstrate that the resultant OFDM-STAP filter-weights are adaptable to the frequency-variabilities of the target and interference responses, in addition to the spatio-temporal variabilities. Hence, by better utilizing the frequency variabilities, we propose an adaptive OFDM-waveform design technique, and consequently gain a significant amount of STAP-performance improvement.
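
The weight characterization quoted above can be restated compactly. The notation (target covariance R_t, interference covariance R_i) is assumed here, not taken from the paper: maximizing the target-to-interference ratio is equivalent to a generalized eigenproblem whose minimum eigenvalue yields the optimal filter weights.

```latex
\mathbf{w}^{\star}
  = \arg\max_{\mathbf{w}}
    \frac{\mathbf{w}^{H}\mathbf{R}_{t}\,\mathbf{w}}
         {\mathbf{w}^{H}\mathbf{R}_{i}\,\mathbf{w}},
\qquad
\mathbf{R}_{i}\,\mathbf{w}^{\star}
  = \lambda_{\min}\,\mathbf{R}_{t}\,\mathbf{w}^{\star}.
```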

  19. An Automated Reference Frame Selection (ARFS) Algorithm for Cone Imaging with Adaptive Optics Scanning Light Ophthalmoscopy

    PubMed Central

    Salmon, Alexander E.; Cooper, Robert F.; Langlo, Christopher S.; Baghaie, Ahmadreza; Dubra, Alfredo; Carroll, Joseph

    2017-01-01

    Purpose: To develop an automated reference frame selection (ARFS) algorithm to replace the subjective approach of manually selecting reference frames for processing adaptive optics scanning light ophthalmoscope (AOSLO) videos of cone photoreceptors. Methods: Relative distortion was measured within individual frames before conducting image-based motion tracking and sorting of frames into distinct spatial clusters. AOSLO images from nine healthy subjects were processed using ARFS and human-derived reference frames, then aligned to undistorted AO-flood images by nonlinear registration, and the registration transformations were compared. The frequency at which humans selected reference frames that were rejected by ARFS was calculated in 35 datasets from healthy subjects and subjects with achromatopsia, albinism, or retinitis pigmentosa. The level of distortion in this set of human-derived reference frames was assessed. Results: The average transformation vector magnitude required for registration of AOSLO images to AO-flood images was significantly reduced from 3.33 ± 1.61 pixels when using manual reference frame selection to 2.75 ± 1.60 pixels (mean ± SD) when using ARFS (P = 0.0016). Between 5.16% and 39.22% of human-derived frames were rejected by ARFS. Only 2.71% to 7.73% of human-derived frames were ranked in the top 5% of least distorted frames. Conclusion: ARFS outperforms expert observers in selecting minimally distorted reference frames in AOSLO image sequences. The low success rate in human frame choice illustrates the difficulty in subjectively assessing image distortion. Translational Relevance: Manual reference frame selection represented a significant barrier to a fully automated image-processing pipeline (including montaging, cone identification, and metric extraction). The approach presented here will aid in the clinical translation of AOSLO imaging. PMID:28392976
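
A much-simplified stand-in for ARFS conveys the flavor (the published algorithm also measures within-frame distortion, tracks motion, and clusters frames spatially): rank frames by average dissimilarity to all others and take the most typical one as the reference.

```python
# Pick the frame with the smallest total dissimilarity to all other frames.
# Frames are flattened pixel lists; mean absolute difference is the metric.

def pick_reference(frames):
    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    scores = [sum(dist(f, g) for g in frames) for f in frames]
    return scores.index(min(scores))
```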

  20. Thermodynamic Costs of Information Processing in Sensory Adaptation

    PubMed Central

    Sartori, Pablo; Granger, Léo; Lee, Chiu Fan; Horowitz, Jordan M.

    2014-01-01

    Biological sensory systems react to changes in their surroundings. They are characterized by fast response and slow adaptation to varying environmental cues. Insofar as sensory adaptive systems map environmental changes to changes of their internal degrees of freedom, they can be regarded as computational devices manipulating information. Landauer established that information is ultimately physical, and its manipulation subject to the entropic and energetic bounds of thermodynamics. Thus the fundamental costs of biological sensory adaptation can be elucidated by tracking how the information the system has about its environment is altered. These bounds are particularly relevant for small organisms, which, unlike everyday computers, operate at very low energies. In this paper, we establish a general framework for the thermodynamics of information processing in sensing. With it, we quantify how during sensory adaptation information about the past is erased, while information about the present is gathered. This process produces entropy larger than the amount of old information erased and has an energetic cost bounded by the amount of new information written to memory. We apply these principles to the E. coli chemotaxis pathway during binary ligand concentration changes. In this regime, we quantify the amount of information stored by each methyl group and show that receptors consume energy in the range of the information-theoretic minimum. Our work provides a basis for further inquiries into more complex phenomena, such as gradient sensing and frequency response. PMID:25503948
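
The two bounds stated in the abstract can be written schematically. The prefactors and sign conventions below follow the usual Landauer form and are assumptions here, not quoted from the paper: total entropy production is at least the old information erased, and the work consumed is at least k_B T times the new information written.

```latex
\Delta S_{\mathrm{tot}} \;\ge\; k_{B}\, I_{\mathrm{erased}},
\qquad
W \;\ge\; k_{B} T \, I_{\mathrm{written}}.
```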

  1. Real-time optical image processing techniques

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1988-01-01

    Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse-frequency-modulated halftone screens and the modification of micro-channel spatial light modulators (MSLMs). The MSLMs are modified via the Fabry-Perot method to achieve the high gamma required for nonlinear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness of the thresholding and also the need for higher space-bandwidth product (SBP) for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low-frequency, low-bias mode. Cascading two LCLVs should also provide enough gamma for nonlinear processing; in this case, the SBP of the LCLV is sufficient but its uniformity needs improvement. Applications investigated include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for real-time dynamic range compression of an input image using GaAs photorefractive crystals is reported. Finally, a new architecture for nonlinear multiple-sensory neural processing has been suggested.

  2. Dual-modality brain PET-CT image segmentation based on adaptive use of functional and anatomical information.

    PubMed

    Xia, Yong; Eberl, Stefan; Wen, Lingfeng; Fulham, Michael; Feng, David Dagan

    2012-01-01

    Dual medical imaging modalities, such as PET-CT, are now a routine component of clinical practice. Medical image segmentation methods, however, have generally only been applied to single modality images. In this paper, we propose the dual-modality image segmentation model to segment brain PET-CT images into gray matter, white matter and cerebrospinal fluid. This model converts PET-CT image segmentation into an optimization process controlled simultaneously by PET and CT voxel values and spatial constraints. It is innovative in the creation and application of the modality discriminatory power (MDP) coefficient as a weighting scheme to adaptively combine the functional (PET) and anatomical (CT) information on a voxel-by-voxel basis. Our approach relies upon allowing the modality with higher discriminatory power to play a more important role in the segmentation process. We compared the proposed approach to three other image segmentation strategies, including PET-only based segmentation, combination of the results of independent PET image segmentation and CT image segmentation, and simultaneous segmentation of joint PET and CT images without an adaptive weighting scheme. Our results in 21 clinical studies showed that our approach provides the most accurate and reliable segmentation for brain PET-CT images.
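
The voxel-wise weighting idea can be sketched as follows. The discriminatory-power proxy used here (margin between the two most likely classes) is an assumption for illustration, not the paper's MDP definition:

```python
# At each voxel, the modality whose class likelihoods are more peaked gets
# more weight in the combined score; the fused class is the argmax.

def discriminatory_power(likelihoods):
    # Larger gap between best and second-best class -> higher power.
    s = sorted(likelihoods, reverse=True)
    return s[0] - s[1]

def fuse(pet_lik, ct_lik):
    w = discriminatory_power(pet_lik)
    v = discriminatory_power(ct_lik)
    a = w / (w + v) if (w + v) > 0 else 0.5   # adaptive PET weight
    combined = [a * p + (1 - a) * c for p, c in zip(pet_lik, ct_lik)]
    return combined.index(max(combined))
```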

  3. High-accuracy wavefront control for retinal imaging with Adaptive-Influence-Matrix Adaptive Optics

    PubMed Central

    Zou, Weiyao; Burns, Stephen A.

    2010-01-01

    We present an iterative technique for improving adaptive optics (AO) wavefront correction for retinal imaging, called the Adaptive-Influence-Matrix (AIM) method. This method is based on the fact that the deflection-to-voltage relation of common deformable mirrors used in AO are nonlinear, and the fact that in general the wavefront errors of the eye can be considered to be composed of a static, non-zero wavefront error (such as the defocus and astigmatism), and a time-varying wavefront error. The aberrated wavefront is first corrected with a generic influence matrix, providing a mirror compensation figure for the static wavefront error. Then a new influence matrix that is more accurate for the specific static wavefront error is calibrated based on the mirror compensation figure. Experimental results show that with the AIM method the AO wavefront correction accuracy can be improved significantly in comparison to the generic AO correction. The AIM method is most useful in AO modalities where there are large static contributions to the wavefront aberrations. PMID:19997241

  4. Progressive image data compression with adaptive scale-space quantization

    NASA Astrophysics Data System (ADS)

    Przelaskowski, Artur

    1999-12-01

    Some improvements of the embedded zerotree wavelet algorithm are considered. The compression methods tested here are based on dyadic wavelet image decomposition, scalar quantization, and progressive coding. Efficient coders with an embedded code form and rate-fixing abilities, such as Shapiro's EZW and Said and Pearlman's SPIHT, are modified to improve compression efficiency. We explore modifications of the initial threshold value, the reconstruction levels, and the quantization scheme in the SPIHT algorithm. Additionally, we present the results of best-filter-bank selection, testing the most efficient biorthogonal filter banks. A significant efficiency improvement over the SPIHT coder was noted, up to 0.9 dB of PSNR in some cases. Because of the problems with optimizing the quantization scheme in an embedded coder, we propose another solution: adaptive threshold selection of wavelet coefficients in a progressive coding scheme. Two versions of this coder are tested: progressive in quality and progressive in resolution. As a result, improved compression effectiveness is achieved, close to 1.3 dB over SPIHT for the image Barbara. All proposed algorithms are optimized automatically and are not time-consuming, though sometimes the most efficient solution must be found iteratively. The final results are competitive with the most efficient wavelet coders.
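
The embedded, progressive property these coders share can be seen in a toy successive-approximation quantizer (no zerotrees or entropy coding; nonnegative inputs assumed): any prefix of the bit stream decodes to a coarser reconstruction of the same coefficient, which is what makes rate fixing by truncation possible.

```python
# Successive-approximation quantization of one coefficient in [0, x_max]:
# each pass halves the uncertainty interval and emits one bit.

def progressive_quantize(x, x_max, passes):
    lo, hi, bits = 0.0, x_max, []
    for _ in range(passes):
        mid = (lo + hi) / 2
        if x >= mid:
            bits.append(1)
            lo = mid
        else:
            bits.append(0)
            hi = mid
    return bits, (lo + hi) / 2          # midpoint reconstruction
```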

  5. Bistatic SAR: Signal Processing and Image Formation.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.

    2014-10-01

    This report describes the significant processing steps used to take the raw digitized signals recorded by the bistatic synthetic aperture radar (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the process steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and finally, image formation. Various plots and values are shown at most steps to illustrate the processing for a bistatic COSMO-SkyMed collection gathered on June 10, 2013 at Kirtland Air Force Base, New Mexico.

  6. Palm print image processing with PCNN

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Zhao, Xianhong

    2010-08-01

    Pulse coupled neural networks (PCNN) are based on Eckhorn's model of the cat visual cortex and imitate mammalian visual processing, while the palm print has long served as a personal biometric feature. This inspired us to combine the two: a novel method for palm print processing is proposed, which includes pre-processing and feature extraction of the palm print image using PCNN; the extracted features are then used for identification. Our experiments show that a verification rate of 87.5% can be achieved under ideal conditions. We also find that the verification rate decreases due to rotation or shift of the palm.
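
A greatly simplified 1-D PCNN conveys the firing dynamics (single linking term, exponential threshold decay; full PCNN neurons also have feeding and linking leakage terms): bright pixels fire first, and a neighbor's pulse raises the internal activity of similar pixels so they tend to fire together.

```python
# Minimal 1-D pulse-coupled network: neurons fire when internal activity
# exceeds a decaying threshold; firing boosts the threshold (refractory).

def pcnn_fire_times(S, beta=0.5, decay=0.7, v_theta=5.0, steps=20):
    n = len(S)
    theta = [2.0] * n              # dynamic thresholds
    Y = [0] * n                    # pulses from the previous step
    fired = [None] * n             # first firing time per neuron
    for t in range(steps):
        # Linking input from immediate neighbors' previous pulses.
        L = [(Y[i - 1] if i > 0 else 0) + (Y[i + 1] if i < n - 1 else 0)
             for i in range(n)]
        U = [S[i] * (1 + beta * L[i]) for i in range(n)]
        Y = [1 if U[i] > theta[i] else 0 for i in range(n)]
        for i in range(n):
            if Y[i] and fired[i] is None:
                fired[i] = t
            theta[i] = decay * theta[i] + v_theta * Y[i]
    return fired
```

The map of first firing times acts as a coarse segmentation: similar intensities share a firing epoch.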

  7. Potential of hybrid adaptive filtering in inflammatory lesion detection from capsule endoscopy images

    PubMed Central

    Charisis, Vasileios S; Hadjileontiadis, Leontios J

    2016-01-01

    A new feature extraction technique for the detection of lesions created from mucosal inflammations in Crohn’s disease, based on wireless capsule endoscopy (WCE) images processing is presented here. More specifically, a novel filtering process, namely Hybrid Adaptive Filtering (HAF), was developed for efficient extraction of lesion-related structural/textural characteristics from WCE images, by employing Genetic Algorithms to the Curvelet-based representation of images. Additionally, Differential Lacunarity (DLac) analysis was applied for feature extraction from the HAF-filtered images. The resulted scheme, namely HAF-DLac, incorporates support vector machines for robust lesion recognition performance. For the training and testing of HAF-DLac, an 800-image database was used, acquired from 13 patients who undertook WCE examinations, where the abnormal cases were grouped into mild and severe, according to the severity of the depicted lesion, for a more extensive evaluation of the performance. Experimental results, along with comparison with other related efforts, have shown that the HAF-DLac approach evidently outperforms them in the field of WCE image analysis for automated lesion detection, providing higher classification results, up to 93.8% (accuracy), 95.2% (sensitivity), 92.4% (specificity) and 92.6% (precision). The promising performance of HAF-DLac paves the way for a complete computer-aided diagnosis system that could support physicians’ clinical practice. PMID:27818583
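
The lacunarity part of the feature can be sketched with a single-scale gliding-box computation (DLac in the paper spans multiple scales and follows the HAF filtering): the normalized variance of box masses captures the texture "gappiness" used as a lesion cue.

```python
# Gliding-box lacunarity at one box size: 1 + var(mass) / mean(mass)^2.

def lacunarity(img, box=2):
    masses = []
    n, m = len(img), len(img[0])
    for i in range(n - box + 1):
        for j in range(m - box + 1):
            masses.append(sum(img[i + di][j + dj]
                              for di in range(box) for dj in range(box)))
    mean = sum(masses) / len(masses)
    var = sum((x - mean) ** 2 for x in masses) / len(masses)
    return 1 + var / mean ** 2 if mean else 1.0
```

A uniform texture scores 1.0 (no gaps); a sparse one scores higher.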

  8. Image Processing Application for Cognition (IPAC) - Traditional and Emerging Topics in Image Processing in Astronomy (Invited)

    NASA Astrophysics Data System (ADS)

    Pesenson, M.; Roby, W.; Helou, G.; McCollum, B.; Ly, L.; Wu, X.; Laine, S.; Hartley, B.

    2008-08-01

    A new application framework for advanced image processing for astronomy is presented. It implements standard two-dimensional operators, and recent developments in the field of non-astronomical image processing (IP), as well as original algorithms based on nonlinear partial differential equations (PDE). These algorithms are especially well suited for multi-scale astronomical images since they increase signal to noise ratio without smearing localized and diffuse objects. The visualization component is based on the extensive tools that we developed for Spitzer Space Telescope's observation planning tool Spot and archive retrieval tool Leopard. It contains many common features, combines images in new and unique ways and interfaces with many astronomy data archives. Both interactive and batch mode processing are incorporated. In the interactive mode, the user can set up simple processing pipelines, and monitor and visualize the resulting images from each step of the processing stream. The system is platform-independent and has an open architecture that allows extensibility by addition of plug-ins. This presentation addresses astronomical applications of traditional topics of IP (image enhancement, image segmentation) as well as emerging new topics like automated image quality assessment (QA) and feature extraction, which have potential for shaping future developments in the field. Our application framework embodies a novel synergistic approach based on integration of image processing, image visualization and image QA (iQA).
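
The nonlinear-PDE behavior described, raising signal-to-noise ratio without smearing localized objects, is what edge-stopping diffusion provides. A minimal 1-D Perona-Malik-style sketch (not IPAC's actual algorithms):

```python
# Nonlinear diffusion: smoothing is suppressed across large gradients,
# so noise is averaged out while sharp edges survive.

def diffuse(sig, k=0.5, dt=0.2, steps=20):
    s = list(sig)
    for _ in range(steps):
        flux = [0.0]                               # no-flux left boundary
        for i in range(1, len(s)):
            g = s[i] - s[i - 1]
            flux.append(g / (1 + (g / k) ** 2))    # edge-stopping conductance
        s = [s[i] + dt * ((flux[i + 1] if i + 1 < len(s) else 0.0) - flux[i])
             for i in range(len(s))]
    return s
```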

  9. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

    Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required for extracting each type of these geologic surfaces. I propose methods to automatically extract all the fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods just involve image processing or array processing, which is achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, which is simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images. I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use these surfaces as constraints to more accurately estimate seismic normal vectors that are discontinuous near the unconformities. Finally, I use the estimated normal vectors and use the unconformities as constraints to compute a flattened image, in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space
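
The final flattening step can be illustrated in toy form. Here the vertical shifts are simply given, whereas the thesis estimates them by solving PDEs with unconformity constraints:

```python
# Flatten a 2-D seismic section by rolling each trace vertically so that
# a picked horizon becomes flat. image[t][z]: trace t, depth sample z.

def flatten(image, shifts):
    out = []
    for t, trace in enumerate(image):
        s = shifts[t] % len(trace)         # samples to move trace t up
        out.append(trace[s:] + trace[:s])  # circular shift for simplicity
    return out
```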

  10. Keck Adaptive Optics Imaging of Uranus and its Rings

    NASA Astrophysics Data System (ADS)

    de Pater, Imke; Roe, H.; Macintosh, B.; Gibbard, S.; Max, C.; Gavel, D.

    2000-10-01

    We observed Uranus with the recently commissioned AO/NIRSPEC system (Adaptive Optics system with the Near-Infrared echelle Spectrograph) on the 10-m W.M. Keck telescope, UT June 17 and 18, 2000. NIRSPEC allows one to take images and spectra simultaneously. Here we will discuss the images at wavelengths between 1 and 2.4 micron. Due to the location of the rings' pericenter, the rings were much brighter in the north than the south, which resulted in excellent ring images. Inside of the ɛ ring at least three more (individually slightly resolved) rings are visible: from the outside inwards these are: 1) combined δ ,γ ,η rings, 2) combined β ,α rings, and 3) combined 4,5,6 rings. On the planet itself we detected at least 8 different cloud features, five of which were in the northern hemisphere. Two features could be tracked over a 40-60 degree longitude range, and yield wind velocities of 175 +/- 35 m/s at a latitude of +30o, and of 120 +/- 40 m/s at +40o latitude. The highest latitude reached by HST NICMOS was +27o, where a velocity of 20 m/s was measured (Karkoschka, 1998). Has the wind speed changed? Or is there a very steep gradient in the profile? Our data suggest the wind profile to be similar to that derived for Neptune, though at reduced velocities. This research was supported in part by the STC Program of the National Science Foundation under Agreement No. AST-9876783, and in part under the auspices of the US Department of Energy at Lawrence Livermore National Laboratory, Univ. of Calif. under contract No. W-7405-Eng-48.
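
As a back-of-envelope check on such measurements, a longitude drift converts to a zonal wind speed as below. The equatorial radius is an assumed textbook value, the drift is taken relative to the planet's rotating frame, and the time baseline is a placeholder, not the paper's:

```python
import math

# Zonal wind speed (m/s) from a feature's longitude drift at a given latitude.
# radius_km is an assumed equatorial radius for Uranus.

def zonal_velocity(dlon_deg, dt_hours, lat_deg, radius_km=25559.0):
    arc = math.radians(dlon_deg) * radius_km * math.cos(math.radians(lat_deg))
    return arc / (dt_hours * 3600.0) * 1000.0
```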

  11. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  12. Employing image processing techniques for cancer detection using microarray images.

    PubMed

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid

    2017-02-01

    Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes), and extracting raw data from images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, cancerous cells are recognized from the extracted data. To evaluate the performance of the proposed system, a microarray database is employed which includes breast cancer, myeloid leukemia, and lymphoma cases from the Stanford Microarray Database. The results indicate that the proposed system is able to identify the type of cancer from the data set with an accuracy of 95.45%, 94.11%, and 100%, respectively.
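
    The record leaves the data mining phase abstract. One plausible minimal version — log-transform, per-sample median centering, then keeping the genes with the highest variance across samples — is sketched below; the function name and the variance-based selection criterion are our assumptions, not the authors' method.

```python
import numpy as np

def normalize_and_select(expression, n_genes=10):
    """Normalize a genes-by-samples matrix and pick high-variance genes.

    Log-transform to stabilize variance, subtract each sample's median
    (a simple between-array normalization), then rank genes by variance
    across samples as a crude proxy for the 'more effective' genes."""
    log_expr = np.log2(expression + 1.0)
    centered = log_expr - np.median(log_expr, axis=0)   # per-sample centering
    variances = centered.var(axis=1)                    # per-gene spread
    top = np.sort(np.argsort(variances)[::-1][:n_genes])
    return centered, top
```

    The selected gene subset would then feed a classifier to distinguish the cancer types.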

  13. Adaptive sampling for learning gaussian processes using mobile sensor networks.

    PubMed

    Xu, Yunfei; Choi, Jongeun

    2011-01-01

    This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. This approach is based on a class of anisotropic covariance functions of Gaussian processes introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori. Hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize the information-theoretic cost function of the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme.
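
    The MAP estimation step can be illustrated for a deliberately simplified case: a one-dimensional squared-exponential covariance with a single unknown length-scale, maximizing the GP log marginal likelihood plus a log-prior over a candidate grid. This is a sketch of the idea only, not the paper's anisotropic spatio-temporal formulation; the function names, the log-normal prior, and the grid search are our assumptions.

```python
import numpy as np

def log_posterior(lengthscale, X, y, noise=0.1, prior_sd=1.0):
    """MAP objective: GP log marginal likelihood + log-normal prior.

    Squared-exponential covariance on 1-D inputs X with fixed noise;
    the prior on the length-scale is log-normal centered at 1."""
    d2 = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-0.5 * d2 / lengthscale ** 2) + noise ** 2 * np.eye(len(X))
    _, logdet = np.linalg.slogdet(K)
    data_fit = -0.5 * y @ np.linalg.solve(K, y)
    log_prior = -0.5 * (np.log(lengthscale) / prior_sd) ** 2
    return data_fit - 0.5 * logdet + log_prior

def map_lengthscale(X, y, grid):
    """Grid-search MAP estimate of the covariance length-scale."""
    return grid[int(np.argmax([log_posterior(l, X, y) for l in grid]))]
```

    The GP prediction of the field would then use the kernel built from the MAP length-scale, and the mobile agents would choose sampling locations that most reduce the remaining uncertainty.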

  14. Adoption: biological and social processes linked to adaptation.

    PubMed

    Grotevant, Harold D; McDermott, Jennifer M

    2014-01-01

    Children join adoptive families through domestic adoption from the public child welfare system, infant adoption through private agencies, and international adoption. Each pathway presents distinctive developmental opportunities and challenges. Adopted children are at higher risk than the general population for problems with adaptation, especially externalizing, internalizing, and attention problems. This review moves beyond the field's emphasis on adoptee-nonadoptee differences to highlight biological and social processes that affect adaptation of adoptees across time. The experience of stress, whether prenatal, postnatal/preadoption, or during the adoption transition, can have significant impacts on the developing neuroendocrine system. These effects can contribute to problems with physical growth, brain development, and sleep, activating cascading effects on social, emotional, and cognitive development. Family processes involving contact between adoptive and birth family members, co-parenting in gay and lesbian adoptive families, and racial socialization in transracially adoptive families affect social development of adopted children into adulthood.

  15. Adaptive Sampling for Learning Gaussian Processes Using Mobile Sensor Networks

    PubMed Central

    Xu, Yunfei; Choi, Jongeun

    2011-01-01

    This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. This approach is based on a class of anisotropic covariance functions of Gaussian processes introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori. Hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize the information-theoretic cost function of the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme. PMID:22163785

  16. Comparison of adaptive optics scanning light ophthalmoscopic fluorescein angiography and offset pinhole imaging.

    PubMed

    Chui, Toco Y P; Dubow, Michael; Pinhas, Alexander; Shah, Nishit; Gan, Alexander; Weitz, Rishard; Sulai, Yusufu N; Dubra, Alfredo; Rosen, Richard B

    2014-04-01

    Recent advances to the adaptive optics scanning light ophthalmoscope (AOSLO) have enabled finer in vivo assessment of the human retinal microvasculature. AOSLO confocal reflectance imaging has been coupled with oral fluorescein angiography (FA), enabling simultaneous acquisition of structural and perfusion images. AOSLO offset pinhole (OP) imaging, combined with motion contrast post-processing techniques, is able to create a similar set of structural and perfusion images without the use of an exogenous contrast agent. In this study, we evaluate the similarities and differences of the structural and perfusion images obtained by either method, in healthy control subjects and in patients with retinal vasculopathy including hypertensive retinopathy, diabetic retinopathy, and retinal vein occlusion. Our results show that AOSLO OP motion contrast provides perfusion maps comparable to those obtained with AOSLO FA, while AOSLO OP reflectance images provide additional information, such as vessel wall fine structure, not as readily visible in AOSLO confocal reflectance images. AOSLO OP offers a non-invasive alternative to AOSLO FA without the need for any exogenous contrast agent.

  17. Adaptive optics scanning laser ophthalmoscope with integrated wide-field retinal imaging and tracking.

    PubMed

    Ferguson, R Daniel; Zhong, Zhangyi; Hammer, Daniel X; Mujat, Mircea; Patel, Ankit H; Deng, Cong; Zou, Weiyao; Burns, Stephen A

    2010-11-01

    We have developed a new, unified implementation of the adaptive optics scanning laser ophthalmoscope (AOSLO) incorporating a wide-field line-scanning ophthalmoscope (LSO) and a closed-loop optical retinal tracker. AOSLO raster scans are deflected by the integrated tracking mirrors so that direct AOSLO stabilization is automatic during tracking. The wide-field imager and large-spherical-mirror optical interface design, as well as a large-stroke deformable mirror (DM), enable the AOSLO image field to be corrected at any retinal coordinates of interest in a field of >25 deg. AO performance was assessed by imaging individuals with a range of refractive errors. In most subjects, image contrast was measurable at spatial frequencies close to the diffraction limit. Closed-loop optical (hardware) tracking performance was assessed by comparing sequential image series with and without stabilization. Though usually better than 10 μm rms, or 0.03 deg, tracking does not yet stabilize to single cone precision but significantly improves average image quality and increases the number of frames that can be successfully aligned by software-based post-processing methods. The new optical interface allows the high-resolution imaging field to be placed anywhere within the wide field without requiring the subject to re-fixate, enabling easier retinal navigation and faster, more efficient AOSLO montage capture and stitching.

  18. Image processing of metal surface with structured light

    NASA Astrophysics Data System (ADS)

    Luo, Cong; Feng, Chang; Wang, Congzheng

    2014-09-01

    In a structured light vision measurement system, the ideal image of the structured light stripe contains, apart from the black background, only the grey-level information at the position of the stripe. The actual image, however, contains image noise, complex background, and other content that does not belong to the stripe, which interferes with the useful information. To extract the stripe center on a metal surface accurately, a new processing method is presented. Adaptive median filtering preliminarily removes the noise, and the noise introduced by the CCD camera and the measurement environment is further removed with a difference-image method. To highlight fine details and enhance the blurred regions between the stripe and the noise, a sharpening algorithm is used that combines the best features of the Laplacian and Sobel operators. Morphological opening and closing operations are used to compensate for the loss of information. Experimental results show that this method is effective in image processing, both suppressing noise and heightening contrast, which benefits the subsequent processing.
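
    A stripped-down version of such a stripe-extraction pipeline — background subtraction by difference image, median filtering, then a per-column intensity centroid as the stripe center — can be sketched as follows. This is a generic illustration under our own assumptions (function names, 3x3 window, centroid-based center), not the paper's exact method, which adds adaptive median filtering, sharpening, and morphology.

```python
import numpy as np

def median3(img):
    """3x3 median filter (edges left unchanged) to suppress impulse noise."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

def stripe_centers(img, background):
    """Difference image + median filtering + per-column intensity centroid."""
    diff = np.clip(img.astype(float) - background, 0.0, None)  # remove background
    diff = median3(diff)                                       # remove residual noise
    rows = np.arange(img.shape[0], dtype=float)
    weights = diff.sum(axis=0)
    centers = (diff * rows[:, None]).sum(axis=0) / np.maximum(weights, 1e-9)
    return np.where(weights > 0, centers, -1.0)                # -1 marks empty columns
```

    With a horizontal stripe spanning rows 2-4 of a synthetic frame, every column's centroid lands on row 3, the stripe center.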

  19. Automated object extraction from remote sensor image based on adaptive thresholding technique

    NASA Astrophysics Data System (ADS)

    Zhao, Tongzhou; Ma, Shuaijun; Li, Jin; Ming, Hui; Luo, Xiaobo

    2009-10-01

    Detection and extraction of dim, moving, small objects in infrared image sequences is an interesting research area. A system for detecting dim, moving, small targets in IR image sequences is presented, and a new high-performance algorithm for extracting moving small targets in infrared image sequences containing cloud clutter is proposed. This method achieves better detection precision than comparable methods, and its computation can be carried out by two independent units. The novelty of the algorithm is that it applies adaptive thresholding to the moving small targets in both the spatial domain and the temporal domain. Experimental results show that the presented algorithm achieves high detection precision.
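
    The combination of spatial and temporal adaptive thresholds can be illustrated with a simple mean-plus-k-sigma rule: a pixel is flagged only if it stands out both within the current frame and against its own history. This is a minimal sketch of the general idea under our own assumptions, not the paper's algorithm.

```python
import numpy as np

def detect_small_targets(frames, k_spatial=3.0, k_temporal=3.0):
    """Flag pixels exceeding mean + k*std thresholds in both domains.

    Spatial: global statistics of the current frame.
    Temporal: per-pixel statistics of the preceding frames."""
    frames = np.asarray(frames, dtype=float)
    current, history = frames[-1], frames[:-1]
    spatial_thr = current.mean() + k_spatial * current.std()
    temporal_thr = history.mean(axis=0) + k_temporal * history.std(axis=0)
    return (current > spatial_thr) & (current > temporal_thr)
```

    Requiring both conditions suppresses cloud clutter that is bright spatially but stable temporally, and sensor flicker that varies temporally but not above the scene statistics.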

  20. Fundamental Concepts of Digital Image Processing

    DOE R&D Accomplishments Database

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has its own unique requirements, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  1. A Pipeline Tool for CCD Image Processing

    NASA Astrophysics Data System (ADS)

    Bell, Jon F.; Young, Peter J.; Roberts, William H.; Sebo, Kim M.

    MSSSO is part of a collaboration developing a wide field imaging CCD mosaic (WFI). As part of this project, we have developed a GUI-based pipeline tool that is an integrated part of MSSSO's CICADA data acquisition environment and processes CCD FITS images as they are acquired. The tool is also designed to run as a stand-alone program to process previously acquired data. IRAF tasks are used as the central engine, including the new NOAO mscred package for processing multi-extension FITS files. The STScI OPUS pipeline environment may be used to manage data and process scheduling. The Motif GUI was developed using SUN Visual Workshop. C++ classes were written to facilitate launching of IRAF and OPUS tasks. While this first version implements calibration processing up to and including flat field corrections, there is scope to extend it to other processing.

  2. Thermal Imaging Processes of Polymer Nanocomposite Coatings

    NASA Astrophysics Data System (ADS)

    Meth, Jeffrey

    2015-03-01

    Laser induced thermal imaging (LITI) is a process whereby infrared radiation impinging on a coating on a donor film transfers that coating to a receiving film to produce a pattern. This talk describes how LITI patterning can print color filters for liquid crystal displays, and details the physical processes that are responsible for transferring the nanocomposite coating in a coherent manner that does not degrade its optical properties. Unique features of this process involve heating rates of 10^7 K/s and cooling rates of 10^4 K/s, which implies that not all of the relaxation modes of the polymer are accessed during the imaging process. On the microsecond time scale, the polymer flow is forced by devolatilization of solvents, followed by deformation akin to the constrained blister test, and then fracture caused by differential thermal expansion. The unique combination of disparate physical processes demonstrates the gamut of physics that contribute to advanced material processing in an industrial setting.

  3. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  4. Radiographic image processing for industrial applications

    NASA Astrophysics Data System (ADS)

    Dowling, Martin J.; Kinsella, Timothy E.; Bartels, Keith A.; Light, Glenn M.

    1998-03-01

    One advantage of working with digital images is the opportunity for enhancement. While it is important to preserve the original image, variations can be generated that yield greater understanding of object properties. It is often possible to effectively increase dynamic range, improve contrast in regions of interest, emphasize subtle features, reduce background noise, and provide more robust detection of faults. This paper describes and illustrates some of these processes using real world examples.
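
    One of the enhancements mentioned above, effectively increasing dynamic range in a region of interest, is commonly done with a percentile-based contrast stretch. The sketch below is a generic illustration (function name and percentile choices are our assumptions), not the specific processing used in the paper.

```python
import numpy as np

def contrast_stretch(img, low_pct=2.0, high_pct=98.0):
    """Clip intensities outside the given percentiles and rescale to 0-255,
    spending the full output range on the region of interest."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = np.clip((img.astype(float) - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
    return (stretched * 255.0).astype(np.uint8)
```

    Clipping a few percent of outlier pixels keeps isolated hot or dead pixels from compressing the contrast available to the rest of the radiograph.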

  5. Image processing of angiograms: A pilot study

    NASA Technical Reports Server (NTRS)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  6. Adaptive Optics for Satellite Imaging and Space Debris Ranging

    NASA Astrophysics Data System (ADS)

    Bennet, F.; D'Orgeville, C.; Price, I.; Rigaut, F.; Ritchie, I.; Smith, C.

    Earth's space environment is becoming crowded and at risk of a Kessler syndrome, and will require careful management for the future. Modern low noise high speed detectors allow for wavefront sensing and adaptive optics (AO) in extreme circumstances such as imaging small orbiting bodies in Low Earth Orbit (LEO). The Research School of Astronomy and Astrophysics (RSAA) at the Australian National University has been developing AO systems for telescopes between 1 and 2.5m diameter to image and range orbiting satellites and space debris. Strehl ratios in excess of 30% can be achieved for targets in LEO with an AO loop running at 2kHz, allowing the resolution of small features (<30cm) and the capability to determine object shape and spin characteristics. The AO system developed at RSAA consists of a high speed EMCCD Shack-Hartmann wavefront sensor, a deformable mirror (DM), a realtime computer (RTC), and an imaging camera. The system works best as a laser guide star system but will also function as a natural guide star AO system, with the target itself being the guide star. In both circumstances tip-tilt is provided by the target on the imaging camera. The fast tip-tilt modes are not corrected optically, and are instead removed by taking images at a moderate speed (>30Hz) and using a shift and add algorithm. This algorithm can also incorporate lucky imaging to further improve the final image quality. A similar AO system for space debris ranging is also in development in collaboration with Electro Optic Systems (EOS) and the Space Environment Management Cooperative Research Centre (SERC), at the Mount Stromlo Observatory in Canberra, Australia. The system is designed for an AO corrected upward propagated 1064nm pulsed laser beam, from which time of flight information is used to precisely range the target. A 1.8m telescope is used for both propagation and collection of laser light. A laser guide star, Shack-Hartmann wavefront sensor, and DM are used for high order

  7. System identification by video image processing

    NASA Astrophysics Data System (ADS)

    Shinozuka, Masanobu; Chung, Hung-Chi; Ichitsubo, Makoto; Liang, Jianwen

    2001-07-01

    Emerging image processing techniques demonstrate their potential applications in earthquake engineering, particularly in the area of system identification. The objectives of this research are to demonstrate the underlying principle that permits system identification, non-intrusively and remotely, with the aid of a video camera and, as a proof of concept, to apply the principle to a system identification problem involving relative motion, on the basis of the images. In structural control, accelerations at different stories of a building are usually measured and fed back for processing and control. As an alternative, this study attempts to identify the relative motion between different stories of a building for the purpose of on-line structural control by digitizing the images taken by a video camera. For this purpose, video images of the vibration of a structure base-isolated by a friction device under shaking-table excitation were used successfully to observe the relative displacement between the isolated structure and the shaking table. This proof-of-concept experiment demonstrates that the proposed identification method based on digital image processing can be used, with appropriate modifications, to remotely identify many other quantities of engineering significance. In addition to the system identification study in structural dynamics mentioned above, preliminary results are described involving the video imaging of the state of crack damage of road and highway pavement.

  8. Adaptive clustering of image database (ACID) as an efficient tool for improving retrieval in a CBIR system.

    PubMed

    Reljin, Branimir; Zajić, Goran; Reljin, Nikola; Reljin, Irini

    2012-01-01

    The paper describes a content-based image retrieval (CBIR) system with relevance feedback (RF). Instead of the standard relevance feedback procedure, an adaptive clustering of the image database (ACID) according to particular subjective needs is introduced in our system. Images labeled by the user as relevant are collected in clusters, and their representative members are used in the subsequent searching procedure instead of all images contained in the database. In this way, some history of previous retrieval is embedded into the searching process, enabling faster and more subjective retrieval. Moreover, clusters are adaptively updated after each retrieval session, following the user's actual needs. The efficiency of the proposed ACID system is tested with images from the Corel and MIT datasets.
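
    The clustering-of-relevant-images idea can be sketched with a toy incremental scheme: each relevance-marked feature vector either joins the nearest cluster (within a radius) or starts a new one, and subsequent queries are matched against cluster centroids rather than the whole database. The class name, radius rule, and Euclidean distance are our assumptions for illustration, not the paper's ACID algorithm.

```python
import numpy as np

class AdaptiveClusters:
    """Group relevance-marked feature vectors; query against centroids."""

    def __init__(self, radius=1.0):
        self.radius = radius
        self.clusters = []                       # lists of member vectors

    def add_relevant(self, feature):
        """Place a relevant image in the nearest cluster, or open a new one."""
        feature = np.asarray(feature, dtype=float)
        for members in self.clusters:
            centroid = np.mean(members, axis=0)
            if np.linalg.norm(feature - centroid) <= self.radius:
                members.append(feature)
                return
        self.clusters.append([feature])

    def representatives(self):
        """Cluster centroids, used in place of the whole database."""
        return [np.mean(m, axis=0) for m in self.clusters]

    def nearest_representative(self, query):
        query = np.asarray(query, dtype=float)
        dists = [np.linalg.norm(query - r) for r in self.representatives()]
        return int(np.argmin(dists))
```

    Matching against a handful of centroids instead of every stored image is what makes the feedback loop fast while still reflecting the user's accumulated preferences.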

  9. Image-based adaptive optics for in vivo imaging in the hippocampus

    PubMed Central

    Champelovier, D.; Teixeira, J.; Conan, J.-M.; Balla, N.; Mugnier, L. M.; Tressard, T.; Reichinnek, S.; Meimon, S.; Cossart, R.; Rigneault, H.; Monneret, S.; Malvache, A.

    2017-01-01

    Adaptive optics is a promising technique for the improvement of microscopy in tissues. A large palette of indirect and direct wavefront sensing methods has been proposed for in vivo imaging in experimental animal models. Application of most of these methods to complex samples suffers from intrinsic and/or practical difficulties. Here we show a theoretically optimized wavefront correction method for inhomogeneously labeled biological samples. We demonstrate its performance at a depth of 200 μm in brain tissue within a sparsely labeled region such as the pyramidal cell layer of the hippocampus, with cells expressing GCaMP6. This method is designed to be sample-independent thanks to an automatic axial locking on objects of interest through the use of an image-based metric that we designed. Using this method, we show an increase of in vivo imaging quality in the hippocampus. PMID:28220868

  10. Image-based adaptive optics for in vivo imaging in the hippocampus

    NASA Astrophysics Data System (ADS)

    Champelovier, D.; Teixeira, J.; Conan, J.-M.; Balla, N.; Mugnier, L. M.; Tressard, T.; Reichinnek, S.; Meimon, S.; Cossart, R.; Rigneault, H.; Monneret, S.; Malvache, A.

    2017-02-01

    Adaptive optics is a promising technique for the improvement of microscopy in tissues. A large palette of indirect and direct wavefront sensing methods has been proposed for in vivo imaging in experimental animal models. Application of most of these methods to complex samples suffers from intrinsic and/or practical difficulties. Here we show a theoretically optimized wavefront correction method for inhomogeneously labeled biological samples. We demonstrate its performance at a depth of 200 μm in brain tissue within a sparsely labeled region such as the pyramidal cell layer of the hippocampus, with cells expressing GCaMP6. This method is designed to be sample-independent thanks to an automatic axial locking on objects of interest through the use of an image-based metric that we designed. Using this method, we show an increase of in vivo imaging quality in the hippocampus.

  11. Self-adaptive image reconstruction inspired by insect compound eye mechanism.

    PubMed

    Zhang, Jiahua; Shi, Aiye; Wang, Xin; Bian, Linjie; Huang, Fengchen; Xu, Lizhong

    2012-01-01

    Inspired by the mechanism of imaging and adaptation to luminosity in insect compound eyes (ICE), we propose an ICE-based adaptive reconstruction method (ARM-ICE), which can adjust the sampling vision field of the image according to the environmental light intensity. The target scene can be compressively sampled independently over multiple channels through ARM-ICE. Meanwhile, ARM-ICE can regulate the visual field of sampling to control imaging according to the environmental light intensity. Based on the compressed sensing joint sparse model (JSM-1), we establish an information processing system for ARM-ICE. The simulation of a four-channel ARM-ICE system shows that the new method improves the peak signal-to-noise ratio (PSNR) and resolution of the reconstructed target scene under two different cases of light intensity. Furthermore, there is no distinct block effect in the result, and the edge of the reconstructed image is smoother than that obtained by the other two reconstruction methods in this work.

  12. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSP) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a pace of 10 MVoxels/sec with a 3D ultrasound probe. Relative performance and power are compared between a reference PC (Quad Core CPU) and a TMS320C6678 DSP from Texas Instruments.

  13. Support Routines for In Situ Image Processing

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect to these programs is that they are integrated into the large, in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with space in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: Generates a linearized, epipolar-aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlations, (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground, (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images like long-baseline or those from an articulating arm camera, (4) marscoordtrans: Translates mosaic coordinates from one form into another, (5) marsdispcompare: Checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other, (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image. For example, a right-eye image could be transformed to look like it was taken from the left eye via this program, (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the

  14. Edge Detection on Images of Pseudoimpedance Section Supported by Context and Adaptive Transformation Model Images

    NASA Astrophysics Data System (ADS)

    Kawalec-Latała, Ewa

    2014-03-01

    Most underground hydrocarbon storage sites are located in depleted natural gas reservoirs. Seismic survey is the most economical source of detailed subsurface information. Inversion of a seismic section to obtain a pseudoacoustic impedance section makes it possible to extract detailed subsurface information. The seismic wavelet parameters and noise both influence the resolution; low signal parameters, especially long signal duration, and the presence of noise decrease pseudoimpedance resolution. Approximating the distribution of acoustic pseudoimpedance from measured or modelled seismic data leads to visualisations and images useful for identifying stratum homogeneity. In this paper, the resolution of the geologic section image is improved by applying the minimum entropy deconvolution method before inversion. The author proposes context and adaptive transformation of images and edge detection methods as a way to increase the effectiveness of correct interpretation of simulated images. Edge detection algorithms using the Sobel, Prewitt, Roberts, and Canny operators, as well as the Laplacian of Gaussian method, are emphasised. Wiener filtering of the transformed image improves interpretation of the rock section structure by mapping the pseudoimpedance matrix onto proper acoustic pseudoimpedance values corresponding to the selected geologic stratum. The goal of the study is to develop applications of image transformation tools to inhomogeneity detection in salt deposits.
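
    The simplest of the operators listed above, the Sobel operator, convolves two 3x3 kernels with the image and thresholds the gradient magnitude. The sketch below is a direct, unoptimized illustration of that step (interior pixels only; the function name and threshold are ours), not the paper's full workflow.

```python
import numpy as np

def sobel_edges(img, threshold=1.0):
    """Binary edge map from the Sobel gradient magnitude (interior pixels)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            mag[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return mag > threshold
```

    On a pseudoimpedance section, a sharp impedance step between strata produces exactly the kind of vertical step edge this operator flags on both sides of the boundary.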

  15. Construction and solution of an adaptive image-restoration model for removing blur and mixed noise

    NASA Astrophysics Data System (ADS)

    Wang, Youquan; Cui, Lihong; Cen, Yigang; Sun, Jianjun

    2016-03-01

    We establish a practical regularized least-squares model with adaptive regularization for dealing with blur and mixed noise in images. This model has some advantages, such as good adaptability for edge restoration and noise suppression due to the application of a priori spatial information obtained from a polluted image. We further focus on finding an important feature of image restoration using an adaptive restoration model with different regularization parameters in polluted images. A more important observation is that the gradient of an image varies regularly from one regularization parameter to another under certain conditions. Then, a modified graduated nonconvexity approach combined with a median filter version of a spatial information indicator is proposed to seek the solution of our adaptive image-restoration model by applying variable splitting and weighted penalty techniques. Numerical experiments show that the method is robust and effective for dealing with various blur and mixed noise levels in images.

  16. Speckle reduction in ultrasound medical images using adaptive filter based on second order statistics.

    PubMed

    Thakur, A; Anand, R S

    2007-01-01

    This article discusses an adaptive filtering technique for reducing speckle using second order statistics of the speckle pattern in ultrasound medical images. Several region-based adaptive filter techniques have been developed for speckle noise suppression, but there are no specific criteria for selecting the region-growing size in the post-processing of the filter. The size appropriate for one local region may not be appropriate for other regions. Selection of the correct region size involves a trade-off between speckle reduction and edge preservation. Generally, a large region size is used to smooth speckle and a small size to preserve the edges in an image. In this paper, a smoothing procedure combines the first order statistics of speckle for the homogeneity test and second order statistics for the selection of filters and the desired region growth. The grey-level co-occurrence matrix (GLCM) is calculated for every region during region contraction and region growing for the second order statistics. These GLCM features then determine the appropriate filter for region smoothing. The performance of this approach is compared with the aggressive region-growing filter (ARGF) using edge preservation and speckle reduction tests. The processed image results show that the proposed method effectively reduces speckle noise and preserves edge details.
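
    The second order statistics in question come from the GLCM: a histogram of co-occurring grey-level pairs at a fixed pixel offset, from which texture features such as homogeneity are derived. The sketch below computes a single-offset GLCM and its homogeneity for an integer-quantized region; the function names and the way the feature would steer the filter choice are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def glcm(region, levels, offset=(0, 1)):
    """Normalized grey-level co-occurrence matrix of an integer-quantized
    region for a single pixel offset (dy, dx)."""
    dy, dx = offset
    P = np.zeros((levels, levels))
    h, w = region.shape
    for i in range(max(0, -dy), min(h, h - dy)):
        for j in range(max(0, -dx), min(w, w - dx)):
            P[region[i, j], region[i + dy, j + dx]] += 1
    return P / P.sum()

def homogeneity(P):
    """GLCM homogeneity: near 1 for smooth regions, lower for texture/edges,
    so it can steer how aggressively a region is smoothed."""
    idx = np.arange(P.shape[0])
    return float((P / (1.0 + np.abs(idx[:, None] - idx[None, :]))).sum())
```

    A flat region concentrates all co-occurrence mass on the diagonal (homogeneity 1), while speckled or edge regions spread it off-diagonal, signaling that a gentler, edge-preserving filter is appropriate.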

  17. Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images.

    PubMed

    Foi, Alessandro; Katkovnik, Vladimir; Egiazarian, Karen

    2007-05-01

    The shape-adaptive discrete cosine transform (SA-DCT) can be computed on a support of arbitrary shape, but retains a computational complexity comparable to that of the usual separable block-DCT (B-DCT). Despite its near-optimal decorrelation and energy compaction properties, application of the SA-DCT has been rather limited, targeted almost exclusively at video compression. In this paper, we present a novel approach to image filtering based on the SA-DCT. We use the SA-DCT in conjunction with the Anisotropic Local Polynomial Approximation-Intersection of Confidence Intervals technique, which defines the shape of the transform's support in a pointwise adaptive manner. The thresholded or attenuated SA-DCT coefficients are used to reconstruct a local estimate of the signal within the adaptive-shape support. Since supports corresponding to different points are in general overlapping, the local estimates are averaged together using adaptive weights that depend on the region's statistics. This approach can be used for various image-processing tasks. In this paper, we consider, in particular, image denoising and image deblocking and deringing from block-DCT compression. A special structural constraint in luminance-chrominance space is also proposed to enable accurate filtering of color images. Simulation experiments show a state-of-the-art quality of the final estimate, both in terms of objective criteria and visual appearance. Thanks to the adaptive support, reconstructed edges are clean, and no unpleasant ringing artifacts are introduced by the fitted transform.
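
    The transform-thresholding step at the heart of the method can be illustrated with a fixed-block stand-in: an 8x8 orthonormal B-DCT with hard thresholding of the coefficients. This is a simplified sketch, not the shape-adaptive transform itself, and the threshold value is an assumption:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: dct_matrix(n) @ dct_matrix(n).T == I."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0] /= np.sqrt(2.0)
    return c

def block_dct_denoise(img, block=8, thr=0.25):
    """Hard-threshold DCT coefficients in fixed square blocks.
    A simplified stand-in for the pointwise SA-DCT: the support is
    always a square instead of an adaptively shaped region."""
    d = dct_matrix(block)
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            coef = d @ img[r:r + block, c:c + block] @ d.T
            coef[np.abs(coef) < thr] = 0.0      # hard thresholding
            out[r:r + block, c:c + block] = d.T @ coef @ d
    return out
```

    The fixed square support is exactly what produces the ringing near edges that the adaptive-shape support avoids.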

  18. Results of precision processing (scene correction) of ERTS-1 images using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Bernstein, R.

    1973-01-01

    ERTS-1 MSS and RBV data recorded on computer compatible tapes have been analyzed and processed, and preliminary results have been obtained. No degradation of intensity (radiance) information occurred in implementing the geometric correction. The quality and resolution of the digitally processed images are very good, due primarily to the fact that the number of film generations and conversions is reduced to a minimum. Processing times for digitally processed images are about equivalent to those of the NDPF electro-optical processor.

  19. Chandra Automatic Processing Task Interface: An Adaptable System Architecture

    NASA Astrophysics Data System (ADS)

    Grier, J. D., Jr.; Plummer, D.

    2007-10-01

    The Chandra Automatic Processing Task Interface (CAPTAIN) is an operations interface to Chandra Automatic Processing (AP) that provides detail management and execution of the AP pipelines. In particular, this kind of management is used in Special Automatic Processing (SAP) where there is a need to select specific pipelines that require non-standard handling for reprocessing of a given data set. Standard AP currently contains approximately 200 pipelines with complex interactions between them. As AP has evolved over the life of the mission, so has the number and attributes of these pipelines. As a result, CAPTAIN provides a system architecture capable of managing and adapting to this evolving system. This adaptability has allowed CAPTAIN to also be used to initiate Chandra Source Catalog Automatic Processing (Level 3 AP) and positions it for use with future automatic processing systems. This paper describes the approach to the development of the CAPTAIN system architecture and the maintainable, extensible and reusable software architecture by which it is implemented.

  20. Digital image database processing to simulate image formation in ideal lighting conditions of the human eye

    NASA Astrophysics Data System (ADS)

    Castañeda-Santos, Jessica; Santiago-Alvarado, Agustin; Cruz-Félix, Angel S.; Hernández-Méndez, Arturo

    2015-09-01

    The pupil size of the human eye has a large effect on image quality due to inherent aberrations. Several studies have been performed to calculate pupil size as a function of luminance, also considering other factors such as age, size of the adapting field, and monocular versus binocular vision. Moreover, although ideal lighting conditions are known, software suited to our specific requirements of low cost and low computational consumption, able to simulate radiation adaptation and image formation in the retina under ideal lighting conditions, has not yet been developed. In this work, a database is created consisting of 70 photographs of the same scene with a fixed target taken at different times of the day. Using this database, characteristics of the photographs are obtained by measuring the average luminance initial threshold value of each photograph by means of an image histogram. We also present the implementation of a digital filter for both image processing on the threshold values of our database and generating output images with the threshold values reported for the human eye in ideal cases. Filters of this kind may find applications in artificial vision systems.

  1. Adaptive processes drive ecomorphological convergent evolution in antwrens (Thamnophilidae).

    PubMed

    Bravo, Gustavo A; Remsen, J V; Brumfield, Robb T

    2014-10-01

    Phylogenetic niche conservatism (PNC) and convergence are contrasting evolutionary patterns that describe phenotypic similarity across independent lineages. Assessing whether and how adaptive processes give rise to these patterns represents a fundamental step toward understanding phenotypic evolution. Phylogenetic model-based approaches offer the opportunity not only to distinguish between PNC and convergence, but also to determine the extent to which adaptive processes explain phenotypic similarity. The Myrmotherula complex in the Neotropical family Thamnophilidae is a polyphyletic group of sexually dimorphic small insectivorous forest birds that are relatively homogeneous in size and shape. Here, we integrate a comprehensive species-level molecular phylogeny of the Myrmotherula complex with morphometric and ecological data within a comparative framework to test whether phenotypic similarity is described by a pattern of PNC or convergence, and to identify evolutionary mechanisms underlying body size and shape evolution. We show that antwrens in the Myrmotherula complex represent distantly related clades that exhibit adaptive convergent evolution in body size and divergent evolution in body shape. Phenotypic similarity in the group is primarily driven by their tendency to converge toward smaller body sizes. Differences in body size and shape across lineages are associated with ecological and behavioral factors.

  2. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector is analyzed, based on classical or fast projection algorithms, which estimates the background by median filtering or by the method of bilateral spatial contrast.
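
    Of the classical adaptive algorithms mentioned, MUSIC is the simplest to sketch. A minimal numpy version of the MUSIC pseudospectrum for a uniform linear array, assuming half-wavelength element spacing and a known source count (both assumptions of this sketch, not of the review):

```python
import numpy as np

def music_spectrum(snapshots, n_sources, angles_deg, spacing=0.5):
    """MUSIC pseudospectrum for a uniform linear array.
    snapshots: (n_sensors, n_snapshots) complex data;
    spacing: element spacing in wavelengths."""
    n = snapshots.shape[0]
    r = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    _, v = np.linalg.eigh(r)                 # eigenvalues in ascending order
    en = v[:, : n - n_sources]               # noise-subspace eigenvectors
    k = np.arange(n)[:, None]
    a = np.exp(-2j * np.pi * spacing * k
               * np.sin(np.radians(angles_deg))[None, :])  # steering vectors
    den = np.sum(np.abs(en.conj().T @ a) ** 2, axis=0)
    return 1.0 / den                         # peaks at source bearings
```

    Peaks appear where the steering vector is orthogonal to the noise subspace; multipath arrivals that are coherent with the direct signal violate MUSIC's assumptions, which is one motivation for the projection algorithms the review discusses.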

  3. Parallel Processing of Adaptive Meshes with Load Balancing

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than those under PLUM by overlapping processing and data migration.

  4. Processing Images of Craters for Spacecraft Navigation

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

    2009-01-01

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
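
    Step 4, the ellipse fit, can be illustrated with a plain algebraic least-squares conic fit. This is a simplified sketch (crater detectors in practice typically use an ellipse-constrained direct fit, which is more robust for short arcs), and the function names are illustrative:

```python
import numpy as np

def fit_ellipse(x, y):
    """Algebraic least-squares conic fit: solve for A..E in
    A x^2 + B xy + C y^2 + D x + E y = 1, valid when the conic does
    not pass through the origin."""
    M = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
    return coef

def conic_residual(coef, x, y):
    """Per-point algebraic residual of the fitted conic."""
    M = np.column_stack([x * x, x * y, y * y, x, y])
    return np.abs(M @ coef - 1.0)
```

    The algebraic residual used here is a proxy for geometric error; the abstract's step 5, refining the ellipses directly in the image domain, addresses exactly the gap between the two.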

  5. [Image processing of early gastric cancer cases].

    PubMed

    Inamoto, K; Umeda, T; Inamura, K

    1992-11-25

    Computer image processing was used to enhance gastric lesions in order to improve the detection of stomach cancer. Digitization was performed in 25 cases of early gastric cancer that had been confirmed surgically and pathologically. The image processing consisted of grey scale transformation, edge enhancement (Sobel operator), and high-pass filtering (unsharp masking). Grey scale transformation improved image quality for the detection of gastric lesions. The Sobel operator enhanced linear and curved margins while suppressing other structures. High-pass filtering with unsharp masking was superior for visualization of the texture pattern on the mucosa. Eight of 10 small lesions (less than 2.0 cm) were successfully demonstrated. However, the detection of two lesions in the antrum was difficult even with the aid of image enhancement. In the other 15 lesions (more than 2.0 cm), the tumor surface pattern and the margin between the tumor and non-pathological mucosa were clearly visualized. Image processing was considered to contribute to the detection of small early gastric cancer lesions by enhancing the pathological lesions.
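
    The two filtering steps named here are standard operations. A minimal numpy sketch of Sobel edge enhancement and unsharp masking; the 3x3 kernels and the `amount` parameter are conventional defaults, not values from the paper:

```python
import numpy as np

def conv2_same(img, k):
    """Naive 'same'-size 2-D cross-correlation with zero padding."""
    p = k.shape[0] // 2
    pad = np.pad(img, p)
    out = np.zeros_like(img, dtype=float)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def sobel_magnitude(img):
    """Gradient magnitude from horizontal and vertical Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx = conv2_same(img, kx)
    gy = conv2_same(img, kx.T)
    return np.hypot(gx, gy)

def unsharp_mask(img, amount=1.0):
    """Boost high frequencies by adding back (image - low-pass)."""
    blur = conv2_same(img, np.ones((3, 3)) / 9.0)
    return img + amount * (img - blur)
```

    Sobel output peaks along margins and vanishes in flat mucosa, while unsharp masking leaves flat regions unchanged and amplifies fine texture.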

  6. Adapting high-resolution speckle imaging to moving targets and platforms

    SciTech Connect

    Carrano, C J; Brase, J M

    2004-02-05

    High-resolution surveillance imaging with apertures greater than a few inches over horizontal or slant paths at optical or infrared wavelengths will typically be limited by atmospheric aberrations. With static targets and static platforms, we have previously demonstrated near-diffraction limited imaging of various targets including personnel and vehicles over horizontal and slant paths ranging from less than a kilometer to many tens of kilometers using adaptations to bispectral speckle imaging techniques. Nominally, these image processing methods require the target to be static with respect to its background during the data acquisition since multiple frames are required. To obtain a sufficient number of frames and also to allow the atmosphere to decorrelate between frames, data acquisition times on the order of one second are needed. Modifications to the original imaging algorithm will be needed to deal with situations where there is relative target to background motion. In this paper, we present an extension of these imaging techniques to accommodate mobile platforms and moving targets.

  7. Feedback regulation of microscopes by image processing.

    PubMed

    Tsukada, Yuki; Hashimoto, Koichi

    2013-05-01

    Computational microscope systems are becoming a major part of imaging biological phenomena, and the development of such systems requires the design of automated regulation of microscopes. An important aspect of automated regulation is feedback regulation, which is the focus of this review. As modern microscope systems become more complex, often with many independent components that must work together, computer control is inevitable, since the exact orchestration of parameters and timings for these multiple components is critical to acquire proper images. A number of techniques have been developed for biological imaging to accomplish this. Here, we summarize the basics of computational microscopy for the purpose of building automatically regulated microscopes, focusing on feedback regulation by image processing. These techniques allow high-throughput data acquisition while monitoring both short- and long-term dynamic phenomena, which cannot be achieved without an automated system.

  8. Enhanced neutron imaging detector using optical processing

    SciTech Connect

    Hutchinson, D.P.; McElhaney, S.A.

    1992-01-01

    Existing neutron imaging detectors have limited count rates due to inherent property and electronic limitations. The popular multiwire proportional counter is limited by gas recombination to a count rate of less than 10^5 n/s over the entire array, and the neutron Anger camera, even though improved with new fiber optic encoding methods, can only achieve 10^6 cps over a limited array. We present a preliminary design for a new type of neutron imaging detector with a resolution of 2-5 mm and a count rate capability of 10^6 cps per pixel element. We propose to combine optical and electronic processing to economically increase the throughput of advanced detector systems while simplifying computing requirements. By placing a scintillator screen ahead of an optical image processor followed by a detector array, a high throughput imaging detector may be constructed.

  9. 3D integral imaging with optical processing

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our Group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  10. Simplified labeling process for medical image segmentation.

    PubMed

    Gao, Mingchen; Huang, Junzhou; Huang, Xiaolei; Zhang, Shaoting; Metaxas, Dimitris N

    2012-01-01

    Image segmentation plays a crucial role in many medical imaging applications by automatically locating the regions of interest. Typically, supervised learning based segmentation methods require a large set of accurately labeled training data. However, the labeling process is tedious, time consuming and sometimes not necessary. We propose a robust logistic regression algorithm to handle label outliers such that doctors do not need to waste time on precisely labeling images for the training set. To validate its effectiveness and efficiency, we conduct carefully designed experiments on cervigram image segmentation in the presence of label outliers. Experimental results show that the proposed robust logistic regression algorithms achieve superior performance compared to previous methods, which validates the benefits of the proposed algorithms.

  11. Adaptive optics images. III. 87 Kepler objects of interest

    SciTech Connect

    Dressing, Courtney D.; Dupree, Andrea K.; Adams, Elisabeth R.; Kulesa, Craig; McCarthy, Don

    2014-11-01

    The Kepler mission has revolutionized our understanding of exoplanets, but some of the planet candidates identified by Kepler may actually be astrophysical false positives or planets whose transit depths are diluted by the presence of another star. Adaptive optics images made with ARIES at the MMT of 87 Kepler Objects of Interest place limits on the presence of fainter stars in or near the Kepler aperture. We detected visual companions within 1'' for 5 stars, between 1'' and 2'' for 7 stars, and between 2'' and 4'' for 15 stars. For those systems, we estimate the brightness of companion stars in the Kepler bandpass and provide approximate corrections to the radii of associated planet candidates due to the extra light in the aperture. For all stars observed, we report detection limits on the presence of nearby stars. ARIES is typically sensitive to stars approximately 5.3 Ks magnitudes fainter than the target star within 1'' and approximately 5.7 Ks magnitudes fainter within 2'', but can detect stars as faint as ΔKs = 7.5 under ideal conditions.

  12. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet; the National Library of Medicine's DocView is primarily designed for library patrons. Although libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  13. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.

  14. Image classification with densely sampled image windows and generalized adaptive multiple kernel learning.

    PubMed

    Yan, Shengye; Xu, Xinxing; Xu, Dong; Lin, Stephen; Li, Xuelong

    2015-03-01

    We present a framework for image classification that extends beyond the window sampling of fixed spatial pyramids and is supported by a new learning algorithm. Based on the observation that fixed spatial pyramids sample a rather limited subset of the possible image windows, we propose a method that accounts for a comprehensive set of windows densely sampled over location, size, and aspect ratio. A concise high-level image feature is derived to effectively deal with this large set of windows, and this higher level of abstraction offers both efficient handling of the dense samples and reduced sensitivity to misalignment. In addition to dense window sampling, we introduce generalized adaptive l(p)-norm multiple kernel learning (GA-MKL) to learn a robust classifier based on multiple base kernels constructed from the new image features and multiple sets of prelearned classifiers from other classes. With GA-MKL, multiple levels of image features are effectively fused, and information is shared among different classifiers. Extensive evaluation on benchmark datasets for object recognition (Caltech256 and Caltech101) and scene recognition (15Scenes) demonstrates that the proposed method outperforms the state-of-the-art under a broad range of settings.

  15. Wavelet-aided pavement distress image processing

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Huang, Peisen S.; Chiang, Fu-Pen

    2003-11-01

    A wavelet-based pavement distress detection and evaluation method is proposed. This method consists of two main parts, real-time processing for distress detection and offline processing for distress evaluation. The real-time processing part includes wavelet transform, distress detection and isolation, and image compression and noise reduction. When a pavement image is decomposed into different frequency subbands by wavelet transform, the distresses, which are usually irregular in shape, appear as high-amplitude wavelet coefficients in the high-frequency details subbands, while the background appears in the low-frequency approximation subband. Two statistical parameters, high-amplitude wavelet coefficient percentage (HAWCP) and high-frequency energy percentage (HFEP), are established and used as criteria for real-time distress detection and distress image isolation. For compression of isolated distress images, a modified EZW (Embedded Zerotrees of Wavelet coding) is developed, which can simultaneously compress the images and reduce the noise. The compressed data are saved to the hard drive for further analysis and evaluation. The offline processing includes distress classification, distress quantification, and reconstruction of the original image for distress segmentation, distress mapping, and maintenance decision-making. The compressed data are first loaded and decoded to obtain wavelet coefficients. The Radon transform is then applied, and the parameters related to the peaks in the Radon domain are used for distress classification. For distress quantification, a norm is defined that can be used as an index for evaluating the severity and extent of the distress. Compared to visual or manual inspection, the proposed method has the advantages of being objective, high-speed, safe, automated, and applicable to different types of pavements and distresses.
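
    The two detection statistics can be sketched with a single-level orthonormal Haar transform standing in for the paper's wavelet; the threshold is a free parameter here, and the particular wavelet and decomposition depth are assumptions of this sketch:

```python
import numpy as np

def haar2(img):
    """One level of the orthonormal 2-D Haar transform.
    Returns the approximation LL and detail subbands LH, HL, HH."""
    a, b = img[0::2, :], img[1::2, :]
    lo, hi = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)   # along rows
    def split_cols(m):
        return ((m[:, 0::2] + m[:, 1::2]) / np.sqrt(2),
                (m[:, 0::2] - m[:, 1::2]) / np.sqrt(2))
    ll, lh = split_cols(lo)
    hl, hh = split_cols(hi)
    return ll, lh, hl, hh

def distress_stats(img, thr):
    """HAWCP: fraction of detail coefficients with magnitude > thr.
    HFEP: detail-band energy as a fraction of total energy."""
    ll, lh, hl, hh = haar2(img)
    detail = np.concatenate([lh.ravel(), hl.ravel(), hh.ravel()])
    hawcp = float(np.mean(np.abs(detail) > thr))
    hfep = float(np.sum(detail ** 2)
                 / (np.sum(ll ** 2) + np.sum(detail ** 2)))
    return hawcp, hfep
```

    A smooth pavement patch puts nearly all energy in LL, so both statistics are near zero; a crack produces high-amplitude detail coefficients, raising both.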

  16. Hemispheric superiority for processing a mirror image.

    PubMed

    Garren, R B; Gehlsen, G M

    1981-04-01

    39 adult subjects were administered a test using tachistoscopic half-field presentations to determine hemispheric dominance and a mirror-tracing task to determine whether a hemispheric superiority exists for processing a mirror image. The results indicate superiority of the nondominant hemisphere for this task.

  17. Image Processing Using a Parallel Architecture.

    DTIC Science & Technology

    1987-12-01


  18. Stochastic processes, estimation theory and image enhancement

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1978-01-01

    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability required to support the main topics are reviewed, and the appendices discuss the remaining mathematical background.

  19. PYNPOINT: an image processing package for finding exoplanets

    NASA Astrophysics Data System (ADS)

    Amara, Adam; Quanz, Sascha P.

    2012-12-01

    We present the scientific performance results of PYNPOINT, our Python-based software package that uses principal component analysis to detect and estimate the flux of exoplanets in two-dimensional imaging data. Recent advances in adaptive optics and imaging technology at visible and infrared wavelengths have opened the door to direct detections of planetary companions to nearby stars, but image processing techniques have yet to be optimized. We show that the performance of our approach gives a marked improvement over what is presently possible using existing methods such as LOCI. To test our approach, we use real angular differential imaging (ADI) data taken with the adaptive optics-assisted high resolution near-infrared camera NACO at the VLT. These data were taken during the commissioning of the apodizing phase plate (APP) coronagraph. By inserting simulated planets into these data, we test the performance of our method as a function of planet brightness for different positions on the image. We find that in all cases PYNPOINT has a detection threshold that is superior to that given by our LOCI analysis when assessed in a common statistical framework. We obtain our best improvements for smaller inner working angles (IWAs). For an IWA of ˜0.29 arcsec we find that we achieve a detection sensitivity that is a factor of 5 better than LOCI. We also investigate our ability to correctly measure the flux of planets. Again, we find improvements over LOCI, with PYNPOINT giving more stable results. Finally, we apply our package to a non-APP data set of the exoplanet β Pictoris b and reveal the planet with high signal-to-noise. This confirms that PYNPOINT can potentially be applied with high fidelity to a wide range of high-contrast imaging data sets.
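
    The core of PCA-based processing of an ADI frame stack can be sketched in a few lines of numpy; this omits the derotation, flux estimation, and detection statistics of the actual PYNPOINT package, and the function name is illustrative:

```python
import numpy as np

def pca_psf_subtract(frames, n_modes):
    """Subtract the stellar PSF from a stack of frames by projecting
    each mean-subtracted frame onto the first `n_modes` principal
    components of the stack and removing that projection."""
    n, h, w = frames.shape
    X = frames.reshape(n, h * w)
    Xc = X - X.mean(axis=0)                 # remove the mean frame
    # Right singular vectors are the principal components (eigen-images).
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    basis = vt[:n_modes]
    model = Xc @ basis.T @ basis            # projection onto PSF modes
    return (Xc - model).reshape(n, h, w)
```

    The quasi-static PSF dominates the leading modes and is removed, while a faint companion, which moves with respect to the PSF in ADI data, survives in the residuals; choosing `n_modes` trades PSF suppression against planet self-subtraction.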

  20. Limiting liability via high resolution image processing

    SciTech Connect

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as "evidence ready", even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed as usable evidence. Visualization scientists have taken digital photographic image processing and moved the handling of crime scene photos into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement extends photographic capability and helps solve a major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.