Sample records for image processing functions

  1. Form or function: Does focusing on body functionality protect women from body dissatisfaction when viewing media images?

    PubMed

    Mulgrew, Kate E; Tiggemann, Marika

    2018-01-01

We examined whether shifting young women's (N = 322) attention toward functionality components of media-portrayed idealized images would protect against body dissatisfaction. Image type was manipulated via images of models in either an objectified body-as-object form or an active body-as-process form; viewing focus was manipulated via questions about the appearance or functionality of the models. Social comparison was examined as a moderator. Negative outcomes were most pronounced within the process-related conditions (body-as-process images or functionality viewing focus) and for women who reported greater functionality comparison. Results suggest that functionality-based depictions, reflections, and comparisons may actually produce worse outcomes than those based on appearance.

  2. OIPAV: an integrated software system for ophthalmic image processing, analysis and visualization

    NASA Astrophysics Data System (ADS)

    Zhang, Lichun; Xiang, Dehui; Jin, Chao; Shi, Fei; Yu, Kai; Chen, Xinjian

    2018-03-01

OIPAV (Ophthalmic Images Processing, Analysis and Visualization) is cross-platform software specially oriented to ophthalmic images. It provides a wide range of functionality, including data I/O, image processing, interaction, ophthalmic disease detection, data analysis, and visualization, to help researchers and clinicians deal with various ophthalmic images such as optical coherence tomography (OCT) images and color fundus photographs. It enables users to easily access ophthalmic image data produced by different imaging devices, facilitates workflows for processing ophthalmic images, and improves quantitative evaluation. In this paper, we present the system design and functional modules of the platform and demonstrate various applications. With good functional scalability and extensibility, we believe the software can be widely applied in the ophthalmology field.

  3. A real-time MTFC algorithm of space remote-sensing camera based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Liting; Huang, Gang; Lin, Zhe

    2018-01-01

A real-time MTFC (modulation transfer function compensation) algorithm for a space remote-sensing camera, implemented on an FPGA, was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is operating on orbit. The image restoration algorithm uses a modular design. The on-orbit MTF measurement module computes the edge spread function (ESF), the line spread function, the ESF difference operation, the normalized MTF, and the MTFC parameters. The MTFC filtering module performs the image filtering and effectively suppresses noise. System Generator was used to design the image processing algorithms, simplifying the system's design structure and the redesign process. The image's gray-level gradient, point sharpness, edge contrast, and mid-to-high frequencies were enhanced, and the SNR of the restored image decreased by less than 1 dB compared with the original image. The image restoration system can be widely used in various fields.
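The on-orbit MTF measurement chain the abstract names (edge spread function, difference to line spread function, then normalized MTF) can be sketched in software. This is an illustrative pure-Python model of that chain, not the paper's FPGA implementation; the sample ESF is made up:

```python
import math

def mtf_from_esf(esf):
    """Estimate the MTF from a sampled edge spread function (ESF):
    differentiate the ESF to get the line spread function (LSF),
    take the Fourier magnitude of the LSF, and normalize so MTF(0) = 1."""
    # LSF: finite difference of the ESF (the "ESF difference operation")
    lsf = [esf[i + 1] - esf[i] for i in range(len(esf) - 1)]
    n = len(lsf)
    # Magnitude of the DFT of the LSF (naive O(n^2) DFT for clarity)
    mtf = []
    for k in range(n):
        re = sum(lsf[j] * math.cos(-2 * math.pi * k * j / n) for j in range(n))
        im = sum(lsf[j] * math.sin(-2 * math.pi * k * j / n) for j in range(n))
        mtf.append(math.hypot(re, im))
    # Normalize to the zero-frequency value
    return [m / mtf[0] for m in mtf]

# Ideal step edge blurred over a couple of samples (hypothetical data)
esf = [0.0, 0.0, 0.0, 0.25, 0.75, 1.0, 1.0, 1.0]
mtf = mtf_from_esf(esf)
print(round(mtf[0], 3))
```

An MTFC filter would then boost the frequencies where this curve falls below 1, which is why the noise-suppression stage the abstract mentions is needed.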

  4. Advanced Land Imager Assessment System

    NASA Technical Reports Server (NTRS)

Chander, Gyanesh; Choate, Mike; Christopherson, Jon; Hollaren, Doug; Morfitt, Ron; Nelson, Jim; Nelson, Shar; Storey, James; Helder, Dennis; Ruggles, Tim

    2008-01-01

The Advanced Land Imager Assessment System (ALIAS) supports radiometric and geometric image processing for the Advanced Land Imager (ALI) instrument onboard NASA's Earth Observing-1 (EO-1) satellite. ALIAS consists of two processing subsystems for radiometric and geometric processing of the ALI's multispectral imagery. The radiometric processing subsystem characterizes and corrects, where possible, radiometric qualities including coherent, impulse, and random noise; signal-to-noise ratios (SNRs); detector operability; gain; bias; saturation levels; striping and banding; and the stability of detector performance. The geometric processing subsystem and analysis capabilities support sensor alignment calibrations, sensor chip assembly (SCA)-to-SCA alignments, and band-to-band alignment, and perform geodetic accuracy assessments, modulation transfer function (MTF) characterizations, and image-to-image characterizations. ALIAS also characterizes and corrects band-to-band registration, and performs systematic precision and terrain correction of ALI images. This system can geometrically correct, and automatically mosaic, the SCA image strips into a seamless, map-projected image. The system provides a large database which enables bulk trending for all ALI image data and significant instrument telemetry. Bulk trending consists of two functions: Housekeeping Processing and Bulk Radiometric Processing. The Housekeeping function pulls telemetry and temperature information from the instrument housekeeping files and writes this information to a database for trending. The Bulk Radiometric Processing function writes statistical information from the dark data acquired before and after the Earth imagery, and from the lamp data, to the database for trending. This allows for multi-scene statistical analyses.

  5. Subband/Transform MATLAB Functions For Processing Images

    NASA Technical Reports Server (NTRS)

    Glover, D.

    1995-01-01

SUBTRANS is a package of routines implementing image-data-processing functions for use with MATLAB(TM) software. It provides the capability to transform image data with block transforms and to produce spatial-frequency subbands of the transformed data. Functions can be cascaded to provide further decomposition into more subbands, and can also be used in image-data-compression systems; for example, the transforms can prepare data for lossy compression. Written for use in the MATLAB mathematical-analysis environment.
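As an illustration of the kind of block-transform subband split such a package performs, here is a one-level 2x2 Haar decomposition (a plausible example transform; the actual SUBTRANS routines are MATLAB functions and may use different transforms):

```python
def haar_subbands(img):
    """One level of a 2x2 Haar block transform, splitting an image
    (list of lists, even dimensions) into four spatial-frequency
    subbands LL, LH, HL, HH.  Each subband can itself be fed back in,
    cascading the decomposition into further subbands."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0  # low/low: local average
            LH[i // 2][j // 2] = (a - b + c - d) / 4.0  # horizontal detail
            HL[i // 2][j // 2] = (a + b - c - d) / 4.0  # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4.0  # diagonal detail
    return LL, LH, HL, HH

flat = [[5.0] * 4 for _ in range(4)]        # constant test image
LL, LH, HL, HH = haar_subbands(flat)
print(LL[0][0], LH[0][0])
```

For a constant image all the detail subbands vanish, which is exactly why such decompositions help lossy compression: most energy concentrates in LL.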

  6. Photogrammetry Toolbox Reference Manual

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Burner, Alpheus W.

    2014-01-01

    Specialized photogrammetric and image processing MATLAB functions useful for wind tunnel and other ground-based testing of aerospace structures are described. These functions include single view and multi-view photogrammetric solutions, basic image processing to determine image coordinates, 2D and 3D coordinate transformations and least squares solutions, spatial and radiometric camera calibration, epipolar relations, and various supporting utility functions.
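One of the "coordinate transformations and least squares solutions" such a toolbox typically provides can be sketched as follows. This closed-form least-squares fit of a 2D similarity transform is an illustrative example, not the toolbox's own code, and the point data is hypothetical:

```python
def fit_similarity_2d(src, dst):
    """Least-squares fit of a 2D similarity transform
        x' = a*x - b*y + tx,   y' = b*x + a*y + ty
    to matched point pairs, using the standard closed form on
    mean-centered coordinates."""
    n = len(src)
    mx = sum(p[0] for p in src) / n
    my = sum(p[1] for p in src) / n
    mu = sum(p[0] for p in dst) / n
    mv = sum(p[1] for p in dst) / n
    num_a = num_b = den = 0.0
    for (x, y), (u, v) in zip(src, dst):
        xc, yc, uc, vc = x - mx, y - my, u - mu, v - mv
        num_a += xc * uc + yc * vc
        num_b += xc * vc - yc * uc
        den += xc * xc + yc * yc
    a, b = num_a / den, num_b / den
    tx = mu - (a * mx - b * my)       # translation recovered from the means
    ty = mv - (b * mx + a * my)
    return a, b, tx, ty

# Points rotated 90 degrees (a=0, b=1) and shifted by (1, 2)
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(1, 2), (1, 3), (0, 2), (0, 3)]
a, b, tx, ty = fit_similarity_2d(src, dst)
print(round(a, 6), round(b, 6), round(tx, 6), round(ty, 6))
```

With noisy measured image coordinates the same formula gives the best fit in the least-squares sense, which is the typical use in wind-tunnel target tracking.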

  7. The Application of Special Computing Techniques to Speed-Up Image Feature Extraction and Processing Techniques.

    DTIC Science & Technology

    1981-12-01

…processors has led to the possibility of implementing a large number of image processing functions in near "real time," a result which is essential to establishing a… for example, and… rapid image handling for near real-time interaction by a user at a display. For example, for a large-resolution image, say…

  8. Display nonlinearity in digital image processing for visual communications

    NASA Astrophysics Data System (ADS)

    Peli, Eli

    1992-11-01

The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. The effect of this nonlinear transformation on a variety of image-processing applications used in visual communications is described.
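The compensation described here is commonly implemented as an inverse-gamma pre-distortion of the digital pixel values. A minimal sketch, assuming a typical CRT gamma of 2.2 (the abstract does not give a value):

```python
def gamma_correct(digital, gamma=2.2, levels=256):
    """Pre-distort a digital pixel value so that a power-law display
    (luminance ~ voltage**gamma) reproduces the intended linear
    luminance.  gamma=2.2 is a typical CRT value, assumed here."""
    linear = digital / (levels - 1)          # intended relative luminance
    corrected = linear ** (1.0 / gamma)      # inverse of the display gamma
    return round(corrected * (levels - 1))

# What the display then emits for the corrected value follows the gamma law:
v = gamma_correct(128)                # mid-gray, intended luminance ~0.502
displayed = (v / 255) ** 2.2          # relative luminance the CRT emits
print(v, round(displayed, 3))
```

Without the pre-distortion, the stored linear value 128 would be displayed at roughly (128/255)**2.2, which is about 0.22 of maximum luminance rather than the intended half, illustrating the distortion the paper analyzes.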

  9. Display nonlinearity in digital image processing for visual communications

    NASA Astrophysics Data System (ADS)

    Peli, Eli

    1991-11-01

The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. This paper describes the effect of this nonlinear transformation on a variety of image-processing applications used in visual communications.

  10. White-Light Optical Information Processing and Holography.

    DTIC Science & Technology

    1983-05-03

…Processing, White-Light Holography, Image Subtraction, Image Deblurring, Coherence Requirement, Apparent Transfer Function, Source Encoding, Signal… …in this period, also demonstrated several color image processing capabilities. Among those are broadband color image deblurring and color image… …Broadband Image Deblurring… Color Image Subtraction… Rainbow Holographic Aberrations…

  11. Functional Imaging Biomarkers: Potential to Guide an Individualised Approach to Radiotherapy.

    PubMed

    Prestwich, R J D; Vaidyanathan, S; Scarsbrook, A F

    2015-10-01

    The identification of robust prognostic and predictive biomarkers would transform the ability to implement an individualised approach to radiotherapy. In this regard, there has been a surge of interest in the use of functional imaging to assess key underlying biological processes within tumours and their response to therapy. Importantly, functional imaging biomarkers hold the potential to evaluate tumour heterogeneity/biology both spatially and temporally. An ever-increasing range of functional imaging techniques is now available primarily involving positron emission tomography and magnetic resonance imaging. Small-scale studies across multiple tumour types have consistently been able to correlate changes in functional imaging parameters during radiotherapy with disease outcomes. Considerable challenges remain before the implementation of functional imaging biomarkers into routine clinical practice, including the inherent temporal variability of biological processes within tumours, reproducibility of imaging, determination of optimal imaging technique/combinations, timing during treatment and design of appropriate validation studies. Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  12. Research on fast Fourier transforms algorithm of huge remote sensing image technology with GPU and partitioning technology.

    PubMed

    Yang, Xue; Li, Xue-You; Li, Jia-Guo; Ma, Jun; Zhang, Li; Yang, Jan; Du, Quan-Ye

    2014-02-01

The fast Fourier transform (FFT) is a basic approach to remote sensing image processing. With improvements in remote sensing image capture (hyperspectral imaging, high spatial resolution, and high temporal resolution), how to use FFT technology to efficiently process huge remote sensing images has become a critical step and research hot spot in current image processing technology. The FFT, one of the basic algorithms of image processing, can be used for stripe noise removal, image compression, image registration, etc. in processing remote sensing images. The CUFFT library is a GPU-based FFT algorithm library; FFTW is an FFT algorithm library developed for CPUs on the PC platform, and is currently the fastest CPU-based FFT library. However, both methods share a common problem: once the available GPU memory or main memory is smaller than the image, computing the image FFT fails with an out-of-memory error or memory overflow. To address this problem, a GPU- and partitioning-technology-based Huge Remote Fast Fourier Transform (HRFFT) algorithm is proposed in this paper. By improving the FFT algorithm in the CUFFT library, the out-of-memory and memory-overflow problem is solved. Moreover, the method is validated by experiments with CCD images from the HJ-1A satellite. When applied to practical image processing, it improves the quality of the result and speeds up the processing, saving computation time and achieving sound results.
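The partitioning idea rests on the separability of the 2-D transform: row FFTs followed by column FFTs, each of which can be done a chunk at a time so that only part of the image needs to be resident. A toy pure-Python sketch of that structure (a naive DFT stands in for the CUFFT calls; the real HRFFT runs batched FFTs on the GPU):

```python
import cmath

def dft(vec):
    """Naive 1-D DFT (stand-in for a library FFT call on one row/column)."""
    n = len(vec)
    return [sum(vec[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def dft2_partitioned(img, chunk=2):
    """2-D DFT computed separably: transform rows, then columns,
    `chunk` lines at a time -- mirroring the idea of partitioning a
    huge image so each 1-D batch fits in (GPU) memory."""
    h = len(img)
    rows = []
    for i in range(0, h, chunk):                 # a few rows at a time
        rows.extend(dft(r) for r in img[i:i + chunk])
    w = len(rows[0])
    out = [[0j] * w for _ in range(h)]
    for j0 in range(0, w, chunk):                # then a few columns at a time
        for j in range(j0, min(j0 + chunk, w)):
            col = dft([rows[i][j] for i in range(h)])
            for i in range(h):
                out[i][j] = col[i]
    return out

img = [[1.0, 2.0], [3.0, 4.0]]
F = dft2_partitioned(img, chunk=1)
print(round(F[0][0].real, 6))   # DC term equals the sum of all pixels
```

Because the row and column passes are independent 1-D transforms, each batch can be shipped to the device, transformed, and written back before the next batch loads, which is the essence of out-of-core FFT schemes.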

  13. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

A CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson, and blind deconvolution, this approach is brand new. In this method, for the first time, the coded-aperture processing is independent of the point spread function of the imaging diagnostic system. In this way, the technical obstacle in traditional coded-pinhole image processing caused by the uncertainty of that point spread function is overcome. Based on the theoretical study, simulations of penumbral imaging and image reconstruction were carried out and gave fairly good results. In the visible-light experiment, a point light source irradiated a 5 mm × 5 mm object after diffuse and volume scattering, and penumbral imaging was performed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good result.

  14. A design of real time image capturing and processing system using Texas Instrument's processor

    NASA Astrophysics Data System (ADS)

    Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng

    2007-09-01

In this work, we developed and implemented an image capturing and processing system equipped with the capability of capturing images from an input video in real time. The input video can come from a PC, a video camcorder, or a DVD player. The system has two modes of operation. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and displayed on the PC. In the second mode, the currently captured image from the video camcorder (or DVD player) is processed on the board but displayed on an LCD monitor. The major difference between our system and existing conventional systems is that the image-processing functions are performed on the board instead of the PC (so that the functions can be used for further development on the board). The user controls the operations of the board through a Graphical User Interface (GUI) provided on the PC. For smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX(TM)) technology to create a link between them. We developed three main groups of image processing functions: (1) Point Processing; (2) Filtering; and (3) 'Others'. Point Processing includes rotation, negation, and mirroring. The Filtering category provides median, adaptive, smoothing, and sharpening filters in the time domain. The 'Others' category provides auto-contrast adjustment, edge detection, segmentation, and sepia color; these functions either add an effect to the image or enhance it. We developed and implemented our system using the C/C# programming languages on the TMS320DM642 (DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. The results demonstrate that our system is adequate for real-time image capturing. Our system can be used or applied for applications such as medical imaging, video surveillance, etc.
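A few of the point-processing and filtering functions named above are simple enough to sketch. These are illustrative pure-Python re-implementations, not the system's C code for the DM642:

```python
def negate(img, levels=256):
    """Point processing: invert each pixel (the system's negation)."""
    return [[levels - 1 - p for p in row] for row in img]

def mirror(img):
    """Point processing: horizontal mirror of each row."""
    return [row[::-1] for row in img]

def median3x3(img):
    """3x3 median filter on interior pixels (edges left unchanged),
    one of the filters the Filtering category offers."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = sorted(window)[4]    # middle of 9 values
    return out

img = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]   # one hot pixel
print(negate([[0]])[0][0], median3x3(img)[1][1])
```

The median filter removes the isolated 255 outlier without blurring the flat region, which is why it is a standard choice for impulse noise.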

  15. Image processing and pattern recognition with CVIPtools MATLAB toolbox: automatic creation of masks for veterinary thermographic images

    NASA Astrophysics Data System (ADS)

    Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph

    2016-09-01

CVIPtools is a software package for the exploration of computer vision and image processing, developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - a) the CVIPtools Graphical User Interface, b) the CVIPtools C library and c) the CVIPtools MATLAB toolbox - which makes it accessible to a variety of different users. It offers students, faculty, researchers and any other user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis, and the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, the algorithm for the automatic creation of masks for veterinary thermographic images is presented.

  16. Curve and Polygon Evolution Techniques for Image Processing

    DTIC Science & Technology

    2002-01-01

…iterative image registration technique with an application to stereo vision. IJCAI, pages 674-679, 1981. [93] R. Malladi, J.A. Sethian, and B.C… Notation: A digital image to be processed is a 2-dimensional (2-D) function denoted by I, I : Ω → R, where Ω ⊂ R² is the domain of the function. Processing a… …function I₀(x, y), which depends on two spatial variables, x ∈ R and y ∈ R, via a partial differential equation (PDE) takes the form I_t = A(I, I_x, …
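The PDE evolution form quoted in this snippet, I_t = A(I, I_x, ...), covers for example the heat equation I_t = I_xx + I_yy, the simplest curve/image smoothing flow. A minimal explicit-Euler sketch (illustrative; the report's own flows are more elaborate):

```python
def heat_step(I, dt=0.1):
    """One explicit Euler step of I_t = I_xx + I_yy (the heat equation),
    using the 5-point discrete Laplacian.  Boundary pixels are held
    fixed for brevity."""
    h, w = len(I), len(I[0])
    out = [row[:] for row in I]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap = I[i-1][j] + I[i+1][j] + I[i][j-1] + I[i][j+1] - 4 * I[i][j]
            out[i][j] = I[i][j] + dt * lap
    return out

I = [[0.0] * 3, [0.0, 100.0, 0.0], [0.0] * 3]   # single bright pixel
for _ in range(5):
    I = heat_step(I)
print(round(I[1][1], 3))   # the spike diffuses away step by step
```

Each step shrinks the isolated spike by a factor (1 - 4*dt), illustrating how iterating the PDE progressively smooths the image.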

  17. Software architecture for intelligent image processing using Prolog

    NASA Astrophysics Data System (ADS)

    Jones, Andrew C.; Batchelor, Bruce G.

    1994-10-01

    We describe a prototype system for interactive image processing using Prolog, implemented by the first author on an Apple Macintosh computer. This system is inspired by Prolog+, but differs from it in two particularly important respects. The first is that whereas Prolog+ assumes the availability of dedicated image processing hardware, with which the Prolog system communicates, our present system implements image processing functions in software using the C programming language. The second difference is that although our present system supports Prolog+ commands, these are implemented in terms of lower-level Prolog predicates which provide a more flexible approach to image manipulation. We discuss the impact of the Apple Macintosh operating system upon the implementation of the image-processing functions, and the interface between these functions and the Prolog system. We also explain how the Prolog+ commands have been implemented. The system described in this paper is a fairly early prototype, and we outline how we intend to develop the system, a task which is expedited by the extensible architecture we have implemented.

  18. Visually enhanced CCTV digital surveillance utilizing Intranet and Internet.

    PubMed

    Ozaki, Nobuyuki

    2002-07-01

This paper describes a solution for integrated plant supervision utilizing closed circuit television (CCTV) digital surveillance. Three basic requirements are first addressed as the platform of the system, with discussion of the suitable video compression. The system configuration is described in blocks. The system provides surveillance functionality (real-time monitoring) and process-analysis functionality (a troubleshooting tool). This paper describes the formulation of a practical performance design for determining various encoder parameters. It also introduces image processing techniques for enhancing the original CCTV digital image to lessen the burden on operators. Some screenshots are shown for the surveillance functionality. For the process analysis, an image searching filter supported by image processing techniques is explained with screenshots. Multimedia surveillance, which is the merger with process data surveillance, or the SCADA system, is also explained.

  19. Image2000: A Free, Innovative, Java Based Imaging Package

    NASA Technical Reports Server (NTRS)

    Pell, Nicholas; Wheeler, Phil; Cornwell, Carl; Matusow, David; Obenschain, Arthur F. (Technical Monitor)

    2001-01-01

The National Aeronautics and Space Administration (NASA) Goddard Space Flight Center's (GSFC) Scientific and Educational Endeavors (SEE) and the Center for Image Processing in Education (CIPE) use satellite image processing as part of their science lessons developed for students and educators. The image processing products that they use as part of these lessons no longer fulfill the needs of SEE and CIPE, because these products are either dependent on a particular computing platform, hard to customize and extend, or lacking in functionality. SEE and CIPE began looking for what they considered the "perfect" image processing tool: one that was platform independent, rich in functionality, and easily extended and customized for their purposes. At the request of SEE, NASA GSFC's Code 588, the Advanced Architectures and Automation Branch, developed a powerful new Java-based image processing package.

  20. ISLE (Image and Signal Processing LISP Environment) reference manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sherwood, R.J.; Searfus, R.M.

    1990-01-01

ISLE is a rapid prototyping system for performing image and signal processing. It is designed to meet the needs of a person developing image and signal processing algorithms in a research environment. The image and signal processing modules in ISLE form a very capable package in themselves. They also provide a rich environment for quickly and easily integrating user-written software modules into the package. ISLE is well suited to applications in which there is a need to develop a processing algorithm in an interactive manner. It is straightforward to develop an algorithm, load it into ISLE, apply the algorithm to an image or signal, display the results, then modify the algorithm and repeat the develop-load-apply-display cycle. ISLE consists of a collection of image and signal processing modules integrated into a cohesive package through a standard command interpreter. The ISLE developers elected to concentrate their effort on developing image and signal processing software rather than developing a command interpreter. A COMMON LISP interpreter was selected for the command interpreter because it already has the features desired in a command interpreter, it supports dynamic loading of modules for customization purposes, it supports run-time parameter and argument type checking, it is very well documented, and it is a commercially supported product. This manual is intended to be a reference manual for the ISLE functions. The functions are grouped into a number of categories and briefly discussed in the Function Summary chapter. Full descriptions of the functions and all their arguments are given in the Function Descriptions chapter. 6 refs.

  1. Generalized Newton Method for Energy Formulation in Image Processing

    DTIC Science & Technology

    2008-04-01

…A. Brook, N. Sochen, and N. Kiryati. Deblurring of color images corrupted by impulsive noise. IEEE Transactions on Image Processing, 16(4):1101-1111… …tive functionals: variational image deblurring and geodesic active contours for image segmentation. We show that in addition to the fast convergence… …inner product, active contours, deblurring. AMS subject classifications: 35A15, 65K10, 90C53. 1. Introduction. Optimization of a cost functional is a…

  2. Spatial Standard Observer

    NASA Technical Reports Server (NTRS)

Watson, Andrew B. (Inventor)

    2010-01-01

The present invention relates to devices and methods for the measurement and/or for the specification of the perceptual intensity of a visual image, or the perceptual distance between a pair of images. Grayscale test and reference images are processed to produce test and reference luminance images. A luminance filter function is convolved with the reference luminance image to produce a local mean luminance reference image. Test and reference contrast images are produced from the local mean luminance reference image and the test and reference luminance images respectively, followed by application of a contrast sensitivity filter. The resulting images are combined according to mathematical prescriptions to produce a Just Noticeable Difference, JND value, indicative of a Spatial Standard Observer, SSO. Some embodiments include masking functions, window functions, special treatment for images lying on or near borders, and pre-processing of test images.

  3. Spatial Standard Observer

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    2012-01-01

    The present invention relates to devices and methods for the measurement and/or for the specification of the perceptual intensity of a visual image, or the perceptual distance between a pair of images. Grayscale test and reference images are processed to produce test and reference luminance images. A luminance filter function is convolved with the reference luminance image to produce a local mean luminance reference image. Test and reference contrast images are produced from the local mean luminance reference image and the test and reference luminance images respectively, followed by application of a contrast sensitivity filter. The resulting images are combined according to mathematical prescriptions to produce a Just Noticeable Difference, JND value, indicative of a Spatial Standard Observer, SSO. Some embodiments include masking functions, window functions, special treatment for images lying on or near borders and pre-processing of test images.
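The first stages of the pipeline described above (a luminance filter producing a local mean reference image, then contrast relative to that mean) can be sketched as follows. The box filter and the test values are illustrative assumptions; the contrast-sensitivity filtering and JND pooling stages are omitted:

```python
def box_mean(img, k=3):
    """Local mean luminance: convolve with a k-by-k box filter
    (a stand-in for the patent's luminance filter function),
    replicating edge pixels at the borders."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii = min(max(i + di, 0), h - 1)
                    jj = min(max(j + dj, 0), w - 1)
                    s += img[ii][jj]
            out[i][j] = s / (k * k)
    return out

def contrast_image(lum, ref_mean):
    """Contrast of a luminance image relative to the local mean of the
    *reference* image, as in the SSO pipeline."""
    return [[(lum[i][j] - ref_mean[i][j]) / ref_mean[i][j]
             for j in range(len(lum[0]))] for i in range(len(lum))]

ref = [[100.0] * 4 for _ in range(4)]
test = [[100.0] * 4 for _ in range(4)]
test[1][1] = 110.0                       # a 10% luminance increment
c = contrast_image(test, box_mean(ref))
print(round(c[1][1], 2))
```

Dividing by the local mean makes the measure insensitive to overall brightness, so the downstream JND value responds to the relative (perceptually relevant) difference between test and reference.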

  4. Structural and functional correlates for language efficiency in auditory word processing.

    PubMed

    Jung, JeYoung; Kim, Sunmi; Cho, Hyesuk; Nam, Kichun

    2017-01-01

This study aims to provide a convergent understanding of the neural basis of auditory word processing efficiency using multimodal imaging. We investigated the structural and functional correlates of word processing efficiency in healthy individuals. We acquired two structural imaging modalities (T1-weighted imaging and diffusion tensor imaging) and functional magnetic resonance imaging (fMRI) during auditory word processing (phonological and semantic tasks). Our results showed that better phonological performance was predicted by greater thalamus activity. In contrast, better semantic performance was associated with less activation in the left posterior middle temporal gyrus (pMTG), supporting the neural efficiency hypothesis that better task performance requires less brain activation. Furthermore, our network analysis revealed that the semantic network, including the left anterior temporal lobe (ATL), dorsolateral prefrontal cortex (DLPFC) and pMTG, was correlated with semantic efficiency. Notably, this network operated in a neurally efficient manner during auditory word processing. Structurally, the DLPFC and cingulum contributed to word processing efficiency, and the parietal cortex also showed a significant association with it. Our results demonstrated that the two features of word processing efficiency, phonology and semantics, are supported by different brain regions and, importantly, that the way each region supports them differs according to the feature of word processing. Our findings suggest that word processing efficiency is achieved through the collaboration of multiple brain regions involved in language and general cognitive function, both structurally and functionally.

  5. Structural and functional correlates for language efficiency in auditory word processing

    PubMed Central

    Kim, Sunmi; Cho, Hyesuk; Nam, Kichun

    2017-01-01

This study aims to provide a convergent understanding of the neural basis of auditory word processing efficiency using multimodal imaging. We investigated the structural and functional correlates of word processing efficiency in healthy individuals. We acquired two structural imaging modalities (T1-weighted imaging and diffusion tensor imaging) and functional magnetic resonance imaging (fMRI) during auditory word processing (phonological and semantic tasks). Our results showed that better phonological performance was predicted by greater thalamus activity. In contrast, better semantic performance was associated with less activation in the left posterior middle temporal gyrus (pMTG), supporting the neural efficiency hypothesis that better task performance requires less brain activation. Furthermore, our network analysis revealed that the semantic network, including the left anterior temporal lobe (ATL), dorsolateral prefrontal cortex (DLPFC) and pMTG, was correlated with semantic efficiency. Notably, this network operated in a neurally efficient manner during auditory word processing. Structurally, the DLPFC and cingulum contributed to word processing efficiency, and the parietal cortex also showed a significant association with it. Our results demonstrated that the two features of word processing efficiency, phonology and semantics, are supported by different brain regions and, importantly, that the way each region supports them differs according to the feature of word processing. Our findings suggest that word processing efficiency is achieved through the collaboration of multiple brain regions involved in language and general cognitive function, both structurally and functionally. PMID:28892503

  6. ISLE (Image and Signal LISP Environment): A functional language interface for signal and image processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azevedo, S.G.; Fitch, J.P.

    1987-10-21

Conventional software interfaces that use imperative computer commands or menu interactions are often restrictive environments when used for researching new algorithms or analyzing processed experimental data. We found this to be true with current signal-processing software (SIG). As an alternative, ''functional language'' interfaces provide features such as command nesting for a more natural interaction with the data. The Image and Signal LISP Environment (ISLE) is an example of an interpreted functional language interface based on common LISP. Advantages of ISLE include multidimensional and multiple data-type independence through dispatching functions, dynamic loading of new functions, and connections to artificial intelligence (AI) software. 10 refs.

  7. ISLE (Image and Signal Lisp Environment): A functional language interface for signal and image processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azevedo, S.G.; Fitch, J.P.

    1987-05-01

    Conventional software interfaces which utilize imperative computer commands or menu interactions are often restrictive environments when used for researching new algorithms or analyzing processed experimental data. We found this to be true with current signal processing software (SIG). Existing "functional language" interfaces provide features such as command nesting for a more natural interaction with the data. The Image and Signal Lisp Environment (ISLE) will be discussed as an example of an interpreted functional language interface based on Common LISP. Additional benefits include multidimensional and multiple data-type independence through dispatching functions, dynamic loading of new functions, and connections to artificial intelligence software.

  8. Design and implementation of non-linear image processing functions for CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel

    2012-11-01

    Today, solid state image sensors are used in many applications such as mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration in the focal plane (or near the focal plane) of complex image processing algorithms. Such devices must meet the constraints related to the quality of acquired images, the speed and performance of embedded processing, and low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly. For example, an edge detection step followed by a local maxima extraction will facilitate high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor built in a standard 0.35 μm CMOS technology, including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors and the pitch of one pixel is 40×40 μm. The total area of the 64×64 pixel array is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculation in less than one ms.
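
    The MMU's per-neighbourhood minima/maxima are easy to prototype in software before committing to silicon. Below is a minimal NumPy sketch (not the authors' circuit; the function name and the choice of non-overlapping 2×2 blocks are illustrative) of the non-linear min/max computation the abstract describes:

```python
import numpy as np

def minmax_2x2(img):
    """Per-neighbourhood minimum and maximum over non-overlapping 2x2
    blocks, mimicking in software what the analog MMU computes on-chip."""
    h, w = img.shape
    blocks = img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.min(axis=(1, 3)), blocks.max(axis=(1, 3))

img = np.array([[1, 2, 3, 4],
                [5, 6, 7, 8],
                [9, 10, 11, 12],
                [13, 14, 15, 16]])
mins, maxs = minmax_2x2(img)
# mins -> [[1, 3], [9, 11]], maxs -> [[6, 8], [14, 16]]
```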

  9. Beyond arousal and valence: the importance of the biological versus social relevance of emotional stimuli.

    PubMed

    Sakaki, Michiko; Niki, Kazuhisa; Mather, Mara

    2012-03-01

    The present study addressed the hypothesis that emotional stimuli relevant to survival or reproduction (biologically emotional stimuli) automatically affect cognitive processing (e.g., attention, memory), while those relevant to social life (socially emotional stimuli) require elaborative processing to modulate attention and memory. Results of our behavioral studies showed that (1) biologically emotional images hold attention more strongly than do socially emotional images, (2) memory for biologically emotional images was enhanced even with limited cognitive resources, but (3) memory for socially emotional images was enhanced only when people had sufficient cognitive resources at encoding. Neither images' subjective arousal nor their valence modulated these patterns. A subsequent functional magnetic resonance imaging study revealed that biologically emotional images induced stronger activity in the visual cortex and greater functional connectivity between the amygdala and visual cortex than did socially emotional images. These results suggest that the interconnection between the amygdala and visual cortex supports enhanced attention allocation to biological stimuli. In contrast, socially emotional images evoked greater activity in the medial prefrontal cortex (MPFC) and yielded stronger functional connectivity between the amygdala and MPFC than did biological images. Thus, it appears that emotional processing of social stimuli involves elaborative processing requiring frontal lobe activity.

  10. Image processing via VLSI: A concept paper

    NASA Technical Reports Server (NTRS)

    Nathan, R.

    1982-01-01

    Implementing specific image processing algorithms via very large scale integrated (VLSI) systems offers a potent solution to the problem of handling high data rates. Two algorithms stand out as being particularly critical -- geometric map transformation and filtering or correlation. These two functions form the basis for data calibration, registration and mosaicking. VLSI presents itself as an inexpensive ancillary function to be added to almost any general purpose computer, and if the geometry and filter algorithms are implemented in VLSI, the processing rate bottleneck would be significantly relieved. A development effort is described that identifies the image processing functions limiting present systems in dealing with future throughput needs, translates these functions into algorithms, implements them via VLSI technology, and interfaces the hardware to a general purpose digital computer.
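
    The first of the two critical functions, geometric map transformation, is conventionally done by inverse mapping: each output pixel is traced back through the transform to a source pixel. A minimal software sketch of that idea (nearest-neighbour resampling; the function name and the 2×3 affine-matrix convention are illustrative, not the paper's VLSI design) is:

```python
import numpy as np

def affine_warp(img, M, out_shape):
    """Geometric map transformation by inverse mapping: for each output
    pixel, apply the 2x3 inverse affine matrix M to locate its source
    pixel (nearest-neighbour resampling)."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = M @ coords                      # source coordinates (x, y)
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    inside = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    flat = np.zeros(h * w, dtype=img.dtype)
    flat[inside] = img[sy[inside], sx[inside]]
    return flat.reshape(out_shape)
```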

  11. Hardware accelerator of convolution with exponential function for image processing applications

    NASA Astrophysics Data System (ADS)

    Panchenko, Ivan; Bucha, Victor

    2015-12-01

    In this paper we describe a Hardware Accelerator (HWA) for fast recursive approximation of separable convolution with an exponential function. This filter can be used in many Image Processing (IP) applications, e.g. depth-dependent image blur, image enhancement and disparity estimation. We have adapted the RTL implementation of this filter to provide maximum throughput within the constraints of the required memory bandwidth and hardware resources, yielding a power-efficient VLSI implementation.
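
    The abstract does not reproduce the RTL, but the underlying idea, approximating convolution with a two-sided exponential kernel by cheap first-order recursive passes applied separably along rows and then columns, can be sketched as follows (a software sketch under that assumption; the pass combination and names are illustrative):

```python
import numpy as np

def exp_filter_1d(x, alpha):
    """First-order recursive approximation of convolution with a two-sided
    exponential kernel: one causal pass plus one anti-causal pass."""
    fwd = np.empty(len(x), dtype=float)
    acc = float(x[0])
    for i, v in enumerate(x):           # causal (left-to-right) recursion
        acc = alpha * v + (1 - alpha) * acc
        fwd[i] = acc
    bwd = np.empty(len(x), dtype=float)
    acc = float(x[-1])
    for i in range(len(x) - 1, -1, -1):  # anti-causal (right-to-left)
        acc = alpha * x[i] + (1 - alpha) * acc
        bwd[i] = acc
    return 0.5 * (fwd + bwd)

def exp_filter_2d(img, alpha):
    # Separable filter: process rows first, then columns.
    rows = np.apply_along_axis(exp_filter_1d, 1, img, alpha)
    return np.apply_along_axis(exp_filter_1d, 0, rows, alpha)
```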

  12. Transplant Image Processing Technology under Windows into the Platform Based on MiniGUI

    NASA Astrophysics Data System (ADS)

    Gan, Lan; Zhang, Xu; Lv, Wenya; Yu, Jia

    MFC provides a large number of API functions related to digital image processing, along with object-oriented class mechanisms, which gives strong support to image processing technology under Windows. In embedded systems, however, hardware and software restrictions mean that the MFC environment of Windows is not available. This paper therefore draws on the image processing technology of Windows and transplants it into MiniGUI-based embedded systems. The results show that MiniGUI/Embedded graphical user interface applications for image processing achieve good results when used in embedded image processing systems.

  13. Beyond arousal and valence: The importance of the biological versus social relevance of emotional stimuli

    PubMed Central

    Sakaki, Michiko; Niki, Kazuhisa; Mather, Mara

    2012-01-01

    The present study addressed the hypothesis that emotional stimuli relevant to survival or reproduction (biologically emotional stimuli) automatically affect cognitive processing (e.g., attention; memory), while those relevant to social life (socially emotional stimuli) require elaborative processing to modulate attention and memory. Results of our behavioral studies showed that: a) biologically emotional images hold attention more strongly than socially emotional images, b) memory for biologically emotional images was enhanced even with limited cognitive resources, but c) memory for socially emotional images was enhanced only when people had sufficient cognitive resources at encoding. Neither images’ subjective arousal nor their valence modulated these patterns. A subsequent functional magnetic resonance imaging study revealed that biologically emotional images induced stronger activity in visual cortex and greater functional connectivity between amygdala and visual cortex than did socially emotional images. These results suggest that the interconnection between the amygdala and visual cortex supports enhanced attention allocation to biological stimuli. In contrast, socially emotional images evoked greater activity in medial prefrontal cortex (MPFC) and yielded stronger functional connectivity between amygdala and MPFC than biological images. Thus, it appears that emotional processing of social stimuli involves elaborative processing requiring frontal lobe activity. PMID:21964552

  14. Functional Magnetic Resonance Imaging

    ERIC Educational Resources Information Center

    Voos, Avery; Pelphrey, Kevin

    2013-01-01

    Functional magnetic resonance imaging (fMRI), with its excellent spatial resolution and ability to visualize networks of neuroanatomical structures involved in complex information processing, has become the dominant technique for the study of brain function and its development. The accessibility of in-vivo pediatric brain-imaging techniques…

  15. The Pan-STARRS PS1 Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Magnier, E.

    The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It is also responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky, defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and occasionally uses the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey (REF), the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system for operational usage. This talk will present an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel hardware configuration necessary to support PS1 operational requirements. Finally, results are presented of the processing of images collected during PS1 early commissioning tasks utilizing the Pan-STARRS Test Camera #3.

  16. Developing Matlab scripts for image analysis and quality assessment

    NASA Astrophysics Data System (ADS)

    Vaiopoulos, A. D.

    2011-11-01

    Image processing is a very helpful tool in many fields of modern science that involve digital imaging examination and interpretation. Processed images, however, often need to be correlated with the original image, in order to ensure that the resulting image fulfills its purpose. Aside from visual examination, which is mandatory, image quality indices (such as the correlation coefficient, entropy and others) are very useful when deciding which processed image is the most satisfactory. For this reason, a single program (script) was written in the Matlab language, which automatically calculates eight indices by utilizing eight respective functions (independent function scripts). The program was tested on both fused hyperspectral (Hyperion-ALI) and multispectral (ALI, Landsat) imagery and proved to be efficient. The indices were found to be in agreement with visual examination and statistical observations.
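
    Two of the indices named here, the correlation coefficient and entropy, are easy to replicate outside Matlab. Below is a minimal NumPy equivalent (illustrative, not the author's actual scripts; assumes 8-bit grey levels for the histogram):

```python
import numpy as np

def correlation_coefficient(a, b):
    """Pearson correlation between an original and a processed image."""
    a, b = np.ravel(a).astype(float), np.ravel(b).astype(float)
    return np.corrcoef(a, b)[0, 1]

def entropy(img, bins=256):
    """Shannon entropy (in bits) of the image grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty histogram bins
    return -np.sum(p * np.log2(p))
```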

  17. Architecture for a PACS primary diagnosis workstation

    NASA Astrophysics Data System (ADS)

    Shastri, Kaushal; Moran, Byron

    1990-08-01

    A major factor in determining the overall utility of a medical Picture Archiving and Communication System (PACS) is the functionality of the diagnostic workstation. Meyer-Ebrecht and Wendler [1] have proposed a modular picture computer architecture with high throughput, and Perry et al. [2] have defined performance requirements for radiology workstations. In order to be clinically useful, a primary diagnosis workstation must not only provide the functions of current viewing systems (e.g. mechanical alternators [3,4]), such as acceptable image quality, simultaneous viewing of multiple images, and rapid switching of image banks, but must also provide a diagnostic advantage over the current systems. This includes window-level functions on any image, simultaneous display of multi-modality images, rapid image manipulation, image processing, dynamic image display (cine), electronic image archival, hardcopy generation, image acquisition, network support, and an easy user interface. Implementation of such a workstation requires an underlying hardware architecture which provides high-speed image transfer channels, local storage facilities, and image processing functions. This paper describes the hardware architecture of the Siemens Diagnostic Reporting Console (DRC) which meets these requirements.

  18. A noncoherent optical analog image processor.

    PubMed

    Swindell, W

    1970-11-01

    The description of a machine that performs a variety of image processing operations is given, together with a theoretical discussion of its operation. Spatial processing is performed by corrective convolution techniques. Density processing is achieved by means of an electrical transfer function generator included in the video circuit. Examples of images processed for removal of image motion blur, defocus, and atmospheric seeing blur are shown.

  19. Brain Activation during Semantic Processing in Autism Spectrum Disorders via Functional Magnetic Resonance Imaging

    ERIC Educational Resources Information Center

    Harris, Gordon J.; Chabris, Christopher F.; Clark, Jill; Urban, Trinity; Aharon, Itzhak; Steele, Shelley; McGrath, Lauren; Condouris, Karen; Tager-Flusberg, Helen

    2006-01-01

    Language and communication deficits are core features of autism spectrum disorders (ASD), even in high-functioning adults with ASD. This study investigated brain activation patterns using functional magnetic resonance imaging in right-handed adult males with ASD and a control group, matched on age, handedness, and verbal IQ. Semantic processing in…

  20. Research on image retrieval using deep convolutional neural network combining L1 regularization and PRelu activation function

    NASA Astrophysics Data System (ADS)

    QingJie, Wei; WenBin, Wang

    2017-06-01

    In this paper, image retrieval using a deep convolutional neural network combined with L1 regularization and the PRelu activation function is studied in order to improve image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which are very suitable for processing images. Using a deep convolutional neural network is better for image retrieval than direct extraction of image visual features. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PRelu activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
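
    The two ingredients the paper combines are standard and easy to state in isolation. A minimal NumPy sketch (not the paper's network; the default slope and penalty weight are illustrative) of the PRelu activation and of adding an L1 weight penalty to a task loss:

```python
import numpy as np

def prelu(x, a=0.25):
    """PRelu: identity for positive inputs, slope a (learnable in the
    full network) for negative inputs."""
    return np.where(x > 0, x, a * x)

def l1_regularized_loss(data_loss, weights, lam=1e-4):
    """Add an L1 penalty on the weights to the task loss; the penalty
    discourages over-fitting by driving small weights toward zero."""
    return data_loss + lam * sum(np.abs(w).sum() for w in weights)
```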

  1. A Versatile Image Processor For Digital Diagnostic Imaging And Its Application In Computed Radiography

    NASA Astrophysics Data System (ADS)

    Blume, H.; Alexandru, R.; Applegate, R.; Giordano, T.; Kamiya, K.; Kresina, R.

    1986-06-01

    In a digital diagnostic imaging department, the majority of operations for handling and processing of images can be grouped into a small set of basic operations, such as image data buffering and storage, image processing and analysis, image display, image data transmission and image data compression. These operations occur in almost all nodes of the diagnostic imaging communications network of the department. An image processor architecture was developed in which each of these functions has been mapped into hardware and software modules. The modular approach has advantages in terms of economics, service, expandability and upgradeability. The architectural design is based on the principles of hierarchical functionality, distributed and parallel processing, and aims at real time response. Parallel processing and real time response are facilitated in part by a dual bus system: a VME control bus and a high speed image data bus, consisting of 8 independent parallel 16-bit busses, capable of a combined throughput of up to 144 MBytes/sec. The presented image processor is versatile enough to meet the video rate processing needs of digital subtraction angiography, the large pixel matrix processing requirements of static projection radiography, or the broad range of manipulation and display needs of a multi-modality diagnostic work station. Several hardware modules are described in detail. For illustrating the capabilities of the image processor, processed 2000 x 2000 pixel computed radiographs are shown and estimated computation times for executing the processing operations are presented.

  2. Voyager image processing at the Image Processing Laboratory

    NASA Astrophysics Data System (ADS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-09-01

    This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is exhibited that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

  3. Voyager image processing at the Image Processing Laboratory

    NASA Technical Reports Server (NTRS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-01-01

    This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is exhibited that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

  4. Position Estimation Using Image Derivative

    NASA Technical Reports Server (NTRS)

    Mortari, Daniele; deDilectis, Francesco; Zanetti, Renato

    2015-01-01

    This paper describes an image processing algorithm to process Moon and/or Earth images. The theory presented is based on the fact that Moon hard-edge points are characterized by the highest values of the image derivative. Outliers are eliminated by two sequential filters. The Moon center and radius are then estimated by nonlinear least-squares using circular sigmoid functions. The proposed image processing algorithm has been applied and validated using real and synthetic Moon images.
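
    Once high-derivative limb points have been selected and filtered, a circle must be fitted to them. The paper uses nonlinear least-squares with circular sigmoid functions; as a simpler stand-in for that final step, the classic algebraic (linear) least-squares circle fit can be sketched as follows (a hedged illustration, not the authors' estimator):

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic least-squares circle fit: solve the linear system
    x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) for the
    center (cx, cy) and radius r."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = xs**2 + ys**2
    c, d, e = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = c / 2, d / 2
    r = np.sqrt(e + cx**2 + cy**2)
    return cx, cy, r
```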

  5. An integral design strategy combining optical system and image processing to obtain high resolution images

    NASA Astrophysics Data System (ADS)

    Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun

    2016-05-01

    In this paper, an integral design that combines the optical system with image processing is introduced to obtain high resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in a failure of efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function during optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals which are indispensable for image processing, while the ultimate goal is to obtain high resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. Then the Wiener filter algorithm is adopted to process the simulated images, and the mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit for the optical imaging systems are not the best, the system can provide image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution which is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages in simplifying structure and reducing cost, as well as in obtaining high resolution images, which gives it a promising perspective for industrial application.
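
    The digital half of this pipeline, Wiener filtering evaluated by MSE, is standard and can be sketched in a few lines of NumPy (an illustrative frequency-domain form with a scalar constant k standing in for the noise-to-signal power ratio; not the paper's exact implementation):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener filter: F = conj(H) / (|H|^2 + k) * G,
    where H is the optical transfer function of the PSF and G the
    spectrum of the blurred image."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))

def mse(a, b):
    """Mean squared error, the evaluation criterion used in the paper."""
    return np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
```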

  6. VIP: Vortex Image Processing Package for High-contrast Direct Imaging

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean

    2017-07-01

    We present the Vortex Image Processing (VIP) library, a python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. Also, we present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied by Jupyter notebook tutorials illustrating the main functionalities of the library.
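
    The core PCA-based PSF subtraction step that VIP builds its ADI flavors around can be sketched compactly with a plain SVD (an illustrative simplification of the technique, not VIP's API; the function name is hypothetical):

```python
import numpy as np

def pca_psf_subtract(cube, ncomp):
    """Project each frame of an (nframes, h, w) cube onto the first
    ncomp principal components of the cube and subtract that low-rank
    PSF model (the classic ADI-PCA step)."""
    n, h, w = cube.shape
    X = cube.reshape(n, h * w).astype(float)
    Xc = X - X.mean(axis=0)            # center the frames
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:ncomp]                     # leading principal components
    model = Xc @ V.T @ V               # low-rank PSF model
    return (Xc - model).reshape(n, h, w)
```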

  7. Attentional and physiological processing of food images in functional dyspepsia patients: A pilot study.

    PubMed

    Lee, In-Seon; Preissl, Hubert; Giel, Katrin; Schag, Kathrin; Enck, Paul

    2018-01-23

    The food-related behavior of functional dyspepsia has been attracting more interest of late. This pilot study aims to provide evidence of the physiological, emotional, and attentional aspects of food processing in functional dyspepsia patients. The study was performed in 15 functional dyspepsia patients and 17 healthy controls after a standard breakfast. We measured autonomic nervous system activity using skin conductance response and heart rate variability, emotional response using facial electromyography, and visual attention using eye-tracking during visual stimuli of food/non-food images. In comparison to healthy controls, functional dyspepsia patients showed a greater craving for food, a decreased intake of food, more dyspeptic symptoms, lower pleasantness ratings of food images (particularly of high-fat foods), a decreased low frequency/high frequency ratio of heart rate variability, and a shorter total processing time of food images. There were no significant differences in skin conductance response and facial electromyography data between groups. The results suggest that high-level cognitive functions, rather than autonomic and emotional mechanisms, are more likely to function differently in functional dyspepsia patients. Abnormal dietary behavior, reduced subjective rating of pleasantness, and reduced visual attention to food should be considered important pathophysiological characteristics of functional dyspepsia.

  8. MOPEX: a software package for astronomical image processing and visualization

    NASA Astrophysics Data System (ADS)

    Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley

    2006-06-01

    We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, point source extraction, as well as a number of minor image processing tasks. The combination of the image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows the control over image processing and display to be closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback on the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though it was originally designed for the Spitzer Space Telescope mission, many of its functionalities are generally useful and can be used for working with existing astronomical data and for new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities.
The software package has been developed by a small group of software developers and scientists at the Spitzer Science Center. It is available for distribution at the Spitzer Science Center web page.

  9. Neural Correlates of Symptom Dimensions in Pediatric Obsessive-Compulsive Disorder: A Functional Magnetic Resonance Imaging Study

    ERIC Educational Resources Information Center

    Gilbert, Andrew R.; Akkal, Dalila; Almeida, Jorge R. C.; Mataix-Cols, David; Kalas, Catherine; Devlin, Bernie; Birmaher, Boris; Phillips, Mary L.

    2009-01-01

    The use of functional magnetic resonance imaging on a group of pediatric subjects with obsessive compulsive disorder reveals that this group has reduced activity in neural regions underlying emotional processing, cognitive processing, and motor performance as compared to control subjects.

  10. Surface regions of illusory images are detected with a slower processing speed than those of luminance-defined images.

    PubMed

    Mihaylova, Milena; Manahilov, Velitchko

    2010-11-24

    Research has shown that the processing time for discriminating illusory contours is longer than for real contours. Little is known, however, about whether the visual processes associated with detecting regions of illusory surfaces are also slower than those responsible for detecting luminance-defined images. Using a speed-accuracy trade-off (SAT) procedure, we measured accuracy as a function of processing time for detecting illusory Kanizsa-type and luminance-defined squares embedded in 2D static luminance noise. The data revealed that the illusory images were detected at a slower processing speed than the real images, while the points in time when accuracy departed from chance were not significantly different for the two stimuli. The classification images for detecting illusory and real squares showed that observers employed similar detection strategies, using surface regions of the real and illusory squares. The lack of significant differences between the x-intercepts of the SAT functions for illusory and luminance-modulated stimuli suggests that the detection of surface regions of both images could be based on activation of a single mechanism (the dorsal magnocellular visual pathway). The slower speed for detecting illusory images as compared to luminance-defined images could be attributed to slower processes of filling-in of regions of illusory images within the dorsal pathway.
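
    SAT data of this kind are conventionally summarized by a shifted-exponential model in which accuracy (d') rises from chance at an intercept delta toward an asymptote lam at rate beta; the x-intercept and rate parameters are what the comparison above rests on. A minimal sketch of that standard model (parameter names illustrative):

```python
import numpy as np

def sat_curve(t, lam, beta, delta):
    """Shifted-exponential SAT model: d'(t) = lam * (1 - exp(-beta*(t - delta)))
    for t > delta, and chance (0) before the intercept delta."""
    t = np.asarray(t, dtype=float)
    return np.where(t > delta, lam * (1 - np.exp(-beta * (t - delta))), 0.0)
```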

  11. iMAGE cloud: medical image processing as a service for regional healthcare in a hybrid cloud environment.

    PubMed

    Liu, Li; Chen, Weiping; Nie, Min; Zhang, Fengjuan; Wang, Yu; He, Ailing; Wang, Xiaonan; Yan, Gen

    2016-11-01

    To handle the emergence of the regional healthcare ecosystem, physicians and surgeons in various departments and healthcare institutions must process medical images securely, conveniently, and efficiently, and must integrate them with electronic medical records (EMRs). In this manuscript, we propose a software as a service (SaaS) cloud called the iMAGE cloud. A three-layer hybrid cloud was created to provide medical image processing services in the smart city of Wuxi, China, in April 2015. In the first step, medical images and EMR data were received and integrated via the hybrid regional healthcare network. Then, traditional and advanced image processing functions were proposed and computed in a unified manner in the high-performance cloud units. Finally, the image processing results were delivered to regional users using the virtual desktop infrastructure (VDI) technology. Security infrastructure was also taken into consideration. Integrated information query and many advanced medical image processing functions, such as coronary extraction, pulmonary reconstruction, vascular extraction, intelligent detection of pulmonary nodules, image fusion, and 3D printing, were available to local physicians and surgeons in various departments and healthcare institutions. Implementation results indicate that the iMAGE cloud can provide convenient, efficient, compatible, and secure medical image processing services in regional healthcare networks. The iMAGE cloud has been proven to be valuable in applications in the regional healthcare system, and it could have a promising future in the healthcare system worldwide.

  12. Automatic Feature Extraction System.

    DTIC Science & Technology

    1982-12-01

    exploitation. It was used for processing of black and white and multispectral reconnaissance photography, side-looking synthetic aperture radar imagery...the image data and different software modules for image queuing and formatting, the result of the input process will be images in standard AFES file...timely manner. The FFS configuration provides the environment necessary for integrated testing of image processing functions and design and

  13. Anatomic and functional imaging of tagged molecules in animals

    DOEpatents

    Weisenberger, Andrew G [Yorktown, VA]; Majewski, Stanislaw [Grafton, VA]; Paulus, Michael J [Knoxville, TN]; Gleason, Shaun S [Knoxville, TN]

    2007-04-24

    A novel functional imaging system for use in the imaging of unrestrained and non-anesthetized small animals or other subjects, and a method for acquiring such images and registering them with anatomical X-ray images previously or subsequently acquired. The apparatus comprises a combination of an IR laser profilometry system and a gamma (PET and/or SPECT) imaging system, all mounted on a rotating gantry, that permits simultaneous acquisition of positional and orientational information and functional images of an unrestrained subject, which are registered, i.e., integrated, using image processing software to produce a functional image of the subject without the use of restraints or anesthesia. The functional image thus obtained can be registered with a previously or subsequently obtained X-ray CT image of the subject. The system permits functional imaging of a subject in an unrestrained, non-anesthetized condition, thereby reducing stress on the subject and eliminating any potential interference with the functional testing that such stress might induce.

  14. Resting-state blood oxygen level-dependent functional magnetic resonance imaging for presurgical planning.

    PubMed

    Kamran, Mudassar; Hacker, Carl D; Allen, Monica G; Mitchell, Timothy J; Leuthardt, Eric C; Snyder, Abraham Z; Shimony, Joshua S

    2014-11-01

    Resting-state functional MR imaging (rsfMR imaging) measures spontaneous fluctuations in the blood oxygen level-dependent (BOLD) signal and can be used to elucidate the brain's functional organization. It is used to simultaneously assess multiple distributed resting-state networks. Unlike task-based functional MR imaging, rsfMR imaging does not require task performance. This article presents a brief introduction of rsfMR imaging processing methods followed by a detailed discussion on the use of rsfMR imaging in presurgical planning. Example cases are provided to highlight the strengths and limitations of the technique. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Retooling Laser Speckle Contrast Analysis Algorithm to Enhance Non-Invasive High Resolution Laser Speckle Functional Imaging of Cutaneous Microcirculation

    NASA Astrophysics Data System (ADS)

    Gnyawali, Surya C.; Blum, Kevin; Pal, Durba; Ghatak, Subhadip; Khanna, Savita; Roy, Sashwati; Sen, Chandan K.

    2017-01-01

    Cutaneous microvasculopathy complicates wound healing. Functional assessment of gated individual dermal microvessels is therefore of outstanding interest. The functional performance of laser speckle contrast imaging (LSCI) systems is compromised by motion artefacts. To address this weakness, post-processing of stacked images is reported. We report the first post-processing of binary raw data from a high-resolution LSCI camera. Sharp images of low-flowing microvessels were enabled by introducing inverse variance in conjunction with speckle contrast in MATLAB-based program code. Extended moving-window averaging enhanced the signal-to-noise ratio. A functional quantitative study of blood flow kinetics was performed on single gated microvessels using a freehand tool. Based on detection of flow in low-flow microvessels, a new sharp contrast image was derived. Thus, this work presents the first distinct image with quantitative microperfusion data from gated human foot microvasculature. This versatile platform is applicable to a wide range of tissue systems, including the fine vascular network of the murine brain without craniotomy as well as that of the murine dorsal skin. Importantly, the algorithm reported herein is hardware agnostic and is capable of post-processing binary raw data from any camera source to improve the sensitivity of functional flow data above and beyond the standard limits of the optical system.
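
The quantity underlying LSCI is the spatial speckle contrast K = sigma/mu computed over a small sliding window; low K indicates high flow (blurred speckle). The pure-Python sketch below shows only this core step, not the paper's MATLAB pipeline with inverse-variance weighting and extended moving-window averaging.

```python
import statistics

def speckle_contrast(img, win=3):
    """Spatial speckle contrast K = sigma/mu over a sliding win x win window.
    img is a 2D list of intensities; border pixels are left at 0."""
    h, w = len(img), len(img[0])
    r = win // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = [img[j][i] for j in range(y - r, y + r + 1)
                               for i in range(x - r, x + r + 1)]
            mu = statistics.mean(patch)
            sigma = statistics.pstdev(patch)
            out[y][x] = sigma / mu if mu else 0.0
    return out
```

A uniform (fully blurred) region yields K = 0, while a fully developed static speckle pattern approaches K = 1; 1/K^2 is often taken as a relative flow index.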

  16. Retooling Laser Speckle Contrast Analysis Algorithm to Enhance Non-Invasive High Resolution Laser Speckle Functional Imaging of Cutaneous Microcirculation

    PubMed Central

    Gnyawali, Surya C.; Blum, Kevin; Pal, Durba; Ghatak, Subhadip; Khanna, Savita; Roy, Sashwati; Sen, Chandan K.

    2017-01-01

    Cutaneous microvasculopathy complicates wound healing. Functional assessment of gated individual dermal microvessels is therefore of outstanding interest. The functional performance of laser speckle contrast imaging (LSCI) systems is compromised by motion artefacts. To address this weakness, post-processing of stacked images is reported. We report the first post-processing of binary raw data from a high-resolution LSCI camera. Sharp images of low-flowing microvessels were enabled by introducing inverse variance in conjunction with speckle contrast in MATLAB-based program code. Extended moving-window averaging enhanced the signal-to-noise ratio. A functional quantitative study of blood flow kinetics was performed on single gated microvessels using a freehand tool. Based on detection of flow in low-flow microvessels, a new sharp contrast image was derived. Thus, this work presents the first distinct image with quantitative microperfusion data from gated human foot microvasculature. This versatile platform is applicable to a wide range of tissue systems, including the fine vascular network of the murine brain without craniotomy as well as that of the murine dorsal skin. Importantly, the algorithm reported herein is hardware agnostic and is capable of post-processing binary raw data from any camera source to improve the sensitivity of functional flow data above and beyond the standard limits of the optical system. PMID:28106129

  17. Subband/transform functions for image processing

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
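
The first-stage decomposition described above can be sketched with a 2x2 Walsh-Hadamard block transform whose coefficients are reordered into four half-size subbands. This Python sketch mirrors only that single stage (the Glover functions themselves are MATLAB and include cascading and quantization):

```python
def wh2x2_subbands(img):
    """Apply a 2x2 Walsh-Hadamard transform to each 2x2 block and reorder
    the coefficients into four half-size subbands. LL is a low-resolution
    version of the image; the other bands carry edge (detail) information."""
    h2, w2 = len(img) // 2, len(img[0]) // 2
    bands = {k: [[0.0] * w2 for _ in range(h2)] for k in ("LL", "LH", "HL", "HH")}
    for by in range(h2):
        for bx in range(w2):
            a, b = img[2 * by][2 * bx], img[2 * by][2 * bx + 1]
            c, d = img[2 * by + 1][2 * bx], img[2 * by + 1][2 * bx + 1]
            bands["LL"][by][bx] = (a + b + c + d) / 2.0   # low-pass both axes
            bands["LH"][by][bx] = (a - b + c - d) / 2.0   # horizontal detail
            bands["HL"][by][bx] = (a + b - c - d) / 2.0   # vertical detail
            bands["HH"][by][bx] = (a - b - c + d) / 2.0   # diagonal detail
    return bands
```

Cascading this function on the LL band gives the octave structure described in the abstract; cascading it on all four bands gives the uniform sixteen-band structure.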

  18. Counting Craters on MOC Images: Production Functions and Other Complications

    NASA Technical Reports Server (NTRS)

    Plaut, J. J.

    2001-01-01

    New crater counts on MOC images and associated Viking Orbiter images are used to address the issue of the crater production function at Mars, and to infer aspects of resurfacing processes. Additional information is contained in the original extended abstract.

  19. The Generation and Maintenance of Visual Mental Images: Evidence from Image Type and Aging

    ERIC Educational Resources Information Center

    De Beni, Rossana; Pazzaglia, Francesca; Gardini, Simona

    2007-01-01

    Imagery is a multi-componential process involving different mental operations. This paper addresses whether separate processes underlie the generation, maintenance and transformation of mental images or whether these cognitive processes rely on the same mental functions. We also examine the influence of age on these mental operations for…

  20. [A solution for display and processing of DICOM images in web PACS].

    PubMed

    Xue, Wei-jing; Lu, Wen; Wang, Hai-yang; Meng, Jian

    2009-03-01

    Java Applet technology is used to support DICOM images in an ordinary Web browser and thereby extend the medical image processing functions available there. The DICOM file format is first analyzed and a class that acquires the pixel data is designed; two Applet classes are then designed, one to process the DICOM image and the other to display the processed result. Both are embedded in the view page and communicate through the AppletContext object. The method designed in this paper lets users display and process DICOM images directly in an ordinary Web browser, giving the Web PACS the advantages of both the B/S and the C/S models. Java Applet is the key to extending the Web browser's functionality in a Web PACS, and this work provides a guideline for the sharing of medical images.

  1. Associative architecture for image processing

    NASA Astrophysics Data System (ADS)

    Adar, Rutie; Akerib, Avidan

    1997-09-01

    This article presents a new generation of parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the Xium™-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can implement a wide range of color image processing, computer vision, and media processing functions in real time. The associative part of the chip is based on the patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. At only 33 MHz and a 0.6 micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string-search operations per second. The fully programmable nature of the Xium™-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.

  2. SPARX, a new environment for Cryo-EM image processing.

    PubMed

    Hohn, Michael; Tang, Grant; Goodyear, Grant; Baldwin, P R; Huang, Zhong; Penczek, Pawel A; Yang, Chao; Glaeser, Robert M; Adams, Paul D; Ludtke, Steven J

    2007-01-01

    SPARX (single particle analysis for resolution extension) is a new image processing environment with a particular emphasis on transmission electron microscopy (TEM) structure determination. It includes a graphical user interface that provides a complete graphical programming environment with a novel data/process-flow infrastructure, an extensive library of Python scripts that perform specific TEM-related computational tasks, and a core library of fundamental C++ image processing functions. In addition, SPARX relies on the EMAN2 library and cctbx, the open-source computational crystallography library from PHENIX. The design of the system is such that future inclusion of other image processing libraries is a straightforward task. The SPARX infrastructure intelligently handles retention of intermediate values, even those inside programming structures such as loops and function calls. SPARX and all dependencies are free for academic use and available with complete source.

  3. Open framework for management and processing of multi-modality and multidimensional imaging data for analysis and modelling muscular function

    NASA Astrophysics Data System (ADS)

    García Juan, David; Delattre, Bénédicte M. A.; Trombella, Sara; Lynch, Sean; Becker, Matthias; Choi, Hon Fai; Ratib, Osman

    2014-03-01

    Musculoskeletal disorders (MSD) are becoming a major economic burden on healthcare in developed countries with aging populations. Classical methods used in clinical practice for muscle assessment, such as biopsy or EMG, are invasive and not sufficiently accurate for measuring impairments of muscular performance. Non-invasive imaging techniques can nowadays provide effective alternatives for static and dynamic assessment of muscle function. In this paper we present work aimed toward the development of a generic data structure for handling n-dimensional metabolic and anatomical data acquired from hybrid PET/MR scanners. Special static and dynamic protocols were developed for the assessment of physical and functional images of individual muscles of the lower limb. In an initial stage of the project, a manual segmentation of selected muscles was performed on high-resolution 3D static images and subsequently interpolated to a full dynamic set of contours from selected 2D dynamic images across different levels of the leg. This results in a full set of 4D data of the lower-limb muscles at rest and during exercise. These data can further be extended to 5D by adding metabolic data obtained from PET images. Our data structure and the corresponding image processing extension allow for better evaluation of the large volumes of multidimensional imaging data that are acquired and processed to generate dynamic models of the moving lower limb and its muscular function.

  4. Requirements for a documentation of the image manipulation processes within PACS

    NASA Astrophysics Data System (ADS)

    Retter, Klaus; Rienhoff, Otto; Karsten, Ch.; Prince, Hazel E.

    1990-08-01

    This paper discusses the extent to which manipulation functions applied to images handled in a PACS should be documented. After postulating an increasing number of postprocessing features on PACS consoles, legal, educational, and medical reasons for documenting image manipulation processes are presented. Besides legal necessities, aspects of storage capacity, response time, and potential uses determine the extent of this documentation. Is there a specific kind of manipulation function that must always be documented? Should the physician decide which parts of the various pathways he tries are recorded by the system? Distinguishing, for example, between reversible and irreversible functions, or between interactive and non-interactive functions, is one step toward a solution. Another step is to establish definitions for terms like "raw" and "final" image. The paper systematizes these questions and offers strategic help. The answers will have an important impact on PACS design and functionality.

  5. MEMS-based system and image processing strategy for epiretinal prosthesis.

    PubMed

    Xia, Peng; Hu, Jie; Qi, Jin; Gu, Chaochen; Peng, Yinghong

    2015-01-01

    Retinal prostheses have the potential to restore some level of visual function to patients suffering from retinal degeneration. In this paper, an epiretinal approach with active stimulation devices is presented. The MEMS-based processing system consists of an external micro-camera, an information processor, an implanted electrical stimulator, and a microelectrode array. An image processing strategy combining image clustering and enhancement techniques was proposed and evaluated in psychophysical experiments. The results indicated that the image processing strategy improved visual performance compared with directly merging pixels to a low resolution. These image processing methods can assist an epiretinal prosthesis in vision restoration.

  6. Laterality in Metaphor Processing: Lack of Evidence from Functional Magnetic Resonance Imaging for the Right Hemisphere Theory

    ERIC Educational Resources Information Center

    Rapp, Alexander M.; Leube, Dirk T.; Erb, Michael; Grodd, Wolfgang; Kircher, Tilo T. J.

    2007-01-01

    We investigated processing of metaphoric sentences using event-related functional magnetic resonance imaging (fMRI). Seventeen healthy subjects (6 female, 11 male) read 60 novel short German sentence pairs with either metaphoric or literal meaning and performed two different tasks: judging the metaphoric content and judging whether the sentence…

  7. Functional Magnetic Resonance Imaging of Story Listening in Adolescents and Young Adults with Down Syndrome: Evidence for Atypical Neurodevelopment

    ERIC Educational Resources Information Center

    Jacola, L. M.; Byars, A. W.; Hickey, F.; Vannest, J.; Holland, S. K.; Schapiro, M. B.

    2014-01-01

    Background: Previous studies have documented differences in neural activation during language processing in individuals with Down syndrome (DS) in comparison with typically developing individuals matched for chronological age. This study used functional magnetic resonance imaging (fMRI) to compare activation during language processing in young…

  8. Optical coherence tomography imaging based on non-harmonic analysis

    NASA Astrophysics Data System (ADS)

    Cao, Xu; Hirobayashi, Shigeki; Chong, Changho; Morosawa, Atsushi; Totsuka, Koki; Suzuki, Takuya

    2009-11-01

    A new processing technique called non-harmonic analysis (NHA) is proposed for OCT imaging. Conventional Fourier-domain OCT relies on the FFT, whose result depends on the window function and frame length. Axial resolution is inversely proportional to the frame length of the FFT, which is limited by the swept range of the source in SS-OCT or by the pixel count of the CCD in SD-OCT. The NHA process is intrinsically free from this trade-off: NHA can resolve high frequencies without being influenced by the window function or the frame length of the sampled data. In this study, the NHA process is explained, applied to OCT imaging, and compared with FFT-based OCT images. To validate the benefit of NHA for OCT, we carried out NHA-based OCT imaging on three different samples: onion skin, human skin, and pig eye. The results show that the NHA process can achieve a practical image resolution equivalent to that of a 100 nm swept range while using less than half that wavelength range.
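
The FFT limitation described above is simply that an N-point transform has a bin spacing of fs/N, so frequency resolution is tied to frame length. A direct DFT in Python illustrates this (a sketch only; the NHA algorithm itself fits sinusoid parameters by nonlinear optimization and is not reproduced here):

```python
import cmath
import math

def dft_mag(x):
    """Magnitude spectrum of the discrete Fourier transform (direct form).
    With n samples, adjacent bins are fs/n apart: two tones closer than one
    bin cannot be separated, whatever window is applied."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

n = 32
tone = [math.cos(2 * math.pi * 5 * t / n) for t in range(n)]  # exactly bin 5
mag = dft_mag(tone)
```

A tone landing exactly on a bin concentrates in that bin (here bin 5, with magnitude n/2); a tone between bins leaks across the spectrum, which is the window-function dependence the abstract refers to.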

  9. [Three-dimensional reconstruction of functional brain images].

    PubMed

    Inoue, M; Shoji, K; Kojima, H; Hirano, S; Naito, Y; Honjo, I

    1999-08-01

    We consider PET (positron emission tomography) measurement with SPM (Statistical Parametric Mapping) analysis to be one of the most useful methods for identifying activated areas of the brain involved in language processing. SPM is an effective analytical method that detects markedly activated areas over the whole brain. However, conventional presentations of these functional brain images, such as horizontal slices, three-directional projections, or brain-surface coloring, make it difficult to understand and interpret the positional relationships among brain areas. We therefore developed three-dimensionally reconstructed versions of these functional brain images to improve interpretation. The subjects were 12 normal volunteers. After PET images acquired during everyday dialogue listening were analyzed with SPM, the following three types of images were constructed: 1) routine SPM images, 2) three-dimensional static images, and 3) three-dimensional dynamic images. Both the three-dimensional static and dynamic images were created with the volume-rendering method of VTK (The Visualization Toolkit). Since the functional brain images did not include the original brain anatomy, we fused the SPM and MRI brain images with our own C++ programs. The three-dimensional dynamic images were made by sequencing static images with available software. Both types of three-dimensional images were processed on a personal computer system. Our newly created images showed the positional relationships among activated brain areas more clearly than the conventional method. To date, functional brain images have been employed mainly in fields such as neurology and neurosurgery; however, they may be useful even in otorhinolaryngology, to assess hearing and speech. Exact three-dimensional images based on functional brain images are important for exact and intuitive interpretation, and may lead to new developments in brain science.
Currently, the surface model is the most common method of three-dimensional display. However, the volume rendering method may be more effective for imaging regions such as the brain.

  10. An advanced software suite for the processing and analysis of silicon luminescence images

    NASA Astrophysics Data System (ADS)

    Payne, D. N. R.; Vargas, C.; Hameiri, Z.; Wenham, S. R.; Bagnall, D. M.

    2017-06-01

    Luminescence imaging is a versatile characterisation technique used for a broad range of research and industrial applications, particularly in the field of photovoltaics, where photoluminescence and electroluminescence imaging is routinely carried out for materials analysis and quality control. Luminescence imaging can reveal a wealth of material information, as detailed in the extensive literature, yet these techniques are often only used qualitatively instead of being utilised to their full potential. Part of the reason for this is the time and effort required for image processing and analysis in order to convert image data into more meaningful results. In this work, a custom-built, MATLAB-based software suite is presented which aims to dramatically simplify luminescence image processing and analysis. The suite includes four individual programs which can be used in isolation or in conjunction to achieve a broad array of functionality, including but not limited to point spread function determination and deconvolution, automated sample extraction, image alignment and comparison, minority carrier lifetime calibration, and iron impurity concentration mapping.

  11. In vivo multiphoton kinetic imaging of the toxic effect of carbon tetrachloride on hepatobiliary metabolism.

    PubMed

    Lin, Chih-Ju; Lee, Sheng-Lin; Lee, Hsuan-Shu; Dong, Chen-Yuan

    2018-06-01

    We used intravital multiphoton microscopy to study the recovery of hepatobiliary metabolism following carbon tetrachloride (CCl4) induced hepatotoxicity in mice. The acquired images were processed by a first order kinetic model to generate rate constant resolved images of the mouse liver. We found that with progression of hepatotoxicity, the spatial gradient of hepatic function disappeared. A CCl4-induced damage mechanism involves the compromise of membrane functions, resulting in accumulation of processed 6-carboxyfluorescein molecules. At day 14 following induction, a restoration of the mouse hepatobiliary function was found. Our approach allows the study of the response of hepatic functions to chemical agents in real time and is useful for studying pharmacokinetics of drug molecules through optical microscopic imaging. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
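
Generating "rate constant resolved" images typically means fitting a kinetic model to the intensity time course at each pixel. A hedged Python sketch of the simplest first-order case, I(t) = I0 * exp(-k*t), via log-linear least squares (the authors' exact model is not specified in this abstract):

```python
import math

def rate_constant(times, intensities):
    """Estimate the first-order rate constant k from I(t) = I0 * exp(-k*t)
    by least-squares regression on log(I). Assumes positive intensities;
    a common linearization, not necessarily the paper's fitting method."""
    logs = [math.log(i) for i in intensities]
    n = len(times)
    mt = sum(times) / n
    ml = sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
             / sum((t - mt) ** 2 for t in times))
    return -slope  # log I = log I0 - k*t, so k is minus the slope
```

Applying this fit pixel by pixel over the acquired image stack yields a map of k, i.e., a rate-constant-resolved image of hepatobiliary transport.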

  12. Recursive search method for the image elements of functionally defined surfaces

    NASA Astrophysics Data System (ADS)

    Vyatkin, S. I.

    2017-05-01

    This paper touches upon the synthesis of high-quality images in real time and the technique for specifying three-dimensional objects on the basis of perturbation functions. The recursive search method for the image elements of functionally defined objects with the use of graphics processing units is proposed. The advantages of such an approach over the frame-buffer visualization method are shown.

  13. JIP: Java image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation, and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or to specify any image on the Internet by its URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can easily be applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, and physics, or to other areas such as employee training and pay-per-use software consumption.

  14. Functional Imaging of Retinal Photoreceptors and Inner Neurons Using Stimulus-Evoked Intrinsic Optical Signals

    PubMed Central

    Yao, Xin-Cheng; Li, Yi-Chao

    2013-01-01

    Retinal development is a dynamic process both anatomically and functionally. High-resolution imaging and dynamic monitoring of photoreceptors and inner neurons can provide important information regarding the structure and function of the developing retina. In this chapter, we describe intrinsic optical signal (IOS) imaging as a high spatiotemporal resolution method for functional study of living retinal tissues. IOS imaging is based on near infrared (NIR) light detection of stimulus-evoked transient change of inherent optical characteristics of the cells. With no requirement for exogenous biomarkers, IOS imaging is totally noninvasive for functional mapping of stimulus-evoked spatiotemporal dynamics of the photoreceptors and inner retinal neurons. PMID:22688714

  15. Noise removal in extended depth of field microscope images through nonlinear signal processing.

    PubMed

    Zahreddine, Ramzi N; Cormack, Robert H; Cogswell, Carol J

    2013-04-01

    Extended depth of field (EDF) microscopy, achieved through computational optics, allows for real-time 3D imaging of live cell dynamics. EDF is achieved through a combination of point spread function engineering and digital image processing. A linear Wiener filter has been conventionally used to deconvolve the image, but it suffers from high frequency noise amplification and processing artifacts. A nonlinear processing scheme is proposed which extends the depth of field while minimizing background noise. The nonlinear filter is generated via a training algorithm and an iterative optimizer. Biological microscope images processed with the nonlinear filter show a significant improvement in image quality and signal-to-noise ratio over the conventional linear filter.
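
The linear Wiener filter that the nonlinear scheme replaces is defined per spatial frequency as W = H* / (|H|^2 + NSR), where H is the engineered point spread function's transfer function and NSR a noise-to-signal ratio. A minimal sketch with a scalar NSR (the usual simplification; the paper's trained nonlinear filter is not reproduced here):

```python
def wiener_filter(H, nsr):
    """Frequency-domain Wiener deconvolution filter W = H* / (|H|^2 + NSR).
    As nsr -> 0 this tends to the inverse filter 1/H, which is exactly what
    amplifies high-frequency noise; a nonzero nsr damps those frequencies."""
    return [h.conjugate() / (abs(h) ** 2 + nsr) for h in H]
```

Multiplying the blurred image's spectrum by W and inverse-transforming gives the deconvolved EDF image; the gain at weak high frequencies (small |H|) is where the noise-amplification artifacts discussed above originate.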

  16. From nociception to pain perception: imaging the spinal and supraspinal pathways

    PubMed Central

    Brooks, Jonathan; Tracey, Irene

    2005-01-01

    Functional imaging techniques have allowed researchers to look within the brain, and revealed the cortical representation of pain. Initial experiments, performed in the early 1990s, revolutionized pain research, as they demonstrated that pain was not processed in a single cortical area, but in several distributed brain regions. Over the last decade, the roles of these pain centres have been investigated and a clearer picture has emerged of the medial and lateral pain system. In this brief article, we review the imaging literature to date that has allowed these advances to be made, and examine the new frontiers for pain imaging research: imaging the brainstem and other structures involved in the descending control of pain; functional and anatomical connectivity studies of pain processing brain regions; imaging models of neuropathic pain-like states; and going beyond the brain to image spinal function. The ultimate goal of such research is to take these new techniques into the clinic, to investigate and provide new remedies for chronic pain sufferers. PMID:16011543

  17. Research on remote sensing image pixel attribute data acquisition method in AutoCAD

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoyang; Sun, Guangtong; Liu, Jun; Liu, Hui

    2013-07-01

    Remote sensing images are widely used in AutoCAD, but AutoCAD lacks remote sensing image processing functions. In this paper, ObjectARX is used as the secondary development tool, combined with the Image Engine SDK, to realize remote sensing image pixel attribute data acquisition in AutoCAD, which provides critical technical support for remote sensing image processing algorithms in the AutoCAD environment.

  18. Light-Field Imaging Toolkit

    NASA Astrophysics Data System (ADS)

    Bolan, Jeffrey; Hall, Elise; Clifford, Chris; Thurow, Brian

    The Light-Field Imaging Toolkit (LFIT) is a collection of MATLAB functions designed to facilitate the rapid processing of raw light field images captured by a plenoptic camera. An included graphical user interface streamlines the necessary post-processing steps associated with plenoptic images. The generation of perspective shifted views and computationally refocused images is supported, in both single image and animated formats. LFIT performs necessary calibration, interpolation, and structuring steps to enable future applications of this technology.

  19. Neural Substrates for Processing Task-Irrelevant Sad Images in Adolescents

    ERIC Educational Resources Information Center

    Wang, Lihong; Huettel, Scott; De Bellis, Michael D.

    2008-01-01

    Neural systems related to cognitive and emotional processing were examined in adolescents using event-related functional magnetic resonance imaging (fMRI). Ten healthy adolescents performed an emotional oddball task. Subjects detected infrequent circles (targets) within a continual stream of phase-scrambled images (standards). Sad and neutral…

  20. An improved image alignment procedure for high-resolution transmission electron microscopy.

    PubMed

    Lin, Fang; Liu, Yan; Zhong, Xiaoyan; Chen, Jianghua

    2010-06-01

    Image alignment is essential for image processing methods such as through-focus exit-wavefunction reconstruction and image averaging in high-resolution transmission electron microscopy. Relative image displacements exist in any experimentally recorded image series due to the specimen drifts and image shifts, hence image alignment for correcting the image displacements has to be done prior to any further image processing. The image displacement between two successive images is determined by the correlation function of the two relatively shifted images. Here it is shown that more accurate image alignment can be achieved by using an appropriate aperture to filter the high-frequency components of the images being aligned, especially for a crystalline specimen with little non-periodic information. For the image series of crystalline specimens with little amorphous, the radius of the filter aperture should be as small as possible, so long as it covers the innermost lattice reflections. Testing with an experimental through-focus series of Si[110] images, the accuracies of image alignment with different correlation functions are compared with respect to the error functions in through-focus exit-wavefunction reconstruction based on the maximum-likelihood method. Testing with image averaging over noisy experimental images from graphene and carbon-nanotube samples, clear and sharp crystal lattice fringes are recovered after applying optimal image alignment. Copyright 2010 Elsevier Ltd. All rights reserved.
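
The core of correlation-based alignment is choosing the displacement that maximizes a correlation function between the two images. A 1D integer-shift Python sketch of that step (the paper works in 2D and additionally low-pass filters with an aperture before correlating; neither is shown here):

```python
def best_shift(ref, img, max_shift):
    """Estimate the displacement of img relative to ref by maximizing the
    cross-correlation over integer shifts (1D analogue of 2D alignment)."""
    def corr(s):
        pairs = [(ref[i], img[i + s]) for i in range(len(ref))
                 if 0 <= i + s < len(img)]
        return sum(a * b for a, b in pairs)
    return max(range(-max_shift, max_shift + 1), key=corr)
```

For a crystalline specimen, many shifts give near-equal correlation because the lattice is periodic; restricting the correlation to low spatial frequencies (the aperture filtering described above) suppresses those spurious lattice-period maxima.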

  1. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

    Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While a wide variety of image-processing programs can be obtained from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed on top of wxWidgets and, at the time of writing, have been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, data fitting, 3D image processing, vector image processing, precision device control (rotary stages, PZT stages, etc.), point-cloud to surface reconstruction, volume rendering, and batch processing. The software package is currently used in a number of universities for teaching and research.

  2. High resolution image processing on low-cost microcomputers

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1993-01-01

    Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.

  3. Neuroimaging of the Functional and Structural Networks Underlying Visuospatial vs. Linguistic Reasoning in High-Functioning Autism

    ERIC Educational Resources Information Center

    Sahyoun, Cherif P.; Belliveau, John W.; Soulieres, Isabelle; Schwartz, Shira; Mody, Maria

    2010-01-01

    High-functioning individuals with autism have been found to favor visuospatial processing in the face of typically poor language abilities. We aimed to examine the neurobiological basis of this difference using functional magnetic resonance imaging and diffusion tensor imaging. We compared 12 children with high functioning autism (HFA) to 12 age-…

  4. How the blind "see" Braille: lessons from functional magnetic resonance imaging.

    PubMed

    Sadato, Norihiro

    2005-12-01

    What does the visual cortex of the blind do during Braille reading? This process involves converting simple tactile information into meaningful patterns that have lexical and semantic properties. The perceptual processing of Braille might be mediated by the somatosensory system, whereas visual letter identity is accomplished within the visual system in sighted people. Recent advances in functional neuroimaging techniques, such as functional magnetic resonance imaging, have enabled exploration of the neural substrates of Braille reading. The primary visual cortex of early-onset blind subjects is functionally relevant to Braille reading, suggesting that the brain shows remarkable plasticity that potentially permits the additional processing of tactile information in the visual cortical areas.

  5. Beef assessments using functional magnetic resonance imaging and sensory evaluation.

    PubMed

    Tapp, W N; Davis, T H; Paniukov, D; Brooks, J C; Brashears, M M; Miller, M F

    2017-04-01

    Functional magnetic resonance imaging (fMRI) has been used to unveil how some foods and basic rewards are processed in the human brain. This study evaluated how resting-state functional connectivity in regions of the human brain changed after beef steaks of differing quality were consumed. Functional images of participants (n=8) were collected after they ate high- or low-quality beef steaks on separate days; after consumption, a sensory ballot was administered to evaluate consumers' perceptions of tenderness, juiciness, flavor, and overall liking. Imaging data showed that high-quality steak samples resulted in greater functional connectivity to the striatum, medial orbitofrontal cortex, and insular cortex at various stages after consumption (P≤0.05). Furthermore, high-quality steaks elicited higher sensory ballot scores for each palatability trait (P≤0.01). Together, these results suggest that resting-state fMRI may be a useful tool for evaluating the neural processes that follow positive sensory experiences such as the enjoyment of high-quality beef steaks. Published by Elsevier Ltd.

  6. Blind deconvolution of astronomical images with band limitation determined by optical system parameters

    NASA Astrophysics Data System (ADS)

    Luo, L.; Fan, M.; Shen, M. Z.

    2007-07-01

    Atmospheric turbulence greatly limits the spatial resolution of astronomical images acquired by large ground-based telescopes. The recorded image is the convolution of the object function with the point spread function. The statistical relationship among the measured image data, the estimated object and the point spread function follows the Bayes conditional probability distribution, from which a maximum-likelihood formulation is obtained. A blind deconvolution approach based on maximum-likelihood estimation with a real optical band-limit constraint is presented for removing the effect of atmospheric turbulence on this class of images; the convolution error function is minimized with a conjugate-gradient optimization algorithm. As a result, the object function and the point spread function can be estimated simultaneously from a few recorded images. Following the principles of Fourier optics, the relationship between the telescope's optical system parameters and the band limit of the image in the frequency domain is formulated for the transformations between the spatial and frequency domains. Convergence of the algorithm is improved by constraining the estimated object and point spread functions to be nonnegative and the point spread function to be band limited. To avoid losing Fourier components beyond the cutoff frequency during these transformations, when the sampled image data, the spatial domain and the frequency domain are of the same size, the detector element (e.g., a pixel of the CCD) should be smaller than a quarter of the diffraction speckle diameter of the telescope when acquiring images on the focal plane.
    Because the algorithm does not use an object support constraint, the proposed method is readily applied to the restoration of wide-field turbulence-degraded images. The validity of the method is examined by computer simulation and by restoration of real astronomical image data of Alpha Psc. The results suggest that blind deconvolution with the real optical band constraint can remove the effect of atmospheric turbulence on the observed images, and that the spatial resolution of the object image can reach or exceed the diffraction-limited level.
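
    The alternating estimation of object and point spread function under nonnegativity and band-limit constraints can be illustrated with a much simpler stand-in for the paper's conjugate-gradient scheme, a Richardson-Lucy style iteration (hypothetical sketch; all names are ours):

```python
import numpy as np

def blind_rl(image, n_iter=20, cutoff=0.2):
    """Alternating Richardson-Lucy blind deconvolution, a simple
    stand-in for the paper's conjugate-gradient maximum-likelihood
    scheme.  Multiplicative updates keep both estimates nonnegative;
    after each sweep the PSF is projected onto a band limit
    (`cutoff`, in cycles/pixel), mirroring the optical band-limit
    constraint.  Circular convolution is assumed throughout."""
    image = np.asarray(image, dtype=float)
    n0, n1 = image.shape
    f0, f1 = np.fft.fftfreq(n0), np.fft.fftfreq(n1)
    band = (f0[:, None] ** 2 + f1[None, :] ** 2) <= cutoff ** 2

    obj = np.full_like(image, image.mean())
    # Start the PSF as a narrow blob (a flat start is a degenerate
    # fixed point of the alternating updates).
    yy = np.minimum(np.arange(n0), n0 - np.arange(n0))
    xx = np.minimum(np.arange(n1), n1 - np.arange(n1))
    psf = np.exp(-(yy[:, None] ** 2 + xx[None, :] ** 2) / 2.0)
    psf /= psf.sum()

    def cconv(a, b):   # circular convolution
        return np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)).real

    def ccorr(a, b):   # circular cross-correlation (adjoint of cconv)
        return np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real

    for _ in range(n_iter):
        ratio = image / np.maximum(cconv(obj, psf), 1e-12)
        obj = obj * ccorr(ratio, psf)                 # object update
        ratio = image / np.maximum(cconv(obj, psf), 1e-12)
        psf = psf * ccorr(ratio, obj) / obj.sum()     # PSF update
        # Constraint projection: band limited, nonnegative, unit sum.
        psf = np.maximum(np.fft.ifft2(np.fft.fft2(psf) * band).real, 0.0)
        psf /= psf.sum()
    return obj, psf
```
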

  7. An Electro-Optical Image Algebra Processing System for Automatic Target Recognition

    NASA Astrophysics Data System (ADS)

    Coffield, Patrick Cyrus

    The proposed electro-optical image algebra processing system is designed specifically for image processing and other related computations. The design is a hybridization of an optical correlator and a massively parallel, single-instruction multiple-data processor. The architecture of the design consists of three tightly coupled components: a spatial configuration processor (the optical analog portion), a weighting processor (digital), and an accumulation processor (digital). The systolic flow of data and image processing operations is directed by a control buffer and pipelined to each of the three processing components. The image processing operations are defined in terms of the basic operations of an image algebra developed by the University of Florida. The algebra is capable of describing all common image-to-image transformations. The merit of this architectural design is how it implements the natural decomposition of algebraic functions into spatially distributed, pointwise operations. The effect of this particular decomposition is that convolution-type operations can be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. Thus, a substantial increase in throughput is realized. The implementation of the proposed design may be accomplished in many ways. While a hybrid electro-optical implementation is of primary interest, the benefits and design issues of an all-digital implementation are also discussed. The potential utility of this architectural design lies in its ability to control a large variety of the arithmetic and logic operations of the image algebra's generalized matrix product. The generalized matrix product is the most powerful fundamental operation in the algebra, thus allowing a wide range of applications. No other known device or design has made this claim of processing speed and general implementation of a heterogeneous image algebra.
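
    The decomposition that motivates the architecture, computing a convolution as one full-image shift-and-accumulate step per template element, can be sketched as follows (illustrative only; circular boundary handling is assumed, and the loop count depends only on the template size, not the image size):

```python
import numpy as np

def template_convolve(image, template):
    """Convolution decomposed the way the proposed architecture
    exploits it: one shifted, weighted copy of the whole image per
    template element.  Each loop iteration corresponds to a single
    spatially parallel operation over all pixels at once."""
    out = np.zeros_like(image, dtype=float)
    cy, cx = template.shape[0] // 2, template.shape[1] // 2
    for dy in range(template.shape[0]):
        for dx in range(template.shape[1]):
            w = template[dy, dx]
            if w != 0.0:
                # Circularly shift the image and accumulate its
                # weighted copy (the "weighting" and "accumulation"
                # processors of the design).
                out += w * np.roll(image, (dy - cy, dx - cx), axis=(0, 1))
    return out
```
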

  8. ART AND SCIENCE OF IMAGE MAPS.

    USGS Publications Warehouse

    Kidwell, Richard D.; McSweeney, Joseph A.

    1985-01-01

    The visual image of reflected light is influenced by the complex interplay of human color discrimination, spatial relationships, surface texture, and the spectral purity of light, dyes, and pigments. Scientific theories of image processing may not always achieve acceptable results because the variety of factors involved, some psychological, is in part unpredictable. The tonal relationships that affect digital image processing, and the transfer functions used to transform the continuous-tone source image into a lithographic image, offer insight into where art and science fuse in the production process. The application of art and science in image map production at the U.S. Geological Survey is illustrated and discussed.

  9. Observations on the effects of image processing functions on fingermark data in the Fourier domain

    NASA Astrophysics Data System (ADS)

    Bramble, Simon K.; Fabrizi, Paola M.

    1995-09-01

    One of the image processing functions used for the enhancement of latent fingermark images is the Fourier transform. This paper describes some effects of spatial resolution, zero-filling and windowing on fingermark data in the Fourier domain. It is shown that, with an understanding of the fingermark structure, it is possible to determine the approximate position of the frequency data in the Fourier domain corresponding to the fingermark image detail. The effect of attenuating frequency data in a zero-filled image is shown to be different from the same attenuation in a non-zero-filled image. The effects of windowing the spatial data on the frequency data are also highlighted and compared with the same data after the application of a Hanning window.
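
    The effects discussed, the position of the ridge-frequency data, zero-filling, and windowing, can be reproduced on a synthetic one-dimensional ridge signal (a hypothetical stand-in for fingermark data):

```python
import numpy as np

# Synthetic 1-D "ridge" signal standing in for a row of fingermark
# data (illustrative sketch, not the paper's data).
n = 64
x = np.arange(n)
ridge = np.sin(2 * np.pi * 8 * x / n)             # 8 ridge cycles

# The ridge spacing fixes where the frequency data land:
spec = np.abs(np.fft.rfft(ridge))
peak = np.argmax(spec[1:]) + 1                    # skip the DC bin -> 8

# Zero-filling doubles the length of the frequency grid: the same
# physical frequency now appears at twice the bin index, so the same
# attenuation mask covers different data than in the unpadded case.
padded = np.concatenate([ridge, np.zeros(n)])
spec_pad = np.abs(np.fft.rfft(padded))
peak_pad = np.argmax(spec_pad[1:]) + 1            # -> 16

# Windowing: a ridge frequency that does not fit the grid leaks
# energy across bins; a Hanning window suppresses leakage far from
# the ridge peak.
offgrid = np.sin(2 * np.pi * 8.5 * x / n)
leak_raw = np.abs(np.fft.rfft(offgrid))
leak_win = np.abs(np.fft.rfft(offgrid * np.hanning(n)))
```
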

  10. Content-aware dark image enhancement through channel division.

    PubMed

    Rivera, Adin Ramirez; Ryu, Byungyong; Chae, Oksam

    2012-09-01

    The current contrast enhancement algorithms occasionally result in artifacts, overenhancement, and unnatural effects in the processed images. These drawbacks increase for images taken under poor illumination conditions. In this paper, we propose a content-aware algorithm that enhances dark images, sharpens edges, reveals details in textured regions, and preserves the smoothness of flat regions. The algorithm produces an ad hoc transformation for each image, adapting the mapping functions to each image's characteristics to produce the maximum enhancement. We analyze the contrast of the image in the boundary and textured regions, and group the information with common characteristics. These groups model the relations within the image, from which we extract the transformation functions. The results are then adaptively mixed, by considering the human vision system characteristics, to boost the details in the image. Results show that the algorithm can automatically process a wide range of images (e.g., mixed shadow and bright areas, outdoor and indoor lighting, and face images) without introducing artifacts, which is an improvement over many existing methods.
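
    For illustration only, a vastly simplified per-image mapping in the same spirit of adapting the transformation to each image's own statistics (this heuristic is ours, not the authors' content-aware algorithm):

```python
import numpy as np

def adaptive_gamma(image):
    """Hypothetical illustration, far simpler than the paper's
    method: derive a gamma exponent from each image's own mean
    brightness, so dark images are brightened more aggressively and
    bright images are compressed."""
    img = image.astype(float) / 255.0
    mean = min(max(img.mean(), 1e-6), 0.999)      # guard the log
    gamma = np.log(0.5) / np.log(mean)            # maps the mean toward 0.5
    gamma = float(np.clip(gamma, 0.3, 3.0))
    return (255.0 * img ** gamma).astype(np.uint8)
```

    A content-aware method goes much further, varying the mapping spatially with edge and texture content, whereas this toy transform is global per image.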

  11. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    PubMed Central

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
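
    The key idea, interpolation that also returns an uncertainty that grows with distance from the base grid, can be sketched in one dimension with a plain RBF-kernel Gaussian process (our simplification; the paper's generative registration model is richer):

```python
import numpy as np

def gp_interpolate(x_grid, y_grid, x_query, length=1.0, noise=1e-6):
    """1-D Gaussian-process regression through base-grid samples.
    Returns the posterior mean (the interpolated values) and the
    posterior variance, which is near zero at grid points and grows
    between them.  An RBF kernel with unit prior variance is assumed
    here for illustration."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_grid, x_grid) + noise * np.eye(len(x_grid))
    Ks = k(x_query, x_grid)
    mean = Ks @ np.linalg.solve(K, y_grid)
    v = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.sum(Ks * v.T, axis=1)   # prior variance k(x, x) = 1
    return mean, np.maximum(var, 0.0)
```

    In the paper's setting, this posterior variance is what flows into the similarity measure, down-weighting resampled points that fall far from the base grid.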

  12. EVALUATION OF REGISTRATION, COMPRESSION AND CLASSIFICATION ALGORITHMS

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.

    1994-01-01

    Several types of algorithms are generally used to process digital imagery such as Landsat data. The most commonly used algorithms perform the task of registration, compression, and classification. Because there are different techniques available for performing registration, compression, and classification, imagery data users need a rationale for selecting a particular approach to meet their particular needs. This collection of registration, compression, and classification algorithms was developed so that different approaches could be evaluated and the best approach for a particular application determined. Routines are included for six registration algorithms, six compression algorithms, and two classification algorithms. The package also includes routines for evaluating the effects of processing on the image data. This collection of routines should be useful to anyone using or developing image processing software. Registration of image data involves the geometrical alteration of the imagery. Registration routines available in the evaluation package include image magnification, mapping functions, partitioning, map overlay, and data interpolation. The compression of image data involves reducing the volume of data needed for a given image. Compression routines available in the package include adaptive differential pulse code modulation, two-dimensional transforms, clustering, vector reduction, and picture segmentation. Classification of image data involves analyzing the uncompressed or compressed image data to produce inventories and maps of areas of similar spectral properties within a scene. The classification routines available include a sequential linear technique and a maximum likelihood technique. The choice of the appropriate evaluation criteria is quite important in evaluating the image processing functions. The user is therefore given a choice of evaluation criteria with which to investigate the available image processing functions. 
All of the available evaluation criteria basically compare the observed results with the expected results. For the image reconstruction processes of registration and compression, the expected results are usually the original data or some selected characteristics of the original data. For classification processes the expected result is the ground truth of the scene. Thus, the comparison process consists of determining what changes occur in processing, where the changes occur, how much change occurs, and the amplitude of the change. The package includes evaluation routines for performing such comparisons as average uncertainty, average information transfer, chi-square statistics, multidimensional histograms, and computation of contingency matrices. This collection of routines is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 computer with a central memory requirement of approximately 662K of 8 bit bytes. This collection of image processing and evaluation routines was developed in 1979.
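
    One of the listed evaluation criteria, the chi-square statistic, can be sketched for 8-bit imagery as a comparison of grey-level histograms (a hedged illustration; the package's FORTRAN routines certainly differ in detail):

```python
import numpy as np

def chi_square_stat(original, processed, bins=16):
    """Chi-square comparison of a processed image against the
    original, using the original's grey-level histogram as the
    expected distribution.  Bins with zero expected count are
    dropped for simplicity."""
    rng = (0, 256)
    exp_counts, _ = np.histogram(original, bins=bins, range=rng)
    obs_counts, _ = np.histogram(processed, bins=bins, range=rng)
    # Scale the expected counts to the processed image's sample size.
    expected = exp_counts * obs_counts.sum() / exp_counts.sum()
    mask = expected > 0
    return float(np.sum((obs_counts[mask] - expected[mask]) ** 2
                        / expected[mask]))
```

    A registration or compression routine that preserves the image statistics yields a statistic near zero; larger values flag how much processing changed the grey-level distribution.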

  13. Unsupervised Detection of Planetary Craters by a Marked Point Process

    NASA Technical Reports Server (NTRS)

    Troglio, G.; Benediktsson, J. A.; Le Moigne, J.; Moser, G.; Serpico, S. B.

    2011-01-01

    With the launch of several planetary missions in the last decade, a large number of planetary images is being acquired. Because of the huge volume of acquired data, automatic and robust processing techniques are preferable for data analysis. Here, the aim is to achieve a robust and general methodology for crater detection. A novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses, i.e., the contour image is considered a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated by using reversible-jump Markov chain Monte Carlo dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: marked point processes represent a very promising current approach to stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with excellent robustness to noise. The proposed method for crater detection has several feasible applications, one of which is image registration by matching the extracted features.
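
    The simulated-annealing component can be illustrated on a plain real-valued state instead of a configuration of ellipses (a toy sketch; the reversible-jump moves that add and remove ellipses are not reproduced here):

```python
import numpy as np

def anneal(energy, x0, step=0.5, t0=1.0, cooling=0.995,
           n_iter=2000, seed=0):
    """Generic simulated-annealing minimiser of an energy function.
    Downhill moves are always accepted; uphill moves are accepted
    with probability exp(-dE/T), and the temperature T is lowered
    geometrically so the search settles into a deep minimum."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    e, t = energy(x), t0
    for _ in range(n_iter):
        cand = x + rng.normal(0.0, step, size=x.shape)
        e_cand = energy(cand)
        if e_cand < e or rng.random() < np.exp(-(e_cand - e) / t):
            x, e = cand, e_cand
        t *= cooling
    return x, e
```

    In the crater-detection setting, the state would be the current set of ellipses and `energy` the sum of the prior and likelihood terms described above.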

  14. Software organization for a prolog-based prototyping system for machine vision

    NASA Astrophysics Data System (ADS)

    Jones, Andrew C.; Hack, Ralf; Batchelor, Bruce G.

    1996-11-01

    We describe PIP (Prolog image processing), a prototype system for interactive image processing using Prolog, implemented on an Apple Macintosh computer. PIP is the latest in a series of systems, under the collective title Prolog+, that the third author has been involved in implementing. PIP differs from our previous systems in two particularly important respects. The first is that whereas we previously required dedicated image processing hardware, the present system implements the image processing routines in software. The second is that our present system is hierarchical in structure: the top level of the hierarchy emulates Prolog+, but a flexible infrastructure underneath supports more sophisticated image manipulation, which we will be able to exploit in due course. We discuss the impact of the Apple Macintosh operating system on the implementation of the image processing functions, and the interface between these functions and the Prolog system. We also explain how the existing set of Prolog+ commands has been implemented. PIP is now nearing maturity, and we will make a version of it generally available in the near future. However, although the present version of PIP constitutes a complete image processing tool, there are a number of ways in which we intend to enhance future versions, with a view to added flexibility and efficiency; we discuss these ideas briefly near the end of the present paper.

  15. Thalamic structures and associated cognitive functions: Relations with age and aging.

    PubMed

    Fama, Rosemary; Sullivan, Edith V

    2015-07-01

    The thalamus, with its cortical, subcortical, and cerebellar connections, is a critical node in networks supporting cognitive functions known to decline in normal aging, including component processes of memory and the executive functions of attention and information processing. The macrostructure, microstructure, and neural connectivity of the thalamus change across the adult lifespan. Structural and functional magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) have demonstrated regional thalamic volume shrinkage and microstructural degradation, with anterior regions generally more compromised than posterior regions. The integrity of selective thalamic nuclei and projections declines with advancing age, particularly those in thalamofrontal, thalamoparietal, and thalamolimbic networks. This review presents studies that assess the relations between age and aging and the structure, function, and connectivity of the thalamus and associated neural networks, focusing on their relations with processes of attention, speed of information processing, and working and episodic memory. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Review of free software tools for image analysis of fluorescence cell micrographs.

    PubMed

    Wiesmann, V; Franz, D; Held, C; Münzenmayer, C; Palmisano, R; Wittenberg, T

    2015-01-01

    An increasing number of free software tools have been made available for the evaluation of fluorescence cell micrographs. The main users are biologists and related life scientists with no or little knowledge of image processing. In this review, we give an overview of available tools and guidelines about which tools the users should use to segment fluorescence micrographs. We selected 15 free tools and divided them into stand-alone, Matlab-based, ImageJ-based, free demo versions of commercial tools and data sharing tools. The review consists of two parts: First, we developed a criteria catalogue and rated the tools regarding structural requirements, functionality (flexibility, segmentation and image processing filters) and usability (documentation, data management, usability and visualization). Second, we performed an image processing case study with four representative fluorescence micrograph segmentation tasks with figure-ground and cell separation. The tools display a wide range of functionality and usability. In the image processing case study, we were able to perform figure-ground separation in all micrographs using mainly thresholding. Cell separation was not possible with most of the tools, because cell separation methods are provided only by a subset of the tools and are difficult to parametrize and to use. Most important is that the usability matches the functionality of a tool. To be usable, specialized tools with less functionality need to fulfill less usability criteria, whereas multipurpose tools need a well-structured menu and intuitive graphical user interface. © 2014 Fraunhofer-Institute for Integrated Circuits IIS Journal of Microscopy © 2014 Royal Microscopical Society.

  17. Web-based interactive 2D/3D medical image processing and visualization software.

    PubMed

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

    There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a purely web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To compete with the extendibility of current local medical image processing software, each layer is highly independent of the other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open-source libraries. Desktop-like user interaction is provided by using AJAX technology in the web user interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources on the client side. The user interface is designed such that users can select appropriate parameters for practical research and clinical studies. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.

  18. Analyzing microtomography data with Python and the scikit-image library.

    PubMed

    Gouillart, Emmanuelle; Nunez-Iglesias, Juan; van der Walt, Stéfan

    2017-01-01

    The exploration and processing of images is a vital aspect of the scientific workflows of many X-ray imaging modalities. Users require tools that combine interactivity, versatility, and performance. scikit-image is an open-source image processing toolkit for the Python language that supports a large variety of file formats and is compatible with 2D and 3D images. The toolkit exposes a simple programming interface, with thematic modules grouping functions according to their purpose, such as image restoration, segmentation, and measurements. scikit-image users benefit from a rich scientific Python ecosystem that contains many powerful libraries for tasks such as visualization or machine learning. scikit-image combines a gentle learning curve, versatile image processing capabilities, and the scalable performance required for the high-throughput analysis of X-ray imaging data.
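
    A minimal restore-segment-measure workflow in the style the toolkit encourages might look as follows (the module and function names are scikit-image's; the synthetic two-blob image stands in for a tomography slice):

```python
import numpy as np
from skimage import filters, measure

# Synthetic image: two bright rectangular "grains" plus noise.
rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[10:25, 10:25] = 1.0
image[40:55, 35:60] = 1.0
noisy = image + rng.normal(0.0, 0.1, image.shape)

smooth = filters.gaussian(noisy, sigma=1.0)   # restoration module
thresh = filters.threshold_otsu(smooth)       # segmentation
binary = smooth > thresh
labels = measure.label(binary)                # measurement
areas = sorted(r.area for r in measure.regionprops(labels))
```

    The same calls generalize to 3D volumes, which is what makes the toolkit convenient for microtomography stacks.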

  19. [Seeking the aetiology of autistic spectrum disorder. Part 2: Functional neuroimaging].

    PubMed

    Bryńska, Anita

    2012-01-01

    Multiple functional imaging techniques are contributing to a better understanding of the neurobiological basis of autism spectrum disorders (ASD). Early functional imaging studies of ASD focused on task-specific methods related to the core symptom domains and explored patterns of activation in response to face processing, theory-of-mind tasks, language processing and executive function tasks. More recently, fMRI research in ASD has focused on the development of functional connectivity methods; it has provided evidence of alterations in cortical connectivity in ASD and established autism as a disorder of under-connectivity among the brain regions participating in cortical networks. This atypical functional connectivity in ASD results in inefficiency and poor integration of processing across network connections during task performance. The goal of this review is to summarise the current functional neuroimaging data and examine their implications for understanding the neurobiology of ASD.

  20. HALO: a reconfigurable image enhancement and multisensor fusion system

    NASA Astrophysics Data System (ADS)

    Wu, F.; Hickman, D. L.; Parker, Steve J.

    2014-06-01

    Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.

  1. Physical Modeling for Processing Geosynchronous Imaging Fourier Transform Spectrometer-Indian Ocean METOC Imager (GIFTS-IOMI) Hyperspectral Data

    DTIC Science & Technology

    2002-09-30

    Physical Modeling for Processing Geosynchronous Imaging Fourier Transform Spectrometer-Indian Ocean METOC Imager (GIFTS-IOMI) Hyperspectral Data...water quality assessment. OBJECTIVES The objective of this DoD research effort is to develop and demonstrate a fully functional GIFTS-IOMI...environment once GIFTS-IOMI is stationed over the Indian Ocean. The system will provide specialized methods for the characterization of the atmospheric

  2. Application of advanced signal processing techniques to the rectification and registration of spaceborne imagery. [technology transfer, data transmission

    NASA Technical Reports Server (NTRS)

    Caron, R. H.; Rifman, S. S.; Simon, K. W.

    1974-01-01

    The development of an ERTS/MSS image processing system responsive to the needs of the user community is discussed. An overview of the TRW ERTS/MSS processor is presented, followed by a more detailed discussion of the image processing functions satisfied by the system. The particular functions chosen for discussion derive from advanced signal processing techniques rooted in the areas of communication and control. These examples show how classical aerospace technology can be transferred to solve the more contemporary problems confronting the users of spaceborne imagery.

  3. Left hemispheric dominance of vestibular processing indicates lateralization of cortical functions in rats.

    PubMed

    Best, Christoph; Lange, Elena; Buchholz, Hans-Georg; Schreckenberger, Mathias; Reuss, Stefan; Dieterich, Marianne

    2014-11-01

    Lateralization of cortical functions such as speech dominance, handedness and the processing of vestibular information is present not only in humans but also in ontogenetically older species, e.g. rats. In human functional imaging studies, the processing of vestibular information was found to be correlated with hemispheric dominance as determined by handedness: it is located mainly within the right hemisphere in right-handers and within the left hemisphere in left-handers. Since the dominance of vestibular processing is unknown in animals, our aim was to study the lateralization of cortical processing in a functional imaging study applying small-animal positron emission tomography (microPET) and galvanic vestibular stimulation in an in vivo rat model. The cortical and subcortical network processing vestibular information could be demonstrated and correlated with data from other animal studies. By calculating a lateralization index as well as flipped region-of-interest analyses, we found that vestibular processing in rats follows a strong left hemispheric dominance independent of the "handedness" of the animals. These findings support the idea of an early hemispheric specialization of vestibular cortical functions in ontogenetically older species.

  4. A Neuroimaging Web Services Interface as a Cyber Physical System for Medical Imaging and Data Management in Brain Research: Design Study

    PubMed Central

    2018-01-01

    Background Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must first be converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data from another system. Objective The aim of this study was to fulfill the neuroimaging community's need for a common platform to store, process, explore, and visualize their neuroimaging data and results using the Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Methods The Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected, is securely accessible through a Web interface, and allows (1) visualization of results and (2) downloading of tabulated data. Results All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline starts from a FreeSurfer reconstruction of structural magnetic resonance images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer's Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Leading researchers in the fields of Alzheimer's Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. Conclusions To our knowledge, there is no validated Web-based system offering all the services that the Neuroimaging Web Services Interface offers. The intent of the Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with a keen interest in multimodal neuroimaging. More importantly, the Neuroimaging Web Services Interface significantly augments the Alzheimer's Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer's Disease patients. The obtained results can be scrutinized visually or through the tabulated forms, informing researchers of subtle changes that characterize the different stages of the disease. PMID:29699962

  5. RADC Multi-Dimensional Signal-Processing Research Program.

    DTIC Science & Technology

    1980-09-30

    Formulation 3.2.2 Methods of Accelerating Convergence 3.2.3 Application to Image Deblurring 3.2.4 Extensions 3.3 Convergence of Iterative Signal...noise-driven linear filters, permit development of the joint probability density function or likelihood function for the image. With an expression...spatial linear filter driven by white noise (see Fig. 1). If the probability density function for the white noise is known...

  6. 3D receiver function Kirchhoff depth migration image of Cascadia subduction slab weak zone

    NASA Astrophysics Data System (ADS)

    Cheng, C.; Allen, R. M.; Bodin, T.; Tauzin, B.

    2016-12-01

    We have developed a computationally efficient algorithm for applying 3D Kirchhoff depth migration to teleseismic receiver function data. By combining the primary Ps arrival with later multiple arrivals, we are able to better constrain the earth's discontinuity structure (transmission and reflection). Compared with the traditional common conversion point (CCP) method, this approach is particularly useful when dipping structure, such as a subducting slab, is encountered during imaging. We apply the method to regional Cascadia subduction zone receiver function data and obtain a high-resolution 3D migration image for both the primary and the multiple arrivals. The image reveals a clear slab weak zone (slab hole) in the upper plate boundary beneath Northern California and the whole of Oregon. Compared with previous 2D receiver function images from 2D arrays (CAFE and CASC93), the position of the weak zone shows interesting coherency. The weak zone also coincides with an absence of local seismicity and with rising heat, which leads us to consider the structure of the oceanic plate and the hydraulic fluid processes at work during the formation and migration of the subducting slab.
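
    The core of a Kirchhoff-style migration of receiver functions can be illustrated with a 1-D toy model: for each trial depth, stack the receiver-function amplitude each station predicts at the Ps-minus-P delay of a converter at that depth. The velocities, converter depth, and pulse shape below are illustrative, not values from this study:

```python
import numpy as np

# Toy 1-D Kirchhoff-style depth migration of receiver functions.
# Assumed values: crustal vp/vs and a single converter at 40 km depth.
vp, vs = 6.5, 3.7          # km/s (illustrative)
dt = 0.05                  # s, sample interval
t = np.arange(0, 20, dt)   # time axis of the receiver functions

def ps_delay(z):
    """Vertical-incidence Ps-minus-P delay for a converter at depth z (km)."""
    return z * (1.0 / vs - 1.0 / vp)

# Synthetic receiver functions: a Gaussian pulse at the Ps delay of a
# converter at 40 km depth, for several stations, plus a little noise.
z_true = 40.0
n_sta = 5
rng = np.random.default_rng(0)
rfs = np.empty((n_sta, t.size))
for i in range(n_sta):
    pulse = np.exp(-0.5 * ((t - ps_delay(z_true)) / 0.2) ** 2)
    rfs[i] = pulse + 0.05 * rng.standard_normal(t.size)

# Migration: for each trial depth, stack the amplitude each station
# predicts at that depth's delay time; the stack peaks at the converter.
depths = np.arange(10.0, 80.0, 0.5)
image = np.zeros(depths.size)
for k, z in enumerate(depths):
    idx = int(round(ps_delay(z) / dt))
    image[k] = rfs[:, idx].sum() / n_sta

best_depth = depths[np.argmax(image)]   # should land near 40 km
```

    With multiple arrivals one would add further delay curves (e.g. for PpPs) and stack them in the same way.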

  7. Optical head tracking for functional magnetic resonance imaging using structured light.

    PubMed

    Zaremba, Andrei A; MacFarlane, Duncan L; Tseng, Wei-Che; Stark, Andrew J; Briggs, Richard W; Gopinath, Kaundinya S; Cheshkov, Sergey; White, Keith D

    2008-07-01

    An accurate motion-tracking technique is needed to compensate for subject motion during functional magnetic resonance imaging (fMRI) procedures. Here, a novel approach to motion metrology is discussed. A structured light pattern specifically coded for digital signal processing is positioned onto a fiduciary of the patient. As the patient undergoes spatial transformations in 6 DoF (degrees of freedom), a high-resolution CCD camera captures successive images for analysis on a computing platform. A high-speed image processing algorithm is used to calculate spatial transformations in a time frame commensurate with patient movements (10-100 ms) and with a precision of at least 0.5 microm for translations and 0.1 deg for rotations.
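
    A standard way to recover a 6-DoF rigid pose from tracked points on a fiduciary is the Kabsch algorithm; the sketch below, with simulated marker positions and motion rather than the paper's actual processing chain, recovers a small rotation and translation from point correspondences:

```python
import numpy as np

def rigid_transform(P, Q):
    """Kabsch: find R, t minimizing ||R @ p_i + t - q_i|| over pairs."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation (det = +1)
    t = cq - R @ cp
    return R, t

# Simulated small head motion: 2 degrees about z plus a sub-mm shift.
theta = np.deg2rad(2.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.3, -0.1, 0.05])     # mm (illustrative)

P = np.random.default_rng(1).uniform(-50, 50, (6, 3))  # marker points
Q = P @ R_true.T + t_true                # points after the motion

R_est, t_est = rigid_transform(P, Q)
```

    In a real tracker the correspondences would come from decoding the structured light pattern in successive camera frames.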

  8. Swarm Intelligence for Optimizing Hybridized Smoothing Filter in Image Edge Enhancement

    NASA Astrophysics Data System (ADS)

    Rao, B. Tirumala; Dehuri, S.; Dileep, M.; Vindhya, A.

    In this modern era, image transmission and processing play a major role. It would be impossible to retrieve information from satellite and medical images without the help of image processing techniques. Edge enhancement is an image processing step that enhances the edge contrast of an image or video in an attempt to improve its acutance. Edges are representations of the discontinuities of image intensity functions, and a good edge enhancement technique is essential for processing them. The proposed work presents a new approach to edge enhancement using hybridized smoothing filters, and introduces a promising technique for obtaining the best hybrid filter using swarm algorithms (Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO)) that search for an optimal sequence of filters from among a set of simple, representative image processing filters. This paper analyzes these swarm intelligence techniques through the hybrid filters they generate for image edge enhancement.
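
    A much-simplified sketch of the idea: score candidate sequences of elementary smoothing filters on a test image and keep the best sequence found. A plain random population search stands in here for the ABC/PSO/ACO optimizers, and the filters, test image, and fitness function are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def conv2(img, k):
    """3x3 'same' correlation with edge padding."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

ident = np.zeros((3, 3)); ident[1, 1] = 1.0
FILTERS = {
    "identity": ident,
    "mean": np.full((3, 3), 1.0 / 9.0),
    "gauss": np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0,
}

# Clean step-edge test image and a noisy observation of it.
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)

def apply_seq(img, seq):
    for name in seq:
        img = conv2(img, FILTERS[name])
    return img

def fitness(seq):
    # Lower is better: distance of the filtered image to the clean edges.
    return np.mean((apply_seq(noisy, seq) - clean) ** 2)

names = list(FILTERS)
# Crude population search (stand-in for ABC/PSO/ACO): sample random
# three-filter sequences and keep the best hybrid found.
best_seq, best_fit = None, np.inf
for _ in range(60):
    seq = [names[rng.integers(len(names))] for _ in range(3)]
    f = fitness(seq)
    if f < best_fit:
        best_seq, best_fit = seq, f

baseline = np.mean((noisy - clean) ** 2)   # fitness of doing nothing
```

    A real swarm optimizer differs in how candidates are proposed and shared, not in the structure of the evaluation loop.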

  9. Biology and therapy of fibromyalgia. Functional magnetic resonance imaging findings in fibromyalgia

    PubMed Central

    Williams, David A; Gracely, Richard H

    2006-01-01

    Techniques in neuroimaging such as functional magnetic resonance imaging (fMRI) have helped to provide insights into the role of supraspinal mechanisms in pain perception. This review focuses on studies that have applied fMRI in an attempt to gain a better understanding of the mechanisms involved in the processing of pain associated with fibromyalgia. This article provides an overview of the nociceptive system as it functions normally, reviews functional brain imaging methods, and integrates the existing literature utilizing fMRI to study central pain mechanisms in fibromyalgia. PMID:17254318

  10. Digital Image Processing Overview For Helmet Mounted Displays

    NASA Astrophysics Data System (ADS)

    Parise, Michael J.

    1989-09-01

    Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.

  11. High-performance image processing architecture

    NASA Astrophysics Data System (ADS)

    Coffield, Patrick C.

    1992-04-01

    The proposed architecture is a logical design specifically for image processing and other related computations. The design is a hybrid electro-optical concept consisting of three tightly coupled components: a spatial configuration processor (the optical analog portion), a weighting processor (digital), and an accumulation processor (digital). The systolic flow of data and image processing operations are directed by a control buffer and pipelined to each of the three processing components. The image processing operations are defined by an image algebra developed by the University of Florida. The algebra is capable of describing all common image-to-image transformations. The merit of this architectural design is how elegantly it handles the natural decomposition of algebraic functions into spatially distributed, point-wise operations. The effect of this particular decomposition allows convolution type operations to be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. Thus, a substantial increase in throughput is realized. The logical architecture may take any number of physical forms. While a hybrid electro-optical implementation is of primary interest, the benefits and design issues of an all digital implementation are also discussed. The potential utility of this architectural design lies in its ability to control all the arithmetic and logic operations of the image algebra's generalized matrix product. This is the most powerful fundamental formulation in the algebra, thus allowing a wide range of applications.
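
    The template-element decomposition described above can be sketched directly: a mask operation is computed as one shifted, point-wise multiply-accumulate over the whole image per template element, so the loop count scales with the number of template weights rather than being organized per pixel. A minimal numpy illustration (the image and template values are arbitrary):

```python
import numpy as np

def template_product(image, template):
    """Correlate image with template by accumulating shifted copies:
    one whole-image multiply-accumulate per template element."""
    H, W = image.shape
    th, tw = template.shape
    oy, ox = th // 2, tw // 2
    p = np.pad(image, ((oy, oy), (ox, ox)), mode="constant")
    out = np.zeros_like(image, dtype=float)
    for dy in range(th):          # loop runs th*tw times, independent
        for dx in range(tw):      # of the number of image pixels
            out += template[dy, dx] * p[dy:dy + H, dx:dx + W]
    return out

img = np.arange(16.0).reshape(4, 4)
box = np.full((3, 3), 1.0)        # 3x3 box template
res = template_product(img, box)  # res[1, 1] is the sum of img[0:3, 0:3]
```

    On parallel hardware each accumulate is a spatially distributed, point-wise operation, which is exactly the decomposition the architecture exploits.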

  12. Distorted images of one's own body activate the prefrontal cortex and limbic/paralimbic system in young women: a functional magnetic resonance imaging study.

    PubMed

    Kurosaki, Mitsuhaya; Shirao, Naoko; Yamashita, Hidehisa; Okamoto, Yasumasa; Yamawaki, Shigeto

    2006-02-15

    Our aim was to study the gender differences in brain activation upon viewing visual stimuli of distorted images of one's own body. We performed functional magnetic resonance imaging on 11 healthy young men and 11 healthy young women using the "body image tasks" which consisted of fat, real, and thin shapes of the subject's own body. Comparison of the brain activation upon performing the fat-image task versus real-image task showed significant activation of the bilateral prefrontal cortex and left parahippocampal area including the amygdala in the women, and significant activation of the right occipital lobe including the primary and secondary visual cortices in the men. Comparison of brain activation upon performing the thin-image task versus real-image task showed significant activation of the left prefrontal cortex, left limbic area including the cingulate gyrus and paralimbic area including the insula in women, and significant activation of the occipital lobe including the left primary and secondary visual cortices in men. These results suggest that women tend to perceive distorted images of their own bodies by complex cognitive processing of emotion, whereas men tend to perceive distorted images of their own bodies by object visual processing and spatial visual processing.

  13. Image retrieval and processing system version 2.0 development work

    NASA Technical Reports Server (NTRS)

    Slavney, Susan H.; Guinness, Edward A.

    1991-01-01

    The Image Retrieval and Processing System (IRPS) is a software package developed at Washington University and used by the NASA Regional Planetary Image Facilities (RPIF's). The IRPS combines data base management and image processing components to allow the user to examine catalogs of image data, locate the data of interest, and perform radiometric and geometric calibration of the data in preparation for analysis. Version 1.0 of IRPS was completed in Aug. 1989 and was installed at several RPIF's. Other RPIF's use remote logins via NASA Science Internet to access IRPS at Washington University. Work was begun on designing and populating a catalog of Magellan image products that will be part of IRPS Version 2.0, planned for release by the end of calendar year 1991. With this catalog, a user will be able to search by orbit and by location for Magellan Basic Image Data Records (BIDR's), Mosaicked Image Data Records (MIDR's), and Altimetry-Radiometry Composite Data Records (ARCDR's). The catalog will include the Magellan CD-ROM volume, directory, and file name for each data product. The image processing component of IRPS is based on the Planetary Image Cartography Software (PICS) developed by the U.S. Geological Survey, Flagstaff, Arizona. To augment PICS capabilities, a set of image processing programs were developed that are compatible with PICS-format images. This software includes general-purpose functions that PICS does not have, analysis and utility programs for specific data sets, and programs from other sources that were modified to work with PICS images. Some of the software will be integrated into the Version 2.0 release of IRPS. A table is presented that lists the programs with a brief functional description of each.

  14. Imaging learning and memory: classical conditioning.

    PubMed

    Schreurs, B G; Alkon, D L

    2001-12-15

    The search for the biological basis of learning and memory has, until recently, been constrained by the limits of technology to classic anatomic and electrophysiologic studies. With the advent of functional imaging, we have begun to delve into what, for many, was a "black box." We review several different types of imaging experiments, including steady state animal experiments that image the functional labeling of fixed tissues, and dynamic human studies based on functional imaging of the intact brain during learning. The data suggest that learning and memory involve a surprising conservation of mechanisms and the integrated networking of a number of structures and processes. Copyright 2001 Wiley-Liss, Inc.

  15. Spline function approximation techniques for image geometric distortion representation. [for registration of multitemporal remote sensor imagery

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.

    1975-01-01

    Least squares approximation techniques were developed for use in computer-aided correction of spatial image distortions for registration of multitemporal remote sensor imagery. Polynomials were first used to define image distortion over the entire two-dimensional image space. Spline functions were then investigated to determine whether a combination of lower-order polynomials could approximate a higher-order distortion with less computational difficulty. Algorithms for generating approximating functions were developed and applied to the description of image distortion in aircraft multispectral scanner imagery. Other applications of the techniques were suggested for earth resources data processing areas other than geometric distortion representation.
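
    The polynomial step can be sketched as an ordinary least-squares fit of a bivariate polynomial mapping from distorted to reference coordinates over ground control points. The quadratic basis and the synthetic distortion coefficients below are illustrative, not taken from the report:

```python
import numpy as np

def design(x, y):
    """Second-order bivariate polynomial basis for the distortion model."""
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

rng = np.random.default_rng(3)
x, y = rng.uniform(0, 100, (2, 50))   # control points in distorted image

# Synthetic "true" distortion: affine part plus mild quadratic terms.
u = 2.0 + 0.99 * x + 0.01 * y + 1e-4 * x**2   # reference x-coordinate
v = -1.0 + 1.01 * y + 2e-4 * x * y            # reference y-coordinate

# Fit the mapping coefficients by least squares, one set per axis.
A = design(x, y)
cu, *_ = np.linalg.lstsq(A, u, rcond=None)
cv, *_ = np.linalg.lstsq(A, v, rcond=None)

# The fitted polynomial predicts reference coordinates anywhere in the
# image; resampling the image at these locations performs registration.
u_hat = design(x, y) @ cu
v_hat = design(x, y) @ cv
```

    A spline approach replaces the single global polynomial with lower-order pieces fitted over sub-regions, traded against continuity conditions at the patch boundaries.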

  16. Interactive digital image manipulation system

    NASA Technical Reports Server (NTRS)

    Henze, J.; Dezur, R.

    1975-01-01

    The system is designed for manipulation, analysis, interpretation, and processing of a wide variety of image data. LANDSAT (ERTS) and other data in digital form can be input directly into the system. Photographic prints and transparencies are first converted to digital form with an on-line high-resolution microdensitometer. The system is implemented on a Hewlett-Packard 3000 computer with 128 K bytes of core memory and a 47.5 megabyte disk. It includes a true color display monitor, with processing memories, graphics overlays, and a movable cursor. Image data formats are flexible so that there is no restriction to a given set of remote sensors. Conversion between data types is available to provide a basis for comparison of the various data. Multispectral data is fully supported, and there is no restriction on the number of dimensions. In this way multispectral data collected at more than one point in time may simply be treated as data collected with twice (three times, etc.) the number of sensors. There are various libraries of functions available to the user: processing functions, display functions, system functions, and earth resources applications functions.

  17. Restoration of color in a remote sensing image and its quality evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Wang, Zhihe

    2003-09-01

    This paper focuses on the restoration of color remote sensing imagery (including airborne photographs), and a complete approach is recommended. It proposes that two main aspects be addressed in restoring a remote sensing image: restoration of spatial information and restoration of photometric information. In this proposal, the restoration of spatial information is performed by using the modulation transfer function (MTF) as the degradation function, where the MTF is obtained by measuring the edge curve of the original image. The restoration of photometric information is performed by an improved local maximum entropy algorithm. In addition, a practical approach to processing color remote sensing images is recommended: split the color image into three monochromatic images corresponding to the three visible light bands, process each separately, and synthesize the three results under psychological color vision constraints. Finally, three novel evaluation variables are derived to assess restoration quality in terms of both spatial and photometric restoration, and an evaluation is provided.
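
    The MTF-as-degradation-function idea can be sketched with a frequency-domain Wiener filter applied to each band separately. The Gaussian MTF and the noise-to-signal constant below are illustrative stand-ins for quantities the paper measures from an edge curve:

```python
import numpy as np

def wiener_restore(band, H, K=0.01):
    """Frequency-domain Wiener deconvolution of one image band,
    with H the degradation transfer function (here, the MTF)."""
    G = np.fft.fft2(band)
    W = np.conj(H) / (np.abs(H) ** 2 + K)   # K: noise-to-signal constant
    return np.real(np.fft.ifft2(W * G))

n = 64
fx = np.fft.fftfreq(n)[:, None]
fy = np.fft.fftfreq(n)[None, :]
H = np.exp(-80.0 * (fx**2 + fy**2))         # assumed Gaussian MTF

sharp = np.zeros((n, n)); sharp[24:40, 24:40] = 1.0   # test target
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * H))

restored = wiener_restore(blurred, H, K=1e-4)

# For a color image, restore the R, G, B bands independently, e.g.:
# rgb_restored = np.stack([wiener_restore(b, H) for b in rgb_bands], -1)
```

    The split-and-merge step for color then recombines the three restored monochromatic bands into one image.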

  18. Global high-frequency source imaging accounting for complexity in Green's functions

    NASA Astrophysics Data System (ADS)

    Lambert, V.; Zhan, Z.

    2017-12-01

    The general characterization of earthquake source processes at long periods has seen great success via seismic finite fault inversion/modeling. Complementary techniques, such as seismic back-projection, extend the capabilities of source imaging to higher frequencies and reveal finer details of the rupture process. However, such high frequency methods are limited by the implicit assumption of simple Green's functions, which restricts the use of global arrays and introduces artifacts (e.g., sweeping effects, depth/water phases) that require careful attention. This motivates the implementation of an imaging technique that considers the potential complexity of Green's functions at high frequencies. We propose an alternative inversion approach based on the modest assumption that the path effects contributing to signals within high-coherency subarrays share a similar form. Under this assumption, we develop a method that can combine multiple high-coherency subarrays to invert for a sparse set of subevents. By accounting for potential variability in the Green's functions among subarrays, our method allows for the utilization of heterogeneous global networks for robust high resolution imaging of the complex rupture process. The approach also provides a consistent framework for examining frequency-dependent radiation across a broad frequency spectrum.

  19. Green's function and image system for the Laplace operator in the prolate spheroidal geometry

    NASA Astrophysics Data System (ADS)

    Xue, Changfeng; Deng, Shaozhong

    2017-01-01

    In the present paper, electrostatic image theory is studied for Green's function for the Laplace operator in the case where the fundamental domain is either the exterior or the interior of a prolate spheroid. In either case, an image system is developed to consist of a point image inside the complement of the fundamental domain and an additional symmetric continuous surface image over a confocal prolate spheroid outside the fundamental domain, although the process of calculating such an image system is easier for the exterior than for the interior Green's function. The total charge of the surface image is zero and its centroid is at the origin of the prolate spheroid. In addition, if the source is on the focal axis outside the prolate spheroid, then the image system of the exterior Green's function consists of a point image on the focal axis and a line image on the line segment between the two focal points.

  20. Concurrent Image Processing Executive (CIPE). Volume 3: User's guide

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.; Kong, Mih-Seh

    1990-01-01

    CIPE (the Concurrent Image Processing Executive) is both an executive which organizes the parameter inputs for hypercube applications and an environment which provides temporary data workspace and simple real-time function definition facilities for image analysis. CIPE provides two types of user interface. The Command Line Interface (CLI) provides a simple command-driven environment allowing interactive function definition and evaluation of algebraic expressions. The menu interface employs a hierarchical screen-oriented menu system where the user is led through a menu tree to any specific application and then given a formatted panel screen for parameter entry. How to initialize the system through the setup function, how to read data into CIPE symbols, how to manipulate and display data through the use of executive functions, and how to run an application in either user interface mode, are described.

  1. Aging effects on functional auditory and visual processing using fMRI with variable sensory loading.

    PubMed

    Cliff, Michael; Joyce, Dan W; Lamar, Melissa; Dannhauser, Thomas; Tracy, Derek K; Shergill, Sukhwinder S

    2013-05-01

    Traditionally, studies investigating the functional implications of age-related structural brain alterations have focused on higher cognitive processes; by increasing stimulus load, these studies assess behavioral and neurophysiological performance. In order to understand age-related changes in these higher cognitive processes, it is crucial to examine changes in visual and auditory processes that are the gateways to higher cognitive functions. This study provides evidence for age-related functional decline in visual and auditory processing, and regional alterations in functional brain processing, using non-invasive neuroimaging. Using functional magnetic resonance imaging (fMRI), younger (n=11; mean age=31) and older (n=10; mean age=68) adults were imaged while observing flashing checkerboard images (passive visual stimuli) and hearing word lists (passive auditory stimuli) across varying stimuli presentation rates. Younger adults showed greater overall levels of temporal and occipital cortical activation than older adults for both auditory and visual stimuli. The relative change in activity as a function of stimulus presentation rate showed differences between young and older participants. In visual cortex, the older group showed a decrease in fMRI blood oxygen level dependent (BOLD) signal magnitude as stimulus frequency increased, whereas the younger group showed a linear increase. In auditory cortex, the younger group showed a relative increase as a function of word presentation rate, while older participants showed a relatively stable magnitude of fMRI BOLD response across all rates. When analyzing participants across all ages, only the auditory cortical activation showed a continuous, monotonically decreasing BOLD signal magnitude as a function of age. Our preliminary findings show an age-related decline in demand-related, passive early sensory processing. 
As stimulus demand increases, visual and auditory cortex do not show increases in activity in older compared to younger people. This may negatively impact the fidelity of information available to higher cognitive processing. Such evidence may inform future studies focused on cognitive decline in aging. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Automatic Methods in Image Processing and Their Relevance to Map-Making.

    DTIC Science & Technology

    1981-02-11

    As an example, let the image function f be white noise so that C_f(τ) = δ(τ), the Dirac impulse. Then (5-24)...based on image and correlator models which describe the behavior of correlation processors under conditions of low image contrast or signal-to-noise...Sensor Noise...Self Noise...Machine Noise...Fixed Point Processing

  3. Automatic and accurate segmentation of cerebral tissues in fMRI datasets by combining image processing and deep learning

    NASA Astrophysics Data System (ADS)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in medical science. One application is multimodality imaging, especially the fusion of structural imaging with functional imaging, which includes CT, MRI and newer modalities such as optical imaging for obtaining functional images. The fusion process requires precisely extracted structural information in order to register the images to it. Here we used image enhancement and morphometry methods to extract accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM), from 5 fMRI head image datasets. We then used a convolutional neural network to perform automatic segmentation of the images in a deep learning manner. This approach greatly reduces processing time compared with manual and semi-automatic segmentation, and improves speed and accuracy as more and more samples are learned. The contours of the borders of the different tissues were accurately extracted from all images and visualized in 3D. The results can be used in low-level light therapy and in optical simulation software such as MCVM. We obtained a precise three-dimensional distribution of the brain, offering doctors and researchers quantitative volume data and detailed morphological characterization for precise, personalized treatment of cerebral atrophy/expansion. We hope this technique will benefit medical visualization and personalized medicine.

  4. 3D image restoration for confocal microscopy: toward a wavelet deconvolution for the study of complex biological structures

    NASA Astrophysics Data System (ADS)

    Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats

    2000-05-01

    Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions, and used to process 3D-confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, allowing us to `cool down' the image with respect to the signal, while suppressing much of the fine-scale artifacts appearing during deconvolution due to the presence of noise, incomplete knowledge of the point spread function, or undersampling problems. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.
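
    A minimal 1-D sketch of wavelet-regularized deconvolution: Richardson-Lucy iterations with a one-level Haar soft-threshold interleaved as the denoising step. The PSF, threshold, and iteration count are illustrative, and this is a generic scheme rather than the authors' exact method:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar transform, soft-threshold the details, reconstruct."""
    a = (x[0::2] + x[1::2]) / 2.0               # approximation
    d = (x[0::2] - x[1::2]) / 2.0               # detail
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    out = np.empty_like(x)
    out[0::2], out[1::2] = a + d, a - d
    return out

def convolve(x, psf):
    return np.convolve(x, psf, mode="same")

def rl_wavelet(obs, psf, n_iter=30, thresh=0.01):
    """Richardson-Lucy deconvolution with wavelet shrinkage per iteration."""
    est = np.full_like(obs, obs.mean())
    psf_m = psf[::-1]                           # mirrored PSF (adjoint)
    for _ in range(n_iter):
        ratio = obs / np.maximum(convolve(est, psf), 1e-12)
        est = est * convolve(ratio, psf_m)      # RL multiplicative update
        est = np.maximum(haar_denoise(est, thresh), 0.0)
    return est

psf = np.array([0.25, 0.5, 0.25])               # assumed blur kernel
truth = np.zeros(64); truth[20:28] = 1.0; truth[40] = 2.0
noise = 0.01 * np.abs(np.random.default_rng(5).standard_normal(64))
obs = convolve(truth, psf) + noise              # blurred, noisy signal

rest = rl_wavelet(obs, psf)
```

    The shrinkage step plays the role of the regularizer: it damps the fine-scale noise that plain Richardson-Lucy would otherwise amplify.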

  5. PIZZARO: Forensic analysis and restoration of image and video data.

    PubMed

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigation utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures and thus fulfills formal aspects of the image/video analysis work. Comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were solved in tight cooperation of scientists from the Institute of Criminalistics, National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Cortical Plasticity and Olfactory Function in Early Blindness

    PubMed Central

    Araneda, Rodrigo; Renier, Laurent A.; Rombaux, Philippe; Cuevas, Isabel; De Volder, Anne G.

    2016-01-01

    Over the last decade, functional brain imaging has provided insight into the maturation processes and has helped elucidate the pathophysiological mechanisms involved in brain plasticity in the absence of vision. In the case of congenital blindness, drastic changes occur within the deafferented “visual” cortex, which starts receiving and processing non-visual inputs, including olfactory stimuli. This functional reorganization of the occipital cortex gives rise to compensatory perceptual and cognitive mechanisms that help blind persons achieve perceptual tasks, leading to superior olfactory abilities in these subjects. This view receives support from psychophysical testing, volumetric measurements and functional brain imaging studies in humans, which are presented here. PMID:27625596

  7. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    PubMed

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; thus image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is reduced to calculating the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of the restored images. To overcome its drawback of long computation time, either graphics-processing-unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by this method, which also provides a useful reference for the study of three-dimensional microscopic image deconvolution.
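    The property the GRBF model relies on, that the convolution of two Gaussians is again a Gaussian whose variances add, is easy to verify numerically (a self-contained check, not the authors' deconvolution code):

```python
import numpy as np

def gaussian(x, sigma):
    """Discrete Gaussian, normalized to unit sum."""
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

x = np.arange(-50, 51)
g1, g2 = gaussian(x, 3.0), gaussian(x, 4.0)

h = np.convolve(g1, g2, mode="full")           # support now -100..100
xs = np.arange(-100, 101)
var = np.sum(h * xs**2) - np.sum(h * xs)**2    # second central moment
# var ≈ 3**2 + 4**2 = 25: the variances of the two Gaussians add
```

    This closure under convolution is what lets the degradation model stay inside the Gaussian basis, so deconvolution only has to re-estimate the basis weights.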

  8. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    PubMed Central

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique that uses a modified quality measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities; the method employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: the threshold, the entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods such as the backtracking search algorithm, the differential search algorithm, the genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928
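    The incomplete-Beta intensity transform at the core of the method can be sketched with numpy alone; here the regularized incomplete Beta function is computed by direct numerical integration (valid for a, b ≥ 1), and the shape parameters are fixed by hand, whereas the paper selects them with CS-PSO against its quality criterion:

```python
import numpy as np

def betainc_reg(a, b, u, n=4001):
    """Regularized incomplete Beta function I_u(a, b) by trapezoidal
    integration (numpy only; assumes a, b >= 1)."""
    t = np.linspace(0.0, 1.0, n)
    pdf = t**(a - 1) * (1.0 - t)**(b - 1)
    cdf = np.concatenate([[0.0], np.cumsum((pdf[1:] + pdf[:-1]) / 2.0)])
    cdf /= cdf[-1]                      # normalize so I_1(a, b) = 1
    return np.interp(u, t, cdf)

def beta_enhance(img, a, b):
    """Global contrast enhancement: normalize intensities to [0, 1] and
    push them through the incomplete-Beta S-curve."""
    lo, hi = float(img.min()), float(img.max())
    u = (img - lo) / (hi - lo + 1e-12)
    return betainc_reg(a, b, u)
```

    With a = b = 2 the transform is the classic S-shaped curve that stretches mid-tones; other (a, b) pairs brighten or darken asymmetrically, which is the search space CS-PSO explores.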

  9. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique that uses a modified quality measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities; the method employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: the threshold, the entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods such as the backtracking search algorithm, the differential search algorithm, the genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.

  10. Advantages in functional imaging of the brain.

    PubMed

    Mier, Walter; Mier, Daniela

    2015-01-01

    As neuronal pathologies cause only minor morphological alterations, molecular imaging techniques are a prerequisite for the study of diseases of the brain. The development of molecular probes that specifically bind biochemical markers and the advances in instrumentation have revolutionized the possibilities to gain insight into human brain organization and, beyond this, to visualize structure-function and brain-behavior relationships. This review describes the development and current applications of functional brain imaging techniques, with a focus on applications in psychiatry. A historical overview of the development of functional imaging is followed by a portrayal of the principles and applications of positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), two key molecular imaging techniques that have revolutionized the ability to image molecular processes in the brain. We conclude that the juxtaposition of PET and fMRI in hybrid PET/MRI scanners enhances the significance of both modalities for research in neurology and psychiatry and might pave the way for a new era of personalized medicine.

  11. funcLAB/G-service-oriented architecture for standards-based analysis of functional magnetic resonance imaging in HealthGrids.

    PubMed

    Erberich, Stephan G; Bhandekar, Manasee; Chervenak, Ann; Kesselman, Carl; Nelson, Marvin D

    2007-01-01

    Functional MRI is successfully being used in clinical and research applications including preoperative planning, language mapping, and outcome monitoring. However, clinical use of fMRI is less widespread due to the complexity of its imaging, image workflow, and post-processing, and to a lack of algorithmic standards that hinders result comparability. As a consequence, widespread adoption of fMRI as a clinical tool is low, leaving community physicians uncertain about how to integrate fMRI into practice. In addition, physician training with fMRI is in its infancy and requires both clinical and technical understanding. Therefore, many institutions that perform fMRI maintain a team of basic researchers and physicians to run fMRI as a routine imaging tool. In order to provide fMRI as an advanced diagnostic tool to the benefit of a larger patient population, image acquisition and image post-processing must be streamlined, standardized, and made available to institutions that do not have these resources in-house. Here we describe a software architecture, the functional imaging laboratory (funcLAB/G), which addresses (i) standardized image processing using Statistical Parametric Mapping and (ii) its extension to secure sharing and availability for the community using standards-based Grid technology (the Globus Toolkit). funcLAB/G carries the potential to overcome the limitations of fMRI in clinical use and thus make standardized fMRI available to the broader healthcare enterprise utilizing the Internet and HealthGrid Web Services technology.

  12. Assessment of body fat based on potential function clustering segmentation of computed tomography images

    NASA Astrophysics Data System (ADS)

    Zhang, Lixin; Lin, Min; Wan, Baikun; Zhou, Yu; Wang, Yizhong

    2005-01-01

    In this paper, a new method for assessing body fat and its distribution is proposed based on CT image processing. As CT is more sensitive to slight differences in attenuation than standard radiography, it depicts the soft tissues with better clarity, and body fat has a distinct grayness range compared with its neighboring tissues in a CT image. An effective multi-threshold image segmentation method based on potential function clustering is used to deal with the multiple peaks in the grayness histogram of a CT image. The CT images of the abdomens of 14 volunteers with different degrees of fatness were processed with the proposed method. Not only is the total fat area obtained, but subcutaneous fat is also differentiated from intra-abdominal fat. The results show the adaptability and stability of the proposed method, which should be a useful tool for diagnosing obesity.
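    As a toy illustration of fat segmentation by grayness range: a commonly cited fat window in CT is roughly -190 to -30 Hounsfield units, and a fixed-range mask already yields a total fat area. The paper instead derives its thresholds from potential function clustering of the histogram, so the fixed range below is an assumption for demonstration only:

```python
import numpy as np

def fat_area(ct_slice, pixel_area_mm2, lo=-190.0, hi=-30.0):
    """Total fat area in a CT slice whose values are in Hounsfield units.
    lo/hi are the commonly cited fat window, NOT the paper's clustered
    thresholds. Returns (area in mm^2, boolean fat mask)."""
    mask = (ct_slice >= lo) & (ct_slice <= hi)
    return mask.sum() * pixel_area_mm2, mask
```

    Separating subcutaneous from intra-abdominal fat additionally requires delineating the abdominal wall, which this sketch does not attempt.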

  13. Maritime Domain Awareness: C4I for the 1000 Ship Navy

    DTIC Science & Technology

    2009-12-04

    unit action, provide unit sensed contacts, coordinate unit operations, process unit information, release image, and release contact report, Figure 33... Intelligence Tasking Request Intelligence Summary Release Unit Person Incident Release Unit Vessel Incident Process Intelligence Tasking Release Image... xi LIST OF FIGURES Figure 1. Functional Problem Sequence Process Flow....4 Figure 2. United

  14. Image enhancement software for underwater recovery operations: User's manual

    NASA Astrophysics Data System (ADS)

    Partridge, William J.; Therrien, Charles W.

    1989-06-01

    This report describes software for performing image enhancement on live or recorded video images. The software was developed for operational use during underwater recovery operations at the Naval Undersea Warfare Engineering Station. The image processing is performed on an IBM-PC/AT compatible computer equipped with hardware to digitize and display video images. The software provides the capability to perform contrast enhancement and similar functions in real time through hardware lookup tables, to automatically perform histogram equalization, and to capture one or more frames and either average them or apply one of several processing algorithms to a captured frame. The report is in the form of a user manual for the software and includes guided tutorial and reference sections. A Digital Image Processing Primer in the appendix explains the principal concepts used in the image processing.
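    The automatic histogram equalization such a tool offers can be sketched in a few lines of numpy: build the cumulative histogram of an 8-bit image and apply it as a lookup table, the same LUT mechanism the hardware uses for real-time contrast enhancement. This is a generic textbook implementation, not the report's code:

```python
import numpy as np

def equalize(img):
    """Histogram equalization of an 8-bit image via a lookup table."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # first occupied gray level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-12) * 255)
    return lut.astype(np.uint8)[img]     # apply the LUT per pixel
```

    Because the whole operation reduces to one table lookup per pixel, it maps directly onto the hardware lookup tables mentioned above.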

  15. Real-time model-based vision system for object acquisition and tracking

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd

    1987-01-01

    A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.

  16. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features, transforming the image for better analysis and evaluation. An important benefit of segmentation is the identification of the region of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to its upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold. Such an approach leads to weak reliability and shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function uses the gray value and the variance of the image's pixels: pixel values above the threshold are converted into intensity values between 0 and 1, while the remaining values are set to zero. The proposed enhanced Fast Scanning algorithm is applied to images of public and private transportation in Iraq. Evaluation is made by comparing the images produced by the proposed algorithm and by the standard Fast Scanning algorithm. The results showed that the proposed algorithm is faster than the standard Fast Scanning algorithm.
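    One plausible reading of the proposed threshold function, sketched in numpy: derive the threshold from the mean and standard deviation of the gray values, map pixels above it to intensities in (0, 1], and set the rest to zero. The exact formula in the paper is ambiguous, so the specific threshold rule below is an assumption for illustration:

```python
import numpy as np

def adaptive_map(img):
    """Adaptive thresholding sketch: threshold = mean + std of the gray
    values (an assumed rule); pixels above it are rescaled to (0, 1],
    all other pixels are set to zero."""
    t = img.mean() + img.std()
    out = np.zeros_like(img, dtype=float)
    above = img > t
    if above.any():
        out[above] = (img[above] - t) / (img.max() - t + 1e-12)
    return out
```

    In the enhanced Fast Scanning algorithm this map would feed the neighbor-merging step, so that the merge decision adapts to the image's own statistics rather than a fixed threshold.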

  17. The Hico Image Processing System: A Web-Accessible Hyperspectral Remote Sensing Toolbox

    NASA Astrophysics Data System (ADS)

    Harris, A. T., III; Goodman, J.; Justice, B.

    2014-12-01

    As the quantity of Earth-observation data increases, the use-case for hosting analytical tools in geospatial data centers becomes increasingly attractive. To address this need, HySpeed Computing and Exelis VIS have developed the HICO Image Processing System, a prototype cloud computing system that provides online, on-demand, scalable remote sensing image processing capabilities. The system provides a mechanism for delivering sophisticated image processing analytics and data visualization tools into the hands of a global user community, who will only need a browser and internet connection to perform analysis. Functionality of the HICO Image Processing System is demonstrated using imagery from the Hyperspectral Imager for the Coastal Ocean (HICO), an imaging spectrometer located on the International Space Station (ISS) that is optimized for acquisition of aquatic targets. Example applications include a collection of coastal remote sensing algorithms that are directed at deriving critical information on water and habitat characteristics of our vulnerable coastal environment. The project leverages the ENVI Services Engine as the framework for all image processing tasks, and can readily accommodate the rapid integration of new algorithms, datasets and processing tools.

  18. Information theoretic analysis of linear shift-invariant edge-detection operators

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2012-06-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influence of the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end, information-theory-based system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling and additive noise, that define the image gathering system. The edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches its maximum possible value. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.
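    A small building block of such information-theoretic assessments is the Shannon entropy of the gray-level histogram, the information per pixel a memoryless source model would assign. The paper's full analysis works with channel information rates across the gathering and processing chain, which this numpy sketch does not attempt:

```python
import numpy as np

def entropy_bits(img, bins=256):
    """Shannon entropy (bits per pixel) of an image's gray-level
    histogram, assuming values in [0, 256)."""
    counts, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = counts[counts > 0] / counts.sum()    # empirical probabilities
    return float(-(p * np.log2(p)).sum())
```

    A constant image carries 0 bits per pixel, while a uniform 8-bit histogram carries the maximum of 8 bits per pixel.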

  19. The Holistic Processing Account of Visual Expertise in Medical Image Perception: A Review

    PubMed Central

    Sheridan, Heather; Reingold, Eyal M.

    2017-01-01

    In the field of medical image perception, the holistic processing perspective contends that experts can rapidly extract global information about the image, which can be used to guide their subsequent search of the image (Swensson, 1980; Nodine and Kundel, 1987; Kundel et al., 2007). In this review, we discuss the empirical evidence supporting three different predictions that can be derived from the holistic processing perspective: Expertise in medical image perception is domain-specific, experts use parafoveal and/or peripheral vision to process large regions of the image in parallel, and experts benefit from a rapid initial glimpse of an image. In addition, we discuss a pivotal recent study (Litchfield and Donovan, 2016) that seems to contradict the assumption that experts benefit from a rapid initial glimpse of the image. To reconcile this finding with the existing literature, we suggest that global processing may serve multiple functions that extend beyond the initial glimpse of the image. Finally, we discuss future research directions, and we highlight the connections between the holistic processing account and similar theoretical perspectives and findings from other domains of visual expertise. PMID:29033865

  20. The Holistic Processing Account of Visual Expertise in Medical Image Perception: A Review.

    PubMed

    Sheridan, Heather; Reingold, Eyal M

    2017-01-01

    In the field of medical image perception, the holistic processing perspective contends that experts can rapidly extract global information about the image, which can be used to guide their subsequent search of the image (Swensson, 1980; Nodine and Kundel, 1987; Kundel et al., 2007). In this review, we discuss the empirical evidence supporting three different predictions that can be derived from the holistic processing perspective: Expertise in medical image perception is domain-specific, experts use parafoveal and/or peripheral vision to process large regions of the image in parallel, and experts benefit from a rapid initial glimpse of an image. In addition, we discuss a pivotal recent study (Litchfield and Donovan, 2016) that seems to contradict the assumption that experts benefit from a rapid initial glimpse of the image. To reconcile this finding with the existing literature, we suggest that global processing may serve multiple functions that extend beyond the initial glimpse of the image. Finally, we discuss future research directions, and we highlight the connections between the holistic processing account and similar theoretical perspectives and findings from other domains of visual expertise.

  1. [Development of a Text-Data Based Learning Tool That Integrates Image Processing and Displaying].

    PubMed

    Shinohara, Hiroyuki; Hashimoto, Takeyuki

    2015-01-01

    We developed a text-data based learning tool that integrates image processing and displaying in Excel. The knowledge required for programming this tool is limited to using absolute, relative, and composite cell references and learning approximately 20 of the mathematical functions available in Excel. The new tool is capable of resolution translation, geometric transformation, spatial-filter processing, the Radon transform, the Fourier transform, convolutions, correlations, deconvolutions, the wavelet transform, mutual information, and simulation of proton density-, T1-, and T2-weighted MR images. The processed images of 128 × 128 or 256 × 256 pixels are observed directly within Excel worksheets without using any particular image display software. The results of image processing using this tool were compared with those obtained using the C language, and the new tool was judged to have sufficient accuracy to be practically useful. The images displayed on Excel worksheets were also compared with images shown by binary-data display software; in visual impression, the image quality of the two was nearly equal. Since image processing is performed on text data, every step of the process is visible and can be checked against the mathematical equations within the program. We conclude that the newly developed tool is adequate as a computer-assisted learning tool for use in medical image processing.

  2. FluoroSim: A Visual Problem-Solving Environment for Fluorescence Microscopy

    PubMed Central

    Quammen, Cory W.; Richardson, Alvin C.; Haase, Julian; Harrison, Benjamin D.; Taylor, Russell M.; Bloom, Kerry S.

    2010-01-01

    Fluorescence microscopy provides a powerful method for localization of structures in biological specimens. However, aspects of the image formation process such as noise and blur from the microscope's point-spread function combine to produce an unintuitive image transformation on the true structure of the fluorescing molecules in the specimen, hindering qualitative and quantitative analysis of even simple structures in unprocessed images. We introduce FluoroSim, an interactive fluorescence microscope simulator that can be used to train scientists who use fluorescence microscopy to understand the artifacts that arise from the image formation process, to determine the appropriateness of fluorescence microscopy as an imaging modality in an experiment, and to test and refine hypotheses of model specimens by comparing the output of the simulator to experimental data. FluoroSim renders synthetic fluorescence images from arbitrary geometric models represented as triangle meshes. We describe three rendering algorithms on graphics processing units for computing the convolution of the specimen model with a microscope's point-spread function and report on their performance. We also discuss several cases where the microscope simulator has been used to solve real problems in biology. PMID:20431698

  3. Electrophoresis gel image processing and analysis using the KODAK 1D software.

    PubMed

    Pizzonia, J

    2001-06-01

    The present article reports on the performance of the KODAK 1D Image Analysis Software for the acquisition of information from electrophoresis experiments and highlights the utility of several mathematical functions for subsequent image processing, analysis, and presentation. Digital images of Coomassie-stained polyacrylamide protein gels containing molecular weight standards and ethidium bromide stained agarose gels containing DNA mass standards are acquired using the KODAK Electrophoresis Documentation and Analysis System 290 (EDAS 290). The KODAK 1D software is used to optimize lane and band identification using features such as isomolecular weight lines. Mathematical functions for mass standard representation are presented, and two methods for estimation of unknown band mass are compared. Given the progressive transition of electrophoresis data acquisition and daily reporting in peer-reviewed journals to digital formats ranging from 8-bit systems such as EDAS 290 to more expensive 16-bit systems, the utility of algorithms such as Gaussian modeling, which can correct geometric aberrations such as clipping due to signal saturation common at lower bit depth levels, is discussed. Finally, image-processing tools that can facilitate image preparation for presentation are demonstrated.
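    The idea behind Gaussian modeling of a clipped band can be sketched as follows: the logarithm of a Gaussian profile is a parabola, so fitting a quadratic to the unsaturated samples alone recovers the amplitude, center, and width of the band even when its peak is clipped. This illustrates the principle only; it is not KODAK 1D's actual algorithm:

```python
import numpy as np

def fit_clipped_gaussian(x, y, sat_level):
    """Recover a Gaussian band profile A*exp(-(x-mu)^2/(2*sigma^2))
    whose peak was clipped at sat_level, by fitting a quadratic to
    log(y) over the unsaturated samples only."""
    keep = y < sat_level                        # drop clipped samples
    c2, c1, c0 = np.polyfit(x[keep], np.log(y[keep]), 2)
    sigma = np.sqrt(-1.0 / (2.0 * c2))          # c2 = -1/(2*sigma^2)
    mu = c1 * sigma**2                          # c1 = mu/sigma^2
    amp = np.exp(c0 + mu**2 / (2.0 * sigma**2))
    return amp, mu, sigma
```

    Because the clipped samples are simply excluded, the reconstructed peak height can exceed the saturation level, which is exactly the geometric-aberration correction described above.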

  4. Effect of the image resolution on the statistical descriptors of heterogeneous media.

    PubMed

    Ledesma-Alonso, René; Barbosa, Romeli; Ortegón, Jaime

    2018-02-01

    The characterization and reconstruction of heterogeneous materials, such as porous media and electrode materials, involve the application of image processing methods to data acquired by scanning electron microscopy or other microscopy techniques. Among them, binarization and decimation are critical in order to compute the correlation functions that characterize the microstructure of the above-mentioned materials. In this study, we present a theoretical analysis of the effects of image-size reduction due to progressive and sequential decimation of the original image. Three different decimation procedures (random, bilinear, and bicubic) were implemented and their consequences on the discrete correlation functions (two-point, line-path, and pore-size distribution) and the coarseness (derived from the local volume fraction) are reported and analyzed. The chosen statistical descriptors (correlation functions and coarseness) are typically employed to characterize and reconstruct heterogeneous materials. A normalization for each of the correlation functions has been performed. When the loss of statistical information has not been significant for a decimated image, its normalized correlation function is well predicted by the trend of the original image (the reference function). In contrast, when the decimated image no longer holds statistical evidence of the original one, the normalized correlation function deviates from the reference function. Moreover, the equally weighted sum of the average of the squared differences between the discrete correlation functions of the decimated images and the reference functions leads to the definition of an overall error. During the first stages of the gradual decimation, the error remains relatively small and independent of the decimation procedure. Above a threshold defined by the correlation length of the reference function, the error becomes a function of the number of decimation steps. At this stage, some statistical information is lost and the error becomes dependent on the decimation procedure. These results may help restrict the amount of information that one can afford to lose during a decimation process, in order to reduce the computational and memory cost when one aims to diminish the time consumed by a characterization or reconstruction technique while maintaining the statistical quality of the digitized sample.
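    The two-point correlation function discussed above can be computed for a periodic binary image with an FFT autocorrelation, and decimation by pixel skipping is one crude image-size-reduction scheme (the paper's random, bilinear, and bicubic procedures are more elaborate); a minimal numpy sketch:

```python
import numpy as np

def two_point_correlation(phase):
    """S2(r) along one axis of a periodic binary medium: the probability
    that two points a distance r apart both lie in the phase. Computed
    via FFT autocorrelation (periodic boundaries assumed)."""
    f = np.fft.fft2(phase.astype(float))
    s2 = np.fft.ifft2(f * np.conj(f)).real / phase.size
    return s2[0, :]                       # correlations along the x-axis

def decimate(img, step):
    """Naive decimation: keep every step-th pixel in each direction."""
    return img[::step, ::step]
```

    S2(0) equals the volume fraction, and comparing S2 of `decimate(phase, step)` against S2 of the original is exactly the kind of reference-function comparison the study formalizes.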

  5. Effect of the image resolution on the statistical descriptors of heterogeneous media

    NASA Astrophysics Data System (ADS)

    Ledesma-Alonso, René; Barbosa, Romeli; Ortegón, Jaime

    2018-02-01

    The characterization and reconstruction of heterogeneous materials, such as porous media and electrode materials, involve the application of image processing methods to data acquired by scanning electron microscopy or other microscopy techniques. Among them, binarization and decimation are critical in order to compute the correlation functions that characterize the microstructure of the above-mentioned materials. In this study, we present a theoretical analysis of the effects of image-size reduction due to progressive and sequential decimation of the original image. Three different decimation procedures (random, bilinear, and bicubic) were implemented and their consequences on the discrete correlation functions (two-point, line-path, and pore-size distribution) and the coarseness (derived from the local volume fraction) are reported and analyzed. The chosen statistical descriptors (correlation functions and coarseness) are typically employed to characterize and reconstruct heterogeneous materials. A normalization for each of the correlation functions has been performed. When the loss of statistical information has not been significant for a decimated image, its normalized correlation function is well predicted by the trend of the original image (the reference function). In contrast, when the decimated image no longer holds statistical evidence of the original one, the normalized correlation function deviates from the reference function. Moreover, the equally weighted sum of the average of the squared differences between the discrete correlation functions of the decimated images and the reference functions leads to the definition of an overall error. During the first stages of the gradual decimation, the error remains relatively small and independent of the decimation procedure. Above a threshold defined by the correlation length of the reference function, the error becomes a function of the number of decimation steps. At this stage, some statistical information is lost and the error becomes dependent on the decimation procedure. These results may help restrict the amount of information that one can afford to lose during a decimation process, in order to reduce the computational and memory cost when one aims to diminish the time consumed by a characterization or reconstruction technique while maintaining the statistical quality of the digitized sample.

  6. Factor analysis for delineation of organ structures, creation of in- and output functions, and standardization of multicenter kinetic modeling

    NASA Astrophysics Data System (ADS)

    Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.

    1999-05-01

    PET imaging can quantify metabolic processes in vivo; this requires the measurement of an input function, which is invasive and labor intensive. A non-invasive, semi-automated, image-based method of input function generation would be efficient and patient friendly, and would allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as a processing tool for the definition of temporally changing structures in the field of view. FA has been proposed earlier, but its perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients between kinetic parameters obtained with factor and plasma input functions were high, and linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data, eliminating the need for blood sampling; output (tissue) functions can be generated simultaneously. The method is simple, requires no sophisticated operator interaction and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.
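    Factor analysis of dynamic data amounts to a low-rank non-negative factorization of the voxels-by-frames matrix, with the factor curves playing the role of blood and tissue TACs. As a bare-bones stand-in for the paper's FA procedure (not its actual algorithm), a tiny multiplicative-update NMF in numpy:

```python
import numpy as np

def nmf_tacs(M, k, iters=1000, seed=0):
    """Tiny multiplicative-update NMF: M (voxels x frames) ~= W @ H.
    Rows of H are candidate time-activity curves (e.g. blood and tissue
    factors); columns of W are the corresponding spatial factor images."""
    rng = np.random.default_rng(seed)
    W = rng.random((M.shape[0], k)) + 0.1
    H = rng.random((k, M.shape[1])) + 0.1
    for _ in range(iters):
        H *= (W.T @ M) / (W.T @ W @ H + 1e-12)   # update curves
        W *= (M @ H.T) / (W @ H @ H.T + 1e-12)   # update spatial weights
    return W, H
```

    Non-negativity is the natural constraint here, since both activity concentrations and spatial weights are physically non-negative; the factorization is only unique up to scaling and permutation, which is why FA methods add further constraints.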

  7. Combined optimization of image-gathering and image-processing systems for scene feature detection

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Arduini, Robert F.; Samms, Richard W.

    1987-01-01

    The relationship between the image gathering and image processing systems for minimum mean squared error estimation of scene characteristics is investigated. A stochastic optimization problem is formulated where the objective is to determine a spatial characteristic of the scene rather than a feature of the already blurred, sampled and noisy image data. An analytical solution for the optimal characteristic image processor is developed. The Wiener filter for the sampled image case is obtained as a special case, where the desired characteristic is scene restoration. Optimal edge detection is investigated using the Laplacian-of-Gaussian operator ∇²G as the desired characteristic, where G is a two-dimensional Gaussian distribution function. It is shown that the optimal edge detector compensates for the blurring introduced by the image gathering optics, and notably, that it is not circularly symmetric. The lack of circular symmetry is largely due to the geometric effects of the sampling lattice used in image acquisition. The optimal image gathering optical transfer function is also investigated and the results of a sensitivity analysis are shown.
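    For reference, the circularly symmetric textbook form of the ∇²G characteristic can be constructed directly in numpy; note that the paper's point is precisely that the jointly optimized detector is not this symmetric kernel, since the sampling lattice breaks circular symmetry:

```python
import numpy as np

def log_kernel(sigma, size):
    """Discrete Laplacian-of-Gaussian (∇²G) kernel of odd size.
    Zero-sum so that flat image regions produce zero response."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2.0 * sigma**2) / sigma**4 * np.exp(-r2 / (2.0 * sigma**2))
    return k - k.mean()
```

    Edges then appear as zero crossings of the filtered image; the optimal detector of the paper would additionally reshape this kernel to undo the gathering optics' blur.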

  8. Quantitative Image Informatics for Cancer Research (QIICR) | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    Imaging has enormous untapped potential to improve cancer research through software to extract and process morphometric and functional biomarkers. In the era of non-cytotoxic treatment agents, multi-modality image-guided ablative therapies and rapidly evolving computational resources, quantitative imaging software can be transformative in enabling minimally invasive, objective and reproducible evaluation of cancer treatment response. Post-processing algorithms are integral to high-throughput analysis and fine-grained differentiation of multiple molecular targets.

  9. Sentence processing in the cerebral cortex.

    PubMed

    Sakai, K L; Hashimoto, R; Homae, F

    2001-01-01

    Human language is a unique faculty of the mind. It has been the ultimate mystery throughout the history of neuroscience. Despite many aphasia and functional imaging studies, the exact correlation between cortical language areas and subcomponents of the linguistic system has not been established. One notable drawback is that most functional imaging studies have tested language tasks at the word level, such as lexical decision and word generation tasks, thereby neglecting the syntactic aspects of the language faculty. As proposed by Chomsky, the critical knowledge of language involves universal grammar (UG), which governs the syntactic structure of sentences. In this article, we will review recent advances made by functional neuroimaging studies of language, focusing especially on sentence processing in the cerebral cortex. We also present the recent results of our functional magnetic resonance imaging (fMRI) study intended to identify cortical areas specifically involved in syntactic processing. A study of sentence processing that employs a newly developed technique, optical topography (OT), is also presented. Based on these findings, we propose a modular specialization of Broca's area, Wernicke's area, and the angular gyrus/supramarginal gyrus. The current direction of research in neuroscience is beginning to establish the existence of distinct modules responsible for our knowledge of language.

  10. Sodium 3D COncentration MApping (COMA 3D) using 23Na and proton MRI

    NASA Astrophysics Data System (ADS)

    Truong, Milton L.; Harrington, Michael G.; Schepkin, Victor D.; Chekmenev, Eduard Y.

    2014-10-01

    Functional changes of sodium 3D MRI signals were converted into millimolar concentration changes using an open-source, fully automated MATLAB toolbox. These concentration changes are visualized via 3D sodium concentration maps, which are overlaid on conventional 3D proton images to provide high-resolution co-registration for easy correlation of functional changes to anatomical regions. Nearly 5000 concentration maps per hour were generated on a personal computer (ca. 2012) using 21.1 T 3D sodium MRI brain images of live rats with a spatial resolution of 0.8 × 0.8 × 0.8 mm³ and imaging matrices of 60 × 60 × 60. The produced concentration maps allowed for non-invasive quantitative measurement of in vivo sodium concentration in the normal rat brain as a functional response to migraine-like conditions. The presented work can also be applied to sodium-associated changes in migraine, cancer, and other metabolic abnormalities that can be sensed by molecular imaging. The MATLAB toolbox allows for automated image analysis of the 3D images acquired on the Bruker platform and can be extended to other imaging platforms. The resulting images are presented as a series of 2D slices in all three dimensions in native MATLAB and PDF formats. The following is provided: (a) MATLAB source code for image processing, (b) the detailed processing procedures, (c) a description of the code and all sub-routines, (d) example data sets of initial and processed data. The toolbox can be downloaded at: http://www.vuiis.vanderbilt.edu/truongm/COMA3D/.
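
    The core signal-to-concentration conversion can be sketched as a linear calibration against reference phantoms of known sodium concentration imaged alongside the subject. The function below and the phantom values in the example are hypothetical illustrations, not taken from the toolbox.

```python
import numpy as np

def signal_to_concentration(signal, ref_signals, ref_concs_mm):
    """Map MRI signal intensities to millimolar concentrations.

    Fits a straight line through the (signal, concentration) pairs of
    calibration phantoms and applies it voxel-wise to the image data.
    """
    slope, intercept = np.polyfit(ref_signals, ref_concs_mm, 1)
    return slope * np.asarray(signal, dtype=float) + intercept
```

    With two phantoms (say 30 mM and 90 mM) the fit is exact and intermediate signals map linearly between them.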

  11. A Neuroimaging Web Services Interface as a Cyber Physical System for Medical Imaging and Data Management in Brain Research: Design Study.

    PubMed

    Lizarraga, Gabriel; Li, Chunfei; Cabrerizo, Mercedes; Barker, Warren; Loewenstein, David A; Duara, Ranjan; Adjouadi, Malek

    2018-04-26

    Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must first be converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and is securely accessible through a Web interface and allows (1) visualization of results and (2) downloading of tabulated data. All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of structural magnetic resonance images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive.
Notable leading researchers in the field of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with a keen interest in multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results could be scrutinized visually or through the tabulated forms, informing researchers on subtle changes that characterize the different stages of the disease. ©Gabriel Lizarraga, Chunfei Li, Mercedes Cabrerizo, Warren Barker, David A Loewenstein, Ranjan Duara, Malek Adjouadi. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 26.04.2018.

  12. Validation of a rapid, semiautomatic image analysis tool for measurement of gastric accommodation and emptying by magnetic resonance imaging

    PubMed Central

    Dixit, Sudeepa; Fox, Mark; Pal, Anupam

    2014-01-01

    Magnetic resonance imaging (MRI) has advantages for the assessment of gastrointestinal structures and functions; however, processing MRI data is time consuming and this has limited uptake to a few specialist centers. This study introduces a semiautomatic image processing system for rapid analysis of gastrointestinal MRI. For assessment of simpler regions of interest (ROI) such as the stomach, the system generates virtual images along arbitrary planes that intersect the ROI edges in the original images. This generates seed points that are joined automatically to form contours on each adjacent two-dimensional image and reconstructed in three dimensions (3D). An alternative thresholding approach is available for rapid assessment of complex structures like the small intestine. For assessment of dynamic gastrointestinal function, such as gastric accommodation and emptying, the initial 3D reconstruction is used as a reference to process adjacent image stacks automatically. This generates four-dimensional (4D) reconstructions of dynamic volume change over time. Compared with manual processing, this semiautomatic system reduced the user input required to analyze an MRI gastric emptying study (estimated 100 vs. 10,000 mouse clicks). This analysis was not subject to the variation in volume measurements seen between three human observers. In conclusion, the image processing platform presented processed large volumes of MRI data, such as that produced by gastric accommodation and emptying studies, with minimal user input. 3D and 4D reconstructions of the stomach and, potentially, other gastrointestinal organs are produced faster and more accurately than manual methods. This system will facilitate the application of MRI in gastrointestinal research and clinical practice. PMID:25540229
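
    The 4D output of such a pipeline reduces naturally to a volume-time curve. A minimal sketch of deriving gastric volumes and a half-emptying time from per-frame segmentation masks is shown below; the mask layout, voxel size, and helper names are assumptions for illustration, not the published system.

```python
import numpy as np

def volume_curve_ml(masks, voxel_mm3):
    """Total segmented volume (mL) per time point.

    masks: boolean array (n_times, nz, ny, nx); voxel_mm3: voxel volume in mm^3.
    """
    return masks.reshape(masks.shape[0], -1).sum(axis=1) * voxel_mm3 / 1000.0

def half_emptying_time(times, volumes):
    """First time the volume falls to half its initial value (linear interpolation)."""
    target = volumes[0] / 2.0
    for i in range(1, len(volumes)):
        if volumes[i] <= target:
            v0, v1 = volumes[i - 1], volumes[i]
            t0, t1 = times[i - 1], times[i]
            return t0 + (v0 - target) * (t1 - t0) / (v0 - v1)
    return None  # volume never fell to half within the study
```
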

  13. Imaging synthetic aperture radar

    DOEpatents

    Burns, Bryan L.; Cordaro, J. Thomas

    1997-01-01

    A linear-FM SAR imaging radar method and apparatus to produce a real-time image by first arranging the returned signals into a plurality of subaperture arrays, the columns of each subaperture array having samples of dechirped baseband pulses, and further including a processing of each subaperture array to obtain coarse resolution in azimuth, then fine resolution in range, and lastly, to combine the processed subapertures to obtain the final fine resolution in azimuth. Greater efficiency is achieved because both the transmitted signal and a local oscillator signal mixed with the returned signal can be varied on a pulse-to-pulse basis as a function of radar motion. Moreover, a novel circuit can adjust the sampling location and the A/D sample rate of the combined dechirped baseband signal, which greatly reduces processing time and hardware. The processing steps include implementing a window function, stabilizing either a central reference point and/or all other points of a subaperture with respect to Doppler frequency and/or range as a function of radar motion, and sorting and compressing the signals using standard Fourier transforms. The stabilization of each processing part is accomplished with vector multiplication using waveforms generated as a function of radar motion, wherein these waveforms may be synthesized in integrated circuits. Stabilization of range migration as a function of Doppler frequency by simple vector multiplication is a particularly useful feature of the invention; as is stabilization of azimuth migration by correcting for spatially varying phase errors prior to the application of an autofocus process.

  14. Syntactic Processing in Bilinguals: An fNIRS Study

    ERIC Educational Resources Information Center

    Scherer, Lilian Cristine; Fonseca, Rochele Paz; Amiri, Mahnoush; Adrover-Roig, Daniel; Marcotte, Karine; Giroux, Francine; Senhadji, Noureddine; Benali, Habib; Lesage, Frederic; Ansaldo, Ana Ines

    2012-01-01

    The study of the neural basis of syntactic processing has greatly benefited from neuroimaging techniques. Research on syntactic processing in bilinguals has used a variety of techniques, including mainly functional magnetic resonance imaging (fMRI) and event-related potentials (ERP). This paper reports on a functional near-infrared spectroscopy…

  15. Dynamic deformation image de-blurring and image processing for digital imaging correlation measurement

    NASA Astrophysics Data System (ADS)

    Guo, X.; Li, Y.; Suo, T.; Liu, H.; Zhang, C.

    2017-11-01

    This paper proposes a method for de-blurring of images captured in the dynamic deformation of materials. De-blurring is achieved based on the dynamic-based approach, which is used to estimate the Point Spread Function (PSF) during the camera exposure window. The deconvolution process involving iterative matrix calculations of pixels, is then performed on the GPU to decrease the time cost. Compared to the Gauss method and the Lucy-Richardson method, it has the best result of the image restoration. The proposed method has been evaluated by using the Hopkinson bar loading system. In comparison to the blurry image, the proposed method has successfully restored the image. It is also demonstrated from image processing applications that the de-blurring method can improve the accuracy and the stability of the digital imaging correlation measurement.

  16. The use of low cost compact cameras with focus stacking functionality in entomological digitization projects

    PubMed Central

    Mertens, Jan E.J.; Roie, Martijn Van; Merckx, Jonas; Dekoninck, Wouter

    2017-01-01

    Abstract Digitization of specimen collections has become a key priority of many natural history museums. The camera systems built for this purpose are expensive, providing a barrier in institutes with limited funding, and therefore hampering progress. An assessment is made of whether a low cost compact camera with image stacking functionality can help expedite the digitization process in large museums or provide smaller institutes and amateur entomologists with the means to digitize their collections. Images from a professional setup were compared with those from the Olympus Stylus TG-4 Tough, a low-cost compact camera with internal focus stacking functions. Parameters considered include image quality, digitization speed, price, and ease-of-use. The compact camera’s image quality, although inferior to the professional setup, is exceptional considering its fourfold lower price point. Producing the image slices in the compact camera is a matter of seconds and when optimal image quality is less of a priority, the internal stacking function omits the need for dedicated stacking software altogether, further decreasing the cost and speeding up the process. In general, it is found that, aware of its limitations, this compact camera is capable of digitizing entomological collections with sufficient quality. As technology advances, more institutes and amateur entomologists will be able to easily and affordably catalogue their specimens. PMID:29134038
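
    The stacking such cameras perform internally can be approximated in software: for each pixel, keep the value from the slice with the highest local sharpness. The sketch below uses a Laplacian-energy sharpness measure; it is a generic illustration, not the camera's algorithm.

```python
import numpy as np
from scipy import ndimage

def focus_stack(slices, log_sigma=1.0, window=5):
    """Fuse a focal stack into one all-in-focus grayscale image.

    slices: sequence of same-shape 2D arrays taken at different focus
    depths. Sharpness is the locally averaged squared Laplacian-of-
    Gaussian response; each output pixel is taken from the sharpest slice.
    """
    stack = np.asarray(slices, dtype=float)  # shape (n, H, W)
    sharp = np.stack([
        ndimage.uniform_filter(ndimage.gaussian_laplace(s, log_sigma) ** 2, window)
        for s in stack
    ])
    best = np.argmax(sharp, axis=0)          # per-pixel index of sharpest slice
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

    Given two half-blurred views of the same texture, the fused result is substantially closer to the sharp original than either input.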

  17. An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    NASA Astrophysics Data System (ADS)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  18. In-flight edge response measurements for high-spatial-resolution remote sensing systems

    NASA Astrophysics Data System (ADS)

    Blonski, Slawomir; Pagnutti, Mary A.; Ryan, Robert; Zanoni, Vickie

    2002-09-01

    In-flight measurements of spatial resolution were conducted as part of the NASA Scientific Data Purchase Verification and Validation process. Characterization included remote sensing image products with ground sample distance of 1 meter or less, such as those acquired with the panchromatic imager onboard the IKONOS satellite and the airborne ADAR System 5500 multispectral instrument. Final image products were used to evaluate the effects of both the image acquisition system and image post-processing. Spatial resolution was characterized by full width at half maximum of an edge-response-derived line spread function. The edge responses were analyzed using the tilted-edge technique that overcomes the spatial sampling limitations of the digital imaging systems. As an enhancement to existing algorithms, the slope of the edge response and the orientation of the edge target were determined by a single computational process. Adjacent black and white square panels, either painted on a flat surface or deployed as tarps, formed the ground-based edge targets used in the tests. Orientation of the deployable tarps was optimized beforehand, based on simulations of the imaging system. The effects of such factors as acquisition geometry, temporal variability, Modulation Transfer Function compensation, and ground sample distance on spatial resolution were investigated.

  19. Hadamard multimode optical imaging transceiver

    DOEpatents

    Cooke, Bradly J; Guenther, David C; Tiee, Joe J; Kellum, Mervyn J; Olivas, Nicholas L; Weisse-Bernstein, Nina R; Judd, Stephen L; Braun, Thomas R

    2012-10-30

    Disclosed is a method and system for simultaneously acquiring and producing results for multiple image modes using a common sensor without optical filtering, scanning, or other moving parts. The system and method utilize the Walsh-Hadamard correlation detection process (e.g., functions/matrix) to provide an all-binary structure that permits seamless bridging between analog and digital domains. An embodiment may capture an incoming optical signal at an optical aperture, convert the optical signal to an electrical signal, pass the electrical signal through a Low-Noise Amplifier (LNA) to create an LNA signal, pass the LNA signal through one or more correlators where each correlator has a corresponding Walsh-Hadamard (WH) binary basis function, calculate a correlation output coefficient for each correlator as a function of the corresponding WH binary basis function in accordance with Walsh-Hadamard mathematical principles, digitize each correlation output coefficient by passing it through an Analog-to-Digital Converter (ADC), and perform image mode processing on the digitized correlation output coefficients as desired to produce one or more image modes. Some, but not all, potential image modes include: multi-channel access, temporal, range, three-dimensional, and synthetic aperture.
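
    The correlation-and-reconstruction mathematics can be sketched with a Sylvester-type Hadamard matrix, whose ±1 rows play the role of the WH binary basis functions. This is a toy digital illustration of the principle, not the disclosed analog correlator hardware.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def wh_coefficients(signal):
    """Correlate a length-2^k signal against every WH binary (+1/-1) basis row."""
    return hadamard(len(signal)) @ signal

def wh_reconstruct(coeffs):
    """Invert the transform: a Sylvester Hadamard matrix satisfies H @ H = n*I."""
    return hadamard(len(coeffs)) @ coeffs / len(coeffs)
```

    Because the rows are mutually orthogonal, the coefficient set losslessly represents the signal, which is what lets one sensor's correlator outputs feed multiple image modes.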

  20. Making the PACS workstation a browser of image processing software: a feasibility study using inter-process communication techniques.

    PubMed

    Wang, Chunliang; Ritter, Felix; Smedby, Orjan

    2010-07-01

    To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file while the image processing software is running in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation developer or image processing software developer does not need detailed information about the other system, but will still be able to achieve seamless integration between the two systems, and the IPC procedure is totally transparent to the final user. A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image processing networks. Image data transfer using shared memory added <10 ms of processing time while the other IPC methods cost 1-5 s in our experiments. The browser-server style communication based on IPC techniques is an appealing method that allows PACS workstation developers and image processing software developers to cooperate while focusing on different interests.
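
    The shared-memory image exchange that made the <10 ms transfers possible can be sketched with Python's `multiprocessing.shared_memory` (Python 3.8+). The OsiriX/MeVisLab protocol itself is not reproduced here, so the block name and helper functions are illustrative.

```python
import numpy as np
from multiprocessing import shared_memory

def publish_image(name, img):
    """'Image-processing server' side: place an array in a named shared block.

    Returns the SharedMemory handle; the publisher keeps it alive
    (and eventually unlinks it) while clients read.
    """
    shm = shared_memory.SharedMemory(create=True, size=img.nbytes, name=name)
    view = np.ndarray(img.shape, dtype=img.dtype, buffer=shm.buf)
    view[:] = img                      # copy pixels into the shared block
    return shm

def fetch_image(name, shape, dtype):
    """'PACS workstation' side: attach by name and copy the pixels out."""
    shm = shared_memory.SharedMemory(name=name)
    try:
        return np.ndarray(shape, dtype=dtype, buffer=shm.buf).copy()
    finally:
        shm.close()
```

    In a real deployment the block name, shape, and dtype would travel over the control channel (the predefined protocol), while the pixel data never crosses a socket.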

  1. Tutte polynomial in functional magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    García-Castillón, Marlly V.

    2015-09-01

    Methods of graph theory are applied to the processing of functional magnetic resonance images. Specifically, the Tutte polynomial is used to analyze such images. Functional magnetic resonance imaging provides connectivity networks in the brain, which are represented by graphs to which the Tutte polynomial is applied. The problem of computing the Tutte polynomial for a given graph is #P-hard even for planar graphs. For a practical application, the Maple packages "GraphTheory" and "SpecialGraphs" are used. We consider a diagram depicting functional connectivity, specifically between frontal and posterior areas, in autism during an inferential text comprehension task. The Tutte polynomial for the resulting neural network is computed and some numerical invariants for the network are obtained. Our results show that the Tutte polynomial is a powerful tool to analyze and characterize the networks obtained from functional magnetic resonance imaging.
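
    For small connectivity graphs the Tutte polynomial can be computed directly by deletion-contraction, consistent with the #P-hardness the abstract notes (the recursion is exponential in the number of edges). The sketch below is a generic implementation, not the Maple workflow; the polynomial is represented as a dict of x^i y^j coefficients.

```python
from collections import Counter

def _connected(edges, u, v):
    """Are u and v joined by a path in the given edge list?"""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, stack = {u}, [u]
    while stack:
        for m in adj.get(stack.pop(), ()):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return v in seen

def _contract(edges, u, v):
    """Merge vertex v into u."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def tutte(edges):
    """Tutte polynomial of a multigraph by deletion-contraction.

    edges: list of (u, v) pairs (parallel edges and loops allowed).
    Returns {(i, j): coeff} representing sum of coeff * x^i * y^j.
    """
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                   # loop: factor of y
        return {(i, j + 1): c for (i, j), c in tutte(rest).items()}
    if not _connected(rest, u, v):               # bridge: factor of x
        return {(i + 1, j): c for (i, j), c in tutte(_contract(rest, u, v)).items()}
    total = Counter(tutte(rest))                 # delete the edge ...
    for term, c in tutte(_contract(rest, u, v)).items():
        total[term] += c                         # ... plus contract it
    return dict(total)
```

    Evaluating the polynomial at (1, 1) gives the number of spanning trees, one of the numerical invariants such an analysis can report for a connectivity network.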

  2. IQM: An Extensible and Portable Open Source Application for Image and Signal Analysis in Java

    PubMed Central

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM’s image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis. PMID:25612319

  3. IQM: an extensible and portable open source application for image and signal analysis in Java.

    PubMed

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM's image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis.

  4. On an image reconstruction method for ECT

    NASA Astrophysics Data System (ADS)

    Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro

    2007-04-01

    An image obtained by eddy current testing (ECT) is a blurred version of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method is proposed. The method is based on the assumption that a very simple relationship holds between the measured data and the source: the data are a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method using deconvolution. In this method, the Point Spread Function (PSF) and Line Spread Function (LSF) play a key role in the deconvolution processing. This study proposes a simple data-processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200 x 200 x 10 mm) with an artificial machined hole and a notch flaw were acquired by differential coil type sensors (produced by ZETEC Inc.). Those data were analyzed by the proposed method. The proposed method restored a sharp image of discrete multiple holes from data in which the holes interfered, and the estimated width of the line flaw was much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for hole and line flaws has been demonstrated by many results in which much finer images than the originals were reconstructed.
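
    A deconvolution of the kind described, recovering flaw shape from blurred data given a measured PSF, can be sketched with a frequency-domain Wiener filter. The regularization constant and the synthetic point-flaw example are illustrative; the paper's exact inversion scheme may differ.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-4):
    """Restore an image blurred by a known PSF.

    F = G * conj(H) / (|H|^2 + k), where k regularizes frequencies at
    which the PSF has little energy (a stand-in for the noise-to-signal
    ratio). The PSF must be full image size with its peak wrapped to
    the (0, 0) corner, so the circular convolution introduces no shift.
    """
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))
```

    On a synthetic point flaw blurred by a Gaussian PSF, the restored peak returns to the correct location and is markedly sharper than the blurred input.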

  5. Image-Processing Program

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Hull, D. R.

    1994-01-01

    IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are sub-subroutines, also selected via the keyboard. The algorithm has possible scientific, industrial, and biomedical applications in the study of flows in materials, the analysis of steels and ores, and pathology, respectively.

  6. SU-G-IeP3-08: Image Reconstruction for Scanning Imaging System Based On Shape-Modulated Point Spreading Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ruixing; Yang, LV; Xu, Kele

    Purpose: Deconvolution is a widely used tool in image reconstruction when a linear imaging system has been blurred by an imperfect system transfer function. However, due to the Gaussian-like distribution of the point spread function (PSF), image components with coherent high frequency are hard to restore in most previous scanning imaging systems, even when a relatively accurate PSF is available. We propose a novel method for deconvolution of images obtained using a shape-modulated PSF. Methods: We use two different types of PSF, Gaussian shaped and donut shaped, to convolve the original image in order to simulate the scanning imaging process. By deconvolving the two images with the corresponding given priors, the image quality of the deblurred images is compared. We then find the critical size of the donut shape that, compared with the Gaussian shape, yields similar deconvolution results. Calculation of the tight-focusing process using a radially polarized beam shows that a donut of such size is achievable under the same conditions. Results: The effects of different relative sizes of the donut and Gaussian shapes are investigated. When the full width at half maximum (FWHM) ratio of the donut and Gaussian shapes is set to about 1.83, similar resolution results are obtained with our deconvolution method. Decreasing the size of the donut favors the deconvolution method. A mask with both amplitude and phase modulation is used to create a donut-shaped PSF for comparison with the non-modulated Gaussian PSF. A donut with size smaller than our critical value is obtained. Conclusion: Donut-shaped PSFs are shown to be useful and achievable in imaging and deconvolution processing, and are expected to have potential practical applications in high-resolution imaging of biological samples.

  7. 21 CFR 892.2050 - Picture archiving and communications system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... processing of medical images. Its hardware components may include workstations, digitizers, communications... hardcopy devices. The software components may provide functions for performing operations related to image...

  8. 21 CFR 892.2050 - Picture archiving and communications system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... processing of medical images. Its hardware components may include workstations, digitizers, communications... hardcopy devices. The software components may provide functions for performing operations related to image...

  9. 21 CFR 892.2050 - Picture archiving and communications system.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... processing of medical images. Its hardware components may include workstations, digitizers, communications... hardcopy devices. The software components may provide functions for performing operations related to image...

  10. 21 CFR 892.2050 - Picture archiving and communications system.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... processing of medical images. Its hardware components may include workstations, digitizers, communications... hardcopy devices. The software components may provide functions for performing operations related to image...

  11. Technical note: DIRART--A software suite for deformable image registration and adaptive radiotherapy research.

    PubMed

    Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Goddu, S Murty; Mutic, Sasa; Deasy, Joseph O; Low, Daniel A

    2011-01-01

    Recent years have witnessed tremendous progress in image guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and to popular image-processing toolkits such as ITK. DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a range of options for DIR results visualization, evaluation, and validation. By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research.

  12. Processing of food, body and emotional stimuli in anorexia nervosa: a systematic review and meta-analysis of functional magnetic resonance imaging studies.

    PubMed

    Zhu, Yikang; Hu, Xiaochen; Wang, Jijun; Chen, Jue; Guo, Qian; Li, Chunbo; Enck, Paul

    2012-11-01

    The characteristics of the cognitive processing of food, body and emotional information in patients with anorexia nervosa (AN) are debatable. We reviewed functional magnetic resonance imaging studies to assess whether there were consistent neural basis and networks in the studies to date. Searching PubMed, Ovid, Web of Science, The Cochrane Library and Google Scholar between January 1980 and May 2012, we identified 17 relevant studies. Activation likelihood estimation was used to perform a quantitative meta-analysis of functional magnetic resonance imaging studies. For both food stimuli and body stimuli, AN patients showed increased hemodynamic response in the emotion-related regions (frontal, caudate, uncus, insula and temporal) and decreased activation in the parietal region. Although no robust brain activation has been found in response to emotional stimuli, emotion-related neural networks are involved in the processing of food and body stimuli among AN. It suggests that negative emotional arousal is related to cognitive processing bias of food and body stimuli in AN. Copyright © 2012 John Wiley & Sons, Ltd and Eating Disorders Association.

  13. Functional Magnetic Resonance Imaging Clinical Trial of a Dual-Processing Treatment Protocol for Substance-Dependent Adults

    ERIC Educational Resources Information Center

    Matto, Holly C.; Hadjiyane, Maria C.; Kost, Michelle; Marshall, Jennifer; Wiley, Joseph; Strolin-Goltzman, Jessica; Khatiwada, Manish; VanMeter, John W.

    2014-01-01

    Objectives: Empirical evidence suggests substance dependence creates stress system dysregulation which, in turn, may limit the efficacy of verbal-based treatment interventions, as the recovering brain may not be functionally capable of executive level processing. Treatment models that target implicit functioning are necessary. Methods: An RCT was…

  14. Polarization transfer NMR imaging

    DOEpatents

    Sillerud, Laurel O.; van Hulsteyn, David B.

    1990-01-01

    A nuclear magnetic resonance (NMR) image is obtained with spatial information modulated by chemical information. The modulation is obtained through polarization transfer from a first element representing the desired chemical, or functional, information, which is covalently bonded and spin-spin coupled with a second element effective to provide the imaging data. First and second rf pulses are provided at first and second frequencies for exciting the imaging and functional elements, with imaging gradients applied therebetween to spatially separate the nuclei response for imaging. The second rf pulse is applied at a time after the first pulse which is the inverse of the spin coupling constant to select the transfer element nuclei which are spin coupled to the functional element nuclei for imaging. In a particular application, compounds such as glucose, lactate, or lactose, can be labeled with ¹³C and metabolic processes involving the compounds can be imaged with the sensitivity of ¹H and the selectivity of ¹³C.

  15. Functional Magnetic Resonance Imaging of Cognitive Processing in Young Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Jacola, Lisa M.; Byars, Anna W.; Chalfonte-Evans, Melinda; Schmithorst, Vincent J.; Hickey, Fran; Patterson, Bonnie; Hotze, Stephanie; Vannest, Jennifer; Chiu, Chung-Yiu; Holland, Scott K.; Schapiro, Mark B.

    2011-01-01

    The authors used functional magnetic resonance imaging (fMRI) to investigate neural activation during a semantic-classification/object-recognition task in 13 persons with Down syndrome and 12 typically developing control participants (age range = 12-26 years). A comparison between groups suggested atypical patterns of brain activation for the…

  16. A Genome-Wide Association Study of Amygdala Activation in Youths with and without Bipolar Disorder

    ERIC Educational Resources Information Center

    Liu, Xinmin; Akula, Nirmala; Skup, Martha; Brotman, Melissa A.; Leibenluft, Ellen; McMahon, Francis J.

    2010-01-01

    Objective: Functional magnetic resonance imaging is commonly used to characterize brain activity underlying a variety of psychiatric disorders. A previous functional magnetic resonance imaging study found that amygdala activation during a face-processing task differed between pediatric patients with bipolar disorder (BD) and healthy controls. We…

  17. Intensity-Curvature Measurement Approaches for the Diagnosis of Magnetic Resonance Imaging Brain Tumors.

    PubMed

    Ciulla, Carlo; Veljanovski, Dimitar; Rechkoska Shikoska, Ustijana; Risteski, Filip A

    2015-11-01

    This research presents signal-image post-processing techniques called Intensity-Curvature Measurement Approaches with application to the diagnosis of human brain tumors detected through Magnetic Resonance Imaging (MRI). Post-processing of the MRI of the human brain encompasses the following model functions: (i) bivariate cubic polynomial, (ii) bivariate cubic Lagrange polynomial, (iii) monovariate sinc, and (iv) bivariate linear. The following Intensity-Curvature Measurement Approaches were used: (i) classic-curvature, (ii) signal resilient to interpolation, (iii) intensity-curvature measure and (iv) intensity-curvature functional. The results revealed that the classic-curvature, the signal resilient to interpolation and the intensity-curvature functional are able to provide additional information useful to the diagnosis carried out with MRI. The contributions of our study to MRI diagnosis are: (i) the enhanced gray level scale of the tumor mass and the well-behaved representation of the tumor provided through the signal resilient to interpolation, and (ii) the visually perceptible third dimension perpendicular to the image plane provided through the classic-curvature and the intensity-curvature functional.

  18. Intensity-Curvature Measurement Approaches for the Diagnosis of Magnetic Resonance Imaging Brain Tumors

    PubMed Central

    Ciulla, Carlo; Veljanovski, Dimitar; Rechkoska Shikoska, Ustijana; Risteski, Filip A.

    2015-01-01

    This research presents signal-image post-processing techniques called Intensity-Curvature Measurement Approaches with application to the diagnosis of human brain tumors detected through Magnetic Resonance Imaging (MRI). Post-processing of the MRI of the human brain encompasses the following model functions: (i) bivariate cubic polynomial, (ii) bivariate cubic Lagrange polynomial, (iii) monovariate sinc, and (iv) bivariate linear. The following Intensity-Curvature Measurement Approaches were used: (i) classic-curvature, (ii) signal resilient to interpolation, (iii) intensity-curvature measure and (iv) intensity-curvature functional. The results revealed that the classic-curvature, the signal resilient to interpolation and the intensity-curvature functional are able to provide additional information useful to the diagnosis carried out with MRI. The contributions of our study to MRI diagnosis are: (i) the enhanced gray level scale of the tumor mass and the well-behaved representation of the tumor provided through the signal resilient to interpolation, and (ii) the visually perceptible third dimension perpendicular to the image plane provided through the classic-curvature and the intensity-curvature functional. PMID:26644943

  19. Millimeter wave scattering characteristics and radar cross section measurements of common roadway objects

    NASA Astrophysics Data System (ADS)

    Zoratti, Paul K.; Gilbert, R. Kent; Majewski, Ronald; Ference, Jack

    1995-12-01

    Development of automotive collision warning systems has progressed rapidly over the past several years. A key enabling technology for these systems is millimeter-wave radar. This paper addresses a very critical millimeter-wave radar sensing issue for automotive radar, namely the scattering characteristics of common roadway objects such as vehicles, road signs, and bridge overpass structures. The data presented in this paper were collected on ERIM's Fine Resolution Radar Imaging Rotary Platform Facility and processed with ERIM's image processing tools. The value of this approach is that it provides system developers with a 2D radar image from which information about individual point scatterers "within a single target" can be extracted. This information on scattering characteristics will be utilized to refine threat assessment processing algorithms and automotive radar hardware configurations. (1) By evaluating the scattering characteristics identified in the radar image, radar signatures as a function of aspect angle for common roadway objects can be established. These signatures will aid in the refinement of threat assessment processing algorithms. (2) Utilizing ERIM's image manipulation tools, total RCS and RCS as a function of range and azimuth can be extracted from the radar image data. This RCS information will be essential in defining the operational envelope (e.g. dynamic range) within which any radar sensor hardware must be designed.
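
    The second use above — extracting a relative RCS profile versus azimuth from a 2D radar image — can be sketched by integrating pixel power over range per azimuth bin. This is a simplified, uncalibrated stand-in, not ERIM's actual toolchain.

```python
import numpy as np

def rcs_profile(image):
    """Relative RCS versus azimuth: integrate |pixel|^2 over the range
    axis of a 2D (range x azimuth) complex radar image. Uncalibrated:
    absolute RCS requires the radar equation and a calibration target."""
    return (np.abs(image) ** 2).sum(axis=0)

# Toy image: one strong scatterer at azimuth bin 10, a weaker one at 40.
img = np.zeros((128, 64), dtype=complex)
img[50, 10] = 3.0
img[70, 40] = 1.0
profile = rcs_profile(img)
```

    Summing over the range axis collapses each azimuth column to a single power value, so the profile directly reflects the relative strength of the two scatterers.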

  20. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven GUI-based image acquisition interface for the IDL programming environment designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  1. Improvement of light penetration based silkworm gender identification with confined regions of interest

    NASA Astrophysics Data System (ADS)

    Kamtongdee, Chakkrit; Sumriddetchkajorn, Sarun; Sa-ngiamsak, Chiranut

    2013-06-01

    Based on our previous work on light penetration-based silkworm gender identification, we found that unwanted optical noise scattered from the area surrounding the silkworm pupa and from the transparent support is sometimes analyzed and misinterpreted, leading to incorrect gender identification. To alleviate this issue, we place a small rectangular hole in the transparent support so that it not only helps the user precisely place the silkworm pupa but also functions as a region of interest (ROI), blocking unwanted optical noise and roughly locating the abdomen region in the image for ease of image processing. In addition to this external ROI, we assign a smaller ROI inside the image in order to remove strong scattered light from the edges of the external ROI and, at the same time, speed up our image processing operations. With only the external ROI in use, our experiment shows a measured 86% total accuracy in identifying the gender of 120 silkworm pupae with a measured average processing time of 38 ms. Combining the external ROI and the image ROI improves the total gender identification accuracy to 95% and reduces the measured processing time to 18 ms.
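
    The internal-ROI step is, in essence, a margin crop that both discards edge scattering and shrinks the pixel count to be processed. A minimal sketch (the margin value is illustrative, not the paper's):

```python
import numpy as np

def apply_inner_roi(image, margin):
    """Crop a smaller internal ROI to discard bright scattering at the
    edges of the external (physical) ROI and shrink the processed area."""
    h, w = image.shape
    return image[margin:h - margin, margin:w - margin]

frame = np.random.rand(240, 320)        # stand-in for a pupa image
inner = apply_inner_roi(frame, margin=20)
```

    Cropping 20 pixels from each side of a 240 x 320 frame leaves 200 x 280 pixels, roughly a 27% reduction in area, which is where the processing-time saving comes from.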

  2. Neural processing of negative word stimuli concerning body image in patients with eating disorders: an fMRI study.

    PubMed

    Miyake, Yoshie; Okamoto, Yasumasa; Onoda, Keiichi; Shirao, Naoko; Okamoto, Yuri; Otagaki, Yoko; Yamawaki, Shigeto

    2010-04-15

    Eating disorders (EDs) are associated with abnormalities of body image perception. The aim of the present study was to investigate the functional abnormalities in brain systems during processing of negative words concerning body images in patients with EDs. Brain responses to negative words concerning body images (task condition) and neutral words (control condition) were measured using functional magnetic resonance imaging in 36 patients with EDs (12 with the restricting type anorexia nervosa; AN-R, 12 with the binging-purging type anorexia nervosa; AN-BP, and 12 with bulimia nervosa; BN) and 12 healthy young women. Participants were instructed to select the most negative word from each negative body-image word set and to select the most neutral word from each neutral word set. In the task relative to the control condition, the right amygdala was activated both in patients with AN-R and in patients with AN-BP. The left medial prefrontal cortex (mPFC) was activated both in patients with BN and in patients with AN-BP. It is suggested that these brain activations may be associated with abnormalities of body image perception. Amygdala activation may be involved in fearful emotional processing of negative words concerning body image and strong fears of gaining weight. One possible interpretation of the finding of mPFC activation is that it may reflect an attempt to regulate the emotion invoked by the stimuli. These abnormal brain functions may help provide better accounts of the psychopathological mechanisms underlying EDs. Copyright 2009 Elsevier Inc. All rights reserved.

  3. The laboratory demonstration and signal processing of the inverse synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Gao, Si; Zhang, ZengHui; Xu, XianWen; Yu, WenXian

    2017-10-01

    This paper presents a coherent inverse synthetic-aperture imaging ladar (ISAL) system that obtains high resolution images. A balanced coherent optical system is built in the laboratory with a binary phase-coded modulation transmit waveform, which differs from the conventional chirp. A complete digital signal processing solution is proposed, including both a quality phase gradient autofocus (QPGA) algorithm and a cubic phase function (CPF) algorithm. High-resolution, well-focused ISAL images of retro-reflecting targets are shown to validate the concepts. It is shown that high resolution images can be achieved and that the influence of platform and target vibrations can be automatically compensated by the distinctive laboratory system and digital signal processing.

  4. [Future perspectives for diagnostic imaging in urology: from anatomic and functional to molecular imaging].

    PubMed

    Macis, Giuseppe; Di Giovanni, Silvia; Di Franco, Davide; Bonomo, Lorenzo

    2013-01-01

    The future approach of diagnostic imaging in urology follows the technological progress that has made the visualization of in vivo molecular processes possible. From anatomical-morphological diagnostic imaging, through functional imaging, molecular radiology is reached. Based on molecular probes, this imaging aims to assess in vivo molecular processes and their physiology and function at the cellular level. Future imaging will investigate complex tumor functioning: metabolism (aerobic glycolysis in particular), angiogenesis, cell proliferation, metastatic potential, hypoxia, apoptosis, and the receptors expressed by neoplastic cells. Methods for performing molecular radiology are CT, MRI, PET-CT, PET-MRI, SPECT and optical imaging. Molecular ultrasound combines technological advancement with targeted contrast media based on microbubbles, allowing the selective registration of the microbubble signal while that of stationary tissue is suppressed. In an experimental study, the ultrasound molecular probe BR55, which binds tightly to prostate tumors, produced strong enhancement in the early phase after contrast that was maintained in the late phase. This late enhancement is markedly significant for the detection of prostatic cancer foci and for guiding biopsy sampling. The 124I-cG250 molecular antibody, which binds tightly to the cellular carbonic anhydrase IX of clear cell renal carcinoma, allows the acquisition of diagnostic PET images of clear cell renal carcinoma without biopsy. The WG-250 (RENCAREX) antibody has been used as a therapy in metastatic clear cell renal carcinoma. Future advancements and applications will bring early cancer diagnosis, personalized therapy tailored to the molecular features of the cancer, and the development of catheter-based multichannel molecular imaging devices for cystoscopy-based molecular imaging diagnosis and intervention.

  5. Reducing the Effects of Background Noise during Auditory Functional Magnetic Resonance Imaging of Speech Processing: Qualitative and Quantitative Comparisons between Two Image Acquisition Schemes and Noise Cancellation

    ERIC Educational Resources Information Center

    Blackman, Graham A.; Hall, Deborah A.

    2011-01-01

    Purpose: The intense sound generated during functional magnetic resonance imaging (fMRI) complicates studies of speech and hearing. This experiment evaluated the benefits of using active noise cancellation (ANC), which attenuates the level of the scanner sound at the participant's ear by up to 35 dB around the peak at 600 Hz. Method: Speech and…

  6. Fruit fly optimization based least square support vector regression for blind image restoration

    NASA Astrophysics Data System (ADS)

    Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

    2014-11-01

    The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restorations require explicit knowledge of the point spread function (PSF) and a description of the noise as priors. However, this is impractical in many real image processing tasks, so recovery must proceed as blind image restoration. Since blind deconvolution is an ill-posed problem, many blind restoration methods must make additional assumptions to construct restrictions. Owing to differences in PSF and noise energy, blurred images can differ considerably, and it is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. The least square support vector regression (LSSVR) has been proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes an LSSVR-based image restoration method. However, selecting the optimal parameters for a support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems, and has the advantage of fast convergence to the global optimal solution. In the proposed method, the training samples are created from a neighborhood in the degraded image paired with the central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR, and the two parameters of the LSSVR are optimized through the FOA, whose fitness function is calculated from the restoration error. With the acquired mapping, the degraded image can be recovered. Experimental results show the proposed method obtains a satisfactory restoration effect. Compared with BP neural network regression, the SVR method, and the Lucy-Richardson algorithm, it restores faster and performs better. Both objective and subjective restoration performance are studied in the comparison experiments.
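
    The neighborhood-to-pixel sample construction can be sketched as below. As a simplifying assumption, a linear-kernel LSSVR is used, which reduces to regularized least squares; the hand-set regularization parameter gamma stands in for the parameters the paper tunes with the fruit fly optimizer.

```python
import numpy as np

def make_training_pairs(degraded, original, r=1):
    """Pair each (2r+1) x (2r+1) neighborhood of the degraded image with
    the corresponding central pixel of the original image."""
    h, w = degraded.shape
    X, y = [], []
    for i in range(r, h - r):
        for j in range(r, w - r):
            X.append(degraded[i - r:i + r + 1, j - r:j + r + 1].ravel())
            y.append(original[i, j])
    return np.array(X), np.array(y)

# Toy degradation: a direct 3x3 box blur of a random image.
rng = np.random.default_rng(0)
orig = rng.random((32, 32))
deg = np.zeros_like(orig)
for i in range(1, 31):
    for j in range(1, 31):
        deg[i, j] = orig[i - 1:i + 2, j - 1:j + 2].mean()

X, y = make_training_pairs(deg, orig)

# Regularized least squares: a linear-kernel LSSVR reduces to this form.
gamma = 1e-6
w = np.linalg.solve(X.T @ X + gamma * np.eye(X.shape[1]), X.T @ y)
pred = X @ w
```

    Because the trivial "copy the blurred center pixel" predictor lies inside the linear hypothesis class, the learned mapping can only do better on the training data, which mirrors the paper's claim that the learned mapping improves on the degraded input.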

  7. High-performance image processing on the desktop

    NASA Astrophysics Data System (ADS)

    Jordan, Stephen D.

    1996-04-01

    The suitability of computers to the task of medical image visualization for the purposes of primary diagnosis and treatment planning depends on three factors: speed, image quality, and price. To be widely accepted the technology must increase the efficiency of the diagnostic and planning processes. This requires processing and displaying medical images of various modalities in real time, with accuracy and clarity, on an affordable system. Our approach to meeting this challenge began with market research to understand customer image processing needs. These needs were translated into system-level requirements, which in turn were used to determine which image processing functions should be implemented in hardware. The result is a computer architecture for 2D image processing that is both high-speed and cost-effective. The architectural solution is based on the high-performance PA-RISC workstation with an HCRX graphics accelerator. The image processing enhancements are incorporated into the image visualization accelerator (IVX), which attaches to the HCRX graphics subsystem. The IVX includes a custom VLSI chip which has a programmable convolver, a window/level mapper, and an interpolator supporting nearest-neighbor, bi-linear, and bi-cubic modes. This combination of features enables simultaneous convolution, pan, zoom, rotate, and window/level operations on 1k by 1k, 16-bit medical images at 40 frames/second.
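
    The window/level mapping stage mentioned above can be sketched in software as follows. The 8-bit output range and clip-to-bounds behavior are conventional assumptions for illustration, not the IVX chip's documented behavior.

```python
import numpy as np

def window_level(image, window, level, out_max=255):
    """Map the 16-bit range [level - window/2, level + window/2] linearly
    onto the display range [0, out_max], clipping values outside it."""
    lo = level - window / 2.0
    scaled = (image.astype(np.float64) - lo) / window * out_max
    return np.clip(scaled, 0, out_max).astype(np.uint8)

img16 = np.array([[0, 1000, 2000], [3000, 4000, 65535]], dtype=np.uint16)
disp = window_level(img16, window=2000, level=2000)
```

    With window 2000 and level 2000, raw value 1000 maps to display 0, 2000 to mid-gray, and everything at or above 3000 saturates at 255 — the standard way a narrow window boosts contrast in a chosen intensity band.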

  8. [One decade of functional imaging in schizophrenia research. From visualisation of basic information processing steps to molecular-genetic oriented imaging].

    PubMed

    Tost, H; Meyer-Lindenberg, A; Ruf, M; Demirakça, T; Grimm, O; Henn, F A; Ende, G

    2005-02-01

    Modern neuroimaging techniques such as magnetic resonance imaging (MRI) and positron emission tomography (PET) have contributed tremendously to our current understanding of psychiatric disorders in the context of functional, biochemical and microstructural alterations of the brain. Since the mid-nineties, functional MRI has provided major insights into the neurobiological correlates of signs and symptoms in schizophrenia. The current paper reviews important fMRI studies of the past decade in the domains of motor, visual, auditory, attentional and working memory function. Special emphasis is given to new methodological approaches, such as the visualisation of medication effects and the functional characterisation of risk genes.

  9. Image texture segmentation using a neural network

    NASA Astrophysics Data System (ADS)

    Sayeh, Mohammed R.; Athinarayanan, Ragu; Dhali, Pushpuak

    1992-09-01

    In this paper we use a neural network called the Lyapunov associative memory (LYAM) system to segment image texture into different categories or clusters. The LYAM system is constructed by a set of ordinary differential equations which are simulated on a digital computer. The clustering can be achieved by using a single tuning parameter in the simplest model. Pattern classes are represented by the stable equilibrium states of the system. Design of the system is based on synthesizing two local energy functions, namely, the learning and recall energy functions. Before the implementation of the segmentation process, a Gauss-Markov random field (GMRF) model is applied to the raw image. This application suitably reduces the image data and prepares the texture information for the neural network process. We give a simple image example illustrating the capability of the technique. The GMRF-generated features are also used for a clustering, based on the Euclidean distance.
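
    The clustering of texture features by Euclidean distance can be illustrated with a plain k-means stand-in; the LYAM network's differential-equation dynamics and the GMRF feature extraction are not reproduced here, and the (local mean, local variance) features are an assumed simplification.

```python
import numpy as np

def kmeans(features, k, iters=20):
    """Plain k-means with Euclidean distance -- a conventional stand-in
    for the energy-minimization clustering described above."""
    # Deterministic initialization: spread initial centers over the data.
    centers = features[np.linspace(0, len(features) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = features[labels == c].mean(axis=0)
    return labels

# Two synthetic "textures" summarized by (local mean, local variance)
# features -- a crude substitute for GMRF-derived features.
rng = np.random.default_rng(1)
smooth = np.column_stack([rng.normal(0.2, 0.02, 100), rng.normal(0.01, 0.005, 100)])
rough = np.column_stack([rng.normal(0.8, 0.02, 100), rng.normal(0.20, 0.005, 100)])
labels = kmeans(np.vstack([smooth, rough]), k=2)
```

    With the two feature clouds well separated in feature space, the two texture classes fall cleanly into distinct clusters, analogous to the stable equilibrium states of the LYAM system.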

  10. IDIMS/GEOPAK: Users manual for a geophysical data display and analysis system

    NASA Technical Reports Server (NTRS)

    Libert, J. M.

    1982-01-01

    The application of an existing image analysis system to the display and analysis of geophysical data is described, and the potential for expanding the capabilities of such a system toward more advanced computer analytic and modeling functions is investigated. The major features of IDIMS (Interactive Display and Image Manipulation System) and its applicability to image-type analysis of geophysical data are described. The development of a basic geophysical data processing system to permit the image representation, coloring, interdisplay, and comparison of geophysical data sets using existing IDIMS functions, and to provide for the production of hard copies of processed images, is described. An instruction manual and documentation for the GEOPAK subsystem were produced, a training course for personnel in the use of IDIMS/GEOPAK was conducted, and the effectiveness of the current IDIMS/GEOPAK system for geophysical data analysis was evaluated.

  11. A CMOS One-chip Wireless Camera with Digital Image Transmission Function for Capsule Endoscopes

    NASA Astrophysics Data System (ADS)

    Itoh, Shinya; Kawahito, Shoji; Terakawa, Susumu

    This paper presents the design and implementation of a one-chip camera device for capsule endoscopes. The experimental chip integrates the functional circuits required for capsule endoscopes together with a digital image transmission function. The integrated functional blocks include an image array, a timing generator, a clock generator, a voltage regulator, a 10-bit cyclic A/D converter, and a BPSK modulator. It can operate autonomously with 3 pins (VDD, GND, and DATAOUT). A prototype image sensor chip with 320 x 240 effective pixels was fabricated in a 0.25-μm CMOS image sensor process and autonomous imaging was demonstrated. The chip size is 4.84 mm x 4.34 mm. With a 2.0 V power supply, the analog part consumes 950 μW and the total power consumption at 2 frames per second (fps) is 2.6 mW. Error-free image transmission over a distance of 48 cm at 2.5 Mbps, corresponding to 2 fps, was achieved with inductive coupling.

  12. Initial On-Orbit Spatial Resolution Characterization of OrbView-3 Panchromatic Images

    NASA Technical Reports Server (NTRS)

    Blonski, Slawomir

    2006-01-01

    Characterization was conducted under the Memorandum of Understanding among Orbital Sciences Corp., ORBIMAGE, Inc., and NASA Applied Sciences Directorate. Acquired five OrbView-3 panchromatic images of the permanent Stennis Space Center edge targets painted on a concrete surface. Each image is available at two processing levels: Georaw and Basic. Georaw is an intermediate image in which individual pixels are aligned by a nominal shift in the along-scan direction to adjust for the staggered layout of the panchromatic detectors along the focal plane array. Georaw images are engineering data and are not delivered to customers. The Basic product includes a cubic interpolation to align the pixels better along the focal plane and to correct for sensor artifacts, such as smile and attitude smoothing. This product retains satellite geometry - no rectification is performed. Processing of the characterized images did not include image sharpening, which is applied by default to OrbView-3 image products delivered by ORBIMAGE to customers. Edge responses were extracted from images of tilted edges in two directions: along-scan and cross-scan. Each edge response was approximated with a superposition of three sigmoidal functions through a nonlinear least-squares curve-fitting. Line Spread Functions (LSF) were derived by differentiation of the analytical approximation. Modulation Transfer Functions (MTF) were obtained after applying the discrete Fourier transform to the LSF.
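
    The edge-response pipeline above (edge fit, differentiate to get the LSF, Fourier transform to get the MTF) can be sketched as below. The paper fits a superposition of three sigmoids to measured edge targets; for brevity this sketch uses a single synthetic logistic edge, so the width parameter is an assumption.

```python
import numpy as np

def sigmoid(x, x0, w):
    """Logistic edge model: transition centered at x0 with width w."""
    return 1.0 / (1.0 + np.exp(-(x - x0) / w))

# Synthetic along-scan edge response, in place of the measured SSC target.
x = np.arange(-32.0, 32.0)
edge = sigmoid(x, x0=0.0, w=1.5)

# Line spread function: derivative of the edge response.
lsf = np.gradient(edge, x)

# Modulation transfer function: magnitude of the DFT of the LSF,
# normalized so that MTF(0) = 1.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freq = np.fft.rfftfreq(len(lsf), d=1.0)  # cycles per pixel
```

    The LSF peaks at the edge location and the MTF decays toward the Nyquist frequency (0.5 cycles/pixel); a wider edge transition would produce a faster MTF roll-off, which is exactly what the characterization quantifies.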

  13. Mesoscale brain explorer, a flexible python-based image analysis and visualization tool.

    PubMed

    Haupt, Dirk; Vanni, Matthieu P; Bolanos, Federico; Mitelut, Catalin; LeDue, Jeffrey M; Murphy, Tim H

    2017-07-01

    Imaging of mesoscale brain activity is used to map interactions between brain regions. This work has benefited from the pioneering studies of Grinvald et al., who employed optical methods to image brain function by exploiting the properties of intrinsic optical signals and small molecule voltage-sensitive dyes. Mesoscale interareal brain imaging techniques have been advanced by cell targeted and selective recombinant indicators of neuronal activity. Spontaneous resting state activity is often collected during mesoscale imaging to provide the basis for mapping of connectivity relationships using correlation. However, the information content of mesoscale datasets is vast and is only superficially presented in manuscripts given the need to constrain measurements to a fixed set of frequencies, regions of interest, and other parameters. We describe a new open source tool written in python, termed mesoscale brain explorer (MBE), which provides an interface to process and explore these large datasets. The platform supports automated image processing pipelines with the ability to assess multiple trials and combine data from different animals. The tool provides functions for temporal filtering, averaging, and visualization of functional connectivity relations using time-dependent correlation. Here, we describe the tool and show applications, where previously published datasets were reanalyzed using MBE.
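
    The correlation-based connectivity mapping described above can be sketched generically as a seed-pixel correlation map over a (time, height, width) stack. This is not MBE's actual API, just an illustration of the underlying computation.

```python
import numpy as np

def seed_correlation_map(stack, seed_yx):
    """Correlate the time course of a seed pixel with every pixel of a
    (time, height, width) stack to produce a connectivity map."""
    t, h, w = stack.shape
    flat = stack.reshape(t, h * w)
    flat = flat - flat.mean(axis=0)          # remove each pixel's mean
    seed = flat[:, seed_yx[0] * w + seed_yx[1]]
    num = flat.T @ seed
    denom = np.linalg.norm(flat, axis=0) * np.linalg.norm(seed)
    return (num / denom).reshape(h, w)

# Synthetic stack: the top half of the field shares a slow oscillation.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 200))
stack = rng.normal(0.0, 0.1, (200, 8, 8))
stack[:, :4, :] += signal[:, None, None]
cmap = seed_correlation_map(stack, seed_yx=(1, 1))
```

    Pixels sharing the seed's time course correlate near 1 while uncorrelated pixels stay near 0, which is the basis for the functional connectivity maps the tool visualizes.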

  14. Quantitative fluorescence microscopy and image deconvolution.

    PubMed

    Swedlow, Jason R

    2013-01-01

    Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. There are two major types of deconvolution approaches--deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Their proper use demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors. The introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response to known standards. Any image-processing algorithms used before quantitative analysis should preserve the relative signal levels in different parts of the image. 
Copyright © 1998 Elsevier Inc. All rights reserved.
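The restoration approach described above can be illustrated with a minimal 1D Richardson-Lucy sketch. Richardson-Lucy is one common restoration algorithm; the function name and synthetic data here are illustrative, not code from the chapter:

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=50):
    """Iteratively estimate the object that, convolved with the PSF,
    would produce the observed (blurred) data."""
    psf = psf / psf.sum()                       # normalize the point-spread function
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)   # avoid division by zero
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# A point source blurred by a small PSF, then restored.
truth = np.zeros(64)
truth[32] = 10.0
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy_1d(observed, psf, iterations=100)
```

The restored signal concentrates the blurred energy back toward the original point source, which is why restoration helps detect small, dim objects.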

  15. Image reconstruction: an overview for clinicians.

    PubMed

    Hansen, Michael S; Kellman, Peter

    2015-03-01

    Image reconstruction plays a critical role in the clinical use of magnetic resonance imaging (MRI). The MRI raw data is not acquired in image space and the role of the image reconstruction process is to transform the acquired raw data into images that can be interpreted clinically. This process involves multiple signal processing steps that each have an impact on the image quality. This review explains the basic terminology used for describing and quantifying image quality in terms of signal-to-noise ratio and point spread function. In this context, several commonly used image reconstruction components are discussed. The image reconstruction components covered include noise prewhitening for phased array data acquisition, interpolation needed to reconstruct square pixels, raw data filtering for reducing Gibbs ringing artifacts, Fourier transforms connecting the raw data with image space, and phased array coil combination. The treatment of phased array coils includes a general explanation of parallel imaging as a coil combination technique. The review is aimed at readers with no signal processing experience and should enable them to understand what role basic image reconstruction steps play in the formation of clinical images and how the resulting image quality is described. © 2014 Wiley Periodicals, Inc.
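The Fourier step connecting raw k-space data with image space can be sketched with synthetic data. This toy example omits the noise prewhitening and coil-combination steps the review covers; the Hamming window stands in for the raw-data filtering it mentions:

```python
import numpy as np

# Synthetic "image": a bright square on a dark background.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# MRI acquires raw data in k-space (the spatial-frequency domain);
# the 2D FFT of the image stands in for acquired raw data here.
kspace = np.fft.fftshift(np.fft.fft2(image))

# Basic reconstruction: inverse Fourier transform back to image space.
reconstructed = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

# Raw-data filtering: a Hamming window on k-space reduces Gibbs ringing
# at the cost of some spatial resolution.
window = np.outer(np.hamming(64), np.hamming(64))
filtered = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * window)))
```

Without filtering, the round trip through k-space reproduces the image exactly; the windowed version trades sharp edges for reduced ringing.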

  16. Digital Imaging

    NASA Technical Reports Server (NTRS)

    1986-01-01

Digital imaging is the computer-processed numerical representation of physical images; enhancement of the images makes them easier to interpret. Quantitative digital image analysis by Perceptive Scientific Instruments locates objects within an image and measures them to extract quantitative information. Applications include CAT scanners, radiography, and microscopy in medicine, as well as various industrial and manufacturing uses. The PSICOM 327 performs all digital image analysis functions. It is based on Jet Propulsion Laboratory technology and is accurate and cost-efficient.

  17. Imaging of cerebrovascular pathology in animal models of Alzheimer's disease

    PubMed Central

    Klohs, Jan; Rudin, Markus; Shimshek, Derya R.; Beckmann, Nicolau

    2014-01-01

In Alzheimer's disease (AD), vascular pathology may interact with neurodegeneration and thus aggravate cognitive decline. As the relationship between these two processes is poorly understood, research has been increasingly focused on understanding the link between cerebrovascular alterations and AD. This has in part been spurred by the engineering of transgenic animals, which display pathological features of AD and develop cerebral amyloid angiopathy to various degrees. Transgenic models are versatile for investigating the role of amyloid deposition and vascular dysfunction, and for evaluating novel therapeutic concepts. In addition, research has benefited from the development of novel imaging techniques, which are capable of characterizing vascular pathology in vivo. They provide vascular structural read-outs and have the ability to assess the functional consequences of vascular dysfunction as well as to visualize and monitor the molecular processes underlying these pathological alterations. This article focuses on recent in vivo small animal imaging studies addressing vascular aspects related to AD. With the technical advances of imaging modalities such as magnetic resonance, nuclear and microscopic imaging, molecular, functional and structural information related to vascular pathology can now be visualized in vivo in small rodents. Imaging vascular and parenchymal amyloid-β (Aβ) deposition as well as Aβ transport pathways has been shown to be useful for characterizing their dynamics and elucidating their role in the development of cerebral amyloid angiopathy and AD. Structural and functional imaging read-outs have been employed to describe the deleterious effects of Aβ on vessel morphology, hemodynamics and vascular integrity. More recent imaging studies have also addressed how inflammatory processes partake in the pathogenesis of the disease. Moreover, imaging can be pivotal in the search for novel therapies targeting the vasculature. PMID:24659966

  18. Segregating the core computational faculty of human language from working memory.

    PubMed

    Makuuchi, Michiru; Bahlmann, Jörg; Anwander, Alfred; Friederici, Angela D

    2009-05-19

    In contrast to simple structures in animal vocal behavior, hierarchical structures such as center-embedded sentences manifest the core computational faculty of human language. Previous artificial grammar learning studies found that the left pars opercularis (LPO) subserves the processing of hierarchical structures. However, it is not clear whether this area is activated by the structural complexity per se or by the increased memory load entailed in processing hierarchical structures. To dissociate the effect of structural complexity from the effect of memory cost, we conducted a functional magnetic resonance imaging study of German sentence processing with a 2-way factorial design tapping structural complexity (with/without hierarchical structure, i.e., center-embedding of clauses) and working memory load (long/short distance between syntactically dependent elements; i.e., subject nouns and their respective verbs). Functional imaging data revealed that the processes for structure and memory operate separately but co-operatively in the left inferior frontal gyrus; activities in the LPO increased as a function of structural complexity, whereas activities in the left inferior frontal sulcus (LIFS) were modulated by the distance over which the syntactic information had to be transferred. Diffusion tensor imaging showed that these 2 regions were interconnected through white matter fibers. Moreover, functional coupling between the 2 regions was found to increase during the processing of complex, hierarchically structured sentences. These results suggest a neuroanatomical segregation of syntax-related aspects represented in the LPO from memory-related aspects reflected in the LIFS, which are, however, highly interconnected functionally and anatomically.

  19. The neural basis of functional neuroimaging signal with positron and single-photon emission tomography.

    PubMed

    Sestini, S

    2007-07-01

    Functional imaging techniques such as positron and single-photon emission tomography exploit the relationship between neural activity, energy demand and cerebral blood flow to functionally map the brain. Despite the fact that neurobiological processes are not completely understood, several results have revealed the signals that trigger the metabolic and vascular changes accompanying variations in neural activity. Advances in this field have demonstrated that release of the major excitatory neurotransmitter glutamate initiates diverse signaling processes between neurons, astrocytes and blood perfusion, and that this signaling is crucial for the occurrence of brain imaging signals. Better understanding of the neural sites of energy consumption and the temporal correlation between energy demand, energy consumption and associated cerebrovascular hemodynamics gives novel insight into the potential of these imaging tools in the study of metabolic neurodegenerative disorders.

  20. Design Criteria For Networked Image Analysis System

    NASA Astrophysics Data System (ADS)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance at low cost offered by advances in semiconductor technology. Another key issue is a maturing in understanding of problems and the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above stated issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with orientation toward the hospital environment. The three main areas are image data base management, viewing of image data and image data processing. This is followed by a survey of the current state of the art, covering image display systems, data base techniques, communications networks and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  1. Automated radial basis function neural network based image classification system for diabetic retinopathy detection in retinal images

    NASA Astrophysics Data System (ADS)

    Anitha, J.; Vijila, C. Kezi Selva; Hemanth, D. Jude

    2010-02-01

Diabetic retinopathy (DR) is a chronic eye disease for which early detection is highly essential to avoid any fatal results. Image processing of retinal images emerges as a feasible tool for this early diagnosis. Digital image processing techniques involve image classification, which is a significant technique to detect the abnormality in the eye. Various automated classification systems have been developed in recent years, but most of them lack high classification accuracy. Artificial neural networks are the widely preferred artificial intelligence technique since they yield superior results in terms of classification accuracy. In this work, a Radial Basis Function (RBF) neural network based bi-level classification system is proposed to differentiate abnormal DR images from normal retinal images. The results are analyzed in terms of classification accuracy, sensitivity and specificity. A comparative analysis is performed against a probabilistic classifier, namely the Bayesian classifier, to show the superior nature of the neural classifier. Experimental results are promising for the neural classifier in terms of these performance measures.
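The abstract does not give the network's exact features or training procedure, so the following is only a generic sketch of an RBF-based bi-level (two-class) classifier on toy feature vectors; all names, parameters, and data are illustrative:

```python
import numpy as np

def rbf_train(X, y, centers, sigma):
    """Fit the output weights of an RBF network by least squares.
    Hidden layer: Gaussian radial basis functions around fixed centers."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    H = np.exp(-(d ** 2) / (2 * sigma ** 2))      # hidden-layer activations
    w, *_ = np.linalg.lstsq(H, y, rcond=None)      # linear output layer
    return w

def rbf_predict(X, centers, sigma, w):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    H = np.exp(-(d ** 2) / (2 * sigma ** 2))
    return (H @ w > 0.5).astype(int)               # bi-level (0/1) decision

# Toy two-class data standing in for "normal" vs "abnormal" feature vectors.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.3, size=(50, 2))
abnormal = rng.normal(2.0, 0.3, size=(50, 2))
X = np.vstack([normal, abnormal])
y = np.array([0] * 50 + [1] * 50)

centers = X[::10]                                  # simple fixed-center choice
w = rbf_train(X, y, centers, sigma=1.0)
pred = rbf_predict(X, centers, sigma=1.0, w=w)
accuracy = (pred == y).mean()
```

In practice, center selection (e.g., clustering) and the kernel width strongly affect classification accuracy, which is the performance measure the paper emphasizes.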

  2. Investigating Musical Disorders with Diffusion Tensor Imaging: a Comparison of Imaging Parameters

    PubMed Central

    Loui, Psyche; Schlaug, Gottfried

    2009-01-01

    The Arcuate Fasciculus (AF) is a bundle of white matter traditionally thought to be responsible for language function. However, its role in music is not known. Here we investigate the connectivity of the AF using Diffusion Tensor Imaging (DTI) and show that musically tone-deaf individuals, who show impairments in pitch discrimination, have reduced connectivity in the AF relative to musically normal-functioning control subjects. Results were robust to variations in imaging parameters and emphasize the importance of brain connectivity in para-linguistic processes such as music. PMID:19673766

  3. Dorsolateral prefrontal cortex activation during emotional anticipation and neuropsychological performance in posttraumatic stress disorder.

    PubMed

    Aupperle, Robin L; Allard, Carolyn B; Grimes, Erin M; Simmons, Alan N; Flagan, Taru; Behrooznia, Michelle; Cissell, Shadha H; Twamley, Elizabeth W; Thorp, Steven R; Norman, Sonya B; Paulus, Martin P; Stein, Murray B

    2012-04-01

    Posttraumatic stress disorder (PTSD) has been associated with executive or attentional dysfunction and problems in emotion processing. However, it is unclear whether these two domains of dysfunction are related to common or distinct neurophysiological substrates. To examine the hypothesis that greater neuropsychological impairment in PTSD relates to greater disruption in prefrontal-subcortical networks during emotional anticipation. Case-control, cross-sectional study. General community and hospital and community psychiatric clinics. Volunteer sample of 37 women with PTSD related to intimate partner violence and 34 age-comparable healthy control women. We used functional magnetic resonance imaging (fMRI) to examine neural responses during anticipation of negative and positive emotional images. The Clinician-Administered PTSD Scale was used to characterize PTSD symptom severity. The Wechsler Adult Intelligence Scale, Third Edition, Digit Symbol Test, Delis-Kaplan Executive Function System Color-Word Interference Test, and Wisconsin Card Sorting Test were used to characterize neuropsychological performance. Women with PTSD performed worse on complex visuomotor processing speed (Digit Symbol Test) and executive function (Color-Word Interference Inhibition/Switching subtest) measures compared with control subjects. Posttraumatic stress disorder was associated with greater anterior insula and attenuated lateral prefrontal cortex (PFC) activation during emotional anticipation. Greater dorsolateral PFC activation (anticipation of negative images minus anticipation of positive images) was associated with lower PTSD symptom severity and better visuomotor processing speed and executive functioning. Greater medial PFC and amygdala activation related to slower visuomotor processing speed. During emotional anticipation, women with PTSD show exaggerated activation in the anterior insula, a region important for monitoring internal bodily state. 
Greater dorsolateral PFC response in PTSD patients during emotional anticipation may reflect engagement of cognitive control networks that are beneficial for emotional and cognitive functioning. Novel treatments could be aimed at strengthening the balance between cognitive control (dorsolateral PFC) and affective processing (medial PFC and amygdala) networks to improve overall functioning for PTSD patients.

  4. CNTRICS Imaging Biomarkers Final Task Selection: Long-Term Memory and Reinforcement Learning

    PubMed Central

    Ragland, John D.; Cohen, Neal J.; Cools, Roshan; Frank, Michael J.; Hannula, Deborah E.; Ranganath, Charan

    2012-01-01

    Functional imaging paradigms hold great promise as biomarkers for schizophrenia research as they can detect altered neural activity associated with the cognitive and emotional processing deficits that are so disabling to this patient population. In an attempt to identify the most promising functional imaging biomarkers for research on long-term memory (LTM), the Cognitive Neuroscience Treatment Research to Improve Cognition in Schizophrenia (CNTRICS) initiative selected “item encoding and retrieval,” “relational encoding and retrieval,” and “reinforcement learning” as key LTM constructs to guide the nomination process. This manuscript reports on the outcome of the third CNTRICS biomarkers meeting in which nominated paradigms in each of these domains were discussed by a review panel to arrive at a consensus on which of the nominated paradigms could be recommended for immediate translational development. After briefly describing this decision process, information is presented from the nominating authors describing the 4 functional imaging paradigms that were selected for immediate development. In addition to describing the tasks, information is provided on cognitive and neural construct validity, sensitivity to behavioral or pharmacological manipulations, availability of animal models, psychometric characteristics, effects of schizophrenia, and avenues for future development. PMID:22102094

  5. Field Ground Truthing Data Collector - a Mobile Toolkit for Image Analysis and Processing

    NASA Astrophysics Data System (ADS)

    Meng, X.

    2012-07-01

Field Ground Truthing Data Collector is one of the four key components of the NASA funded ICCaRS project, being developed in Southeast Michigan. The ICCaRS ground truthing toolkit offers comprehensive functions: 1) Field functions, including determining locations through GPS, gathering and geo-referencing visual data, laying out ground control points for AEROKAT flights, measuring the flight distance and height, and entering observations of land cover (and use) and health conditions of ecosystems and environments in the vicinity of the flight field; 2) Server synchronization functions, such as downloading study-area maps, aerial photos and satellite images, uploading and synchronizing field-collected data with the distributed databases, calling the geospatial web services on the server side to conduct spatial querying, image analysis and processing, and receiving the processed results in the field for near-real-time validation; and 3) Social network communication functions for direct technical assistance and pedagogical support, e.g., having video-conference calls in the field with the supporting educators, scientists, and technologists, participating in Webinars, or engaging in discussions with other e-learning portals. This customized software package is being built on Apple iPhone/iPad and Google Maps/Earth. The technical infrastructures, data models, coupling methods between distributed geospatial data processing and field data collector tools, remote communication interfaces, coding schema, and functional flow charts will be illustrated and explained at the presentation. A pilot case study will also be demonstrated.

  6. [Brain Organization of the Preparation for Visual Recognition in Preadolescent Children].

    PubMed

    Farber, D A; Kurganskii, A V; Petrenko, N E

    2015-01-01

The brain organization of the preparation for the perception of incomplete images fragmented to different extents was studied. The functional connections of the ventrolateral and dorsolateral cortical zones with other zones in 10-11-year-old and 11-12-year-old children were examined at three successive stages of the preparation for the perception of incomplete images. These data were compared with those obtained for adults. In order to reveal the effect of preparatory processes on image recognition, we also analyzed the regional event-related potentials. In adults, the functional interaction between the dorsolateral and ventrolateral prefrontal cortex and other cortical zones of the right hemisphere was found to be enhanced at the stage of waiting for a not-yet-recognizable image, while in the left hemisphere the links became stronger shortly before the successful recognition of a stimulus. In children, the stage-related changes in functional interactions are similar in both hemispheres, with a peak of interaction at the stage preceding successful recognition. It was found that in 11-12-year-old children the ventrolateral cortex is involved in both the preparatory stage and recognition processes to a smaller extent than in adults and 10-11-year-old children. At the same time, the group of 11-12-year-old children had a more mature pattern of dorsolateral cortex involvement, which provided more effective recognition of incomplete images in this group as compared with 10-11-year-old children. It is suggested that the features of the brain organization of visual recognition and the preceding preparatory processes in 11-12-year-old children are caused by multidirectional effects of sex hormones on the functioning of different zones of the prefrontal cortex at early stages of sexual maturation.

  7. Aphasia

    MedlinePlus

    ... of speech-generating applications on mobile devices like tablets can also provide an alternative way to communicate ... on using advanced imaging methods, such as functional magnetic resonance imaging (fMRI), to explore how language is processed in ...

  8. Using fuzzy fractal features of digital images for material surface analysis

    NASA Astrophysics Data System (ADS)

    Privezentsev, D. G.; Zhiznyakov, A. L.; Astafiev, A. V.; Pugin, E. V.

    2018-01-01

Edge detection is an important task in image processing. There are many approaches in this area: the Sobel and Canny operators, among others. One of the promising techniques in image processing is the use of fuzzy logic and fuzzy set theory, which can increase processing quality by representing information in its fuzzy form. Most existing fuzzy image processing methods switch to fuzzy sets at very late stages, which leads to the loss of some useful information. In this paper, a novel method of edge detection based on fuzzy image representation and fuzzy pixels is proposed. With this approach, we convert the image to fuzzy form in the first step. Different approaches to this conversion are described. Several membership functions for fuzzy pixel description and requirements for their form and view are given. A novel approach to edge detection based on the Sobel operator and fuzzy image representation is proposed. Experimental testing of the developed method was performed on remote sensing images.
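A minimal sketch of the idea follows, assuming a simple linear membership function; the paper's own membership functions and fuzzy-pixel details are not reproduced here:

```python
import numpy as np

def brightness_membership(img):
    """Map grey levels [0, 255] to a fuzzy 'bright' membership in [0, 1]
    using a simple linear membership function (one of several possible choices)."""
    return np.clip(img / 255.0, 0.0, 1.0)

def sobel_edges(mu):
    """Sobel gradient magnitude computed on fuzzy membership values."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(mu, 1, mode="edge")
    gx = np.zeros_like(mu)
    gy = np.zeros_like(mu)
    for i in range(mu.shape[0]):
        for j in range(mu.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

# Synthetic image: dark left half, bright right half -> one vertical edge.
img = np.zeros((16, 16))
img[:, 8:] = 200
edges = sobel_edges(brightness_membership(img))
```

Converting to membership values first means later fuzzy operations (e.g., combining several membership functions) work on the same [0, 1] scale as the edge response.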

  9. IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING

    NASA Technical Reports Server (NTRS)

    Roth, D. J.

    1994-01-01

IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are other routines, also selected via keyboard. Some of the functions performed by IMAGEP include digitization, storage and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of a cursor; display of the grey level histogram of an image; and display of the variation of grey level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.
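IMAGEP itself is VAX FORTRAN tied to a Grinnell image processor, but two of the functions it lists, contrast expansion and the grey-level histogram, can be sketched in modern Python for illustration (function names are hypothetical, not IMAGEP's):

```python
import numpy as np

def contrast_expand(img, out_min=0.0, out_max=255.0):
    """Contrast expansion: linearly stretch grey levels to the full output range."""
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:
        return np.full(img.shape, out_min)     # flat image: nothing to stretch
    return (img.astype(float) - lo) * (out_max - out_min) / (hi - lo) + out_min

def grey_histogram(img, bins=256):
    """Grey-level histogram of an 8-bit image."""
    counts, _ = np.histogram(img, bins=bins, range=(0, 256))
    return counts

# A low-contrast 8-bit image (grey levels 50..100) stretched to 0..255.
img = np.array([[50, 60], [70, 100]], dtype=np.uint8)
stretched = contrast_expand(img)
hist = grey_histogram(img)
```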

  10. Color image analysis of contaminants and bacteria transport in porous media

    NASA Astrophysics Data System (ADS)

    Rashidi, Mehdi; Dehmeshki, Jamshid; Daemi, Mohammad F.; Cole, Larry; Dickenson, Eric

    1997-10-01

Transport of contaminants and bacteria in aqueous heterogeneous saturated porous systems has been studied experimentally using a novel fluorescent microscopic imaging technique. The approach involves color visualization and quantification of bacterium and contaminant distributions within a transparent porous column. By introducing stained bacteria and an organic dye as a contaminant into the column and illuminating the porous regions with a planar sheet of laser light, contaminant and bacterial transport processes through the porous medium can be observed and measured microscopically. A computer controlled color CCD camera is used to record the fluorescent images as a function of time. These images are recorded by a frame accurate high resolution VCR and are then analyzed using a color image analysis code written in our laboratories. The color images are digitized this way and simultaneous concentration and velocity distributions of both contaminant and bacterium are evaluated as a function of time and pore characteristics. The approach provides a unique dynamic probe to observe these transport processes microscopically. These results are extremely valuable in in-situ bioremediation problems since microscopic particle-contaminant-bacterium interactions are the key to understanding and optimization of these processes.

  11. Video image processing

    NASA Technical Reports Server (NTRS)

    Murray, N. D.

    1985-01-01

Current technology projections indicate a lack of availability of special purpose computing for Space Station applications. Potential functions for video image special purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Also, architectural approaches are being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented into an overall architectural approach that will provide image/vision processing at video rates that is flexible, selectable, and programmable. Information is given in the form of charts, diagrams and outlines.

  12. Images in Language, Media, and Mind.

    ERIC Educational Resources Information Center

    Fox, Roy F., Ed.

    The essays in this collection discuss the "image" as both product and process. Representing such diverse disciplines as rhetoric, composition, clinical psychology, journalism, photography, communication, education, and sociology, the essays describe how images function and how they are linked with language and explore the role of images…

  13. Space Imagery Enhancement Investigations; Software for Processing Middle Atmosphere Data

    DTIC Science & Technology

    2011-12-19

This report summarizes work related to optical superresolution for the ideal incoherent 1D spread function, including eigensystem formulations of the incoherent imaging problem, image registration, multi-frame image reconstruction, and deconvolution.

  14. Potential of PET-MRI for imaging of non-oncologic musculoskeletal disease.

    PubMed

    Kogan, Feliks; Fan, Audrey P; Gold, Garry E

    2016-12-01

    Early detection of musculoskeletal disease leads to improved therapies and patient outcomes, and would benefit greatly from imaging at the cellular and molecular level. As it becomes clear that assessment of multiple tissues and functional processes are often necessary to study the complex pathogenesis of musculoskeletal disorders, the role of multi-modality molecular imaging becomes increasingly important. New positron emission tomography-magnetic resonance imaging (PET-MRI) systems offer to combine high-resolution MRI with simultaneous molecular information from PET to study the multifaceted processes involved in numerous musculoskeletal disorders. In this article, we aim to outline the potential clinical utility of hybrid PET-MRI to these non-oncologic musculoskeletal diseases. We summarize current applications of PET molecular imaging in osteoarthritis (OA), rheumatoid arthritis (RA), metabolic bone diseases and neuropathic peripheral pain. Advanced MRI approaches that reveal biochemical and functional information offer complementary assessment in soft tissues. Additionally, we discuss technical considerations for hybrid PET-MR imaging including MR attenuation correction, workflow, radiation dose, and quantification.

  15. Display management subsystem, version 1: A user's eye view

    NASA Technical Reports Server (NTRS)

    Parker, Dolores

    1986-01-01

    The structure and application functions of the Display Management Subsystem (DMS) are described. The DMS, a subsystem of the Transportable Applications Executive (TAE), was designed to provide a device-independent interface for an image processing and display environment. The system is callable by C and FORTRAN applications, portable to accommodate different image analysis terminals, and easily expandable to meet local needs. Generic applications are also available for performing many image processing tasks.

  16. Functional Imaging and Migraine: New Connections?

    PubMed Central

    Schwedt, Todd J.; Chong, Catherine D.

    2015-01-01

    Purpose of Review Over the last several years, a growing number of brain functional imaging studies have provided insights into mechanisms underlying migraine. This manuscript reviews the recent migraine functional neuroimaging literature and provides recommendations for future studies that will help fill knowledge gaps. Recent Findings Positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) studies have identified brain regions that might be responsible for mediating the onset of a migraine attack and those associated with migraine symptoms. Enhanced activation of brain regions that facilitate processing of sensory stimuli suggests a mechanism by which migraineurs are hypersensitive to visual, olfactory, and cutaneous stimuli. Resting state functional connectivity MRI studies have identified numerous brain regions and functional networks with atypical functional connectivity in migraineurs, suggesting that migraine is associated with aberrant brain functional organization. Summary fMRI and PET studies that have identified brain regions and brain networks that are atypical in migraine have helped to describe the neurofunctional basis for migraine symptoms. Future studies should compare functional imaging findings in migraine to other headache and pain disorders and should explore the utility of functional imaging data as biomarkers for diagnostic and treatment purposes. PMID:25887764

  17. PI2GIS: processing image to geographical information systems, a learning tool for QGIS

    NASA Astrophysics Data System (ADS)

    Correia, R.; Teodoro, A.; Duarte, L.

    2017-10-01

To perform an accurate interpretation of remote sensing images, it is necessary to extract information using different image processing techniques. Nowadays, it has become usual to use image processing plugins to add new capabilities/functionalities to Geographical Information System (GIS) software. The aim of this work was to develop an open source application to automatically process and classify remote sensing images from a set of satellite input data. The application was integrated into a GIS software package (QGIS), automating several image processing steps. The use of QGIS for this purpose is justified since it is easy and quick to develop new plugins using the Python language. This plugin is inspired by the Semi-Automatic Classification Plugin (SCP) developed by Luca Congedo. SCP allows the supervised classification of remote sensing images, the calculation of vegetation indices such as NDVI (Normalized Difference Vegetation Index) and EVI (Enhanced Vegetation Index), and other image processing operations. When analysing SCP, it was realized that a set of operations that are very useful in teaching remote sensing and image processing classes was lacking, such as the visualization of histograms, the application of filters, different image corrections, unsupervised classification and the computation of several environmental indices. The new set of operations included in the PI2GIS plugin can be divided into three groups: pre-processing, processing, and classification procedures. The application was tested using a Landsat 8 OLI image of a northern area of Portugal.
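The NDVI computation mentioned above follows the standard formula NDVI = (NIR - Red) / (NIR + Red); for Landsat 8 OLI, NIR is band 5 and Red is band 4. A minimal sketch with toy reflectance values (not the plugin's own code):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Pixels with zero total reflectance are mapped to 0 to avoid division by zero."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    safe = np.where(denom == 0, 1.0, denom)
    return np.where(denom == 0, 0.0, (nir - red) / safe)

# Toy reflectance values: vegetated pixels reflect strongly in NIR.
nir = np.array([[0.5, 0.4], [0.1, 0.3]])
red = np.array([[0.1, 0.1], [0.1, 0.3]])
index = ndvi(nir, red)
```

NDVI is bounded in [-1, 1]; strongly vegetated pixels approach 1, while bare soil and water sit near or below 0.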

  18. Constraint processing in our extensible language for cooperative imaging system

    NASA Astrophysics Data System (ADS)

    Aoki, Minoru; Murao, Yo; Enomoto, Hajime

    1996-02-01

The extensible WELL (Window-based elaboration language) has been developed using the concept of a common platform, where client and server can communicate with each other with support from a communication manager. This extensible language is based on an object oriented design and introduces constraint processing. Any kind of service in the extensible language, including imaging, is controlled by constraints. Interactive functions between client and server are extended by introducing agent functions, including a request-respond relation. Necessary service integrations are satisfied by cooperative processes using constraints. Constraints are treated similarly to data, because the system should have flexibility in the execution of many kinds of services. A similar control process is defined using intensional logic. There are two kinds of constraints: temporal and modal constraints. By rendering the constraints in predicate format, as relations between attribute values, the validity of entities as data can be warranted. As an imaging example, a processing procedure for interaction between multiple objects is shown as an image application for the extensible system. This paper describes how the procedure proceeds in the system, and how the constraints work for generating moving pictures.

  19. An Approach for Stitching Satellite Images in a Bigdata Mapreduce Framework

    NASA Astrophysics Data System (ADS)

    Sarı, H.; Eken, S.; Sayar, A.

    2017-11-01

In this study we present a two-step map/reduce framework to stitch satellite mosaic images. The proposed system enables recognition and extraction of objects whose parts fall in separate satellite mosaic images. However, this is a time- and resource-consuming process. The major aim of the study is to improve the performance of the image stitching process by utilizing a big data framework. To realize this, we first convert the images into bitmaps (first mapper) and then into String format in the form of 255s and 0s (second mapper), and finally find the best possible matching position of the images with a reduce function.
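The mapper/reducer shape of the pipeline can be mimicked in plain Python. This is an in-process toy, not the authors' Hadoop implementation; the tile names, bitmaps, and the trivial "stitch by row concatenation" reducer are all invented for illustration:

```python
from functools import reduce

# Toy "mosaic tiles": 1 marks object pixels, 0 background (stand-ins for bitmaps).
tiles = {"tile_left": [[0, 1], [0, 1]], "tile_right": [[1, 0], [1, 0]]}

def mapper_to_strings(item):
    """Second-mapper analogue: encode each bitmap row as a string of 255s and 0s."""
    name, bitmap = item
    rows = ["".join("255" if px else "0" for px in row) for row in bitmap]
    return name, rows

def reducer(acc, mapped):
    """Reduce analogue: stitch tiles side by side by concatenating matching rows."""
    name, rows = mapped
    if not acc:
        return rows
    return [a + b for a, b in zip(acc, rows)]

mapped = map(mapper_to_strings, sorted(tiles.items()))
stitched = reduce(reducer, mapped, [])
print(stitched)
```

A real deployment would replace the reducer with the matching-position search the paper describes and run the stages on a MapReduce cluster.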

  20. Multiphoton microscopy for the in-situ investigation of cellular processes and integrity in cryopreservation.

    PubMed

    Doerr, Daniel; Stark, Martin; Ehrhart, Friederike; Zimmermann, Heiko; Stracke, Frank

    2009-08-01

In this study we demonstrate a new noninvasive imaging method to monitor freezing processes in biological samples and to investigate life in the frozen state. It combines a laser scanning microscope with a computer-controlled cryostage. Near-infrared (NIR) femtosecond laser pulses evoke the fluorescence of endogenous fluorophores and fluorescent labels due to multiphoton absorption. The inherent optical nonlinearity of multiphoton absorption allows 3D fluorescence imaging for optical tomography of frozen biological material in-situ. As an example of functional imaging, we use fluorescence lifetime imaging (FLIM) to create images with chemical and physical contrast.

  1. Are Imaging and Lesioning Convergent Methods for Assessing Functional Specialisation? Investigations Using an Artificial Neural Network

    ERIC Educational Resources Information Center

    Thomas, Michael S. C.; Purser, Harry R. M.; Tomlinson, Simon; Mareschal, Denis

    2012-01-01

    This article presents an investigation of the relationship between lesioning and neuroimaging methods of assessing functional specialisation, using synthetic brain imaging (SBI) and lesioning of a connectionist network of past-tense formation. The model comprised two processing "routes": one was a direct route between layers of input and output…

  2. An image processing approach to analyze morphological features of microscopic images of muscle fibers.

    PubMed

    Comin, Cesar Henrique; Xu, Xiaoyin; Wang, Yaming; Costa, Luciano da Fontoura; Yang, Zhong

    2014-12-01

We present an image processing approach to automatically analyze duo-channel microscopic images of muscular fiber nuclei and cytoplasm. Nuclei and cytoplasm play a critical role in determining the health and functioning of muscular fibers, as changes in nuclei and cytoplasm manifest in many diseases such as muscular dystrophy and hypertrophy. Quantitative evaluation of muscle fiber nuclei and cytoplasm is thus of great importance to researchers in musculoskeletal studies. The proposed computational approach consists of image processing steps to segment and delineate cytoplasm and identify nuclei in two-channel images. Morphological operations such as skeletonization are applied to extract the length of cytoplasm for quantification. We tested the approach on real images and found that it can achieve high accuracy, objectivity, and robustness. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Networked vision system using a Prolog controller

    NASA Astrophysics Data System (ADS)

    Batchelor, B. G.; Caton, S. J.; Chatburn, L. T.; Crowther, R. A.; Miller, J. W. V.

    2005-11-01

Prolog offers a very different style of programming compared to conventional languages; it can define object properties and abstract relationships in a way that Java, C, C++, etc. find awkward. In an accompanying paper, the authors describe how a distributed web-based vision system can be built using elements that may even be located on different continents. One particular system of this general type is described here. The top-level controller is a Prolog program, which operates one or more image processing engines. This type of function is natural to Prolog, since it is able to reason logically using symbolic (non-numeric) data. Although Prolog is not suitable for programming image processing functions directly, it is ideal for analysing the results derived by an image processor. This article describes the implementation of two systems, in which a Prolog program controls several image processing engines, a simple robot, a pneumatic pick-and-place arm, LED illumination modules and various mains-powered devices.

  4. Binary Color Vision for Industrial Automation.

    DTIC Science & Technology

    1983-02-28

…A. and Kak, A.: Digital Picture Processing. Academic Press, New York, 1976. (17) Connah, D. M. and Fishbourne, C. A.: "The… An image is defined by a function of 2-D position, say I(m,n), defined at chosen grid points of the image. For an achromatic grey-scale image, the function…

  5. Image Processing Research

    DTIC Science & Technology

    1975-09-30

…systems a linear model results in an object f being mapped into an image g by a point spread function matrix H. Thus, with noise n, g = Hf + n (1). The simplest linear models for imaging systems are given by space invariant point spread functions (SIPSF), in which case H is block circulant. If the linear model is… {I1, …, Ik-1} is a set of two-dimensional indices, each distinct and prior to k. Modeling Procedure: To derive the linear predictor (block LP of figure…

  6. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  7. Processing Cones: A Computational Structure for Image Analysis.

    DTIC Science & Technology

    1981-12-01

…image analysis applications, referred to as a processing cone, is described and sample algorithms are presented. A fundamental characteristic of the structure is its hierarchical organization into two-dimensional arrays of decreasing resolution. In this architecture, a prototypical function is defined on a local window of data and applied uniformly to all windows in a parallel manner. Three basic modes of processing are supported in the cone: reduction operations (upward processing), horizontal operations (processing at a single level) and projection operations (downward…
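The reduction (upward-processing) mode can be sketched as follows; the 2x2 window size and the use of `max` as the window function are illustrative assumptions, not details from the report:

```python
def reduce_level(image, window_fn):
    """Upward (reduction) processing: apply window_fn to each 2x2 window,
    producing the next, lower-resolution level of the cone. Assumes the
    image has even dimensions at each level."""
    h, w = len(image), len(image[0])
    return [[window_fn([image[i][j], image[i][j + 1],
                        image[i + 1][j], image[i + 1][j + 1]])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

level0 = [[1, 2, 3, 4],
          [5, 6, 7, 8],
          [9, 10, 11, 12],
          [13, 14, 15, 16]]
level1 = reduce_level(level0, max)   # 4x4 -> 2x2: [[6, 8], [14, 16]]
level2 = reduce_level(level1, max)   # 2x2 -> 1x1: [[16]]
```

Repeating the reduction yields the hierarchy of decreasing-resolution arrays that gives the cone its name; horizontal and projection operations would apply window functions within or down the levels instead.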

  8. Age-related changes in the activation of the intraparietal sulcus during nonsymbolic magnitude processing: an event-related functional magnetic resonance imaging study.

    PubMed

    Ansari, Daniel; Dhital, Bibek

    2006-11-01

    Numerical magnitude processing is an essential everyday skill. Functional brain imaging studies with human adults have repeatedly revealed that bilateral regions of the intraparietal sulcus are correlated with various numerical and mathematical skills. Surprisingly little, however, is known about the development of these brain representations. In the present study, we used functional neuroimaging to compare the neural correlates of nonsymbolic magnitude judgments between children and adults. Although behavioral performance was similar across groups, in comparison to the group of children the adult participants exhibited greater effects of numerical distance on the left intraparietal sulcus. Our findings are the first to reveal that even the most basic aspects of numerical cognition are subject to age-related changes in functional neuroanatomy. We propose that developmental impairments of number may be associated with atypical specialization of cortical regions underlying magnitude processing.

  9. The social brain in adolescence: Evidence from functional magnetic resonance imaging and behavioural studies

    PubMed Central

    Burnett, Stephanie; Sebastian, Catherine; Kadosh, Kathrin Cohen; Blakemore, Sarah-Jayne

    2015-01-01

    Social cognition is the collection of cognitive processes required to understand and interact with others. The term ‘social brain’ refers to the network of brain regions that underlies these processes. Recent evidence suggests that a number of social cognitive functions continue to develop during adolescence, resulting in age differences in tasks that assess cognitive domains including face processing, mental state inference and responding to peer influence and social evaluation. Concurrently, functional and structural magnetic resonance imaging (MRI) studies show differences between adolescent and adult groups within parts of the social brain. Understanding the relationship between these neural and behavioural observations is a challenge. This review discusses current research findings on adolescent social cognitive development and its functional MRI correlates, then integrates and interprets these findings in the context of hypothesised developmental neurocognitive and neurophysiological mechanisms. PMID:21036192

  10. A new data processing technique for Rayleigh-Taylor instability growth experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Yongteng; Tu, Shaoyong; Miao, Wenyong

Typical face-on experiments for Rayleigh-Taylor instability study involve time-resolved radiography of an accelerated foil, with the line-of-sight of the radiography along the direction of motion. The usual method, which derives perturbation amplitudes from the face-on images, reverses the actual image transmission procedure, so the obtained results will have a large error in the case of large optical depth. In order to improve the accuracy of data processing, a new data processing technique has been developed to process the face-on images. This technique is based on the convolution theorem; refined solutions of optical depth can be achieved by solving equations. Furthermore, we discuss both techniques for image processing, including the influence of the modulation transfer function of the imaging system and the backlighter spatial profile. Besides, we use the two methods to process the experimental results from the Shenguang-II laser facility, and the comparison shows that the new method effectively improves the accuracy of data processing.

  11. A new programming metaphor for image processing procedures

    NASA Technical Reports Server (NTRS)

    Smirnov, O. M.; Piskunov, N. E.

    1992-01-01

Most image processing systems, besides an Application Program Interface (API) which lets users write their own image processing programs, also feature a higher level of programmability. Traditionally, this is a command or macro language, which can be used to build large procedures (scripts) out of simple programs or commands. This approach, a legacy of the teletypewriter, has serious drawbacks. A command language is clumsy when (and if!) it attempts to utilize the capabilities of a multitasking or multiprocessor environment, it is barely adequate for real-time data acquisition and processing, it has a fairly steep learning curve, and the user interface is very inefficient, especially when compared to a graphical user interface (GUI) that systems running under X11 or Windows should otherwise be able to provide. All these difficulties stem from one basic problem: a command language is not a natural metaphor for an image processing procedure. A more natural metaphor, an image processing factory, is described in detail. A factory is a set of programs (applications) that execute separate operations on images, connected by pipes that carry data (images and parameters) between them. The programs function concurrently, processing images as they arrive along pipes, and querying the user for whatever other input they need. From the user's point of view, programming (constructing) factories is a lot like playing with LEGO blocks - much more intuitive than writing scripts. Focus is on some of the difficulties of implementing factory support, most notably the design of an appropriate API. It also shows that factories retain all the functionality of a command language (including loops and conditional branches), while suffering from none of the drawbacks outlined above. Other benefits of factory programming include self-tuning factories and the process of encapsulation, which lets a factory take the shape of a standard application both from the system and the user's point of view, and thus be used as a component of other factories. A bare-bones prototype of factory programming was implemented under the PcIPS image processing system, and a complete version (on a multitasking platform) is under development.
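The factory metaphor, independent programs joined by pipes and running concurrently, can be approximated with Python generators. This is a conceptual sketch only, not the PcIPS implementation; the stage names (`source`, `threshold`, `count_pixels`) and the toy images are invented:

```python
# Generators stand in for concurrently running programs; each stage
# consumes images as they arrive along its "pipe" and yields results onward.

def source(images):
    for img in images:
        yield img

def threshold(stream, level):
    for img in stream:
        yield [[1 if px > level else 0 for px in row] for row in img]

def count_pixels(stream):
    for img in stream:
        yield sum(sum(row) for row in img)

# Wiring the factory: source -> threshold -> count, like LEGO blocks.
images = [[[10, 200], [30, 250]], [[0, 0], [255, 255]]]
pipeline = count_pixels(threshold(source(images), level=128))
print(list(pipeline))
```

Because each stage pulls from the previous one lazily, images flow through the chain one at a time, which mirrors the pipe-connected, concurrent behaviour the abstract describes; encapsulating such a chain behind a single function is the analogue of the paper's encapsulated factory.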

  12. Single image super-resolution via an iterative reproducing kernel Hilbert space method.

    PubMed

    Deng, Liang-Jian; Guo, Weihong; Huang, Ting-Zhu

    2016-11-01

    Image super-resolution, a process to enhance image resolution, has important applications in satellite imaging, high definition television, medical imaging, etc. Many existing approaches use multiple low-resolution images to recover one high-resolution image. In this paper, we present an iterative scheme to solve single image super-resolution problems. It recovers a high quality high-resolution image from solely one low-resolution image without using a training data set. We solve the problem from image intensity function estimation perspective and assume the image contains smooth and edge components. We model the smooth components of an image using a thin-plate reproducing kernel Hilbert space (RKHS) and the edges using approximated Heaviside functions. The proposed method is applied to image patches, aiming to reduce computation and storage. Visual and quantitative comparisons with some competitive approaches show the effectiveness of the proposed method.
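The edge model relies on approximated Heaviside functions. The paper's exact approximation is not reproduced in this abstract; the arctangent form below is one standard smooth choice, with `eps` controlling edge sharpness:

```python
import math

def heaviside_approx(x, eps=0.1):
    """One common smooth approximation of the Heaviside step function,
    H_eps(x) = 0.5 * (1 + (2/pi) * atan(x / eps)).
    Smaller eps gives a sharper transition (a sharper modeled edge)."""
    return 0.5 * (1.0 + (2.0 / math.pi) * math.atan(x / eps))

print(heaviside_approx(-1.0), heaviside_approx(0.0), heaviside_approx(1.0))
```

In the super-resolution setting, shifted and scaled copies of such functions represent the edge component of the intensity function, while the thin-plate RKHS term captures the smooth component.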

  13. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    PubMed

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
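The per-pixel decision underlying the local MAP criterion can be illustrated in isolation. This sketch assumes uniform priors and the invented tissue means/variances below; it shows only the Gaussian log-likelihood comparison, not the level set evolution or bias field estimation:

```python
import math

def map_label(intensity, tissues):
    """Assign the tissue label maximizing the Gaussian log-likelihood
    -log(sigma) - (I - mu)^2 / (2 sigma^2), i.e. MAP with uniform priors.
    `tissues` maps label -> (mu, sigma)."""
    def log_lik(mu, sigma):
        return -math.log(sigma) - (intensity - mu) ** 2 / (2 * sigma ** 2)
    return max(tissues, key=lambda name: log_lik(*tissues[name]))

# Illustrative (made-up) local statistics for three tissue classes.
tissues = {"CSF": (30.0, 10.0), "gray": (100.0, 15.0), "white": (160.0, 12.0)}
print(map_label(95.0, tissues), map_label(170.0, tissues))
```

In the full method these statistics vary from neighborhood to neighborhood, which is what lets the model absorb intensity inhomogeneity via the bias field.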

  14. Moderated histogram equalization, an automatic means of enhancing the contrast in digital light micrographs reversibly.

    PubMed

    Entwistle, A

    2004-06-01

    A means for improving the contrast in the images produced from digital light micrographs is described that requires no intervention by the experimenter: zero-order, scaling, tonally independent, moderated histogram equalization. It is based upon histogram equalization, which often results in digital light micrographs that contain regions that appear to be saturated, negatively biased or very grainy. Here a non-decreasing monotonic function is introduced into the process, which moderates the changes in contrast that are generated. This method is highly effective for all three of the main types of contrast found in digital light micrography: bright objects viewed against a dark background, e.g. fluorescence and dark-ground or dark-field image data sets; bright and dark objects sets against a grey background, e.g. image data sets collected with phase or Nomarski differential interference contrast optics; and darker objects set against a light background, e.g. views of absorbing specimens. Moreover, it is demonstrated that there is a single fixed moderating function, whose actions are independent of the number of elements of image data, which works well with all types of digital light micrographs, including multimodal or multidimensional image data sets. The use of this fixed function is very robust as the appearance of the final image is not altered discernibly when it is applied repeatedly to an image data set. Consequently, moderated histogram equalization can be applied to digital light micrographs as a push-button solution, thereby eliminating biases that those undertaking the processing might have introduced during manual processing. Finally, moderated histogram equalization yields a mapping function and so, through the use of look-up tables, indexes or palettes, the information present in the original data file can be preserved while an image with the improved contrast is displayed on the monitor screen.
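The idea of moderating histogram equalization with a non-decreasing monotonic function can be sketched as follows. The paper's specific fixed moderating function is not given in this abstract, so a simple blend between the equalizing look-up table and the identity (controlled by a `strength` parameter) stands in for it:

```python
import numpy as np

def moderated_equalize(img, strength=0.5, levels=256):
    """Histogram equalization moderated toward the identity mapping.
    `strength` in [0, 1] is a stand-in for the paper's fixed moderating
    function. The result is a non-decreasing look-up table (LUT), so the
    original data can be preserved and only the LUT applied for display."""
    img = np.asarray(img)
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size
    equalized_lut = np.round(cdf * (levels - 1))
    identity_lut = np.arange(levels)
    lut = np.round((1 - strength) * identity_lut
                   + strength * equalized_lut).astype(int)
    return lut[img], lut

img = np.array([[0, 0, 64], [64, 128, 255]], dtype=int)
out, lut = moderated_equalize(img, strength=0.5)
print(out)
```

Because both component mappings are non-decreasing, any convex blend of them is too, which is the property that keeps the contrast changes moderate and the mapping usable as a reversible display LUT.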

  15. Minimisation of Signal Intensity Differences in Distortion Correction Approaches of Brain Magnetic Resonance Diffusion Tensor Imaging.

    PubMed

    Lee, Dong-Hoon; Lee, Do-Wan; Henry, David; Park, Hae-Jin; Han, Bong-Soo; Woo, Dong-Cheol

    2018-04-12

To evaluate the effects of signal intensity differences between the b0 image and diffusion tensor imaging (DTI) in the image registration process. To correct signal intensity differences between the b0 image and DTI data, a simple image intensity compensation (SIMIC) method, which is a b0 image re-calculation process from DTI data, was applied before the image registration. The re-calculated b0 image (b0ext) from each diffusion direction was registered to the b0 image acquired through MR scanning (b0nd) with two types of cost functions, and their transformation matrices were acquired. These transformation matrices were then used to register the DTI data. For quantification, the Dice similarity coefficient (DSC) values, diffusion scalar matrix, and quantified fibre numbers and lengths were calculated. The combined SIMIC method with two cost functions showed the highest DSC value (0.802 ± 0.007). Regarding diffusion scalar values and the numbers and lengths of fibres from the corpus callosum, superior longitudinal fasciculus, and cortico-spinal tract, only using normalised cross correlation (NCC) showed a specific tendency toward lower values in the brain regions. Image-based distortion correction with SIMIC for DTI data would help in image analysis by accounting for signal intensity differences as one additional option for DTI analysis. • We evaluated the effects of signal intensity differences at DTI registration. • The non-diffusion-weighted image re-calculation process from DTI data was applied. • SIMIC can minimise the signal intensity differences at DTI registration.
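The overlap metric used for quantification here has a standard definition, DSC = 2|A ∩ B| / (|A| + |B|). A minimal implementation on binary masks (the toy masks below are invented):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|).
    Ranges from 0 (no overlap) to 1 (identical masks)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / total

a = [[1, 1, 0], [0, 1, 0]]
b = [[1, 0, 0], [0, 1, 1]]
print(dice_coefficient(a, b))
```

In a registration study such as this one, the masks would be brain segmentations of the registered and reference volumes, and a higher DSC indicates better spatial agreement.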

  16. Retrieval of long and short lists from long term memory: a functional magnetic resonance imaging study with human subjects.

    PubMed

    Zysset, S; Müller, K; Lehmann, C; Thöne-Otto, A I; von Cramon, D Y

    2001-11-13

    Previous studies have shown that reaction time in an item-recognition task with both short and long lists is a quadratic function of list length. This suggests that either different memory retrieval processes are implied for short and long lists or an adaptive process is involved. An event-related functional magnetic resonance imaging study with nine subjects and list lengths varying between 3 and 18 words was conducted to identify the underlying neuronal structures of retrieval from long and short lists. For the retrieval and processing of word-lists a single fronto-parietal network, including premotor, left prefrontal, left precuneal and left parietal regions, was activated. With increasing list length, no additional regions became involved in retrieving information from long-term memory, suggesting that not necessarily different, but highly adaptive retrieval processes are involved.

  17. How to design a horizontal patient-focused hospital.

    PubMed

    Murphy, E C; Ruflin, P

    1993-05-01

    Work Imaging is an executive information system for analyzing the cost effectiveness and efficiency of work processes and structures in health care. Advanced Work Imaging relational database technology allows managers and employees to take a sample work activities profile organization-wide. This is married to financial and organizational data to produce images of work within and across all functions, departments, and levels. The images are benchmarked against best practice data to provide insight on the quality and cost efficiency of work practice patterns, from individual roles to departmental skill mix to organization-wide service processes.

  18. Simplified and powerful image processing methods to separate Thai jasmine rice and sticky rice varieties

    NASA Astrophysics Data System (ADS)

    Khondok, Piyoros; Sakulkalavek, Aparporn; Suwansukho, Kajpanya

    2018-03-01

Simplified and powerful image processing procedures to separate the paddy of KHAW DOK MALI 105 (Thai jasmine rice) and the paddy of the sticky rice RD6 variety are proposed. The procedures consist of image thresholding, image chain coding and curve fitting using a polynomial function. From the fitting, three parameters of each variety (perimeter, area, and eccentricity) were calculated. Finally, the overall parameters were determined using principal component analysis. The results show that these procedures can significantly separate the two varieties.
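The three shape features fed into PCA can be computed directly from a binary grain mask. This is a simple sketch, not the authors' chain-code implementation: the perimeter is a crude exposed-edge count, and eccentricity is taken from second-order moments of the pixel coordinates:

```python
import numpy as np

def shape_features(mask):
    """Area, perimeter (4-neighbour exposed-edge estimate) and eccentricity
    (from the covariance of pixel coordinates) of a binary region."""
    mask = np.asarray(mask, dtype=bool)
    ys, xs = np.nonzero(mask)
    area = int(mask.sum())
    padded = np.pad(mask, 1)
    # Count foreground pixels whose neighbour in each direction is background.
    perimeter = 0
    for dy, dx in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        shifted = padded[1 + dy: padded.shape[0] - 1 + dy,
                         1 + dx: padded.shape[1] - 1 + dx]
        perimeter += int(np.sum(mask & ~shifted))
    if area > 2:
        cov = np.cov(np.stack([xs, ys]))
        evals = np.sort(np.linalg.eigvalsh(cov))
        ecc = float(np.sqrt(1 - evals[0] / evals[1])) if evals[1] > 0 else 0.0
    else:
        ecc = 0.0
    return area, perimeter, ecc

print(shape_features([[1, 1], [1, 1]]))  # a square: eccentricity 0
```

Stacking these three numbers per grain gives the feature vectors on which principal component analysis would then be run.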

  19. Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing

    NASA Technical Reports Server (NTRS)

    Logan, Thomas L.; Bryant, Nevin A.

    1987-01-01

    The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Imaged Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of the VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between the VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.

  20. Dual-modality brain PET-CT image segmentation based on adaptive use of functional and anatomical information.

    PubMed

    Xia, Yong; Eberl, Stefan; Wen, Lingfeng; Fulham, Michael; Feng, David Dagan

    2012-01-01

    Dual medical imaging modalities, such as PET-CT, are now a routine component of clinical practice. Medical image segmentation methods, however, have generally only been applied to single modality images. In this paper, we propose the dual-modality image segmentation model to segment brain PET-CT images into gray matter, white matter and cerebrospinal fluid. This model converts PET-CT image segmentation into an optimization process controlled simultaneously by PET and CT voxel values and spatial constraints. It is innovative in the creation and application of the modality discriminatory power (MDP) coefficient as a weighting scheme to adaptively combine the functional (PET) and anatomical (CT) information on a voxel-by-voxel basis. Our approach relies upon allowing the modality with higher discriminatory power to play a more important role in the segmentation process. We compared the proposed approach to three other image segmentation strategies, including PET-only based segmentation, combination of the results of independent PET image segmentation and CT image segmentation, and simultaneous segmentation of joint PET and CT images without an adaptive weighting scheme. Our results in 21 clinical studies showed that our approach provides the most accurate and reliable segmentation for brain PET-CT images. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Joint image registration and fusion method with a gradient strength regularization

    NASA Astrophysics Data System (ADS)

    Lidong, Huang; Wei, Zhao; Jun, Wang

    2015-05-01

Image registration is an essential process for image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion, instead of treating them as two independent processes in the conventional way. To improve the visual quality of a fused image, a gradient strength (GS) regularization is introduced in the cost function of ML. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS brings a clearer fused image, while a smaller target GS makes the fused image smoother and thus restrains noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are polluted by noise. We can obtain the fused image and registration parameters successively by minimizing the cost function using an iterative optimization method. Experimental results show that our method is effective with translation, rotation, and scale parameters in the ranges of [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and noise variances smaller than 300. It is also demonstrated that our method yields a more visually pleasing fused image and higher registration accuracy compared with a state-of-the-art algorithm.
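The controllable-GS idea can be sketched numerically. The abstract does not define GS precisely, so mean gradient magnitude is used below as a stand-in definition, and the quadratic penalty pulling the fused image's GS toward a target is an illustrative form of the regularization term, not the paper's exact cost function:

```python
import numpy as np

def gradient_strength(img):
    """Mean gradient magnitude, used here as a stand-in definition of the
    'gradient strength' (GS) of an image."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    return float(np.mean(np.hypot(gx, gy)))

def gs_penalty(fused, gs_target, weight=1.0):
    """Illustrative regularization term added to the ML cost: penalizes
    deviation of the fused image's GS from the chosen target
    (larger target -> sharper fused image, smaller -> smoother)."""
    return weight * (gradient_strength(fused) - gs_target) ** 2

flat = np.full((8, 8), 10.0)  # a constant image has no edges, so GS == 0
print(gradient_strength(flat), gs_penalty(flat, gs_target=2.0))
```

In the joint method, this penalty would be added to the negative log-likelihood and minimized over both the fusion result and the registration parameters.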

  2. Using normalization 3D model for automatic clinical brain quantitative analysis and evaluation

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping

    2003-05-01

Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain, and has been broadly used in diagnosing brain disorders by clinically quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important to improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied to realign functional medical images to standard structural medical images. Then, the standard 3D brain model, which shows well-defined brain regions, was used, replacing the manual ROIs in the objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method agree with the clinical diagnosis evaluation score, with less than 3% error on average. To sum up, the method obtains precise VOI information automatically from the well-defined standard 3D brain model, sparing the manual drawing of ROIs slice by slice from structural medical images required in the traditional procedure. That is, the method not only provides precise analysis results, but also improves the processing rate for large volumes of medical images in clinical use.

  3. A novel configurable VLSI architecture design of window-based image processing method

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Sang, Hongshi; Shen, Xubang

    2018-03-01

Most window-based image processing architectures can only implement a certain kind of specific algorithm, such as 2D convolution, and therefore lack flexibility and breadth of application. In addition, improper handling of the image boundary can cause loss of accuracy, or consume more logic resources. To address the above problems, this paper proposes a new VLSI architecture for window-based image processing operations, which is configurable and takes the image boundary into consideration. An efficient technique is explored to manage the image borders by overlapping and flushing phases at the end of each row and the end of each frame, which introduces no new delay and reduces the overhead in real-time applications. Reuse of on-chip memory data is maximized in order to reduce the hardware complexity and external bandwidth requirements. Different scalar function and reduction function operations can be performed in pipeline, which supports a variety of window-based image processing applications. Compared with other reported structures, the performance of the new structure is similar to some and superior to others. In particular, compared with the systolic array processor CWP at the same frequency, this structure achieves a speed increase of approximately 12.9%. The proposed parallel VLSI architecture was implemented with SMIC 0.18-μm CMOS technology; the maximum clock frequency, power consumption, and area are 125 MHz, 57 mW, and 104.8K gates, respectively. Furthermore, the processing time is independent of the different window-based algorithms mapped to the structure.

  4. Image Processing Research

    DTIC Science & Technology

    1976-09-30

    Estimation and Detection of Images Degraded by Film Grain Noise - Firouz Naderi 200 5. 3 Image Restoration by Spline Functions...given for the choice of this number: (a) Higher order terms correspond to noise in the image and should be ignored. (b) An analytical...expansion are sufficient to characterize the signal exactly. Results of experimental evaluation on signals containing noise are presented next

  5. Towards real-time diffuse optical tomography for imaging brain functions cooperated with Kalman estimator

    NASA Astrophysics Data System (ADS)

    Wang, Bingyuan; Zhang, Yao; Liu, Dongyuan; Ding, Xuemei; Dan, Mai; Pan, Tiantian; Wang, Yihan; Li, Jiao; Zhou, Zhongxing; Zhang, Limin; Zhao, Huijuan; Gao, Feng

    2018-02-01

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging method that monitors cerebral hemodynamics through optical changes measured at the scalp surface, and it plays an increasingly important role in the psychology and medical imaging communities. Real-time imaging of brain function using NIRS makes it possible to explore sophisticated human brain functions that were previously inaccessible. The Kalman estimator has frequently been combined with modified Beer-Lambert law (MBLL) based optical topology (OT) for real-time brain function imaging. However, the spatial resolution of OT is low, hampering its application to more complicated brain functions. In this paper, we develop a real-time imaging method combining diffuse optical tomography (DOT) with a Kalman estimator, greatly improving the spatial resolution. Instead of presenting a spatially distributed image of the changes in the absorption coefficients at each time point of the recording, the method provides a single image that is updated in real time by the Kalman estimator; each voxel represents the amplitude of the hemodynamic response function (HRF) associated with that voxel. We evaluate the method in simulation experiments, demonstrating that it yields images with more reliable spatial resolution. Furthermore, a statistical analysis is conducted to help decide whether a voxel in the field of view is activated or not.
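
    A hedged sketch of the voxel-wise estimation idea: assuming a scalar random-walk state per voxel (my simplification; the paper's state and observation models are more elaborate), one Kalman predict/update step looks like:

```python
def kalman_update(x, p, z, h, r_noise, q):
    """One Kalman step for a scalar state x (e.g. the HRF amplitude of
    one voxel) with random-walk dynamics, given a new measurement z
    under the observation model z = h*x + noise. Returns the updated
    state estimate and its variance."""
    # predict: state unchanged, uncertainty grows by process noise q
    p = p + q
    # correct: blend prediction and measurement by the Kalman gain
    gain = p * h / (h * h * p + r_noise)
    x = x + gain * (z - h * x)
    p = (1 - gain * h) * p
    return x, p

# track a constant amplitude of 2.0 from repeated measurements
x, p = 0.0, 1.0
for _ in range(50):
    x, p = kalman_update(x, p, z=2.0, h=1.0, r_noise=0.1, q=0.01)
```

    Because the state is updated recursively at each sample, the estimate is available in real time rather than after a full block of data, which is the property the abstract exploits.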

  6. Extended depth of field imaging for high speed object analysis

    NASA Technical Reports Server (NTRS)

    Frost, Keith (Inventor); Ortyn, William (Inventor); Basiji, David (Inventor); Bauer, Richard (Inventor); Liang, Luchuan (Inventor); Hall, Brian (Inventor); Perry, David (Inventor)

    2011-01-01

    A high-speed, high-resolution flow imaging system is modified to achieve extended depth of field imaging. An optical distortion element is introduced into the flow imaging system. Light from an object, such as a cell, is distorted by the distortion element, such that the point spread function (PSF) of the imaging system is invariant across an extended depth of field. The distorted light is spectrally dispersed, and the dispersed light is used to simultaneously generate a plurality of images. The images are detected, and image processing is used to enhance the detected images by compensating for the distortion, to achieve extended depth of field images of the object. The post image processing preferably involves de-convolution, and requires knowledge of the PSF of the imaging system, as modified by the optical distortion element.

  7. Neural correlates of monocular and binocular depth cues based on natural images: a LORETA analysis.

    PubMed

    Fischmeister, Florian Ph S; Bauer, Herbert

    2006-10-01

    Functional imaging studies investigating the perception of depth have relied solely on single types of depth cues based on non-natural stimulus material. To overcome these limitations and to provide a more realistic and complete set of depth cues, natural stereoscopic images were used in this study. Using slow cortical potentials and source localization, we aimed to identify the neural correlates of monocular and binocular depth cues. This study confirms and extends previous functional imaging work, showing that natural images provide a good, reliable, and more realistic alternative to artificial stimuli, and demonstrates the possibility of separating the processing of different depth cues.

  8. POLYSITE - An interactive package for the selection and refinement of Landsat image training sites

    NASA Technical Reports Server (NTRS)

    Mack, Marilyn J. P.

    1986-01-01

    A versatile multifunction package, POLYSITE, developed for Goddard's Land Analysis System, is described; it simplifies the process of interactively selecting and correcting the sites used to study Landsat TM and MSS images. Switching between the zoomed and nonzoomed image, cursor color and shape changes, location display, and bit-plane erasure or color change are global functions that are active at all times. Local functions include manipulation of intensive study areas, new site definition, mensuration, and new image copying. The program is illustrated with the example of a full TM master scene of metropolitan Washington, DC.

  9. Point spread functions and deconvolution of ultrasonic images.

    PubMed

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation.
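
    The Richardson-Lucy scheme the authors found most effective can be sketched as follows, here without their total variation regularization and in 1-D for brevity (my simplification, not their implementation):

```python
def convolve(signal, psf):
    """'Same'-size 1-D convolution with zero padding at the ends."""
    n, m = len(signal), len(psf)
    r = m // 2
    out = []
    for i in range(n):
        s = 0.0
        for j in range(m):
            k = i + j - r
            if 0 <= k < n:
                s += signal[k] * psf[j]
        out.append(s)
    return out

def richardson_lucy(observed, psf, iters=50):
    """Plain Richardson-Lucy deconvolution. The psf is assumed
    symmetric (so correlation equals convolution) and normalized to
    sum 1; est is multiplicatively corrected each iteration."""
    est = [1.0] * len(observed)
    for _ in range(iters):
        blurred = convolve(est, psf)
        ratio = [o / b if b > 1e-12 else 0.0
                 for o, b in zip(observed, blurred)]
        corr = convolve(ratio, psf)
        est = [e * c for e, c in zip(est, corr)]
    return est

# blur a point source with a known psf, then deconvolve it back
psf = [0.25, 0.5, 0.25]
observed = convolve([0.0] * 3 + [10.0] + [0.0] * 3, psf)
restored = richardson_lucy(observed, psf, iters=200)
```

    Each iteration sharpens the estimate toward the point source while preserving non-negativity, the property that makes Richardson-Lucy attractive for intensity images.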

  10. Diminished rostral anterior cingulate activity in response to threat-related events in posttraumatic stress disorder.

    PubMed

    Kim, Minue J; Chey, Jeanyung; Chung, Ain; Bae, Soojeong; Khang, Hyunsoo; Ham, Byungjoo; Yoon, Sujung J; Jeong, Do-Un; Lyoo, In Kyoon

    2008-03-01

    Previous brain imaging studies have reported hyperactivation of the amygdala and hypoactivation of the anterior cingulate in posttraumatic stress disorder (PTSD) patients, which is believed to be an underlying neural mechanism of PTSD symptoms. The current study specifically focuses on abnormal activity of the rostral anterior cingulate, using a paradigm that elicits an unexpected processing conflict caused by salient emotional stimuli. Twelve survivors (seven men and five women) of the Taegu subway fire in 2003, who later developed PTSD, agreed to participate in this study, and twelve healthy volunteers (seven men and five women) were recruited for comparison. Functional brain images of all participants were acquired using functional magnetic resonance imaging while they performed a same-different judgment task, which was modified to elicit an unexpected emotional processing conflict. PTSD patients, compared to comparison subjects, showed decreased rostral anterior cingulate functioning when exposed to situations that induce an unexpected emotional processing conflict. Moreover, PTSD symptom severity was negatively correlated with the level of decrease in rostral anterior cingulate activity. The results of this study provide evidence that rostral anterior cingulate functioning is impaired in PTSD patients during response-conflict situations that involve emotional stimuli.

  11. Evaluation and testing of image quality of the Space Solar Extreme Ultraviolet Telescope

    NASA Astrophysics Data System (ADS)

    Peng, Jilong; Yi, Zhong; Zhou, Shuhong; Yu, Qian; Hou, Yinlong; Wang, Shanshan

    2018-01-01

    For the space solar extreme ultraviolet telescope, the star-point test cannot be performed in the 19.5 nm extreme ultraviolet band because no sufficiently bright light source is available. In this paper, the point spread function of the optical system is calculated to evaluate the imaging performance of the telescope. Taking into account the surface errors produced by actual processing, such as small-grinding-head and magnetorheological polishing, the optical design software Zemax and the data analysis software Matlab are used to compute the system point spread function of the telescope directly. Matlab code generates the required surface error grid data, and these data are loaded onto the specified surface of the telescope system through the Dynamic Data Exchange (DDE) communication technique, which connects Zemax and Matlab. Because different processing methods produce surface errors of different magnitude, distribution, and spatial frequency, their impact on imaging also differs. Therefore, the surface error characteristics of the different machining methods are studied and, combined with each surface's position in the optical system, their influence on image quality is simulated; this is of great significance for a reasonable choice of processing technology. Additionally, we analyze the relationship between surface error and image quality evaluation. To ensure that the final polished mirror meets the image quality requirements, one or several evaluation methods should be chosen according to the spatial frequency characteristics of the surface error.

  12. Neonatal brain resting-state functional connectivity imaging modalities.

    PubMed

    Mohammadi-Nejad, Ali-Reza; Mahmoudzadeh, Mahdi; Hassanpour, Mahlegha S; Wallois, Fabrice; Muzik, Otto; Papadelis, Christos; Hansen, Anne; Soltanian-Zadeh, Hamid; Gelovani, Juri; Nasiriavanaki, Mohammadreza

    2018-06-01

    Infancy is the most critical period in human brain development. Studies demonstrate that subtle brain abnormalities during this stage of life may greatly affect the developmental processes of newborn infants. One of the rapidly developing methods for early characterization of abnormal brain development is functional connectivity of the brain at rest. While the majority of resting-state studies have been conducted using magnetic resonance imaging (MRI), there is clear evidence that resting-state functional connectivity (rs-FC) can also be evaluated using other imaging modalities. The aim of this review is to compare the advantages and limitations of the different modalities used for mapping infants' brain functional connectivity at rest. In addition, we introduce photoacoustic tomography, a novel functional neuroimaging modality, as a complementary modality for functional mapping of infants' brains.

  13. An Empirical Model of Body Image Disturbance Using Behavioral Principles Found in Functional Analytic Psychotherapy and Acceptance and Commitment Therapy

    ERIC Educational Resources Information Center

    Callaghan, Glenn M.; Duenas, Julissa A.; Nadeau, Sarah E.; Darrow, Sabrina M.; Van der Merwe, Jessica; Misko, Jennifer

    2012-01-01

    The literature examining body image disturbance and Body Dysmorphic Disorder (BDD) is fraught with competing theoretical constructions of the etiology and nosology of these problems. Recent studies on various forms of psychopathology suggest that intrapersonal processes, including experiential avoidance, and interpersonal processes such as…

  14. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption

    NASA Astrophysics Data System (ADS)

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a well-known quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems, with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further, we discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is suitable for privacy amplification in quantum key distribution, pseudo-random number generation, and image encryption in terms of various hash and randomness tests. This extends the scope of application of quantum computation and quantum information.

  15. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption

    PubMed Central

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a well-known quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems, with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further, we discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is suitable for privacy amplification in quantum key distribution, pseudo-random number generation, and image encryption in terms of various hash and randomness tests. This extends the scope of application of quantum computation and quantum information. PMID:26823196

  16. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption.

    PubMed

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-29

    Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a well-known quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems, with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further, we discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is suitable for privacy amplification in quantum key distribution, pseudo-random number generation, and image encryption in terms of various hash and randomness tests. This extends the scope of application of quantum computation and quantum information.
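
    A toy illustration of the underlying idea: message bits steer the coin operator of a discrete-time quantum walk, and the final position distribution is quantized into digest bits. All parameters below are ad hoc choices of mine for illustration, not the construction from the paper:

```python
import math

def walk_distribution(message_bits, n_pos=17):
    """Discrete-time quantum walk on a cycle of n_pos sites whose coin
    rotation angle is selected by each message bit; returns the final
    position probability distribution."""
    state = [[0j, 0j] for _ in range(n_pos)]   # amplitudes [pos][coin]
    state[0][0] = 1 / math.sqrt(2)             # balanced initial coin
    state[0][1] = 1j / math.sqrt(2)
    for bit in message_bits:
        theta = math.pi / 4 if bit else math.pi / 3
        c, s = math.cos(theta), math.sin(theta)
        new = [[0j, 0j] for _ in range(n_pos)]
        for p in range(n_pos):
            a0, a1 = state[p]
            b0, b1 = c * a0 + s * a1, s * a0 - c * a1  # unitary coin flip
            new[(p - 1) % n_pos][0] += b0              # coin 0 steps left
            new[(p + 1) % n_pos][1] += b1              # coin 1 steps right
        state = new
    return [abs(a0) ** 2 + abs(a1) ** 2 for a0, a1 in state]

def qw_hash(message_bits, n_pos=17):
    """Quantize the walk's distribution into one digest bit per site."""
    probs = walk_distribution(message_bits, n_pos)
    mean = 1.0 / n_pos
    return tuple(1 if p > mean else 0 for p in probs)
```

    Because the evolution is unitary, the distribution always sums to one, and changing any message bit changes the coin sequence and hence the distribution — the sensitivity a walk-based hash relies on.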

  17. Fast Image Subtraction Using Multi-cores and GPUs

    NASA Astrophysics Data System (ADS)

    Hartung, Steven; Shukla, H.

    2013-01-01

    Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphical processing unit (GPU) technology in a hybrid conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd order spatially-varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of the heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix based application can operate on a single computer, or on an MPI configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.
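
    The subtraction step can be sketched in a drastically simplified form: fitting a single least-squares scale factor in place of OIS's spatially varying convolution kernel. Function and data below are illustrative only, not the pipeline's actual kernel fit:

```python
def difference_image(ref, sci):
    """Fit one least-squares scale factor a matching the reference
    frame to the science frame, then subtract a*ref. Real OIS instead
    fits a spatially varying convolution kernel (e.g. on a Dirac delta
    basis); this scalar version only shows the subtraction idea."""
    a = sum(r * s for r, s in zip(ref, sci)) / sum(r * r for r in ref)
    return [s - a * r for r, s in zip(ref, sci)], a

# a transient source appears in the science frame at pixel 2
ref = [10.0, 12.0, 11.0, 9.0]
sci = [20.0, 24.0, 30.0, 18.0]
resid, scale = difference_image(ref, sci)
```

    After subtraction, static sources largely cancel while the transient stands out in the residual — the per-pixel arithmetic that the GPU pipeline parallelizes.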

  18. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

    Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality; several schemes achieve good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial, since classification accounts for most of the processing time needed to segment an image. The main contribution of this work is to reduce the complexity of the decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used to reduce the inherent redundancy present in huge pixel databases (i.e., images with expert pixel segmentation). Hybrid color space design is also used to improve both the data set reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between the recognition rate and the processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible; moreover, posterior class pixel probabilities are easy to estimate with Platt's method. A new segmentation scheme using probabilistic pixel classification has then been developed. It has several free parameters whose automatic selection must be dealt with, but existing criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with an expert pixel segmentation is required. Another important contribution of this paper is therefore the definition of a new quality criterion for evaluating cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of this new cell segmentation quality criterion produces efficient cell segmentation.
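
    The vector quantization step can be sketched with Lloyd's algorithm; the 1-D version below is my simplification (the paper quantizes color-space pixel vectors) and shows how a large sample set is replaced by k representative prototypes before classifier training:

```python
def vector_quantize(points, k=2, iters=20):
    """Lloyd's algorithm on scalar samples: assign each sample to its
    nearest centroid, then move each centroid to its cluster mean.
    Returns the k sorted prototype values."""
    cents = list(points[:k])            # naive init: first k samples
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - cents[i]))
            clusters[nearest].append(p)
        # empty clusters keep their previous centroid
        cents = [sum(cl) / len(cl) if cl else cents[i]
                 for i, cl in enumerate(clusters)]
    return sorted(cents)
```

    Training the SVM on the prototypes instead of every labeled pixel is what shrinks the decision function and the per-pixel classification time.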

  19. Imaging immune response of skin mast cells in vivo with two-photon microscopy

    NASA Astrophysics Data System (ADS)

    Li, Chunqiang; Pastila, Riikka K.; Lin, Charles P.

    2012-02-01

    Intravital multiphoton microscopy has provided insightful information on the dynamic processes of immune cells in vivo. However, the use of exogenous labeling agents limits its applications, and there has been no method for functional imaging of mast cells, a population of innate tissue-resident immune cells. Mast cells are widely recognized as the effector cells in allergy, and their roles as immunoregulatory cells in certain innate and adaptive immune responses are now being actively investigated. Here we report in vivo imaging of mouse skin mast cells with two-photon microscopy using endogenous tryptophan as the fluorophore. We studied the following processes: 1) mast cell degranulation, the first step in mast cell activation, in which granules are released into the peripheral tissue to trigger downstream reactions; 2) mast cell reconstitution, a procedure commonly used to study mast cell function by comparing data from wild-type mice, mast cell-deficient mice, and mast cell-deficient mice reconstituted with bone marrow-derived mast cells (BMMCs). Imaging BMMC engraftment in tissue reveals mast cell development and the efficiency of BMMC reconstitution. We observed the reconstitution process for 6 weeks in the ear skin of mast cell-deficient Kit w-sh/w-sh mice by two-photon imaging. To our knowledge, this is the first instance of imaging mast cells in vivo with endogenous contrast.

  20. Effects of image processing on the detective quantum efficiency

    NASA Astrophysics Data System (ADS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice, bringing interest in advancing the methodologies for image quality characterization. However, as these methodologies have not been standardized, the results of different studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization; the secondary objective was to evaluate how the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) are affected by the image processing algorithm. The image performance parameters MTF, NPS, and DQE were evaluated using the RQA5 radiographic technique defined by the International Electrotechnical Commission (IEC 62220-1). Computed radiography (CR) images of a hand in the posterior-anterior (PA) projection for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a flat-field (white) image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. All of the modified images considerably influenced the evaluated SNR, MTF, NPS, and DQE, and images modified by post-processing had a higher DQE than the MUSICA=0 image. This suggests that the MUSICA post-processing parameters affect the image when image quality is evaluated. In conclusion, the control parameters of image processing should be accounted for in the same way when characterizing image quality. The results of this study can serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring the MTF, NPS, and DQE.
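
    For reference, a sketch of how the DQE is assembled from the measured quantities, using the standard IEC-style relation DQE(f) = MTF(f)^2 / (q · NNPS(f)); the values passed in are illustrative, not measurements from the study, and normalization conventions vary between laboratories:

```python
def dqe(mtf, nnps, fluence):
    """Detective quantum efficiency per spatial frequency sample.
    mtf: list of MTF values, nnps: normalized noise power spectrum at
    the same frequencies, fluence: incident photon fluence q per unit
    area (e.g. photons/mm^2 for the RQA5 beam quality)."""
    return [m * m / (fluence * n) for m, n in zip(mtf, nnps)]
```

    Reading the relation off the code: a processing step that boosts the MTF but amplifies the NNPS by the same factor squared leaves the DQE unchanged, which is why DQE, not MTF alone, is the detector efficiency metric.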

  1. Three dimensional single molecule localization using a phase retrieved pupil function

    PubMed Central

    Liu, Sheng; Kromann, Emil B.; Krueger, Wesley D.; Bewersdorf, Joerg; Lidke, Keith A.

    2013-01-01

    Localization-based superresolution imaging is dependent on finding the positions of individual fluorophores in a sample by fitting the observed single-molecule intensity pattern to the microscope point spread function (PSF). For three-dimensional imaging, system-specific aberrations of the optical system can lead to inaccurate localizations when the PSF model does not account for these aberrations. Here we describe the use of phase-retrieved pupil functions to generate a more accurate PSF and therefore more accurate 3D localizations. The complex-valued pupil function contains information about the system-specific aberrations and can thus be used to generate the PSF for arbitrary defocus. Further, it can be modified to include depth dependent aberrations. We describe the phase retrieval process, the method for including depth dependent aberrations, and a fast fitting algorithm using graphics processing units. The superior localization accuracy of the pupil function generated PSF is demonstrated with dual focal plane 3D superresolution imaging of biological structures. PMID:24514501
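
    The pupil-to-PSF step can be sketched as follows: the PSF is the squared magnitude of the Fourier transform of the complex pupil function, with defocus entering as a quadratic phase across the pupil. The naive O(N^4) DFT below is a toy stand-in for the FFT-based computation one would actually use, and the defocus parameterization is mine, not the paper's:

```python
import cmath
import math

def psf_from_pupil(pupil, defocus=0.0):
    """PSF as |DFT(pupil * defocus phase)|^2 for a small n x n complex
    pupil grid. defocus scales a quadratic phase centered on the
    pupil; defocus=0 gives the in-focus PSF."""
    n = len(pupil)
    c = (n - 1) / 2
    # apply defocus as a quadratic phase across the pupil
    p = [[pupil[y][x] * cmath.exp(
              1j * defocus * (((x - c) ** 2 + (y - c) ** 2) / (c * c)))
          for x in range(n)] for y in range(n)]
    psf = [[0.0] * n for _ in range(n)]
    for v in range(n):
        for u in range(n):
            s = 0j
            for y in range(n):
                for x in range(n):
                    s += p[y][x] * cmath.exp(
                        -2j * math.pi * (u * x + v * y) / n)
            psf[v][u] = abs(s) ** 2
    return psf
```

    Re-evaluating this with different defocus phases is what lets a phase-retrieved pupil generate the PSF at arbitrary axial positions for 3D fitting.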

  2. Image Processing, Coding, and Compression with Multiple-Point Impulse Response Functions.

    NASA Astrophysics Data System (ADS)

    Stossel, Bryan Joseph

    1995-01-01

    Aspects of image processing, coding, and compression with multiple-point impulse response functions are investigated. Topics considered include characterization of the corresponding random-walk transfer function, image recovery for images degraded by the multiple-point impulse response, and the application of the blur function to image coding and compression. It is found that although the zeros of the real and imaginary parts of the random-walk transfer function occur in continuous, closed contours, the zeros of the transfer function occur at isolated spatial frequencies. Theoretical calculations of the average number of zeros per area are in excellent agreement with experimental results obtained from computer counts of the zeros. The average number of zeros per area is proportional to the standard deviations of the real part of the transfer function as well as the first partial derivatives. Statistical parameters of the transfer function are calculated including the mean, variance, and correlation functions for the real and imaginary parts of the transfer function and their corresponding first partial derivatives. These calculations verify the assumptions required in the derivation of the expression for the average number of zeros. Interesting results are found for the correlations of the real and imaginary parts of the transfer function and their first partial derivatives. The isolated nature of the zeros in the transfer function and its characteristics at high spatial frequencies result in largely reduced reconstruction artifacts and excellent reconstructions are obtained for distributions of impulses consisting of 25 to 150 impulses. The multiple-point impulse response obscures original scenes beyond recognition. This property is important for secure transmission of data on many communication systems. The multiple-point impulse response enables the decoding and restoration of the original scene with very little distortion. 
Images prefiltered by the random-walk transfer function yield greater compression ratios than are obtained for the original scene. The multiple-point impulse response decreases the bit rate approximately 40-70% and affords near distortion-free reconstructions. Due to the lossy nature of transform-based compression algorithms, noise reduction measures must be incorporated to yield acceptable reconstructions after decompression.

  3. Small Interactive Image Processing System (SMIPS) system description

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as interactive graphic device. The input language in the form of character strings or attentions from keys and light pen is interpreted and causes processing of built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given and characteristics, structure and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.

  4. Quantitative analysis of cardiovascular MR images.

    PubMed

    van der Geest, R J; de Roos, A; van der Wall, E E; Reiber, J H

    1997-06-01

    The diagnosis of cardiovascular disease requires the precise assessment of both morphology and function. Nearly all aspects of cardiovascular function and flow can be quantified nowadays with fast magnetic resonance (MR) imaging techniques. Conventional and breath-hold cine MR imaging allow the precise and highly reproducible assessment of global and regional left ventricular function. During the same examination, velocity encoded cine (VEC) MR imaging provides measurements of blood flow in the heart and great vessels. Quantitative image analysis often still relies on manual tracing of contours in the images. Reliable automated or semi-automated image analysis software would be very helpful to overcome the limitations associated with the manual and tedious processing of the images. Recent progress in MR imaging of the coronary arteries and myocardial perfusion imaging with contrast media, along with the further development of faster imaging sequences, suggest that MR imaging could evolve into a single technique ('one stop shop') for the evaluation of many aspects of heart disease. As a result, it is very likely that the need for automated image segmentation and analysis software algorithms will further increase. In this paper the developments directed towards the automated image analysis and semi-automated contour detection for cardiovascular MR imaging are presented.

  5. Process simulation in digital camera system

    NASA Astrophysics Data System (ADS)

    Toadere, Florin

    2012-06-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulation covers the conversion from light to a numerical signal as well as the color processing and rendering. We consider the image acquisition system to be linear, shift invariant, and axial, with light propagating orthogonally to the system. We use a spectral image processing algorithm to simulate the radiometric properties of a digital camera, taking into consideration the transmittances of the light source, lenses, and filters and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution of the point spread functions of the optical components; we use a Cooke triplet, the aperture, the light fall-off, and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion, and JPG compression. We reconstruct the noisy, blurred image by blending differently exposed images in order to reduce photon shot noise, and we also filter the fixed-pattern noise and sharpen the image. Then come the color processing blocks: white balancing, color correction, gamma correction, and conversion from the XYZ color space to the RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design, and many other configurations of blocks can be used in it.
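
    Two of the color-processing blocks mentioned above, white balancing and gamma correction, can be sketched as below. Gray-world balancing is my illustrative choice — the paper does not specify its white-balance rule:

```python
def white_balance_and_gamma(rgb_pixels, gamma=2.2):
    """Gray-world white balance followed by gamma encoding. Input is a
    list of linear RGB triples in [0, 1]; each channel is scaled so the
    channel means agree, then encoded with v^(1/gamma)."""
    n = len(rgb_pixels)
    means = [sum(p[c] for p in rgb_pixels) / n for c in range(3)]
    gray = sum(means) / 3
    gains = [gray / m if m > 0 else 1.0 for m in means]
    out = []
    for p in rgb_pixels:
        balanced = [min(p[c] * gains[c], 1.0) for c in range(3)]
        out.append(tuple(v ** (1 / gamma) for v in balanced))
    return out
```

    On a scene with a uniform red cast, the gains equalize the channel means and the gamma curve then redistributes the linear values toward the display's perceptual response.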

  6. Medication Overuse Headache: Pathophysiological Insights from Structural and Functional Brain MRI Research.

    PubMed

    Schwedt, Todd J; Chong, Catherine D

    2017-07-01

    Research imaging of brain structure and function has helped to elucidate the pathophysiology of medication overuse headache (MOH). This is a narrative review of imaging research studies that have investigated brain structural and functional alterations associated with MOH. Studies included in this review have investigated abnormal structure and function of pain processing regions in people with MOH, functional patterns that might predispose individuals to development of MOH, similarity of brain functional patterns in patients with MOH to those found in people with addiction, brain structure that could predict headache improvement following discontinuation of the overused medication, and changes in brain structure and function after discontinuation of medication overuse. MOH is associated with atypical structure and function of brain regions responsible for pain processing as well as brain regions that are commonly implicated in addiction. Several studies have shown "normalization" of structure and function in pain processing regions following discontinuation of the overused medication and resolution of MOH. However, some of the abnormalities in regions also implicated in addiction tend to persist following discontinuation of the overused medication, suggesting that they are a brain trait that predisposes certain individuals to medication overuse and MOH. © 2017 American Headache Society.

  7. GUI for Coordinate Measurement of an Image for the Estimation of Geometric Distortion of an Opto-electronic Display System

    NASA Astrophysics Data System (ADS)

    Saini, Surender Singh; Sardana, Harish Kumar; Pattnaik, Shyam Sundar

    2017-06-01

    Conventional image editing software, in combination with other techniques, is not only difficult to apply to an image but also permits a user to perform only basic functions, one at a time. However, image processing algorithms and photogrammetric systems have been developed in the recent past for real-time pattern recognition applications. A graphical user interface (GUI) is developed which can perform multiple functions simultaneously for the analysis and estimation of geometric distortion in an image with reference to the corresponding distorted image. The GUI measures, records, and visualizes the performance metric of the X/Y coordinates of one image over the other. The various keys and icons provided in the utility extract the coordinates of the distortion-free reference image and of the image with geometric distortion. The error between these corresponding points gives the measure of distortion and is also used to evaluate the correction parameters for image distortion. As the GUI minimizes human interference in the process of geometric correction, its execution requires only the use of the icons and keys provided in the utility; this technique gives swifter and more accurate results than other conventional methods for the measurement of the X/Y coordinates of an image.
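    The distortion measure described (error between corresponding X/Y coordinates of the reference and distorted images) reduces to simple vector arithmetic. A sketch with hypothetical coordinate data, not taken from the paper:

```python
import numpy as np

# Hypothetical corresponding X/Y coordinates extracted from the distortion-free
# reference image and from the geometrically distorted image
ref  = np.array([[10.0, 10.0], [100.0, 10.0], [10.0, 100.0], [100.0, 100.0]])
dist = np.array([[10.5,  9.8], [101.2, 10.4], [ 9.6, 100.9], [ 99.1, 101.3]])

errors = np.linalg.norm(dist - ref, axis=1)   # per-point Euclidean error (pixels)
rms = float(np.sqrt(np.mean(errors ** 2)))    # overall RMS distortion measure
worst = float(errors.max())                   # worst-case displacement
```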

  8. The Application of Virtex-II Pro FPGA in High-Speed Image Processing Technology of Robot Vision Sensor

    NASA Astrophysics Data System (ADS)

    Ren, Y. J.; Zhu, J. G.; Yang, X. Y.; Ye, S. H.

    2006-10-01

    The Virtex-II Pro FPGA is applied to the vision sensor tracking system of an IRB2400 robot. The hardware platform, which undertakes the task of improving SNR and compressing data, is constructed using the high-speed image processing capability of the FPGA. The lower-level image-processing algorithm is realized by combining the FPGA fabric and the embedded CPU. The speed of image processing is accelerated by the introduction of the FPGA and CPU, and the embedded CPU makes it easy to realize the logic design of the interface. Some key techniques are presented in the text, such as the read-write process, template matching, and convolution, and some modules are simulated as well. Finally, a comparison is carried out among implementations using this design, a PC, and a DSP. Because the core of the high-speed image processing system is an FPGA chip whose functions can be conveniently updated, the measurement system is, to a degree, intelligent.

  9. A Functional Magnetic Resonance Imaging Study of Foreign-Language Vocabulary Learning Enhanced by Phonological Rehearsal: The Role of the Right Cerebellum and Left Fusiform Gyrus

    ERIC Educational Resources Information Center

    Makita, Kai; Yamazaki, Mika; Tanabe, Hiroki C.; Koike, Takahiko; Kochiyama, Takanori; Yokokawa, Hirokazu; Yoshida, Haruyo; Sadato, Norihiro

    2013-01-01

    Psychological research suggests that foreign-language vocabulary acquisition recruits the phonological loop for verbal working memory. To depict the neural underpinnings and shed light on the process of foreign language learning, we conducted functional magnetic resonance imaging of Japanese participants without previous exposure to the Uzbek…

  10. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
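    Among the DSP functions listed, color interpolation reconstructs a full RGB image from the sensor's Bayer mosaic. A minimal nearest-neighbour sketch is shown below; this is my own illustration of the general technique (assuming an RGGB layout and even image dimensions), not the paper's low-complexity algorithm:

```python
import numpy as np

def demosaic_rggb_nn(raw):
    """Nearest-neighbour demosaic of an RGGB Bayer mosaic: each 2x2 tile
    [R G; G B] is expanded into a full-colour 2x2 block (greens averaged).
    Assumes even height and width."""
    raw = np.asarray(raw, dtype=float)
    h, w = raw.shape
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    rgb = np.zeros((h, w, 3))
    # replicate each per-tile sample over the whole 2x2 tile
    rgb[..., 0] = np.kron(r, np.ones((2, 2)))
    rgb[..., 1] = np.kron((g1 + g2) / 2.0, np.ones((2, 2)))
    rgb[..., 2] = np.kron(b, np.ones((2, 2)))
    return rgb

raw = np.array([[10.0, 20.0],
                [30.0, 40.0]])
out = demosaic_rggb_nn(raw)
```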

  11. Comparison of the signal-to-noise characteristics of quantum versus thermal ghost imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Sullivan, Malcolm N.; Chan, Kam Wai Clifford; Boyd, Robert W.

    2010-11-15

    We present a theoretical comparison of the signal-to-noise characteristics of quantum versus thermal ghost imaging. We first calculate the signal-to-noise ratio of each process in terms of its controllable experimental conditions. We show that a key distinction is that a thermal ghost image always resides on top of a large background; the fluctuations in this background constitute an intrinsic noise source for thermal ghost imaging. In contrast, there is a negligible intrinsic background to a quantum ghost image. However, for practical reasons involving achievable illumination levels, acquisition times for thermal ghost images are often much shorter than those for quantum ghost images. We provide quantitative predictions for the conditions under which each process provides superior performance. Our conclusion is that each process can provide useful functionality, although under complementary conditions.

  12. Segregating the core computational faculty of human language from working memory

    PubMed Central

    Makuuchi, Michiru; Bahlmann, Jörg; Anwander, Alfred; Friederici, Angela D.

    2009-01-01

    In contrast to simple structures in animal vocal behavior, hierarchical structures such as center-embedded sentences manifest the core computational faculty of human language. Previous artificial grammar learning studies found that the left pars opercularis (LPO) subserves the processing of hierarchical structures. However, it is not clear whether this area is activated by the structural complexity per se or by the increased memory load entailed in processing hierarchical structures. To dissociate the effect of structural complexity from the effect of memory cost, we conducted a functional magnetic resonance imaging study of German sentence processing with a 2-way factorial design tapping structural complexity (with/without hierarchical structure, i.e., center-embedding of clauses) and working memory load (long/short distance between syntactically dependent elements; i.e., subject nouns and their respective verbs). Functional imaging data revealed that the processes for structure and memory operate separately but co-operatively in the left inferior frontal gyrus; activities in the LPO increased as a function of structural complexity, whereas activities in the left inferior frontal sulcus (LIFS) were modulated by the distance over which the syntactic information had to be transferred. Diffusion tensor imaging showed that these 2 regions were interconnected through white matter fibers. Moreover, functional coupling between the 2 regions was found to increase during the processing of complex, hierarchically structured sentences. These results suggest a neuroanatomical segregation of syntax-related aspects represented in the LPO from memory-related aspects reflected in the LIFS, which are, however, highly interconnected functionally and anatomically. PMID:19416819

  13. Optimal Binarization of Gray-Scaled Digital Images via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A. (Inventor); Klinko, Steven J. (Inventor)

    2007-01-01

    A technique for finding an optimal threshold for binarization of a gray scale image employs fuzzy reasoning. A triangular membership function is employed which is dependent on the degree to which the pixels in the image belong to either the foreground class or the background class. Use of a simplified linear fuzzy entropy factor function facilitates short execution times and use of membership values between 0.0 and 1.0 for improved accuracy. To improve accuracy further, the membership function employs lower and upper bound gray level limits that can vary from image to image and are selected to be equal to the minimum and the maximum gray levels, respectively, that are present in the image to be converted. To identify the optimal binarization threshold, an iterative process is employed in which different possible thresholds are tested and the one providing the minimum fuzzy entropy measure is selected.
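    The iterative threshold search can be sketched as follows. The membership function here is a simplified distance-to-class-mean form rather than the patented triangular membership with a linear fuzzy entropy factor, so treat it as illustrative only; the image-dependent gray-level bounds follow the abstract's description.

```python
import numpy as np

def fuzzy_threshold(gray):
    """Pick the binarization threshold minimizing a fuzzy entropy measure
    (a sketch in the spirit of the method, not its exact formulation)."""
    g = gray.ravel().astype(float)
    gmin, gmax = g.min(), g.max()        # image-dependent bounds, as described
    span = max(gmax - gmin, 1e-12)
    best_t, best_s = gmin, np.inf
    for t in np.unique(g)[:-1]:          # candidate thresholds
        fg, bg = g[g > t], g[g <= t]
        mu = np.empty_like(g)
        # membership = 1 - normalized distance to the pixel's class mean
        mu[g > t]  = 1.0 - np.abs(g[g > t]  - fg.mean()) / span
        mu[g <= t] = 1.0 - np.abs(g[g <= t] - bg.mean()) / span
        mu = np.clip(mu, 1e-9, 1 - 1e-9)
        # Shannon-style fuzzy entropy of the membership map
        s = float(np.mean(-mu * np.log(mu) - (1 - mu) * np.log(1 - mu)))
        if s < best_s:
            best_s, best_t = s, t
    return best_t

gray = np.array([[  0,   0,  10,  10],
                 [  0,   0,  10,  10],
                 [200, 200, 255, 255],
                 [200, 200, 255, 255]])
t = fuzzy_threshold(gray)
```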

  14. Fuzzy geometry, entropy, and image information

    NASA Technical Reports Server (NTRS)

    Pal, Sankar K.

    1991-01-01

    Presented here are various uncertainty measures arising from grayness ambiguity and spatial ambiguity in an image, and their possible applications as image information measures. Definitions are given of an image in the light of fuzzy set theory, and of information measures and tools relevant for processing/analysis, e.g., fuzzy geometrical properties, correlation, bound functions and entropy measures. Also given is a formulation of algorithms, along with management of uncertainties, for segmentation, object extraction, and edge detection. The output obtained here is both fuzzy and nonfuzzy. Ambiguity in the evaluation and assessment of membership functions is also described.

  15. Study on Over-Sampling for Imager

    NASA Technical Reports Server (NTRS)

    Kigawa, Seiichiro; Sullivan, Pamela C.

    1998-01-01

    This report describes the potential improvement of the effective ground resolution of the MTSAT (Multi-functional Transport Satellite) Imager. The IFOV (Instantaneous Field of View) of the MTSAT Imager is 4 km for infrared and 1 km for visible. A combination of several images acquired by the MTSAT Imager could generate 2 km-latticed infrared images. Furthermore, it is possible to generate an effective 2 km IFOV image by enhancement of the 2 km-latticed image using digital signal processing. This report also describes the on-orbit demonstration of this concept.

  16. Processing techniques for digital sonar images from GLORIA.

    USGS Publications Warehouse

    Chavez, P.S.

    1986-01-01

    Image processing techniques have been developed to handle data from one of the newest members of the remote sensing family of digital imaging systems. This paper discusses software to process data collected by the GLORIA (Geological Long Range Inclined Asdic) sonar imaging system, designed and built by the Institute of Oceanographic Sciences (IOS) in England, to correct for both geometric and radiometric distortions that exist in the original 'raw' data. Preprocessing algorithms that are GLORIA-specific include corrections for slant-range geometry, water column offset, aspect ratio distortion, changes in the ship's velocity, speckle noise, and shading problems caused by the power drop-off which occurs as a function of range.-from Author
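    The slant-range correction mentioned above has a simple closed form under a flat-seafloor assumption, sketched below. This is an illustrative reconstruction, not the USGS software; `altitude` is the assumed height of the tow vehicle above the seafloor.

```python
import numpy as np

def slant_to_ground_range(slant_range, altitude):
    """Convert side-scan sonar slant ranges to ground ranges assuming a flat
    seafloor: ground = sqrt(slant^2 - altitude^2). Samples inside the water
    column (slant range < altitude) carry no seafloor signal and become NaN."""
    sr = np.asarray(slant_range, dtype=float)
    out = np.full_like(sr, np.nan)
    valid = sr >= altitude
    out[valid] = np.sqrt(sr[valid] ** 2 - altitude ** 2)
    return out

ground = slant_to_ground_range([2.0, 5.0, 13.0], altitude=3.0)
```

Resampling each scan line onto this ground-range axis removes both the slant-range geometry and the water-column offset named in the abstract.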

  17. Finite grade pheromone ant colony optimization for image segmentation

    NASA Astrophysics Data System (ADS)

    Yuanjing, F.; Li, Y.; Liangjun, K.

    2008-06-01

    By combining the decision process of ant colony optimization (ACO) with the multistage decision process of image segmentation based on an active contour model (ACM), an algorithm called finite grade ACO (FACO) for image segmentation is proposed. This algorithm classifies pheromone into finite grades; updating of the pheromone is achieved by changing the grades, and the updated quantity of pheromone is independent of the objective function. The algorithm, which provides a new approach to obtaining precise contours, is proved to converge to the global optimal solutions linearly by means of finite Markov chains. Segmentation experiments with ultrasound heart images show the effectiveness of the algorithm. A comparison of results for segmentation of left-ventricle images shows that the ACO approach to image segmentation is more effective than the GA approach, and the new pheromone updating strategy shows good time performance in the optimization process.

  18. Analysis of autostereoscopic three-dimensional images using multiview wavelets.

    PubMed

    Saveljev, Vladimir; Palchikova, Irina

    2016-08-10

    We propose that multiview wavelets can be used in processing multiview images. The reference functions for the synthesis/analysis of multiview images are described. The synthesized binary images were observed experimentally as three-dimensional visual images. The symmetric multiview B-spline wavelets are proposed. The locations recognized in the continuous wavelet transform correspond to the layout of the test objects. The proposed wavelets can be applied to the multiview, integral, and plenoptic images.

  19. Color image encryption by using Yang-Gu mixture amplitude-phase retrieval algorithm in gyrator transform domain and two-dimensional Sine logistic modulation map

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Liu, Benqing; Wang, Qiang; Li, Ye; Liang, Junli

    2015-12-01

    A color image encryption scheme is proposed based on the Yang-Gu mixture amplitude-phase retrieval algorithm and a two-coupled logistic map in the gyrator transform domain. First, the color plaintext image is decomposed into red, green and blue components, which are scrambled individually by three random sequences generated using the two-dimensional Sine logistic modulation map. Second, each scrambled component is encrypted into a real-valued function with a stationary white noise distribution in the iterative amplitude-phase retrieval process in the gyrator transform domain, and the three resulting functions are taken as the red, green and blue channels of the color ciphertext image. The ciphertext image is thus a real-valued function and more convenient to store and transmit. In the encryption and decryption processes, the chaotic random phase mask generated from the logistic map is employed as the phase key, which means that only the initial values are used as the private key, making key management highly convenient. Meanwhile, the security of the cryptosystem is greatly enhanced by the high sensitivity of the private keys. Simulation results are presented to prove the security and robustness of the proposed scheme.
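    The chaotic scrambling step can be illustrated with a classic 1-D logistic map standing in for the paper's 2-D Sine logistic modulation map. This is a simplified sketch, not the authors' scheme; the initial value `x0` plays the role of the private key, and the function names are my own.

```python
import numpy as np

def logistic_sequence(x0, n, mu=3.99):
    """Classic 1-D logistic map x_{k+1} = mu * x_k * (1 - x_k), chaotic for
    mu near 4; the paper uses a coupled 2-D Sine-modulated variant."""
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = mu * x * (1.0 - x)
        xs[k] = x
    return xs

def scramble(channel, x0):
    """Permute a channel's pixels by the ranking of a chaotic sequence."""
    flat = channel.ravel()
    perm = np.argsort(logistic_sequence(x0, flat.size))
    return flat[perm].reshape(channel.shape), perm

def unscramble(scrambled, perm):
    """Invert the permutation to recover the original channel."""
    flat = np.empty_like(scrambled.ravel())
    flat[perm] = scrambled.ravel()
    return flat.reshape(scrambled.shape)

channel = np.arange(16).reshape(4, 4)
s, perm = scramble(channel, x0=0.3456)
recovered = unscramble(s, perm)
```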

  20. Binary partition tree analysis based on region evolution and its application to tree simplification.

    PubMed

    Lu, Huihai; Woods, John C; Ghanbari, Mohammed

    2007-04-01

    Pyramid image representations via tree structures are recognized methods for region-based image analysis. Binary partition trees can be applied which document the merging process, with small details found at the bottom levels and larger ones close to the root. Hindsight of the merging process is stored within the tree structure and provides the change history of an image property from the leaf to the root node. In this work, the change histories are modelled by evolvement functions and their second-order statistics are analyzed by using a knee function. Knee values show the reluctance of each merge. We have systematically formulated these findings to provide a novel framework for binary partition tree analysis, where tree simplification is demonstrated. Based on an evolvement function, for each upward path in a tree, the tree node associated with the first reluctant merge is considered as a pruning candidate. The result is a simplified version providing a reduced solution space and still complying with the definition of a binary tree. The experiments show that image details are preserved whilst the number of nodes is dramatically reduced. The result is also an image filtering tool that preserves object boundaries and has applications in segmentation.

  1. Image Algebra Matlab language version 2.3 for image processing and compression research

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric

    2010-08-01

    Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at the University of Florida over more than 15 years, beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with implementations based on the languages FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of iac++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation of IAM to include polymorphic operations over different point sets, as well as recursive convolution operations and functional composition. We also show how image algebra and IAM can be employed in image processing and compression research, as well as algorithm development and analysis.
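    The image-template convolution operation central to image algebra can be sketched in array terms. This is my own numpy illustration of the general idea, not IAM or iac++ code, and `image_template_convolve` is a hypothetical name:

```python
import numpy as np

def image_template_convolve(image, template):
    """Slide a small template over the image with zero padding, accumulating
    the weighted neighbourhood sum at each pixel (an image-template product
    in the image-algebra sense; for symmetric templates this equals
    convolution)."""
    th, tw = template.shape
    ph, pw = th // 2, tw // 2
    padded = np.pad(np.asarray(image, dtype=float), ((ph, ph), (pw, pw)))
    h, w = image.shape
    out = np.zeros((h, w))
    for dy in range(th):
        for dx in range(tw):
            out += template[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

img = np.arange(12.0).reshape(3, 4)
identity = np.zeros((3, 3))
identity[1, 1] = 1.0
same = image_template_convolve(img, identity)
```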

  2. A Model-Based Approach for Microvasculature Structure Distortion Correction in Two-Photon Fluorescence Microscopy Images

    PubMed Central

    Dao, Lam; Glancy, Brian; Lucotte, Bertrand; Chang, Lin-Ching; Balaban, Robert S; Hsu, Li-Yueh

    2015-01-01

    This paper investigates a post-processing approach to correct spatial distortion in two-photon fluorescence microscopy images for vascular network reconstruction. It is aimed at in vivo imaging in large field-of-view, deep-tissue studies of vascular structures. Based on simple geometric modeling of the object of interest, a distortion function is directly estimated from the image volume by deconvolution analysis. This distortion function is then applied to sub-volumes of the image stack to adaptively adjust for spatially varying distortion and reduce image blurring through blind deconvolution. The proposed technique was first evaluated in phantom imaging of fluorescent microspheres that are comparable in size to the underlying capillary vascular structures. The effectiveness of restoring the three-dimensional spherical geometry of the microspheres using the estimated distortion function was compared with an empirically measured point-spread function. Next, the proposed approach was applied to in vivo vascular imaging of mouse skeletal muscle to reduce the image distortion of the capillary structures. We show that the proposed method effectively improves image quality and reduces the spatially varying distortion that occurs in large field-of-view, deep-tissue vascular datasets. The proposed method will help in qualitative interpretation and quantitative analysis of vascular structures from fluorescence microscopy images. PMID:26224257

  3. A FUNCTIONAL NEUROIMAGING INVESTIGATION OF THE ROLES OF STRUCTURAL COMPLEXITY AND TASK-DEMAND DURING AUDITORY SENTENCE PROCESSING

    PubMed Central

    Love, Tracy; Haist, Frank; Nicol, Janet; Swinney, David

    2009-01-01

    Using functional magnetic resonance imaging (fMRI), this study directly examined an issue that bridges the potential language processing and multi-modal views of the role of Broca’s area: the effects of task-demands in language comprehension studies. We presented syntactically simple and complex sentences for auditory comprehension under three different (differentially complex) task-demand conditions: passive listening, probe verification, and theme judgment. Contrary to many language imaging findings, we found that both simple and complex syntactic structures activated left inferior frontal cortex (L-IFC). Critically, we found activation in these frontal regions increased together with increased task-demands. Specifically, tasks that required greater manipulation and comparison of linguistic material recruited L-IFC more strongly; independent of syntactic structure complexity. We argue that much of the presumed syntactic effects previously found in sentence imaging studies of L-IFC may, among other things, reflect the tasks employed in these studies and that L-IFC is a region underlying mnemonic and other integrative functions, on which much language processing may rely. PMID:16881268

  4. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  5. Physical Modeling for Processing Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Hyperspectral Data

    DTIC Science & Technology

    2003-09-30

    Physical Modeling for Processing Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Hyperspectral Data. Dr. Allen H.-L. Huang...ssec.wisc.edu. Award Number: N000140110850. Grant Number: 144KE70. http://www.ssec.wisc.edu/gifts/navy/ LONG-TERM GOALS: This Office of Naval...objective of this DoD research effort is to develop and demonstrate a fully functional GIFTS hyperspectral data processing system with the potential for a

  6. Research of flaw image collecting and processing technology based on multi-baseline stereo imaging

    NASA Astrophysics Data System (ADS)

    Yao, Yong; Zhao, Jiguang; Pang, Xiaoyan

    2008-03-01

    Aiming at practical issues such as accurate optimal design, complex algorithms and the precise technical demands of gun bore flaw image collection, the design framework of a 3-D image collecting and processing system based on multi-baseline stereo imaging is presented in this paper. The system mainly includes a computer, an electrical control box, a stepping motor and a CCD camera, and it realizes the functions of image collection, stereo matching, 3-D information reconstruction and after-treatment. As shown by theoretical analysis and experimental results, images collected by this system are precise, and the system can efficiently resolve the ambiguity problem produced by uniform or repeated vein patterns. At the same time, the system achieves higher measurement speed and precision.

  7. The impact of functional imaging on radiation medicine.

    PubMed

    Sharma, Nidhi; Neumann, Donald; Macklis, Roger

    2008-09-15

    Radiation medicine has previously utilized planning methods based primarily on anatomic and volumetric imaging technologies such as CT (Computerized Tomography), ultrasound, and MRI (Magnetic Resonance Imaging). In recent years, it has become apparent that a new dimension of non-invasive imaging studies may hold great promise for expanding the utility and effectiveness of the treatment planning process. Functional imaging such as PET (Positron Emission Tomography) studies and other nuclear medicine based assays are beginning to occupy a larger place in the oncology imaging world. Unlike the previously mentioned anatomic imaging methodologies, functional imaging allows differentiation between metabolically dead and dying cells and those which are actively metabolizing. The ability of functional imaging to reproducibly select viable and active cell populations in a non-invasive manner is now undergoing validation for many types of tumor cells. Many histologic subtypes appear amenable to this approach, with impressive sensitivity and selectivity reported. For clinical radiation medicine, the ability to differentiate between different levels and types of metabolic activity allows the possibility of risk based focal treatments in which the radiation doses and fields are more tightly connected to the perceived risk of recurrence or progression at each location. This review will summarize many of the basic principles involved in the field of functional PET imaging for radiation oncology planning and describe some of the major relevant published data behind this expanding trend.

  8. A portable high-definition electronic endoscope based on embedded system

    NASA Astrophysics Data System (ADS)

    Xu, Guang; Wang, Liqiang; Xu, Jin

    2012-11-01

    This paper presents a low-power, portable high-definition (HD) electronic endoscope based on a Cortex-A8 embedded system. A 1/6-inch CMOS image sensor is used to acquire HD images of 1280×800 pixels. The camera interface of the A8 is designed to support images of various sizes and multiple video input formats such as the ITU-R BT.601/656 standard. Image rotation (90 degrees clockwise) and image processing functions are achieved by the CAMIF. The decode engine of the processor plays back or records HD video at 30 frames per second, and the built-in HDMI interface transmits high-definition images to an external display. Image processing procedures such as demosaicking, color correction and auto white balance are realized on the A8 platform. Other functions are selected through OSD settings. An LCD panel displays the real-time images. Snapshot pictures or compressed videos are saved to an SD card or transmitted to a computer through the USB interface. The size of the camera head is 4×4.8×15 mm, with a working distance of more than 3 meters. The whole endoscope system can be powered by a lithium battery, and has the advantages of miniaturization, low cost and portability.

  9. Performance assessment of multi-frequency processing of ICU chest images for enhanced visualization of tubes and catheters

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohui; Couwenhoven, Mary E.; Foos, David H.; Doran, James; Yankelevitz, David F.; Henschke, Claudia I.

    2008-03-01

    An image-processing method has been developed to improve the visibility of tube and catheter features in portable chest x-ray (CXR) images captured in the intensive care unit (ICU). The image-processing method is based on a multi-frequency approach, wherein the input image is decomposed into different spatial frequency bands, and those bands that contain the tube and catheter signals are individually enhanced by nonlinear boosting functions. Using a random sampling strategy, 50 cases were retrospectively selected for the study from a large database of portable CXR images that had been collected from multiple institutions over a two-year period. All images used in the study were captured using photo-stimulable, storage phosphor computed radiography (CR) systems. Each image was processed two ways. The images were processed with default image processing parameters such as those used in clinical settings (control). The 50 images were then separately processed using the new tube and catheter enhancement algorithm (test). Three board-certified radiologists participated in a reader study to assess differences in both detection-confidence performance and diagnostic efficiency between the control and test images. Images were evaluated on a diagnostic-quality, 3-megapixel monochrome monitor. Two scenarios were studied: the baseline scenario, representative of today's workflow (a single-control image presented with the window/level adjustments enabled) vs. the test scenario (a control/test image pair presented with a toggle enabled and the window/level settings disabled). The radiologists were asked to read the images in each scenario as they normally would for clinical diagnosis. Trend analysis indicates that the test scenario offers improved reading efficiency while providing as good or better detection capability compared to the baseline scenario.
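    The multi-frequency approach can be sketched as a two-band special case: split the image into a low-pass band and a detail band, then apply a nonlinear boost to the detail band, where thin tube and catheter edges live. The gain, exponent, and box-blur low-pass below are my own assumptions for illustration, not the developed method's parameters.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur with edge padding (stand-in for a proper
    low-pass decomposition filter)."""
    pad = k // 2
    p = np.pad(np.asarray(img, dtype=float), pad, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def boost_detail_band(img, gain=2.0, power=0.8, k=3):
    """Two-band enhancement: detail = image - low-pass; the detail band is
    boosted nonlinearly (compressive power law times a gain) and recombined."""
    low = box_blur(img, k)
    detail = img - low
    boosted = np.sign(detail) * np.abs(detail) ** power * gain
    return low + boosted

flat = np.full((5, 5), 7.0)             # featureless region: should pass through
step = np.zeros((5, 5)); step[:, 2:] = 10.0   # a thin edge: should be enhanced
out_flat = boost_detail_band(flat)
out_step = boost_detail_band(step)
```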

  10. Experiments with recursive estimation in astronomical image processing

    NASA Technical Reports Server (NTRS)

    Busko, I.

    1992-01-01

    Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historic reasons for applying these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even in modern days, when large computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or position in a 2-D image). Many image processing methods make underlying stationarity assumptions either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image with a processor whose properties are tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio, and the autocorrelation function. Software was developed under IRAF, and as such will be made available to interested users.
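    A minimal example of the scan-line recursion described above is a scalar Kalman filter run along each image row, assuming a random-walk intensity model with process variance `q` and observation noise variance `r`. This is a 1-D illustration of the idea, not the adaptive processors used in the paper.

```python
import numpy as np

def recursive_denoise_rows(img, q=0.01, r=1.0):
    """Run a scalar Kalman filter left-to-right along each row:
    state = local image intensity (random-walk model, process variance q),
    observation = noisy pixel value (noise variance r)."""
    out = np.empty_like(img, dtype=float)
    for i, row in enumerate(img):
        x, p = float(row[0]), 1.0        # initial state estimate and variance
        for j, z in enumerate(row):
            p += q                        # predict step: variance grows
            k = p / (p + r)               # Kalman gain
            x += k * (float(z) - x)       # update with the new observation
            p *= (1.0 - k)                # posterior variance shrinks
            out[i, j] = x
    return out

const = np.full((3, 8), 5.0)
filtered = recursive_denoise_rows(const)
```

Because the gain `k` is recomputed at every pixel, the same recursion can be made adaptive by letting `q` and `r` track local statistics, which is the non-stationary case the abstract emphasizes.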

  11. Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image

    PubMed Central

    Wen, Wei; Khatibi, Siamak

    2017-01-01

    Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem, but they assume the fill factor is known. However, most image sensor manufacturers keep the fill factor an industrial secret, because it directly affects the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from a single arbitrary image. The virtual response function of the imaging process and the sensor irradiance are estimated from a set of generated virtual images. The global intensity values of the virtual images are then obtained by fusing the virtual images into a single, high-dynamic-range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images, and the fill factor is estimated as the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor correctly, with high stability and accuracy from one single arbitrary image, as indicated by the low standard deviation of the fill factors estimated from each of the images and for each camera. PMID:28335459

  12. The possibilities of improvement in the sensitivity of cancer fluorescence diagnostics by computer image processing

    NASA Astrophysics Data System (ADS)

    Ledwon, Aleksandra; Bieda, Robert; Kawczyk-Krupka, Aleksandra; Polanski, Andrzej; Wojciechowski, Konrad; Latos, Wojciech; Sieron-Stoltny, Karolina; Sieron, Aleksander

    2008-02-01

    Background: Fluorescence diagnostics uses the ability of tissues to fluoresce after exposure to a specific wavelength of light. The change in fluorescence between normal tissue and tissue progressing to cancer makes it possible to see early cancers and precancerous lesions often missed under white light. Aim: To improve, by computer image processing, the sensitivity of fluorescence images obtained during examination of skin, oral cavity, vulva and cervix lesions, and during endoscopy, cystoscopy and bronchoscopy, using the Xillix ONCOLIFE system. Methods: The image function f(x,y): R^2 -> R^3 was transformed from the original RGB color space into a space in which a vector of 46 values is attached to every point with given xy-coordinates, f(x,y): R^2 -> R^46. By means of a Fisher discriminator, the attribute vector of each analyzed image point was reduced according to two defined classes: pathologic areas (foreground) and healthy areas (background). The four highest Fisher coefficients, giving the greatest separation between points of pathologic (foreground) and healthy (background) areas, were chosen. In this way a new function f(x,y): R^2 -> R^4 was created, in which each point (x,y) corresponds to the vector (Y, H, a*, c_II). In the second step, a classifier was constructed using Gaussian mixtures and expectation-maximisation. This classifier yields the probability that a selected pixel of the analyzed image is a pathologically changed point (foreground) or a healthy one (background). The resulting probability map was displayed in pseudocolors. Results: Image processing techniques improve the sensitivity, quality and sharpness of the original fluorescence images. Conclusion: Computer image processing enables better visualization of suspect areas examined by means of fluorescence diagnostics.
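    The Fisher-discriminator step can be sketched minimally: score each feature by the ratio of the squared between-class mean difference to the summed within-class variances, then keep the top-scoring features (the paper keeps four of its 46). The feature arrays below are hypothetical stand-ins for the paper's colour-space vectors:

```python
import numpy as np

def fisher_scores(X_fg, X_bg):
    """Per-feature Fisher discriminant ratio between two pixel classes
    (pathologic foreground vs. healthy background).
    X_fg, X_bg: (n_pixels, n_features) arrays."""
    m1, m2 = X_fg.mean(axis=0), X_bg.mean(axis=0)
    v1, v2 = X_fg.var(axis=0), X_bg.var(axis=0)
    return (m1 - m2) ** 2 / (v1 + v2 + 1e-12)

def select_top_features(X_fg, X_bg, k=4):
    """Indices of the k features with the largest Fisher score."""
    return np.argsort(fisher_scores(X_fg, X_bg))[::-1][:k]
```

    The selected feature subspace is what a Gaussian-mixture classifier would then be trained on.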

  13. EOS image data processing system definition study

    NASA Technical Reports Server (NTRS)

    Gilbert, J.; Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

    1973-01-01

    The Image Processing System (IPS) requirements and configuration are defined for the NASA-sponsored advanced technology Earth Observatory System (EOS). The scope included investigation and definition of IPS operational, functional, and product requirements, considering overall system constraints and interfaces (sensor, etc.). It also included investigation of the technical feasibility and definition of a point design reflecting system requirements. The design phase required a survey of present and projected technology related to general- and special-purpose processors, high-density digital tape recorders, and image recorders.

  14. Super-resolution photon-efficient imaging by nanometric double-helix point spread function localization of emitters (SPINDLE)

    PubMed Central

    Grover, Ginni; DeLuca, Keith; Quirin, Sean; DeLuca, Jennifer; Piestun, Rafael

    2012-01-01

    Super-resolution imaging with photo-activatable or photo-switchable probes is a promising tool in biological applications to reveal previously unresolved intra-cellular details with visible light. This field benefits from developments in the areas of molecular probes, optical systems, and computational post-processing of the data. The joint design of optics and reconstruction processes using double-helix point spread functions (DH-PSF) provides high resolution three-dimensional (3D) imaging over a long depth-of-field. We demonstrate for the first time a method integrating a Fisher information efficient DH-PSF design, a surface relief optical phase mask, and an optimal 3D localization estimator. 3D super-resolution imaging using photo-switchable dyes reveals the 3D microtubule network in mammalian cells with localization precision approaching the information theoretical limit over a depth of 1.2 µm. PMID:23187521

  15. Understanding the optics to aid microscopy image segmentation.

    PubMed

    Yin, Zhaozheng; Li, Kang; Kanade, Takeo; Chen, Mei

    2010-01-01

    Image segmentation is essential for many automated microscopy image analysis systems. Rather than treating microscopy images as general natural images and rushing into the image processing warehouse for solutions, we propose to first study a microscope's optical properties to model its image formation process, using phase contrast microscopy as an exemplar. It turns out that the phase contrast imaging system can be relatively well explained by a linear imaging model. Using this model, we formulate a quadratic optimization function with sparseness and smoothness regularizations to restore the "authentic" phase contrast images that directly correspond to the specimen's optical path length, free of phase contrast artifacts such as halo and shade-off. With the artifacts removed, high-quality segmentation can be achieved by simply thresholding the restored images. The imaging model and restoration method are quantitatively evaluated on two sequences with thousands of cells captured over several days.
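    A toy version of the restore-then-threshold idea, assuming a known linear imaging model `H` and substituting plain Tikhonov regularization for the paper's sparseness and smoothness terms, might look like:

```python
import numpy as np

def restore_and_threshold(g, H, lam=0.001, thresh=0.5):
    """Invert a linear imaging model g = H f by regularized least
    squares, then segment the restored signal by thresholding.
    Solves (H^T H + lam I) f = H^T g in closed form."""
    n = H.shape[1]
    f = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ g)
    return f, f > thresh
```

    With the model artifacts undone by the restoration step, a single global threshold separates specimen from background, which is the point the abstract makes.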

  16. Stochastic simulation by image quilting of process-based geological models

    NASA Astrophysics Data System (ADS)

    Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef

    2017-09-01

    Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.
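    The core step of image quilting, choosing the training-image patch whose overlap strip best matches what has already been simulated, can be sketched as a brute-force 2-D search (the paper's probabilistic data aggregation and 3-D machinery are omitted here):

```python
import numpy as np

def best_patch(training_img, placed_overlap, patch, overlap):
    """Scan a 2-D training image for the patch whose LEFT overlap strip
    best matches (minimum SSD) the strip already placed on the
    simulation grid. Returns the winning patch and its error."""
    h, w = training_img.shape
    best, best_err = None, np.inf
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            cand = training_img[i:i + patch, j:j + patch]
            err = np.sum((cand[:, :overlap] - placed_overlap) ** 2)
            if err < best_err:
                best, best_err = cand, err
    return best, best_err
```

    In a stochastic variant one would sample among the low-error candidates rather than always taking the minimum, which is what gives quilting its realization-to-realization variability.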

  17. Development of a Model for Measuring Scientific Processing Skills Based on Brain-Imaging Technology: Focused on the Experimental Design Process

    ERIC Educational Resources Information Center

    Lee, Il-Sun; Byeon, Jung-Ho; Kim, Young-shin; Kwon, Yong-Ju

    2014-01-01

    The purpose of this study was to develop a model for measuring experimental design ability based on functional magnetic resonance imaging (fMRI) during biological inquiry. More specifically, the researchers developed an experimental design task that measures experimental design ability. Using the developed experimental design task, they measured…

  18. IBIS integrated biological imaging system: electron micrograph image-processing software running on Unix workstations.

    PubMed

    Flifla, M J; Garreau, M; Rolland, J P; Coatrieux, J L; Thomas, D

    1992-12-01

    'IBIS' is a set of computer programs concerned with the processing of electron micrographs, with particular emphasis on the requirements for structural analyses of biological macromolecules. The software is written in FORTRAN 77 and runs on Unix workstations. A description of the various functions and the implementation mode is given. Some examples illustrate the user interface.

  19. Emotional responses associated with self-face processing in individuals with autism spectrum disorders: an fMRI study.

    PubMed

    Morita, Tomoyo; Kosaka, Hirotaka; Saito, Daisuke N; Ishitobi, Makoto; Munesue, Toshio; Itakura, Shoji; Omori, Masao; Okazawa, Hidehiko; Wada, Yuji; Sadato, Norihiro

    2012-01-01

    Individuals with autism spectrum disorders (ASD) show impaired emotional responses to self-face processing, but the underlying neural bases are unclear. Using functional magnetic resonance imaging, we investigated brain activity when 15 individuals with high-functioning ASD and 15 controls rated the photogenicity of self-face images and photographs of others' faces. Controls showed a strong correlation between photogenicity ratings and extent of embarrassment evoked by self-face images; this correlation was weaker among ASD individuals, indicating a decoupling between the cognitive evaluation of self-face images and emotional responses. Individuals with ASD demonstrated relatively low self-related activity in the posterior cingulate cortex (PCC), which was related to specific autistic traits. There were significant group differences in the modulation of activity by embarrassment ratings in the right insular (IC) and lateral orbitofrontal cortices. Task-related activity in the right IC was lower in the ASD group. The reduced activity in the right IC for self-face images was associated with weak coupling between cognitive evaluation and emotional responses to self-face images. The PCC is responsible for self-referential processing, and the IC plays a role in emotional experience. Dysfunction in these areas could contribute to the lack of self-conscious behaviors in response to self-reflection in ASD individuals.

  20. Quantitative image analysis for evaluating the coating thickness and pore distribution in coated small particles.

    PubMed

    Laksmana, F L; Van Vliet, L J; Hartman Kok, P J A; Vromans, H; Frijlink, H W; Van der Voort Maarschalk, K

    2009-04-01

    This study aims to develop a characterization method for coating structure based on image analysis, which is particularly promising for the rational design of coated particles in the pharmaceutical industry. The method applies the MATLAB image processing toolbox to images of coated particles taken with confocal laser scanning microscopy (CLSM). The coating thicknesses were determined along the particle perimeter, from which a statistical analysis could be performed to obtain relevant thickness properties, e.g. the minimum coating thickness and the span of the thickness distribution. The characterization of the pore structure involved a proper segmentation of pores from the coating and a granulometry operation. The presented method facilitates the quantification of the porosity, thickness and pore size distribution of a coating. These parameters are considered the important coating properties that are critical to coating functionality. Additionally, the effect of coating process variations on coating quality can be straightforwardly assessed. Enabling a good characterization of coating quality, the presented method can be used as a fast and effective tool to predict coating functionality. This approach also enables the influence of different process conditions on coating properties to be effectively monitored, which ultimately supports process tailoring.
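    Once thicknesses have been sampled along the particle perimeter, the statistical summary is straightforward. A minimal sketch follows; the percentile-based span definition is an assumption, since the paper does not give its exact formula:

```python
import numpy as np

def thickness_stats(thicknesses):
    """Summarise coating thickness sampled along the particle perimeter:
    minimum, mean, and a relative span of the distribution
    (here (p95 - p5) / median)."""
    t = np.asarray(thicknesses, dtype=float)
    return {
        'min': t.min(),
        'mean': t.mean(),
        'span': (np.percentile(t, 95) - np.percentile(t, 5))
                / np.percentile(t, 50),
    }
```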

  1. An independent software system for the analysis of dynamic MR images.

    PubMed

    Torheim, G; Lombardi, M; Rinck, P A

    1997-01-01

    A computer system for the manual, semi-automatic, and automatic analysis of dynamic MR images was to be developed on UNIX and personal computer platforms. The system was to offer an integrated and standardized way of performing both image processing and analysis that was independent of the MR unit used. The system consists of modules that are easily adaptable to special needs. Data from MR units or other diagnostic imaging equipment in techniques such as CT, ultrasonography, or nuclear medicine can be processed through the ACR-NEMA/DICOM standard file formats. A full set of functions is available, among them cine-loop visual analysis, and generation of time-intensity curves. Parameters such as cross-correlation coefficients, area under the curve, peak/maximum intensity, wash-in and wash-out slopes, time to peak, and relative signal intensity/contrast enhancement can be calculated. Other parameters can be extracted by fitting functions like the gamma-variate function. Region-of-interest data and parametric values can easily be exported. The system has been successfully tested in animal and patient examinations.
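    Fitting the gamma-variate function mentioned above can be sketched by linearising the model y = A*(t - t0)^alpha * exp(-(t - t0)/beta) in log space, assuming the onset time t0 is known (a common simplification; the system's actual fitting routine is not described in the abstract):

```python
import numpy as np

def fit_gamma_variate(t, y, t0=0.0):
    """Fit a gamma-variate to a time-intensity curve via log-linear
    least squares: ln y = ln A + alpha*ln(tau) - tau/beta, tau = t - t0.
    Returns (A, alpha, beta)."""
    tau = np.asarray(t, dtype=float) - t0
    y = np.asarray(y, dtype=float)
    mask = (tau > 0) & (y > 0)
    tau, ly = tau[mask], np.log(y[mask])
    M = np.column_stack([np.ones_like(tau), np.log(tau), -tau])
    coef, *_ = np.linalg.lstsq(M, ly, rcond=None)
    lnA, alpha, inv_beta = coef
    return np.exp(lnA), alpha, 1.0 / inv_beta
```

    Quantities such as time to peak (alpha*beta after onset) and area under the curve then follow from the fitted parameters.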

  2. Low-cost, high-speed back-end processing system for high-frequency ultrasound B-mode imaging.

    PubMed

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T; Shung, K Kirk

    2009-07-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution.

  4. Imaging spectroscopy links aspen genotype with below-ground processes at landscape scales

    PubMed Central

    Madritch, Michael D.; Kingdon, Clayton C.; Singh, Aditya; Mock, Karen E.; Lindroth, Richard L.; Townsend, Philip A.

    2014-01-01

    Fine-scale biodiversity is increasingly recognized as important to ecosystem-level processes. Remote sensing technologies have great potential to estimate both biodiversity and ecosystem function over large spatial scales. Here, we demonstrate the capacity of imaging spectroscopy to discriminate among genotypes of Populus tremuloides (trembling aspen), one of the most genetically diverse and widespread forest species in North America. We combine imaging spectroscopy (AVIRIS) data with genetic, phytochemical, microbial and biogeochemical data to determine how intraspecific plant genetic variation influences below-ground processes at landscape scales. We demonstrate that both canopy chemistry and below-ground processes vary over large spatial scales (continental) according to aspen genotype. Imaging spectrometer data distinguish aspen genotypes through variation in canopy spectral signature. In addition, foliar spectral variation correlates well with variation in canopy chemistry, especially condensed tannins. Variation in aspen canopy chemistry, in turn, is correlated with variation in below-ground processes. Variation in spectra also correlates well with variation in soil traits. These findings indicate that forest tree species can create spatial mosaics of ecosystem functioning across large spatial scales and that these patterns can be quantified via remote sensing techniques. Moreover, they demonstrate the utility of using optical properties as proxies for fine-scale measurements of biodiversity over large spatial scales. PMID:24733949

  5. Image resolution enhancement via image restoration using neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Shuangteng; Lu, Yihong

    2011-04-01

    Image super-resolution aims to obtain a high-quality image at a resolution that is higher than that of the original coarse one. This paper presents a new neural network-based method for image super-resolution. In this technique, super-resolution is treated as an inverse problem. An observation model that closely follows the physical image acquisition process is established to solve the problem. Based on this model, a cost function is created and minimized by a Hopfield neural network to produce high-resolution images from the corresponding low-resolution ones. Unlike some other single-frame super-resolution techniques, this technique takes point-spread-function blurring as well as additive noise into consideration, and therefore generates high-resolution images with more preserved or restored image details. Experimental results demonstrate that the high-resolution images obtained by this technique have very high quality in terms of PSNR and are visually more pleasing.
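    The cost-minimisation idea can be sketched with plain gradient descent on the observation model ||D H f - g||^2, standing in for the paper's Hopfield-network energy minimisation (same cost function, simpler optimiser; the downsampling matrix D and blur matrix H are assumed known):

```python
import numpy as np

def upsample_sr(g, D, H, iters=500, lr=0.5):
    """Minimise ||D @ H @ f - g||^2 by gradient descent.
    g: low-resolution observation, D: downsampling matrix,
    H: blur (PSF) matrix. Returns the high-resolution estimate f."""
    A = D @ H
    f = A.T @ g                      # back-projected initial guess
    for _ in range(iters):
        f -= lr * (A.T @ (A @ f - g))  # gradient of the quadratic cost
    return f
```

    Regularization terms (as in the paper's full cost function) would be added to the gradient when the problem is ill-conditioned.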

  6. Phase retrieval using regularization method in intensity correlation imaging

    NASA Astrophysics Data System (ADS)

    Li, Xiyu; Gao, Xin; Tang, Jia; Lu, Changming; Wang, Jianli; Wang, Bin

    2014-11-01

    Intensity correlation imaging (ICI) can obtain high-resolution images with ground-based, low-precision mirrors. In the imaging process, a phase retrieval algorithm must be used to reconstruct the object's image, but the algorithms now in use (such as the hybrid input-output algorithm) are sensitive to noise and prone to stagnation, while the signal-to-noise ratio of intensity interferometry is low, especially when imaging astronomical objects. In this paper, we build a mathematical model of phase retrieval and simplify it into a constrained optimization problem over a multi-dimensional function. A new error function was designed from the noise distribution and prior information using a regularization method. The simulation results show that the regularization method improves the performance of the phase retrieval algorithm and yields better images, especially under low-SNR conditions.
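    For context, the baseline such work improves on is classic alternating-projection phase retrieval. A minimal error-reduction loop with a support-plus-positivity constraint looks like this (the paper's regularised error function is not reproduced):

```python
import numpy as np

def error_reduction(fourier_mag, support, iters=200, seed=0):
    """Classic error-reduction phase retrieval: alternate between
    enforcing the measured Fourier magnitude and a known object
    support with positivity. Starts from a random guess."""
    rng = np.random.default_rng(seed)
    x = rng.random(fourier_mag.shape) * support
    for _ in range(iters):
        X = np.fft.fft2(x)
        X = fourier_mag * np.exp(1j * np.angle(X))       # magnitude constraint
        x = np.real(np.fft.ifft2(X))
        x = np.where((support > 0) & (x > 0), x, 0.0)    # support + positivity
    return x
```

    The Fourier-domain error is non-increasing over iterations, but the loop can stagnate in local minima, which is the weakness noise-aware regularised error functions are designed to address.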

  7. Modeling and possible implementation of self-learning equivalence-convolutional neural structures for auto-encoding-decoding and clusterization of images

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.

    2017-08-01

    Self-learning equivalent-convolutional neural structures (SLECNS) for auto-encoding-decoding and image clustering are discussed. The SLECNS architectures and their spatially invariant equivalent models (SI EMs), which use the corresponding matrix-matrix procedures with basic operations of continuous logic and non-linear processing, are proposed. These SI EMs have several advantages, such as the ability to recognize image fragments with better efficiency and strong cross-correlation. The proposed method for clustering fragments according to their structural features is suitable not only for binary but also for color images, and combines self-learning with the formation of weighted clustered matrix-patterns. Its model is constructed on the basis of recursive processing algorithms and the k-means method. The experimental results confirmed that large images and 2-D binary fragments with large numbers of elements can be clustered. For the first time, the possibility of generalizing these models to the space-invariant case is shown. An experiment was carried out on an image with dimensions of 256x256 (a reference array) and fragments with dimensions of 7x7 and 21x21. The experiments, performed in the Mathcad software environment, showed that the proposed method is universal, converges in a small number of iterations, maps easily onto the matrix structure, and confirmed its promise. Understanding the mechanisms of self-learning equivalence-convolutional clustering, the accompanying competitive processes in neurons, and the principles of neural auto-encoding-decoding and recognition with self-learned cluster patterns is therefore very important; these rely on algorithms and principles of non-linear processing of two-dimensional spatial image-comparison functions.
These SI EMs can simply describe signal processing during all training and recognition stages, and they are suitable for unipolar-coded multilevel signals. We show that an implementation of SLECNS based on known equivalentors or traditional correlators is possible if they are based on the proposed equivalental two-dimensional functions of image similarity. The clustering efficiency of such models and their implementation depend on the discriminant properties of the neural elements of the hidden layers. The main model and architecture parameters and characteristics therefore depend on the applied types of non-linear processing and on the function used for image comparison or for adaptive-equivalental weighting of input patterns. Real model experiments in Mathcad are demonstrated, confirming that non-linear processing with equivalent functions allows the winner neurons to be determined and the weight matrix to be adjusted. Experimental results have shown that such models can be successfully used for auto- and hetero-associative recognition. They can also be used to explain some mechanisms known as "focus" and the "competing gain-inhibition concept". The SLECNS architecture and hardware implementations of its basic nodes based on multi-channel convolvers and correlators with time integration are proposed, and the parameters and performance of such architectures are estimated.

  8. An automatic alignment tool to improve repeatability of left ventricular function and dyssynchrony parameters in serial gated myocardial perfusion SPECT studies

    PubMed Central

    Zhou, Yanli; Faber, Tracy L.; Patel, Zenic; Folks, Russell D.; Cheung, Alice A.; Garcia, Ernest V.; Soman, Prem; Li, Dianfu; Cao, Kejiang; Chen, Ji

    2013-01-01

    Objective Left ventricular (LV) function and dyssynchrony parameters measured from serial gated single-photon emission computed tomography (SPECT) myocardial perfusion imaging (MPI) using blinded processing had a poorer repeatability than when manual side-by-side processing was used. The objective of this study was to validate whether an automatic alignment tool can reduce the variability of LV function and dyssynchrony parameters in serial gated SPECT MPI. Methods Thirty patients who had undergone serial gated SPECT MPI were prospectively enrolled in this study. Thirty minutes after the first acquisition, each patient was repositioned and a gated SPECT MPI image was reacquired. The two data sets were first processed blinded from each other by the same technologist in different weeks. These processed data were then realigned by the automatic tool, and manual side-by-side processing was carried out. All processing methods used standard iterative reconstruction and Butterworth filtering. The Emory Cardiac Toolbox was used to measure the LV function and dyssynchrony parameters. Results The automatic tool failed in one patient, who had a large, severe scar in the inferobasal wall. In the remaining 29 patients, the repeatability of the LV function and dyssynchrony parameters after automatic alignment was significantly improved from blinded processing and was comparable to manual side-by-side processing. Conclusion The automatic alignment tool can be an alternative method to manual side-by-side processing to improve the repeatability of LV function and dyssynchrony measurements by serial gated SPECT MPI. PMID:23211996

  9. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    NASA Astrophysics Data System (ADS)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In the process of image restoration, the restored result can differ greatly from the real image because of noise. To address this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds a function to the prior knowledge, namely the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then iteratively updated, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. Because the information in the gradient domain is better suited to estimating the blur kernel, the kernel is estimated in the gradient domain; this subproblem can be solved quickly in the frequency domain via the fast Fourier transform. In addition, to improve the effectiveness of the algorithm, a multi-scale iterative optimization scheme is added. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space obtains a unique and stable solution in the image restoration process, which not only preserves the edges and details of the image but also ensures the accuracy of the results.
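    The shrinkage-thresholding step at the heart of such methods can be sketched in its non-blind form, with a plain L1 prior standing in for the paper's L1/L2 ratio and the blur kernel assumed known (given as its 2-D FFT):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm: the shrinkage step of ISTA."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_deconv(g, h_fft, lam=0.01, lr=1.0, iters=100):
    """Non-blind ISTA deconvolution: a gradient step on the data term
    ||h * f - g||^2 (convolution done in the Fourier domain),
    followed by soft-thresholding with threshold lam * lr."""
    f = g.copy()
    for _ in range(iters):
        resid = np.real(np.fft.ifft2(h_fft * np.fft.fft2(f))) - g
        grad = np.real(np.fft.ifft2(np.conj(h_fft) * np.fft.fft2(resid)))
        f = soft_threshold(f - lr * grad, lam * lr)
    return f
```

    A blind variant alternates this image update with a kernel update, and an L1/L2 ratio prior replaces the fixed threshold with one that adapts to the current gradient energy.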

  10. Using a Web OPAC To Deliver Digital Collections.

    ERIC Educational Resources Information Center

    Mathias, Eileen C.

    2003-01-01

    Describes a major digital imaging project just completed at the Ewell Sale Steward Library of the Academy of Natural Sciences (Philadelphia, PA). Discusses options that were considered for Web delivery of images and text, and reasons for choosing Innovative Interfaces, Inc.'s image management function. Describes the data entry process and reviews…

  11. A dual-function fluorescent probe for monitoring the degrees of hypoxia in living cells via the imaging of nitroreductase and adenosine triphosphate.

    PubMed

    Fang, Yu; Shi, Wen; Hu, Yiming; Li, Xiaohua; Ma, Huimin

    2018-05-24

    A new dual-function fluorescent probe is developed for detecting nitroreductase (NTR) and adenosine triphosphate (ATP) with different responses. Imaging with the probe reveals that intracellular NTR and ATP display opposite trends during hypoxia, and that ATP can serve as a new indicator of cell hypoxia.

  12. Functional Near-Infrared Spectroscopy Brain Imaging Investigation of Phonological Awareness and Passage Comprehension Abilities in Adult Recipients of Cochlear Implants

    ERIC Educational Resources Information Center

    Bisconti, Silvia; Shulkin, Masha; Hu, Xiaosu; Basura, Gregory J.; Kileny, Paul R.; Kovelman, Ioulia

    2016-01-01

    Purpose: The aim of this study was to examine how the brains of individuals with cochlear implants (CIs) respond to spoken language tasks that underlie successful language acquisition and processing. Method: During functional near-infrared spectroscopy imaging, CI recipients with hearing impairment (n = 10, mean age: 52.7 ± 17.3 years) and…

  13. The design of real time infrared image generation software based on Creator and Vega

    NASA Astrophysics Data System (ADS)

    Wang, Rui-feng; Wu, Wei-dong; Huo, Jun-xiu

    2013-09-01

    To meet the requirement for highly realistic, real-time dynamic infrared imagery in infrared image simulation, a method for designing a real-time infrared image simulation application on the VC++ platform, based on the visual simulation software Creator and Vega, is proposed. The functions of Creator are introduced briefly, and the main features of the Vega development environment are analyzed. Methods for modeling infrared targets and backgrounds are presented; the design flow chart of the real-time IR image generation software and the functions of the TMM Tool, the MAT Tool and the sensor module are explained; and the real-time performance of the software is addressed.

  14. Design, fabrication and actuation of a MEMS-based image stabilizer for photographic cell phone applications

    NASA Astrophysics Data System (ADS)

    Chiou, Jin-Chern; Hung, Chen-Chun; Lin, Chun-Ying

    2010-07-01

    This work presents a MEMS-based image stabilizer applied for anti-shaking function in photographic cell phones. The proposed stabilizer is designed as a two-axis decoupling XY stage 1.4 × 1.4 × 0.1 mm3 in size, and adequately strong to suspend an image sensor for anti-shaking photographic function. This stabilizer is fabricated by complex fabrication processes, including inductively coupled plasma (ICP) processes and flip-chip bonding technique. Based on the special designs of a hollow handle layer and a corresponding wire-bonding assisted holder, electrical signals of the suspended image sensor can be successfully sent out with 32 signal springs without incurring damage during wire-bonding packaging. The longest calculated traveling distance of the stabilizer is 25 µm which is sufficient to resolve the anti-shaking problem in a three-megapixel image sensor. Accordingly, the applied voltage for the 25 µm moving distance is 38 V. Moreover, the resonant frequency of the actuating device with the image sensor is 1.123 kHz.

  15. Promise of new imaging technologies for assessing ovarian function.

    PubMed

    Singh, Jaswant; Adams, Gregg P; Pierson, Roger A

    2003-10-15

    Advancements in imaging technologies over the last two decades have ushered in a quiet revolution in research approaches to the study of ovarian structure and function. The most significant changes in our understanding of the ovary have resulted from the use of ultrasonography, which has enabled sequential analyses in live animals. Computer-assisted image analysis and mathematical modeling of the dynamic changes within the ovary have permitted exciting new avenues of research with readily quantifiable endpoints. Spectral, color-flow and power Doppler imaging now facilitate physiologic interpretations of vascular dynamics over time. Similarly, magnetic resonance imaging (MRI) is emerging as a research tool in ovarian imaging. New technologies, such as three-dimensional ultrasonography and MRI, ultrasound-based biomicroscopy and synchrotron-based techniques, each have the potential to enhance our real-time picture of ovarian function to the near-cellular level. Collectively, the information available from ultrasonography, MRI, computer-assisted image analysis and mathematical modeling heralds a new era in our understanding of the basic processes of female and male reproduction.

  16. Glioblastoma Presenting with Pure Alexia and Palinopsia Involving the Left Inferior Occipital Gyrus and Visual Word Form Area Evaluated with Functional Magnetic Resonance Imaging and Diffusion Tensor Imaging Tractography.

    PubMed

    Huang, Meng; Baskin, David S; Fung, Steve

    2016-05-01

    Rapid word recognition and reading fluency is a specialized cortical process governed by the visual word form area (VWFA), which is localized to the dominant posterior lateral occipitotemporal sulcus/fusiform gyrus. A lesion of the VWFA results in pure alexia without agraphia characterized by letter-by-letter reading. Palinopsia is a visual processing distortion characterized by persistent afterimages and has been reported in lesions involving the nondominant occipitotemporal cortex. A 67-year-old right-handed woman with no neurologic history presented to our emergency department with acute cortical ischemic symptoms that began with a transient episode of receptive aphasia. She also reported inability to read, albeit with retained writing ability. She also saw afterimages of objects. During her stroke workup, an intra-axial circumscribed enhancing mass lesion was discovered involving her dominant posterolateral occipitotemporal lobe. Given the eloquent brain involvement, she underwent preoperative functional magnetic resonance imaging with diffusion tensor imaging tractography and awake craniotomy to maximize resection and preserve function. Many organic lesions involving these regions have been reported in the literature, but to the best of our knowledge, glioblastoma involving the VWFA resulting in both clinical syndromes of pure alexia and palinopsia with superimposed functional magnetic resonance imaging and fiber tract mapping has never been reported before. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Information theoretic analysis of edge detection in visual communication

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2010-08-01

    Generally, the design of digital image processing algorithms and that of image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the artifacts introduced by the image gathering process. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. In this paper, we perform an end-to-end information-theoretic system analysis to assess edge detection methods. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling and additive noise, that define the image gathering system. An edge detection algorithm is regarded as high-performing only if the information rate from the scene to the edge map approaches the maximum possible; this goal can be achieved only by jointly optimizing all processes. Subjective judgment is generally used to compare different edge detection methods, and there has been no common tool for evaluating the performance of the different algorithms and guiding the selection of the best algorithm for a given system or scene. Our information-theoretic assessment provides such a tool, allowing the different edge detection operators to be compared in a common environment.
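    The record's criterion is the information rate from the scene to the edge map. As a rough, generic illustration of how such a rate can be estimated (a histogram-based mutual-information estimator in numpy, not the authors' formulation):

```python
import numpy as np

def mutual_information(a, b, bins=16):
    # Estimate I(A;B) in bits from a joint histogram of two images --
    # a simple proxy for the information one image carries about another.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])).sum())
```

    With 16 bins, an image compared against itself yields roughly log2(16) = 4 bits, while two independent noise images yield a value near zero.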

  18. Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor

    PubMed Central

    Cheong, Hejin; Chae, Eunjung; Lee, Eunsung; Jo, Gwanghyun; Paik, Joonki

    2015-01-01

    This paper presents a fast adaptive image restoration method for removing spatially varying out-of-focus blur of a general imaging sensor. After estimating the parameters of the space-variant point-spread function (PSF) using the derivative in each uniformly blurred region, the proposed method performs spatially adaptive image restoration by selecting the optimal restoration filter according to the estimated blur parameters. Each restoration filter is implemented as a combination of multiple FIR filters, which guarantees fast image restoration without the need for iterative or recursive processing. Experimental results show that the proposed method outperforms existing space-invariant restoration methods in terms of both objective and subjective performance measures. The proposed algorithm can be applied to a wide range of image restoration applications, such as mobile imaging devices, robot vision, and satellite image processing. PMID:25569760
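    A minimal numpy sketch of the non-iterative, spatially adaptive idea described above: each pixel takes the output of one of several precomputed FIR restoration filters according to its estimated blur class. The kernel set and class map here are placeholders, not the paper's estimated PSF parameters:

```python
import numpy as np

def conv2_same(img, k):
    # 'same'-size 2-D convolution via FFT (zero padding)
    s = (img.shape[0] + k.shape[0] - 1, img.shape[1] + k.shape[1] - 1)
    out = np.fft.irfft2(np.fft.rfft2(img, s) * np.fft.rfft2(k, s), s)
    oy, ox = k.shape[0] // 2, k.shape[1] // 2
    return out[oy:oy + img.shape[0], ox:ox + img.shape[1]]

def restore_space_variant(img, class_map, filters):
    # class_map: per-pixel index of the estimated blur class;
    # filters: {class: FIR restoration kernel}. Each filter output is a
    # plain convolution selected per pixel, so the restoration stays
    # non-iterative, as in the record.
    out = np.zeros(img.shape, dtype=float)
    for c, k in filters.items():
        out[class_map == c] = conv2_same(img, k)[class_map == c]
    return out
```

    In practice the kernels would be designed per blur parameter (e.g. inverse or Wiener filters truncated to FIR form); here a delta kernel simply passes the image through.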

  19. Improved blood velocity measurements with a hybrid image filtering and iterative Radon transform algorithm

    PubMed Central

    Chhatbar, Pratik Y.; Kara, Prakash

    2013-01-01

    Neural activity leads to hemodynamic changes which can be detected by functional magnetic resonance imaging (fMRI). The determination of blood flow changes in individual vessels is an important aspect of understanding these hemodynamic signals. Blood flow can be calculated from the measurements of vessel diameter and blood velocity. When using line-scan imaging, the movement of blood in the vessel leads to streaks in space-time images, where streak angle is a function of the blood velocity. A variety of methods have been proposed to determine blood velocity from such space-time image sequences. Of these, the Radon transform is relatively easy to implement and has fast data processing. However, the precision of the velocity measurements is dependent on the number of Radon transforms performed, which creates a trade-off between the processing speed and measurement precision. In addition, factors like image contrast, imaging depth, image acquisition speed, and movement artifacts especially in large mammals, can potentially lead to data acquisition that results in erroneous velocity measurements. Here we show that pre-processing the data with a Sobel filter and iterative application of Radon transforms address these issues and provide more accurate blood velocity measurements. Improved signal quality of the image as a result of Sobel filtering increases the accuracy and the iterative Radon transform offers both increased precision and an order of magnitude faster implementation of velocity measurements. This algorithm does not use a priori knowledge of angle information and therefore is sensitive to sudden changes in blood flow. It can be applied on any set of space-time images with red blood cell (RBC) streaks, commonly acquired through line-scan imaging or reconstructed from full-frame, time-lapse images of the vasculature. PMID:23807877
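    The core angle-estimation step can be sketched as follows: a Radon-style projection is computed at candidate angles, and the angle whose projection has maximal variance is refined iteratively over a narrowing interval, trading far fewer transforms for the same precision. This numpy-only sketch (nearest-neighbour rotation, no Sobel pre-filtering) illustrates the principle, not the published implementation:

```python
import numpy as np

def radon_projection(img, theta):
    # Rotate sampling coordinates by theta (nearest neighbour), then sum
    # columns: one line of a Radon transform at a single angle.
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    c, s = np.cos(theta), np.sin(theta)
    xr = np.clip(np.round(c * (xs - cx) - s * (ys - cy) + cx).astype(int), 0, w - 1)
    yr = np.clip(np.round(s * (xs - cx) + c * (ys - cy) + cy).astype(int), 0, h - 1)
    return img[yr, xr].sum(axis=0)

def streak_angle(img, n_iter=3, n_angles=37):
    # Iteratively narrow the angle search interval around the projection
    # with maximal variance (streaks aligned with the projection axis
    # give the highest-variance projection).
    img = img - img.mean()
    lo, hi = 0.0, np.pi
    best = 0.0
    for _ in range(n_iter):
        thetas = np.linspace(lo, hi, n_angles)
        variances = [radon_projection(img, t).var() for t in thetas]
        best = thetas[int(np.argmax(variances))]
        half = (hi - lo) / n_angles
        lo, hi = best - half, best + half
    return best
```

    On a synthetic image of oblique stripes, the recovered angle matches the stripe orientation to well within the final search step.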

  20. MIRIADS: miniature infrared imaging applications development system description and operation

    NASA Astrophysics Data System (ADS)

    Baxter, Christopher R.; Massie, Mark A.; McCarley, Paul L.; Couture, Michael E.

    2001-10-01

    A cooperative effort between the U.S. Air Force Research Laboratory, Nova Research, Inc., the Raytheon Infrared Operations (RIO) and Optics 1, Inc. has successfully produced a miniature infrared camera system that offers significant real-time signal and image processing capabilities by virtue of its modular design. This paper will present an operational overview of the system as well as results from initial testing of the 'Modular Infrared Imaging Applications Development System' (MIRIADS) configured as a missile early-warning detection system. The MIRIADS device can operate virtually any infrared focal plane array (FPA) that currently exists. Programmable on-board logic applies user-defined processing functions to the real-time digital image data for a variety of functions. Daughterboards may be plugged onto the system to expand the digital and analog processing capabilities of the system. A unique full hemispherical infrared fisheye optical system designed and produced by Optics 1, Inc. is utilized by the MIRIADS in a missile warning application to demonstrate the flexibility of the overall system to be applied to a variety of current and future AFRL missions.

  1. Control Software for Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Book, Michael L.; Bryan, Thomas C.

    2006-01-01

    Embedded software has been developed specifically for controlling an Advanced Video Guidance Sensor (AVGS). A Video Guidance Sensor is an optoelectronic system that provides guidance for automated docking of two vehicles. Such a system includes pulsed laser diodes and a video camera, the output of which is digitized. From the positions of digitized target images and known geometric relationships, the relative position and orientation of the vehicles are computed. The present software consists of two subprograms running in two processors that are parts of the AVGS. The subprogram in the first processor receives commands from an external source, checks the commands for correctness, performs commanded non-image-data-processing control functions, and sends image data processing parts of commands to the second processor. The subprogram in the second processor processes image data as commanded. Upon power-up, the software performs basic tests of functionality, then effects a transition to a standby mode. When a command is received, the software goes into one of several operational modes (e.g. acquisition or tracking). The software then returns, to the external source, the data appropriate to the command.

  2. Current advances in molecular imaging: noninvasive in vivo bioluminescent and fluorescent optical imaging in cancer research.

    PubMed

    Choy, Garry; Choyke, Peter; Libutti, Steven K

    2003-10-01

    Recently, there has been tremendous interest in developing techniques such as MRI, micro-CT, micro-PET, and SPECT to image function and processes in small animals. These technologies offer deep tissue penetration and high spatial resolution, but compared with noninvasive small animal optical imaging, these techniques are very costly and time consuming to implement. Optical imaging is cost-effective, rapid, easy to use, and can be readily applied to studying disease processes and biology in vivo. In vivo optical imaging is the result of a coalescence of technologies from chemistry, physics, and biology. The development of highly sensitive light detection systems has allowed biologists to use imaging in studying physiological processes. Over the last few decades, biochemists have also worked to isolate and further develop optical reporters such as GFP, luciferase, and cyanine dyes. This article reviews the common types of fluorescent and bioluminescent optical imaging, the typical system platforms and configurations, and the applications in the investigation of cancer biology.

  3. Rotation-robust math symbol recognition and retrieval using outer contours and image subsampling

    NASA Astrophysics Data System (ADS)

    Zhu, Siyu; Hu, Lei; Zanibbi, Richard

    2013-01-01

    This paper presents, for the first time, a unified recognition and retrieval system for isolated offline printed mathematical symbols. The system is based on a nearest-neighbor scheme and uses a modified Turning Function and Grid Features to calculate the distance between two symbols based on the Sum of Squared Differences. An unwrapping process and an alignment process are applied to the Turning Function to deal with the horizontal and vertical shifts caused by changes of starting point and by rotation. This modified Turning Function makes the system robust against rotation of the symbol image. The system obtains a top-1 recognition rate of 96.90% and a 47.27% Area Under Curve (AUC) of the precision/recall plot on the InftyCDB-3 dataset. Experimental results show that the system with the modified Turning Function performs significantly better than the system with the original Turning Function on the rotated InftyCDB-3 dataset.
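    The Turning Function itself is simple to compute: it is the cumulative tangent angle of the symbol's outer contour as a function of normalized arc length. A small numpy sketch for a closed polygonal contour (illustrative only, not the paper's modified version):

```python
import numpy as np

def turning_function(poly):
    # Cumulative tangent angle of a closed polygon's contour versus
    # normalized arc length; poly is an (N, 2) array of vertices.
    edges = np.diff(np.vstack([poly, poly[:1]]), axis=0)
    angles = np.unwrap(np.arctan2(edges[:, 1], edges[:, 0]))
    lengths = np.hypot(edges[:, 0], edges[:, 1])
    arc = np.concatenate([[0.0], np.cumsum(lengths[:-1])]) / lengths.sum()
    return arc, angles
```

    For a unit square the function steps by 90 degrees at each quarter of the perimeter; two such step functions can then be compared with a sum-of-squared-differences distance after resampling onto a common arc-length grid.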

  4. Smart image sensors: an emerging key technology for advanced optical measurement and microsystems

    NASA Astrophysics Data System (ADS)

    Seitz, Peter

    1996-08-01

    Optical microsystems typically include photosensitive devices, analog preprocessing circuitry and digital signal processing electronics. The advances in semiconductor technology have made it possible today to integrate all photosensitive and electronic devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuits containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality, without compromising the fill factor substantially. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: Single-chip cameras with on-chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed including novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, as well as variable, programmable offset and sensitivity of the pixels leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated offering synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors are combined with an on-chip digital processor for complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components. 
It is concluded that the possibilities offered by custom smart image sensors will influence the design and the performance of future electronic imaging systems in many disciplines, reaching from optical metrology to machine vision on the factory floor and in robotics applications.

  5. Platform-independent software for medical image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Mancuso, Michael E.; Pathak, Sayan D.; Kim, Yongmin

    1997-05-01

    We have developed a software tool for image processing over the Internet. The tool is a general-purpose, easy to use, flexible, platform-independent image processing software package with functions most commonly used in medical image processing. It provides for processing of medical images located either remotely on the Internet or locally. The software was written in Java, the new programming language developed by Sun Microsystems. It was compiled and tested using Microsoft's Visual Java 1.0 and Microsoft's Just in Time Compiler 1.00.6211. The software is simple and easy to use. In order to use the tool, the user needs to download the software from our site before running it using any Java interpreter, such as those supplied by Sun, Symantec, Borland or Microsoft. Future versions of the operating systems supplied by Sun, Microsoft, Apple, IBM, and others will include Java interpreters. The software is then able to access and process any image on the Internet or on the local computer. Using a 512 × 512 × 8-bit image, a 3 × 3 convolution took 0.88 seconds on an Intel Pentium Pro PC running at 200 MHz with 64 Mbytes of memory. A window/level operation took 0.38 seconds, while a 3 × 3 median filter took 0.71 seconds. These performance numbers demonstrate the feasibility of using this software interactively on desktop computers. Our software tool supports various image processing techniques commonly used in medical image processing and can run without the need for any specialized hardware. It can become an easily accessible resource over the Internet to promote the learning and understanding of image processing algorithms. Also, it could facilitate sharing of medical image databases and collaboration among researchers and clinicians, regardless of location.

  6. [Image processing applying in analysis of motion features of cultured cardiac myocyte in rat].

    PubMed

    Teng, Qizhi; He, Xiaohai; Luo, Daisheng; Wang, Zhengrong; Zhou, Beiyi; Yuan, Zhirun; Tao, Dachang

    2007-02-01

    Study of the mechanism of drug actions through quantitative analysis of cultured cardiac myocytes is one of the cutting-edge research areas in myocyte dynamics and molecular biology. The fact that cardiac myocytes beat spontaneously without external stimulation makes this research possible. Analyzing the morphology and motion of cardiac myocytes with image analysis can reveal the fundamental mechanisms of drug action, increase the accuracy of drug screening, and help design optimal drug formulations for treatment. A system of hardware and software has been built with a complete set of functions, including living cardiac myocyte image acquisition, image processing, motion image analysis, and image recognition. In this paper, theories and approaches are introduced for analyzing images of living cardiac myocyte motion and implementing quantitative analysis of cardiac myocyte features. A motion estimation algorithm is used to detect the motion vectors of particular points and the amplitude and frequency of a cardiac myocyte. Beatings of cardiac myocytes are sometimes very small; in such cases it is difficult to detect motion vectors from particular points in a time sequence of images, so image correlation is employed to detect the beating frequencies. An active contour algorithm based on an energy function is proposed to approximate the boundary and detect changes in the edge of the myocyte.
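    The image-correlation approach to beating-frequency detection can be sketched as follows: correlate every frame against a reference frame and take the dominant frequency of the resulting similarity trace. A hypothetical numpy sketch, not the authors' implementation:

```python
import numpy as np

def beating_frequency(frames, fps):
    # frames: (T, H, W) image stack. Correlate each mean-subtracted frame
    # with the first frame, then read off the dominant frequency of the
    # resulting similarity trace via an FFT.
    ref = frames[0] - frames[0].mean()
    trace = np.array([((f - f.mean()) * ref).sum() for f in frames])
    trace = trace - trace.mean()
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    return freqs[1:][np.argmax(spectrum[1:])]   # skip the DC bin
```

    Because the whole frame contributes to each correlation value, this works even when individual landmark points move too little to track.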

  7. A Digital Sensor Simulator of the Pushbroom Offner Hyperspectral Imaging Spectrometer

    PubMed Central

    Tao, Dongxing; Jia, Guorui; Yuan, Yan; Zhao, Huijie

    2014-01-01

    Sensor simulators can be used in forecasting the imaging quality of a new hyperspectral imaging spectrometer and in generating simulated data for the development and validation of data processing algorithms. This paper presents a novel digital sensor simulator for the pushbroom Offner hyperspectral imaging spectrometer, which is widely used in hyperspectral remote sensing. Based on the imaging process, the sensor simulator consists of a spatial response module, a spectral response module, and a radiometric response module. To enhance the simulation accuracy, spatial interpolation-resampling, implemented before the spatial degradation, is developed to balance direction error against the extra aliasing effect. Instead of using the spectral response function (SRF), the dispersive imaging characteristics of the Offner convex grating optical system are accurately modeled by its configuration parameters. The non-uniformity characteristics, such as keystone and smile effects, are simulated in the corresponding modules. In this work, the spatial, spectral and radiometric calibration processes are simulated to provide the modulation transfer function (MTF), SRF and radiometric calibration parameters of the sensor simulator. Some uncertainty factors (the stability and bandwidth of the monochromator for the spectral calibration, and the integrating sphere uncertainty for the radiometric calibration) are considered in the simulation of the calibration process. With the calibration parameters, several experiments were designed to validate the spatial, spectral and radiometric response of the sensor simulator, respectively. The experimental results indicate that the sensor simulator is valid. PMID:25615727
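    For contrast with the configuration-parameter model used here, a conventional SRF-based spectral response module can be sketched with Gaussian SRFs: each output band is the SRF-weighted average of a fine-grid spectrum. A generic numpy illustration, not part of the simulator described in the record:

```python
import numpy as np

def apply_srf(wl_fine, spectrum, band_centers, fwhm):
    # Weight a fine-grid spectrum with a Gaussian spectral response
    # function (SRF) per output band and normalize by the SRF area.
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    out = np.empty(len(band_centers))
    for i, c in enumerate(band_centers):
        w = np.exp(-0.5 * ((wl_fine - c) / sigma) ** 2)
        out[i] = (w * spectrum).sum() / w.sum()
    return out
```

    A spectrally flat input must come out flat, which makes a convenient sanity check for any SRF implementation.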

  8. Methodological considerations in conducting an olfactory fMRI study.

    PubMed

    Vedaei, Faezeh; Fakhri, Mohammad; Harirchian, Mohammad Hossein; Firouznia, Kavous; Lotfi, Yones; Ali Oghabian, Mohammad

    2013-01-01

    The sense of smell is a complex chemosensory process in humans and animals that allows them to connect with the environment through one of their chief sensory systems. In the field of functional brain imaging, many studies have focused on locating brain regions that are involved in olfactory processing. Despite a wealth of literature on brain networks in different olfactory tasks, there is a paucity of data regarding task design. Moreover, considering the importance of olfactory tasks for patients with a variety of neurological diseases, special considerations must be addressed for patients. In this article, we review current olfaction tasks for behavioral studies and functional neuroimaging assessments, as well as technical principles regarding the use of these tasks in functional magnetic resonance imaging studies.

  9. Study on Vignetting Correction of UAV Images and Its Application to the 2013 Ms7.0 Lushan Earthquake, China

    NASA Astrophysics Data System (ADS)

    Yuan, X.; Wang, X.; Dou, A.; Ding, X.

    2014-12-01

    As UAVs are widely used in earthquake disaster prevention and mitigation, the efficiency of UAV image processing determines the effectiveness of its application to pre-earthquake disaster prevention, post-earthquake emergency rescue, and disaster assessment. Because of bad weather conditions after a destructive earthquake, wide-field cameras capture images with a serious vignetting phenomenon, which can significantly affect the speed and efficiency of image mosaicking, especially the extraction of pre-earthquake building and geological structure information and the accuracy of post-earthquake quantitative damage extraction. In this paper, an improved radial gradient correction method (IRGCM) was developed, based on the radial gradient correction method (RGCM; Y. Zheng, 2008; 2013), to reduce the influence of the random distribution of land-surface objects on the images. First, a mean-value image was obtained by averaging a series of UAV images; it was used for calibration instead of single images to obtain the comprehensive vignetting function using RGCM. Each UAV image was then corrected by the comprehensive vignetting function. A case study was carried out to correct the UAV image sequence obtained in Lushan County after the Ms7.0 Lushan, Sichuan, China earthquake of April 20, 2013. The results show that the comprehensive vignetting function generated by IRGCM is more robust and accurate in expressing the specific optical response of a camera in a particular setting, and it is therefore particularly useful for correcting large numbers of UAV images with non-uniform illumination. The correction process is also simplified and faster than conventional methods. After correction, the images have better radial homogeneity and clearer details, which reduces the difficulty of image mosaicking and provides a better basis for further analysis and damage information extraction. 
Further tests show that good results were also obtained by applying the comprehensive vignetting function to other UAV image sequences from different regions. This research was supported by projects No. 2012BAK15B02 and 2013IES010106.
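    The basic RGCM-style step, fitting a radial gain function to a mean-value image and dividing it out of each frame, can be sketched as follows (a simplified polynomial-in-radius fit, not the published IRGCM):

```python
import numpy as np

def radial_gain(mean_img, deg=2):
    # Fit the gain g(r) as a polynomial in radius on the mean-value image,
    # normalized so that the optical centre has gain 1.
    h, w = mean_img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(ys - (h - 1) / 2.0, xs - (w - 1) / 2.0).ravel()
    coef = np.polyfit(r, mean_img.ravel(), deg)
    gain = np.polyval(coef, r).reshape(h, w)
    return gain / np.polyval(coef, 0.0)

def devignette(img, gain):
    # Divide out the estimated vignetting gain (guard against zeros)
    return img / np.clip(gain, 1e-6, None)
```

    Averaging many frames first is what makes the fit robust: random scene content cancels out, leaving mostly the camera's radial falloff.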

  10. Comparing diffuse optical tomography and functional magnetic resonance imaging signals during a cognitive task: pilot study

    PubMed Central

    Hernández-Martin, Estefania; Marcano, Francisco; Casanova, Oscar; Modroño, Cristian; Plata-Bello, Julio; González-Mora, Jose Luis

    2017-01-01

    Abstract. Diffuse optical tomography (DOT) measures concentration changes in both oxy- and deoxyhemoglobin, providing three-dimensional images of local brain activations. A pilot study is presented that compares DOT and functional magnetic resonance imaging (fMRI) volumes through t-maps given by canonical statistical parametric mapping (SPM) processing for both data modalities. The DOT series were processed using a method based on applying a Bayesian filter to raw DOT data to remove physiological changes, a minimum description length index to select the number of singular values, which reduces the data dimensionality during image reconstruction, and adaptation of the DOT volume series to normalized standard space. Statistical analysis is therefore performed with canonical SPM software in the same way as fMRI analysis, accepting DOT volumes as if they were fMRI volumes. The results show the reproducibility and robustness of the method for processing DOT series in group analyses using cognitive paradigms on the prefrontal cortex. Difficulties are considered, such as the fact that scalp–brain distances vary between subjects and that cerebral activations are difficult to reproduce because of the strategies subjects use to solve arithmetic problems. T-images from fMRI and DOT volume series analyzed in SPM show that, at the functional level, DOT and fMRI detect the same areas, although DOT provides information about cerebral activity complementary to fMRI signals. PMID:28386575

  11. Speech processing asymmetry revealed by dichotic listening and functional brain imaging.

    PubMed

    Hugdahl, Kenneth; Westerhausen, René

    2016-12-01

    In this article, we review research in our laboratory from the last 25 to 30 years on the neuronal basis for laterality of speech perception, focusing on the upper, posterior parts of the temporal lobes and their functional and structural connections to other brain regions. We review both behavioral and brain imaging data, with a focus on dichotic listening experiments, using a variety of imaging modalities. The data have come for the most part from healthy individuals and from studies of the normally functioning brain, although we also review a few selected clinical examples. We first review and discuss the structural model for the explanation of the right-ear advantage (REA) and left-hemisphere asymmetry for auditory language processing. A common theme across many studies has been our interest in the interaction between bottom-up, stimulus-driven, and top-down, instruction-driven, aspects of hemispheric asymmetry, and how perceptual factors interact with cognitive factors to shape asymmetry of auditory language information processing. In summary, our research has shown laterality for the initial processing of consonant-vowel syllables, first observed as a behavioral REA when subjects are required to report which syllable of a dichotic syllable-pair they perceive. In subsequent work we have corroborated the REA with brain imaging, and have shown that the REA is modulated through both bottom-up manipulations of stimulus properties, like sound intensity, and top-down manipulations of cognitive properties, like attention focus. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Digital image comparison by subtracting contextual transformations—percentile rank order differentiation

    USGS Publications Warehouse

    Wehde, M. E.

    1995-01-01

    The common method of digital image comparison by subtraction imposes various constraints on the image contents. Precise registration of images is required to assure proper evaluation of surface locations. The attribute being measured and the calibration and scaling of the sensor are also important to the validity and interpretability of the subtraction result. Influences of sensor gains and offsets complicate the subtraction process. The presence of any uniform systematic transformation component in one of two images to be compared distorts the subtraction results and requires analyst intervention to interpret or remove it. A new technique has been developed to overcome these constraints. Images to be compared are first transformed using the cumulative relative frequency as a transfer function. The transformed images represent the contextual relationship of each surface location with respect to all others within the image. The process of differentiating between the transformed images results in a percentile rank ordered difference. This process produces consistent terrain-change information even when the above requirements necessary for subtraction are relaxed. This technique may be valuable to an appropriately designed hierarchical terrain-monitoring methodology because it does not require human participation in the process.
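    The percentile rank order differentiation itself is compact: transform each image by its cumulative relative frequency, then subtract. A numpy sketch of this idea (illustrative, not the USGS implementation):

```python
import numpy as np

def percentile_rank(img):
    # Cumulative relative frequency of each pixel value: the fraction of
    # pixels in the image with a value <= this one, in (0, 1].
    flat = img.ravel()
    cdf = np.searchsorted(np.sort(flat), flat, side="right") / flat.size
    return cdf.reshape(img.shape)

def rank_difference(img1, img2):
    # Percentile rank order differentiation: contextual transform first,
    # then subtract -- uniform sensor gains and offsets cancel out.
    return percentile_rank(img2) - percentile_rank(img1)
```

    Because the transform depends only on the ordering of pixel values, any monotone change applied to one image (a gain, an offset) leaves its rank image, and hence the difference, unchanged.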

  13. Erotic Stimulus Processing under Amisulpride and Reboxetine: A Placebo-Controlled fMRI Study in Healthy Subjects

    PubMed Central

    Wiegers, Maike; Metzger, Coraline D.; Walter, Martin; Grön, Georg; Abler, Birgit

    2015-01-01

    Background: Impaired sexual function is increasingly recognized as a side effect of psychopharmacological treatment. However, underlying mechanisms of action of the different drugs on sexual processing are still to be explored. Using functional magnetic resonance imaging, we previously investigated effects of serotonergic (paroxetine) and dopaminergic (bupropion) antidepressants on sexual functioning (Abler et al., 2011). Here, we studied the impact of noradrenergic and antidopaminergic medication on neural correlates of visual sexual stimulation in a new sample of subjects. Methods: Nineteen healthy heterosexual males (mean age 24 years, SD 3.1) under subchronic intake (7 days) of the noradrenergic agent reboxetine (4mg/d), the antidopaminergic agent amisulpride (200mg/d), and placebo were included and studied with functional magnetic resonance imaging within a randomized, double-blind, placebo-controlled, within-subjects design during an established erotic video-clip task. Subjective sexual functioning was assessed using the Massachusetts General Hospital-Sexual Functioning Questionnaire. Results: Relative to placebo, subjective sexual functioning was attenuated under reboxetine along with diminished neural activations within the caudate nucleus. Altered neural activations correlated with decreased sexual interest. Under amisulpride, neural activations and subjective sexual functioning remained unchanged. Conclusions: In line with previous interpretations of the role of the caudate nucleus in the context of primary reward processing, attenuated caudate activation may reflect detrimental effects on motivational aspects of erotic stimulus processing under noradrenergic agents. PMID:25612894

  14. Feature tracking cardiac magnetic resonance imaging: A review of a novel non-invasive cardiac imaging technique

    PubMed Central

    Rahman, Zia Ur; Sethi, Pooja; Murtaza, Ghulam; Virk, Hafeez Ul Hassan; Rai, Aitzaz; Mahmod, Masliza; Schoondyke, Jeffrey; Albalbissi, Kais

    2017-01-01

Cardiovascular disease is a leading cause of morbidity and mortality globally. Early diagnostic markers are gaining popularity for better patient care and disease outcomes. There is an increasing interest in noninvasive cardiac imaging biomarkers to diagnose subclinical cardiac disease. Feature tracking cardiac magnetic resonance imaging is a novel post-processing technique that is increasingly being employed to assess global and regional myocardial function. This technique has numerous applications in structural and functional diagnostics. It has been validated in multiple studies, although there is still a long way to go for it to become routine standard of care. PMID:28515849

  15. A wavelet-based adaptive fusion algorithm of infrared polarization imaging

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang

    2011-08-01

The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can clearly distinguish targets from background by their differing features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method operates mainly on the high-frequency portion of the signal; the low-frequency portion is fused with a conventional weighted-average method. The high-frequency part is processed as follows: first, the high-frequency information of each source image is extracted by wavelet transform; then the signal strength within a 3×3 window is calculated, and the ratio of regional signal intensities between the source images is used as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module, and the fusion quality depends closely on the module's threshold setting. Instead of choosing this threshold experimentally, as is commonly done, the paper proposes a quadratic interpolation optimization algorithm: the endpoints and midpoint of the threshold search interval serve as initial interpolation nodes, the minimum of the quadratic interpolation function is computed, and comparing these minima yields the best threshold. A series of image quality evaluations shows that the method improves the fusion result, and that it is effective not only for individual images but also across large image sets.
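A rough numpy sketch of the high-frequency fusion rule described in the abstract, assuming the wavelet detail coefficients of the two source images have already been computed. The window size is 3×3 as stated; the 0.5 averaging weight and the function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def regional_strength(coeffs, win=3):
    """Sum of squared coefficients in a win x win neighbourhood (zero-padded)."""
    p = win // 2
    padded = np.pad(coeffs, p) ** 2
    out = np.zeros(coeffs.shape, dtype=float)
    for dy in range(win):
        for dx in range(win):
            out += padded[dy:dy + coeffs.shape[0], dx:dx + coeffs.shape[1]]
    return out

def fuse_highfreq(cA, cB, threshold=0.6):
    """Select-or-average rule driven by the regional signal-strength ratio."""
    eA, eB = regional_strength(cA), regional_strength(cB)
    # Matching measure: ratio of the weaker to the stronger regional strength.
    ratio = np.minimum(eA, eB) / (np.maximum(eA, eB) + 1e-12)
    select = ratio <= threshold            # dissimilar regions: pick the stronger
    fused = np.where(eA >= eB, cA, cB).astype(float)
    w = 0.5                                # similar regions: weighted average
    fused[~select] = w * cA[~select] + (1 - w) * cB[~select]
    return fused
```

The quadratic-interpolation search the paper describes would then be run over `threshold` to minimize a fusion-quality cost.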

  16. Functional magnetic resonance imaging in oncology: state of the art.

    PubMed

    Guimaraes, Marcos Duarte; Schuch, Alice; Hochhegger, Bruno; Gross, Jefferson Luiz; Chojniak, Rubens; Marchiori, Edson

    2014-01-01

    In the investigation of tumors with conventional magnetic resonance imaging, both quantitative characteristics, such as size, edema, necrosis, and presence of metastases, and qualitative characteristics, such as contrast enhancement degree, are taken into consideration. However, changes in cell metabolism and tissue physiology which precede morphological changes cannot be detected by the conventional technique. The development of new magnetic resonance imaging techniques has enabled the functional assessment of the structures in order to obtain information on the different physiological processes of the tumor microenvironment, such as oxygenation levels, cellularity and vascularity. The detailed morphological study in association with the new functional imaging techniques allows for an appropriate approach to cancer patients, including the phases of diagnosis, staging, response evaluation and follow-up, with a positive impact on their quality of life and survival rate.

  17. Ultramap: the all in One Photogrammetric Solution

    NASA Astrophysics Data System (ADS)

    Wiechert, A.; Gruber, M.; Karner, K.

    2012-07-01

This paper describes in detail the dense matcher developed over several years by Vexcel Imaging in Graz for Microsoft's Bing Maps project. The dense matcher was developed exclusively for, and used by, Microsoft in producing the 3D city models of Virtual Earth. It will now be made available to the public with the UltraMap software release in mid-2012, a revolutionary step in digital photogrammetry. The dense matcher automatically generates digital surface models (DSM) and digital terrain models (DTM) from a set of overlapping UltraCam images. The models have an outstanding point density of several hundred points per square meter with sub-pixel accuracy. The dense matcher consists of two steps. The first step rectifies overlapping image areas to speed up the dense image matching process; this rectification ensures very efficient processing and detects occluded areas by applying a back-matching step. In this dense image matching process, a cost function consisting of a matching score and a smoothness term is minimized. In the second step, the resulting range image patches are fused into a DSM by optimizing a global cost function. The whole process is optimized for multi-core CPUs and optionally uses GPUs if available. UltraMap 3.0 also features an additional step, presented in this paper: a fully automated true-ortho and ortho workflow, in which the UltraCam images are combined with the DSM or DTM in an automated rectification step to produce high-quality true-ortho or ortho images. The paper presents the new workflow and first results.

  18. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. While libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission and document usage. The DocMorph Server Web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  19. The neuronal correlates of intranasal trigeminal function – An ALE meta-analysis of human functional brain imaging data

    PubMed Central

    Albrecht, Jessica; Kopietz, Rainer; Frasnelli, Johannes; Wiesmann, Martin; Hummel, Thomas; Lundström, Johan N.

    2009-01-01

Almost every odor we encounter in daily life has the capacity to produce a trigeminal sensation. Surprisingly, few functional imaging studies exploring human neuronal correlates of intranasal trigeminal function exist, and results are to some degree inconsistent. We utilized activation likelihood estimation (ALE), a quantitative voxel-based meta-analysis tool, to analyze functional imaging data (fMRI/PET) following intranasal trigeminal stimulation with carbon dioxide (CO2), a stimulus known to exclusively activate the trigeminal system. Meta-analysis tools are able to identify activations common across studies, thereby enabling activation mapping with higher certainty. Activation foci of nine studies utilizing trigeminal stimulation were included in the meta-analysis. We found significant ALE scores, thus indicating consistent activation across studies, in the brainstem, ventrolateral posterior thalamic nucleus, anterior cingulate cortex, insula, precentral gyrus, as well as in primary and secondary somatosensory cortices – a network known for the processing of intranasal nociceptive stimuli. Significant ALE values were also observed in the piriform cortex, insula, and the orbitofrontal cortex, areas known to process chemosensory stimuli, and in association cortices. Additionally, the trigeminal ALE statistics were directly compared with ALE statistics originating from olfactory stimulation, demonstrating considerable overlap in activation. In conclusion, the results of this meta-analysis map the human neuronal correlates of intranasal trigeminal stimulation with high statistical certainty and demonstrate that the cortical areas recruited during the processing of intranasal CO2 stimuli include those outside traditional trigeminal areas. Moreover, by illustrating the considerable overlap between brain areas that process trigeminal and olfactory information, these results demonstrate the interconnectivity of flavor processing. PMID:19913573

  20. Automatic tissue image segmentation based on image processing and deep learning

    NASA Astrophysics Data System (ADS)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

Image segmentation plays an important role in multimodality imaging, especially in fusing structural images from CT or MRI with functional images collected by optical or other novel imaging technologies. Image segmentation also provides the detailed structural description needed for quantitative visualization of treatment light distribution in the human body when combined with a 3D light transport simulation method. Here we used image enhancement, morphological operators, and morphometry methods to extract accurate contours of different tissues, including skull, cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM), on five fMRI head image datasets. We then applied a convolutional neural network to achieve automatic deep-learning-based segmentation, and introduced parallel computing. These approaches greatly reduce processing time compared with manual and semi-automatic segmentation, and improve speed and accuracy as more and more samples are learned. Our results can serve as a criterion when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrate the great potential of combining image processing and deep learning for automatic tissue segmentation in personalized medicine, especially in monitoring and treatment.
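As one concrete example of the classical image-processing side of such a pipeline (not the authors' code), Otsu's method picks a global threshold by maximizing the between-class variance of the grey-level histogram, a common first step before contour extraction:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Return the threshold that maximizes between-class variance (Otsu)."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()          # normalized histogram
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                            # class-0 probability
    mu = np.cumsum(p * centers)                  # cumulative first moment
    mu_t = mu[-1]                                # global mean
    valid = (w0 > 0) & (w0 < 1)
    sigma_b = np.zeros_like(w0)
    # Between-class variance: (mu_t*w0 - mu)^2 / (w0*(1 - w0)).
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (
        w0[valid] * (1 - w0[valid]))
    return centers[np.argmax(sigma_b)]
```

On a bimodal image (e.g., bright skull voxels against a darker background) the returned threshold falls between the two modes, so `img > t` yields a binary tissue mask.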

  1. Study on Mosaic and Uniform Color Method of Satellite Image Fusion in Large Area

    NASA Astrophysics Data System (ADS)

    Liu, S.; Li, H.; Wang, X.; Guo, L.; Wang, R.

    2018-04-01

With improving satellite radiometric resolution, color differences among multi-temporal satellite remote sensing images, and the sheer volume of satellite image data, mosaicking and color balancing of satellite images remain important problems in image processing. First, using the bundle uniform color method and least-squares mosaic method of GXL together with a dodging function, a smooth transition of color and brightness can be achieved across large-area, multi-temporal satellite imagery. Second, Color Mapping software converts the 16-bit mosaic images to 8-bit mosaic images, applying the uniform color method with low-resolution reference images. Finally, qualitative and quantitative methods are used to analyse and evaluate the satellite imagery after mosaicking and color balancing. The tests show that the correlation between mosaic images before and after color balancing exceeds 95 %, while image information entropy increases and texture features are enhanced, as verified by quantitative indexes such as the correlation coefficient and information entropy. Satellite image mosaicking and color processing over large areas is thus well implemented.
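The two quantitative indexes the abstract relies on (correlation coefficient and information entropy) can be sketched in a few lines of numpy; the helper names are illustrative:

```python
import numpy as np

def correlation_coefficient(a, b):
    """Pearson correlation between two images, flattened."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    return float(np.corrcoef(a, b)[0, 1])

def image_entropy(img, bins=256):
    """Shannon information entropy of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())
```

A correlation above 0.95 between the pre- and post-balancing mosaics indicates the radiometry was adjusted without distorting the scene content, while a higher entropy indicates richer grey-level detail.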

  2. On-demand server-side image processing for web-based DICOM image display

    NASA Astrophysics Data System (ADS)

    Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo

    2000-04-01

Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of clients, the cost of client systems is a major problem, and a Web-based system is the most effective solution. A Web browser alone, however, cannot display medical images with certain image processing applied, such as a lookup table transformation. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server and delivered on demand. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination gives the look-and-feel of an imaging workstation, in both functionality and speed. Real-time image update while tracking mouse motion is achieved in the Web browser without any client-side image processing, which would otherwise require plug-in technology such as Java Applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients on a fast network, and a large number of clients on a normal-speed network. The results show that communication overhead is very slight and that the system scales well with the number of clients.
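The lookup table transformation mentioned here is typically a DICOM-style window/level mapping, which the server would apply before encoding each image for the browser. A minimal numpy sketch (the function name and the 8-bit output range are assumptions, not details from the paper):

```python
import numpy as np

def window_level(pixels, center, width):
    """Map raw pixel values to 8-bit display values using a
    window/level (window center / window width) lookup table."""
    lo = center - width / 2.0
    scaled = (pixels.astype(float) - lo) / width   # 0..1 inside the window
    return (np.clip(scaled, 0.0, 1.0) * 255.0).astype(np.uint8)
```

Dragging the mouse on the Web page would simply re-request the image with new `center`/`width` parameters, which is what makes the on-demand server-side design responsive without client-side processing.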

  3. Independent component model for cognitive functions of multiple subjects using [15O]H2O PET images.

    PubMed

    Park, Hae-Jeong; Kim, Jae-Jin; Youn, Tak; Lee, Dong Soo; Lee, Myung Chul; Kwon, Jun Soo

    2003-04-01

    An independent component model of multiple subjects' positron emission tomography (PET) images is proposed to explore the overall functional components involved in a task and to explain subject specific variations of metabolic activities under altered experimental conditions utilizing the Independent component analysis (ICA) concept. As PET images represent time-compressed activities of several cognitive components, we derived a mathematical model to decompose functional components from cross-sectional images based on two fundamental hypotheses: (1) all subjects share basic functional components that are common to subjects and spatially independent of each other in relation to the given experimental task, and (2) all subjects share common functional components throughout tasks which are also spatially independent. The variations of hemodynamic activities according to subjects or tasks can be explained by the variations in the usage weight of the functional components. We investigated the plausibility of the model using serial cognitive experiments of simple object perception, object recognition, two-back working memory, and divided attention of a syntactic process. We found that the independent component model satisfactorily explained the functional components involved in the task and discuss here the application of ICA in multiple subjects' PET images to explore the functional association of brain activations. Copyright 2003 Wiley-Liss, Inc.
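The decomposition this model relies on can be illustrated with a compact symmetric FastICA on synthetic mixed signals. This is a generic numpy sketch of ICA itself (tanh nonlinearity, eigenvalue whitening), not the authors' PET-specific model; all names are illustrative:

```python
import numpy as np

def fastica(X, n_components, n_iter=200, seed=0):
    """Symmetric FastICA with tanh nonlinearity on rows-as-signals data X.
    Returns the estimated independent components (unmixed rows)."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening via eigendecomposition of the covariance matrix.
    d, E = np.linalg.eigh(np.cov(X))
    K = (E / np.sqrt(d)).T                 # whitening matrix
    Z = K @ X                              # whitened data, identity covariance
    W = rng.standard_normal((n_components, n_components))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        # Fixed-point update: E[z g(w'z)] - E[g'(w'z)] w, row-wise.
        W_new = (G @ Z.T) / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation keeps the unmixing rows orthonormal.
        U, _, Vt = np.linalg.svd(W_new)
        W = U @ Vt
    return W @ Z
```

In the abstract's setting, the rows of `X` would be subjects' PET images flattened over voxels, and the recovered components would be the spatially independent functional maps whose usage weights vary across subjects and tasks.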

  4. Anterior cingulate hyperactivations during negative emotion processing among men with schizophrenia and a history of violent behavior.

    PubMed

    Tikàsz, Andràs; Potvin, Stéphane; Lungu, Ovidiu; Joyal, Christian C; Hodgins, Sheilagh; Mendrek, Adrianna; Dumais, Alexandre

    2016-01-01

Evidence suggests a 2.1- to 4.6-fold increase in the risk of violent behavior in schizophrenia compared to the general population. Current theories propose that the processing of negative emotions is defective in violent individuals and that dysfunctions within the neural circuits involved in emotion processing are implicated in violence. Although schizophrenia patients show enhanced sensitivity to negative stimuli, only a few functional neuroimaging studies have examined emotion processing among men with schizophrenia and a history of violence. The present study aimed to identify the brain regions with greater neurofunctional alterations, as detected by functional magnetic resonance imaging during an emotion processing task, of men with schizophrenia who had engaged in violent behavior compared with those who had not. Sixty men were studied: 20 with schizophrenia and a history of violence, 19 with schizophrenia and no violence, and 21 healthy men were scanned while viewing positive, negative, and neutral images. Negative images elicited hyperactivations in the anterior cingulate cortex (ACC), left and right lingual gyrus, and the left precentral gyrus in violent men with schizophrenia, compared to nonviolent men with schizophrenia and healthy men. Neutral images elicited hyperactivations in the right and left middle occipital gyrus, left lingual gyrus, and the left fusiform gyrus in violent men with schizophrenia, compared to the other two groups. Violent men with schizophrenia displayed specific increases in ACC in response to negative images. Given the role of the ACC in information integration, these results indicate a specific dysfunction in the processing of negative emotions that may trigger violent behavior in men with schizophrenia.

  5. Gender differences in brain activity generated by unpleasant word stimuli concerning body image: an fMRI study.

    PubMed

    Shirao, Naoko; Okamoto, Yasumasa; Mantani, Tomoyuki; Okamoto, Yuri; Yamawaki, Shigeto

    2005-01-01

We have previously reported that the temporomesial area, including the amygdala, is activated in women when processing unpleasant words concerning body image. The aim of the present study was to detect gender differences in brain activation during the processing of these words. Functional magnetic resonance imaging was used to investigate 13 men and 13 women during an emotional decision task consisting of unpleasant words concerning body image and neutral words. The left medial prefrontal cortex and hippocampus were activated only among men, and the left amygdala was activated only among women during the task; activation in the apical prefrontal region was significantly greater in men than in women. Our data suggest that the prefrontal region is responsible for the gender differences in the processing of words concerning body image, and may also be responsible for gender differences in susceptibility to eating disorders.

  6. Instructional image processing on a university mainframe: The Kansas system

    NASA Technical Reports Server (NTRS)

    Williams, T. H. L.; Siebert, J.; Gunn, C.

    1981-01-01

    An interactive digital image processing program package was developed that runs on the University of Kansas central computer, a Honeywell Level 66 multi-processor system. The module form of the package allows easy and rapid upgrades and extensions of the system and is used in remote sensing courses in the Department of Geography, in regional five-day short courses for academics and professionals, and also in remote sensing projects and research. The package comprises three self-contained modules of processing functions: Subimage extraction and rectification; image enhancement, preprocessing and data reduction; and classification. Its use in a typical course setting is described. Availability and costs are considered.

  7. Dual-slit confocal light sheet microscopy for in vivo whole-brain imaging of zebrafish

    PubMed Central

    Yang, Zhe; Mei, Li; Xia, Fei; Luo, Qingming; Fu, Ling; Gong, Hui

    2015-01-01

In vivo functional imaging at single-neuron resolution is an important approach to visualizing biological processes in neuroscience. Light sheet microscopy (LSM) is a cutting-edge in vivo imaging technique that provides micron-scale spatial resolution at high frame rate. Due to the scattering and absorption of tissue, however, conventional LSM is inadequate for resolving cells because of the attenuated signal-to-noise ratio (SNR). Using dual-beam illumination and confocal dual-slit detection, a dual-slit confocal LSM is demonstrated here that obtains SNR-enhanced images at twice the frame rate of the line-confocal LSM method. Through theoretical calculations and experiments, the relationship between slit width and SNR was determined to optimize image quality. In vivo whole-brain structural imaging stacks and functional imaging sequences of single slices were obtained for analysis of calcium activities at single-cell resolution. The two-fold increase in imaging speed over conventional confocal LSM makes it possible to capture the sequence of the neurons' activities and helps reveal potential functional connections in the whole zebrafish brain. PMID:26137381

  8. Education and the Living Image: Reflections on Imagery, Fantasy, and the Art of Recognition.

    ERIC Educational Resources Information Center

    Abbs, Peter

    1981-01-01

    The educational role of the artist is close to that of the dreamer in the sense that they are active collaborators in the extraordinary process through which instinct and bodily function are converted into image and fantasy. The development of an image can release powerful flows of intellectual energy. (JN)

  9. The potential of multiparametric MRI of the breast

    PubMed Central

    Pinker, Katja; Helbich, Thomas H

    2017-01-01

    MRI is an essential tool in breast imaging, with multiple established indications. Dynamic contrast-enhanced MRI (DCE-MRI) is the backbone of any breast MRI protocol and has an excellent sensitivity and good specificity for breast cancer diagnosis. DCE-MRI provides high-resolution morphological information, as well as some functional information about neoangiogenesis as a tumour-specific feature. To overcome limitations in specificity, several other functional MRI parameters have been investigated and the application of these combined parameters is defined as multiparametric MRI (mpMRI) of the breast. MpMRI of the breast can be performed at different field strengths (1.5–7 T) and includes both established (diffusion-weighted imaging, MR spectroscopic imaging) and novel MRI parameters (sodium imaging, chemical exchange saturation transfer imaging, blood oxygen level-dependent MRI), as well as hybrid imaging with positron emission tomography (PET)/MRI and different radiotracers. Available data suggest that multiparametric imaging using different functional MRI and PET parameters can provide detailed information about the underlying oncogenic processes of cancer development and progression and can provide additional specificity. This article will review the current and emerging functional parameters for mpMRI of the breast for improved diagnostic accuracy in breast cancer. PMID:27805423

  10. Functional imaging of semantic memory predicts postoperative episodic memory functions in chronic temporal lobe epilepsy.

    PubMed

    Köylü, Bülent; Walser, Gerald; Ischebeck, Anja; Ortler, Martin; Benke, Thomas

    2008-08-05

Medial temporal lobe (MTL) structures have crucial functions in episodic memory (EM) as well as semantic memory (SM) processing. Preoperative functional magnetic resonance imaging (fMRI) activity within the MTL is increasingly used to predict post-surgical memory capacities. Based on the hypothesis that EM and SM functions are both hosted by the MTL, the present study explored the relationship between SM-related activations in the MTL assessed before surgery and EM capacity after surgery. Patients with chronic unilateral left (n=14) and right (n=12) temporal lobe epilepsy (TLE) performed a standard word list learning test pre- and postoperatively, and an fMRI procedure before the operation using a semantic decision task. SM processing caused significant bilateral MTL activations in both patient groups. While right TLE patients showed asymmetry of fMRI activation with more activation in the left MTL, left TLE patients had almost equal activation in both MTL regions. Contrasting left TLE versus right TLE patients revealed greater activity within the right MTL, whereas no significant difference was observed for the reverse contrast. Greater effect size in the MTL region ipsilateral to the seizure focus was significantly and positively correlated with preoperative EM abilities. Greater effect size in the contralateral MTL was correlated with better postoperative verbal EM, especially in left TLE patients. These results suggest that functional imaging of SM tasks may be useful to predict postoperative verbal memory in TLE. They also advocate a common neuroanatomical basis for SM and EM processes in the MTL.

  11. Atypical Brain Activation during Simple & Complex Levels of Processing in Adult ADHD: An fMRI Study

    ERIC Educational Resources Information Center

    Hale, T. Sigi; Bookheimer, Susan; McGough, James J.; Phillips, Joseph M.; McCracken, James T.

    2007-01-01

    Objective: Executive dysfunction in ADHD is well supported. However, recent studies suggest that more fundamental impairments may be contributing. We assessed brain function in adults with ADHD during simple and complex forms of processing. Method: We used functional magnetic resonance imaging with forward and backward digit spans to investigate…

  12. A Hitchhiker's Guide to Functional Magnetic Resonance Imaging

    PubMed Central

    Soares, José M.; Magalhães, Ricardo; Moreira, Pedro S.; Sousa, Alexandre; Ganz, Edward; Sampaio, Adriana; Alves, Victor; Marques, Paulo; Sousa, Nuno

    2016-01-01

Functional Magnetic Resonance Imaging (fMRI) studies have become increasingly popular both with clinicians and researchers as they are capable of providing unique insights into brain functions. However, multiple technical considerations (ranging from specifics of paradigm design to imaging artifacts, complex protocol definition, a multitude of processing steps and methods of analysis, as well as intrinsic methodological limitations) must be considered and addressed in order to optimize fMRI analysis and to arrive at the most accurate and grounded interpretation of the data. In practice, the researcher/clinician must choose, from many available options, the most suitable software tool for each stage of the fMRI analysis pipeline. Herein we provide a straightforward guide designed to address, for each of the major stages, the techniques and tools involved in the process. We have developed this guide both to help those new to the technique to overcome the most critical difficulties in its use, as well as to serve as a resource for the neuroimaging community. PMID:27891073

  13. Initial experience with a nuclear medicine viewing workstation

    NASA Astrophysics Data System (ADS)

    Witt, Robert M.; Burt, Robert W.

    1992-07-01

Graphical user interface (GUI) workstations are now available from commercial vendors. We recently installed a GUI workstation in our nuclear medicine reading room for the exclusive use of staff and resident physicians. The system is built upon a Macintosh platform and has been available as a DELTAmanager from MedImage and more recently as an ICON V from Siemens Medical Systems. The workstation provides only display functions and connects to our existing nuclear medicine imaging system via Ethernet. The system has some processing capabilities to create oblique, sagittal and coronal views from transverse tomographic views. Hard copy output is via a screen save device and a thermal color printer. The DELTAmanager replaced a MicroDELTA workstation which had both process and view functions. The mouse-activated GUI has made remarkable changes to physicians' use of the nuclear medicine viewing system. Training time to view and review studies has been reduced from hours to about 30 minutes. Generation of oblique views and display of brain and heart tomographic studies has been reduced from about 30 minutes of technician's time to about 5 minutes of physician's time. Overall operator functionality has been increased so that resident physicians with little prior computer experience can access all images on the image server and display pertinent patient images when consulting with other staff.

  14. Suggestion-Induced Modulation of Semantic Priming during Functional Magnetic Resonance Imaging

    PubMed Central

    Ulrich, Martin; Kiefer, Markus; Bongartz, Walter; Grön, Georg; Hoenig, Klaus

    2015-01-01

    Using functional magnetic resonance imaging during a primed visual lexical decision task, we investigated the neural and functional mechanisms underlying modulations of semantic word processing through hypnotic suggestions aimed at altering lexical processing of primes. The priming task was to discriminate between target words and pseudowords presented 200 ms after the prime word which was semantically related or unrelated to the target. In a counterbalanced study design, each participant performed the task once at normal wakefulness and once after the administration of hypnotic suggestions to perceive the prime as a meaningless symbol of a foreign language. Neural correlates of priming were defined as significantly lower activations upon semantically related compared to unrelated trials. We found significant suggestive treatment-induced reductions in neural priming, albeit irrespective of the degree of suggestibility. Neural priming was attenuated upon suggestive treatment compared with normal wakefulness in brain regions supporting automatic (fusiform gyrus) and controlled semantic processing (superior and middle temporal gyri, pre- and postcentral gyri, and supplementary motor area). Hence, suggestions reduced semantic word processing by conjointly dampening both automatic and strategic semantic processes. PMID:25923740

  15. Optimizing MR imaging-guided navigation for focused ultrasound interventions in the brain

    NASA Astrophysics Data System (ADS)

    Werner, B.; Martin, E.; Bauer, R.; O'Gorman, R.

    2017-03-01

    MR imaging during transcranial MR imaging-guided Focused Ultrasound surgery (tcMRIgFUS) is challenging due to the complex ultrasound transducer setup and the water bolus used for acoustic coupling. Achievable image quality in the tcMRIgFUS setup using the standard body coil is significantly inferior to current neuroradiologic standards. As a consequence, MR image guidance for precise navigation in functional neurosurgical interventions using tcMRIgFUS is basically limited to the acquisition of MR coordinates of salient landmarks such as the anterior and posterior commissure for aligning a stereotactic atlas. Here, we show how improved MR image quality provided by a custom built MR coil and optimized MR imaging sequences can support imaging-guided navigation for functional tcMRIgFUS neurosurgery by visualizing anatomical landmarks that can be integrated into the navigation process to accommodate for patient specific anatomy.

  16. TIA Software User's Manual

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott; Syed, Hazari I.

    1995-01-01

    This user's manual describes the installation and operation of TIA, the Thermal-Imaging acquisition and processing Application, developed by the Nondestructive Evaluation Sciences Branch at NASA Langley Research Center, Hampton, Virginia. TIA is a user-friendly graphical interface application for the Macintosh II and higher series computers. The software has been developed to interface with the Perceptics/Westinghouse Pixelpipe(TM) and PixelStore(TM) NuBus cards and the GW Instruments MacADIOS(TM) input-output (I/O) card for the Macintosh for imaging thermal data. The software is also capable of performing generic image-processing functions.

  17. Image deblurring using a joint entropy prior in x-ray luminescence computed tomography

    NASA Astrophysics Data System (ADS)

    Su, Chang; Dutta, Joyita; Zhang, Hui; El Fakhri, Georges; Li, Quanzheng

    2017-03-01

    X-ray luminescence computed tomography (XLCT) is an emerging hybrid imaging modality that can provide functional and anatomical images at the same time. Traditional narrow-beam XLCT achieves high spatial resolution as well as high sensitivity. However, by treating the CCD camera as a single-pixel detector, this scheme resembles first-generation CT scanners, resulting in a long scanning time and a high radiation dose. Although cone-beam or fan-beam XLCT can mitigate this problem once an optical propagation model is introduced, image quality suffers because the inverse problem is ill-conditioned. Much effort has been devoted to improving image quality through hardware improvements or new reconstruction techniques for XLCT. The objective of this work is to further enhance the already reconstructed image by introducing anatomical information through retrospective processing. The deblurring process used a spatially variant point spread function (PSF) model and a joint-entropy-based anatomical prior derived from a CT image acquired with the same XLCT system. A numerical experiment was conducted with a real mouse CT image from the Digimouse phantom used as the anatomical prior. The resulting images of the bone and lung regions showed sharp edges and good consistency with the CT image. Activity error was reduced by 52.3% even for nanophosphor lesions as small as 0.8 mm.
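
    The spatially variant PSF model and joint-entropy prior used here are beyond a short example, but the core deblurring step can be illustrated with a minimal, spatially invariant Richardson-Lucy sketch in numpy. The Gaussian PSF, image size, and iteration count below are assumptions for illustration, not the paper's model:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered, normalized Gaussian PSF the same size as the image."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def fft_convolve(img, psf):
    """Circular convolution via FFT; the centered PSF is shifted to the origin first."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Multiplicative RL updates; non-negativity is preserved by construction."""
    est = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        ratio = blurred / (fft_convolve(est, psf) + eps)
        est = est * fft_convolve(ratio, psf_flip)
    return est
```

A prior-driven method like the one in the abstract would additionally weight each update by an anatomical penalty; the sketch above is the unregularized baseline it builds on.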

  18. Activity in the fusiform face area supports expert perception in radiologists and does not depend upon holistic processing of images

    NASA Astrophysics Data System (ADS)

    Engel, Stephen A.; Harley, Erin M.; Pope, Whitney B.; Villablanca, J. Pablo; Mazziotta, John C.; Enzmann, Dieter

    2009-02-01

    Training in radiology dramatically changes observers' ability to process images, but the neural bases of this visual expertise remain unexplored. Prior imaging work has suggested that the fusiform face area (FFA), normally selectively responsive to faces, becomes responsive to images in observers' area of expertise. The FFA has been hypothesized to be important for "holistic" processing that integrates information across the entire image. Here, we report a cross-sectional study of radiologists that used functional magnetic resonance imaging to measure neural activity in first-year radiology residents, fourth-year radiology residents, and practicing radiologists as they detected abnormalities in chest radiographs. Across subjects, activity in the FFA correlated with visual expertise, measured as behavioral performance during scanning. To test whether processing in the FFA was holistic, we measured its responses both to intact radiographs and radiographs that had been divided into 25 square pieces whose locations were scrambled. Activity in the FFA was equal in magnitude for intact and scrambled images, and responses to both kinds of stimuli correlated reliably with expertise. These results suggest that the FFA is one of the cortical regions that provides the basis of expertise in radiology, but that its contribution is not holistic processing of images.

  19. Real-time microstructural and functional imaging and image processing in optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Westphal, Volker

    Optical Coherence Tomography (OCT) is a noninvasive optical imaging technique that allows high-resolution cross-sectional imaging of tissue microstructure, achieving a spatial resolution of about 10 μm. OCT is similar to B-mode ultrasound (US) except that it uses infrared light instead of ultrasound. In contrast to US, no coupling gel is needed, simplifying image acquisition. Furthermore, the fiber-optic implementation of OCT is compatible with endoscopes. In recent years, the transition from slow bench-top imaging systems to real-time clinical systems has been under way. This has led to a variety of applications, namely in ophthalmology, gastroenterology, dermatology, and cardiology. First, this dissertation demonstrates that OCT is capable of imaging and differentiating clinically relevant tissue structures in the gastrointestinal tract; a careful in vitro correlation study between endoscopic OCT images and corresponding histological slides was performed. Beyond structural imaging, the OCT systems were further developed for functional imaging, for example to visualize blood flow. Previously, imaging flow in small vessels in real time was not possible. For this research, a new processing scheme similar to real-time Doppler in US was introduced. It was implemented in dedicated hardware to allow real-time acquisition and overlaid display of blood flow in vivo, achieving a sensitivity of 0.5 mm/s. Optical coherence microscopy (OCM) is a variation of OCT that improves the resolution even further, to a few micrometers; advances made in the OCT scan engine for the Doppler setup enabled real-time OCM imaging in vivo. Finally, to generate geometrically correct images for all the preceding applications in real time, extensive image processing algorithms were developed and demonstrated, including corrections for distortions due to non-telecentric scanning, nonlinear scan-mirror movements, and refraction. This has led to interesting new applications, for example imaging of the anterior segment of the eye.

  20. In-vivo optical molecular imaging for laser hyperthermia

    NASA Astrophysics Data System (ADS)

    Zeng, Shaoqun; Zhang, Zhihong; Zhou, Wei; Luo, Qingming

    2002-04-01

    Green fluorescent protein (GFP)-transfected HeLa cells were implanted in nude mice to construct an in vivo model for monitoring the therapeutic effect of laser hyperthermia in real time. A cooled-CCD fluorescence imaging system was used to record the tumor fluorescence image during the hyperthermia process. Preliminary experimental results are presented in this paper. To verify that the fluorescent probe GFP does not strongly affect the biological function of the host tumor cell (HeLa cell), several conventional biological processes were observed in real time. First, neurons, which are much more fragile than tumor cells, were transfected with GFP (cameleons); no morphological inhomogeneities were observed, and the neurons showed normal functional responses when stimulated with histamine. Second, the mitosis of cultured HeLa cells was monitored; the features observed during mitosis confirmed that the transfection does not disrupt the mitosis of the tumor cells. Finally, nude mice bearing tumor cells were prepared, whose tumor regions emit fluorescence when excited with a faint laser. This work provides an in vivo biological model for rapid monitoring of the therapeutic results of tumor hyperthermia.

  1. Recovering the fine structures in solar images

    NASA Technical Reports Server (NTRS)

    Karovska, Margarita; Habbal, S. R.; Golub, L.; Deluca, E.; Hudson, Hugh S.

    1994-01-01

    Several examples demonstrate the capability of the blind iterative deconvolution (BID) technique to recover the real point spread function when only limited a priori information is available about its characteristics. To demonstrate the potential of image post-processing for probing the fine-scale structure and temporal variability of the solar atmosphere, the BID technique is applied to different samples of solar observations from space. The BID technique was originally proposed for correcting the effects of atmospheric turbulence on optical images. The processed images provide a detailed view of the spatial structure of the solar atmosphere at different heights in regions with different large-scale magnetic field structures.
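
    The alternating idea behind BID (estimate the object assuming the current PSF, then the PSF assuming the current object, re-imposing known constraints each pass) can be sketched minimally in numpy. This is not the authors' exact algorithm: the Wiener-regularized division, constraint set, and iteration count are assumptions:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered, normalized Gaussian kernel the same size as the image."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def fft_convolve(img, psf):
    """Circular convolution via FFT with a centered PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

def wiener_divide(G, K, balance=1e-2):
    """Regularized frequency-domain division G/K."""
    return G * np.conj(K) / (np.abs(K) ** 2 + balance)

def blind_iterative_deconv(blurred, psf_guess, n_iter=10):
    """Alternate object/PSF estimates, re-imposing non-negativity and
    unit PSF sum (the a priori constraints) after every pass."""
    G = np.fft.fft2(blurred)
    psf = psf_guess / psf_guess.sum()
    obj = blurred.copy()
    for _ in range(n_iter):
        # update object assuming the current PSF
        K = np.fft.fft2(np.fft.ifftshift(psf))
        obj = np.real(np.fft.ifft2(wiener_divide(G, K)))
        obj = np.clip(obj, 0, None)            # image-domain constraint
        # update PSF assuming the current object
        O = np.fft.fft2(obj)
        psf = np.real(np.fft.fftshift(np.fft.ifft2(wiener_divide(G, O))))
        psf = np.clip(psf, 0, None)
        psf /= psf.sum() + 1e-12               # PSF stays normalized
    return obj, psf
```

The constraints are what make the blind problem tractable; in practice BID variants differ mainly in which constraints are imposed and how the division is regularized.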

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pantelis, Evaggelos, E-mail: vpantelis@phys.uoa.g; Medical Physics Laboratory, Medical School, University of Athens, Athens; Papadakis, Nikolaos

    Purpose: To study the efficacy of the integration of functional magnetic resonance imaging (fMRI) and diffusion tensor imaging tractography data into stereotactic radiosurgery clinical practice. Methods and Materials: fMRI and tractography data sets were acquired and fused with corresponding anatomical MR and computed tomography images of patients with arteriovenous malformation (AVM), astrocytoma, brain metastasis, or hemangioma and referred for stereotactic radiosurgery. The acquired data sets were imported into a CyberKnife stereotactic radiosurgery system and used to delineate the target, organs at risk, and nearby functional structures and fiber tracts. Treatment plans with and without the incorporation of the functional structures and the fiber tracts into the optimization process were developed and compared. Results: The nearby functional structures and fiber tracts could receive doses of >50% of the maximum dose if they were excluded from the planning process. In the AVM case, the doses received by the Brodmann area 17 structure and the optic tract were reduced to 700 cGy from 1,400 cGy and to 1,200 cGy from 2,000 cGy, respectively, upon inclusion into the optimization process. In the metastasis case, the motor cortex received 850 cGy instead of 1,400 cGy; and in the hemangioma case, the pyramidal tracts received 780 cGy instead of 990 cGy. In the astrocytoma case, the dose to the motor cortex bordering the lesion was reduced to 1,900 cGy from 2,100 cGy, and therefore, the biologically equivalent dose in three fractions was delivered instead. Conclusions: Functional structures and fiber tracts could receive high doses if they were not considered during treatment planning. With the aid of fMRI and tractography images, they can be delineated and spared.

  3. Research of generalized wavelet transformations of Haar correctness in remote sensing of the Earth

    NASA Astrophysics Data System (ADS)

    Kazaryan, Maretta; Shakhramanyan, Mihail; Nedkov, Roumen; Richter, Andrey; Borisova, Denitsa; Stankova, Nataliya; Ivanova, Iva; Zaharinova, Mariana

    2017-10-01

    In this paper, generalized Haar wavelet functions are applied to the problem of ecological monitoring by remote sensing of the Earth. We study generalized Haar wavelet series and suggest the use of Tikhonov's regularization method for investigating their correctness. In the solution of this problem, an important role is played by the classes of functions introduced and described in detail by I.M. Sobol for studying multidimensional quadrature formulas, which contain functions with rapidly convergent Haar wavelet series. A theorem on the stability and uniform convergence of the regularized summation function of a generalized Haar wavelet series of a function from this class with approximate coefficients is proved. The article also examines the use of orthogonal transformations in Earth remote sensing technologies for environmental monitoring. Remote sensing of the Earth provides spacecraft imagery of medium and high spatial resolution and allows hyperspectral measurements; spacecraft carry tens or hundreds of spectral channels. To process the images, discrete orthogonal transforms, namely wavelet transforms, were used. The aim of the work is to apply the regularization method to one of the problems associated with remote sensing of the Earth and subsequently to process the satellite images through discrete orthogonal transformations, in particular generalized Haar wavelet transforms. General methods of research: Tikhonov's regularization method, elements of mathematical analysis, the theory of discrete orthogonal transformations, and methods for decoding satellite images. Scientific novelty: the processing of archival satellite images, in particular signal filtering, is investigated as an ill-posed problem, and the regularization parameters for discrete orthogonal transformations are determined.
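
    For reference, the classical (non-generalized) single-level 2D Haar transform that underlies this family can be sketched in a few lines of numpy. The averaging normalization used here is one of several common conventions, not necessarily the one in the paper:

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform (image sides must be even)."""
    a = (img[0::2] + img[1::2]) / 2.0      # row-pair average
    d = (img[0::2] - img[1::2]) / 2.0      # row-pair detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0   # approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0   # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0   # diagonal detail
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img
```

Repeating `haar2d` on the LL band yields the multi-level decomposition used for filtering satellite imagery; the generalized Haar series in the abstract replaces the fixed dyadic basis with a broader function family.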

  4. Processing electronic photos of Mercury produced by ground based observation

    NASA Astrophysics Data System (ADS)

    Ksanfomality, Leonid

    New images of Mercury have been obtained by processing ground-based observations carried out with the short-exposure technique. The disk of the planet usually extends from 6 to 7 arc seconds, with the linear size of the image in the focal plane of the telescope about 0.3-0.5 mm on average. Processing the initial millisecond electronic photos of the planet is very labor-intensive. Some features of processing initial millisecond electronic photos by correlation-stacking methods were considered in (Ksanfomality et al., 2005; Ksanfomality and Sprague, 2007). The method relies on manual selection of good photos, including a so-called pilot-file, which usually must also be found manually. The pilot-file is the most successful frame in the opinion of the operator, and it determines the outcome of the stacking; changing pilot-files multiplies the labor of processing. The processing programs analyze the contents of a sample, find any details in it, and search for the recurrence of these almost imperceptible details in thousands of other electronic pictures being stacked. If, based on experience, the form and position of a pilot-file can still be estimated, judging the reality of barely distinct details in it lies somewhere between imaging and imagination. In 2006-07 some programs for automatic processing were created; unfortunately, none of them performs as well as manual selection. Together with the selection, some other known methods are used. The point spread function (PSF) is described by a known mathematical function whose central part decreases smoothly from the center. Usually the width of this function is taken at a level of 0.7 or 0.5 of the maximum. If many thousands of initial electronic pictures are acquired, it is possible during processing to take advantage of the known statistics of random variables and to choose the width of the function at a level of, say, 0.9 of the maximum. The resolution of the image then improves appreciably. An essential element of the processing is the mathematical model of the unsharp mask. But this is a two-edged instrument: the result depends on the choice of the mask size. If the size is too small, all low spatial frequencies are lost and the image becomes uniformly grey; if, on the contrary, the unsharp mask is too large, all fine details disappear. In some cases the compromise in selecting the parameters of the unsharp mask becomes critical.
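
    The mask-size tradeoff described above can be illustrated with a minimal numpy sketch of an unsharp mask. This is not the authors' implementation; the box blur and the `size`/`amount` parameters are generic illustration choices:

```python
import numpy as np

def box_blur(img, size):
    """Separable box filter; `size` plays the role of the unsharp-mask size."""
    kernel = np.ones(size) / size
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)

def unsharp_mask(img, size=9, amount=1.0):
    """out = img + amount * (img - blur). A small `size` suppresses low
    spatial frequencies (flat grey result); a large `size` leaves fine
    detail nearly untouched in the difference term."""
    return img + amount * (img - box_blur(img, size))
```

On a step edge the mask overshoots on both sides, which is exactly the edge-enhancement effect exploited when sharpening the stacked planetary images.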

  5. Research for Key Techniques of Geophysical Recognition System of Hydrocarbon-induced Magnetic Anomalies Based on Hydrocarbon Seepage Theory

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Hao, T.; Zhao, B.

    2009-12-01

    Hydrocarbon seepage effects can cause magnetic alteration zones near the surface, and the magnetic anomalies induced by these alteration zones can be used to locate oil-gas potential regions. In order to reduce the inaccuracy and ambiguity of hydrocarbon anomalies recognized from magnetic data alone, and to meet the requirements of integrated management and synthetic analysis of multi-source geoscientific data, it is necessary to construct a recognition system that integrates data management, real-time processing, synthetic evaluation, and geologic mapping. In this paper, research on the key techniques of the system is discussed. Image processing methods can be applied to potential-field images to make them easier to interpret visually and understand geologically. In gravity or magnetic images, anomalies with identical frequency-domain characteristics but different spatial distributions will differ in texture and the associated textural statistics. Texture is a description of the structural arrangement and spatial variation of a dataset or image, and has been applied in many research fields. Textural analysis is a procedure that extracts textural features by image processing methods and thus obtains a quantitative or qualitative description of texture. When two kinds of anomalies have no distinct difference in amplitude, or overlap in the frequency spectrum, they may still be distinguishable by their texture, which can be treated as textural contrast. Therefore, for the recognition system we propose a new "magnetic spots" recognition method based on image processing techniques.
    The method comprises three major steps. First, local anomalies caused by shallow, relatively small sources are separated from the total magnetic field, and the local magnetic anomaly data are pre-processed by image processing methods (histogram-equalization-based display, object recognition and extraction) so that magnetic anomalies can be expressed as points, lines, and polygons with spatial correlation. Second, the spatial characteristics and correlations of the magnetic anomalies are mined using textural statistics and analysis, and the features of known anomalous objects (closures, hydrocarbon-bearing structures, igneous rocks, etc.) in the same research area are studied. Finally, the anomalies are classified and clustered according to their similarity, and hydrocarbon-induced "magnetic spots" are predicted in combination with geologic, drilling, and rock-core data. The system uses ArcGIS as the secondary development platform, inherits the basic functions of ArcGIS, and adds two special functional modules: one for conventional potential-field data processing methods and one for feature extraction and enhancement based on image processing and analysis techniques. The system can be applied to the geophysical detection and recognition of near-surface hydrocarbon seepage anomalies, provide technical support for locating oil-gas potential regions, and make geophysical data processing and interpretation more efficient.
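
    The textural-statistics step can be illustrated with a minimal grey-level co-occurrence (GLCM) contrast measure in numpy. The level count and the horizontal one-pixel offset are arbitrary illustration choices, not the system's actual parameters:

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Quantize to `levels` grey levels, build the co-occurrence matrix for
    horizontally adjacent pixels, and return the GLCM contrast statistic."""
    q = np.clip((img / (img.max() + 1e-12) * levels).astype(int), 0, levels - 1)
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (left, right), 1)        # count co-occurrences
    glcm /= glcm.sum()                       # normalize to a joint PDF
    i, j = np.indices(glcm.shape)
    return float(((i - j) ** 2 * glcm).sum())
```

Two anomaly fields with similar amplitude and spectrum but different spatial arrangement give different contrast values, which is the property the "magnetic spots" classification relies on.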

  6. Brain function differences in language processing in children and adults with autism.

    PubMed

    Williams, Diane L; Cherkassky, Vladimir L; Mason, Robert A; Keller, Timothy A; Minshew, Nancy J; Just, Marcel Adam

    2013-08-01

    Comparison of brain function between children and adults with autism provides an understanding of the effects of the disorder and associated maturational differences on language processing. Functional imaging (functional magnetic resonance imaging) was used to examine brain activation and cortical synchronization during the processing of literal and ironic texts in 15 children with autism, 14 children with typical development, 13 adults with autism, and 12 adult controls. Both the children and adults with autism had lower functional connectivity (synchronization of brain activity among activated areas) than their age and ability comparison group in the left hemisphere language network during irony processing, and neither autism group had an increase in functional connectivity in response to increased task demands. Activation differences for the literal and irony conditions occurred in key language-processing regions (left middle temporal, left pars triangularis, left pars opercularis, left medial frontal, and right middle temporal). The children and adults with autism differed from each other in the use of some brain regions during the irony task, with the adults with autism having activation levels similar to those of the control groups. Overall, the children and adults with autism differed from the adult and child controls in (a) the degree of network coordination, (b) the distribution of the workload among member nodes, and (c) the dynamic recruitment of regions in response to text content. Moreover, the differences between the two autism age groups may be indicative of positive changes in the neural function related to language processing associated with maturation and/or educational experience. © 2013 International Society for Autism Research, Wiley Periodicals, Inc.

  7. Brain Function Differences in Language Processing in Children and Adults with Autism

    PubMed Central

    Williams, Diane L.; Cherkassky, Vladimir L.; Mason, Robert A.; Keller, Timothy A.; Minshew, Nancy J.; Just, Marcel Adam

    2015-01-01

    Comparison of brain function between children and adults with autism provides an understanding of the effects of the disorder and associated maturational differences on language processing. Functional imaging (functional magnetic resonance imaging) was used to examine brain activation and cortical synchronization during the processing of literal and ironic texts in 15 children with autism, 14 children with typical development, 13 adults with autism, and 12 adult controls. Both the children and adults with autism had lower functional connectivity (synchronization of brain activity among activated areas) than their age and ability comparison group in the left hemisphere language network during irony processing, and neither autism group had an increase in functional connectivity in response to increased task demands. Activation differences for the literal and irony conditions occurred in key language-processing regions (left middle temporal, left pars triangularis, left pars opercularis, left medial frontal, and right middle temporal). The children and adults with autism differed from each other in the use of some brain regions during the irony task, with the adults with autism having activation levels similar to those of the control groups. Overall, the children and adults with autism differed from the adult and child controls in (a) the degree of network coordination, (b) the distribution of the workload among member nodes, and (c) the dynamic recruitment of regions in response to text content. Moreover, the differences between the two autism age groups may be indicative of positive changes in the neural function related to language processing associated with maturation and/or educational experience. PMID:23495230

  8. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions.

    PubMed

    Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A

    2015-09-21

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. 
The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.
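
    The measurement/null decomposition described here can be sketched for a toy discretized operator. A PP operator is continuous-to-discrete, so the finite matrix below is only an illustration of the SVD bookkeeping; the dimensions are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete system operator: 12 measurements of a 30-pixel object,
# so the null space is at least 18-dimensional.
H = rng.normal(size=(12, 30))
f = rng.normal(size=30)                  # the object being imaged

U, s, Vt = np.linalg.svd(H, full_matrices=True)
r = int(np.sum(s > 1e-10 * s[0]))        # numerical rank

# The measurement component lives in the span of the first r right
# singular vectors; the null component is what the system cannot see.
f_meas = Vt[:r].T @ (Vt[:r] @ f)
f_null = f - f_meas
```

For a PC system the null space is infinite-dimensional for any finite number of bins; the point of the abstract is that a PP operator, acting on continuous attributes, escapes that dimensional argument.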

  9. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Barrett, Harrison H.; Frey, Eric C.; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A.

    2015-09-01

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. 
The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.

  10. Optical aberration correction for simple lenses via sparse representation

    NASA Astrophysics Data System (ADS)

    Cui, Jinlin; Huang, Wei

    2018-04-01

    Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and easy to manufacture. However, they suffer from optical aberrations that limit high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many point spread functions calibrated in advance at different depths can be used to restore images in a short time; this generalizes to nonblind deconvolution methods, addressing the excessive processing time caused by the number of point spread functions. The optical design software CODE V is used to examine the reliability of the proposed method by simulation. The simulation results reveal that the suggested method outperforms traditional methods, and the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. In particular, the prior information obtained with CODE V can be used to process real images from a single-lens camera, providing a convenient and accurate way to obtain the point spread functions of single-lens cameras.
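
    The depth-indexed PSF idea can be sketched with plain Wiener deconvolution as a stand-in for the paper's sparse-representation method. The `psf_bank` depths and Gaussian PSFs are invented for illustration; a real bank would hold PSFs calibrated (e.g. in CODE V) for the actual lens:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered, normalized Gaussian PSF (stand-in for a calibrated PSF)."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def fft_convolve(img, psf):
    """Circular convolution via FFT with a centered PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

def wiener_deconvolve(blurred, psf, balance=1e-3):
    """Nonblind restoration with the PSF matching the scene depth."""
    G = np.fft.fft2(blurred)
    K = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(G * np.conj(K) / (np.abs(K) ** 2 + balance)))

# Hypothetical calibration table: depth (m) -> PSF. Wider blur at depth 2 m.
psf_bank = {depth: gaussian_psf((33, 33), sigma)
            for depth, sigma in [(0.5, 1.0), (1.0, 1.8), (2.0, 2.6)]}
```

Selecting the PSF by estimated depth, rather than re-deriving it per image, is what keeps the per-image processing time short.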

  11. Normalization of T2W-MRI prostate images using Rician a priori

    NASA Astrophysics Data System (ADS)

    Lemaître, Guillaume; Rastgoo, Mojdeh; Massich, Joan; Vilanova, Joan C.; Walker, Paul M.; Freixenet, Jordi; Meyer-Baese, Anke; Mériaudeau, Fabrice; Martí, Robert

    2016-03-01

    Prostate cancer is reported to be the second most frequently diagnosed cancer in men worldwide. In practice, diagnosis can be affected by multiple factors, which reduces the chance of detecting potential lesions. In recent decades, new imaging techniques, mainly based on MRI, have been developed in conjunction with Computer-Aided Diagnosis (CAD) systems to support radiologists in this diagnosis. CAD systems are usually designed as a sequential process consisting of four stages: pre-processing, segmentation, registration, and classification. As a pre-processing step, image normalization is critical for designing a robust classifier and overcoming inter-patient intensity variations. However, little attention has been dedicated to the normalization of T2W-Magnetic Resonance Imaging (MRI) prostate images. In this paper, we propose two methods to normalize T2W-MRI prostate images: (i) one based on a Rician a priori and (ii) one based on a Square-Root Slope Function (SRSF) representation, which does not make any assumption regarding the Probability Density Function (PDF) of the data. A comparison with state-of-the-art methods is also provided. The normalization of the data is assessed by comparing the alignment of the patient PDFs both qualitatively and quantitatively. In both evaluations, normalization using the Rician a priori outperforms the other state-of-the-art methods.

  12. Functional segregation of the inferior frontal gyrus for syntactic processes: a functional magnetic-resonance imaging study.

    PubMed

    Uchiyama, Yuji; Toyoda, Hiroshi; Honda, Manabu; Yoshida, Haruyo; Kochiyama, Takanori; Ebe, Kazutoshi; Sadato, Norihiro

    2008-07-01

    We used functional magnetic resonance imaging in 18 normal volunteers to determine whether there is separate representation of syntactic, semantic, and verbal working memory processing in the left inferior frontal gyrus (GFi). We compared a sentence comprehension task with a short-term memory maintenance task to identify syntactic and semantic processing regions. To investigate the effects of syntactic and verbal working memory load while minimizing the differences in semantic processes, we used comprehension tasks with garden-path (GP) sentences, which require re-parsing, and non-garden-path (NGP) sentences. Compared with the short-term memory task, sentence comprehension activated the left GFi, including Brodmann areas (BAs) 44, 45, and 47, and the left superior temporal gyrus. In GP versus NGP sentences, there was greater activity in the left BAs 44, 45, and 46 extending to the left anterior insula, the pre-supplementary motor area, and the right cerebellum. In the left GFi, verbal working memory activity was located more dorsally (BA 44/45), semantic processing was located more ventrally (BA 47), and syntactic processing was located in between (BA 45). These findings indicate a close relationship between semantic and syntactic processes, and suggest that BA 45 might link verbal working memory and semantic processing via syntactic unification processes.

  13. Parallel algorithm of real-time infrared image restoration based on total variation theory

    NASA Astrophysics Data System (ADS)

    Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei

    2015-10-01

    Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods remove noise but penalize too heavily the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional, converting restoration into an optimization problem over a functional comprising a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model fully exploits the acquired remote sensing data and preserves information at edges caused by clouds. The numerical implementation is presented in detail, and analysis indicates that the structure of the algorithm can easily be parallelized. Therefore, a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for real-time infrared remote sensing systems. Massive computation on image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm, and a quantitative analysis of restored image quality relative to the input image is presented. Experimental results show that the TV-L1 filter can reasonably restore images with varying backgrounds and that its performance meets the requirements of real-time image processing.
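    The TV energy described above can be minimized, in a simplified serial form, by gradient descent on a smoothed total-variation term; this sketch (with an L2 fidelity term and assumed step/weight parameters) is illustrative and omits the paper's parallel multicore implementation:

    ```python
    import numpy as np

    def tv_denoise(f, lam=0.15, step=0.1, iters=200, eps=1e-6):
        """Gradient descent on the energy  0.5*||u - f||^2 + lam*TV(u),
        using a smoothed total-variation term (illustrative sketch)."""
        u = f.copy()
        for _ in range(iters):
            ux = np.roll(u, -1, axis=1) - u   # forward differences
            uy = np.roll(u, -1, axis=0) - u
            mag = np.sqrt(ux**2 + uy**2 + eps)
            px, py = ux / mag, uy / mag
            # divergence of the normalized gradient field (backward differences)
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            u -= step * ((u - f) - lam * div)
        return u
    ```

    The pixel-wise difference and divergence operations are exactly the kind of data-parallel work the paper distributes across threads on multiple cores.
    
    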

  14. Pc-Based Floating Point Imaging Workstation

    NASA Astrophysics Data System (ADS)

    Guzak, Chris J.; Pier, Richard M.; Chinn, Patty; Kim, Yongmin

    1989-07-01

    The medical, military, scientific, and industrial communities have come to rely on imaging and computer graphics for solutions to many types of problems. Systems based on imaging technology are used to acquire and process images, and to analyze and extract data from images that would otherwise be of little use. Images can be transformed and enhanced to reveal detail and meaning that would go undetected without imaging techniques. The success of imaging has increased the demand for faster and less expensive imaging systems, and as these systems become available, more and more applications are discovered and more demands are made. From the designer's perspective, the challenge of meeting these demands forces an attack on the problem of imaging from a different perspective. The computing demands of imaging algorithms must be balanced against the desire for affordability and flexibility. Systems must be flexible and easy to use, ready for current applications while anticipating new, unthought-of uses. Here at the University of Washington Image Processing Systems Lab (IPSL) we are focusing our attention on imaging and graphics systems that implement imaging algorithms for use in an interactive environment. We have developed a PC-based imaging workstation with the goal of providing powerful and flexible floating-point processing capabilities, along with graphics functions, in an affordable package suitable for diverse environments and many applications.

  15. Regionally adaptive histogram equalization of the chest.

    PubMed

    Sherrier, R H; Johnson, G A

    1987-01-01

    Advances in the area of digital chest radiography have resulted in the acquisition of high-quality images of the human chest. With these advances, there arises a genuine need for image processing algorithms specific to the chest, in order to fully exploit this digital technology. We have implemented the well-known technique of histogram equalization, noting the problems encountered when it is adapted to chest images. These problems have been successfully solved with our regionally adaptive histogram equalization method. With this technique histograms are calculated locally and then modified according to both the mean pixel value of that region as well as certain characteristics of the cumulative distribution function. This process, which has allowed certain regions of the chest radiograph to be enhanced differentially, may also have broader implications for other image processing tasks.
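    A tile-wise variant of histogram equalization conveys the regional idea (the paper's method additionally modifies each local histogram using the region's mean pixel value and characteristics of the cumulative distribution function); the tile count and gray-level range below are assumptions:

    ```python
    import numpy as np

    def tile_hist_equalize(img, tiles=4, levels=256):
        """Tile-wise histogram equalization: each region of the image is
        equalized with its own locally computed histogram (a simplified
        sketch of regionally adaptive equalization)."""
        out = np.empty(img.shape, dtype=float)
        h, w = img.shape
        ys = np.linspace(0, h, tiles + 1, dtype=int)
        xs = np.linspace(0, w, tiles + 1, dtype=int)
        for i in range(tiles):
            for j in range(tiles):
                block = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                hist, _ = np.histogram(block, bins=levels, range=(0, levels))
                cdf = hist.cumsum() / block.size      # local CDF
                idx = np.clip(block.astype(int), 0, levels - 1)
                out[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = cdf[idx] * (levels - 1)
        return out
    ```

    Because each tile uses its own CDF, dark mediastinal regions and bright lung fields of a chest image are enhanced differentially, which is the motivation the abstract gives.
    
    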

  16. Advances in two photon scanning and scanless microscopy technologies for functional neural circuit imaging.

    PubMed

    Schultz, Simon R; Copeland, Caroline S; Foust, Amanda J; Quicke, Peter; Schuck, Renaud

    2017-01-01

    Recent years have seen substantial developments in technology for imaging neural circuits, raising the prospect of large scale imaging studies of neural populations involved in information processing, with the potential to lead to step changes in our understanding of brain function and dysfunction. In this article we will review some key recent advances: improved fluorophores for single cell resolution functional neuroimaging using a two photon microscope; improved approaches to the problem of scanning active circuits; and the prospect of scanless microscopes which overcome some of the bandwidth limitations of current imaging techniques. These advances in technology for experimental neuroscience have in themselves led to technical challenges, such as the need for the development of novel signal processing and data analysis tools in order to make the most of the new experimental tools. We review recent work in some active topics, such as region of interest segmentation algorithms capable of demixing overlapping signals, and new highly accurate algorithms for calcium transient detection. These advances motivate the development of new data analysis tools capable of dealing with spatial or spatiotemporal patterns of neural activity, that scale well with pattern size.

  17. Advances in two photon scanning and scanless microscopy technologies for functional neural circuit imaging

    PubMed Central

    Schultz, Simon R.; Copeland, Caroline S.; Foust, Amanda J.; Quicke, Peter; Schuck, Renaud

    2017-01-01

    Recent years have seen substantial developments in technology for imaging neural circuits, raising the prospect of large scale imaging studies of neural populations involved in information processing, with the potential to lead to step changes in our understanding of brain function and dysfunction. In this article we will review some key recent advances: improved fluorophores for single cell resolution functional neuroimaging using a two photon microscope; improved approaches to the problem of scanning active circuits; and the prospect of scanless microscopes which overcome some of the bandwidth limitations of current imaging techniques. These advances in technology for experimental neuroscience have in themselves led to technical challenges, such as the need for the development of novel signal processing and data analysis tools in order to make the most of the new experimental tools. We review recent work in some active topics, such as region of interest segmentation algorithms capable of demixing overlapping signals, and new highly accurate algorithms for calcium transient detection. These advances motivate the development of new data analysis tools capable of dealing with spatial or spatiotemporal patterns of neural activity, that scale well with pattern size. PMID:28757657

  18. Analyzing Structure and Function of Vascularization in Engineered Bone Tissue by Video-Rate Intravital Microscopy and 3D Image Processing.

    PubMed

    Pang, Yonggang; Tsigkou, Olga; Spencer, Joel A; Lin, Charles P; Neville, Craig; Grottkau, Brian

    2015-10-01

    Vascularization is a key challenge in tissue engineering. Three-dimensional structure and microcirculation are two fundamental parameters for evaluating vascularization. Microscopic techniques with cellular-level resolution, fast continuous observation, and robust 3D post-image processing are essential for evaluation, but have not previously been applied because of technical difficulties. In this study, we report novel video-rate confocal microscopy and 3D post-image processing techniques to accomplish this goal. In an immune-deficient mouse model, vascularized bone tissue was successfully engineered using human bone marrow mesenchymal stem cells (hMSCs) and human umbilical vein endothelial cells (HUVECs) in a poly(D,L-lactide-co-glycolide) (PLGA) scaffold. Video-rate (30 FPS) intravital confocal microscopy was applied in vitro and in vivo to visualize the vascular structure in the engineered bone and the microcirculation of the blood cells. Post-image processing was applied to perform 3D image reconstruction, analyze microvascular networks, and calculate blood cell viscosity. The 3D volume-reconstructed images show that the hMSCs served as pericytes stabilizing the microvascular network formed by the HUVECs. Using orthogonal imaging reconstruction and transparency adjustment, both the vessel structure and the blood cells within the vessel lumen were visualized. Network length, network intersections, and intersection densities were computed using our custom-developed software. Viscosity analysis of the blood cells provided a functional evaluation of the microcirculation. These results show that by 8 weeks, the blood vessels in peripheral areas function quite similarly to the host vessels; however, the viscosity drops about fourfold at sites only 0.8 mm away from the host. In summary, we developed novel techniques combining intravital microscopy and 3D image processing to analyze vascularization in engineered bone. These techniques have broad applicability for evaluating vascularization in other engineered tissues as well.

  19. Optimization of wavefront coding imaging system using heuristic algorithms

    NASA Astrophysics Data System (ADS)

    González-Amador, E.; Padilla-Vivanco, A.; Toxqui-Quitl, C.; Zermeño-Loreto, O.

    2017-08-01

    Wavefront Coding (WFC) systems make use of an aspheric Phase Mask (PM) and digital image processing to extend the Depth of Field (EDoF) of computational imaging systems. For years, several kinds of PM have been designed to produce a point spread function (PSF) that is nearly defocus-invariant. In this paper, the phase deviation parameter is optimized by means of genetic algorithms (GAs). The merit function minimizes the mean square error (MSE) between the diffraction-limited Modulation Transfer Function (MTF) and the MTF of the wavefront-coded system at different amounts of misfocus. WFC systems were simulated using the cubic, trefoil, and 4 Zernike-polynomial phase masks. Numerical results show near defocus-invariant behavior in all cases. Nevertheless, the best results are obtained using the trefoil phase mask, because its decoded image is almost free of artifacts.
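    The merit function can be sketched numerically: compute the MTF of a pupil carrying a cubic phase mask plus defocus, and measure the MSE against the diffraction-limited MTF. The square-pupil sampling, phase strengths, and defocus values below are assumptions; a GA would search over `alpha`:

    ```python
    import numpy as np

    def wfc_mtf(alpha, w20, n=64):
        """MTF of a square pupil carrying a cubic phase mask
        alpha*(x^3 + y^3) plus defocus w20*(x^2 + y^2), both in waves."""
        x = np.linspace(-1, 1, n)
        X, Y = np.meshgrid(x, x)
        pupil = np.exp(1j * 2 * np.pi * (alpha * (X**3 + Y**3) + w20 * (X**2 + Y**2)))
        psf = np.abs(np.fft.fft2(pupil, s=(2 * n, 2 * n))) ** 2
        mtf = np.abs(np.fft.fft2(psf))
        return mtf / mtf.flat[0]          # normalize so DC = 1

    def merit(alpha, defocus_list=(0.0, 1.0, 2.0)):
        """MSE between each defocused MTF and the in-focus diffraction-limited
        MTF -- the quantity a GA-style optimizer would minimize."""
        ref = wfc_mtf(0.0, 0.0)
        return np.mean([np.mean((wfc_mtf(alpha, w) - ref) ** 2)
                        for w in defocus_list])
    ```

    A cubic mask trades peak MTF for defocus invariance: the coded MTF barely changes between in-focus and defocused cases, whereas the clear pupil's MTF collapses.
    
    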

  20. Functional magnetic resonance imaging in oncology: state of the art*

    PubMed Central

    Guimaraes, Marcos Duarte; Schuch, Alice; Hochhegger, Bruno; Gross, Jefferson Luiz; Chojniak, Rubens; Marchiori, Edson

    2014-01-01

    In the investigation of tumors with conventional magnetic resonance imaging, both quantitative characteristics, such as size, edema, necrosis, and presence of metastases, and qualitative characteristics, such as contrast enhancement degree, are taken into consideration. However, changes in cell metabolism and tissue physiology which precede morphological changes cannot be detected by the conventional technique. The development of new magnetic resonance imaging techniques has enabled the functional assessment of the structures in order to obtain information on the different physiological processes of the tumor microenvironment, such as oxygenation levels, cellularity and vascularity. The detailed morphological study in association with the new functional imaging techniques allows for an appropriate approach to cancer patients, including the phases of diagnosis, staging, response evaluation and follow-up, with a positive impact on their quality of life and survival rate. PMID:25741058

  1. Apparatus and Method for Assessing Vestibulo-Ocular Function

    NASA Technical Reports Server (NTRS)

    Shelhamer, Mark J. (Inventor)

    2015-01-01

    A system for assessing vestibulo-ocular function includes a motion sensor system adapted to be coupled to a user's head; a data processing system configured to communicate with the motion sensor system to receive the head-motion signals; a visual display system configured to communicate with the data processing system to receive image signals from the data processing system; and a gain control device arranged to be operated by the user and to communicate gain adjustment signals to the data processing system.

  2. Robust watermark technique using masking and Hermite transform.

    PubMed

    Coronel, Sandra L Gomez; Ramírez, Boris Escalante; Mosqueda, Marco A Acevedo

    2016-01-01

    This paper evaluates a watermarking algorithm designed for digital images that uses a perceptive mask and a normalization process, preventing detection by the human eye while ensuring robustness against common processing and geometric attacks. The Hermite transform is employed because it allows perfect reconstruction of the image while incorporating properties of the human visual system; moreover, it is based on derivatives of Gaussian functions. The applied watermark represents information about the digital image's proprietor. The extraction process is blind, because it does not require the original image. The following metrics were utilized in the evaluation of the algorithm: peak signal-to-noise ratio, the structural similarity index average, the normalized cross-correlation, and the bit error rate. Several watermark extraction tests were performed against geometric and common processing attacks. They allowed us to identify how many bits in the watermark can be modified while still permitting adequate extraction.
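    Two of the evaluation metrics named above, PSNR and bit error rate, are standard and can be computed as follows (the array inputs are hypothetical):

    ```python
    import numpy as np

    def psnr(original, processed, peak=255.0):
        """Peak signal-to-noise ratio in dB between two images."""
        mse = np.mean((np.asarray(original, float) - np.asarray(processed, float)) ** 2)
        return np.inf if mse == 0 else 10 * np.log10(peak**2 / mse)

    def bit_error_rate(w_in, w_out):
        """Fraction of watermark bits flipped by an attack."""
        w_in, w_out = np.asarray(w_in), np.asarray(w_out)
        return float(np.mean(w_in != w_out))
    ```

    PSNR quantifies how invisibly the watermark was embedded; BER quantifies how well it survives an attack.
    
    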

  3. Graph theory for feature extraction and classification: a migraine pathology case study.

    PubMed

    Jorge-Hernandez, Fernando; Garcia Chimeno, Yolanda; Garcia-Zapirain, Begonya; Cabrera Zubizarreta, Alberto; Gomez Beldarrain, Maria Angeles; Fernandez-Ruanova, Begonya

    2014-01-01

    Graph theory is widely used as a representational form for characterizing brain connectivity networks, as is machine learning for classifying groups based on features extracted from images. Many of these studies use different techniques, such as preprocessing, correlations, features, or algorithms. This paper proposes an automatic tool that performs a standard process on images from a Magnetic Resonance Imaging (MRI) machine. The process includes pre-processing, building a graph per subject with different correlations and atlases, extracting relevant features according to the literature, and finally providing a set of machine learning algorithms that can produce analyzable results for physicians or specialists. In order to verify the process, a set of images from prescription drug abusers and patients with migraine was used. In this way, the proper functioning of the tool was demonstrated, yielding success rates of 87% and 92% depending on the classifier used.

  4. TANGO: a generic tool for high-throughput 3D image analysis for studying nuclear organization.

    PubMed

    Ollion, Jean; Cochennec, Julien; Loll, François; Escudé, Christophe; Boudier, Thomas

    2013-07-15

    The cell nucleus is a highly organized cellular organelle that contains the genetic material. The study of nuclear architecture has become an important field of cellular biology. Extracting quantitative data from 3D fluorescence imaging helps understand the functions of different nuclear compartments. However, such approaches are limited by the requirement for processing and analyzing large sets of images. Here, we describe Tools for Analysis of Nuclear Genome Organization (TANGO), an image analysis tool dedicated to the study of nuclear architecture. TANGO is a coherent framework allowing biologists to perform the complete analysis process of 3D fluorescence images by combining two environments: ImageJ (http://imagej.nih.gov/ij/) for image processing and quantitative analysis and R (http://cran.r-project.org) for statistical processing of measurement results. It includes an intuitive user interface providing the means to precisely build a segmentation procedure and set-up analyses, without possessing programming skills. TANGO is a versatile tool able to process large sets of images, allowing quantitative study of nuclear organization. TANGO is composed of two programs: (i) an ImageJ plug-in and (ii) a package (rtango) for R. They are both free and open source, available (http://biophysique.mnhn.fr/tango) for Linux, Microsoft Windows and Macintosh OSX. Distribution is under the GPL v.2 licence. thomas.boudier@snv.jussieu.fr Supplementary data are available at Bioinformatics online.

  5. Particle Morphology Analysis of Biomass Material Based on Improved Image Processing Method

    PubMed Central

    Lu, Zhaolin

    2017-01-01

    Particle morphology, including size and shape, is an important factor that significantly influences the physical and chemical properties of biomass material. Based on image processing technology, a method was developed to process sample images, measure particle dimensions, and analyse the particle size and shape distributions of knife-milled wheat straw, which had been preclassified into five nominal size groups using mechanical sieving approach. Considering the great variation of particle size from micrometer to millimeter, the powders greater than 250 μm were photographed by a flatbed scanner without zoom function, and the others were photographed using a scanning electron microscopy (SEM) with high-image resolution. Actual imaging tests confirmed the excellent effect of backscattered electron (BSE) imaging mode of SEM. Particle aggregation is an important factor that affects the recognition accuracy of the image processing method. In sample preparation, the singulated arrangement and ultrasonic dispersion methods were used to separate powders into particles that were larger and smaller than the nominal size of 250 μm. In addition, an image segmentation algorithm based on particle geometrical information was proposed to recognise the finer clustered powders. Experimental results demonstrated that the improved image processing method was suitable to analyse the particle size and shape distributions of ground biomass materials and solve the size inconsistencies in sieving analysis. PMID:28298925
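    As a sketch of the particle dimensions measured in such an analysis, simple size and shape descriptors can be computed from one binary particle mask; the 4-connected edge-counting perimeter estimate is a crude assumption, not the paper's measurement procedure:

    ```python
    import numpy as np

    def shape_descriptors(mask):
        """Size/shape measures for one binary particle mask: area (pixels),
        bounding-box aspect ratio, and circularity 4*pi*A / P^2 with a
        crude 4-connected perimeter estimate."""
        ys, xs = np.nonzero(mask)
        area = int(mask.sum())
        height = ys.ptp() + 1
        width = xs.ptp() + 1
        aspect = max(height, width) / min(height, width)
        # Perimeter: count pixel edges exposed to the background.
        padded = np.pad(mask, 1)
        perim = 0
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            perim += int(np.logical_and(padded, ~np.roll(padded, shift, axis=axis)).sum())
        circularity = 4 * np.pi * area / perim**2
        return {"area": area, "aspect": float(aspect), "circularity": float(circularity)}
    ```

    Descriptors like these, computed per segmented particle, yield the size and shape distributions the paper compares against sieving analysis.
    
    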

  6. A double-sided microscope to realize whole-ganglion imaging of membrane potential in the medicinal leech

    PubMed Central

    Wagenaar, Daniel A

    2017-01-01

    Studies of neuronal network emergence during sensory processing and motor control are greatly facilitated by technologies that allow us to simultaneously record the membrane potential dynamics of a large population of neurons at single-cell resolution. To achieve whole-brain recording with the ability to detect both small synaptic potentials and action potentials, we developed a voltage-sensitive dye (VSD) imaging technique based on a double-sided microscope that can image two sides of a nervous system simultaneously. We applied this system to the segmental ganglia of the medicinal leech. Double-sided VSD imaging enabled simultaneous recording of membrane potential events from almost all of the identifiable neurons. Using data obtained from double-sided VSD imaging, we analyzed neuronal dynamics in both sensory processing and generation of behavior and constructed functional maps for identification of neurons contributing to these processes. PMID:28944754

  7. A learning tool for optical and microwave satellite image processing and analysis

    NASA Astrophysics Data System (ADS)

    Dashondhi, Gaurav K.; Mohanty, Jyotirmoy; Eeti, Laxmi N.; Bhattacharya, Avik; De, Shaunak; Buddhiraju, Krishna M.

    2016-04-01

    This paper presents a self-learning tool containing a number of virtual experiments for the processing and analysis of Optical/Infrared and Synthetic Aperture Radar (SAR) images. The tool is named Virtual Satellite Image Processing and Analysis Lab (v-SIPLAB). Experiments included in the learning tool relate to: Optical/Infrared - image and edge enhancement, smoothing, PCT, vegetation indices, mathematical morphology, accuracy assessment, supervised/unsupervised classification, etc.; Basic SAR - parameter extraction and range spectrum estimation, range compression, Doppler centroid estimation, azimuth reference function generation and compression, multilooking, image enhancement, texture analysis, edge detection, etc.; SAR Interferometry - baseline calculation, extraction of single-look SAR images, registration, resampling, and interferogram generation; SAR Polarimetry - conversion of AirSAR or Radarsat data to S2/C3/T3 matrices, speckle filtering, power/intensity image generation, decomposition of S2/C3/T3, and classification of S2/C3/T3 using the Wishart classifier [3]. A professional-quality polarimetric SAR software package can be found at [8], part of whose functionality is available in our system. The learning tool also contains other modules besides the executable software experiments, such as aim, theory, procedure, interpretation, quizzes, links to additional reading material, and user feedback. Students can gain an understanding of optical and SAR remotely sensed images through discussion of basic principles, supported by structured procedures for running and interpreting the experiments. Quizzes for self-assessment and a provision for online feedback are also provided to make the learning tool self-contained. Users can download results after performing experiments.
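    One of the optical experiments listed, a vegetation index, can be illustrated with NDVI, computed band-wise as (NIR − R)/(NIR + R); the small `eps` guards against division by zero and is an implementation assumption:

    ```python
    import numpy as np

    def ndvi(nir, red, eps=1e-9):
        """Normalized Difference Vegetation Index from near-infrared and
        red reflectance bands: (NIR - R) / (NIR + R)."""
        nir = np.asarray(nir, dtype=float)
        red = np.asarray(red, dtype=float)
        return (nir - red) / (nir + red + eps)
    ```

    Healthy vegetation reflects strongly in the NIR band and absorbs red light, so NDVI values near 1 indicate dense vegetation while bare soil and water sit near or below 0.
    
    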

  8. Real-time image processing of TOF range images using a reconfigurable processor system

    NASA Astrophysics Data System (ADS)

    Hussmann, S.; Knoll, F.; Edeler, T.

    2011-07-01

    In recent years, Time-of-Flight sensors have had a significant impact on research in machine vision. In comparison to stereo vision systems and laser range scanners, they combine the advantages of active sensors, which provide accurate distance measurements, and camera-based systems, which record a 2D matrix at a high frame rate. Moreover, low-cost 3D imaging has the potential to open a wide field of additional applications and solutions in markets like consumer electronics, multimedia, digital photography, robotics, and medical technologies. This paper focuses on the 4-phase-shift algorithm currently implemented in this type of sensor. The most time-critical operation of the phase-shift algorithm is the arctangent function. In this paper a novel hardware implementation of the arctangent function using a reconfigurable processor system is presented and benchmarked against the state-of-the-art CORDIC arctangent algorithm. Experimental results show that the proposed algorithm is well suited for real-time processing of the range images of TOF cameras.
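    The 4-phase-shift computation at the heart of such a sensor can be sketched as follows; the sampling convention (samples taken at 90° phase offsets) and the 20 MHz modulation frequency are assumptions, since conventions differ between sensors. The arctangent here is the operation the paper accelerates in hardware:

    ```python
    import math

    C = 299_792_458.0  # speed of light, m/s

    def tof_distance(a0, a1, a2, a3, f_mod=20e6):
        """Distance from the four phase samples of a continuous-wave TOF
        pixel (one common sampling convention; real sensors differ)."""
        phase = math.atan2(a3 - a1, a0 - a2)   # the time-critical arctangent
        if phase < 0:
            phase += 2 * math.pi               # wrap into [0, 2*pi)
        return C * phase / (4 * math.pi * f_mod)
    ```

    Because the arctangent must run once per pixel per frame, replacing it with a CORDIC or lookup-based hardware unit is what makes real-time range imaging feasible.
    
    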

  9. White blood cell segmentation by circle detection using electromagnetism-like optimization.

    PubMed

    Cuevas, Erik; Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo

    2013-01-01

    Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability.
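    The objective function measuring a candidate circle's resemblance to the edge map can be sketched as the fraction of sampled circle points that land on edge pixels; the EMO search itself is omitted here, and the sampling density is an assumption:

    ```python
    import numpy as np

    def circle_fitness(edge_map, cx, cy, r, n_points=60):
        """Resemblance of a candidate circle (cx, cy, r) to a boolean edge
        map: the fraction of sampled perimeter points landing on edge
        pixels (the objective an EMO-style optimizer would maximize)."""
        t = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
        xs = np.round(cx + r * np.cos(t)).astype(int)
        ys = np.round(cy + r * np.sin(t)).astype(int)
        h, w = edge_map.shape
        inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
        hits = edge_map[ys[inside], xs[inside]].sum()  # out-of-bounds = miss
        return hits / n_points
    ```

    An optimizer evolves the (cx, cy, r) triples so that candidates with high fitness settle onto the quasicircular WBC contours in the edge map.
    
    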

  10. White Blood Cell Segmentation by Circle Detection Using Electromagnetism-Like Optimization

    PubMed Central

    Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo

    2013-01-01

    Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability. PMID:23476713

  11. Translation-aware semantic segmentation via conditional least-square generative adversarial networks

    NASA Astrophysics Data System (ADS)

    Zhang, Mi; Hu, Xiangyun; Zhao, Like; Pang, Shiyan; Gong, Jinqi; Luo, Min

    2017-10-01

    Semantic segmentation has recently made rapid progress in the fields of remote sensing and computer vision. However, many leading approaches cannot simultaneously translate label maps into plausible source images when only a limited number of training images is available. The core issues are insufficient adversarial information to interpret the inverse process and the lack of a proper objective loss function to overcome the vanishing-gradient problem. We propose the use of conditional least-squares generative adversarial networks (CLS-GAN) to delineate visual objects and solve these problems. We trained the CLS-GAN network for semantic segmentation to discriminate dense prediction information coming either from training images or from generative networks. We show that the optimal objective function of CLS-GAN is a special class of f-divergence and yields a generator that lies on the decision boundary of the discriminator, which mitigates the vanishing gradient. We also demonstrate the effectiveness of the proposed architecture at translating images from label maps during learning. Experiments on a limited number of high-resolution images, including close-range and remote sensing datasets, indicate that the proposed method leads to improved semantic segmentation accuracy and can simultaneously generate high-quality images from label maps.
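    The least-squares adversarial objectives underlying CLS-GAN can be written compactly; the sketch below shows the plain (unconditional) LSGAN losses on hypothetical discriminator scores, not the authors' full conditional network:

    ```python
    import numpy as np

    def lsgan_d_loss(d_real, d_fake):
        """Least-squares discriminator loss: push scores on real samples
        toward 1 and scores on generated samples toward 0."""
        return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

    def lsgan_g_loss(d_fake):
        """Least-squares generator loss: push scores on generated samples
        toward 1, penalizing them by squared distance to the target."""
        return 0.5 * np.mean((d_fake - 1.0) ** 2)
    ```

    Unlike the sigmoid cross-entropy loss, these quadratic penalties keep gradients alive for samples far from the decision boundary, which is the vanishing-gradient mitigation the abstract refers to.
    
    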

  12. Intravital imaging of cardiac function at the single-cell level.

    PubMed

    Aguirre, Aaron D; Vinegoni, Claudio; Sebas, Matt; Weissleder, Ralph

    2014-08-05

    Knowledge of cardiomyocyte biology is limited by the lack of methods to interrogate single-cell physiology in vivo. Here we show that contracting myocytes can indeed be imaged with optical microscopy at high temporal and spatial resolution in the beating murine heart, allowing visualization of individual sarcomeres and measurement of the single cardiomyocyte contractile cycle. Collectively, this has been enabled by efficient tissue stabilization, a prospective real-time cardiac gating approach, an image processing algorithm for motion-artifact-free imaging throughout the cardiac cycle, and a fluorescent membrane staining protocol. Quantification of cardiomyocyte contractile function in vivo opens many possibilities for investigating myocardial disease and therapeutic intervention at the cellular level.

  13. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    PubMed

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique used in the preprocessing stages of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they exhaustively search for the thresholds that optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values maximizing Otsu's objective function on eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves robust and effective, as shown by numerical results including Otsu's objective values and standard deviations.
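    Otsu's objective for multilevel thresholding, the between-class variance that the flower pollination search maximizes, can be sketched directly from the image histogram; the search itself is omitted:

    ```python
    import numpy as np

    def between_class_variance(hist, thresholds):
        """Otsu's multilevel objective: the weighted between-class variance
        of the gray-level classes induced by the given thresholds."""
        p = hist / hist.sum()                     # normalized histogram
        levels = np.arange(len(hist))
        mu_total = (p * levels).sum()             # global mean gray level
        edges = [0, *sorted(thresholds), len(hist)]
        var = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            w = p[lo:hi].sum()                    # class probability
            if w > 0:
                mu = (p[lo:hi] * levels[lo:hi]).sum() / w
                var += w * (mu - mu_total) ** 2
        return var
    ```

    Evaluating this for every threshold combination is what makes exhaustive search expensive; a metaheuristic instead samples candidate threshold vectors and keeps the best-scoring ones.
    
    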

  14. Anthropometric body measurements based on multi-view stereo image reconstruction.

    PubMed

    Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui

    2013-01-01

    Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, waist-to-hip ratio, and body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home based imaging system capable of conducting anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system.

  15. Anthropometric Body Measurements Based on Multi-View Stereo Image Reconstruction*

    PubMed Central

    Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui

    2013-01-01

    Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, waist-to-hip ratio, and body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home-based imaging system capable of conducting automatic anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results, using both a mannequin surrogate and a real human body, validate the feasibility of the proposed system. PMID:24109700

  16. Structural and Functional Biomedical Imaging Using Polarization-Based Optical Coherence Tomography

    NASA Astrophysics Data System (ADS)

    Black, Adam J.

    Biomedical imaging has had an enormous impact on medicine and research. Numerous imaging modalities cover a wide range of spatial and temporal scales and penetration depths, along with indicators of function and disease. As these imaging technologies mature, the quality of the images they produce increases, resolving finer details with greater contrast at higher speeds, which aids faster and more accurate diagnosis in the clinic. In this dissertation, polarization-based optical coherence tomography (OCT) systems are developed and used to image biological structure and function with greater speed, signal-to-noise ratio (SNR), and stability. OCT can image with spatial resolution on the micrometer scale and high temporal resolution. When imaging any sample, feedback is important to verify the fidelity and the desired location of the region being imaged. To increase frame rates for display as well as data throughput, field-programmable gate arrays (FPGAs) with custom algorithms were used to realize real-time display and streaming output for continuous acquisition of large datasets from swept-source OCT systems. For spectral-domain (SD) OCT systems, significant increases in signal-to-noise ratio were achieved with a custom balanced-detection (BD) OCT system; the BD system doubled the measured signal while rejecting common-mode terms. For functional imaging, a real-time directed scanner was introduced to visualize the 3D image of a sample and identify regions of interest prior to recording. Elucidating the characteristics of functional OCT signals with the aid of simulations, novel processing methods were also developed to stabilize samples being imaged and to identify possible origins of the functional signals being measured. Polarization-sensitive OCT was used to image cardiac tissue before and after clearing to identify the regions of vascular perfusion from a coronary artery. The resulting 3D image visualizes the perfusion boundaries of the tissue that would be damaged by a myocardial infarction, to possibly identify features that lead to fatal cardiac arrhythmias. 3D functional imaging was used to measure functional retinal activity in response to a light stimulus. In some cases, single-trial responses, measured at the outer segment of the photoreceptor layer, were possible. The morphology and time course of these signals are similar to the intrinsic optical signals reported from phototransduction. Assessing function in the retina could aid early detection of degenerative retinal diseases such as glaucoma and macular degeneration.

  17. Minimum risk wavelet shrinkage operator for Poisson image denoising.

    PubMed

    Cheng, Wu; Hirakawa, Keigo

    2015-05-01

    The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients--the modeling of coefficients is enabled by the Skellam distribution analysis. We extend these results by solving for shrinkage operators for the Skellam distribution that minimize the risk functional in the multiscale Poisson image denoising setting. The minimum risk shrinkage operator of this kind effectively produces denoised wavelet coefficients with minimum attainable L2 error.
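    For illustration of the multiscale shrinkage setting only: a single-level Haar transform with generic soft shrinkage of the detail coefficients. This stands in for the paper's Skellam-based minimum-risk operator, whose derivation is not reproduced here.

```python
import numpy as np

def haar_shrink_denoise(signal, threshold):
    """One level of an (unnormalized) Haar transform with soft
    shrinkage of the detail coefficients.

    Expects an even-length 1D signal. The soft threshold is a generic
    shrinkage rule, not the Skellam minimum-risk operator."""
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / 2.0          # scaling coefficients
    detail = (x[0::2] - x[1::2]) / 2.0          # wavelet coefficients
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0)
    out = np.empty_like(x)                      # inverse Haar transform
    out[0::2] = approx + detail
    out[1::2] = approx - detail
    return out
```

    Small detail coefficients (mostly noise) are shrunk to zero while the local averages, and hence the photon-count level that determines the Poisson statistics, are preserved.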

  18. Architectures for single-chip image computing

    NASA Astrophysics Data System (ADS)

    Gove, Robert J.

    1992-04-01

    This paper will focus on the architectures of VLSI programmable processing components for image computing applications. TI, the maker of industry-leading RISC, DSP, and graphics components, has developed an architecture for a new generation of image processors capable of implementing a plurality of image, graphics, video, and audio computing functions. We will show that a single-chip heterogeneous MIMD parallel architecture best suits this class of processors--those which will dominate the desktop multimedia, document imaging, computer graphics, and visualization systems of this decade.

  19. A new level set model for cell image segmentation

    NASA Astrophysics Data System (ADS)

    Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun

    2011-02-01

    In this paper we first identify three phases of cell images: background, cytoplasm, and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model based on these characteristics to segment the nucleolus and cytoplasm from their relatively complicated backgrounds. Meanwhile, information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and yields satisfactory results in cell image processing.
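    A minimal sketch of that initialization step, assuming the Otsu threshold seeds a binary-step level-set function phi (the constant c and the histogram binning below are illustrative choices, not the authors'):

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Single Otsu threshold: maximizes between-class variance."""
    hist, edges = np.histogram(image, bins=nbins)
    p = hist / hist.sum()
    levels = (edges[:-1] + edges[1:]) / 2       # bin centers
    w0 = np.cumsum(p)                           # class-0 probability
    mu = np.cumsum(p * levels)                  # cumulative mean
    mu_t = mu[-1]                               # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return levels[np.nanargmax(sigma_b)]

def init_level_set(image, c=2.0):
    """Binary-step initialization of the level-set function phi:
    +c inside the bright phase, -c outside."""
    t = otsu_threshold(image)
    return np.where(image > t, c, -c)
```

    Starting the curve evolution from a data-driven partition like this, rather than an arbitrary contour, is what lets the level-set iteration converge in fewer steps.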

  20. Imaging model for the scintillator and its application to digital radiography image enhancement.

    PubMed

    Wang, Qian; Zhu, Yining; Li, Hongwei

    2015-12-28

    Digital radiography (DR) images obtained with an OCD-based (optical coupling detector) micro-CT system usually suffer from low contrast. In this paper, a mathematical model is proposed to describe the image formation process in the scintillator. By solving the corresponding inverse problem, the quality of DR images is improved, i.e., higher contrast and spatial resolution. By analyzing the radiative transfer of visible light in the scintillator, scattering is identified as the main factor leading to low contrast. Moreover, the associated blurring effect is modeled as a point spread function (PSF). Based on these physical processes, the scintillator imaging model is established. To solve the inverse problem, pre-correction of the x-ray intensity, a dark-channel-prior-based haze removal technique, and an effective blind deblurring approach are employed. Experiments on a variety of DR images show that the proposed approach can dramatically improve the contrast of DR images as well as effectively reduce blurring. Compared with traditional contrast enhancement methods, such as CLAHE, our method preserves the relative absorption values well.
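    The paper uses a blind deblurring approach; as a simpler, hedged illustration of the PSF-based deblurring idea, the sketch below applies non-blind Wiener deconvolution with a known PSF:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Non-blind Wiener deconvolution in the Fourier domain, assuming
    the PSF is known. The regularization constant k trades noise
    amplification against sharpness. Only an illustration: the paper
    itself estimates the PSF blindly."""
    H = np.fft.fft2(psf, s=blurred.shape)       # zero-padded PSF spectrum
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)       # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

    With k near zero this reduces to inverse filtering, which recovers a noiselessly blurred image almost exactly; in practice k must be raised to keep detector noise from being amplified at frequencies where the PSF spectrum is small.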

  1. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function, because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of the star spot in the image plane, and the Gaussian radius. Furthermore, an analytical expression for the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on this expression. Moreover, analytical solutions for the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
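    The centroiding step whose error the paper analyzes is typically the center-of-mass estimator over a star-spot window; a minimal sketch, assuming a background-subtracted intensity window:

```python
import numpy as np

def centroid(spot):
    """Center-of-mass centroid of a star-spot intensity window.

    Returns (row, col) in pixel coordinates. Assumes the background
    has already been subtracted, so residual intensity is star flux."""
    spot = np.asarray(spot, dtype=float)
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    return (ys * spot).sum() / total, (xs * spot).sum() / total
```

    Under dynamic conditions the spot energy spreads along the motion direction, so the same estimator is applied to a smeared distribution, which is exactly where the paper's analytical error expression applies.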

  2. A fast image registration approach of neural activities in light-sheet fluorescence microscopy images

    NASA Astrophysics Data System (ADS)

    Meng, Hui; Hui, Hui; Hu, Chaoen; Yang, Xin; Tian, Jie

    2017-03-01

    The ability to image neural activities quickly and at single-neuron resolution makes light-sheet fluorescence microscopy (LSFM) a powerful technique for studying functional neural connections. State-of-the-art LSFM imaging systems can record the neuronal activity of the entire brain of small animals, such as zebrafish or C. elegans, at single-neuron resolution. However, stimulated and spontaneous movements of the animal's brain result in inconsistent neuron positions during the recording process, and registering the acquired large-scale images with conventional methods is time consuming. In this work, we address the problem of fast registration of neural positions in stacks of LSFM images, which is necessary to register brain structures and activities. To achieve fast registration of neural activities, we present a rigid registration architecture implemented on a graphics processing unit (GPU). In this approach, the image stacks were preprocessed on the GPU by mean stretching to reduce the computational effort. Each incoming image was registered to the previous image stack, which served as the reference, and a fast Fourier transform (FFT) algorithm was used to calculate the shift of the image stack. The registration calculations were performed in different threads, while the preparation functionality was refactored and called only once by the master thread. We implemented our registration algorithm on an NVIDIA Quadro K4200 GPU under the Compute Unified Device Architecture (CUDA) programming environment. The experimental results showed that registration can be completed in 550 ms for a full high-resolution brain image. Our approach also has potential for other dynamic image registration tasks in biomedical applications.
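    The FFT-based shift calculation described above is commonly implemented as phase correlation; a minimal CPU sketch (NumPy rather than CUDA, integer-pixel shifts only, and not the authors' code):

```python
import numpy as np

def fft_shift_estimate(reference, moved):
    """Estimate the integer-pixel translation taking `reference` to
    `moved` via phase correlation."""
    F1 = np.fft.fft2(reference)
    F2 = np.fft.fft2(moved)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12              # normalized cross-power spectrum
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map the (circular) peak position to signed shifts
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))
```

    Because the heavy work is a handful of FFTs and an element-wise product, the algorithm maps naturally onto GPU batch FFT libraries, which is what makes the sub-second stack registration reported above plausible.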

  3. Functional connectivity in autosomal dominant and late-onset Alzheimer disease.

    PubMed

    Thomas, Jewell B; Brier, Matthew R; Bateman, Randall J; Snyder, Abraham Z; Benzinger, Tammie L; Xiong, Chengjie; Raichle, Marcus; Holtzman, David M; Sperling, Reisa A; Mayeux, Richard; Ghetti, Bernardino; Ringman, John M; Salloway, Stephen; McDade, Eric; Rossor, Martin N; Ourselin, Sebastien; Schofield, Peter R; Masters, Colin L; Martins, Ralph N; Weiner, Michael W; Thompson, Paul M; Fox, Nick C; Koeppe, Robert A; Jack, Clifford R; Mathis, Chester A; Oliver, Angela; Blazey, Tyler M; Moulder, Krista; Buckles, Virginia; Hornbeck, Russ; Chhatwal, Jasmeer; Schultz, Aaron P; Goate, Alison M; Fagan, Anne M; Cairns, Nigel J; Marcus, Daniel S; Morris, John C; Ances, Beau M

    2014-09-01

    Autosomal dominant Alzheimer disease (ADAD) is caused by rare genetic mutations in 3 specific genes, in contrast to late-onset Alzheimer disease (LOAD), which has a more polygenetic risk profile. To assess the similarities and differences in functional connectivity changes owing to ADAD and LOAD, we analyzed functional connectivity in multiple brain resting state networks (RSNs) in a cross-sectional cohort of participants with ADAD (n = 79) and LOAD (n = 444), using resting-state functional connectivity magnetic resonance imaging at multiple international academic sites. For both types of AD, we quantified and compared functional connectivity changes in RSNs as a function of dementia severity measured by the Clinical Dementia Rating Scale. In ADAD, we qualitatively investigated functional connectivity changes with respect to estimated years from onset of symptoms within 5 RSNs. A decrease in functional connectivity with increasing Clinical Dementia Rating scores was similar for both LOAD and ADAD in multiple RSNs. Ordinal logistic regression models constructed in one type of Alzheimer disease accurately predicted Clinical Dementia Rating scores in the other, further demonstrating the similarity of functional connectivity loss in each disease type. Among participants with ADAD, functional connectivity in multiple RSNs appeared qualitatively lower in asymptomatic mutation carriers near their anticipated age of symptom onset compared with asymptomatic mutation noncarriers. Resting-state functional connectivity magnetic resonance imaging changes with progressing AD severity are similar between ADAD and LOAD. Resting-state functional connectivity magnetic resonance imaging may be a useful end point for LOAD and ADAD therapy trials. Moreover, the disease process of ADAD may be an effective model for the LOAD disease process.
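    Functional connectivity within an RSN is commonly summarized as the mean pairwise Pearson correlation between regional BOLD time series; a minimal sketch of that generic summary (not the study's actual processing pipeline):

```python
import numpy as np

def within_network_fc(timeseries):
    """Mean off-diagonal Pearson correlation across a (regions x
    timepoints) array: a simple scalar summary of connectivity
    within one resting-state network."""
    c = np.corrcoef(np.asarray(timeseries, dtype=float))
    n = c.shape[0]
    # subtract the diagonal of ones, then average the off-diagonal terms
    return (c.sum() - n) / (n * (n - 1))
```

    A summary like this, computed per network and per participant, is the kind of quantity that can then be regressed against Clinical Dementia Rating scores.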

  4. Functional Brain Activation Differences in School-Age Children with Speech Sound Errors: Speech and Print Processing

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Felsenfeld, Susan; Frost, Stephen J.; Mencl, W. Einar; Fulbright, Robert K.; Grigorenko, Elena L.; Landi, Nicole; Seki, Ayumi; Pugh, Kenneth R.

    2012-01-01

    Purpose: To examine neural response to spoken and printed language in children with speech sound errors (SSE). Method: Functional magnetic resonance imaging was used to compare processing of auditorily and visually presented words and pseudowords in 17 children with SSE, ages 8;6[years;months] through 10;10, with 17 matched controls. Results: When…

  5. Real-Time Implementation of Nonlinear Processing Functions.

    DTIC Science & Technology

    1981-08-01

    crystal devices and then to use them in a coherent optical data-processing apparatus using halftone masks custom designed at the University of Southern...California. With the halftone mask technique, we have demonstrated logarithmic nonlinear transformation, allowing us to separate multiplicative images...improved. This device allowed nonlinear functions to be implemented directly without the need for specially made halftone masks. Besides

  6. Imaging quality analysis of multi-channel scanning radiometer

    NASA Astrophysics Data System (ADS)

    Fan, Hong; Xu, Wujun; Wang, Chengliang

    2008-03-01

    The multi-channel scanning radiometer, on board the FY-2 geostationary meteorological satellite, plays a key role in remote sensing because of its wide field of view and continuous multi-spectral image acquisition. It is important to evaluate image quality once the performance parameters of the imaging system have been validated. Several methods of evaluating imaging quality are discussed; of these, the most fundamental is the MTF. The MTF of a photoelectric scanning remote-sensing instrument, in the scanning direction, is the product of the optics transfer function (OTF), the detector transfer function (DTF), and the electronics transfer function (ETF). For image motion compensation, the moving speed of the scanning mirror must also be considered. The optical MTF measurement is performed in both the EAST/WEST and NORTH/SOUTH directions, and its values are used for alignment and to assess the general health of the instrument during integration and testing. Imaging systems cannot perfectly reproduce what they see and end up "blurring" the image. Many parts of the imaging system can cause blurring, among them the optical elements, the sampling of the detector itself, post-processing, or the earth's atmosphere for systems that image through it. Theoretical calculation and actual measurement show that the DTF and ETF are the main factors in the system MTF and that the imaging quality satisfies the requirements of the instrument design.
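    The stated relation, system MTF = OTF x DTF x ETF in the scan direction, can be sketched with generic textbook component models. The specific forms below (linear diffraction rolloff, sinc detector footprint, single-pole electronics) are illustrative assumptions, not the FY-2 instrument's measured curves:

```python
import numpy as np

def system_mtf(f, f_optics_cutoff, pixel_pitch, f_elec_3db):
    """Scan-direction system MTF as the product of optics, detector,
    and electronics transfer functions, evaluated at spatial
    frequencies f (cycles per unit length)."""
    # diffraction-limited optics MTF, crude linear approximation
    otf = np.clip(1.0 - f / f_optics_cutoff, 0.0, None)
    # detector footprint MTF: |sinc| of frequency times pixel pitch
    dtf = np.abs(np.sinc(f * pixel_pitch))
    # single-pole electronics response mapped to spatial frequency
    etf = 1.0 / np.sqrt(1.0 + (f / f_elec_3db) ** 2)
    return otf * dtf * etf
```

    The product form makes the abstract's conclusion easy to see: whichever component rolls off fastest (here the detector and electronics terms) dominates the overall system MTF.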

  7. Executive-affective connectivity in smokers viewing anti-smoking images: an fMRI study.

    PubMed

    Dinh-Williams, Laurence; Mendrek, Adrianna; Dumais, Alexandre; Bourque, Josiane; Potvin, Stéphane

    2014-12-30

    Despite knowledge of the harmful consequences of smoking on health, tobacco users continue to smoke. Neuroimaging studies have begun to provide insight into the mechanisms underlying this response. Regions involved in executive control and affective processing/persuasion are activated when viewing the negative value of smoking, but these systems can interact in ways that promote or hinder its impact on behavior. The goal of this functional magnetic resonance imaging (fMRI) study was to examine the dynamics between these systems during the processing of images designed to elicit a negative emotional response regarding tobacco smoking in a group of current smokers. Thirty chronic smokers passively viewed aversive smoking-related, aversive nonsmoking-related and neutral images presented in a block design while being scanned. Functional connectivity analyses showed that the left inferior frontal gyrus (IFG) is negatively associated to activity in medial frontal, cingulate, limbic, subcortical and parietal regions in chronic smokers during the processing of aversive smoking-related material, a pattern that was significantly greater when stimuli were drug-related compared with when they were nondrug-related. Our results suggest that individuals with tobacco dependence present different patterns of functional connectivity depending on whether the aversive stimuli are smoking- or nonsmoking-related. Activity in the left inferior frontal gyrus may act to down-regulate corresponding activity in regions key to an affective and persuasive response during the processing of anti-smoking material. This mechanism may reduce the extent to which "feeling bad" brings about a change in behavior. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. Translational MR Neuroimaging of Stroke and Recovery

    PubMed Central

    Mandeville, Emiri T.; Ayata, Cenk; Zheng, Yi; Mandeville, Joseph B.

    2016-01-01

    Multiparametric magnetic resonance imaging (MRI) has become a critical clinical tool for diagnosing focal ischemic stroke severity, staging treatment, and predicting outcome. Imaging during the acute phase focuses on tissue viability in the stroke vicinity, while imaging during recovery requires the evaluation of distributed structural and functional connectivity. Preclinical MRI of experimental stroke models provides validation of non-invasive biomarkers in terms of cellular and molecular mechanisms, while also providing a translational platform for evaluation of prospective therapies. This brief review of translational stroke imaging discusses the acute to chronic imaging transition, the principles underlying common MRI methods employed in stroke research, and experimental results obtained by clinical and preclinical imaging to determine tissue viability, vascular remodeling, structural connectivity of major white matter tracts, and functional connectivity using task-based and resting-state fMRI during the stroke recovery process. PMID:27578048

  9. Activation in mesolimbic and visuospatial neural circuits elicited by smoking cues: evidence from functional magnetic resonance imaging.

    PubMed

    Due, Deborah L; Huettel, Scott A; Hall, Warren G; Rubin, David C

    2002-06-01

    The authors sought to increase understanding of the brain mechanisms involved in cigarette addiction by identifying neural substrates modulated by visual smoking cues in nicotine-deprived smokers. Event-related functional magnetic resonance imaging (fMRI) was used to detect brain activation after exposure to smoking-related images in a group of nicotine-deprived smokers and a nonsmoking comparison group. Subjects viewed a pseudo-random sequence of smoking images, neutral nonsmoking images, and rare targets (photographs of animals). Subjects pressed a button whenever a rare target appeared. In smokers, the fMRI signal was greater after exposure to smoking-related images than after exposure to neutral images in mesolimbic dopamine reward circuits known to be activated by addictive drugs (right posterior amygdala, posterior hippocampus, ventral tegmental area, and medial thalamus) as well as in areas related to visuospatial attention (bilateral prefrontal and parietal cortex and right fusiform gyrus). In nonsmokers, no significant differences in fMRI signal following exposure to smoking-related and neutral images were detected. In most regions studied, both subject groups showed greater activation following presentation of rare target images than after exposure to neutral images. In nicotine-deprived smokers, both reward and attention circuits were activated by exposure to smoking-related images. Smoking cues are processed like rare targets in that they activate attentional regions. These cues are also processed like addictive drugs in that they activate mesolimbic reward regions.

  10. Functional Connectivity of the Precuneus in Female University Students with Long-Term Musical Training.

    PubMed

    Tanaka, Shoji; Kirino, Eiji

    2016-01-01

    Conceiving concrete mental imagery is critical for skillful musical expression and performance. The precuneus, a core component of the default mode network (DMN), is a hub of mental image processing that participates in functions such as episodic memory retrieval and imagining future events. The precuneus connects with many brain regions in the frontal, parietal, temporal, and occipital cortices. The aim of this study was to examine the effects of long-term musical training on the resting-state functional connectivity of the precuneus. Our hypothesis was that the functional connectivity of the precuneus is altered in musicians. We analyzed the functional connectivity of the precuneus using resting-state functional magnetic resonance imaging (fMRI) data recorded in female university students majoring in music and nonmusic disciplines. The results show that the music students had higher functional connectivity of the precuneus with opercular/insular regions, which are associated with interoceptive and emotional processing; Heschl's gyrus (HG) and the planum temporale (PT), which process complex tonal information; and the lateral occipital cortex (LOC), which processes visual information. Connectivity of the precuneus within the DMN did not differ between the two groups. Our finding suggests that functional connections between the precuneus and the regions outside of the DMN play an important role in musical performance. We propose that a neural network linking the precuneus with these regions contributes to translating mental imagery into information relevant to musical performance.

  11. Functional Connectivity of the Precuneus in Female University Students with Long-Term Musical Training

    PubMed Central

    Tanaka, Shoji; Kirino, Eiji

    2016-01-01

    Conceiving concrete mental imagery is critical for skillful musical expression and performance. The precuneus, a core component of the default mode network (DMN), is a hub of mental image processing that participates in functions such as episodic memory retrieval and imagining future events. The precuneus connects with many brain regions in the frontal, parietal, temporal, and occipital cortices. The aim of this study was to examine the effects of long-term musical training on the resting-state functional connectivity of the precuneus. Our hypothesis was that the functional connectivity of the precuneus is altered in musicians. We analyzed the functional connectivity of the precuneus using resting-state functional magnetic resonance imaging (fMRI) data recorded in female university students majoring in music and nonmusic disciplines. The results show that the music students had higher functional connectivity of the precuneus with opercular/insular regions, which are associated with interoceptive and emotional processing; Heschl’s gyrus (HG) and the planum temporale (PT), which process complex tonal information; and the lateral occipital cortex (LOC), which processes visual information. Connectivity of the precuneus within the DMN did not differ between the two groups. Our finding suggests that functional connections between the precuneus and the regions outside of the DMN play an important role in musical performance. We propose that a neural network linking the precuneus with these regions contributes to translating mental imagery into information relevant to musical performance. PMID:27445765

  12. Erotic stimulus processing under amisulpride and reboxetine: a placebo-controlled fMRI study in healthy subjects.

    PubMed

    Graf, Heiko; Wiegers, Maike; Metzger, Coraline D; Walter, Martin; Grön, Georg; Abler, Birgit

    2014-10-31

    Impaired sexual function is increasingly recognized as a side effect of psychopharmacological treatment. However, the underlying mechanisms of action of the different drugs on sexual processing are still to be explored. Using functional magnetic resonance imaging, we previously investigated effects of serotonergic (paroxetine) and dopaminergic (bupropion) antidepressants on sexual functioning (Abler et al., 2011). Here, we studied the impact of noradrenergic and antidopaminergic medication on neural correlates of visual sexual stimulation in a new sample of subjects. Nineteen healthy heterosexual males (mean age 24 years, SD 3.1) under subchronic intake (7 days) of the noradrenergic agent reboxetine (4 mg/d), the antidopaminergic agent amisulpride (200 mg/d), and placebo were included and studied with functional magnetic resonance imaging within a randomized, double-blind, placebo-controlled, within-subjects design during an established erotic video-clip task. Subjective sexual functioning was assessed using the Massachusetts General Hospital-Sexual Functioning Questionnaire. Relative to placebo, subjective sexual functioning was attenuated under reboxetine, along with diminished neural activations within the caudate nucleus. Altered neural activations correlated with decreased sexual interest. Under amisulpride, neural activations and subjective sexual functioning remained unchanged. In line with previous interpretations of the role of the caudate nucleus in the context of primary reward processing, attenuated caudate activation may reflect detrimental effects on motivational aspects of erotic stimulus processing under noradrenergic agents. © The Author 2015. Published by Oxford University Press on behalf of CINP.

  13. Imaging insights into basal ganglia function, Parkinson’s disease, and dystonia

    PubMed Central

    Stoessl, A. Jon; Lehericy, Stephane; Strafella, Antonio P.

    2015-01-01

    Recent advances in structural and functional imaging have greatly improved our ability to assess normal functions of the basal ganglia, diagnose parkinsonian syndromes, understand the pathophysiology of parkinsonism and other movement disorders, and detect and monitor disease progression. Radionuclide imaging is the best way to detect and monitor dopamine deficiency, and will probably continue to be the best biomarker for assessment of the effects of disease-modifying therapies. However, advances in magnetic resonance enable the separation of patients with Parkinson’s disease from healthy controls, and show great promise for differentiation between Parkinson’s disease and other akinetic-rigid syndromes. Radionuclide imaging is useful to show the dopaminergic basis for both motor and behavioural complications of Parkinson’s disease and its treatment, and alterations in non-dopaminergic systems. Both PET and MRI can be used to study patterns of functional connectivity in the brain, which is disrupted in Parkinson’s disease and in association with its complications, and in other basal-ganglia disorders such as dystonia, in which an anatomical substrate is not otherwise apparent. Functional imaging is increasingly used to assess underlying pathological processes such as neuroinflammation and abnormal protein deposition. This imaging is another promising approach to assess the effects of treatments designed to slow disease progression. PMID:24954673

  14. Comparison of fMRI paradigms assessing visuospatial processing: Robustness and reproducibility

    PubMed Central

    Herholz, Peer; Zimmermann, Kristin M.; Westermann, Stefan; Frässle, Stefan; Jansen, Andreas

    2017-01-01

    The development of brain imaging techniques, in particular functional magnetic resonance imaging (fMRI), made it possible to non-invasively study the hemispheric lateralization of cognitive brain functions in large cohorts. Comprehensive models of hemispheric lateralization are, however, still missing and should not only account for the hemispheric specialization of individual brain functions, but also for the interactions among different lateralized cognitive processes (e.g., language and visuospatial processing). This calls for robust and reliable paradigms to study hemispheric lateralization for various cognitive functions. While numerous reliable imaging paradigms have been developed for language, which represents the most prominent left-lateralized brain function, the reliability of imaging paradigms investigating typically right-lateralized brain functions, such as visuospatial processing, has received comparatively less attention. In the present study, we aimed to establish an fMRI paradigm that robustly and reliably identifies right-hemispheric activation evoked by visuospatial processing in individual subjects. In a first study, we therefore compared three frequently used paradigms for assessing visuospatial processing and evaluated their utility to robustly detect right-lateralized brain activity on a single-subject level. In a second study, we then assessed the test-retest reliability of the so-called Landmark task, the paradigm that yielded the most robust results in study 1. At the single-voxel level, we found poor reliability of the brain activation underlying visuospatial attention. This suggests that poor signal-to-noise ratios can become a limiting factor for test-retest reliability. This is a common detriment of fMRI paradigms investigating visuospatial attention in general and therefore highlights the need for careful consideration of both the possibilities and limitations of the respective fMRI paradigm, in particular when one is interested in effects at the single-voxel level. Notably, however, when focusing on the reliability of measures of hemispheric lateralization (which was the main goal of study 2), we show that hemispheric dominance (quantified by the lateralization index, LI, with |LI| > 0.4) of the evoked activation could be robustly determined in more than 62% and, if considering only two categories (i.e., left, right), in more than 93% of our subjects. Furthermore, the reliability of the lateralization strength (LI) was "fair" to "good". In conclusion, our results suggest that the degree of right-hemispheric dominance during visuospatial processing can be reliably determined using the Landmark task, both at the group and single-subject level, while at the same time stressing the need for future refinements of experimental paradigms and more sophisticated fMRI data acquisition techniques. PMID:29059201
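    The lateralization index and the |LI| > 0.4 dominance criterion mentioned above can be sketched as follows, using the standard convention LI = (L - R) / (L + R), which the abstract itself does not spell out:

```python
def lateralization_index(left_activation, right_activation):
    """Hemispheric lateralization index LI = (L - R) / (L + R),
    with the |LI| > 0.4 dominance criterion from the abstract.
    Inputs are summary activation measures (e.g., suprathreshold
    voxel counts) for the left and right hemispheres."""
    li = (left_activation - right_activation) / (left_activation + right_activation)
    if li > 0.4:
        category = "left-dominant"
    elif li < -0.4:
        category = "right-dominant"
    else:
        category = "bilateral"
    return li, category
```

    Note that LI ranges from -1 (purely right-hemispheric) to +1 (purely left-hemispheric), so for a right-lateralized function such as visuospatial processing, the expected values are negative.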

  15. High data volume and transfer rate techniques used at NASA's image processing facility

    NASA Technical Reports Server (NTRS)

    Heffner, P.; Connell, E.; Mccaleb, F.

    1978-01-01

    Data storage and transfer operations at a new image processing facility are described. The equipment includes high density digital magnetic tape drives and specially designed controllers to provide an interface between the tape drives and computerized image processing systems. The controller performs the functions necessary to convert the continuous serial data stream from the tape drive to a word-parallel blocked data stream which then goes to the computer-based system. With regard to the tape packing density, 1.8 × 10^10 data bits are stored on a reel of one-inch tape. System components and their operation are surveyed, and studies on advanced storage techniques are summarized.

  16. Anterior cingulate hyperactivations during negative emotion processing among men with schizophrenia and a history of violent behavior

    PubMed Central

    Tikàsz, Andràs; Potvin, Stéphane; Lungu, Ovidiu; Joyal, Christian C; Hodgins, Sheilagh; Mendrek, Adrianna; Dumais, Alexandre

    2016-01-01

    Background: Evidence suggests a 2.1–4.6-fold increase in the risk of violent behavior in schizophrenia compared to the general population. Current theories propose that the processing of negative emotions is defective in violent individuals and that dysfunctions within the neural circuits involved in emotion processing are implicated in violence. Although schizophrenia patients show enhanced sensitivity to negative stimuli, there are only a few functional neuroimaging studies that have examined emotion processing among men with schizophrenia and a history of violence. Objective: The present study aimed to identify the brain regions with greater neurofunctional alterations, as detected by functional magnetic resonance imaging during an emotion processing task, in men with schizophrenia who had engaged in violent behavior compared with those who had not. Methods: Sixty men were studied: 20 with schizophrenia and a history of violence, 19 with schizophrenia and no history of violence, and 21 healthy men. All were scanned while viewing positive, negative, and neutral images. Results: Negative images elicited hyperactivations in the anterior cingulate cortex (ACC), left and right lingual gyrus, and the left precentral gyrus in violent men with schizophrenia, compared to nonviolent men with schizophrenia and healthy men. Neutral images elicited hyperactivations in the right and left middle occipital gyrus, left lingual gyrus, and the left fusiform gyrus in violent men with schizophrenia, compared to the other two groups. Discussion: Violent men with schizophrenia displayed specific increases in ACC activity in response to negative images. Given the role of the ACC in information integration, these results indicate a specific dysfunction in the processing of negative emotions that may trigger violent behavior in men with schizophrenia. PMID:27366072

  17. ASAP (Automatic Software for ASL Processing): A toolbox for processing Arterial Spin Labeling images.

    PubMed

    Mato Abad, Virginia; García-Polo, Pablo; O'Daly, Owen; Hernández-Tamames, Juan Antonio; Zelaya, Fernando

    2016-04-01

    The method of Arterial Spin Labeling (ASL) has experienced a significant rise in its application to functional imaging, since it is the only technique capable of measuring blood perfusion in a truly non-invasive manner. Currently, there are no commercial packages for processing ASL data and there is no recognized standard for normalizing ASL data to a common frame of reference. This work describes a new Automated Software for ASL Processing (ASAP) that can automatically process several ASL datasets. ASAP includes functions for all stages of image pre-processing: quantification, skull-stripping, co-registration, partial volume correction and normalization. To assess the applicability and validity of the toolbox, this work shows its application in the study of hypoperfusion in a sample of healthy subjects at risk of progressing to Alzheimer's disease. ASAP requires limited user intervention, minimizing the possibility of random and systematic errors, and produces cerebral blood flow maps that are ready for statistical group analysis. The software is easy to operate and results in excellent quality of spatial normalization. The results found in this evaluation study are consistent with previous studies that find decreased perfusion in Alzheimer's patients in similar regions and demonstrate the applicability of ASAP.
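
    The quantification stage such a toolbox performs can be illustrated with the standard single-compartment (p)CASL model from the ASL consensus literature. This is a generic sketch, not ASAP's actual code, and the default parameter values (typical 3 T settings) are assumptions:

```python
import math

def asl_cbf(delta_m, m0, pld=1.8, tau=1.8, t1b=1.65, alpha=0.85, lam=0.9):
    """Single-compartment CBF quantification (ml/100 g/min) for
    (p)CASL data. A generic sketch of the quantification stage,
    not ASAP's implementation; defaults are assumed typical values.

    delta_m: control-label difference signal; m0: equilibrium
    magnetization; pld: post-labeling delay (s); tau: label
    duration (s); t1b: blood T1 (s); alpha: labeling efficiency;
    lam: blood-brain partition coefficient (ml/g)."""
    return (6000.0 * lam * delta_m * math.exp(pld / t1b)
            / (2.0 * alpha * t1b * m0 * (1.0 - math.exp(-tau / t1b))))

# A 0.9% control-label difference gives a plausible gray-matter value
print(round(asl_cbf(delta_m=0.009, m0=1.0)))  # 78 (ml/100g/min)
```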

  18. Anomalous visual experiences, negative symptoms, perceptual organization and the magnocellular pathway in schizophrenia: a shared construct?

    PubMed

    Kéri, Szabolcs; Kiss, Imre; Kelemen, Oguz; Benedek, György; Janka, Zoltán

    2005-10-01

    Schizophrenia is associated with impaired visual information processing. The aim of this study was to investigate the relationship between anomalous perceptual experiences, positive and negative symptoms, perceptual organization, rapid categorization of natural images and magnocellular (M) and parvocellular (P) visual pathway functioning. Thirty-five unmedicated patients with schizophrenia and 20 matched healthy control volunteers participated. Anomalous perceptual experiences were assessed with the Bonn Scale for the Assessment of Basic Symptoms (BSABS). General intellectual functions were evaluated with the revised version of the Wechsler Adult Intelligence Scale. The 1-9 version of the Continuous Performance Test (CPT) was used to investigate sustained attention. The following psychophysical tests were used: detection of Gabor patches with collinear and orthogonal flankers (perceptual organization), categorization of briefly presented natural scenes (rapid visual processing), low-contrast and frequency-doubling vernier threshold (M pathway functioning), isoluminant colour vernier threshold and high spatial frequency discrimination (P pathway functioning). The patients with schizophrenia were impaired on tests of perceptual organization, rapid visual processing and M pathway functioning. There was a significant correlation between BSABS scores, negative symptoms, perceptual organization, rapid visual processing and M pathway functioning. Positive symptoms, IQ, CPT and P pathway measures did not correlate with these parameters. The best predictor of the BSABS score was the perceptual organization deficit. These results raise the possibility that multiple facets of visual information processing deficits can be explained by M pathway dysfunctions in schizophrenia, resulting in impaired attentional modulation of perceptual organization and of natural image categorization.

  19. Dynamic chest radiography: flat-panel detector (FPD) based functional X-ray imaging.

    PubMed

    Tanaka, Rie

    2016-07-01

    Dynamic chest radiography is a flat-panel detector (FPD)-based functional X-ray imaging technique, which is performed as an additional examination in chest radiography. The large field of view (FOV) of FPDs permits real-time observation of the entire lungs and simultaneous evaluation of right and left diaphragm kinetics. Most importantly, dynamic chest radiography provides pulmonary ventilation and circulation findings as slight changes in pixel value even without the use of contrast media; the interpretation is challenging but crucial for a better understanding of pulmonary function. The basic concept was proposed in the 1980s; however, it was not realized until the 2010s because of technical limitations. Dynamic FPDs and advanced digital image processing played a key role in the clinical application of dynamic chest radiography. Pulmonary ventilation and circulation can be quantified and visualized for the diagnosis of pulmonary diseases. Dynamic chest radiography can be deployed as a simple and rapid means of functional imaging in both routine and emergency medicine. Here, we focus on the evaluation of pulmonary ventilation and circulation. This review article describes the basic mechanism of imaging findings according to pulmonary ventilation/circulation physiology, followed by imaging procedures, analysis methods, and the diagnostic performance of dynamic chest radiography.

  20. UWGSP4: an imaging and graphics superworkstation and its medical applications

    NASA Astrophysics Data System (ADS)

    Jong, Jing-Ming; Park, Hyun Wook; Eo, Kilsu; Kim, Min-Hwan; Zhang, Peng; Kim, Yongmin

    1992-05-01

    UWGSP4 is configured with a parallel architecture for image processing and a pipelined architecture for computer graphics. The system's peak performance is 1,280 MFLOPS for image processing and over 200,000 Gouraud shaded 3-D polygons per second for graphics. The simulated sustained performance is about 50% of the peak performance in general image processing. Most of the 2-D image processing functions are efficiently vectorized and parallelized in UWGSP4. A performance of 770 MFLOPS in convolution and 440 MFLOPS in FFT is achieved. Real-time cine display of up to 32 frames of 1280 X 1024 pixels per second is supported. In 3-D imaging, the update rate for surface rendering is 10 frames of 20,000 polygons per second; the update rate for volume rendering is 6 frames of 128 X 128 X 128 voxels per second. The system provides 1280 X 1024 X 32-bit double frame buffers and one 1280 X 1024 X 8-bit overlay buffer for supporting realistic animation, 24-bit true color, and text annotation. A 1280 X 1024-pixel, 66-Hz noninterlaced display screen with 1:1 aspect ratio can be windowed into the frame buffer for the display of any portion of the processed image or graphics.

  1. New technologies to investigate the brain-gut axis

    PubMed Central

    Sharma, Abhishek; Lelic, Dina; Brock, Christina; Paine, Peter; Aziz, Qasim

    2009-01-01

    Functional gastrointestinal disorders are commonly encountered in clinical practice, and pain is their commonest presenting symptom. In addition, patients with these disorders often demonstrate a heightened sensitivity to experimental visceral stimulation, termed visceral pain hypersensitivity that is likely to be important in their pathophysiology. Knowledge of how the brain processes sensory information from visceral structures is still in its infancy. However, our understanding has been propelled by technological imaging advances such as functional Magnetic Resonance Imaging, Positron Emission Tomography, Magnetoencephalography, and Electroencephalography (EEG). Numerous human studies have non-invasively demonstrated the complexity involved in functional pain processing, and highlighted a number of subcortical and cortical regions involved. This review will focus on the neurophysiological pathways (primary afferents, spinal and supraspinal transmission), brain-imaging techniques and the influence of endogenous and psychological processes in healthy controls and patients suffering from functional gastrointestinal disorders. Special attention will be paid to the newer EEG source analysis techniques. Understanding the phenotypic differences that determine an individual’s response to injurious stimuli could be the key to understanding why some patients develop pain and hyperalgesia in response to inflammation/injury while others do not. For future studies, an integrated approach is required incorporating an individual’s psychological, autonomic, neuroendocrine, neurophysiological, and genetic profile to define phenotypic traits that may be at greater risk of developing sensitised states in response to gut inflammation or injury. PMID:19132768

  2. Advances of Molecular Imaging for Monitoring the Anatomical and Functional Architecture of the Olfactory System.

    PubMed

    Zhang, Xintong; Bi, Anyao; Gao, Quansheng; Zhang, Shuai; Huang, Kunzhu; Liu, Zhiguo; Gao, Tang; Zeng, Wenbin

    2016-01-20

    The olfactory system of organisms serves as a genetically and anatomically tractable model for studying how sensory input can be translated into behavior output. Some neurologic diseases are considered to be related to olfactory disturbance, especially Alzheimer's disease, Parkinson's disease, multiple sclerosis, and so forth. However, it is still unclear how the olfactory system affects disease generation processes and olfaction delivery processes. Molecular imaging, a modern multidisciplinary technology, can provide valid tools for the early detection and characterization of diseases, evaluation of treatment, and study of biological processes in living subjects, since molecular imaging applies specific molecular probes as a novel approach to produce special data for studying biological processes at the cellular and subcellular levels. Recently, molecular imaging has played a key role in studying the activation of the olfactory system, and thus it could help to prevent or delay some diseases. Herein, we present a comprehensive review of the research progress of imaging probes for visualizing the olfactory system, classified by imaging modality, including PET, MRI, and optical imaging. Additionally, the probes' design, sensing mechanism, and biological application are discussed. Finally, we provide an outlook for future studies in this field.

  3. A Multimodal Approach to Counselor Supervision.

    ERIC Educational Resources Information Center

    Ponterotto, Joseph G.; Zander, Toni A.

    1984-01-01

    Represents an initial effort to apply Lazarus's multimodal approach to a model of counselor supervision. Includes continuously monitoring the trainee's behavior, affect, sensations, images, cognitions, interpersonal functioning, and when appropriate, biological functioning (diet and drugs) in the supervisory process. (LLL)

  4. Imaging quality evaluation method of pixel coupled electro-optical imaging system

    NASA Astrophysics Data System (ADS)

    He, Xu; Yuan, Li; Jin, Chunqi; Zhang, Xiaohui

    2017-09-01

    With advancements in high-resolution imaging optical fiber bundle fabrication technology, traditional photoelectric imaging systems have become "flexible", with greatly reduced volume and weight. However, traditional image quality evaluation models are limited by the coupled discrete sampling effect of fiber-optic image bundles and charge-coupled device (CCD) pixels. This limitation substantially complicates the design, optimization, assembly, and image quality evaluation of the coupled discrete sampling imaging system. Based on the transfer process of a grayscale cosine-distribution optical signal through the fiber-optic image bundle and CCD, a mathematical model of the coupled modulation transfer function (coupled-MTF) is established. This model can be used as a basis for subsequent studies on the convergence and periodically oscillating characteristics of the function. We also propose the concept of the average coupled-MTF, which is consistent with the definition of the traditional MTF. Based on this concept, the relationships among core distance, core layer radius, and average coupled-MTF are investigated.

  5. Loneliness in late-life depression: structural and functional connectivity during affective processing.

    PubMed

    Wong, N M L; Liu, H-L; Lin, C; Huang, C-M; Wai, Y-Y; Lee, S-H; Lee, T M C

    2016-09-01

    Late-life depression (LLD) in the elderly has been reported to present with emotion dysregulation accompanied by high perceived loneliness. Previous research has suggested that LLD is a disorder of connectivity and is associated with aberrant network properties. On the other hand, perceived loneliness is found to adversely affect the brain, but little is known about its neurobiological basis in LLD. The current study investigated the relationships between structural connectivity, functional connectivity during affective processing, and perceived loneliness in LLD. The current study included 54 participants aged >60 years, of whom 31 were diagnosed with LLD. Diffusion tensor imaging (DTI) data and task-based functional magnetic resonance imaging (fMRI) data of an affective processing task were collected. Network-based statistics and graph theory techniques were applied, and the participants' perceived loneliness and depression level were measured. The affective processing task included viewing affective stimuli. Structurally, a loneliness-related sub-network was identified across all subjects. Functionally, perceived loneliness was related to connectivity differently in LLD than in controls when processing negative stimuli, with aberrant networking in subcortical areas. Perceived loneliness was identified to have a unique role in relation to negative affective processing in LLD at the functional brain connectional and network levels. The findings increase our understanding of LLD and provide initial evidence of the neurobiological mechanisms of loneliness in LLD. Loneliness might be a potential intervention target in depressive patients.

  6. Effects of the BDNF Val66Met polymorphism on neural responses to facial emotion.

    PubMed

    Mukherjee, Prerona; Whalley, Heather C; McKirdy, James W; McIntosh, Andrew M; Johnstone, Eve C; Lawrie, Stephen M; Hall, Jeremy

    2011-03-31

    The brain-derived neurotrophic factor (BDNF) Val66Met polymorphism has been associated with affective disorders, but its role in emotion processing has not been fully established. Due to the clinically heterogeneous nature of these disorders, studying the effect of genetic variation in the BDNF gene on a common attribute such as fear processing may elucidate how the BDNF Val66Met polymorphism impacts brain function. Here we use functional magnetic resonance imaging to examine the effect of the BDNF Val66Met genotype on neural activity during fear processing. Forty healthy participants performed an implicit fear task during scanning, where subjects made gender judgments from facial images with neutral or fearful emotion. Subjects were tested for facial emotion recognition post-scan. Functional connectivity was investigated using psycho-physiological interactions. Subjects were genotyped for the BDNF Val66Met polymorphism and the measures compared between genotype groups. Met carriers showed overactivation in the anterior cingulate cortex (ACC), brainstem and insula bilaterally for fear processing, along with reduced functional connectivity from the ACC to the left hippocampus, and impaired fear recognition ability. The results show that during fear processing, Met allele carriers show an increased neural response in regions previously implicated in mediating autonomic arousal. Further, the Met carriers show decreased functional connectivity with the hippocampus, which may reflect differential retrieval of emotional associations. Together, these effects show significant differences in the neural substrate for fear processing with genetic variation in BDNF.

  7. Neuroimaging Techniques: a Conceptual Overview of Physical Principles, Contribution and History

    NASA Astrophysics Data System (ADS)

    Minati, Ludovico

    2006-06-01

    This paper is meant to provide a brief overview of the techniques currently used to image the brain and to study non-invasively its anatomy and function. After a historical summary in the first section, general aspects are outlined in the second section. The subsequent six sections survey, in order, computed tomography (CT), morphological magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), diffusion-tensor magnetic resonance imaging (DWI/DTI), positron emission tomography (PET), and electro- and magneto-encephalography (EEG/MEG) based imaging. Underlying physical principles, modelling and data processing approaches, as well as clinical and research relevance are briefly outlined for each technique. Given the breadth of the scope, there has been no attempt to be comprehensive. The ninth and final section outlines some aspects of active research in neuroimaging.

  8. Multiplexed 3D FRET imaging in deep tissue of live embryos

    PubMed Central

    Zhao, Ming; Wan, Xiaoyang; Li, Yu; Zhou, Weibin; Peng, Leilei

    2015-01-01

    Current deep tissue microscopy techniques are mostly restricted to intensity mapping of fluorophores, which significantly limit their applications in investigating biochemical processes in vivo. We present a deep tissue multiplexed functional imaging method that probes multiple Förster resonant energy transfer (FRET) sensors in live embryos with high spatial resolution. The method simultaneously images fluorescence lifetimes in 3D with multiple excitation lasers. Through quantitative analysis of triple-channel intensity and lifetime images, we demonstrated that Ca2+ and cAMP levels of live embryos expressing dual FRET sensors can be monitored simultaneously at microscopic resolution. The method is compatible with a broad range of FRET sensors currently available for probing various cellular biochemical functions. It opens the door to imaging complex cellular circuitries in whole live organisms. PMID:26387920

  10. An automatic optimum kernel-size selection technique for edge enhancement

    USGS Publications Warehouse

    Chavez, Pat S.; Bauer, Brian P.

    1982-01-01

    Edge enhancement is a technique that can be considered, to a first order, a correction for the modulation transfer function of an imaging system. Digital imaging systems sample a continuous function at discrete intervals so that high-frequency information cannot be recorded at the same precision as lower frequency data. Because of this, fine detail or edge information in digital images is lost. Spatial filtering techniques can be used to enhance the fine detail information that does exist in the digital image, but the filter size is dependent on the type of area being processed. A technique has been developed by the authors that uses the horizontal first difference to automatically select the optimum kernel-size that should be used to enhance the edges that are contained in the image. 
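
    The selection idea can be sketched as follows. The abstract does not give the authors' exact mapping from first-difference statistics to kernel size, so the rule below (busier imagery gets a smaller kernel) is an illustrative assumption:

```python
import numpy as np

def optimum_kernel_size(image, sizes=(3, 5, 7, 9, 11)):
    """Pick an edge-enhancement kernel size from the horizontal
    first difference, in the spirit of the technique described
    above. The mapping (high-detail images -> small kernels) is
    an illustrative assumption, not the authors' actual rule."""
    img = np.asarray(image, dtype=float)
    # Mean absolute horizontal first difference as a "busyness" measure
    busyness = np.abs(np.diff(img, axis=1)).mean()
    # Normalize by the image's dynamic range to get a 0..1 score
    span = img.max() - img.min()
    score = busyness / span if span else 0.0
    # High busyness -> small kernel; low busyness -> large kernel
    idx = min(int((1.0 - score) * len(sizes)), len(sizes) - 1)
    return sizes[idx]

print(optimum_kernel_size(np.zeros((8, 8))))  # 11 (flat area, largest kernel)
```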

  11. Registration of interferometric SAR images

    NASA Technical Reports Server (NTRS)

    Lin, Qian; Vesecky, John F.; Zebker, Howard A.

    1992-01-01

    Interferometric synthetic aperture radar (INSAR) is a new way of performing topography mapping. Among the factors critical to mapping accuracy is the registration of the complex SAR images from repeated orbits. A new algorithm for registering interferometric SAR images is presented. A new figure of merit, the average fluctuation function of the phase difference image, is proposed to evaluate the fringe pattern quality. The process of adjusting the registration parameters according to the fringe pattern quality is optimized through a downhill simplex minimization algorithm. The results of applying the proposed algorithm to register two pairs of Seasat SAR images with a short baseline (75 m) and a long baseline (500 m) are shown. It is found that the average fluctuation function is a very stable measure of fringe pattern quality allowing very accurate registration.
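
    The figure of merit can be sketched as the mean absolute pixel-to-pixel variation of the phase-difference image. This is a plausible reading of the "average fluctuation function" named above, not necessarily the authors' exact definition:

```python
import numpy as np

def average_fluctuation(phase):
    """Mean absolute pixel-to-pixel variation of a phase image.
    Lower values mean smoother fringes, i.e. better registration.
    A plausible reading of the paper's figure of merit, not its
    exact definition."""
    return (np.abs(np.diff(phase, axis=0)).mean() +
            np.abs(np.diff(phase, axis=1)).mean())

# Toy interferogram: a smooth phase ramp and a mis-registered copy
y, x = np.mgrid[0:64, 0:64]
master = np.exp(1j * 0.2 * x)
slave = np.roll(master, 3, axis=1)

def merit(shift):
    """Fringe quality after shifting the slave image back. The real
    algorithm drives a criterion like this with a downhill simplex
    search over sub-pixel registration parameters."""
    candidate = np.roll(slave, -shift, axis=1)
    return average_fluctuation(np.angle(master * np.conj(candidate)))

assert merit(3) < merit(0)  # correct registration flattens the fringes
```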

  12. Enhancing the Teaching of Digital Processing of Remote Sensing Image Course through Geospatial Web Processing Services

    NASA Astrophysics Data System (ADS)

    di, L.; Deng, M.

    2010-12-01

    Remote sensing (RS) is an essential method to collect data for Earth science research. Huge amounts of remote sensing data, most of them in image form, have been acquired. Almost all geography departments in the world offer courses in digital processing of remote sensing images. Such courses place emphasis on how to digitally process large amounts of multi-source images for solving real world problems. However, due to the diversity and complexity of RS images and the shortcomings of current data and processing infrastructure, obstacles to effectively teaching such courses still remain. The major obstacles include 1) difficulties in finding, accessing, integrating and using massive RS images by students and educators, and 2) inadequate processing functions and computing facilities for students to freely explore the massive data. Recent development in geospatial Web processing service systems, which make massive data, computing power, and processing capabilities available to average Internet users anywhere in the world, promises the removal of these obstacles. The GeoBrain system developed by CSISS is an example of such systems. All functions available in the GRASS Open Source GIS have been implemented as Web services in GeoBrain. Petabytes of remote sensing images in NASA data centers, the USGS Landsat data archive, and NOAA CLASS are accessible transparently and processable through GeoBrain. The GeoBrain system is operated on a high performance cluster server with large disk storage and a fast Internet connection. All GeoBrain capabilities can be accessed by any Internet-connected Web browser. Dozens of universities have used GeoBrain as an ideal platform to support data-intensive remote sensing education. This presentation gives a specific example of using GeoBrain geoprocessing services to enhance the teaching of GGS 588, Digital Remote Sensing, taught at the Department of Geography and Geoinformation Science, George Mason University.
The course uses the textbook "Introductory Digital Image Processing: A Remote Sensing Perspective" by John Jensen. The textbook is widely adopted in geography departments around the world for training students in digital processing of remote sensing images. In the traditional teaching setting for the course, the instructor prepares a set of sample remote sensing images to be used for the course. Commercial desktop remote sensing software, such as ERDAS, is used for students to do the lab exercises. The students have to do the exercises in the lab and can only use the sample images. For this specific course at GMU, we developed GeoBrain-based lab exercises. With GeoBrain, students can now explore petabytes of remote sensing images in the NASA, NOAA, and USGS data archives instead of dealing only with sample images. Students have a much more powerful computing facility available for their lab exercises. They can explore the data and do the exercises at any time and place they want, as long as they can access the Internet through a Web browser. The feedback from students about the learning experience in digital image processing with the help of GeoBrain web processing services has been very positive. The teaching/lab materials and GeoBrain services are freely available to anyone at http://www.laits.gmu.edu.

  13. Multimodal and Multi-tissue Measures of Connectivity Revealed by Joint Independent Component Analysis.

    PubMed

    Franco, Alexandre R; Ling, Josef; Caprihan, Arvind; Calhoun, Vince D; Jung, Rex E; Heileman, Gregory L; Mayer, Andrew R

    2008-12-01

    The human brain functions as an efficient system where signals arising from gray matter are transported via white matter tracts to other regions of the brain to facilitate human behavior. However, with a few exceptions, functional and structural neuroimaging data are typically optimized to maximize the quantification of signals arising from a single source. For example, functional magnetic resonance imaging (FMRI) is typically used as an index of gray matter functioning whereas diffusion tensor imaging (DTI) is typically used to determine white matter properties. While it is likely that these signals arising from different tissue sources contain complementary information, the signal processing algorithms necessary for the fusion of neuroimaging data across imaging modalities are still in a nascent stage. In the current paper we present a data-driven method for combining measures of functional connectivity arising from gray matter sources (FMRI resting state data) with different measures of white matter connectivity (DTI). Specifically, a joint independent component analysis (J-ICA) was used to combine these measures of functional connectivity following intensive signal processing and feature extraction within each of the individual modalities. Our results indicate that one of the most predominantly used measures of functional connectivity (activity in the default mode network) is highly dependent on the integrity of white matter connections between the two hemispheres (corpus callosum) and within the cingulate bundles. Importantly, the discovery of this complex relationship of connectivity was entirely facilitated by the signal processing and fusion techniques presented herein and could not have been revealed through separate analyses of both data types as is typically performed in the majority of neuroimaging experiments. 
We conclude by discussing future applications of this technique to other areas of neuroimaging and examining potential limitations of the methods.
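
    The fusion step can be sketched with a minimal joint ICA: per-subject feature vectors from the two modalities are z-scored, concatenated side by side, and decomposed together, so each component has linked fMRI and DTI parts sharing one subject loading. This illustrates the J-ICA idea only, not the authors' pipeline; the function and variable names are hypothetical:

```python
import numpy as np
from sklearn.decomposition import FastICA

def joint_ica(fmri_maps, dti_maps, n_components=3):
    """Minimal joint ICA sketch: concatenate z-scored feature
    vectors from both modalities and estimate one decomposition,
    yielding paired fMRI/DTI component maps."""
    fmri = np.asarray(fmri_maps, dtype=float)
    dti = np.asarray(dti_maps, dtype=float)
    z = lambda a: (a - a.mean()) / a.std()
    joint = np.hstack([z(fmri), z(dti)])          # subjects x (v1 + v2)
    # Decompose joint.T so components live in the joint feature space
    sources = FastICA(n_components=n_components,
                      random_state=0).fit_transform(joint.T).T
    return sources[:, :fmri.shape[1]], sources[:, fmri.shape[1]:]

# Synthetic example: 20 subjects, 3 shared sources across 50 + 40 features
rng = np.random.default_rng(0)
data = rng.normal(size=(20, 3)) @ rng.laplace(size=(3, 90))
fmri_part, dti_part = joint_ica(data[:, :50], data[:, 50:])
print(fmri_part.shape, dti_part.shape)  # (3, 50) (3, 40)
```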

  14. Methodology for Image-Based Reconstruction of Ventricular Geometry for Patient-Specific Modeling of Cardiac Electrophysiology

    PubMed Central

    Prakosa, A.; Malamas, P.; Zhang, S.; Pashakhanloo, F.; Arevalo, H.; Herzka, D. A.; Lardo, A.; Halperin, H.; McVeigh, E.; Trayanova, N.; Vadakkumpadan, F.

    2014-01-01

    Patient-specific modeling of ventricular electrophysiology requires an interpolated reconstruction of the 3-dimensional (3D) geometry of the patient ventricles from the low-resolution (Lo-res) clinical images. The goal of this study was to implement a processing pipeline for obtaining the interpolated reconstruction, and thoroughly evaluate the efficacy of this pipeline in comparison with alternative methods. The pipeline implemented here involves contouring the epi- and endocardial boundaries in Lo-res images, interpolating the contours using the variational implicit functions method, and merging the interpolation results to obtain the ventricular reconstruction. Five alternative interpolation methods, namely linear, cubic spline, spherical harmonics, cylindrical harmonics, and shape-based interpolation were implemented for comparison. In the thorough evaluation of the processing pipeline, Hi-res magnetic resonance (MR), computed tomography (CT), and diffusion tensor (DT) MR images from numerous hearts were used. Reconstructions obtained from the Hi-res images were compared with the reconstructions computed by each of the interpolation methods from a sparse sample of the Hi-res contours, which mimicked Lo-res clinical images. Qualitative and quantitative comparison of these ventricular geometry reconstructions showed that the variational implicit functions approach performed better than others. Additionally, the outcomes of electrophysiological simulations (sinus rhythm activation maps and pseudo-ECGs) conducted using models based on the various reconstructions were compared. These electrophysiological simulations demonstrated that our implementation of the variational implicit functions-based method had the best accuracy. PMID:25148771
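
    One of the comparison methods above, cubic-spline interpolation across slices, can be sketched briefly. Point correspondence between slice contours is assumed here, which sidesteps the matching problem a real pipeline must solve:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_contours(z_sparse, contours, z_dense):
    """Cubic-spline interpolation of boundary contours across
    slices. `contours` has shape (n_slices, n_points, 2), with
    corresponding points across slices (an assumption here)."""
    return CubicSpline(z_sparse, np.asarray(contours), axis=0)(z_dense)

# Sparse "clinical" slices every 8 mm, reconstructed at 1 mm spacing
theta = np.linspace(0, 2 * np.pi, 24, endpoint=False)
z_sparse = np.arange(0.0, 41.0, 8.0)          # 6 slices: 0, 8, ..., 40 mm
radii = 20.0 - 0.2 * z_sparse                 # a tapering toy "ventricle"
contours = np.stack([np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
                     for r in radii])
dense = interpolate_contours(z_sparse, contours, np.arange(0.0, 40.0, 1.0))
print(dense.shape)  # (40, 24, 2)
```

    The variational implicit functions approach favored by the authors instead fits a smooth implicit surface through the contour points, avoiding the point-correspondence assumption entirely.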

  15. Functional imaging of the brain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ell, P.J.; Jarritt, P.H.; Costa, D.C.

    1987-07-01

    The radionuclide tracer method is unique among all other imaging methodologies in its ability to trace organ or tissue function and metabolism. Physical processes such as electron or proton density assessment or resonance, edge identification, and electrical or ultrasonic impedance do not pertain to the image generation process in nuclear medicine, or if so, only in a rather secondary manner. The nuclear medicine imaging study is primarily a study of the chemical nature, distribution and interaction of the tracer/radiopharmaceutical utilized with the cellular system which requires investigation: the thyroid cells with sodium iodide, the reticuloendothelial cells with colloidal particles, the adrenal medulla cells with metaiodobenzylguanidine, and so on. In the two most recent areas of nuclear medicine expansion, oncology (with labelled monoclonal antibodies) and neurology and psychiatry (with a whole new series of lipid-soluble radiopharmaceuticals), specific cell systems can also be targeted and hence imaged and investigated. The study of structure, as masterfully performed by Virchow and all his successors over more than a century, is now definitely the prerogative of such imaging systems, which excel in spatial and contrast resolution. However, the investigation of function and metabolism has clearly passed from the laboratory animal protocol and experiment to direct investigation in man, this being the achievement of the radionuclide tracer methodology. In this article, we review present interest and developments in that part of nuclear medicine activity which is aimed at the study of the neurological or psychiatric patient.

  16. Computer imaging and workflow systems in the business office.

    PubMed

    Adams, W T; Veale, F H; Helmick, P M

    1999-05-01

    Computer imaging and workflow technology automates many business processes that currently are performed using paper processes. Documents are scanned into the imaging system and placed in electronic patient account folders. Authorized users throughout the organization, including preadmission, verification, admission, billing, cash posting, customer service, and financial counseling staff, have online access to the information they need when they need it. Such streamlining of business functions can increase collections and customer satisfaction while reducing labor, supply, and storage costs. Because the costs of a comprehensive computer imaging and workflow system can be considerable, healthcare organizations should consider implementing parts of such systems that can be cost-justified or include implementation as part of a larger strategic technology initiative.

  17. Advances in functional brain imaging technology and developmental neuro-psychology: their applications in the Jungian analytic domain.

    PubMed

    Petchkovsky, Leon

    2017-06-01

    Analytical psychology shares with many other psychotherapies the important task of repairing the consequences of developmental trauma. The majority of analytic patients come from compromised early developmental backgrounds: they may have experienced neglect, abuse, or failures of empathic resonance from their carers. Functional brain imaging techniques, including quantitative electroencephalography (QEEG) and functional magnetic resonance imaging (fMRI), allow us to track mental processes in ways beyond verbal reportage and introspection. This independent perspective is useful for developing new psychodynamic hypotheses, testing current ones, providing diagnostic markers, and monitoring treatment progress. Jung, with the Word Association Test, grasped these principles 100 years ago. Brain imaging techniques have contributed to powerful recent advances in our understanding of neurodevelopmental processes in the first three years of life. If adequate nurturance is compromised, a range of difficulties may emerge. This has important implications for how we understand and treat our psychotherapy clients. The paper provides an overview of functional brain imaging and advances in developmental neuropsychology, and looks at applications of some of these findings (including neurofeedback) in the Jungian psychotherapy domain. © 2017, The Society of Analytical Psychology.

  18. Technical Note: DIRART – A software suite for deformable image registration and adaptive radiotherapy research

    PubMed Central

    Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Murty Goddu, S.; Mutic, Sasa; Deasy, Joseph O.; Low, Daniel A.

    2011-01-01

    Purpose: Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). Methods: DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse-consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. Results: DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a wide range of options for DIR results visualization, evaluation, and validation. Conclusions: By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research. PMID:21361176

  19. Multi-functional angiographic OFDI using frequency-multiplexed dual-beam illumination

    PubMed Central

    Kim, SunHee; Park, Taejin; Jang, Sun-Joo; Nam, Ahhyun S.; Vakoc, Benjamin J.; Oh, Wang-Yuhl

    2015-01-01

    Detection of blood flow inside a tissue sample can be achieved by measuring the local change of the complex signal over time in angiographic optical coherence tomography (OCT). In conventional angiographic OCT, the transverse displacement of the imaging beam during the time interval between a pair of OCT signal measurements must be kept very small to minimize noise from beam-scanning-induced phase decorrelation, at the expense of imaging speed. The recent introduction of dual-beam scan methods, using either polarization encoding or two identical imaging systems in the spectral-domain (SD) OCT scheme, shows potential for high-sensitivity vasculature imaging without suffering from spurious phase noise caused by beam-scanning-induced spatial decorrelation. In this paper, we present multi-functional angiographic optical frequency domain imaging (OFDI) using frequency-multiplexed dual-beam illumination. This frequency-multiplexing scheme, utilizing unique features of OFDI, provides spatially separated dual imaging beams occupying distinct electrical frequency bands that can be demultiplexed in frequency-domain processing. We demonstrate 3D multi-functional imaging of normal mouse skin in the dorsal skin fold chamber, visualizing distinct layer structures from the intensity imaging, information about mechanical integrity from the polarization-sensitive imaging, and depth-resolved microvasculature from the angiographic imaging, all simultaneously acquired and automatically co-registered. PMID:25968731
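    The frequency-demultiplexing idea can be illustrated with a toy one-dimensional signal: two carriers occupying distinct bands are separated by masking in the Fourier domain. The carrier frequencies and band edges below are arbitrary illustrative values, not the OFDI system's:

```python
import numpy as np

fs = 1000.0                          # sampling rate (arbitrary units)
n = 1000
t = np.arange(n) / fs
# Two "imaging beams" encoded on distinct carrier frequencies (toy values)
s1 = np.cos(2 * np.pi * 100 * t)     # beam 1: band around 100 Hz
s2 = np.cos(2 * np.pi * 300 * t)     # beam 2: band around 300 Hz
composite = s1 + s2                  # single frequency-multiplexed record

# Demultiplex with band masks in the frequency domain
F = np.fft.rfft(composite)
freqs = np.fft.rfftfreq(n, 1 / fs)
r1 = np.fft.irfft(np.where((freqs > 50) & (freqs < 150), F, 0), n=n)
r2 = np.fft.irfft(np.where((freqs > 250) & (freqs < 350), F, 0), n=n)
```

    With the carriers on exact FFT bins, as here, each beam's record is recovered essentially exactly after masking and inverting.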

  20. The design of wavefront coded imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Shun; Cen, Zhaofeng; Li, Xiaotong

    2016-10-01

    Wavefront coding is a new method to extend the depth of field, which combines optical design and signal processing together. Using the optical design software ZEMAX, we designed a practical wavefront-coded imaging system based on a conventional Cooke triplet. Unlike a conventional optical system, the wavefront of this new system is modulated by a specially designed phase mask, which makes the point spread function (PSF) of the optical system insensitive to defocus. Therefore, a series of similarly blurred images is obtained at the image plane. In addition, the optical transfer function (OTF) of the wavefront-coded imaging system is independent of focus: it is nearly constant with misfocus and has no regions of zeros. All object information can therefore be completely recovered through digital filtering at different defocus positions. The focus invariance of the MTF is selected as the merit function in this design, and the coefficients of the phase mask are set as optimization goals. Compared to a conventional optical system, the wavefront-coded imaging system obtains better-quality images under different object distances. Some deficiencies appear in the restored images due to the influence of the digital filtering algorithm, which are also analyzed in this paper. The depth of field of the designed wavefront-coded imaging system is about 28 times larger than that of the initial optical system, while keeping higher optical power and resolution at the image plane.
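    A minimal scalar Fourier-optics sketch shows why a cubic phase mask makes the PSF insensitive to defocus. The mask strength `alpha` and the grid size are illustrative assumptions, not the paper's design values:

```python
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0).astype(float)   # circular aperture

def psf(w20, alpha):
    """PSF from the pupil function: w20 = defocus in waves,
    alpha = cubic-mask strength in waves (mask = alpha*(x^3 + y^3))."""
    phase = 2 * np.pi * (w20 * (X**2 + Y**2) + alpha * (X**3 + Y**3))
    field = pupil * np.exp(1j * phase)
    p = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return p / p.sum()

# L1 distance between the in-focus and 2-waves-defocused PSFs,
# without and with the cubic mask
d_clear = np.abs(psf(0.0, 0.0) - psf(2.0, 0.0)).sum()
d_coded = np.abs(psf(0.0, 10.0) - psf(2.0, 10.0)).sum()
print(d_coded < d_clear)   # the coded PSF changes far less with defocus
```

    The coded PSF is larger and asymmetric, which is why a digital deconvolution step is required to restore the final image.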

  1. Photonics and bioinspiration

    NASA Astrophysics Data System (ADS)

    Lewis, Keith

    2014-10-01

    Biological systems exploiting light have benefitted from thousands of years of genetic evolution and can provide insight to support the development of new approaches for imaging, image processing and communication. For example, biological vision systems can provide significant diversity, yet are able to function with only a minimal degree of neural processing. Examples will be described underlying the processes used to support the development of new concepts for photonic systems, ranging from uncooled bolometers and tunable filters, to asymmetric free-space optical communication systems and new forms of camera capable of simultaneously providing spectral and polarimetric diversity.

  2. Functional Heterogeneity and Convergence in the Right Temporoparietal Junction

    PubMed Central

    Lee, Su Mei; McCarthy, Gregory

    2016-01-01

    The right temporoparietal junction (rTPJ) is engaged by tasks that manipulate biological motion processing, Theory of Mind attributions, and attention reorienting. The proximity of activations elicited by these tasks raises the question of whether these tasks share common cognitive component processes that are subserved by common neural substrates. Here, we used high-resolution whole-brain functional magnetic resonance imaging in a within-subjects design to determine whether these tasks activate common regions of the rTPJ. Each participant was presented with the 3 tasks in the same imaging session. In a whole-brain analysis, we found that only the right and left TPJs were activated by all 3 tasks. Multivoxel pattern analysis revealed that the regions of overlap could still discriminate the 3 tasks. Notably, we found significant cross-task classification in the right TPJ, which suggests a shared neural process between the 3 tasks. Taken together, these results support prior studies that have indicated functional heterogeneity within the rTPJ but also suggest a convergence of function within a region of overlap. These results also call for further investigation into the nature of the function subserved in this overlap region. PMID:25477367

  3. An automated multi-scale network-based scheme for detection and location of seismic sources

    NASA Astrophysics Data System (ADS)

    Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.

    2017-12-01

    We present a recently developed method - BackTrackBB (Poiata et al. 2016) - that allows imaging of energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremors) in different tectonic environments using continuous seismic records. The method exploits multi-scale frequency-selective coherence in the wave field recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions, projected onto theoretical 3D time-delay grids. This imaging function is interpreted as the location likelihood of the seismic source. A signal pre-processing step constructs a multi-band statistical representation of the nonstationary signal, i.e. the time series, by means of higher-order statistics or energy-envelope characteristic functions. Such signal processing is designed to detect signal transients in time - of different scales and a priori unknown predominant frequency - potentially associated with a variety of sources (e.g., earthquakes, LFEs, tremors), and to improve the performance and robustness of the detection-and-location step. The initial detection-location, based on a single-phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme - exploiting the 3-component records - makes use of P- and S-phase characteristic functions, extracted after a polarization analysis of the event waveforms, and combines the single-phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated here in different tectonic environments: (1) analysis of the one-year-long precursory phase of the 2014 Iquique earthquake in Chile; (2) detection and location of tectonic tremor sources and low-frequency earthquakes during multiple episodes of tectonic tremor activity in southwestern Japan.
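    The core back-projection idea (shift each station's characteristic function by its theoretical travel time and stack over a grid of candidate sources) can be sketched in one dimension. The geometry, wave speed, and Gaussian characteristic functions below are synthetic stand-ins, not BackTrackBB's actual implementation:

```python
import numpy as np

v = 3.0                                        # wave speed (km/s)
dt = 0.01                                      # sample interval (s)
nt = 2000
stations = np.array([0.0, 12.0, 25.0, 40.0])   # station positions (km)
grid = np.linspace(0.0, 40.0, 81)              # candidate source positions
true_src, t0 = 17.5, 5.0                       # true source and origin time

# Synthetic characteristic functions: a Gaussian transient at each arrival time
t = np.arange(nt) * dt
cf = np.array([np.exp(-0.5 * ((t - (t0 + abs(s - true_src) / v)) / 0.05) ** 2)
               for s in stations])

# Imaging function: for each grid node, shift each CF back by its travel time
# and stack; the maximum over nodes locates the source.
img = np.zeros(len(grid))
for k, g in enumerate(grid):
    shifts = (np.abs(stations - g) / v / dt).astype(int)
    stack = sum(np.roll(cf[i], -shifts[i]) for i in range(len(stations)))
    img[k] = stack.max()

print(grid[np.argmax(img)])   # → 17.5, the true source position
```

    In the real method the stack is built from station-pair time-delay likelihoods on 3D travel-time grids, but the alignment-and-stack principle is the same.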

  4. A single dual-stream framework for syntactic computations in music and language.

    PubMed

    Musso, Mariacristina; Weiller, Cornelius; Horn, Andreas; Glauche, Volkmer; Umarova, Roza; Hennig, Jürgen; Schneider, Albrecht; Rijntjes, Michel

    2015-08-15

    This study is the first to compare in the same subjects the specific spatial distribution and the functional and anatomical connectivity of the neuronal resources that activate and integrate syntactic representations during music and language processing. Combining functional magnetic resonance imaging with functional connectivity and diffusion tensor imaging-based probabilistic tractography, we examined the brain network involved in the recognition and integration of words and chords that were not hierarchically related to the preceding syntax; that is, those deviating from the universal principles of grammar and tonal relatedness. This kind of syntactic processing in both domains was found to rely on a shared network in the left hemisphere centered on the inferior part of the inferior frontal gyrus (IFG), including pars opercularis and pars triangularis, and on dorsal and ventral long association tracts connecting this brain area with temporo-parietal regions. Language processing utilized some adjacent left hemispheric IFG and middle temporal regions more than music processing, and music processing also involved right hemisphere regions not activated in language processing. Our data indicate that a dual-stream system with dorsal and ventral long association tracts centered on a functionally and structurally highly differentiated left IFG is pivotal for domain-general syntactic competence over a broad range of elements including words and chords. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. The Application of the Montage Image Mosaic Engine To The Visualization Of Astronomical Images

    NASA Astrophysics Data System (ADS)

    Berriman, G. Bruce; Good, J. C.

    2017-05-01

    The Montage Image Mosaic Engine was designed as a scalable toolkit, written in C for performance and portability across *nix platforms, that assembles FITS images into mosaics. This code is freely available and has been widely used in the astronomy and IT communities for research, product generation, and for developing next-generation cyber-infrastructure. Recently, it has begun finding applicability in the field of visualization. This development has come about because the toolkit design allows easy integration into scalable systems that process data for subsequent visualization in a browser or client. The toolkit includes a visualization tool suitable for automation and for integration into Python: mViewer creates, with a single command, complex multi-color images overlaid with coordinate displays, labels, and observation footprints, and includes an adaptive image histogram equalization method that preserves the structure of a stretched image over its dynamic range. The Montage toolkit contains functionality originally developed to support the creation and management of mosaics, but which also offers value to visualization: a background rectification algorithm that reveals the faint structure in an image, and tools for creating cutout and downsampled versions of large images. Version 5 of Montage offers support for visualizing data written in the HEALPix sky-tessellation scheme, and functionality for processing and organizing images to comply with the TOAST sky-tessellation scheme required for consumption by the World Wide Telescope (WWT). Four online tutorials allow readers to reproduce and extend all the visualizations presented in this paper.
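    A generic histogram-equalization stretch, in the spirit of the method mentioned above, can be sketched as follows. This is a plain global equalization for illustration only; mViewer's adaptive, structure-preserving variant is more involved:

```python
import numpy as np

def hist_eq_stretch(img, nbins=4096):
    """Map pixel values through the image's own cumulative histogram so
    the stretched output spans [0, 1] (generic sketch, not Montage's code)."""
    flat = img.ravel()
    hist, edges = np.histogram(flat, bins=nbins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                      # normalize CDF to [0, 1]
    return np.interp(flat, edges[1:], cdf).reshape(img.shape)

rng = np.random.default_rng(0)
img = rng.exponential(scale=10.0, size=(64, 64))   # sky-like skewed intensities
out = hist_eq_stretch(img)
```

    Because the mapping is the image's own CDF, pixel ordering is preserved while the faint end of the dynamic range is expanded.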

  6. Variable disruption of a syntactic processing network in primary progressive aphasia.

    PubMed

    Wilson, Stephen M; DeMarco, Andrew T; Henry, Maya L; Gesierich, Benno; Babiak, Miranda; Miller, Bruce L; Gorno-Tempini, Maria Luisa

    2016-11-01

    Syntactic processing deficits are highly variable in individuals with primary progressive aphasia. Damage to left inferior frontal cortex has been associated with syntactic deficits in primary progressive aphasia in a number of structural and functional neuroimaging studies. However, a contrasting picture of a broader syntactic network has emerged from neuropsychological studies in other aphasic cohorts, and functional imaging studies in healthy controls. To reconcile these findings, we used functional magnetic resonance imaging to investigate the functional neuroanatomy of syntactic comprehension in 51 individuals with primary progressive aphasia, composed of all clinical variants and a range of degrees of syntactic processing impairment. We used trial-by-trial reaction time as a proxy for syntactic processing load, to determine which regions were modulated by syntactic processing in each patient, and how the set of regions recruited was related to whether syntactic processing was ultimately successful or unsuccessful. Relationships between functional abnormalities and patterns of cortical atrophy were also investigated. We found that the individual degree of syntactic comprehension impairment was predicted by left frontal atrophy, but also by functional disruption of a broader syntactic processing network, comprising left posterior frontal cortex, left posterior temporal cortex, and the left intraparietal sulcus and adjacent regions. These regions were modulated by syntactic processing in healthy controls and in patients with primary progressive aphasia with relatively spared syntax, but they were modulated to a lesser extent or not at all in primary progressive aphasia patients whose syntax was relatively impaired. Our findings suggest that syntactic comprehension deficits in primary progressive aphasia reflect not only structural and functional changes in left frontal cortex, but also disruption of a wider syntactic processing network. © The Author (2016). 
Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved.

  7. The Involvement of Speed-of-Processing in Story Listening in Preschool Children: A Functional and Structural Connectivity Study.

    PubMed

    Horowitz-Kraus, Tzipi; Farah, Rola; DiFrancesco, Mark; Vannest, Jennifer

    2017-02-01

    Story listening in children relies on brain regions supporting speech perception, auditory word recognition, syntax, semantics, and discourse abilities, along with the ability to attend and process information (part of executive functions). Speed-of-processing is an early-developed executive function. We used functional and structural magnetic resonance imaging (MRI) to demonstrate the relationship between story listening and speed-of-processing in preschool-age children. Eighteen participants performed story-listening tasks during MRI scans. Functional and structural connectivity analysis was performed using the speed-of-processing scores as regressors. Activation in the superior frontal gyrus during story listening positively correlated with speed-of-processing scores. This region was functionally connected with the superior temporal gyrus, insula, and hippocampus. Fractional anisotropy in the inferior frontooccipital fasciculus, which connects the superior frontal and temporal gyri, was positively correlated with speed-of-processing scores. Our results suggest that speed-of-processing skills in preschool-age children are reflected in functional activation and connectivity during story listening and may act as a biomarker for future academic abilities. Georg Thieme Verlag KG Stuttgart · New York.

  8. Brain regions sensitive to the face inversion effect: a functional magnetic resonance imaging study in humans.

    PubMed

    Leube, Dirk T; Yoon, Hyo Woon; Rapp, Alexander; Erb, Michael; Grodd, Wolfgang; Bartels, Mathias; Kircher, Tilo T J

    2003-05-22

    Perception of upright faces relies on configural processing; therefore recognition of inverted faces is impaired compared to upright faces. In a functional magnetic resonance imaging experiment we investigated the neural correlate of a face inversion task. Thirteen healthy subjects were presented with an equal number of upright and inverted faces, alternating with a low-level baseline consisting of an upright and an inverted picture of an abstract symbol. Brain activation was calculated for upright minus inverted faces. For this differential contrast, we found a signal change in the right superior temporal sulcus and right insula. Configural properties are processed in a network comprising right superior temporal and insular cortex.

  9. Evaluation of the image quality of telescopes using the star test

    NASA Astrophysics Data System (ADS)

    Vazquez y Monteil, Sergio; Salazar Romero, Marcos A.; Gale, David M.

    2004-10-01

    The point spread function (PSF) or star test is one of the main criteria for assessing the quality of the image formed by a telescope. In a real system the distribution of irradiance in the image of a point source is given by the PSF, a function which is highly sensitive to aberrations. The PSF of a telescope may be determined by measuring the intensity distribution in the image of a star. Alternatively, if we already know the aberrations present in the optical system, then we may use diffraction theory to calculate the function. In this paper we propose a method for determining the wavefront aberrations from the PSF, using genetic algorithms to perform an optimization process starting from the PSF instead of the more traditional method of adjusting an aberration polynomial. We show that this method of phase retrieval is immune to noise-induced errors arising during image acquisition and registration. Some practical results are shown.

  10. Lunar-edge based on-orbit modulation transfer function (MTF) measurement

    NASA Astrophysics Data System (ADS)

    Cheng, Ying; Yi, Hongwei; Liu, Xinlong

    2017-10-01

    Modulation transfer function (MTF) is an important parameter for image quality evaluation of on-orbit optical imaging systems. Various methods have been proposed to determine the MTF of an imaging system, based on images containing point, pulse, and edge features. In this paper, the edge of the moon is used as a high-contrast target to measure the on-orbit MTF of imaging systems based on knife-edge methods. The proposed method is an extension of the ISO 12233 slanted-edge spatial frequency response test, except that the shape of the edge is a circular arc instead of a straight line. In order to obtain more accurate edge locations, and thereby a more authentic edge spread function (ESF), we use a least-squares circular fitting method to fit the lunar edge in the sub-pixel edge detection process. Finally, simulation results show that the MTF value at the Nyquist frequency calculated using our lunar-edge method is reliable and accurate, with an error of less than 2% compared with the theoretical MTF value.
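    The knife-edge pipeline the method extends (measure the ESF, differentiate it to the LSF, Fourier-transform to the MTF) can be sketched on a synthetic one-dimensional edge blurred by a Gaussian, for which the theoretical MTF is known in closed form:

```python
import numpy as np

x = np.arange(-64, 64) + 0.5
sigma = 1.5                                  # Gaussian blur of the "optics"
# ESF of a step edge blurred by a Gaussian: its derivative is the Gaussian LSF
lsf = np.exp(-0.5 * (x / sigma) ** 2)
esf = np.cumsum(lsf)
esf /= esf[-1]

lsf_est = np.gradient(esf)                   # numerical derivative of the ESF
mtf = np.abs(np.fft.rfft(lsf_est))
mtf /= mtf[0]                                # normalize to 1 at zero frequency

f = np.fft.rfftfreq(len(x), d=1.0)           # cycles per pixel
mtf_theory = np.exp(-2 * (np.pi * f * sigma) ** 2)   # FT of the Gaussian LSF
print(np.max(np.abs(mtf[:10] - mtf_theory[:10])))    # small discretization error
```

    The lunar-edge method replaces the straight edge here with a least-squares circular fit to the limb before binning the ESF.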

  11. NASA End-to-End Data System /NEEDS/ information adaptive system - Performing image processing onboard the spacecraft

    NASA Technical Reports Server (NTRS)

    Kelly, W. L.; Howle, W. M.; Meredith, B. D.

    1980-01-01

    The Information Adaptive System (IAS) is an element of the NASA End-to-End Data System (NEEDS) Phase II and is focused toward onboard image processing. Since the IAS is a data preprocessing system which is closely coupled to the sensor system, it serves as a first step in providing a 'smart' imaging sensor. Some of the functions planned for the IAS include sensor response nonuniformity correction, geometric correction, data set selection, data formatting, packetization, and adaptive system control. The inclusion of these sensor data preprocessing functions onboard the spacecraft will significantly improve the extraction of information from the sensor data in a timely and cost-effective manner and provide the opportunity to design sensor systems which can be reconfigured in near real time for optimum performance. The purpose of this paper is to present the preliminary design of the IAS and the plans for its development.
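    Sensor response nonuniformity correction, the first preprocessing function listed, is commonly done with a two-point (gain/offset) calibration. A noise-free sketch follows; the calibration levels, array size, and response model are illustrative, not the IAS design:

```python
import numpy as np

rng = np.random.default_rng(1)
gain = rng.uniform(0.8, 1.2, size=(32, 32))     # per-detector responsivity
offset = rng.uniform(-5.0, 5.0, size=(32, 32))  # per-detector dark offset

def sensor(radiance):
    return gain * radiance + offset             # raw, nonuniform response

# Calibrate with two known uniform radiance levels
low, high = sensor(10.0), sensor(100.0)
g_est = (high - low) / (100.0 - 10.0)
o_est = low - g_est * 10.0

def correct(raw):
    return (raw - o_est) / g_est                # recovered radiance

scene = rng.uniform(20.0, 80.0, size=(32, 32))
recovered = correct(sensor(scene))
print(np.max(np.abs(recovered - scene)))        # ~0 in this noise-free model
```

    On real detectors, noise and response nonlinearity limit how exactly the two-point model recovers the scene.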

  12. Functional imaging of conditioned aversive emotional responses in antisocial personality disorder.

    PubMed

    Schneider, F; Habel, U; Kessler, C; Posse, S; Grodd, W; Müller-Gärtner, H W

    2000-01-01

    Individuals with antisocial personality disorder (n = 12) and healthy controls (n = 12) were examined for cerebral regional activation involved in the processing of negative affect. A differential aversive classical conditioning paradigm was applied with odors as unconditioned stimuli and faces as conditioned stimuli. Functional magnetic resonance imaging (fMRI) based on echo-planar imaging was used while cerebral activity was studied during habituation, acquisition, and extinction. Individually defined cerebral regions were analyzed. Both groups indicated behavioral conditioning following subjective ratings of emotional valence to conditioned stimuli. Differential effects were found during acquisition in the amygdala and dorsolateral prefrontal cortex. Controls showed signal decreases, patients signal increases. These preliminary results revealed unexpected signal increases in cortical/subcortical areas of patients. The increases may result from an additional effort put in by these individuals to form negative emotional associations, a pattern of processing that may correspond to their characteristic deviant emotional behavior. Copyright 2000 S. Karger AG, Basel.

  13. Cardiovascular Imaging and Image Processing: Theory and Practice - 1975

    NASA Technical Reports Server (NTRS)

    Harrison, Donald C. (Editor); Sandler, Harold (Editor); Miller, Harry A. (Editor); Hood, Manley J. (Editor); Purser, Paul E. (Editor); Schmidt, Gene (Editor)

    1975-01-01

    Ultrasonography was examined in regard to the developmental highlights and present applications of cardiac ultrasound. Doppler ultrasonic techniques and the technology of miniature acoustic element arrays were reported. X-ray angiography was discussed with special consideration of quantitative three-dimensional dynamic imaging of structure and function of the cardiopulmonary and circulatory systems in all regions of the body. Nuclear cardiography and scintigraphy, three-dimensional imaging of the myocardium with isotopes, and the commercialization of the echocardioscope were studied.

  14. Method of simulation and visualization of FDG metabolism based on VHP image

    NASA Astrophysics Data System (ADS)

    Cui, Yunfeng; Bai, Jing

    2005-04-01

    FDG ([18F] 2-fluoro-2-deoxy-D-glucose) is the typical tracer used in clinical PET (positron emission tomography) studies. FDG-PET is an important imaging tool for the early diagnosis and treatment of malignant tumors and functional disease. The main purpose of this work is to propose a method that represents FDG metabolism in the human body through dynamic simulation and visualization of the 18F distribution process, based on the segmented VHP (Visible Human Project) image dataset. First, the plasma time-activity curve (PTAC) and the tissue time-activity curves (TTACs) are obtained from previous studies and the literature. According to the obtained PTAC and TTACs, a set of corresponding values is assigned to the segmented VHP image; thus a set of dynamic images is derived that shows the 18F distribution in the tissues of interest for the predetermined sampling schedule. Finally, the simulated FDG distribution images are visualized in 3D and 2D formats, respectively, with the principal interaction functions incorporated. Compared with original PET images, our visualization result presents higher resolution, owing to the high resolution of the VHP image data, and shows the 18F distribution process dynamically. The results of this work can be used in education and related research, as well as a tool for PET operators to design their PET experiment programs.
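    One common way to generate a TTAC from a PTAC, in the spirit of the simulation described, is a one-tissue compartment model integrated numerically. The rate constants and input-curve shape below are illustrative assumptions, not the paper's values:

```python
import numpy as np

# One-tissue compartment model: dC_t/dt = K1*C_p(t) - k2*C_t(t),
# where C_p is the plasma curve (PTAC) and C_t the tissue curve (TTAC).
dt = 1.0                                   # time step (s)
t = np.arange(0, 3600, dt)
ptac = 50.0 * t * np.exp(-t / 60.0)        # gamma-variate-like plasma input

K1, k2 = 0.1, 0.01                         # illustrative rate constants (1/s)
ttac = np.zeros_like(t)
for i in range(1, len(t)):                 # explicit Euler integration
    ttac[i] = ttac[i - 1] + dt * (K1 * ptac[i - 1] - k2 * ttac[i - 1])
```

    Assigning such per-tissue curves to the segmented voxels at each sampling time yields the dynamic image sequence described above.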

  15. Quantum image median filtering in the spatial domain

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Liu, Xiande; Xiao, Hong

    2018-03-01

    Spatial filtering is one principal tool used in image processing for a broad spectrum of applications. Median filtering has become a prominent representative of spatial filtering because of its excellent performance in noise reduction. Although filtering of quantum images in the frequency domain has been described in the literature, and there is a one-to-one correspondence between linear spatial filters and filters in the frequency domain, median filtering is a nonlinear process that cannot be achieved in the frequency domain. We therefore investigated the spatial filtering of quantum images, focusing on the design method of the quantum median filter and its applications in image de-noising. To this end, we first presented the quantum circuits for three basic modules (i.e., Cycle Shift, Comparator, and Swap), and then designed two composite modules (i.e., Sort and Median Calculation). We then constructed a complete quantum circuit that implements the median filtering task, and we present the results of several simulation experiments on grayscale images with different noise patterns. Although the experimental results show that the proposed scheme has almost the same noise-suppression capacity as its classical counterpart, the complexity analysis shows that the proposed scheme reduces the computational complexity of the classical median filter from an exponential function of the image size n to a second-order polynomial function of n, speeding up the classical method.
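    For reference, the classical counterpart that the quantum circuit reproduces is straightforward; a plain 3x3 median filter sketch (edge handling by reflection padding is our choice here):

```python
import numpy as np

def median_filter3x3(img):
    """Classical 3x3 median filter with reflection padding at the borders."""
    padded = np.pad(img, 1, mode="reflect")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

img = np.full((8, 8), 100.0)
img[4, 4] = 255.0                    # isolated salt-noise pixel
print(median_filter3x3(img)[4, 4])   # → 100.0, the impulse is removed
```

    The nonlinearity is visible here: the output depends on sorting the neighborhood, not on any convolution, which is why no frequency-domain equivalent exists.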

  16. Low Cost Embedded Stereo System for Underwater Surveys

    NASA Astrophysics Data System (ADS)

    Nawaf, M. M.; Boï, J.-M.; Merad, D.; Royer, J.-P.; Drap, P.

    2017-11-01

    This paper provides details of both the hardware and software design and realization of a hand-held embedded stereo system for underwater imaging. The designed system can run most image-processing techniques smoothly in real time. The developed functions provide direct visual feedback on the quality of the captured images, which helps in taking appropriate actions regarding movement speed and lighting conditions. The proposed functionalities can be easily customized or upgraded, and new functions can be easily added thanks to the available supported libraries. Furthermore, by connecting the designed system to a more powerful computer, real-time visual odometry can run on the captured images to provide live navigation and a site coverage map. We use a visual odometry method adapted to systems with low computational resources and long autonomy requirements. The system was tested in a real context and demonstrated its robustness and promising prospects.

  17. Reliable clarity automatic-evaluation method for optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Qin, Bangyong; Shang, Ren; Li, Shengyang; Hei, Baoqin; Liu, Zhiwen

    2015-10-01

    Image clarity, which reflects the degree of sharpness at the edges of objects in an image, is an important quality-evaluation index for optical remote sensing images. Scholars at home and abroad have done a great deal of work on the estimation of image clarity. At present, common clarity-estimation methods for digital images include frequency-domain function methods, statistical parametric methods, gradient function methods, and edge acutance methods. The frequency-domain function method is an accurate clarity measure, but its calculation process is complicated and cannot be carried out automatically. Statistical parametric methods and gradient function methods are both sensitive to image clarity, but their results are easily affected by the complexity of the image content. The edge acutance method is an effective approach to clarity estimation, but it requires picking out the edges manually. Due to these limits in accuracy, consistency, or automation, the existing methods are not well suited to quality evaluation of optical remote sensing images. In this article, a new clarity-evaluation method based on the principle of the edge acutance algorithm is proposed. In the new method, an edge detection algorithm and a gradient search algorithm are adopted to automatically search for object edges in images, and the calculation algorithm for edge sharpness has been improved. The new method has been tested on several groups of optical remote sensing images. Compared with the existing automatic evaluation methods, the new method performs better in both accuracy and consistency. Thus, the new method is an effective clarity-evaluation method for optical remote sensing images.
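    A minimal gradient-function clarity measure, one of the method families surveyed above, can be sketched as follows (a simple Tenengrad-like score on central differences; the new method's improved edge-acutance algorithm is not reproduced here):

```python
import numpy as np

def tenengrad(img):
    """Gradient-based clarity score: mean squared gradient magnitude."""
    gx = img[:, 2:] - img[:, :-2]        # central differences in x
    gy = img[2:, :] - img[:-2, :]        # central differences in y
    return np.mean(gx[1:-1, :] ** 2 + gy[:, 1:-1] ** 2)

rng = np.random.default_rng(0)
sharp = rng.uniform(0, 1, size=(64, 64))
# Crude 3x3 box blur built from circular shifts, as a degradation
k = np.ones((3, 3)) / 9.0
blurred = sum(np.roll(np.roll(sharp, di, 0), dj, 1) * k[di + 1, dj + 1]
              for di in (-1, 0, 1) for dj in (-1, 0, 1))
print(tenengrad(sharp) > tenengrad(blurred))   # → True: blur lowers the score
```

    The abstract's criticism applies to exactly this kind of score: it tracks sharpness but also rises with scene complexity, which motivates the edge-based alternative.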

  18. Leading Marines in a Digital World

    DTIC Science & Technology

    2013-03-01

    [Abstract not extracted; only table-of-contents and acronym-list fragments survive: fMRI (Functional Magnetic Resonance Imaging), LMX (Leader-Member Exchange), MCPP (Marine Corps Planning Process), MRI (Magnetic Resonance Imaging), NCO (Non-Commissioned Officer), OCS (Officer Candidate School), PTSD (Post-traumatic Stress Disorder).]

  19. Adaptive recovery of motion blur point spread function from differently exposed images

    NASA Astrophysics Data System (ADS)

    Albu, Felix; Florea, Corneliu; Drîmbarean, Alexandru; Zamfir, Adrian

    2010-01-01

    Motion of the camera during image capture is a major factor degrading image quality, and many methods for camera-motion removal have been developed. Central to all of them is the correct recovery of what is known as the Point Spread Function (PSF). A popular technique estimates the PSF from a pair of gyroscopic sensors that measure the hand motion. However, errors caused either by the loss of the translational component of the movement or by the limited precision of gyro-sensor measurements prevent a good-quality restored image. To compensate for this, we propose a method that begins with a PSF estimate obtained from two gyro sensors and uses an under-exposed image together with the blurred image to improve it adaptively. The luminance of the under-exposed image is equalized with that of the blurred image, and an initial PSF estimate is generated from the output of the two gyro sensors. The PSF coefficients are then updated using 2D Least Mean Square (LMS) algorithms with a coarse-to-fine approach on a grid of points selected from both images. The refined PSF is used to process the blurred image with known deblurring methods. Our results show that the proposed method yields superior PSF support and coefficient estimates, and that the quality of the restored image is improved compared with the gyro-only approach or with blind image deconvolution.
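    The adaptive refinement step can be sketched with a normalized 2D-LMS update (a simplified stand-in for the paper's coarse-to-fine, grid-based scheme; the function name and parameters are illustrative): the blurred image is treated as the desired output of filtering the luminance-equalized under-exposed image with the PSF.

```python
import numpy as np

def refine_psf_lms(sharp, blurred, h0, mu=0.5, iters=5000, seed=0):
    """Normalized-LMS refinement of PSF coefficients: the blurred image is
    the desired output of filtering the (luminance-equalized) sharp image
    with the PSF; coefficients are nudged to reduce the prediction error."""
    h = h0.astype(float).copy()
    k = h.shape[0] // 2
    H, W = sharp.shape
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        i = int(rng.integers(k, H - k))           # random interior pixel
        j = int(rng.integers(k, W - k))
        patch = sharp[i - k:i + k + 1, j - k:j + k + 1]
        err = blurred[i, j] - np.sum(h * patch)   # prediction error
        h += mu * err * patch / (np.sum(patch**2) + 1e-8)  # NLMS step
    return h

# Synthetic check: blur a random image with a known 3x3 box PSF, then
# recover the PSF starting from a delta kernel.
rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
h_true = np.ones((3, 3)) / 9.0
blurred = sum(h_true[a, b] * np.roll(np.roll(sharp, 1 - a, axis=0), 1 - b, axis=1)
              for a in range(3) for b in range(3))
h0 = np.zeros((3, 3)); h0[1, 1] = 1.0
h_est = refine_psf_lms(sharp, blurred, h0)
print(np.round(h_est, 3))
```

    In the paper the starting kernel comes from the gyro sensors rather than a delta, so the adaptation only has to correct residual errors.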

  20. Large-scale topology and the default mode network in the mouse connectome

    PubMed Central

    Stafford, James M.; Jarrett, Benjamin R.; Miranda-Dominguez, Oscar; Mills, Brian D.; Cain, Nicholas; Mihalas, Stefan; Lahvis, Garet P.; Lattal, K. Matthew; Mitchell, Suzanne H.; David, Stephen V.; Fryer, John D.; Nigg, Joel T.; Fair, Damien A.

    2014-01-01

    Noninvasive functional imaging holds great promise for serving as a translational bridge between human and animal models of various neurological and psychiatric disorders. However, despite a depth of knowledge of the cellular and molecular underpinnings of atypical processes in mouse models, little is known about the large-scale functional architecture measured by functional brain imaging, limiting translation to human conditions. Here, we provide a robust processing pipeline to generate high-resolution, whole-brain resting-state functional connectivity MRI (rs-fcMRI) images in the mouse. Using a mesoscale structural connectome (i.e., an anterograde tracer mapping of axonal projections across the mouse CNS), we show that rs-fcMRI in the mouse has strong structural underpinnings, validating our procedures. We next directly show that large-scale network properties previously identified in primates are present in rodents, although they differ in several ways. Last, we examine the existence of the so-called default mode network (DMN)—a distributed functional brain system identified in primates as being highly important for social cognition and overall brain function and atypically functionally connected across a multitude of disorders. We show the presence of a potential DMN in the mouse brain both structurally and functionally. Together, these studies confirm the presence of basic network properties and functional networks of high translational importance in structural and functional systems in the mouse brain. This work clears the way for an important bridge measurement between human and rodent models, enabling us to make stronger conclusions about how regionally specific cellular and molecular manipulations in mice relate back to humans. PMID:25512496

  1. PSF estimation for defocus blurred image based on quantum back-propagation neural network

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Zhang, Yan; Shao, Xiao-guang; Liu, Ying-hui; Ni, Guoqiang

    2010-11-01

    Images obtained even by an aberration-free system suffer defocus blur due to motion in depth and/or zooming. The precondition for restoring the degraded image is to estimate the point spread function (PSF) of the imaging system as precisely as possible, but it is difficult to identify an analytic PSF model precisely because of the complexity of the degradation process. Inspired by the similarity between the quantum process and the imaging process in probabilistic and statistical terms, a reformed multilayer quantum neural network (QNN) is proposed to estimate the PSF of a defocus-blurred image. Unlike a conventional artificial neural network (ANN), an improved quantum neuron model is used in the hidden layer, which introduces a 2-bit controlled-NOT quantum gate to control the output and adopts two texture and edge features as the input vectors. The supervised back-propagation learning rule is adopted to train the network on training sets drawn from historical images. Test results show that the method offers high precision and strong generalization ability.

  2. Modeling of biologically motivated self-learning equivalent-convolutional recurrent-multilayer neural structures (BLM_SL_EC_RMNS) for image fragments clustering and recognition

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.

    2018-03-01

    The biologically motivated self-learning equivalence-convolutional recurrent multilayer neural structures (BLM_SL_EC_RMNS) for clustering and recognition of image fragments are discussed. We consider these neural structures and their spatial-invariant equivalence models (SIEMs), based on proposed equivalent two-dimensional image-similarity functions and the corresponding matrix-matrix (or tensor) procedures that use continuous-logic and nonlinear-processing operations as primitives. These SIEMs describe the signal processing simply across all training and recognition stages and are suitable for unipolar-coded multilevel signals. The clustering efficiency of such models and their implementation depend on the discriminant properties of the neural elements in the hidden layers; hence the main model and architecture parameters and characteristics depend on the types of nonlinear processing and on the functions used for image comparison or for adaptive-equivalent weighting of input patterns. We show that these SL_EC_RMNSs have several advantages, such as self-learning and self-identification of features and similarity signs of fragments, and the ability to cluster and recognize image fragments efficiently even when they are strongly mutually correlated. The proposed combined learning-recognition clustering method, which accounts for the structural features of fragments, is suitable not only for binary but also for color images, and combines self-learning with the formation of clustered weight matrix-patterns. The model is constructed on the basis of recursive continuous-logic and nonlinear-processing algorithms together with the k-means method or the winner-takes-all (WTA) rule. The experimental results confirmed that fragments with large numbers of elements can be clustered. For the first time, the possibility of generalizing these models to the space-invariant case is shown.
    Experiments with reference images of different dimensions and with fragments of different dimensions were carried out. The experiments, performed in the Mathcad software environment, showed that the proposed method is universal, converges in a small number of iterations, maps easily onto the matrix structure, and confirmed its promise. Understanding the mechanisms of self-learning equivalence-convolutional clustering, the accompanying competitive processes among neurons, and the principles of neural auto-encoding, decoding, and recognition using self-learned cluster patterns is therefore important; these rely on the algorithms and principles of nonlinear processing of two-dimensional spatial image-comparison functions. The experimental results show that such models can be successfully used for auto- and hetero-associative recognition, and can help explain some mechanisms known as the "reinforcement-inhibition concept". We also demonstrate model experiments confirming that nonlinear processing by an equivalence function makes it possible to determine the winning neurons and to adjust the weight matrix. Finally, we show how the obtained results lead to a new, more efficient hardware architecture for SL_EC_RMNS based on matrix-tensor multipliers, and we estimate the parameters and performance of such architectures.

  3. Functional brain response to food images in successful adolescent weight losers compared with normal-weight and overweight controls.

    PubMed

    Jensen, Chad D; Kirwan, C Brock

    2015-03-01

    Research conducted with adults suggests that successful weight losers demonstrate greater activation in brain regions associated with executive control in response to viewing high-energy foods. No previous studies have examined these associations in adolescents. Functional neuroimaging was used to assess brain response to food images among groups of overweight (OW), normal-weight (NW), and successful weight-losing (SWL) adolescents. Eleven SWL, 12 NW, and 11 OW participants underwent functional magnetic resonance imaging while viewing images of high- and low-energy foods. When viewing high-energy food images, SWLs demonstrated greater activation in the dorsolateral prefrontal cortex (DLPFC) compared with OW and NW controls. Compared with NW and SWL groups, OW individuals demonstrated greater activation in the ventral striatum and anterior cingulate in response to food images. Adolescent SWLs demonstrated greater neural activation in the DLPFC compared with OW/NW controls when viewing high-energy food stimuli, which may indicate enhanced executive control. OW individuals' brain responses to food stimuli may indicate greater reward incentive processes than either SWL or NW groups. © 2015 The Obesity Society.

  4. Optical imaging of architecture and function in the living brain sheds new light on cortical mechanisms underlying visual perception.

    PubMed

    Grinvald, A

    1992-01-01

    Long-standing questions about the brain mechanisms underlying perception can finally be resolved by direct visualization of the architecture and function of the mammalian cortex. This advance has been accomplished with the aid of two optical imaging techniques with which one can literally see how the brain functions. The development of this technology required a multi-disciplinary approach integrating brain research with organic chemistry, spectroscopy, biophysics, computer science, optics, and image processing. Beyond the technological ramifications, recent research has shed new light on cortical mechanisms underlying sensory perception. Clinical applications of this technology for precise mapping of the cortical surface of patients during neurosurgery have begun. Below is a brief summary of our own research and a description of the technical specifications of the two optical imaging techniques. Like every technique, optical imaging suffers from severe limitations; here we mostly emphasize some of its advantages relative to the alternative imaging techniques currently in use. The limitations are critically discussed in our recent reviews. For a series of other reviews, see Cohen (1989).

  5. False Memories for Shape Activate the Lateral Occipital Complex

    ERIC Educational Resources Information Center

    Karanian, Jessica M.; Slotnick, Scott D.

    2017-01-01

    Previous functional magnetic resonance imaging evidence has shown that false memories arise from higher-level conscious processing regions rather than lower-level sensory processing regions. In the present study, we assessed whether the lateral occipital complex (LOC)--a lower-level conscious shape processing region--was associated with false…

  6. The Role of Right and Left Parietal Lobes in the Conceptual Processing of Numbers

    ERIC Educational Resources Information Center

    Cappelletti, Marinella; Lee, Hwee Ling; Freeman, Elliot D.; Price, Cathy J.

    2010-01-01

    Neuropsychological and functional imaging studies have associated the conceptual processing of numbers with bilateral parietal regions (including intraparietal sulcus). However, the processes driving these effects remain unclear because both left and right posterior parietal regions are activated by many other conceptual, perceptual, attention,…

  7. Neural networks for data compression and invariant image recognition

    NASA Technical Reports Server (NTRS)

    Gardner, Sheldon

    1989-01-01

    An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF coded images. We describe a 1-D shape function method for coding of scale and rotationally invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.
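    A shape signature of the kind described, reducing a contour to a periodic 1-D waveform whose coefficients are rotation- and scale-invariant, might look like the following sketch (an illustrative centroid-distance variant, not necessarily the authors' exact shape function):

```python
import numpy as np

def shape_signature(contour, n=64):
    """1-D shape function: centroid-to-boundary distance resampled to a
    fixed-length periodic waveform; FFT magnitudes, normalized by the DC
    term, are invariant to rotation and scale (up to sampling error)."""
    pts = np.asarray(contour, float)
    c = pts.mean(axis=0)
    ang = np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0])
    r = np.linalg.norm(pts - c, axis=1)[np.argsort(ang)]
    # resample the periodic distance sequence to a fixed length
    src = np.linspace(0.0, 1.0, len(r), endpoint=False)
    dst = np.linspace(0.0, 1.0, n, endpoint=False)
    r = np.interp(dst, src, r)
    spec = np.abs(np.fft.rfft(r))
    return spec / spec[0]     # scale-normalize by the DC coefficient

# A circle has a constant shape function: only the DC coefficient survives,
# regardless of radius (scale) or position.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
small = np.c_[3 + 2 * np.cos(t), 1 + 2 * np.sin(t)]
big = np.c_[-5 + 7 * np.cos(t), 4 + 7 * np.sin(t)]
print(np.allclose(shape_signature(small), shape_signature(big)))
```

    A fixed-length periodic vector like this is exactly the kind of input the abstract proposes feeding to a neural associative memory.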

  8. Contour-Driven Atlas-Based Segmentation

    PubMed Central

    Wachinger, Christian; Fritscher, Karl; Sharp, Greg; Golland, Polina

    2016-01-01

    We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images. PMID:26068202

  9. Use of multidimensional, multimodal imaging and PACS to support neurological diagnoses

    NASA Astrophysics Data System (ADS)

    Wong, Stephen T. C.; Knowlton, Robert C.; Hoo, Kent S.; Huang, H. K.

    1995-05-01

    Technological advances in brain imaging have revolutionized diagnosis in neurology and neurological surgery. Major imaging techniques include magnetic resonance imaging (MRI) to visualize structural anatomy, positron emission tomography (PET) to image metabolic function and cerebral blood flow, magnetoencephalography (MEG) to localize physiologic current sources, and magnetic resonance spectroscopy (MRS) to measure specific biochemicals. Each of these techniques studies different biomedical aspects of the brain, but an effective means to quantify and correlate the disparate imaging datasets, and thereby improve clinical decision making, has been lacking. This paper describes several techniques developed in a UNIX-based neurodiagnostic workstation to aid the noninvasive presurgical evaluation of epilepsy patients. These techniques include online access to the picture archiving and communication system (PACS) multimedia archive, coregistration of multimodality image datasets, and correlation and quantitation of the structural and functional information contained in the registered images. For illustration, we describe the use of these techniques in a patient case of nonlesional neocortical epilepsy. We also present our future work based on preliminary studies.

  10. Application of Image Analysis for Characterization of Spatial Arrangements of Features in Microstructure

    NASA Technical Reports Server (NTRS)

    Louis, Pascal; Gokhale, Arun M.

    1995-01-01

    A number of microstructural processes are sensitive to the spatial arrangements of features in the microstructure. However, little attention has been given to experimental measurement of the descriptors of microstructural distance distributions, owing to the lack of practically feasible methods. We present a digital image analysis procedure to estimate microstructural distance distributions. The application of the technique is demonstrated via estimation of the K function, the radial distribution function, and the nearest-neighbor distribution function of hollow spherical carbon particulates in a polymer matrix composite, observed in a metallographic section.
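    Two of the descriptors named above have simple empirical estimators once feature centroids have been extracted from the image. A minimal sketch (ignoring the edge corrections a rigorous stereological estimate would need):

```python
import numpy as np

def nearest_neighbor_distances(points):
    """Distance from each feature centroid to its nearest neighbor."""
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # exclude self-distances
    return d.min(axis=1)

def ripley_k(points, r, area):
    """Naive Ripley K estimate at radius r (edge corrections omitted):
    K(r) = (area / n^2) * number of ordered pairs closer than r."""
    pts = np.asarray(points, float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return area * (d < r).sum() / n**2

# Four particles on the corners of a unit square:
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(nearest_neighbor_distances(pts))   # each corner's neighbor is 1 away
print(ripley_k(pts, 1.5, area=1.0))
```

    The empirical nearest-neighbor distribution function is then just the cumulative histogram of these distances, and the radial distribution function is obtained by differentiating the K function with respect to r.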

  11. Technologies for imaging neural activity in large volumes

    PubMed Central

    Ji, Na; Freeman, Jeremy; Smith, Spencer L.

    2017-01-01

    Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Collecting data from individual planes, conventional microscopy cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behaviors. Here, we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits image speed, and aberrations, which restrict the image volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point spread function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for the processing and analysis of volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics, and help elucidate how brain regions work in concert to support behavior. PMID:27571194

  12. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.

    PubMed

    Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images and define a fitness function to measure relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples are presented using simulated, experimental, and patient data collected with the Tissue Sensing Adaptive Radar system under development at the University of Calgary. These examples show how our method can reconstruct accurate images starting from only a broad estimate of the permittivity range.
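    The core idea, sampling the fitness at candidate permittivities and maximizing a polynomial surrogate, can be sketched in one dimension (a much simpler stand-in for the paper's adaptive stochastic-collocation/polynomial-chaos construction; the function name and parameters are illustrative):

```python
import numpy as np

def best_permittivity(fitness, candidates, degree=4):
    """Fit a polynomial surrogate to fitness samples and return the
    permittivity that maximizes it on a fine grid."""
    eps = np.asarray(candidates, float)
    f = np.array([fitness(e) for e in eps])        # expensive evaluations
    coeffs = np.polyfit(eps, f, degree)            # cheap surrogate
    grid = np.linspace(eps.min(), eps.max(), 2001)
    return grid[np.argmax(np.polyval(coeffs, grid))]

# Toy fitness peaking at a "true" effective permittivity of 9:
est = best_permittivity(lambda e: -(e - 9.0)**2, np.linspace(2, 15, 8))
print(est)
```

    The point of the surrogate is that each real fitness evaluation requires a full image reconstruction, so the optimizer should query the polynomial, not the reconstruction pipeline.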

  13. [Research on non-rigid registration of multi-modal medical image based on Demons algorithm].

    PubMed

    Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang

    2014-02-01

    Non-rigid medical image registration is a popular subject in medical image research and has important clinical value. In this paper, we put forward an improved Demons algorithm that combines a gray-level conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then applied the L-BFGS algorithm to optimize the energy function and solve the complex three-dimensional optimization problem, and used multi-scale hierarchical refinement to handle large-deformation registration. The experimental results showed that the proposed algorithm achieved good results for large-deformation, multi-modal, three-dimensional medical image registration.
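    For orientation, the classic single-modality Demons update that the paper builds on can be sketched in 1-D (the paper's gray-level and structure-tensor conservation terms and the L-BFGS optimization are not reproduced here):

```python
import numpy as np

def demons_1d(fixed, moving, iters=300, sigma=2.0):
    """Classic single-modality demons in 1-D: push a displacement field
    along the fixed image's gradient, Gaussian-smoothing the field each
    iteration to regularize it."""
    x = np.arange(len(fixed), dtype=float)
    u = np.zeros_like(x)
    grad_f = np.gradient(fixed)
    t = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    g = np.exp(-t**2 / (2 * sigma**2)); g /= g.sum()   # smoothing kernel
    for _ in range(iters):
        warped = np.interp(x + u, x, moving)           # moving(x + u)
        diff = fixed - warped
        denom = grad_f**2 + diff**2
        force = np.zeros_like(x)                       # demons force
        np.divide(diff * grad_f, denom, out=force, where=denom > 1e-12)
        u = np.convolve(u + force, g, mode="same")     # regularize field
    return u

# Register a Gaussian bump shifted by 4 pixels.
x = np.arange(100, dtype=float)
fixed = np.exp(-(x - 40.0)**2 / 50.0)
moving = np.exp(-(x - 44.0)**2 / 50.0)
u = demons_1d(fixed, moving)
before = np.abs(moving - fixed).mean()
after = np.abs(np.interp(x + u, x, moving) - fixed).mean()
print(before, after)
```

    The paper replaces the intensity-difference term with its multi-modal energy and swaps this fixed-point iteration for L-BFGS, but the smoothed displacement-field structure is the same.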

  14. Wigner analysis of three dimensional pupil with finite lateral aperture

    PubMed Central

    Chen, Hsi-Hsun; Oh, Se Baek; Zhai, Xiaomin; Tsai, Jui-Chang; Cao, Liang-Cai; Barbastathis, George; Luo, Yuan

    2015-01-01

    A three-dimensional (3D) pupil is an optical element, most commonly implemented as a volume hologram, that processes the incident optical field in a 3D fashion. Here we analyze the diffraction properties of a 3D pupil with finite lateral aperture in the 4-f imaging system configuration, using the Wigner Distribution Function (WDF) formulation. Since the 3D imaging pupil is finite in both the lateral and longitudinal directions, the WDF of the volume holographic 4-f imager theoretically predicts distinct Bragg diffraction patterns in phase space. These result in asymmetric profiles of the diffracted coherent point spread function between degenerate diffraction and Bragg diffraction, elucidating the fundamental performance of volume holographic imaging. Experimental measurements are also presented, confirming the theoretical predictions. PMID:25836443

  15. TU-FG-209-11: Validation of a Channelized Hotelling Observer to Optimize Chest Radiography Image Processing for Nodule Detection: A Human Observer Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez, A; Little, K; Chung, J

    Purpose: To validate the use of a Channelized Hotelling Observer (CHO) model for guiding image processing parameter selection and enable improved nodule detection in digital chest radiography. Methods: In a previous study, an anthropomorphic chest phantom was imaged with and without PMMA simulated nodules using a GE Discovery XR656 digital radiography system. The impact of image processing parameters was then explored using a CHO with 10 Laguerre-Gauss channels. In this work, we validate the CHO's trend in nodule detectability as a function of two processing parameters by conducting a signal-known-exactly, multi-reader-multi-case (MRMC) ROC observer study. Five naive readers scored confidence of nodule visualization in 384 images with 50% nodule prevalence. The image backgrounds were regions-of-interest extracted from 6 normal patient scans, and the digitally inserted simulated nodules were obtained from phantom data in previous work. Each patient image was processed with both a near-optimal and a worst-case parameter combination, as determined by the CHO for nodule detection. The same 192 ROIs were used for each image processing method, with 32 randomly selected lung ROIs per patient image. Finally, the MRMC data was analyzed using the freely available iMRMC software of Gallas et al. Results: The image processing parameters which were optimized for the CHO led to a statistically significant improvement (p=0.049) in human observer AUC from 0.78 to 0.86, relative to the image processing implementation which produced the lowest CHO performance. Conclusion: Differences in user-selectable image processing methods on a commercially available digital radiography system were shown to have a marked impact on performance of human observers in the task of lung nodule detection. Further, the effect of processing on humans was similar to the effect on CHO performance. Future work will expand this study to include a wider range of detection/classification tasks and more observers, including experienced chest radiologists.
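    The CHO used to rank the processing parameters can be sketched as follows (an illustrative implementation with Laguerre-Gauss channels; the channel width, image size, and training setup here are assumptions, not the study's values):

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

def lg_channels(size, n_channels, a):
    """Laguerre-Gauss channel images, a common CHO channel set."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    g = 2.0 * np.pi * (x**2 + y**2) / a**2
    chans = []
    for j in range(n_channels):
        c = np.zeros(j + 1); c[j] = 1.0          # select polynomial L_j
        u = (np.sqrt(2.0) / a) * np.exp(-g / 2.0) * lagval(g, c)
        chans.append(u.ravel())
    return np.array(chans)                       # (n_channels, size*size)

def cho_dprime(present, absent, channels):
    """Channelized Hotelling observer detectability index d'."""
    vp = present.reshape(len(present), -1) @ channels.T
    va = absent.reshape(len(absent), -1) @ channels.T
    s = vp.mean(axis=0) - va.mean(axis=0)        # mean channel signal
    K = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
    return float(np.sqrt(s @ np.linalg.solve(K, s)))

# Simulated study: Gaussian "nodule" in white noise.
rng = np.random.default_rng(0)
size, n = 32, 300
yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
signal = np.exp(-(xx**2 + yy**2) / 8.0)
noise_a = rng.normal(size=(n, size, size))
noise_p = rng.normal(size=(n, size, size))
ch = lg_channels(size, 5, a=14.0)
d_strong = cho_dprime(noise_p + signal, noise_a, ch)
d_weak = cho_dprime(noise_p + 0.05 * signal, noise_a, ch)
print(d_strong, d_weak)
```

    Ranking processing parameters then amounts to computing d' for images rendered with each parameter combination and picking the combination with the highest value.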

  16. Realization of a single image haze removal system based on DaVinci DM6467T processor

    NASA Astrophysics Data System (ADS)

    Liu, Zhuang

    2014-10-01

    Video monitoring systems (VMS) are extensively applied in target recognition, traffic management, remote sensing, auto navigation, and national defence. However, a VMS depends strongly on the weather: in foggy weather, the quality of the images it receives is distinctly degraded and its effective range is reduced. Research into enhancing fog-degraded images therefore has high theoretical and practical value. A design scheme for a fog-degraded image enhancement system based on the TI DaVinci processor is presented in this paper. The main function of the system is to capture images from digital cameras and apply image enhancement processing to obtain clear images. The processor used in this system is the dual-core TI DaVinci DM6467T (ARM@500MHz + DSP@1GHz). A MontaVista Linux operating system runs on the ARM subsystem, which handles I/O and application processing; the DSP handles signal processing, and the results are made available to the ARM subsystem in shared memory. Thanks to the DaVinci processor, the system provides image processing capability equivalent to an x86 computer at lower power cost and smaller volume. The results show that the system can process images at 25 frames per second at D1 resolution.

  17. Functional organization of the face-sensitive areas in human occipital-temporal cortex.

    PubMed

    Shao, Hanyu; Weng, Xuchu; He, Sheng

    2017-08-15

    Human occipital-temporal cortex features several areas sensitive to faces, presumably forming the biological substrate for face perception. To date, insights into the functional organization of these regions have been piecemeal, coming from studies that are far from homogeneous with regard to the regions involved, the experimental design, and the data analysis approach. To provide an overall view of the functional organization of the face-sensitive areas, it is necessary to conduct a comprehensive study that taps into the pivotal functional properties of all the face-sensitive areas within the same experimental design and uses multiple data analysis approaches. In this study, we identified the most robustly activated face-sensitive areas in bilateral occipital-temporal cortices (i.e., AFP, aFFA, pFFA, OFA, pcSTS, pSTS) and systematically compared their regionally averaged activation and multivoxel activation patterns to 96 images from 16 object categories, including faces and non-faces. This condition-rich, single-image analysis approach critically samples the functional properties of a brain region, allowing us to test how two basic functional properties, face-category selectivity and face-exemplar sensitivity, are distributed among these regions. Moreover, by examining the correlational structure of neural responses to the 96 images, we characterize their interactions in the greater face-processing network. We found that (1) r-pFFA showed the highest face-category selectivity, followed by l-pFFA, bilateral aFFA and OFA, and then bilateral pcSTS.
In contrast, bilateral AFP and pSTS showed low face-category selectivity; (2) l-aFFA, l-pcSTS and bilateral AFP showed evidence of face-exemplar sensitivity; (3) r-OFA showed high overall response similarities with bilateral LOC and r-pFFA, suggesting it might be a transitional stage between general and face-selective information processing; (4) r-aFFA showed high face-selective response similarity with r-pFFA and r-OFA, indicating it was specifically involved in processing face information. Results also reveal two properties of these face sensitive regions across the two hemispheres: (1) the averaged left intra-hemispheric response similarity for the images was lower than the averaged right intra-hemispheric and the inter-hemispheric response similarity, implying convergence of face processing towards the right hemisphere, and (2) the response similarities between homologous regions in the two hemispheres decreased as information processing proceeded from the early, more posterior, processing stage (OFA), indicating an increasing degree of hemispheric specialization and right hemisphere bias for face information processing. This study contributes to an emerging picture of how faces are processed within the occipital and temporal cortex. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. A polymeric fastener can easily functionalize liposome surfaces with gadolinium for enhanced magnetic resonance imaging.

    PubMed

    Smith, Cartney E; Shkumatov, Artem; Withers, Sarah G; Yang, Binxia; Glockner, James F; Misra, Sanjay; Roy, Edward J; Wong, Chun-Ho; Zimmerman, Steven C; Kong, Hyunjoon

    2013-11-26

    Common methods of loading magnetic resonance imaging (MRI) contrast agents into nanoparticles often suffer from challenges related to particle formation, complex chemical modification/purification steps, and reduced contrast efficiency. This study presents a simple, yet advanced process to address these issues by loading gadolinium, an MRI contrast agent, exclusively on a liposome surface using a polymeric fastener. The fastener, so named for its ability to physically link the two functional components together, consisted of chitosan substituted with diethylenetriaminepentaacetic acid (DTPA) to chelate gadolinium, as well as octadecyl chains to stabilize the modified chitosan on the liposome surface. The assembly strategy, mimicking the mechanisms by which viruses and proteins naturally anchor to a cell, provided greater T1 relaxivity than liposomes loaded with gadolinium in both the interior and outer leaflet. Gadolinium-coated liposomes were ultimately evaluated in vivo using murine ischemia models to highlight the diagnostic capability of the system. Taken together, this process decouples particle assembly and functionalization and, therefore, has considerable potential to enhance imaging quality while alleviating many of the difficulties associated with multifunctional particle fabrication.

  19. A Polymeric Fastener can Easily Functionalize Liposome Surfaces with Gadolinium for Enhanced Magnetic Resonance Imaging

    PubMed Central

    Smith, Cartney E.; Shkumatov, Artem; Withers, Sarah G.; Glockner, James F.; Misra, Sanjay; Roy, Edward J.; Wong, Chun-Ho; Zimmerman, Steven C.; Kong, Hyunjoon

    2013-01-01

    Common methods of loading magnetic resonance imaging (MRI) contrast agents into nanoparticles often suffer from challenges related to particle formation, complex chemical modification/purification steps, and reduced contrast efficiency. This study presents a simple, yet advanced process to address these issues by loading gadolinium, an MRI contrast agent, exclusively on a liposome surface using a polymeric fastener. The fastener, so named for its ability to physically link the two functional components together, consisted of chitosan substituted with diethylenetriaminepentaacetic acid (DTPA) to chelate gadolinium, as well as octadecyl chains to stabilize the modified chitosan on the liposome surface. The assembly strategy, mimicking the mechanisms by which viruses and proteins naturally anchor to a cell, provided greater T1 relaxivity than liposomes loaded with gadolinium in both the interior and outer leaflet. Gadolinium-coated liposomes were ultimately evaluated in vivo using murine ischemia models to highlight the diagnostic capability of the system. Taken together, this process decouples particle assembly and functionalization, and therefore has considerable potential to enhance imaging quality while alleviating many of the difficulties associated with multifunctional particle fabrication. PMID:24083377

  20. Imaging Metals in Brain Tissue by Laser Ablation - Inductively Coupled Plasma - Mass Spectrometry (LA-ICP-MS)

    PubMed Central

    Hare, Dominic J.; Kysenius, Kai; Paul, Bence; Knauer, Beate; Hutchinson, Robert W.; O'Connor, Ciaran; Fryer, Fred; Hennessey, Tom P.; Bush, Ashley I.; Crouch, Peter J.; Doble, Philip A.

    2017-01-01

    Metals are found ubiquitously throughout an organism, with their biological role dictated by both their chemical reactivity and abundance within a specific anatomical region. Within the brain, metals have a highly compartmentalized distribution, depending on the primary function they play within the central nervous system. Imaging the spatial distribution of metals has provided unique insight into the biochemical architecture of the brain, allowing direct correlation between neuroanatomical regions and their known function with regard to metal-dependent processes. In addition, several age-related neurological disorders feature disrupted metal homeostasis, which is often confined to small regions of the brain that are otherwise difficult to analyze. Here, we describe a comprehensive method for quantitatively imaging metals in the mouse brain, using laser ablation - inductively coupled plasma - mass spectrometry (LA-ICP-MS) and specially designed image processing software. Focusing on iron, copper and zinc, which are three of the most abundant and disease-relevant metals within the brain, we describe the essential steps in sample preparation, analysis, quantitative measurements and image processing to produce maps of metal distribution within the low micrometer resolution range. This technique, applicable to any cut tissue section, is capable of demonstrating the highly variable distribution of metals within an organ or system, and can be used to identify changes in metal homeostasis and absolute levels within fine anatomical structures. PMID:28190025
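The quantitative step described above typically maps raw detector counts to absolute concentrations using matrix-matched calibration standards. A minimal per-pixel calibration sketch follows; the linear single-element model and function names are our assumptions, not the authors' software:

```python
import numpy as np

def fit_calibration(std_concs, std_signals):
    """Least-squares calibration line through the standards: signal = m * conc + b."""
    m, b = np.polyfit(std_concs, std_signals, 1)
    return m, b

def quantify_map(signal_map, m, b):
    """Convert a per-pixel counts map (e.g. an Fe intensity image) to concentration units."""
    return (np.asarray(signal_map, float) - b) / m

# Example: three matrix-matched standards, then quantify a 2x2 pixel map.
m, b = fit_calibration([0.0, 10.0, 20.0], [5.0, 105.0, 205.0])
conc = quantify_map([[105.0, 205.0], [5.0, 55.0]], m, b)
```

In practice each pixel would also be normalized to an internal standard before applying the calibration line; that step is omitted here for brevity.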

  1. A functional Magnetic Resonance Imaging study of neurohemodynamic abnormalities during emotion processing in subjects at high risk for schizophrenia

    PubMed Central

    Venkatasubramanian, Ganesan; Puthumana, Dawn Thomas K.; Jayakumar, Peruvumba N.; Gangadhar, B. N.

    2010-01-01

Background: Emotion processing abnormalities are considered among the core deficits in schizophrenia. Subjects at high risk (HR) for schizophrenia also show these deficits. Structural neuroimaging studies examining unaffected relatives at high risk for schizophrenia have demonstrated neuroanatomical abnormalities involving neocortical and subcortical brain regions related to emotion processing. The brain functional correlates of emotion processing in these HR subjects in the context of ecologically valid, real-life dynamic images using functional Magnetic Resonance Imaging (fMRI) have not been examined previously. Aim: To examine the neurohemodynamic abnormalities during emotion processing in unaffected subjects at high risk for schizophrenia in comparison with age-, sex-, handedness- and education-matched healthy controls, using fMRI. Materials and Methods: HR subjects for schizophrenia (n=17) and matched healthy controls (n=16) were examined. The emotion processing of fearful facial expressions was examined using a culturally appropriate and validated tool for Indian subjects. The fMRI was performed in a 1.5-T scanner during an implicit emotion processing paradigm. The fMRI analyses were performed using the Statistical Parametric Mapping 2 (SPM2) software. Results: HR subjects had significantly reduced brain activations in left insula, left medial frontal gyrus, left inferior frontal gyrus, right cingulate gyrus, right precentral gyrus and right inferior parietal lobule. Hypothesis-driven region-of-interest analysis revealed hypoactivation of the right amygdala in HR subjects. Conclusions: Study findings suggest that neurohemodynamic abnormalities involving limbic and frontal cortices could be potential indicators of increased vulnerability toward schizophrenia. The clinical utility of these novel findings in predicting the development of psychosis needs to be evaluated. PMID:21267363

  2. Noninvasive imaging of protein-protein interactions in living organisms.

    PubMed

    Haberkorn, Uwe; Altmann, Annette

    2003-06-01

    Genomic research is expected to generate new types of complex observational data, changing the types of experiments as well as our understanding of biological processes. The investigation and definition of relationships among proteins is essential for understanding the function of each gene and the mechanisms of biological processes that specific genes are involved in. Recently, a study by Paulmurugan et al. demonstrated a tool for in vivo noninvasive imaging of protein-protein interactions and intracellular networks.

  3. IPLIB (Image processing library) user's manual

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.; Monteith, J. H.; Miller, K.

    1985-01-01

IPLIB is a collection of HP FORTRAN 77 subroutines and functions that facilitate the use of a COMTAL image processing system driven by an HP 1000 computer. It is intended for programmers who want to use the HP 1000 to drive the COMTAL Vision One/20 system. It is assumed that the programmer knows HP 1000 FORTRAN 77 or at least one FORTRAN dialect. It is also assumed that the programmer has some familiarity with the COMTAL Vision One/20 system.

  4. A cascade model of information processing and encoding for retinal prosthesis.

    PubMed

    Pei, Zhi-Jun; Gao, Guan-Xin; Hao, Bo; Qiao, Qing-Li; Ai, Hui-Jian

    2016-04-01

Retinal prostheses offer a potential treatment for individuals suffering from photoreceptor degeneration diseases. Establishing biological retinal models and simulating how the retina converts incoming light signals into spike trains that can be properly decoded by the brain is a key issue. Several retinal models have been presented, ranging from structural models inspired by the layered architecture to functional models originating from a set of specific physiological phenomena. However, most of these focus on stimulus image compression, edge detection and reconstruction, and do not generate spike trains corresponding to the visual image. In this study, based on state-of-the-art retinal physiological mechanisms, including effective visual information extraction, static nonlinear rectification of biological systems and neuronal Poisson coding, a cascade model of the retina, comprising the outer plexiform layer for information processing and the inner plexiform layer for information encoding, is put forward that integrates both the anatomic connections and the functional computations of the retina. Using MATLAB, spike trains corresponding to a stimulus image were numerically computed in four steps: linear spatiotemporal filtering, static nonlinear rectification, radial sampling and Poisson spike generation. The simulation results suggest that such a cascade model can recreate the visual information processing and encoding functions of the retina, which is helpful in developing an artificial retina for the retinally blind.
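The four computational steps enumerated in the abstract can be sketched end to end in a few lines. The kernel widths, firing-rate ceiling and sampling geometry below are illustrative assumptions, not the authors' MATLAB model:

```python
import numpy as np

def _gauss1d(sigma):
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def _blur(img, sigma):
    """Separable Gaussian blur (rows, then columns)."""
    k = _gauss1d(sigma)
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, out)

def retina_cascade(image, n_rings=4, cells_per_ring=8, dt=1e-3, duration=0.2, seed=0):
    rng = np.random.default_rng(seed)
    img = np.asarray(image, float)

    # 1. Linear spatial filtering: center-surround difference-of-Gaussians.
    dog = _blur(img, 1.0) - _blur(img, 3.0)

    # 2. Static nonlinear rectification: half-wave rectify, saturate at a peak rate.
    peak_rate = 100.0  # Hz, assumed firing-rate ceiling
    rates = peak_rate * np.tanh(np.clip(dog, 0, None) / (np.abs(dog).max() + 1e-12))

    # 3. Radial sampling: ganglion cells placed on concentric rings about the center.
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = [], []
    for ring in range(1, n_rings + 1):
        rad = ring * min(h, w) / (2 * (n_rings + 1))
        for k in range(cells_per_ring):
            th = 2 * np.pi * k / cells_per_ring
            ys.append(int(round(cy + rad * np.sin(th))))
            xs.append(int(round(cx + rad * np.cos(th))))
    cell_rates = rates[ys, xs]

    # 4. Poisson spike generation: Bernoulli(rate * dt) per time bin.
    n_bins = int(round(duration / dt))
    return (rng.random((len(cell_rates), n_bins)) < cell_rates[:, None] * dt).astype(int)
```

Calling `retina_cascade` on a grayscale image returns one binary spike train per model ganglion cell, mirroring the abstract's filtering, rectification, sampling and encoding pipeline.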

  5. The Java Image Science Toolkit (JIST) for rapid prototyping and publishing of neuroimaging software.

    PubMed

    Lucas, Blake C; Bogovic, John A; Carass, Aaron; Bazin, Pierre-Louis; Prince, Jerry L; Pham, Dzung L; Landman, Bennett A

    2010-03-01

Non-invasive neuroimaging techniques enable extraordinarily sensitive and specific in vivo study of the structure, functional response and connectivity of biological mechanisms. With these advanced methods comes a heavy reliance on computer-based processing, analysis and interpretation. While the neuroimaging community has produced many excellent academic and commercial tool packages, new tools are often required to interpret new modalities and paradigms. Developing custom tools and ensuring interoperability with existing tools is a significant hurdle. To address these limitations, we present a new framework for algorithm development that implicitly ensures tool interoperability, generates graphical user interfaces, provides advanced batch processing tools, and, most importantly, requires minimal additional programming or computational overhead. Java-based rapid prototyping with this system is an efficient and practical approach to evaluate new algorithms since the proposed system ensures that rapidly constructed prototypes are actually fully functional processing modules with support for multiple GUIs, a broad range of file formats, and distributed computation. Herein, we demonstrate MRI image processing with the proposed system for cortical surface extraction in large cross-sectional cohorts, provide a system for fully automated diffusion tensor image analysis, and illustrate how the system can be used as a simulation framework for the development of a new image analysis method. The system is released as open source under the Lesser GNU Public License (LGPL) through the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC).

  6. The Java Image Science Toolkit (JIST) for Rapid Prototyping and Publishing of Neuroimaging Software

    PubMed Central

    Lucas, Blake C.; Bogovic, John A.; Carass, Aaron; Bazin, Pierre-Louis; Prince, Jerry L.; Pham, Dzung

    2010-01-01

Non-invasive neuroimaging techniques enable extraordinarily sensitive and specific in vivo study of the structure, functional response and connectivity of biological mechanisms. With these advanced methods comes a heavy reliance on computer-based processing, analysis and interpretation. While the neuroimaging community has produced many excellent academic and commercial tool packages, new tools are often required to interpret new modalities and paradigms. Developing custom tools and ensuring interoperability with existing tools is a significant hurdle. To address these limitations, we present a new framework for algorithm development that implicitly ensures tool interoperability, generates graphical user interfaces, provides advanced batch processing tools, and, most importantly, requires minimal additional programming or computational overhead. Java-based rapid prototyping with this system is an efficient and practical approach to evaluate new algorithms since the proposed system ensures that rapidly constructed prototypes are actually fully functional processing modules with support for multiple GUIs, a broad range of file formats, and distributed computation. Herein, we demonstrate MRI image processing with the proposed system for cortical surface extraction in large cross-sectional cohorts, provide a system for fully automated diffusion tensor image analysis, and illustrate how the system can be used as a simulation framework for the development of a new image analysis method. The system is released as open source under the Lesser GNU Public License (LGPL) through the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC). PMID:20077162

  7. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. The spatial gradients caused by diffusion can now be assessed in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As a proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.
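The log-normal image likelihood at the heart of this approach is compact to write down: each pixel's logged intensity is modeled as Gaussian about the logged model prediction. A sketch (the function names are ours; the density is the standard log-normal):

```python
import numpy as np

def lognormal_nll(y_obs, y_model, sigma):
    """Negative log-likelihood for multiplicative measurement noise:
    log(y_obs) ~ Normal(log(y_model), sigma^2), applied pixel-wise."""
    y = np.asarray(y_obs, float).ravel()
    m = np.asarray(y_model, float).ravel()
    r = np.log(y) - np.log(m)
    return np.sum(np.log(y * sigma * np.sqrt(2 * np.pi)) + r**2 / (2 * sigma**2))

def profile_over_sigma(y_obs, y_model, sigma_grid):
    """1D profile of the objective along a grid for one parameter (here sigma);
    in the full method the remaining parameters would be re-optimized at each point."""
    return np.array([lognormal_nll(y_obs, y_model, s) for s in sigma_grid])
```

A profile likelihood for any kinetic parameter works the same way: fix it on a grid and re-minimize this objective over the others, then read confidence intervals off the profile.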

  8. Neural mechanisms of the mind, Aristotle, Zadeh, and fMRI.

    PubMed

    Perlovsky, Leonid I

    2010-05-01

Mental processes, including perception, cognition, concepts, instincts, emotions, and the higher cognitive abilities for abstract thinking and music, are considered here within a neural modeling fields (NMF) paradigm. Its fundamental mathematical mechanism is a process "from vague-fuzzy to crisp," called dynamic logic (DL). This paper discusses why this paradigm is mathematically necessary, and relates it to a psychological description of the mind. Surprisingly, the process from "vague to crisp" corresponds to the Aristotelian understanding of mental functioning. Recent functional magnetic resonance imaging (fMRI) measurements have confirmed this process in the neural mechanisms of perception.

  9. Ex-vivo imaging of excised tissue using vital dyes and confocal microscopy

    PubMed Central

    Johnson, Simon; Rabinovitch, Peter

    2012-01-01

    Vital dyes routinely used for staining cultured cells can also be used to stain and image live tissue slices ex-vivo. Staining tissue with vital dyes allows researchers to collect structural and functional data simultaneously and can be used for qualitative or quantitative fluorescent image collection. The protocols presented here are useful for structural and functional analysis of viable properties of cells in intact tissue slices, allowing for the collection of data in a structurally relevant environment. With these protocols, vital dyes can be applied as a research tool to disease processes and properties of tissue not amenable to cell culture based studies. PMID:22752953

  10. MEMS scanning micromirror for optical coherence tomography.

    PubMed

    Strathman, Matthew; Liu, Yunbo; Keeler, Ethan G; Song, Mingli; Baran, Utku; Xi, Jiefeng; Sun, Ming-Ting; Wang, Ruikang; Li, Xingde; Lin, Lih Y

    2015-01-01

    This paper describes an endoscopic-inspired imaging system employing a micro-electromechanical system (MEMS) micromirror scanner to achieve beam scanning for optical coherence tomography (OCT) imaging. Miniaturization of a scanning mirror using MEMS technology can allow a fully functional imaging probe to be contained in a package sufficiently small for utilization in a working channel of a standard gastroesophageal endoscope. This work employs advanced image processing techniques to enhance the images acquired using the MEMS scanner to correct non-idealities in mirror performance. The experimental results demonstrate the effectiveness of the proposed technique.

  11. MEMS scanning micromirror for optical coherence tomography

    PubMed Central

    Strathman, Matthew; Liu, Yunbo; Keeler, Ethan G.; Song, Mingli; Baran, Utku; Xi, Jiefeng; Sun, Ming-Ting; Wang, Ruikang; Li, Xingde; Lin, Lih Y.

    2014-01-01

    This paper describes an endoscopic-inspired imaging system employing a micro-electromechanical system (MEMS) micromirror scanner to achieve beam scanning for optical coherence tomography (OCT) imaging. Miniaturization of a scanning mirror using MEMS technology can allow a fully functional imaging probe to be contained in a package sufficiently small for utilization in a working channel of a standard gastroesophageal endoscope. This work employs advanced image processing techniques to enhance the images acquired using the MEMS scanner to correct non-idealities in mirror performance. The experimental results demonstrate the effectiveness of the proposed technique. PMID:25657887

  12. Optical design and testing: introduction.

    PubMed

    Liang, Chao-Wen; Koshel, John; Sasian, Jose; Breault, Robert; Wang, Yongtian; Fang, Yi Chin

    2014-10-10

    Optical design and testing has numerous applications in industrial, military, consumer, and medical settings. Assembling a complete imaging or nonimage optical system may require the integration of optics, mechatronics, lighting technology, optimization, ray tracing, aberration analysis, image processing, tolerance compensation, and display rendering. This issue features original research ranging from the optical design of image and nonimage optical stimuli for human perception, optics applications, bio-optics applications, 3D display, solar energy system, opto-mechatronics to novel imaging or nonimage modalities in visible and infrared spectral imaging, modulation transfer function measurement, and innovative interferometry.

  13. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

Camera calibration plays a crucial role in the 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration that uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters from a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimate of the camera parameters is refined by a non-linear function minimization process. The proposed method obtains a more accurate calibration result, which is also more physically meaningful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
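The back-projection idea can be demonstrated on synthetic data: project known board points through a pinhole model, then score candidate parameters by the 3D error of image points back-projected onto the board plane (Z = 0). The fixed pose, known principal point, and 1D search over the focal length below are illustrative assumptions standing in for the paper's full non-linear refinement:

```python
import numpy as np

def project(pts3d, K, R, t):
    """Forward imaging process: world points (Nx3) -> pixel coordinates (Nx2)."""
    cam = pts3d @ R.T + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def backproject_to_plane(uv, K, R, t):
    """Back projection process: intersect pixel rays with the board plane Z = 0."""
    rays = np.hstack([uv, np.ones((len(uv), 1))]) @ np.linalg.inv(K).T
    origin = -R.T @ t            # camera center in world coordinates
    dirs = rays @ R              # ray directions rotated into world coordinates
    s = -origin[2] / dirs[:, 2]  # ray parameter where each ray hits Z = 0
    return origin + s[:, None] * dirs

def bpp_error(f, uv, board_pts, R, t, c=(320.0, 240.0)):
    """Mean 3D reconstruction error on the board for a candidate focal length f."""
    K = np.array([[f, 0.0, c[0]], [0.0, f, c[1]], [0.0, 0.0, 1.0]])
    return np.linalg.norm(backproject_to_plane(uv, K, R, t) - board_pts, axis=1).mean()
```

Minimizing `bpp_error` over `f` (by grid search or any 1D optimizer) recovers the focal length that makes the back-projected points land on the true board coordinates, which is the 3D criterion the paper optimizes instead of 2D reprojection error.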

  14. Fetal functional imaging portrays heterogeneous development of emerging human brain networks

    PubMed Central

    Jakab, András; Schwartz, Ernst; Kasprian, Gregor; Gruber, Gerlinde M.; Prayer, Daniela; Schöpf, Veronika; Langs, Georg

    2014-01-01

The functional connectivity architecture of the adult human brain enables complex cognitive processes, and exhibits a remarkably complex structure shared across individuals. We are only beginning to understand its heterogeneous structure, ranging from a strongly hierarchical organization in sensorimotor areas to widely distributed networks in areas such as the parieto-frontal cortex. Our study relied on the functional magnetic resonance imaging (fMRI) data of 32 fetuses with no detectable morphological abnormalities. After adapting functional magnetic resonance acquisition, motion correction, and nuisance signal reduction procedures of resting-state functional data analysis to fetuses, we extracted neural activity information for major cortical and subcortical structures. Increasing regional functional connectivity from the 21st to the 38th gestational week (GW) was observed in resting fMRI networks using a network-based statistical inference approach. The overall connectivity network, short-range, and interhemispheric connections showed a sigmoid expansion curve peaking at 26–29 GW. In contrast, long-range connections exhibited a linear increase with no period of peaking development. The region-specific increase of functional signal synchrony followed a sequence of occipital (peak: 24.8 GW), temporal (peak: 26 GW), frontal (peak: 26.4 GW), and parietal expansion (peak: 27.5 GW). We successfully adapted functional neuroimaging and image post-processing approaches to correlate macroscopic-scale activations in the fetal brain with gestational age. This in vivo study reflects the fact that the mid-fetal period hosts events that cause the architecture of the brain circuitry to mature, which presumably manifests in the increasing strength of intra- and interhemispheric functional macro-connectivity. PMID:25374531

  15. Fetal functional imaging portrays heterogeneous development of emerging human brain networks.

    PubMed

    Jakab, András; Schwartz, Ernst; Kasprian, Gregor; Gruber, Gerlinde M; Prayer, Daniela; Schöpf, Veronika; Langs, Georg

    2014-01-01

The functional connectivity architecture of the adult human brain enables complex cognitive processes, and exhibits a remarkably complex structure shared across individuals. We are only beginning to understand its heterogeneous structure, ranging from a strongly hierarchical organization in sensorimotor areas to widely distributed networks in areas such as the parieto-frontal cortex. Our study relied on the functional magnetic resonance imaging (fMRI) data of 32 fetuses with no detectable morphological abnormalities. After adapting functional magnetic resonance acquisition, motion correction, and nuisance signal reduction procedures of resting-state functional data analysis to fetuses, we extracted neural activity information for major cortical and subcortical structures. Increasing regional functional connectivity from the 21st to the 38th gestational week (GW) was observed in resting fMRI networks using a network-based statistical inference approach. The overall connectivity network, short-range, and interhemispheric connections showed a sigmoid expansion curve peaking at 26-29 GW. In contrast, long-range connections exhibited a linear increase with no period of peaking development. The region-specific increase of functional signal synchrony followed a sequence of occipital (peak: 24.8 GW), temporal (peak: 26 GW), frontal (peak: 26.4 GW), and parietal expansion (peak: 27.5 GW). We successfully adapted functional neuroimaging and image post-processing approaches to correlate macroscopic-scale activations in the fetal brain with gestational age. This in vivo study reflects the fact that the mid-fetal period hosts events that cause the architecture of the brain circuitry to mature, which presumably manifests in the increasing strength of intra- and interhemispheric functional macro-connectivity.
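The sigmoid expansion curves reported above amount to fitting a logistic function of gestational age and reading off its inflection point as the "peak" week. A minimal grid-search fit; the parameterization and grids are our assumptions, not the study's statistical pipeline:

```python
import numpy as np

def sigmoid(t, a, k, t0):
    """Logistic expansion curve; t0 is the inflection (peak-growth) week."""
    return a / (1.0 + np.exp(-k * (t - t0)))

def fit_sigmoid(t, y, t0_grid, k_grid):
    """Grid search over (k, t0); the amplitude a has a closed-form LS solution
    for each candidate shape, so only two parameters are searched."""
    best = (np.inf, None)
    for t0 in t0_grid:
        for k in k_grid:
            s = sigmoid(t, 1.0, k, t0)
            a = (y @ s) / (s @ s)          # least-squares amplitude given the shape
            err = np.sum((y - a * s) ** 2)
            if err < best[0]:
                best = (err, (a, k, t0))
    return best[1]
```

Fitting connectivity strength against gestational age in weeks and reporting the recovered `t0` reproduces the kind of "peak: 27.5 GW" summary quoted in the abstract.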

  16. Altered amygdala-prefrontal connectivity during emotion perception in schizophrenia.

    PubMed

    Bjorkquist, Olivia A; Olsen, Emily K; Nelson, Brady D; Herbener, Ellen S

    2016-08-01

    Individuals with schizophrenia evidence impaired emotional functioning. Abnormal amygdala activity has been identified as an etiological factor underlying affective impairment in this population, but the exact nature remains unclear. The current study utilized psychophysiological interaction analyses to examine functional connectivity between the amygdala and medial prefrontal cortex (mPFC) during an emotion perception task. Participants with schizophrenia (SZ) and healthy controls (HC) viewed and rated positive, negative, and neutral images while undergoing functional neuroimaging. Results revealed a significant group difference in right amygdala-mPFC connectivity during perception of negative versus neutral images. Specifically, HC participants demonstrated positive functional coupling between the amygdala and mPFC, consistent with co-active processing of salient information. In contrast, SZ participants evidenced negative functional coupling, consistent with top-down inhibition of the amygdala by the mPFC. A significant positive correlation between connectivity strength during negative image perception and clinician-rated social functioning was also observed in SZ participants, such that weaker right amygdala-mPFC coupling during negative compared to neutral image perception was associated with poorer social functioning. Overall, results suggest that emotional dysfunction and associated deficits in functional outcome in schizophrenia may relate to abnormal interactions between the amygdala and mPFC during perception of emotional stimuli. This study adds to the growing literature on abnormal functional connections in schizophrenia and supports the functional disconnection hypothesis of schizophrenia. Copyright © 2016 Elsevier B.V. All rights reserved.
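A psychophysiological interaction (PPI) analysis of the kind used here regresses each target region's timecourse on the seed signal, the task regressor, and their product; the interaction coefficient indexes how connectivity changes with task condition. A schematic version (the column order and demeaning convention are assumptions, not the study's exact model):

```python
import numpy as np

def ppi_design(seed, task):
    """Design matrix: intercept, physiological (seed), psychological (task),
    and their interaction (the PPI regressor)."""
    seed = seed - seed.mean()
    return np.column_stack([np.ones_like(seed), seed, task, seed * task])

def ppi_interaction_beta(target, seed, task):
    """Least-squares GLM fit; returns the coefficient on the PPI regressor."""
    beta, *_ = np.linalg.lstsq(ppi_design(seed, task), target, rcond=None)
    return beta[3]
```

A positive interaction beta corresponds to the positive amygdala-mPFC coupling seen in controls during negative images; a negative beta corresponds to the inverted coupling reported for the schizophrenia group.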

  17. Initial evaluation of discrete orthogonal basis reconstruction of ECT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, E.B.; Donohue, K.D.

    1996-12-31

Discrete orthogonal basis restoration (DOBR) is a linear, non-iterative, and robust method for solving inverse problems for systems characterized by shift-variant transfer functions. This simulation study evaluates the feasibility of using DOBR for reconstructing emission computed tomographic (ECT) images. The imaging system model uses typical SPECT parameters and incorporates the effects of attenuation, spatially-variant PSF, and Poisson noise in the projection process. Sample reconstructions and statistical error analyses for a class of digital phantoms compare the DOBR performance for Hartley and Walsh basis functions. Test results confirm that DOBR with either basis set produces images with good statistical properties. No problems were encountered with reconstruction instability. The flexibility of the DOBR method and its consistent performance warrant further investigation of DOBR as a means of ECT image reconstruction.

  18. EMAN2: an extensible image processing suite for electron microscopy.

    PubMed

    Tang, Guang; Peng, Liwei; Baldwin, Philip R; Mann, Deepinder S; Jiang, Wen; Rees, Ian; Ludtke, Steven J

    2007-01-01

    EMAN is a scientific image processing package with a particular focus on single particle reconstruction from transmission electron microscopy (TEM) images. It was first released in 1999, and new versions have been released typically 2-3 times each year since that time. EMAN2 has been under development for the last two years, with a completely refactored image processing library, and a wide range of features to make it much more flexible and extensible than EMAN1. The user-level programs are better documented, more straightforward to use, and written in the Python scripting language, so advanced users can modify the programs' behavior without any recompilation. A completely rewritten 3D transformation class simplifies translation between Euler angle standards and symmetry conventions. The core C++ library has over 500 functions for image processing and associated tasks, and it is modular with introspection capabilities, so programmers can add new algorithms with minimal effort and programs can incorporate new capabilities automatically. Finally, a flexible new parallelism system has been designed to address the shortcomings in the rigid system in EMAN1.

  19. Implementation of an RBF neural network on embedded systems: real-time face tracking and identity verification.

    PubMed

    Yang, Fan; Paindavoine, M

    2003-01-01

This paper describes a real-time vision system that allows us to localize faces in video sequences and verify their identity. These processes are image processing techniques based on the radial basis function (RBF) neural network approach. The robustness of this system has been evaluated quantitatively on eight video sequences. We have adapted our model for an application of face recognition using the Olivetti Research Laboratory (ORL), Cambridge, UK, database so as to compare the performance against other systems. We also describe three hardware implementations of our model on embedded systems based on the field programmable gate array (FPGA), zero instruction set computer (ZISC) chips, and digital signal processor (DSP) TMS320C62, respectively. We analyze the algorithm complexity and present results of hardware implementations in terms of the resources used and processing speed. The success rates of face tracking and identity verification are 92% (FPGA), 85% (ZISC), and 98.2% (DSP), respectively. For the three embedded systems, the processing speeds for an image size of 288 × 352 are 14 images/s, 25 images/s, and 4.8 images/s, respectively.
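The RBF network underlying the localization and verification stages is a two-layer model: Gaussian hidden units followed by a linear readout. A minimal version trained by least squares; the centers, width, and tiny demo problem are illustrative assumptions, not the paper's face model:

```python
import numpy as np

class RBFNet:
    """Minimal RBF network: Gaussian hidden units plus a linear output layer
    trained by least squares. A sketch, not the paper's implementation."""
    def __init__(self, centers, sigma):
        self.centers = np.asarray(centers, float)
        self.sigma = float(sigma)

    def _phi(self, X):
        # Hidden activations: Gaussian of squared distance to each center.
        d2 = ((np.asarray(X, float)[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, X, y):
        # Linear output weights via least squares on the hidden activations.
        self.w, *_ = np.linalg.lstsq(self._phi(X), np.asarray(y, float), rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.w
```

With one center per face prototype (or image patch), `predict` scores how close a candidate window is to each stored identity; thresholding that score gives the accept/reject decision used in verification.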

  20. Preservation of visual cortical function following retinal pigment epithelium transplantation in the RCS rat using optical imaging techniques.

    PubMed

    Gias, Carlos; Jones, Myles; Keegan, David; Adamson, Peter; Greenwood, John; Lund, Ray; Martindale, John; Johnston, David; Berwick, Jason; Mayhew, John; Coffey, Peter

    2007-04-01

    The aim of this study was to determine the extent of cortical functional preservation following retinal pigment epithelium (RPE) transplantation in the Royal College of Surgeons (RCS) rat using single-wavelength optical imaging and spectroscopy. The cortical responses to visual stimulation in transplanted rats at 6 months post-transplantation were compared with those from age-matched untreated dystrophic and non-dystrophic rats. Our results show that cortical responses were evoked in non-dystrophic rats to both luminance changes and pattern stimulation, whereas no response was found in untreated dystrophic animals to any of the visual stimuli tested. In contrast, a cortical response was elicited in most of the transplanted rats to luminance changes and in many of those a response was also evoked to pattern stimulation. Although the transplanted rats did not respond to high spatial frequency information we found evidence of preservation in the cortical processing of luminance changes and low spatial frequency stimulation. Anatomical sections of transplanted rat retinas confirmed the capacity of RPE transplantation to rescue photoreceptors. Good correlation was found between photoreceptor survival and the extent of cortical function preservation determined with optical imaging techniques. This study determined the efficacy of RPE transplantation to preserve visual cortical processing and established optical imaging as a powerful technique for its assessment.
