Implementing An Image Understanding System Architecture Using PIPE
NASA Astrophysics Data System (ADS)
Luck, Randall L.
1988-03-01
This paper will describe PIPE and how it can be used to implement an image understanding system. Image understanding is the process of developing a description of an image in order to make decisions about its contents. The tasks of image understanding are generally split into low level vision and high level vision. Low level vision is performed by PIPE, a high-performance parallel processor with an architecture specifically designed for processing video images at up to 60 fields per second. High level vision is performed by one of several types of serial or parallel computers, depending on the application. An additional processor called ISMAP performs the conversion from iconic image space to symbolic feature space. ISMAP plugs into one of PIPE's slots and is memory mapped into the high level processor. Thus it forms the high-speed link between the low and high level vision processors. The mechanisms for bottom-up, data-driven processing and top-down, model-driven processing are discussed.
A robust color image fusion for low light level and infrared images
NASA Astrophysics Data System (ADS)
Liu, Chao; Zhang, Xiao-hui; Hu, Qing-ping; Chen, Yong-kang
2016-09-01
Color fusion of low-light-level and infrared imagery has achieved great success in the field of night vision. The technology is designed to make hot targets in the fused image pop out with more intense colors, to render background details with a color appearance close to natural, and to improve target discovery, detection, and identification. Low-light-level images, however, are very noisy under low illumination, and existing color fusion methods are easily degraded by noise in the low-light-level channel. When that noise is large, the quality of the fused image decreases significantly, and targets in the infrared image can even be submerged by the noise. This paper proposes an adaptive color night vision technique in which a noise evaluation parameter for the low-light-level image is introduced into the fusion process, improving the robustness of the color fusion. The fusion results remain good in low-light situations, showing that this method can effectively improve the quality of fused low-light-level and infrared images under low illumination.
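A minimal sketch of the noise-adaptive fusion idea, assuming registered low-light-level (LLL) and infrared (IR) frames of equal size. The noise estimator, weighting scheme, and channel assignments below are illustrative placeholders, not the authors' algorithm:

```python
import numpy as np
from scipy.ndimage import median_filter

def estimate_noise(img):
    """Rough noise estimate: std of the residual after median filtering."""
    residual = img.astype(float) - median_filter(img.astype(float), size=3)
    return residual.std()

def noise_adaptive_fusion(lll, ir, max_noise=30.0):
    """Toy color fusion: IR drives red, LLL drives green/blue, with the LLL
    contribution down-weighted when its estimated noise is high."""
    lll = lll.astype(float)
    ir = ir.astype(float)
    w = 1.0 - min(estimate_noise(lll) / max_noise, 1.0)  # 0 (noisy) .. 1 (clean)
    r = ir
    g = w * lll + (1.0 - w) * 0.5 * (lll + ir)  # lean on IR when LLL is noisy
    b = w * lll
    rgb = np.stack([r, g, b], axis=-1)
    return np.clip(rgb, 0, 255).astype(np.uint8)
```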
Multispectral simulation environment for modeling low-light-level sensor systems
NASA Astrophysics Data System (ADS)
Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.
1998-11-01
Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, which is a first-principles-based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user-configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions including the incorporation of natural and man-made sources, which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab acquired imagery from a commercial system.
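As a rough sketch of the sensor-simulation side, each stage can be modeled as a blur by the stage MTF followed by signal-dependent noise. This is a generic single-stage illustration assuming a Gaussian MTF and Poisson-plus-Gaussian noise; the actual multi-stage LLL model, its MTFs, and gain values are specific to the paper:

```python
import numpy as np

def apply_mtf(image, sigma_freq=0.15):
    """Apply a Gaussian MTF in the frequency domain (sigma in cycles/pixel)."""
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    mtf = np.exp(-(fx**2 + fy**2) / (2.0 * sigma_freq**2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mtf))

def sensor_stage(radiance, gain=5.0, read_noise=2.0, rng=None):
    """One intensifier/sensor stage: blur by the stage MTF, convert to
    photo-events, add Poisson shot noise and Gaussian read noise."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = np.clip(apply_mtf(radiance), 0, None)
    events = rng.poisson(blurred * gain).astype(float)
    return events + rng.normal(0.0, read_noise, size=events.shape)
```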
NASA Technical Reports Server (NTRS)
Nashman, Marilyn; Chaconas, Karen J.
1988-01-01
The sensory processing system for the NASA/NBS Standard Reference Model (NASREM) for telerobotic control is described. This control system architecture was adopted by NASA for the Flight Telerobotic Servicer. The control system is hierarchically designed and consists of three parallel systems: task decomposition, world modeling, and sensory processing. The Sensory Processing System is examined, in particular the image processing hardware and software used to extract features at low levels of sensory processing for tasks representative of those envisioned for the Space Station, such as assembly and maintenance.
NASA Astrophysics Data System (ADS)
Zhang, Xiaoman; Yu, Biying; Weng, Cuncheng; Li, Hui
2014-11-01
A 632 nm low-intensity He-Ne laser was used to irradiate 15 mice with skin wounds. The dynamic changes and wound healing processes were observed with nonlinear spectral imaging technology. We observed that: (1) the wound healing process was accelerated by the low-level laser therapy (LLLT); (2) the new tissues produced second harmonic generation (SHG) signals. Collagen content and microstructure differed dramatically at different time points during wound healing. Our observations show that low-intensity He-Ne laser irradiation can accelerate the healing of skin wounds in mice, and that SHG imaging can be used to follow the wound healing process, which is useful for quantitative characterization of wound status during healing.
Advanced biologically plausible algorithms for low-level image processing
NASA Astrophysics Data System (ADS)
Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan
1999-08-01
In computer vision, the approach based on modeling biological vision mechanisms is being extensively developed. However, real-world image processing still has no effective solution within either the biologically inspired or the conventional framework. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed for this visual task. A basic problem to be solved in creating an effective artificial visual system for real-world images is the search for new low-level image processing algorithms, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filtering, context encoding of the visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and the formation of composite feature maps. The developed algorithms were integrated into the foveal active vision model MARR. It is expected that the proposed algorithms can significantly improve the model's performance in real-world image processing during memorization, search, and recognition.
Knowledge-based low-level image analysis for computer vision systems
NASA Technical Reports Server (NTRS)
Dhawan, Atam P.; Baxi, Himanshu; Ranganath, M. V.
1988-01-01
Two algorithms for entry-level image analysis and preliminary segmentation are proposed which are flexible enough to incorporate local properties of the image. The first algorithm involves pyramid-based multiresolution processing and a strategy to define and use interlevel and intralevel link strengths. The second algorithm, which is designed for selected window processing, extracts regions adaptively using local histograms. The preliminary segmentation and a set of features are employed as the input to an efficient rule-based low-level analysis system, resulting in suboptimal meaningful segmentation.
Image Processing Using a Parallel Architecture.
1987-12-01
ENG/87D-25 Abstract: This study developed a set of low level image processing tools on a parallel computer that allows concurrent processing of images... environment, the set of tools offers a significant reduction in the time required to perform some commonly used image processing operations... step toward developing these systems, a structured set of image processing tools was implemented using a parallel computer. More important than...
NASA Astrophysics Data System (ADS)
Wade, Alex Robert; Fitzke, Frederick W.
1998-08-01
We describe an image processing system which we have developed to align autofluorescence and high-magnification images taken with a laser scanning ophthalmoscope. The low signal-to-noise ratio of these images makes pattern recognition a non-trivial task. However, once n images are aligned and averaged, the noise level drops by a factor of approximately √n and the image quality is improved. We include examples of autofluorescence images and images of the cone photoreceptor mosaic obtained using this system.
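A minimal sketch of the align-and-average step, assuming purely translational misalignment between frames and integer-pixel phase correlation for registration (the paper's own alignment method is not detailed in the abstract):

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer (dy, dx) translation of img relative to ref."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative values
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

def align_and_average(frames):
    """Register each frame to the first one and average; for independent noise
    the standard deviation drops roughly as sqrt(len(frames))."""
    ref = frames[0].astype(float)
    acc = ref.copy()
    for f in frames[1:]:
        dy, dx = phase_correlation_shift(ref, f.astype(float))
        acc += np.roll(np.roll(f.astype(float), dy, axis=0), dx, axis=1)
    return acc / len(frames)
```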
The Radon cumulative distribution transform and its application to image classification
Kolouri, Soheil; Park, Se Rim; Rohde, Gustavo K.
2016-01-01
Invertible image representation methods (transforms) are routinely employed as low-level image processing operations on which feature extraction and recognition algorithms are built. Most transforms in current use (e.g. Fourier, wavelet, etc.) are linear transforms and, by themselves, are unable to substantially simplify the representation of image classes for classification. Here we describe a nonlinear, invertible, low-level image processing transform based on combining the well-known Radon transform for image data with the 1D Cumulative Distribution Transform proposed earlier. We describe a few of the properties of this new transform, and with both theoretical and experimental results show that it can often render certain problems linearly separable in transform space. PMID:26685245
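A compact sketch of the transform's structure, assuming skimage for the Radon projections and non-negative images of identical size; the per-projection normalization and interpolation details here are simplified relative to the paper:

```python
import numpy as np
from skimage.transform import radon

def cdt_1d(signal, reference, x=None):
    """1D Cumulative Distribution Transform of `signal` w.r.t. `reference`.
    Both are treated as non-negative densities on the same grid."""
    x = np.arange(len(signal), dtype=float) if x is None else x
    eps = 1e-12
    Fs = np.cumsum(signal / (signal.sum() + eps))
    Fr = np.cumsum(reference / (reference.sum() + eps))
    # f(x) such that Fs(f(x)) = Fr(x): invert Fs by interpolation
    return np.interp(Fr, Fs, x)

def radon_cdt(image, reference, angles=None):
    """Radon-CDT sketch: apply the 1D CDT to each angular projection."""
    angles = np.linspace(0.0, 180.0, 60, endpoint=False) if angles is None else angles
    sin_img = radon(image, theta=angles, circle=False)
    sin_ref = radon(reference, theta=angles, circle=False)
    return np.stack([cdt_1d(sin_img[:, k], sin_ref[:, k])
                     for k in range(len(angles))], axis=1)
```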
Acquiring skill at medical image inspection: learning localized in early visual processes
NASA Astrophysics Data System (ADS)
Sowden, Paul T.; Davies, Ian R. L.; Roling, Penny; Watt, Simon J.
1997-04-01
Acquisition of the skill of medical image inspection could be due to changes in visual search processes, 'low-level' sensory learning, and higher-level 'conceptual learning.' Here, we report two studies that investigate the extent to which learning in medical image inspection involves low-level learning. Early in the visual processing pathway, cells are selective for direction of luminance contrast. We exploit this in the present studies by using transfer across direction of contrast as a 'marker' to indicate the level of processing at which learning occurs. In both studies, twelve observers trained for four days at detecting features in x-ray images (experiment one: discs in the Nijmegen phantom; experiment two: micro-calcification clusters in digitized mammograms). Half the observers examined negative luminance contrast versions of the images and the remainder examined positive contrast versions. On the fifth day, observers swapped to inspect their respective opposite-contrast images. In both experiments learning occurred across sessions. In experiment one, learning did not transfer across direction of luminance contrast, while in experiment two there was only partial transfer. These findings are consistent with the contention that some of the learning was localized early in the visual processing pathway. The implications of these results for current medical image inspection training schedules are discussed.
Processing of Fear and Anger Facial Expressions: The Role of Spatial Frequency
Comfort, William E.; Wang, Meng; Benton, Christopher P.; Zana, Yossi
2013-01-01
Spatial frequency (SF) components encode a portion of the affective value expressed in face images. The aim of this study was to estimate the relative weight of specific frequency spectrum bandwidths on the discrimination of anger and fear facial expressions. The general paradigm was a classification of the expression of faces morphed at varying proportions between anger and fear images, in which SF adaptation and SF subtraction are expected to shift the classification of facial emotion. A series of three experiments was conducted. In Experiment 1 subjects classified morphed face images that were unfiltered or filtered to remove either low (<8 cycles/face), middle (12–28 cycles/face), or high (>32 cycles/face) SF components. In Experiment 2 subjects were adapted to unfiltered or filtered prototypical (non-morphed) fear face images and subsequently classified morphed face images. In Experiment 3 subjects were adapted to unfiltered or filtered prototypical fear face images with the phase component randomized before classifying morphed face images. Removing mid-frequency components from the target images shifted classification toward fear. The same shift was observed under adaptation to unfiltered and low- and middle-range filtered fear images. However, when the phase spectrum of the same adaptation stimuli was randomized, no adaptation effect was observed. These results suggest that medium SF components support the perception of fear more than anger at both low and high levels of processing. They also suggest that the effect at the high-level processing stage is related more to high-level featural and/or configural information than to the low-level frequency spectrum. PMID:23637687
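A small sketch of the kind of band filtering used to remove SF components, assuming a square grayscale face crop so that cycles per image correspond to cycles per face; the exact filter roll-offs used in the study are not reproduced here:

```python
import numpy as np

def sf_filter(image, low_cpf=None, high_cpf=None):
    """Keep only spatial frequencies between low_cpf and high_cpf,
    expressed in cycles per image (i.e. cycles per face for a face crop)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h          # cycles per image, vertical
    fx = np.fft.fftfreq(w) * w          # cycles per image, horizontal
    r = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = np.ones_like(r, dtype=bool)
    if low_cpf is not None:
        mask &= r >= low_cpf
    if high_cpf is not None:
        mask &= r <= high_cpf
    mask[0, 0] = True                   # always keep the DC (mean luminance)
    return np.real(np.fft.ifft2(np.fft.fft2(image.astype(float)) * mask))

# e.g. remove low SFs (<8 cycles/face): high_passed = sf_filter(face, low_cpf=8)
```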
Effective low-level processing for interferometric image enhancement
NASA Astrophysics Data System (ADS)
Joo, Wonjong; Cha, Soyoung S.
1995-09-01
The hybrid operation of digital image processing and a knowledge-based AI system has been recognized as a desirable approach to the automated evaluation of noise-ridden interferograms. Early noise/data reduction, before the phase is extracted, is essential for the success of the knowledge-based processing. In this paper, new, effective interactive low-level processing operators, namely a background-matched filter and a directional-smoothing filter, are developed and tested with transonic aerodynamic interferograms. The results indicate that these new operators have promising advantages in noise/data reduction over the conventional ones, leading to the success of high-level, intelligent phase extraction.
A color fusion method of infrared and low-light-level images based on visual perception
NASA Astrophysics Data System (ADS)
Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa
2014-11-01
Color fusion images can be obtained through the fusion of infrared and low-light-level images, and they contain the information of both. The fusion images can help observers to understand the multichannel images comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is applied blindly, the perception of scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are incorporated into traditional color fusion methods. The infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm not only improve the detection rate of targets, but also retain rich natural information of the scenes.
NASA Astrophysics Data System (ADS)
Liu, Tao; Zhang, Wei; Yan, Shaoze
2015-10-01
In this paper, a multi-scale image enhancement algorithm based on low-pass filtering and nonlinear transformation is proposed for infrared testing images of de-bonding defects in solid propellant rocket motors. Infrared testing images with high noise levels and low contrast are the basis for identifying defects and calculating defect size. To improve the quality of the infrared image, and according to the distribution properties of the detection image, the approximation coefficients at a suitable decomposition level of a stationary wavelet transform are low-pass filtered using the Fourier transform; after that, a nonlinear transformation is applied to further improve the image contrast. To verify the validity of the algorithm, it is applied to infrared testing images of two specimens with de-bonding defects: one specimen is made of a high-strength steel, and the other of a carbon fiber composite. The results show that, in the images processed by the proposed algorithm, most of the noise is eliminated and the contrast between defect areas and normal areas is greatly improved; in addition, continuous defect edges can be extracted from the binarized processed image. All of this demonstrates the validity of the algorithm. The paper provides a well-performing image enhancement algorithm for infrared thermography.
Fluorescence Imaging Reveals Surface Contamination
NASA Technical Reports Server (NTRS)
Schirato, Richard; Polichar, Raulf
1992-01-01
In technique to detect surface contamination, object inspected is illuminated by ultraviolet light to make contaminants fluoresce; low-light-level video camera views fluorescence. Image-processing techniques quantify distribution of contaminants. If fluorescence of material expected to contaminate surface is not intense, material is tagged with low concentration of dye.
A programmable computational image sensor for high-speed vision
NASA Astrophysics Data System (ADS)
Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian
2013-08-01
In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array and a RISC core. The pixel-parallel PE array is responsible for transferring, storing and processing raw image data in a SIMD fashion with its own programming language. The RPs form a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a great amount of computation in few instruction cycles and therefore satisfy low- and middle-level high-speed image processing requirements. The RISC core controls the whole system operation and performs some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.
MEMS-based system and image processing strategy for epiretinal prosthesis.
Xia, Peng; Hu, Jie; Qi, Jin; Gu, Chaochen; Peng, Yinghong
2015-01-01
Retinal prostheses have the potential to restore some level of visual function to patients suffering from retinal degeneration. In this paper, an epiretinal approach with active stimulation devices is presented. The MEMS-based processing system consists of an external micro-camera, an information processor, an implanted electrical stimulator and a microelectrode array. An image processing strategy combining image clustering and enhancement techniques was proposed and evaluated by psychophysical experiments. The results indicated that the image processing strategy improved visual performance compared with directly merging pixels to a low resolution. The image processing methods assist epiretinal prostheses in vision restoration.
Design and implementation of non-linear image processing functions for CMOS image sensor
NASA Astrophysics Data System (ADS)
Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel
2012-11-01
Today, solid state image sensors are used in many applications like mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration in the focal plane (or near the focal plane) of complex image processing algorithms. Such devices must meet the constraints related to the quality of acquired images, speed and performance of embedded processing, as well as low power consumption. To achieve these objectives, low-level analog processing allows extracting the useful information in the scene directly. For example, an edge detection step followed by a local maxima extraction will facilitate high-level processing like object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (like local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor built in a standard 0.35 μm CMOS technology including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors and the pitch of one pixel is 40×40 μm. The total area of the 64×64 pixel array is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculations in less than one ms.
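A software analogue of the 2×2 min/max operation computed by the analog MMU, useful as a digital reference against which the chip's output could be checked; this is a toy model of the arithmetic, not the circuit itself:

```python
import numpy as np

def minmax_2x2(image):
    """Minimum and maximum over every overlapping 2x2 pixel neighbourhood,
    the digital counterpart of the sensor's Minima/Maxima Unit."""
    a = image[:-1, :-1]
    b = image[:-1, 1:]
    c = image[1:, :-1]
    d = image[1:, 1:]
    stack = np.stack([a, b, c, d])
    return stack.min(axis=0), stack.max(axis=0)
```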
Advanced technology development for image gathering, coding, and processing
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1990-01-01
Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.
Benedek, C; Descombes, X; Zerubia, J
2012-01-01
In this paper, we introduce a new probabilistic method which integrates building extraction with change detection in remotely sensed image pairs. A global optimization process attempts to find the optimal configuration of buildings, considering the observed data, prior knowledge, and interactions between the neighboring building parts. We present methodological contributions in three key issues: 1) We implement a novel object-change modeling approach based on Multitemporal Marked Point Processes, which simultaneously exploits low-level change information between the time layers and object-level building description to recognize and separate changed and unaltered buildings. 2) To answer the challenges of data heterogeneity in aerial and satellite image repositories, we construct a flexible hierarchical framework which can create various building appearance models from different elementary feature-based modules. 3) To simultaneously ensure the convergence, optimality, and computation complexity constraints raised by the increased data quantity, we adopt the quick Multiple Birth and Death optimization technique for change detection purposes, and propose a novel nonuniform stochastic object birth process which generates relevant objects with higher probability based on low-level image features.
NASA Astrophysics Data System (ADS)
Nyman, G.; Häkkinen, J.; Koivisto, E.-M.; Leisti, T.; Lindroos, P.; Orenius, O.; Virtanen, T.; Vuori, T.
2010-01-01
Subjective image quality data for 9 image processing pipes and 8 image contents (taken with mobile phone camera, 72 natural scene test images altogether) from 14 test subjects were collected. A triplet comparison setup and a hybrid qualitative/quantitative methodology were applied. MOS data and spontaneous, subjective image quality attributes to each test image were recorded. The use of positive and negative image quality attributes by the experimental subjects suggested a significant difference between the subjective spaces of low and high image quality. The robustness of the attribute data was shown by correlating DMOS data of the test images against their corresponding, average subjective attribute vector length data. The findings demonstrate the information value of spontaneous, subjective image quality attributes in evaluating image quality at variable quality levels. We discuss the implications of these findings for the development of sensitive performance measures and methods in profiling image processing systems and their components, especially at high image quality levels.
Detecting Edges in Images by Use of Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A.; Klinko, Steve
2003-01-01
A method of processing digital image data to detect edges includes the use of fuzzy reasoning. The method is completely adaptive and does not require any advance knowledge of an image. During initial processing of image data at a low level of abstraction, the nature of the data is indeterminate. Fuzzy reasoning is used in the present method because it affords an ability to construct useful abstractions from approximate, incomplete, and otherwise imperfect sets of data. Humans are able to make some sense of even unfamiliar objects that have imperfect high-level representations. It appears that to perceive unfamiliar objects, or to perceive familiar objects in imperfect images, humans apply heuristic algorithms to understand the images.
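A toy illustration of fuzzy-style edge scoring, assuming a simple piecewise-linear membership function on gradient magnitude; the actual NASA method's membership functions and rule base are not described in the abstract and are not reproduced here:

```python
import numpy as np

def fuzzy_edges(image, low=10.0, high=60.0):
    """Toy fuzzy edge detector: gradient magnitude is mapped to a degree of
    membership in the fuzzy set 'edge' by a piecewise-linear membership function."""
    img = image.astype(float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    # membership: 0 below `low`, 1 above `high`, linear ramp in between
    membership = np.clip((mag - low) / (high - low), 0.0, 1.0)
    return membership  # per-pixel degree of 'edgeness' in [0, 1]
```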
Image wavelet decomposition and applications
NASA Technical Reports Server (NTRS)
Treil, N.; Mallat, S.; Bajcsy, R.
1989-01-01
The general problem of computer vision has been investigated for more than 20 years and is still one of the most challenging fields in artificial intelligence. Indeed, a look at the human visual system gives an idea of the complexity of any solution to the problem of visual recognition. This general task can be decomposed into a whole hierarchy of problems ranging from pixel processing to high level segmentation and complex object recognition. Contrasting an image at different representations provides useful information such as edges. An example of low level signal and image processing using the theory of wavelets is introduced, which provides the basis for multiresolution representation. Like the human brain, we use a multiorientation process which detects features independently in different orientation sectors. So, images of the same orientation but of different resolutions are contrasted to gather information about an image. An interesting image representation using energy zero crossings is developed. This representation is shown to be experimentally complete and leads to some higher level applications such as edge and corner finding, which in turn provide two basic steps to image segmentation. The possibilities of feedback between different levels of processing are also discussed.
McBain, Ryan; Norton, Daniel; Chen, Yue
2010-09-01
While schizophrenia patients are impaired at facial emotion perception, the role of basic visual processing in this deficit remains relatively unclear. We examined emotion perception when spatial frequency content of facial images was manipulated via high-pass and low-pass filtering. Unlike controls (n=29), patients (n=30) perceived images with low spatial frequencies as more fearful than those without this information, across emotional salience levels. Patients also perceived images with high spatial frequencies as happier. In controls, this effect was found only at low emotional salience. These results indicate that basic visual processing has an amplified modulatory effect on emotion perception in schizophrenia. (c) 2010 Elsevier B.V. All rights reserved.
Java Image I/O for VICAR, PDS, and ISIS
NASA Technical Reports Server (NTRS)
Deen, Robert G.; Levoe, Steven R.
2011-01-01
This library, written in Java, supports input and output of images and metadata (labels) in the VICAR, PDS image, and ISIS-2 and ISIS-3 file formats. Three levels of access exist. The first level comprises the low-level, direct access to the file. This allows an application to read and write specific image tiles, lines, or pixels and to manipulate the label data directly. This layer is analogous to the C-language "VICAR Run-Time Library" (RTL), which is the image I/O library for the (C/C++/Fortran) VICAR image processing system from JPL MIPL (Multimission Image Processing Lab). This low-level library can also be used to read and write labeled, uncompressed images stored in formats similar to VICAR, such as ISIS-2 and -3, and a subset of PDS (image format). The second level of access involves two codecs based on Java Advanced Imaging (JAI) to provide access to VICAR and PDS images in a file-format-independent manner. JAI is supplied by Sun Microsystems as an extension to desktop Java, and has a number of codecs for formats such as GIF, TIFF, JPEG, etc. Although Sun has deprecated the codec mechanism (replaced by IIO), it is still used in many places. The VICAR and PDS codecs allow any program written using the JAI codec spec to use VICAR or PDS images automatically, with no specific knowledge of the VICAR or PDS formats. Support for metadata (labels) is included, but is format-dependent. The PDS codec, when processing PDS images with an embedded VICAR label ("dual-labeled images," such as used for MER), presents the VICAR label in a new way that is compatible with the VICAR codec. The third level of access involves VICAR, PDS, and ISIS Image I/O plugins. The Java core includes an "Image I/O" (IIO) package that is similar in concept to the JAI codec, but is newer and more capable. Applications written to the IIO specification can use any image format for which a plug-in exists, with no specific knowledge of the format itself.
The Hyper Suprime-Cam software pipeline
NASA Astrophysics Data System (ADS)
Bosch, James; Armstrong, Robert; Bickerton, Steven; Furusawa, Hisanori; Ikeda, Hiroyuki; Koike, Michitaro; Lupton, Robert; Mineo, Sogo; Price, Paul; Takata, Tadafumi; Tanaka, Masayuki; Yasuda, Naoki; AlSayyad, Yusra; Becker, Andrew C.; Coulton, William; Coupon, Jean; Garmilla, Jose; Huang, Song; Krughoff, K. Simon; Lang, Dustin; Leauthaud, Alexie; Lim, Kian-Tat; Lust, Nate B.; MacArthur, Lauren A.; Mandelbaum, Rachel; Miyatake, Hironao; Miyazaki, Satoshi; Murata, Ryoma; More, Surhud; Okura, Yuki; Owen, Russell; Swinbank, John D.; Strauss, Michael A.; Yamada, Yoshihiko; Yamanoi, Hitomi
2018-01-01
In this paper, we describe the optical imaging data processing pipeline developed for the Subaru Telescope's Hyper Suprime-Cam (HSC) instrument. The HSC Pipeline builds on the prototype pipeline being developed by the Large Synoptic Survey Telescope's Data Management system, adding customizations for HSC, large-scale processing capabilities, and novel algorithms that have since been reincorporated into the LSST codebase. While designed primarily to reduce HSC Subaru Strategic Program (SSP) data, it is also the recommended pipeline for reducing general-observer HSC data. The HSC pipeline includes high-level processing steps that generate coadded images and science-ready catalogs as well as low-level detrending and image characterizations.
ESARR: enhanced situational awareness via road sign recognition
NASA Astrophysics Data System (ADS)
Perlin, V. E.; Johnson, D. B.; Rohde, M. M.; Lupa, R. M.; Fiorani, G.; Mohammad, S.
2010-04-01
The enhanced situational awareness via road sign recognition (ESARR) system provides vehicle position estimates in the absence of GPS signal via automated processing of roadway fiducials (primarily directional road signs). Sign images are detected and extracted from a vehicle-mounted camera system, then preprocessed and read via a custom optical character recognition (OCR) system specifically designed to cope with low quality input imagery. Vehicle motion and 3D scene geometry estimation enables efficient and robust sign detection with low false alarm rates. Multi-level text processing coupled with GIS database validation enables effective interpretation even of extremely low resolution, low contrast sign images. In this paper, ESARR development progress is reported, including the design and architecture, image processing framework, localization methodologies, and results to date. Highlights of the real-time vehicle-based directional road-sign detection and interpretation system are described along with the challenges and progress in overcoming them.
Image-Based Grouping during Binocular Rivalry Is Dictated by Eye-Of-Origin
Stuit, Sjoerd M.; Paffen, Chris L. E.; van der Smagt, Maarten J.; Verstraten, Frans A. J.
2014-01-01
Prolonged viewing of dichoptically presented images with different content results in perceptual alternations known as binocular rivalry. This phenomenon is thought to be the result of competition at a local level, where local rivalry zones interact to give rise to a single, global dominant percept. Certain perceived combinations that result from this local competition are known to last longer than others, which is referred to as grouping during binocular rivalry. In recent years, the phenomenon has been suggested to be the result of competition at both eye- and image-based processing levels, although the exact contribution from each level remains elusive. Here we use a paradigm designed specifically to quantify the contribution of eye- and image-based processing to grouping during rivalry. In this paradigm we used sine-wave gratings as well as upright and inverted faces, with and without binocular disparity-based occlusion. These stimuli and conditions were used because they are known to result in processing at different stages throughout the visual processing hierarchy. Specifically, more complex images were included in order to maximize the potential contribution of image-based grouping. In spite of this, our results show that increasing image complexity did not lead to an increase in the contribution of image-based processing to grouping during rivalry. In fact, the results show that grouping was primarily affected by the eye-of-origin of the image parts, irrespective of stimulus type. We suggest that image content affects grouping during binocular rivalry at low-level processing stages, where it is intertwined with eye-of-origin information. PMID:24987847
An abuttable CCD imager for visible and X-ray focal plane arrays
NASA Technical Reports Server (NTRS)
Burke, Barry E.; Mountain, Robert W.; Harrison, David C.; Bautz, Marshall W.; Doty, John P.
1991-01-01
A frame-transfer silicon charge-coupled-device (CCD) imager has been developed that can be closely abutted to other imagers on three sides of the imaging array. It is intended for use in multichip arrays. The device has 420 x 420 pixels in the imaging and frame-store regions and is constructed using a three-phase triple-polysilicon process. Particular emphasis has been placed on achieving low-noise charge detection for low-light-level imaging in the visible and maximum energy resolution for X-ray spectroscopic applications. Noise levels of 6 electrons at 1-MHz and less than 3 electrons at 100-kHz data rates have been achieved. Imagers have been fabricated on 1000-Ohm-cm material to maximize quantum efficiency and minimize split events in the soft X-ray regime.
NASA Astrophysics Data System (ADS)
Ying, Changsheng; Zhao, Peng; Li, Ye
2018-01-01
The intensified charge-coupled device (ICCD) is widely used in the field of low-light-level (LLL) imaging. The LLL images captured by ICCD suffer from low spatial resolution and contrast, and the target details can hardly be recognized. Super-resolution (SR) reconstruction of LLL images captured by ICCDs is a challenging issue. The dispersion in the double-proximity-focused image intensifier is the main factor that leads to a reduction in image resolution and contrast. We divide the integration time into subintervals that are short enough to get photon images, so the overlapping effect and overstacking effect of dispersion can be eliminated. We propose an SR reconstruction algorithm based on iterative projection photon localization. In the iterative process, the photon image is sliced by projection planes, and photons are screened under the constraints of regularity. The accurate position information of the incident photons in the reconstructed SR image is obtained by the weighted centroids calculation. The experimental results show that the spatial resolution and contrast of our SR image are significantly improved.
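A condensed sketch of the photon-localization idea, assuming short-exposure frames in which individual photon events appear as small connected blobs above a positive threshold; the paper's iterative projection slicing and regularity constraints are omitted, leaving only the weighted-centroid and accumulation steps:

```python
import numpy as np
from scipy.ndimage import label, find_objects

def photon_centroids(frame, threshold):
    """Locate photon events in a short-exposure frame as the intensity-weighted
    centroids of connected pixel regions above `threshold` (threshold > 0)."""
    labels, n = label(frame > threshold)
    centroids = []
    for i, sl in enumerate(find_objects(labels), start=1):
        patch = np.where(labels[sl] == i, frame[sl].astype(float), 0.0)
        total = patch.sum()
        ys = np.arange(sl[0].start, sl[0].stop)[:, None]
        xs = np.arange(sl[1].start, sl[1].stop)[None, :]
        centroids.append(((ys * patch).sum() / total, (xs * patch).sum() / total))
    return centroids

def accumulate_sr(centroid_lists, shape, upscale=4):
    """Accumulate sub-pixel photon positions on a finer grid to form the SR image."""
    sr = np.zeros((shape[0] * upscale, shape[1] * upscale))
    for centroids in centroid_lists:
        for cy, cx in centroids:
            sr[int(round(cy * upscale)), int(round(cx * upscale))] += 1
    return sr
```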
NASA Astrophysics Data System (ADS)
Zareei, Zahra; Navi, Keivan; Keshavarziyan, Peiman
2018-03-01
In this paper, three novel low-power and high-speed 1-bit inexact full adder cell designs are presented, based on current mode logic in 32 nm carbon nanotube field effect transistor technology, for the first time. The circuit-level figures of merit, i.e. power, delay and power-delay product, as well as an application-level metric, error distance, are considered to assess the efficiency of the proposed cells over their counterparts. The effect of voltage scaling and temperature variation on the proposed cells is studied using the HSPICE tool. Moreover, using MATLAB, the peak signal-to-noise ratio of the proposed cells is evaluated in an image-processing application, a motion detector. Simulation results confirm the efficiency of the proposed cells.
Gelbard-Sagiv, Hagar; Faivre, Nathan; Mudrik, Liad; Koch, Christof
2016-01-01
The scope and limits of unconscious processing are a matter of ongoing debate. Lately, continuous flash suppression (CFS), a technique for suppressing visual stimuli, has been widely used to demonstrate surprisingly high-level processing of invisible stimuli. Yet, recent studies showed that CFS might actually allow low-level features of the stimulus to escape suppression and be consciously perceived. The influence of such low-level awareness on high-level processing might easily go unnoticed, as studies usually only probe the visibility of the feature of interest, and not that of lower-level features. For instance, face identity is held to be processed unconsciously since subjects who fail to judge the identity of suppressed faces still show identity priming effects. Here we challenge these results, showing that such high-level priming effects are indeed induced by faces whose identity is invisible, but critically, only when a lower-level feature, such as color or location, is visible. No evidence for identity processing was found when subjects had no conscious access to any feature of the suppressed face. These results suggest that high-level processing of an image might be enabled by, or co-occur with, conscious access to some of its low-level features, even when these features are not relevant to the processed dimension. Accordingly, they call for further investigation of lower-level awareness during CFS, and reevaluation of other unconscious high-level processing findings.
The Hyper Suprime-Cam software pipeline
Bosch, James; Armstrong, Robert; Bickerton, Steven; ...
2017-10-12
Here in this article, we describe the optical imaging data processing pipeline developed for the Subaru Telescope's Hyper Suprime-Cam (HSC) instrument. The HSC Pipeline builds on the prototype pipeline being developed by the Large Synoptic Survey Telescope's Data Management system, adding customizations for HSC, large-scale processing capabilities, and novel algorithms that have since been reincorporated into the LSST codebase. While designed primarily to reduce HSC Subaru Strategic Program (SSP) data, it is also the recommended pipeline for reducing general-observer HSC data. The HSC pipeline includes high-level processing steps that generate coadded images and science-ready catalogs as well as low-level detrending and image characterizations.
Abnormalities of Object Visual Processing in Body Dysmorphic Disorder
Feusner, Jamie D.; Hembacher, Emily; Moller, Hayley; Moody, Teena D.
2013-01-01
Background Individuals with body dysmorphic disorder may have perceptual distortions for their appearance. Previous studies suggest imbalances in detailed relative to configural/holistic visual processing when viewing faces. No study has investigated the neural correlates of processing non-symptom-related stimuli. The objective of this study was to determine whether individuals with body dysmorphic disorder have abnormal patterns of brain activation when viewing non-face/non-body object stimuli. Methods Fourteen medication-free participants with DSM-IV body dysmorphic disorder and 14 healthy controls participated. We performed functional magnetic resonance imaging while participants matched photographs of houses that were unaltered, contained only high spatial frequency (high detail) information, or only low spatial frequency (low detail) information. The primary outcome was group differences in blood oxygen level-dependent signal changes. Results The body dysmorphic disorder group showed lesser activity in the parahippocampal gyrus, lingual gyrus, and precuneus for low spatial frequency images. There were greater activations in medial prefrontal regions for high spatial frequency images, although no significant differences when compared to a low-level baseline. Greater symptom severity was associated with lesser activity in dorsal occipital cortex and ventrolateral prefrontal cortex for normal and high spatial frequency images. Conclusions Individuals with body dysmorphic disorder have abnormal brain activation patterns when viewing objects. Hypoactivity in visual association areas for configural and holistic (low detail) elements and abnormal allocation of prefrontal systems for details is consistent with a model of imbalances in global vs. local processing. This may occur not only for appearance but also for general stimuli unrelated to their symptoms. PMID:21557897
Automated Analysis of CT Images for the Inspection of Hardwood Logs
Harbin Li; A. Lynn Abbott; Daniel L. Schmoldt
1996-01-01
This paper investigates several classifiers for labeling internal features of hardwood logs using computed tomography (CT) images. A primary motivation is to locate and classify internal defects so that an optimal cutting strategy can be chosen. Previous work has relied on combinations of low-level processing, image segmentation, autoregressive texture modeling, and...
When Art Moves the Eyes: A Behavioral and Eye-Tracking Study
Massaro, Davide; Savazzi, Federica; Di Dio, Cinzia; Freedberg, David; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella
2012-01-01
The aim of this study was to investigate, using eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black and white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level visually-driven bottom-up processes when a human subject is represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception. PMID:22624007
NASA Astrophysics Data System (ADS)
Wu, Bo; Liu, Wai Chung; Grumpe, Arne; Wöhler, Christian
2018-06-01
Lunar digital elevation models (DEMs) are important for successful lunar landing and exploration missions. Lunar DEMs are typically generated by photogrammetry or laser altimetry approaches. Photogrammetric methods require multiple stereo images of the region of interest and may not be applicable in cases where stereo coverage is not available. In contrast, reflectance-based shape reconstruction techniques, such as shape from shading (SfS) and shape and albedo from shading (SAfS), use monocular images to generate DEMs with pixel-level resolution. We present a novel hierarchical SAfS method that refines a lower-resolution DEM to pixel-level resolution given a monocular image with known light source. The corresponding pixel-wise albedo map is estimated in the process and, together with the low-resolution DEM, used to regularize the pixel-level shape reconstruction. In this study, a Lunar-Lambertian reflectance model is applied to estimate the albedo map. Experiments were carried out using monocular images from the Lunar Reconnaissance Orbiter Narrow Angle Camera (LRO NAC), with spatial resolution of 0.5-1.5 m per pixel, constrained by the Selenological and Engineering Explorer and LRO Elevation Model (SLDEM), with spatial resolution of 60 m. The results indicate that local details are well recovered by the proposed algorithm with plausible albedo estimation. The low-frequency topographic consistency depends on the quality of the low-resolution DEM and the resolution difference between the image and the low-resolution DEM.
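For reference, the Lunar-Lambert(ian) reflectance commonly used in lunar SfS/SAfS mixes a Lommel-Seeliger term with a Lambertian term. The sketch below assumes a fixed limb-darkening weight L, whereas in practice L usually varies with phase angle; the residual function only illustrates the data term of a shading-based refinement, not the paper's full hierarchical scheme:

```python
import numpy as np

def lunar_lambert(cos_i, cos_e, L=0.7):
    """Lunar-Lambert reflectance: weighted mix of a Lommel-Seeliger term and a
    Lambertian term. cos_i, cos_e are cosines of the incidence and emission
    angles; L is an empirical weight (fixed here as a placeholder)."""
    cos_i = np.clip(cos_i, 0.0, 1.0)
    ls = 2.0 * cos_i / (cos_i + cos_e + 1e-12)   # Lommel-Seeliger term
    return L * ls + (1.0 - L) * cos_i            # Lambertian term

def shading_residual(albedo, cos_i, cos_e, observed, L=0.7):
    """Data term of an SfS/SAfS objective: observed image minus the image
    predicted from the current DEM-derived angles and albedo map."""
    return observed - albedo * lunar_lambert(cos_i, cos_e, L)
```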
Representations of Spectral Differences between Vowels in Tonotopic Regions of Auditory Cortex
ERIC Educational Resources Information Center
Fisher, Julia
2017-01-01
This work examines the link between low-level cortical acoustic processing and higher-level cortical phonemic processing. Specifically, using functional magnetic resonance imaging, it looks at 1) whether or not the vowels [ɑ] and [i] are distinguishable in regions of interest defined by the first two resonant frequencies (formants) of those…
Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven
2012-01-01
The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other, well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task. PMID:23093921
Anatomical-based partial volume correction for low-dose dedicated cardiac SPECT/CT
NASA Astrophysics Data System (ADS)
Liu, Hui; Chan, Chung; Grobshtein, Yariv; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Stacy, Mitchel R.; Sinusas, Albert J.; Liu, Chi
2015-09-01
Due to the limited spatial resolution, partial volume effect has been a major degrading factor on quantitative accuracy in emission tomography systems. This study aims to investigate the performance of several anatomical-based partial volume correction (PVC) methods for a dedicated cardiac SPECT/CT system (GE Discovery NM/CT 570c) with focused field-of-view over a clinically relevant range of high and low count levels for two different radiotracer distributions. These PVC methods include perturbation geometry transfer matrix (pGTM), pGTM followed by multi-target correction (MTC), pGTM with known concentration in blood pool, the former followed by MTC and our newly proposed methods, which perform the MTC method iteratively, where the mean values in all regions are estimated and updated by the MTC-corrected images each time in the iterative process. The NCAT phantom was simulated for cardiovascular imaging with 99mTc-tetrofosmin, a myocardial perfusion agent, and 99mTc-red blood cell (RBC), a pure intravascular imaging agent. Images were acquired at six different count levels to investigate the performance of PVC methods in both high and low count levels for low-dose applications. We performed two large animal in vivo cardiac imaging experiments following injection of 99mTc-RBC for evaluation of intramyocardial blood volume (IMBV). The simulation results showed our proposed iterative methods provide superior performance than other existing PVC methods in terms of image quality, quantitative accuracy, and reproducibility (standard deviation), particularly for low-count data. The iterative approaches are robust for both 99mTc-tetrofosmin perfusion imaging and 99mTc-RBC imaging of IMBV and blood pool activity even at low count levels. The animal study results indicated the effectiveness of PVC to correct the overestimation of IMBV due to blood pool contamination. In conclusion, the iterative PVC methods can achieve more accurate quantification, particularly for low count cardiac SPECT studies, typically obtained from low-dose protocols, gated studies, and dynamic applications.
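A bare-bones sketch of the standard geometric-transfer-matrix (GTM) step that the compared methods build on, assuming non-overlapping anatomical region masks (e.g. from CT) and an isotropic Gaussian approximation of the SPECT point spread function; the paper's pGTM variants and iterative multi-target correction are not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gtm_correction(image, region_masks, psf_sigma):
    """Basic GTM partial volume correction: smooth each region mask with the
    system PSF, build the matrix of cross-region spill fractions, and solve
    for the true regional mean activities."""
    n = len(region_masks)
    smoothed = [gaussian_filter(m.astype(float), psf_sigma) for m in region_masks]
    W = np.zeros((n, n))
    observed = np.zeros(n)
    for i, m in enumerate(region_masks):
        idx = m.astype(bool)
        observed[i] = image[idx].mean()
        for j in range(n):
            W[i, j] = smoothed[j][idx].mean()  # fraction of region j seen in region i
    return np.linalg.solve(W, observed)        # PV-corrected regional activities
```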
NASA Astrophysics Data System (ADS)
Li, Wenzhuo; Sun, Kaimin; Li, Deren; Bai, Ting
2016-07-01
Unmanned aerial vehicle (UAV) remote sensing technology has come into wide use in recent years. The poor stability of the UAV platform, however, produces more inconsistencies in hue and illumination among UAV images than other more stable platforms. Image dodging is a process used to reduce these inconsistencies caused by different imaging conditions. We propose an algorithm for automatic image dodging of UAV images using two-dimensional radiometric spatial attributes. We use object-level image smoothing to smooth foreground objects in images and acquire an overall reference background image by relative radiometric correction. We apply the Contourlet transform to separate high- and low-frequency sections for every single image, and replace the low-frequency section with the low-frequency section extracted from the corresponding region in the overall reference background image. We apply the inverse Contourlet transform to reconstruct the final dodged images. In this process, a single image must be split into reasonable block sizes with overlaps due to large pixel size. Experimental mosaic results show that our proposed method reduces the uneven distribution of hue and illumination. Moreover, it effectively eliminates dark-bright interstrip effects caused by shadows and vignetting in UAV images while maximally protecting image texture information.
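The core of the dodging step is a low-frequency replacement. The sketch below uses a plain Fourier split in place of the Contourlet transform used in the paper, assuming the block and its counterpart in the overall reference background image are co-registered and of equal size:

```python
import numpy as np

def lowpass(img, cutoff=0.02):
    """Ideal low-pass filter in the frequency domain (cutoff in cycles/pixel)."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    mask = np.sqrt(fx**2 + fy**2) <= cutoff
    return np.real(np.fft.ifft2(np.fft.fft2(img.astype(float)) * mask))

def dodge_block(block, reference_block, cutoff=0.02):
    """Replace the block's low-frequency (illumination) component with that of
    the corresponding region of the overall reference background image."""
    high = block - lowpass(block, cutoff)           # keep local texture/detail
    return high + lowpass(reference_block, cutoff)  # impose reference illumination
```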
Riffel, Philipp; Haubenreisser, Holger; Meyer, Mathias; Sudarski, Sonja; Morelli, John N; Schmidt, Bernhard; Schoenberg, Stefan O; Henzler, Thomas
2016-04-01
Previously, calculated monoenergetic ultra-low keV datasets did not lead to improved contrast-to-noise ratio (CNR) due to the dramatic increase in image noise. The aim of the present study was to evaluate the objective image quality of ultra-low keV monoenergetic images (MEIs) calculated from carotid DECT angiography data with a new monoenergetic imaging algorithm using a frequency-split technique. 20 patients (12 male; mean age 53±17 years) were retrospectively analyzed. MEIs from 40 to 120 keV were reconstructed using the monoenergetic split-frequency approach (MFSA). Additionally, MEIs were reconstructed for 40 and 50 keV using a conventional monoenergetic (CM) software application. Signal intensity, noise, signal-to-noise ratio (SNR) and CNR were assessed in the basilar, common carotid, and internal carotid arteries. Ultra-low keV MEIs at 40 keV and 50 keV demonstrated the highest vessel attenuation, significantly greater than that of the polyenergetic images (PEI) (all p-values <0.05). The highest SNR and CNR levels were found at 40 keV and 50 keV (all p-values <0.05). MEIs with MFSA showed significantly lower noise levels than those processed with CM (all p-values <0.05) and no significant differences in vessel attenuation (p>0.05). Thus MEIs with MFSA showed significantly higher SNR and CNR compared to MEIs with CM. Combining the lower spatial frequency stack for contrast at low keV levels with the high spatial frequency stack for noise at high keV levels (frequency-split technique) leads to improved image quality of ultra-low keV monoenergetic DECT datasets when compared to previous monoenergetic reconstruction techniques without the frequency-split technique. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
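A one-scale sketch of the frequency-split blending idea, assuming co-registered low-keV and high-keV monoenergetic images and a single Gaussian scale to separate the bands; the vendor algorithm's exact band decomposition and keV choices are not detailed in the abstract:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_split_mei(low_kev_img, high_kev_img, sigma=3.0):
    """Frequency-split blend: take the low-spatial-frequency (high-contrast)
    content from the low-keV image and the high-spatial-frequency (low-noise
    detail) content from the high-keV image."""
    low_part = gaussian_filter(low_kev_img.astype(float), sigma)
    high_part = high_kev_img.astype(float) - gaussian_filter(high_kev_img.astype(float), sigma)
    return low_part + high_part
```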
Digital video system for on-line portal verification
NASA Astrophysics Data System (ADS)
Leszczynski, Konrad W.; Shalev, Shlomo; Cosby, N. Scott
1990-07-01
A digital system has been developed for on-line acquisition, processing and display of portal images during radiation therapy treatment. A metal/phosphor screen combination is the primary detector, where the conversion from high-energy photons to visible light takes place. A mirror angled at 45 degrees reflects the primary image to a low-light-level camera, which is removed from the direct radiation beam. The image registered by the camera is digitized, processed and displayed on a CRT monitor. Advanced digital techniques for processing of on-line images have been developed and implemented to enhance image contrast and suppress the noise. Some elements of automated radiotherapy treatment verification have been introduced.
Neural Tuning to Low-Level Features of Speech throughout the Perisylvian Cortex.
Berezutskaya, Julia; Freudenburg, Zachary V; Güçlü, Umut; van Gerven, Marcel A J; Ramsey, Nick F
2017-08-16
Despite a large body of research, we continue to lack a detailed account of how auditory processing of continuous speech unfolds in the human brain. Previous research showed the propagation of low-level acoustic features of speech from posterior superior temporal gyrus toward anterior superior temporal gyrus in the human brain (Hullett et al., 2016). In this study, we investigate what happens to these neural representations past the superior temporal gyrus and how they engage higher-level language processing areas such as inferior frontal gyrus. We used low-level sound features to model neural responses to speech outside of the primary auditory cortex. Two complementary imaging techniques were used with human participants (both males and females): electrocorticography (ECoG) and fMRI. Both imaging techniques showed tuning of the perisylvian cortex to low-level speech features. With ECoG, we found evidence of propagation of the temporal features of speech sounds along the ventral pathway of language processing in the brain toward inferior frontal gyrus. Increasingly coarse temporal features of speech spreading from posterior superior temporal cortex toward inferior frontal gyrus were associated with linguistic features such as voice onset time, duration of the formant transitions, and phoneme, syllable, and word boundaries. The present findings provide the groundwork for a comprehensive bottom-up account of speech comprehension in the human brain. SIGNIFICANCE STATEMENT We know that, during natural speech comprehension, a broad network of perisylvian cortical regions is involved in sound and language processing. Here, we investigated the tuning to low-level sound features within these regions using neural responses to a short feature film. We also looked at whether the tuning organization along these brain regions showed any parallel to the hierarchy of language structures in continuous speech. Our results show that low-level speech features propagate throughout the perisylvian cortex and potentially contribute to the emergence of "coarse" speech representations in inferior frontal gyrus typically associated with high-level language processing. These findings add to the previous work on auditory processing and underline a distinctive role of inferior frontal gyrus in natural speech comprehension. Copyright © 2017 the authors 0270-6474/17/377906-15$15.00/0.
IR CMOS: near infrared enhanced digital imaging (Presentation Recording)
NASA Astrophysics Data System (ADS)
Pralle, Martin U.; Carey, James E.; Joy, Thomas; Vineis, Chris J.; Palsule, Chintamani
2015-08-01
SiOnyx has demonstrated imaging at light levels below 1 mLux (moonless starlight) at video frame rates with a 720P CMOS image sensor in a compact, low-latency camera. Low-light imaging is enabled by the combination of enhanced quantum efficiency in the near infrared together with state-of-the-art low-noise image sensor design. The quantum efficiency enhancements are achieved by applying Black Silicon, SiOnyx's proprietary ultrafast-laser semiconductor processing technology. In the near infrared, silicon's native indirect bandgap results in low absorption coefficients and long absorption lengths. The Black Silicon nanostructured layer fundamentally disrupts this paradigm by enhancing the absorption of light within a thin pixel layer, making 5 microns of silicon equivalent to over 300 microns of standard silicon. This results in a demonstrated 10-fold improvement in near infrared sensitivity over incumbent imaging technology while maintaining complete compatibility with standard CMOS image sensor process flows. Applications include surveillance, night vision, and 1064nm laser see-spot. Imaging performance metrics will be discussed. Demonstrated performance characteristics: pixel size: 5.6 and 10 um; array size: 720P/1.3Mpix; frame rate: 60 Hz; read noise: 2 ele/pixel; spectral sensitivity: 400 to 1200 nm (with 10x QE at 1064nm); daytime imaging: color (Bayer pattern); nighttime imaging: moonless starlight conditions; 1064nm laser imaging: daytime imaging out to 2 km.
Quantitative luminescence imaging system
Erwin, D.N.; Kiel, J.L.; Batishko, C.R.; Stahl, K.A.
1990-08-14
The QLIS images and quantifies low-level chemiluminescent reactions in an electromagnetic field. It is capable of real-time nonperturbing measurement and simultaneous recording of many biochemical and chemical reactions such as luminescent immunoassays or enzyme assays. The system comprises image transfer optics, a low-light-level digitizing camera with image-intensifying microchannel plates, an image processor, and a control computer. The image transfer optics may be a fiber image guide with a bend, or a microscope, to take the light outside of the RF field. Output of the camera is transformed into a localized rate of cumulative digitized data or enhanced video display or hard-copy images. The system may be used as a luminescent microdosimetry device for radiofrequency or microwave radiation, as a thermal dosimeter, or in the dosimetry of ultrasound (sonoluminescence) or ionizing radiation. It provides a near-real-time system capable of measuring the extremely low light levels from luminescent reactions in electromagnetic fields in the areas of chemiluminescence assays and thermal microdosimetry, and is capable of near-real-time imaging of the sample to allow spatial distribution analysis of the reaction. It can be used to instrument three distinctly different irradiation configurations, comprising (1) RF waveguide irradiation of a small Petri-dish-shaped sample cell, (2) RF irradiation of samples in a microscope for microscopic imaging and measurement, and (3) RF irradiation of small to human-body-sized samples in an anechoic chamber. 22 figs.
Human Information Processing in the Dynamic Environment (HIPDE)
2008-01-01
Albery, 1990) and image mental rotation/orientation (Nethus, et al., 1993). However, because SD typically occurs at low G levels, there is little... [fragment of a measures list: ...acuity, complex decision making accuracy, complex decision making RT, complex decision making efficiency, tracking...] ...cerebral tissue frequently continues to persist even at relatively low Gz levels (including accelerations as low as 3 Gz) during long, multiple peak
Performance of PHOTONIS' low light level CMOS imaging sensor for long range observation
NASA Astrophysics Data System (ADS)
Bourree, Loig E.
2014-05-01
Identification of potential threats in low-light conditions through imaging is commonly achieved through closed-circuit television (CCTV) and surveillance cameras by combining the extended near infrared (NIR) response (800-1000 nm wavelengths) of the imaging sensor with NIR LED or laser illuminators. Consequently, camera systems typically used for purposes of long-range observation often require high-power lasers in order to generate sufficient photons on targets to acquire detailed images at night. While these systems may adequately identify targets at long range, the NIR illumination needed to achieve such functionality can easily be detected and therefore may not be suitable for covert applications. In order to reduce dependency on supplemental illumination in low-light conditions, the frame rate of the imaging sensors may be reduced to increase the photon integration time and thus improve the signal-to-noise ratio of the image. However, this may hinder the camera's ability to image moving objects with high fidelity. In order to address these particular drawbacks, PHOTONIS has developed a CMOS imaging sensor (CIS) with a pixel architecture and geometry designed specifically to overcome these issues in low-light-level imaging. By combining this CIS with field programmable gate array (FPGA)-based image processing electronics, PHOTONIS has achieved low-read-noise imaging with enhanced signal-to-noise ratio at quarter-moon illumination, all at standard video frame rates. The performance of this CIS is discussed herein and compared to other commercially available CMOS and CCD sensors for long-range observation applications.
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.
1990-01-01
Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
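As a point of reference for the "familiar Laplacian pyramid" mentioned above, a minimal software construction is sketched below; it is purely illustrative and not the focal-plane hardware scheme discussed in the abstract (the level count and blur width are assumptions).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(image, levels=4, sigma=1.0):
    """Build a simple Laplacian pyramid: each level stores the detail lost
    when the image is blurred and downsampled by 2."""
    image = image.astype(np.float64)
    pyramid = []
    current = image
    for _ in range(levels):
        blurred = gaussian_filter(current, sigma)
        down = blurred[::2, ::2]                                        # decimate by 2
        up = zoom(down, 2, order=1)[:current.shape[0], :current.shape[1]]
        pyramid.append(current - up)                                    # band-pass (Laplacian) level
        current = down
    pyramid.append(current)                                             # low-pass residual
    return pyramid
```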
Visual analysis of trash bin processing on garbage trucks in low resolution video
NASA Astrophysics Data System (ADS)
Sidla, Oliver; Loibner, Gernot
2015-03-01
We present a system for trash can detection and counting from a camera which is mounted on a garbage collection truck. A working prototype has been successfully implemented and tested with several hours of real-world video. The detection pipeline consists of HOG detectors for two trash can sizes, and meanshift tracking and low level image processing for the analysis of the garbage disposal process. Considering the harsh environment and unfavorable imaging conditions, the process already works well enough that very useful measurements can be extracted from the video data. The false positive/false negative rate of the full processing pipeline is about 5-6% at fully automatic operation. Video data of a full day (about 8 hrs) can be processed in about 30 minutes on a standard PC.
An orthogonal oriented quadrature hexagonal image pyramid
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J., Jr.
1987-01-01
An image pyramid has been developed with basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The pyramid operates on a hexagonal sample lattice. The set of seven basis functions consists of three even high-pass kernels, three odd high-pass kernels, and one low-pass kernel. The three even kernels are identified when rotated by 60 or 120 deg, and likewise for the odd. The seven basis functions occupy a point and a hexagon of six nearest neighbors on a hexagonal sample lattice. At the lowest level of the pyramid, the input lattice is the image sample lattice. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing sqrt(7) larger than the previous level, so that the number of coefficients is reduced by a factor of 7 at each level. The relationship between this image code and the processing architecture of the primate visual cortex is discussed.
PIFEX: An advanced programmable pipelined-image processor
NASA Technical Reports Server (NTRS)
Gennery, D. B.; Wilcox, B.
1985-01-01
PIFEX is a pipelined-image processor being built in the JPL Robotics Lab. It will operate on digitized raster-scanned images (at 60 frames per second for images up to about 300 by 400 and at lesser rates for larger images), performing a variety of operations simultaneously under program control. It thus is a powerful, flexible tool for image processing and low-level computer vision. It also has applications in other two-dimensional problems such as route planning for obstacle avoidance and the numerical solution of two-dimensional partial differential equations (although its low numerical precision limits its use in the latter field). The concept and design of PIFEX are described herein, and some examples of its use are given.
Low-level processing for real-time image analysis
NASA Technical Reports Server (NTRS)
Eskenazi, R.; Wilf, J. M.
1979-01-01
A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.
Knowledge-driven information mining in remote-sensing image archives
NASA Astrophysics Data System (ADS)
Datcu, M.; Seidel, K.; D'Elia, S.; Marchetti, P. G.
2002-05-01
Users in all domains require information or information-related services that are focused, concise, reliable, low cost and timely and which are provided in forms and formats compatible with the user's own activities. In the current Earth Observation (EO) scenario, the archiving centres generally only offer data, images and other "low level" products. The user's needs are being only partially satisfied by a number of, usually small, value-adding companies applying time-consuming (mostly manual) and expensive processes relying on the knowledge of experts to extract information from those data or images.
Zhan, Mei; Crane, Matthew M; Entchev, Eugeni V; Caballero, Antonio; Fernandes de Abreu, Diana Andrea; Ch'ng, QueeLim; Lu, Hang
2015-04-01
Quantitative imaging has become a vital technique in biological discovery and clinical diagnostics; a plethora of tools have recently been developed to enable new and accelerated forms of biological investigation. Increasingly, the capacity for high-throughput experimentation provided by new imaging modalities, contrast techniques, microscopy tools, microfluidics and computer controlled systems shifts the experimental bottleneck from the level of physical manipulation and raw data collection to automated recognition and data processing. Yet, despite their broad importance, image analysis solutions to address these needs have been narrowly tailored. Here, we present a generalizable formulation for autonomous identification of specific biological structures that is applicable for many problems. The process flow architecture we present here utilizes standard image processing techniques and the multi-tiered application of classification models such as support vector machines (SVM). These low-level functions are readily available in a large array of image processing software packages and programming languages. Our framework is thus both easy to implement at the modular level and provides specific high-level architecture to guide the solution of more complicated image-processing problems. We demonstrate the utility of the classification routine by developing two specific classifiers as a toolset for automation and cell identification in the model organism Caenorhabditis elegans. To serve a common need for automated high-resolution imaging and behavior applications in the C. elegans research community, we contribute a ready-to-use classifier for the identification of the head of the animal under bright field imaging. Furthermore, we extend our framework to address the pervasive problem of cell-specific identification under fluorescent imaging, which is critical for biological investigation in multicellular organisms or tissues. Using these examples as a guide, we envision the broad utility of the framework for diverse problems across different length scales and imaging methods.
Evidence of estrogen modulation on memory processes for emotional content in healthy young women.
Pompili, Assunta; Arnone, Benedetto; D'Amico, Mario; Federico, Paolo; Gasbarri, Antonella
2016-03-01
It is well accepted that emotional content can affect memory, interacting with the encoding and consolidation processes. The aim of the present study was to verify the effects of estrogens in the interplay of cognition and emotion. Images from the International Affective Pictures System, based on valence (pleasant, unpleasant and neutral), maintaining arousal constant, were viewed passively by two groups of young women in different cycle phases: a periovulatory group (PO), characterized by a high level of estrogens and a low level of progesterone, and an early follicular group (EF), characterized by low levels of both estrogens and progesterone. The electrophysiological responses to images were measured, and the P300 peak was considered. One week later, long-term memory was tested by means of free recall. Intra-group analysis showed that PO women had significantly better memory for positive images, while EF women showed significantly better memory for negative images. The comparison between groups revealed that women in the PO phase had better memory performance for positive pictures than women in the EF phase, while no significant differences were found for negative and neutral pictures. Consistent with the free recall results, the subjects in the PO group showed greater P300 amplitude, and shorter latency, for pleasant images compared with women in the EF group. Our results showed that the physiological hormonal fluctuation of estrogens during the menstrual cycle can influence memory, at the time of encoding, during the processing of emotional information. Copyright © 2015 Elsevier Ltd. All rights reserved.
Exploring an optimal wavelet-based filter for cryo-ET imaging.
Huang, Xinrui; Li, Sha; Gao, Song
2018-02-07
Cryo-electron tomography (cryo-ET) is one of the most advanced technologies for the in situ visualization of molecular machines by producing three-dimensional (3D) biological structures. However, cryo-ET imaging has two serious disadvantages, low dose and low image contrast, which result in high-resolution information being obscured by noise and image quality being degraded, and this causes errors in biological interpretation. The purpose of this research is to explore an optimal wavelet denoising technique to reduce noise in cryo-ET images. We perform tests using simulation data and design a filter using the optimum selected wavelet parameters (three-level decomposition, level-1 zeroed out, subband-dependent threshold, a soft-thresholding and spline-based discrete dyadic wavelet transform (DDWT)), which we call a modified wavelet shrinkage filter; this filter is suitable for noisy cryo-ET data. When testing using real cryo-ET experiment data, higher quality images and more accurate measures of a biological structure can be obtained with the modified wavelet shrinkage filter processing compared with conventional processing. Because the proposed method provides an inherent advantage when dealing with cryo-ET images, it can therefore extend the current state-of-the-art technology in assisting all aspects of cryo-ET studies: visualization, reconstruction, structural analysis, and interpretation.
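A rough software sketch of the described modified wavelet shrinkage filter is given below using PyWavelets: a three-level decomposition, the finest (level-1) subbands zeroed out, and a subband-dependent soft threshold on the remaining detail coefficients. The spline-based DDWT of the paper is not directly available in PyWavelets, so a biorthogonal spline-family wavelet ('bior2.2') and a MAD-based threshold are used here as stand-in assumptions.

```python
import numpy as np
import pywt

def modified_wavelet_shrinkage(image, wavelet='bior2.2', level=3, k=3.0):
    """Sketch of the described filter: 3-level decomposition, finest level
    zeroed out, subband-dependent soft threshold on the other detail bands."""
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=level)
    # coeffs = [approx, (H,V,D)_coarsest, ..., (H,V,D)_finest]
    new_coeffs = [coeffs[0]]
    for i, details in enumerate(coeffs[1:], start=1):
        if i == level:                     # level-1 (finest) subbands: zero out
            new_coeffs.append(tuple(np.zeros_like(d) for d in details))
        else:                              # subband-dependent soft threshold
            new_coeffs.append(tuple(
                pywt.threshold(d, k * np.median(np.abs(d)) / 0.6745, mode='soft')
                for d in details))
    return pywt.waverec2(new_coeffs, wavelet)
```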
The use of vision-based image quality metrics to predict low-light performance of camera phones
NASA Astrophysics Data System (ADS)
Hultgren, B.; Hertel, D.
2010-01-01
Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.
A fuzzy structural matching scheme for space robotics vision
NASA Technical Reports Server (NTRS)
Naka, Masao; Yamamoto, Hiromichi; Homma, Khozo; Iwata, Yoshitaka
1994-01-01
In this paper, we propose a new fuzzy structural matching scheme for space stereo vision which is based on the fuzzy properties of regions of images and effectively reduces the computational burden in the following low level matching process. Three dimensional distance images of a space truss structural model are estimated using this scheme from stereo images sensed by Charge Coupled Device (CCD) TV cameras.
NASA Astrophysics Data System (ADS)
Apelian, Clément; Gastaud, Clément; Boccara, A. Claude
2017-02-01
For a large number of cancer surgeries, the lack of reliable intraoperative diagnosis leads to reoperations or bad outcomes for the patients. To deliver better diagnosis, we developed Dynamic Full Field OCT (D-FFOCT) as a complement to FFOCT. FFOCT already presents interesting results for cancer diagnosis, e.g. Mohs surgery, and reaches 96% accuracy on prostate cancer. D-FFOCT accesses the dynamic processes of metabolism and gives new tools to diagnose the state of a tissue at the cellular level to complement FFOCT contrast. We developed a processing framework that intends to maximize the information provided by the FFOCT technology as well as D-FFOCT and to synthesize this as a meaningful image. We use different time-domain processing steps to generate metrics (standard deviation of time signals, decorrelation times and more) and spatial processing to sort out structures and the corresponding imaging modality that is most appropriate. Sorting was achieved through quadratic discriminant analysis in an N-dimensional parametric space corresponding to our metrics. Combining the best imaging modalities for each structure leads to a rich morphology image. This image displaying the morphology is then colored to represent the dynamic behavior of these structures (slow or fast) so it can be quickly analyzed by doctors. Therefore, we achieved a micron-resolved image that combines the FFOCT ability to image fixed and highly backscattering structures with the D-FFOCT ability to image weakly scattering cellular-level details. We believe that this morphological contrast close to histology and the dynamic behavior contrast will push the limits of intraoperative diagnosis further.
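To make the time-domain metrics concrete, the sketch below computes two of them, the per-pixel standard deviation of the time signal and a simple 1/e decorrelation time, from a (time, y, x) image stack. This is a hedged illustration only; the actual D-FFOCT pipeline uses additional metrics and fuses them with quadratic discriminant analysis as described above.

```python
import numpy as np

def dynamic_metrics(stack):
    """Per-pixel temporal standard deviation and 1/e decorrelation time
    from a (time, y, x) image stack (illustrative sketch)."""
    stack = stack.astype(np.float64)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)                         # "activity" amplitude per pixel
    centered = stack - mean
    var = (centered ** 2).mean(axis=0) + 1e-12
    n_lags = stack.shape[0] // 2
    decorr_time = np.full(mean.shape, n_lags, dtype=np.float64)
    found = np.zeros(mean.shape, dtype=bool)
    for lag in range(1, n_lags):
        acf = (centered[:-lag] * centered[lag:]).mean(axis=0) / var
        newly = (~found) & (acf < 1.0 / np.e)
        decorr_time[newly] = lag                    # first lag below 1/e
        found |= newly
    return std, decorr_time
```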
Laser doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing.
He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P
2013-09-18
The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. To obtain a space efficient design over 64 × 64 pixels means that standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low resource implementation of the digital processor enables on-chip processing and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.
Image processing operations achievable with the Microchannel Spatial Light Modulator
NASA Astrophysics Data System (ADS)
Warde, C.; Fisher, A. D.; Thackara, J. I.; Weiss, A. M.
1980-01-01
The Microchannel Spatial Light Modulator (MSLM) is a versatile, optically-addressed, highly-sensitive device that is well suited for low-light-level, real-time, optical information processing. It consists of a photocathode, a microchannel plate (MCP), a planar acceleration grid, and an electro-optic plate in proximity focus. A framing rate of 20 Hz with full modulation depth, and 100 Hz with 20% modulation depth has been achieved in a vacuum-demountable LiTaO3 device. A halfwave exposure sensitivity of 2.2 mJ/sq cm and an optical information storage time of more than 2 months have been achieved in a similar gridless LiTaO3 device employing a visible photocathode. Image processing operations such as analog and digital thresholding, real-time image hard clipping, contrast reversal, contrast enhancement, image addition and subtraction, and binary-level logic operations such as AND, OR, XOR, and NOR can be achieved with this device. This collection of achievable image processing characteristics makes the MSLM potentially useful for a number of smart sensor applications.
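The operations listed for the MSLM are all simple point operations; the numpy sketch below gives digital equivalents (thresholding, hard clipping, contrast reversal, addition, subtraction, XOR) purely as an illustration of what the device performs optically, not as a model of the device itself. The threshold value is an assumption.

```python
import numpy as np

def mslm_style_ops(a, b, threshold=128):
    """Digital analogues of the point operations listed for the MSLM."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    binary_a = a >= threshold                        # thresholding
    binary_b = b >= threshold
    clipped = np.where(a >= threshold, 255.0, 0.0)   # hard clipping
    reversed_contrast = 255.0 - a                    # contrast reversal
    added = np.clip(a + b, 0, 255)                   # image addition
    subtracted = np.clip(a - b, 0, 255)              # image subtraction
    xor = np.logical_xor(binary_a, binary_b)         # binary-level XOR
    return clipped, reversed_contrast, added, subtracted, xor
```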
Buschow, Christian; Charo, Jehad; Anders, Kathleen; Loddenkemper, Christoph; Jukica, Ana; Alsamah, Wisam; Perez, Cynthia; Willimsky, Gerald; Blankenstein, Thomas
2010-03-15
Visualizing oncogene/tumor Ag expression by noninvasive imaging is of great interest for understanding processes of tumor development and therapy. We established transgenic (Tg) mice conditionally expressing a fusion protein of the SV40 large T Ag and luciferase (TagLuc) that allows monitoring of oncogene/tumor Ag expression by bioluminescent imaging upon Cre recombinase-mediated activation. Independent of Cre-mediated recombination, the TagLuc gene was expressed at low levels in different tissues, probably due to the leakiness of the stop cassette. The level of spontaneous TagLuc expression, detected by bioluminescent imaging, varied between the different Tg lines, depended on the nature of the Tg expression cassette, and correlated with Tag-specific CTL tolerance. Following liver-specific Cre-loxP site-mediated excision of the stop cassette that separated the promoter from the TagLuc fusion gene, hepatocellular carcinoma development was visualized. The ubiquitous low level TagLuc expression caused the failure of transferred effector T cells to reject Tag-expressing tumors rather than causing graft-versus-host disease. This model may be useful to study different levels of tolerance, monitor tumor development at an early stage, and rapidly visualize the efficacy of therapeutic intervention versus potential side effects of low-level Ag expression in normal tissues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lance, C.; Eather, R.
1993-09-30
A low-light-level monochromatic imaging system was designed and fabricated which was optimized to detect and record optical emissions associated with high-power rf heating of the ionosphere. The instrument is capable of detecting very low intensities, of the order of 1 Rayleigh, from typical ionospheric atomic and molecular emissions. This is achieved through co-adding of ON images during heater pulses and subtraction of OFF (background) images between pulses. Images can be displayed and analyzed in real time and stored on optical disc for later analysis. Full image processing software is provided which was customized for this application and uses menu or mouse user interaction.
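The ON/OFF co-adding and background subtraction can be summarized in a few lines. The sketch below assumes the heater-ON and heater-OFF frames are already grouped into arrays; the scaling of the background for unequal frame counts is an added assumption.

```python
import numpy as np

def heater_pulse_signal(on_frames, off_frames):
    """Co-add frames captured during heater-ON pulses and subtract the
    co-added OFF (background) frames; inputs are (n, y, x) arrays."""
    on_sum = np.sum(on_frames, axis=0, dtype=np.float64)    # co-added ON images
    off_sum = np.sum(off_frames, axis=0, dtype=np.float64)  # co-added background
    # Scale the background if the ON/OFF frame counts differ.
    scale = len(on_frames) / max(len(off_frames), 1)
    return on_sum - scale * off_sum
```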
Registration of adaptive optics corrected retinal nerve fiber layer (RNFL) images.
Ramaswamy, Gomathy; Lombardo, Marco; Devaney, Nicholas
2014-06-01
Glaucoma is the leading cause of preventable blindness in the western world. Investigation of high-resolution retinal nerve fiber layer (RNFL) images in patients may lead to new indicators of its onset. Adaptive optics (AO) can provide diffraction-limited images of the retina, providing new opportunities for earlier detection of neuroretinal pathologies. However, precise processing is required to correct for three effects in sequences of AO-assisted, flood-illumination images: uneven illumination, residual image motion and image rotation. This processing can be challenging for images of the RNFL due to their low contrast and lack of clearly noticeable features. Here we develop specific processing techniques and show that their application leads to improved image quality on the nerve fiber bundles. This in turn improves the reliability of measures of fiber texture such as the correlation of Gray-Level Co-occurrence Matrix (GLCM).
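For the texture measure mentioned at the end, a minimal GLCM-correlation computation with scikit-image is sketched below; the distances, angles and 8-bit quantization are illustrative choices, and older scikit-image releases expose the same functions under the names greycomatrix/greycoprops.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # greycomatrix/greycoprops in older versions

def rnfl_glcm_correlation(image, distances=(1, 2, 4), angles=(0, np.pi / 2)):
    """GLCM correlation texture measure on an 8-bit grayscale AO image
    (illustrative sketch of the fiber-texture measure described above)."""
    glcm = graycomatrix(image.astype(np.uint8), distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, 'correlation')   # one value per (distance, angle) pair
```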
Shaw, S L; Salmon, E D; Quatrano, R S
1995-12-01
In this report, we describe a relatively inexpensive method for acquiring, storing and processing light microscope images that combines the advantages of video technology with the powerful medium now termed digital photography. Digital photography refers to the recording of images as digital files that are stored, manipulated and displayed using a computer. This report details the use of a gated video-rate charge-coupled device (CCD) camera and a frame grabber board for capturing 256 gray-level digital images from the light microscope. This camera gives high-resolution bright-field, phase contrast and differential interference contrast (DIC) images but, also, with gated on-chip integration, has the capability to record low-light level fluorescent images. The basic components of the digital photography system are described, and examples are presented of fluorescence and bright-field micrographs. Digital processing of images to remove noise, to enhance contrast and to prepare figures for printing is discussed.
Varying ultrasound power level to distinguish surgical instruments and tissue.
Ren, Hongliang; Anuraj, Banani; Dupont, Pierre E
2018-03-01
We investigate a new framework of surgical instrument detection based on power-varying ultrasound images with simple and efficient pixel-wise intensity processing. Without using complicated feature extraction methods, we identified the instrument with an estimated optimal power level and by comparing pixel values of varying transducer power level images. The proposed framework exploits the physics of the ultrasound imaging system by varying the transducer power level to effectively distinguish metallic surgical instruments from tissue. This power-varying image guidance is motivated by our observations that ultrasound imaging at different power levels exhibits different contrast enhancement capabilities between tissue and instruments in ultrasound-guided robotic beating-heart surgery. Using lower transducer power levels (ranging from 40 to 75% of the rated lowest ultrasound power levels of the two tested ultrasound scanners) can effectively suppress the strong imaging artifacts from metallic instruments and thus can be utilized together with the images from normal transducer power levels to enhance the separability between instrument and tissue, improving intraoperative instrument tracking accuracy from the acquired noisy ultrasound volumetric images. We performed experiments in phantoms and ex vivo hearts in water tank environments. The proposed multi-level power-varying ultrasound imaging approach can identify robotic instruments of high acoustic impedance from low-signal-to-noise-ratio ultrasound images by power adjustments.
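A very simplified pixel-wise comparison between a low-power and a normal-power acquisition is sketched below. The way the "tissue-typical" response is estimated and the threshold value are assumptions made for illustration; they are not the authors' calibrated procedure.

```python
import numpy as np

def instrument_candidates(low_power_img, normal_power_img, deviation=0.3):
    """Flag pixels whose response to the transducer power change deviates
    strongly from the tissue-typical response (illustrative sketch)."""
    low = low_power_img.astype(np.float64)
    normal = normal_power_img.astype(np.float64)
    change = (normal - low) / (normal + 1e-6)       # relative response to the power change
    tissue_typical = np.median(change)              # assumed tissue-dominant background
    return np.abs(change - tissue_typical) > deviation
```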
Local spatio-temporal analysis in vision systems
NASA Astrophysics Data System (ADS)
Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David
1994-07-01
The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations, (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
Computer vision camera with embedded FPGA processing
NASA Astrophysics Data System (ADS)
Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel
2000-03-01
Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium size one equivalent to 25,000 logic gates. The device is connected to two high speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
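As a software reference for the multi-scale Laplacian-of-Gaussian edge detection mapped onto the FPGA, the sketch below filters an image at several scales and marks zero-crossings of the response; the scale values and threshold are assumptions, and this is of course not the hardware architecture itself.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def multiscale_log_edges(image, sigmas=(1.0, 2.0, 4.0), threshold=0.0):
    """Multi-scale Laplacian-of-Gaussian edge detection: one boolean edge
    map per scale, taken at zero-crossings of the LoG response."""
    image = image.astype(np.float64)
    edges = []
    for sigma in sigmas:
        log = gaussian_laplace(image, sigma)
        zc = np.zeros_like(log, dtype=bool)
        # Zero-crossings: sign changes between horizontal or vertical neighbours.
        zc[:, :-1] |= (log[:, :-1] * log[:, 1:]) < 0
        zc[:-1, :] |= (log[:-1, :] * log[1:, :]) < 0
        edges.append(zc & (np.abs(log) > threshold))
    return edges
```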
Image segmentation via foreground and background semantic descriptors
NASA Astrophysics Data System (ADS)
Yuan, Ding; Qiang, Jingjing; Yin, Jihao
2017-09-01
In the field of image processing, it has been a challenging task to obtain a complete foreground that is not uniform in color or texture. Unlike other methods, which segment the image by only using low-level features, we present a segmentation framework, in which high-level visual features, such as semantic information, are used. First, the initial semantic labels were obtained by using the nonparametric method. Then, a subset of the training images, with a similar foreground to the input image, was selected. Consequently, the semantic labels could be further refined according to the subset. Finally, the input image was segmented by integrating the object affinity and refined semantic labels. State-of-the-art performance was achieved in experiments with the challenging MSRC 21 dataset.
Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight
NASA Technical Reports Server (NTRS)
Suorsa, Raymond; Sridhar, Banavar
1991-01-01
A validation facility being used at the NASA Ames Research Center is described which is aimed at testing vision based obstacle detection and range estimation algorithms suitable for low level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6 degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.
Kovalenko, Lyudmyla Y; Chaumon, Maximilien; Busch, Niko A
2012-07-01
Semantic processing of verbal and visual stimuli has been investigated in semantic violation or semantic priming paradigms in which a stimulus is either related or unrelated to a previously established semantic context. A hallmark of semantic priming is the N400 event-related potential (ERP)--a deflection of the ERP that is more negative for semantically unrelated target stimuli. The majority of studies investigating the N400 and semantic integration have used verbal material (words or sentences), and standardized stimulus sets with norms for semantic relatedness have been published for verbal but not for visual material. However, semantic processing of visual objects (as opposed to words) is an important issue in research on visual cognition. In this study, we present a set of 800 pairs of semantically related and unrelated visual objects. The images were rated for semantic relatedness by a sample of 132 participants. Furthermore, we analyzed low-level image properties and matched the two semantic categories according to these features. An ERP study confirmed the suitability of this image set for evoking a robust N400 effect of semantic integration. Additionally, using a general linear modeling approach of single-trial data, we also demonstrate that low-level visual image properties and semantic relatedness are in fact only minimally overlapping. The image set is available for download from the authors' website. We expect that the image set will facilitate studies investigating mechanisms of semantic and contextual processing of visual stimuli.
Liu, Haiyun; Tian, Tian; Ji, Dandan; Ren, Na; Ge, Shenguang; Yan, Mei; Yu, Jinghua
2016-11-15
In situ imaging of miRNA in living cells could help us to monitor the miRNA expression in real time and obtain accurate information for studying miRNA related bioprocesses and disease. Given the low-level expression of miRNA, amplification strategies for intracellular miRNA are imperative. Here, we propose an amplification strategy in a non-destructive, enzyme-free manner in living cells using catalyzed hairpin assembly (CHA) based on graphene oxide (GO) for cellular miRNA imaging. The enzyme-free CHA exhibits stringent recognition and excellent signal amplification of miRNA in the living cells. GO is a good candidate as a fluorescence quencher and cellular carrier. Taking advantage of the CHA and GO, we can monitor miRNA at low levels in living cells in a simple, sensitive and real-time manner. Finally, imaging of miRNAs in cells with different expression levels is realized. The novel method could supply an effective tool to visualize intracellular low-level miRNAs and help us to further understand the role of miRNAs in cellular processes. Copyright © 2016 Elsevier B.V. All rights reserved.
Accelerated dynamic EPR imaging using fast acquisition and compressive recovery
NASA Astrophysics Data System (ADS)
Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L.
2016-12-01
Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of the sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image employing less than 10 s of scan. In addition to image fidelity, the time constants derived from FASTAR also match closely to the ground truth even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques.
Research study on stellar X-ray imaging experiment, volume 1
NASA Technical Reports Server (NTRS)
Wilson, H. H.; Vanspeybroeck, L. P.
1972-01-01
The use of microchannel plates as focal plane readout devices and the evaluation of mirrors for X-ray telescopes applied to stellar X-ray imaging is discussed. The microchannel plate outputs were either imaged on a phosphor screen which was viewed by a low light level vidicon or on a wire array which was read out by digitally processing the output of a charge division network attached to the wires. A service life test which was conducted on two image intensifiers is described.
NASA Astrophysics Data System (ADS)
Dostal, P.; Krasula, L.; Klima, M.
2012-06-01
Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Spatial non-uniformity means that different locations in an image are of different importance in terms of perception of the image. In other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest. The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM utilizes low-level features to assign a different importance to each location in the image. Still, none of these objective metrics utilize the analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROI were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROI in fine quality while the rest of the image is reconstructed with low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.
Textured digital elevation model formation from low-cost UAV LADAR/digital image data
NASA Astrophysics Data System (ADS)
Bybee, Taylor C.; Budge, Scott E.
2015-05-01
Textured digital elevation models (TDEMs) have valuable use in precision agriculture, situational awareness, and disaster response. However, scientific-quality models are expensive to obtain using conventional aircraft-based methods. The cost of creating an accurate textured terrain model can be reduced by using a low-cost (<$20k) UAV system fitted with ladar and electro-optical (EO) sensors. A texel camera fuses calibrated ladar and EO data upon simultaneous capture, creating a texel image. This eliminates the problem of fusing the data in a post-processing step and enables both 2D- and 3D-image registration techniques to be used. This paper describes formation of TDEMs using simulated data from a small UAV gathering swaths of texel images of the terrain below. Because the UAV is low-cost, only coarse knowledge of position and attitude is available, and thus both 2D- and 3D-image registration techniques must be used to register adjacent swaths of texel imagery to create a TDEM. The process of creating an aggregate texel image (a TDEM) from many smaller texel image swaths is described. The algorithm is seeded with the rough estimate of position and attitude of each capture. Details such as the required amount of texel image overlap, registration models, simulated flight patterns (level and turbulent), and texture image formation are presented. In addition, examples of such TDEMs are shown and analyzed for accuracy.
A SPAD-based 3D imager with in-pixel TDC for 145ps-accuracy ToF measurement
NASA Astrophysics Data System (ADS)
Vornicu, I.; Carmona-Galán, R.; Rodríguez-Vázquez, Á.
2015-03-01
The design and measurements of a CMOS 64 × 64 Single-Photon Avalanche-Diode (SPAD) array with in-pixel Time-to-Digital Converter (TDC) are presented. This paper thoroughly describes the imager at architectural and circuit level with particular emphasis on the characterization of the SPAD-detector ensemble. It is aimed at 2D imaging and 3D image reconstruction in low light environments. It has been fabricated in a standard 0.18μm CMOS process, i.e., without high voltage or low noise features. In these circumstances, we are facing a high number of dark counts and low photon detection efficiency. Several techniques have been applied to ensure proper functionality, namely: i) time-gated SPAD front-end with fast active-quenching/recharge circuit featuring tunable dead-time, ii) reverse start-stop scheme, iii) programmable time resolution of the TDC based on a novel pseudo-differential voltage controlled ring oscillator with fast start-up, iv) a global calibration scheme against temperature and process variation. Measurement results of individual SPAD-TDC ensemble jitter, array uniformity and time resolution programmability are also provided.
A Scientific Workflow Platform for Generic and Scalable Object Recognition on Medical Images
NASA Astrophysics Data System (ADS)
Möller, Manuel; Tuot, Christopher; Sintek, Michael
In the research project THESEUS MEDICO we aim at a system combining medical image information with semantic background knowledge from ontologies to give clinicians fully cross-modal access to biomedical image repositories. Therefore joint efforts have to be made in more than one dimension: Object detection processes have to be specified in which an abstraction is performed starting from low-level image features across landmark detection utilizing abstract domain knowledge up to high-level object recognition. We propose a system based on a client-server extension of the scientific workflow platform Kepler that assists the collaboration of medical experts and computer scientists during development and parameter learning.
Content Based Image Retrieval by Using Color Descriptor and Discrete Wavelet Transform.
Ashraf, Rehan; Ahmed, Mudassar; Jabbar, Sohail; Khalid, Shehzad; Ahmad, Awais; Din, Sadia; Jeon, Gwangil
2018-01-25
Due to recent developments in technology, the complexity of multimedia has increased significantly, and the retrieval of similar multimedia content remains an open research problem. Content-Based Image Retrieval (CBIR) is a process that provides a framework for image search, and low-level visual features are commonly used to retrieve images from an image database. The basic requirement in any image retrieval process is to sort the images by their visual similarity. Color, shape and texture are examples of low-level image features. Features play a significant role in image processing. The compact representation of an image is known as a feature vector, and feature extraction techniques are applied to obtain features that are useful in classifying and recognizing images. Because features define the behavior of an image, they determine its storage requirements, its efficiency in classification and, of course, the time consumed. In this paper, we discuss various types of features and feature extraction techniques, and explain in which scenario each feature extraction technique works best. The effectiveness of the CBIR approach is fundamentally based on feature extraction. In image processing tasks such as object recognition and image retrieval, the feature descriptor is one of the most essential steps. The main idea of CBIR is to search a dataset for images related to a query image using distance metrics. The proposed method performs image retrieval based on YCbCr color with a Canny edge histogram and the discrete wavelet transform. The combination of the edge histogram and the discrete wavelet transform increases the performance of the image retrieval framework for content-based search. The performance of different wavelets is also compared to determine the suitability of a specific wavelet function for image retrieval. The proposed algorithm is trained and tested on the Wang image database. For image retrieval, Artificial Neural Networks (ANN) are used and applied to a standard dataset in the CBIR domain. The performance of the proposed descriptors is assessed by computing both precision and recall values and compared with other published methods to demonstrate the superiority of our method. The efficiency and effectiveness of the proposed approach outperform existing research in terms of average precision and recall values.
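A rough sketch of the kind of feature vector described (YCbCr color statistics, a Canny edge-orientation histogram, and level-1 DWT subband energies) is given below using OpenCV and PyWavelets. The thresholds, bin counts and the 'db2' wavelet are illustrative assumptions, not the parameters used in the paper.

```python
import cv2
import numpy as np
import pywt

def cbir_feature_vector(bgr_image):
    """Illustrative CBIR feature vector: YCbCr color statistics, a Canny
    edge-orientation histogram, and level-1 DWT subband energies."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)   # OpenCV orders channels Y, Cr, Cb
    color_stats = np.concatenate([ycrcb.reshape(-1, 3).mean(axis=0),
                                  ycrcb.reshape(-1, 3).std(axis=0)])
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    angles = np.arctan2(gy, gx)[edges > 0]                 # orientations at edge pixels
    edge_hist, _ = np.histogram(angles, bins=8, range=(-np.pi, np.pi))
    edge_hist = edge_hist / max(edge_hist.sum(), 1)
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(np.float64), 'db2')
    dwt_energy = [np.mean(np.abs(band)) for band in (cA, cH, cV, cD)]
    return np.concatenate([color_stats, edge_hist, dwt_energy])
```

Such vectors would then be compared with a distance metric, or fed to an ANN classifier, to rank database images against the query.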
NASA Astrophysics Data System (ADS)
Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan
2015-03-01
Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions with low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, as obtained from low-cost global positioning system and inertial measurement unit sensors.
Wavelet compression of noisy tomographic images
NASA Astrophysics Data System (ADS)
Kappeler, Christian; Mueller, Stefan P.
1995-09-01
3D data acquisition is increasingly used in positron emission tomography (PET) to collect a larger fraction of the emitted radiation. A major practical difficulty with data storage and transmission in 3D-PET is the large size of the data sets; a typical dynamic study contains about 200 Mbyte of data. PET images inherently have a high level of photon noise and are therefore usually evaluated after being processed by a smoothing filter. In this work we examined lossy compression schemes under the postulate that they must not induce image modifications exceeding those resulting from low-pass filtering. The standard we refer to is the Hanning filter. Resolution and inhomogeneity serve as figures of merit for quantification of image quality. The images to be compressed are transformed to a wavelet representation using Daubechies-12 wavelets and compressed after filtering by thresholding. We do not include further compression by quantization and coding here. Achievable compression factors at this level of processing are thirty to fifty.
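The following is a minimal sketch, under assumed parameters (decomposition level and threshold value), of the wavelet-thresholding front end described above, using PyWavelets with Daubechies-12; the paper's actual thresholds and filtering pipeline may differ.

```python
# Minimal sketch (assumed parameters) of wavelet thresholding as a lossy
# compression front end: transform with Daubechies-12, zero out small
# coefficients, and report the fraction of coefficients retained.
import numpy as np
import pywt

def wavelet_threshold(image, wavelet='db12', level=4, thresh=20.0):
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    kept = np.abs(arr) >= thresh
    arr = arr * kept                                    # hard thresholding
    compression_factor = arr.size / max(kept.sum(), 1)  # e.g. 30-50 in the paper
    recon = pywt.waverec2(
        pywt.array_to_coeffs(arr, slices, output_format='wavedec2'), wavelet)
    return recon, compression_factor
```

Further compression by quantization and entropy coding, which the paper explicitly excludes, would follow this thresholding step in a full codec.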
Digital image processing of bone - Problems and potentials
NASA Technical Reports Server (NTRS)
Morey, E. R.; Wronski, T. J.
1980-01-01
The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
Miyamoto, N; Ishikawa, M; Sutherland, K; Suzuki, R; Matsuura, T; Takao, S; Toramatsu, C; Nihongi, H; Shimizu, S; Onimaru, R; Umegaki, K; Shirato, H
2012-06-01
In the real-time tumor-tracking radiotherapy system, fiducial markers are detected by X-ray fluoroscopy. The fluoroscopic parameters should be kept as low as possible in order to reduce unnecessary imaging dose. However, the fiducial markers may not be recognized due to the effect of statistical noise in low-dose imaging. Image processing is envisioned as a solution to improve image quality and to maintain tracking accuracy. In this study, a recursive image filter adapted to target motion is proposed. A fluoroscopy system was used for the experiment. A spherical gold marker was used as a fiducial marker. About 450 fluoroscopic images of the marker were recorded. In order to mimic respiratory motion of the marker, the images were shifted sequentially. The tube voltage, current and exposure duration were fixed at 65 kV, 50 mA and 2.5 msec as the low-dose imaging condition, respectively; the tube current was 100 mA for high-dose imaging. A pattern recognition score (PRS) ranging from 0 to 100 and the image registration error were investigated by performing template pattern matching on each sequential image. The results with and without image processing were compared. In low-dose imaging, the image registration error and the PRS without image processing were 2.15±1.21 pixel and 46.67±6.40, respectively; with image processing they were 1.48±0.82 pixel and 67.80±4.51, respectively. There was no significant difference in the image registration error and the PRS between the results of low-dose imaging with image processing and those of high-dose imaging without image processing. The results showed that the recursive filter was effective for maintaining marker tracking stability and accuracy in low-dose fluoroscopy. © 2012 American Association of Physicists in Medicine.
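A minimal sketch of the general idea of a motion-adapted recursive filter (not the authors' algorithm): the previous filtered frame is shifted by the estimated marker motion and blended with the current noisy frame. The blending weight alpha and the integer-pixel shifts are assumptions.

```python
# Illustrative sketch of a motion-adapted recursive filter for fluoroscopy:
# the previous filtered frame is shifted by the estimated marker motion and
# blended with the current noisy frame.  The weight alpha is an assumption.
import numpy as np

def recursive_filter(frames, shifts, alpha=0.5):
    """frames: list of 2-D arrays; shifts: per-frame integer (dy, dx) marker motion."""
    filtered = frames[0].astype(np.float64)
    out = [filtered]
    for frame, (dy, dx) in zip(frames[1:], shifts[1:]):
        shifted = np.roll(filtered, (dy, dx), axis=(0, 1))  # motion compensation
        filtered = alpha * frame + (1.0 - alpha) * shifted  # recursive averaging
        out.append(filtered)
    return out
```

Blending along the motion trajectory averages photon noise over time without smearing the moving marker, which is why the PRS improves at low dose.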
Active confocal imaging for visual prostheses
Jung, Jae-Hyun; Aloni, Doron; Yitzhaky, Yitzhak; Peli, Eli
2014-01-01
There are encouraging advances in prosthetic vision for the blind, including retinal and cortical implants, and other “sensory substitution devices” that use tactile or electrical stimulation. However, they all have low resolution, a limited visual field, and can display only a few gray levels (limited dynamic range), severely restricting their utility. To overcome these limitations, image processing or the imaging system could emphasize objects of interest and suppress background clutter. We propose an active confocal imaging system based on light-field technology that will enable a blind user of any visual prosthesis to efficiently scan, focus on, and “see” only an object of interest while suppressing interference from background clutter. The system captures three-dimensional scene information using a light-field sensor and displays only the in-focus plane and the objects in it. After capturing a confocal image, a de-cluttering process removes the clutter based on blur difference. In preliminary experiments we verified the positive impact of confocal-based background clutter removal on the recognition of objects in low-resolution, limited-dynamic-range simulated phosphene images. Using a custom-made multiple-camera system, we confirmed that the concept of a confocal de-cluttered image can be realized effectively using light-field imaging. PMID:25448710
Imaging proteins at the single-molecule level.
Longchamp, Jean-Nicolas; Rauschenbach, Stephan; Abb, Sabine; Escher, Conrad; Latychevskaia, Tatiana; Kern, Klaus; Fink, Hans-Werner
2017-02-14
Imaging single proteins has been a long-standing ambition for advancing various fields in natural science, as for instance structural biology, biophysics, and molecular nanotechnology. In particular, revealing the distinct conformations of an individual protein is of utmost importance. Here, we show the imaging of individual proteins and protein complexes by low-energy electron holography. Samples of individual proteins and protein complexes on ultraclean freestanding graphene were prepared by soft-landing electrospray ion beam deposition, which allows chemical- and conformational-specific selection and gentle deposition. Low-energy electrons do not induce radiation damage, which enables acquiring subnanometer resolution images of individual proteins (cytochrome C and BSA) as well as of protein complexes (hemoglobin), which are not the result of an averaging process.
Accelerated dynamic EPR imaging using fast acquisition and compressive recovery.
Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L
2016-12-01
Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo, including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image requiring less than 10 s of scan time. In addition to image fidelity, the time constants derived from FASTAR also match closely to the ground truth even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques. Copyright © 2016 Elsevier Inc. All rights reserved.
A portable low-cost long-term live-cell imaging platform for biomedical research and education.
Walzik, Maria P; Vollmar, Verena; Lachnit, Theresa; Dietz, Helmut; Haug, Susanne; Bachmann, Holger; Fath, Moritz; Aschenbrenner, Daniel; Abolpour Mofrad, Sepideh; Friedrich, Oliver; Gilbert, Daniel F
2015-02-15
Time-resolved visualization and analysis of slow dynamic processes in living cells has revolutionized many aspects of in vitro cellular studies. However, existing technology applied to time-resolved live-cell microscopy is often immobile and costly and requires a high level of skill to use and maintain. These factors limit its utility for field research and educational purposes. The recent availability of rapid prototyping technology makes it possible to quickly and easily engineer purpose-built alternatives to conventional research infrastructure which are low-cost and user-friendly. In this paper we describe the prototype of a fully automated low-cost, portable live-cell imaging system for time-resolved label-free visualization of dynamic processes in living cells. The device is light-weight (3.6 kg), small (22 × 22 × 22 cm) and extremely low-cost (<€1250). We demonstrate its potential for biomedical use by long-term imaging of recombinant HEK293 cells under varying culture conditions and validate its ability to generate time-resolved data of high quality, allowing for analysis of time-dependent processes in living cells. While this work focuses on long-term imaging of mammalian cells, the presented technology could also be adapted for use with other biological specimens and provides a general example of rapidly prototyped low-cost biosensor technology for application in the life sciences and education. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Active vision in satellite scene analysis
NASA Technical Reports Server (NTRS)
Naillon, Martine
1994-01-01
In earth observation or planetary exploration it is necessary to have more and more autonomous systems, able to adapt to unpredictable situations. This imposes the use, in artificial systems, of new concepts in cognition, based on the fact that perception should not be separated from the recognition and decision-making levels. This means that low-level signal processing (perception level) should interact with symbolic and high-level processing (decision level). This paper describes the new concept of active vision, implemented in Distributed Artificial Intelligence by Dassault Aviation following a 'structuralist' principle. An application to spatial image interpretation is given, oriented toward flexible robotics.
Tan, Charmain S.; Fuller-Tyszkiewicz, Matthew; Utpala, Ranjani; Yeung, Victoria Wai Lan; De Paoli, Tara; Loughan, Stephen; Krug, Isabel
2016-01-01
Objective: To assess differences in trait objectification measures and eating pathology between Australian Caucasians and Asian women living in Australia and in Hong Kong with high and low levels of western cultural identification (WCI), and to examine whether exposure to objectifying images had an effect on state self-objectification. A further aim was to assess, using path analyses, whether an extended version of the objectification model, including thin-ideal internalization, differed depending on the level of WCI. Method: A total of 424 participants comprising 162 Australian Caucasians and 262 Asians (n = 183 currently residing in Australia and n = 79 living in Hong Kong) took part in the study. Of the overall Asian sample, 133 individuals were classified as high-WCI and 129 participants as low-WCI. Participants were randomly allocated into one of two conditions, presenting either objectifying images of attractive and thin Asian and Caucasian female models (objectification group, n = 204), or showing neutral images of objects (e.g., chairs, tables; control group, n = 220). Subsequently, participants were asked to complete a series of questionnaires assessing objectification processes and eating pathology. Results: Findings revealed that the Caucasian group presented with significantly higher internalization and body surveillance scores than either of the two Asian groups and also revealed higher scores on trait self-objectification than the low-WCI Asian sample. Regarding the effects of objectifying images on state self-objectification, we found that ratings were higher after exposure to images of women than to control objects for all groups. Finally, multi-group analyses revealed that our revised objectification model functioned equally across the Caucasian and the high-WCI Asian group, but differed between the Caucasian and the low-WCI Asian group. Conclusion: Our findings indicate that individuals with varying levels of WCI might respond differently to self-objectification processes. Levels of WCI should therefore be taken into consideration when working with women from different cultural backgrounds. PMID:27790176
Menzel, Claudia; Hayn-Leichsenring, Gregor U; Langner, Oliver; Wiese, Holger; Redies, Christoph
2015-01-01
We investigated whether low-level processed image properties that are shared by natural scenes and artworks - but not veridical face photographs - affect the perception of facial attractiveness and age. Specifically, we considered the slope of the radially averaged Fourier power spectrum in a log-log plot. This slope is a measure of the distribution of spatial frequency power in an image. Images of natural scenes and artworks possess - compared to face images - a relatively shallow slope (i.e., increased high spatial frequency power). Since aesthetic perception might be based on the efficient processing of images with natural scene statistics, we assumed that the perception of facial attractiveness might also be affected by these properties. We calculated the Fourier slope and other beauty-associated measurements in face images and correlated them with ratings of attractiveness and age of the depicted persons (Study 1). We found that Fourier slope - in contrast to the other tested image properties - did not predict attractiveness ratings when we controlled for age. In Study 2A, we overlaid face images with random-phase patterns with different statistics. Patterns with a slope similar to those in natural scenes and artworks resulted in lower attractiveness and higher age ratings. In Studies 2B and 2C, we directly manipulated the Fourier slope of face images and found that images with shallower slopes were rated as more attractive. Additionally, attractiveness of unaltered faces was affected by the Fourier slope of a random-phase background (Study 3). Faces in front of backgrounds with statistics similar to natural scenes and faces were rated as more attractive. We conclude that facial attractiveness ratings are affected by specific image properties. An explanation might be the efficient coding hypothesis.
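For readers who want to reproduce the statistic, a short sketch of computing the slope of the radially averaged Fourier power spectrum in a log-log plot is given below; the frequency range used for the fit is an assumption.

```python
# Sketch of the image statistic discussed above: the slope of the radially
# averaged Fourier power spectrum in a log-log plot.
import numpy as np

def fourier_slope(image):
    img = image.astype(np.float64)
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    # Radial distance of every frequency bin from the spectrum centre.
    h, w = power.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2).astype(int)

    # Radially averaged power, then a straight-line fit in log-log coordinates
    # over an assumed frequency range (DC bin excluded).
    radial_mean = (np.bincount(r.ravel(), weights=power.ravel()) /
                   np.bincount(r.ravel()))
    freqs = np.arange(1, min(h, w) // 2)
    slope, _ = np.polyfit(np.log(freqs), np.log(radial_mean[freqs]), 1)
    return slope  # shallower (less negative) slope -> more high-frequency power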
Optimization of image quality and dose for Varian aS500 electronic portal imaging devices (EPIDs).
McGarry, C K; Grattan, M W D; Cosgrove, V P
2007-12-07
This study was carried out to investigate whether the electronic portal imaging (EPI) acquisition process could be optimized, and as a result tolerance and action levels be set for the PIPSPro QC-3V phantom image quality assessment. The aim of the optimization process was to reduce the dose delivered to the patient while maintaining a clinically acceptable image quality. This is of interest when images are acquired in addition to the planned patient treatment, rather than images being acquired using the treatment field during a patient's treatment. A series of phantoms were used to assess image quality for different acquisition settings relative to the baseline values obtained following acceptance testing. Eight Varian aS500 EPID systems on four matched Varian 600C/D linacs and four matched Varian 2100C/D linacs were compared for consistency of performance and images were acquired at the four main orthogonal gantry angles. Images were acquired using a 6 MV beam operating at 100 MU min(-1) and the low-dose acquisition mode. Doses used in the comparison were measured using a Farmer ionization chamber placed at d(max) in solid water. The results demonstrated that the number of reset frames did not have any influence on the image contrast, but the number of frame averages did. The expected increase in noise with corresponding decrease in contrast was also observed when reducing the number of frame averages. The optimal settings for the low-dose acquisition mode with respect to image quality and dose were found to be one reset frame and three frame averages. All patients at the Northern Ireland Cancer Centre are now imaged using one reset frame and three frame averages in the 6 MV 100 MU min(-1) low-dose acquisition mode. Routine EPID QC contrast tolerance (+/-10) and action (+/-20) levels using the PIPSPro phantom based around expected values of 190 (Varian 600C/D) and 225 (Varian 2100C/D) have been introduced. The dose at dmax from electronic portal imaging has been reduced by approximately 28%, and while the image quality has been reduced, the images produced are still clinically acceptable.
A summary of image segmentation techniques
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly
1993-01-01
Machine vision systems are often considered to be composed of two subsystems: low-level vision and high-level vision. Low-level vision consists primarily of image processing operations performed on the input image to produce another image with more favorable characteristics. These operations may yield images with reduced noise or cause certain features of the image to be emphasized (such as edges). High-level vision includes object recognition and, at the highest level, scene interpretation. The bridge between these two subsystems is the segmentation system. Through segmentation, the enhanced input image is mapped into a description involving regions with common features which can be used by the higher level vision tasks. There is no unified theory of image segmentation; existing techniques are largely ad hoc and differ mostly in the way they emphasize one or more of the desired properties of an ideal segmenter and in the way they balance and compromise one desired property against another. These techniques can be categorized into a number of different groups including local vs. global, parallel vs. sequential, contextual vs. noncontextual, interactive vs. automatic. In this paper, we categorize the schemes into three main groups: pixel-based, edge-based, and region-based. Pixel-based segmentation schemes classify pixels based solely on their gray levels. Edge-based schemes first detect local discontinuities (edges) and then use that information to separate the image into regions. Finally, region-based schemes start with a seed pixel (or group of pixels) and then grow or split the seed until the original image is composed of only homogeneous regions. Because there are a number of survey papers available, we will not discuss all segmentation schemes. Rather than a survey, we take the approach of a detailed overview. We focus only on the more common approaches in order to give the reader a flavor for the variety of techniques available yet present enough details to facilitate implementation and experimentation.
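To make two of these families concrete, here are toy sketches of a pixel-based scheme (Otsu's global threshold) and a region-based scheme (seeded region growing). Both assume an 8-bit grayscale image and are illustrative only, not drawn from the paper.

```python
# Toy illustrations of two of the segmentation families described above:
# a pixel-based global threshold (Otsu) and a simple seeded region growing.
import numpy as np
from collections import deque

def otsu_threshold(gray):
    """Pixel-based: pick the gray level that maximises between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return gray >= best_t

def region_grow(gray, seed, tol=10):
    """Region-based: grow from a seed pixel while neighbours stay within tol of it."""
    h, w = gray.shape
    mask = np.zeros((h, w), bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(gray[ny, nx]) - int(gray[seed])) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```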
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maier, Andreas; Wigstroem, Lars; Hofmann, Hannes G.
2011-11-15
Purpose: The combination of quickly rotating C-arm gantry with digital flat panel has enabled the acquisition of three-dimensional data (3D) in the interventional suite. However, image quality is still somewhat limited since the hardware has not been optimized for CT imaging. Adaptive anisotropic filtering has the ability to improve image quality by reducing the noise level and therewith the radiation dose without introducing noticeable blurring. By applying the filtering prior to 3D reconstruction, noise-induced streak artifacts are reduced as compared to processing in the image domain. Methods: 3D anisotropic adaptive filtering was used to process an ensemble of 2D x-ray views acquired along a circular trajectory around an object. After arranging the input data into a 3D space (2D projections + angle), the orientation of structures was estimated using a set of differently oriented filters. The resulting tensor representation of local orientation was utilized to control the anisotropic filtering. Low-pass filtering is applied only along structures to maintain high spatial frequency components perpendicular to these. The evaluation of the proposed algorithm includes numerical simulations, phantom experiments, and in-vivo data which were acquired using an AXIOM Artis dTA C-arm system (Siemens AG, Healthcare Sector, Forchheim, Germany). Spatial resolution and noise levels were compared with and without adaptive filtering. A human observer study was carried out to evaluate low-contrast detectability. Results: The adaptive anisotropic filtering algorithm was found to significantly improve low-contrast detectability by reducing the noise level by half (reduction of the standard deviation in certain areas from 74 to 30 HU). Virtually no degradation of high contrast spatial resolution was observed in the modulation transfer function (MTF) analysis. Although the algorithm is computationally intensive, hardware acceleration using Nvidia's CUDA Interface provided an 8.9-fold speed-up of the processing (from 1336 to 150 s). Conclusions: Adaptive anisotropic filtering has the potential to substantially improve image quality and/or reduce the radiation dose required for obtaining 3D image data using cone beam CT.
Quality and noise measurements in mobile phone video capture
NASA Astrophysics Data System (ADS)
Petrescu, Doina; Pincenti, John
2011-02-01
The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
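A minimal sketch of a full-reference metric of the kind described above, measuring PSNR between the encoder-input frame and the decoded frame; the choice of PSNR here is ours, and the study may use additional metrics.

```python
# Minimal full-reference quality metric: PSNR between the encoder-input frame
# and the decoded/reconstructed frame (8-bit peak value assumed).
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    mse = np.mean((reference.astype(np.float64) -
                   reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)
```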
Optic disc segmentation: level set methods and blood vessels inpainting
NASA Astrophysics Data System (ADS)
Almazroa, A.; Sun, Weiwei; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan
2017-03-01
Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head (ONH) pathology such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for automatic screening of ONH abnormalities. The main contribution of this paper is in presenting a novel OD segmentation algorithm based on applying a level set method on a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique is applied. The algorithm is evaluated using a new retinal fundus image dataset called RIGA (Retinal Images for Glaucoma Analysis). In the case of low quality images, a double level set is applied in which the first level set is considered to be a localization for the OD. Five hundred and fifty images are used to test the algorithm accuracy as well as its agreement with manual markings by six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid is 83.9%, and the best agreement is observed between the results of the algorithm and manual markings in 379 images.
Machine Learning Based Single-Frame Super-Resolution Processing for Lensless Blood Cell Counting
Huang, Xiwei; Jiang, Yu; Liu, Xu; Xu, Hang; Han, Zhi; Rong, Hailong; Yang, Haiping; Yan, Mei; Yu, Hao
2016-01-01
A lensless blood cell counting system integrating microfluidic channel and a complementary metal oxide semiconductor (CMOS) image sensor is a promising technique to miniaturize the conventional optical lens based imaging system for point-of-care testing (POCT). However, such a system has limited resolution, making it imperative to improve resolution from the system-level using super-resolution (SR) processing. Yet, how to improve resolution towards better cell detection and recognition with low cost of processing resources and without degrading system throughput is still a challenge. In this article, two machine learning based single-frame SR processing types are proposed and compared for lensless blood cell counting, namely the Extreme Learning Machine based SR (ELMSR) and Convolutional Neural Network based SR (CNNSR). Moreover, lensless blood cell counting prototypes using commercial CMOS image sensors and custom designed backside-illuminated CMOS image sensors are demonstrated with ELMSR and CNNSR. When one captured low-resolution lensless cell image is input, an improved high-resolution cell image will be output. The experimental results show that the cell resolution is improved by 4×, and CNNSR has 9.5% improvement over the ELMSR on resolution enhancing performance. The cell counting results also match well with a commercial flow cytometer. Such ELMSR and CNNSR therefore have the potential for efficient resolution improvement in lensless blood cell counting systems towards POCT applications. PMID:27827837
A novel imaging method for photonic crystal fiber fusion splicer
NASA Astrophysics Data System (ADS)
Bi, Weihong; Fu, Guangwei; Guo, Xuan
2007-01-01
Because the structure of photonic crystal fiber (PCF) is very complex, it is very difficult for a traditional fiber fusion splicer to obtain axial alignment information for PCF. Therefore, a brand-new optical imaging method is needed to acquire cross-section information of photonic crystal fiber. Based on the complex character of PCF, a novel high-precision optical imaging system is presented in this article. The system uses a thinned electron-bombarded CCD (EBCCD), a type of image sensor, as the imaging element; the thinned EBCCD offers low-light-level performance superior to conventional image-intensifier-coupled CCD approaches, and this high-performance device provides high contrast and high resolution in low-light-level surveillance imaging. In order to achieve precise focusing of the image, an ultra-high-precision stepping motor is used to adjust the position of the imaging lens. In this way, clear cross-section information of the PCF can be obtained, and further analysis of this information can be carried out with digital image processing techniques. This cross-section information can be used to distinguish different types of PCF, to compute parameters such as the size of the PCF air holes and the cladding structure, and to provide the analysis data needed by PCF fixing, adjustment, alignment, fusion and cutting systems.
Image annotation based on positive-negative instances learning
NASA Astrophysics Data System (ADS)
Zhang, Kai; Hu, Jiwei; Liu, Quan; Lou, Ping
2017-07-01
Automatic image annotation is a challenging task in computer vision; its main purpose is to help manage the massive number of images on the Internet and to assist intelligent retrieval. This paper presents a new image annotation model based on a visual bag of words, using low-level features such as color and texture information as well as mid-level features such as SIFT, and combining image-to-image, label-to-image and label-to-label correlations to measure the degree of correlation between labels and images. We aim to prune the specific features for each single label and formalize the annotation task as a learning process based on positive-negative instance learning. Experiments are performed on the Corel5K dataset and provide quite promising results compared with other existing methods.
Autonomous vision networking: miniature wireless sensor networks with imaging technology
NASA Astrophysics Data System (ADS)
Messinger, Gioia; Goldberg, Giora
2006-09-01
The recent emergence of integrated PicoRadio technology and the rise of low power, low cost, System-On-Chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), created a unique opportunity to achieve the goal of deploying large-scale, low cost, intelligent, ultra-low power distributed wireless sensor networks for the visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low power vision networking has been proven, and its applications are countless: from security and chemical analysis to industrial monitoring, asset tracking and visual recognition, vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous - specifically PicoRadios, CMOS imagers, imaging DSP, networking and overall wireless sensor network (WSN) system concepts. The paradigm shift from large, centralized and expensive sensor platforms to small, low cost, distributed sensor networks is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic and magnetic, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before they are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes. These changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor. Image processing at the sensor node level may also be required for applications in security, asset management and process control. Due to the data bandwidth requirements posed on the network by video sensors, new networking protocols or video extensions to existing standards (e.g. Zigbee) are required. To this end, Avaak has designed and implemented an ultra-low power networking protocol designed to carry large volumes of data through the network. The low power wireless sensor nodes that will be discussed include a chemical sensor integrated with a CMOS digital camera, a controller, a DSP processor and a radio communication transceiver, which enables relaying of an alarm or image message to a central station. In addition to communications, identification is very desirable; hence location awareness will later be incorporated into the system in the form of Time-Of-Arrival triangulation via wide-band signaling. While the wireless imaging kernel already exists, specific applications for surveillance and chemical detection are under development by Avaak, as part of a co-funded program from ONR and DARPA. Avaak is also designing vision networks for commercial applications - some of which are undergoing initial field tests.
Scaling of Counter-Current Imbibition Process in Low-Permeability Porous Media, TR-121
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kvoscek, A.R.; Zhou, D.; Jia, L.
2001-01-17
This project presents recent work on imaging imbibition in low-permeability porous media (diatomite) with X-ray computed tomography. The viscosity ratio between nonwetting and wetting fluids is varied over several orders of magnitude, yielding different levels of imbibition performance. Also performed is a mathematical analysis of counter-current imbibition processes and the development of a modified scaling group incorporating the mobility ratio. This modified group is physically based and appears to improve the scaling accuracy of counter-current imbibition significantly.
Matrix decomposition graphics processing unit solver for Poisson image editing
NASA Astrophysics Data System (ADS)
Lei, Zhao; Wei, Li
2012-10-01
In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address the problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These features enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, enabling real-time image processing, low memory consumption and a wide range of applications.
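To illustrate the underlying problem the GPU solver targets, here is a CPU-only sketch of Jacobi iterations for the Poisson equation in a seamless-cloning setting; it is a didactic stand-in, not the MDGS algorithm, and the iteration count is an assumption.

```python
# CPU sketch of the gradient-domain problem that the GPU solver above
# accelerates: Jacobi iterations for seamless cloning, where the source
# gradient field is pasted into the target under Dirichlet boundary conditions.
import numpy as np

def seamless_clone(src, dst, mask, iters=2000):
    """src, dst: float 2-D arrays of equal shape; mask: bool, True inside the patch."""
    f = dst.astype(np.float64).copy()
    # Discrete Laplacian of the source acts as the guidance term div(grad src).
    lap = (np.roll(src, 1, 0) + np.roll(src, -1, 0) +
           np.roll(src, 1, 1) + np.roll(src, -1, 1) - 4.0 * src)
    inside = mask.copy()
    inside[0, :] = inside[-1, :] = inside[:, 0] = inside[:, -1] = False
    for _ in range(iters):
        neighbours = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                      np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f[inside] = (neighbours[inside] - lap[inside]) / 4.0  # Jacobi update
    return f
```

Each Jacobi sweep updates every interior pixel independently from its four neighbours, which is exactly the kind of data-parallel work that maps well onto GPU threads.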
Meletta, Romana; Müller Herde, Adrienne; Chiotellis, Aristeidis; Isa, Malsor; Rancic, Zoran; Borel, Nicole; Ametamey, Simon M; Krämer, Stefanie D; Schibli, Roger
2015-01-27
Research towards the non-invasive imaging of atherosclerotic plaques is of high clinical priority, as early recognition of vulnerable plaques may reduce the incidence of cardiovascular events. The fibroblast activation protein alpha (FAP) was recently proposed as an inflammation-induced protease involved in the process of plaque vulnerability. In this study, FAP mRNA and protein levels were investigated by quantitative polymerase chain reaction and immunohistochemistry, respectively, in human endarterectomized carotid plaques. A published boronic-acid based FAP inhibitor, MIP-1232, was synthesized and radiolabeled with iodine-125. The potential of this radiotracer to image plaques was evaluated by in vitro autoradiography with human carotid plaques. Specificity was assessed with two xenografts grown in mice, one with a high and one with a low FAP level. Target expression analyses revealed a moderately higher protein level in atherosclerotic plaques than in normal arteries, correlating with plaque vulnerability. No difference in expression was determined on the mRNA level. The radiotracer was successfully produced and accumulated strongly in the FAP-positive SK-Mel-187 melanoma xenograft in vitro, while accumulation was negligible in an NCI-H69 xenograft with low FAP levels. Binding of the tracer to endarterectomized tissue was similar in plaques and normal arteries, hampering its use for atherosclerosis imaging.
Adaptive nonlinear L2 and L3 filters for speckled image processing
NASA Astrophysics Data System (ADS)
Lukin, Vladimir V.; Melnik, Vladimir P.; Chemerovsky, Victor I.; Astola, Jaakko T.
1997-04-01
Here we propose adaptive nonlinear filters based on the calculation and analysis of two or three order statistics in a scanning window. They are designed for processing images corrupted by severe speckle noise with non-symmetrical (Rayleigh or one-sided exponential) distribution laws; impulsive noise can also be present. The proposed filtering algorithms provide a trade-off between efficient speckle noise suppression, robustness, good edge/detail preservation, low computational complexity, and preservation of the average level in homogeneous regions of images. Quantitative evaluations of the characteristics of the proposed filters are presented, as well as the results of their application to real synthetic aperture radar and ultrasound medical images.
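As an illustration of this filter family (not the authors' adaptive rule), the sketch below computes two order statistics in a scanning window and outputs their weighted combination; the window size, quantiles and weights are assumptions.

```python
# Illustrative sketch (assumed weights) of a two-order-statistic scanning-window
# filter: combine two window quantiles so the output compensates the positive
# bias of a skewed (e.g. Rayleigh) speckle distribution while rejecting impulses.
import numpy as np

def l2_filter(image, win=5, q1=0.25, q2=0.50, w1=0.4, w2=0.6):
    pad = win // 2
    padded = np.pad(image.astype(np.float64), pad, mode='reflect')
    out = np.empty(image.shape, dtype=np.float64)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + win, x:x + win].ravel()
            s1 = np.quantile(window, q1)   # lower order statistic
            s2 = np.quantile(window, q2)   # median
            out[y, x] = w1 * s1 + w2 * s2  # L-filter: weighted order statistics
    return out
```

An adaptive variant, as the abstract suggests, would switch or reweight the order statistics depending on local statistics (for example, edge activity or impulse detection) in each window.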
A ganglion-cell-based primary image representation method and its contribution to object recognition
NASA Astrophysics Data System (ADS)
Wei, Hui; Dai, Zhi-Long; Zuo, Qing-Song
2016-10-01
A visual stimulus is represented by the biological visual system at several levels: in order from low to high, these are photoreceptor cells, ganglion cells (GCs), lateral geniculate nucleus cells and visual cortical neurons. Retinal GCs at the early level need to represent raw data only once, yet meet a wide range of diverse requests from different vision-based tasks. This means the information representation at this level is general and not task-specific. Neurobiological findings attribute this universal adaptation to the GCs' receptive field (RF) mechanisms. For the purpose of developing a highly efficient image representation method that can facilitate information processing and interpretation at later stages, we design a computational model to simulate the GC's non-classical RF. This new image representation method can extract major structural features from raw data and is consistent with other statistical measures of the image. Based on the new representation, the performance of other state-of-the-art algorithms in contour detection and segmentation can be upgraded remarkably. This work concludes that applying a sophisticated representation scheme at an early stage is an efficient and promising strategy in visual information processing.
Towards dosimetry for photodynamic diagnosis with the low-level dose of photosensitizer.
Buzalewicz, Igor; Hołowacz, Iwona; Ulatowska-Jarża, Agnieszka; Podbielska, Halina
2017-08-01
Contemporary medicine does not address the issue of dosimetry in photodynamic diagnosis (PDD) but follows the photosensitizer (PS) producers' recommendations. Most preclinical and clinical PDD studies indicate a considerable variation in the possibility of visualization and treatment, e.g. in the case of cervix lesions. Although some of these variations can be caused by different histological subtypes or various tumor geometries, the issue of varying PS concentration in the tumor tissue volume is definitely an important factor. Therefore, there is a need to establish an objective and systematic PDD dosimetry protocol regarding doses of light and photosensitizers. Four different irradiation sources investigated in the PDD literature were used for PS excitation. The PS luminescence was examined by means of non-imaging (spectroscopic) and imaging (wide- and narrow-field-of-view) techniques. A methodology for low-level-intensity photoluminescence (PL) characterization and a dedicated image processing algorithm for the analysis of PS luminescence images are proposed. Further, the penetration of PS into HeLa cell cultures was studied by confocal microscopy. Reducing the PS dose, with the choice of proper photoexcitation conditions, decreases the PDD procedure costs and the side effects without affecting the diagnostic efficiency. We determined in vitro the minimum incubation time and photosensitizer concentration of Photolon for diagnostic purposes for which the Photolon PL can still be observed. It was demonstrated that quantification of PS concentration, choice of a proper photoexcitation source, appropriate adjustment of the light dose and PS penetration of cancer cells may improve low-level luminescence photodynamic diagnostic performance. The practical effectiveness of PDD strongly depends on the irradiation source parameters (bandwidth, maximum intensity, half-width), and their optimization is the main conditioning factor for low-level-intensity and low-cost PDD. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Shaked, Natan T.; Girshovitz, Pinhas; Frenklach, Irena
2014-06-01
We present our recent advances in the development of compact, highly portable and inexpensive wide-field interferometric modules. By a smart design of the interferometric system, including the usage of low-coherence illumination sources and common-path off-axis geometry of the interferometers, spatial and temporal noise levels of the resulting quantitative thickness profile can be sub-nanometric, while processing the phase profile in real time. In addition, due to novel experimentally-implemented multiplexing methods, we can capture low-coherence off-axis interferograms with significantly extended field of view and in faster acquisition rates. Using these techniques, we quantitatively imaged rapid dynamics of live biological cells including sperm cells and unicellular microorganisms. Then, we demonstrated dynamic profiling during lithography processes of microscopic elements, with thicknesses that may vary from several nanometers to hundreds of microns. Finally, we present new algorithms for fast reconstruction (including digital phase unwrapping) of off-axis interferograms, which allow real-time processing in more than video rate on regular single-core computers.
Reconstruction of pulse noisy images via stochastic resonance
Han, Jing; Liu, Hongjun; Sun, Qibing; Huang, Nan
2015-01-01
We investigate a practical technology for reconstructing nanosecond pulse noisy images via stochastic resonance, which is based on the modulation instability. A theoretical model of this method for optical pulse signal is built to effectively recover the pulse image. The nanosecond noise-hidden images grow at the expense of noise during the stochastic resonance process in a photorefractive medium. The properties of output images are mainly determined by the input signal-to-noise intensity ratio, the applied voltage across the medium, and the correlation length of noise background. A high cross-correlation gain is obtained by optimizing these parameters. This provides a potential method for detecting low-level or hidden pulse images in various imaging applications. PMID:26067911
A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors
Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner
2014-01-01
The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255
Dowse, Ros; Ramela, Thato; Barford, Kirsty-Lee; Browne, Sara
2010-09-01
The side effects of antiretroviral (ARV) therapy are linked to altered quality of life and adherence. Poor adherence has also been associated with low health-literacy skills, with an uninformed patient more likely to make ARV-related decisions that compromise the efficacy of the treatment. Low literacy skills disempower patients in interactions with healthcare providers and preclude the use of existing written patient information materials, which are generally written at a high reading level. Visual images or pictograms used as a counselling tool or included in patient information leaflets have been shown to improve patients' knowledge, particularly in low-literate groups. The objective of this study was to design visuals or pictograms illustrating various ARV side effects and to evaluate them in a low-literate South African Xhosa population. Core images were generated either from a design workshop or from posed photos or images from textbooks. The research team worked closely with a graphic artist. Initial versions of the images were discussed and assessed in group discussions, and then modified and eventually evaluated quantitatively in individual interviews with 40 participants who each had a maximum of 10 years of schooling. The familiarity of the human body, its facial expressions, postures and actions contextualised the information and contributed to the participants' understanding. Visuals that were simple, had a clear central focus and reflected familiar body experiences (e.g. vomiting) were highly successful. The introduction of abstract elements (e.g. fever) and metaphorical images (e.g. nightmares) presented problems for interpretation, particularly to those with the lowest educational levels. We recommend that such visual images should be designed in collaboration with the target population and a graphic artist, taking cognisance of the audience's literacy skills and culture, and should employ a multistage iterative process of modification and evaluation.
Near Real-Time Image Reconstruction
NASA Astrophysics Data System (ADS)
Denker, C.; Yang, G.; Wang, H.
2001-08-01
In recent years, post-facto image-processing algorithms have been developed to achieve diffraction-limited observations of the solar surface. We present a combination of frame selection, speckle-masking imaging, and parallel computing which provides real-time, diffraction-limited, 256×256 pixel images at a 1-minute cadence. Our approach to achieve diffraction limited observations is complementary to adaptive optics (AO). At the moment, AO is limited by the fact that it corrects wavefront abberations only for a field of view comparable to the isoplanatic patch. This limitation does not apply to speckle-masking imaging. However, speckle-masking imaging relies on short-exposure images which limits its spectroscopic applications. The parallel processing of the data is performed on a Beowulf-class computer which utilizes off-the-shelf, mass-market technologies to provide high computational performance for scientific calculations and applications at low cost. Beowulf computers have a great potential, not only for image reconstruction, but for any kind of complex data reduction. Immediate access to high-level data products and direct visualization of dynamic processes on the Sun are two of the advantages to be gained.
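A hedged sketch of the frame-selection step mentioned above: short-exposure frames are ranked by a simple sharpness proxy (here RMS contrast, our assumption) and only the best are kept for the speckle-masking reconstruction.

```python
# Hedged sketch of frame selection: rank short-exposure frames by RMS contrast
# and keep the sharpest ones for the subsequent speckle-masking reconstruction.
import numpy as np

def select_frames(frames, keep=10):
    def rms_contrast(frame):
        f = frame.astype(np.float64)
        return f.std() / f.mean()   # higher contrast -> less atmospheric blurring
    ranked = sorted(frames, key=rms_contrast, reverse=True)
    return ranked[:keep]
```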
Two-level image authentication by two-step phase-shifting interferometry and compressive sensing
NASA Astrophysics Data System (ADS)
Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2018-01-01
A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal attempts to pass the low-level authentication. The application of Orthogonal Match Pursuit CS algorithm reconstruction, inverse Arnold transform, inverse DWT, two-step phase-shifting wavefront reconstruction, and inverse Fresnel transform can result in the output of a remarkable peak in the central location of the nonlinear correlation coefficient distributions of the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. Therefore, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.
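The Arnold transform used above as a scrambling step can be sketched as follows for an N x N image; the iteration count is a free parameter, and the rest of the authentication chain (DWT, CS recovery, phase-shifting reconstruction) is not reproduced here.

```python
# Sketch of the Arnold (cat map) scrambling step for an N x N image; applying
# the inverse of the 2x2 map the same number of times undoes the scrambling.
import numpy as np

def arnold_transform(img, iterations=1):
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                # (x, y) -> (x + y, x + 2y) mod N
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out
```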
Salt-and-pepper noise removal using modified mean filter and total variation minimization
NASA Astrophysics Data System (ADS)
Aghajarian, Mickael; McInroy, John E.; Wright, Cameron H. G.
2018-01-01
The search for effective noise removal algorithms is still a real challenge in the field of image processing. An efficient image denoising method is proposed for images that are corrupted by salt-and-pepper noise. Salt-and-pepper noise takes either the minimum or maximum intensity, so the proposed method restores the image by processing the pixels whose values are either 0 or 255 (assuming an 8-bit/pixel image). For low levels of noise corruption (less than or equal to 50% noise density), the method employs the modified mean filter (MMF), while for heavy noise corruption, noisy pixels values are replaced by the weighted average of the MMF and the total variation of corrupted pixels, which is minimized using convex optimization. Two fuzzy systems are used to determine the weights for taking average. To evaluate the performance of the algorithm, several test images with different noise levels are restored, and the results are quantitatively measured by peak signal-to-noise ratio and mean absolute error. The results show that the proposed scheme gives considerable noise suppression up to a noise density of 90%, while almost completely maintaining edges and fine details of the original image.
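A minimal sketch of the low-density branch described above: pixels at 0 or 255 are treated as noise candidates and replaced by the mean of the uncorrupted pixels in a small window. The window size is an assumption, and the fuzzy weighting and total-variation branch for heavy noise are omitted.

```python
# Minimal sketch of a modified-mean-filter step for salt-and-pepper noise:
# only pixels at 0 or 255 are treated as corrupted, and each is replaced by
# the mean of the noise-free pixels in its neighbourhood.
import numpy as np

def modified_mean_filter(img, win=3):
    pad = win // 2
    noisy = (img == 0) | (img == 255)
    padded = np.pad(img.astype(np.float64), pad, mode='reflect')
    padded_noisy = np.pad(noisy, pad, mode='reflect')
    out = img.astype(np.float64).copy()
    for y, x in zip(*np.nonzero(noisy)):
        window = padded[y:y + win, x:x + win]
        good = ~padded_noisy[y:y + win, x:x + win]
        if good.any():
            out[y, x] = window[good].mean()   # mean of noise-free neighbours
    return out
```

Because uncorrupted pixels are left untouched, edges and fine details are preserved, which is the property the abstract emphasizes.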
Seeing the invisible: The scope and limits of unconscious processing in binocular rivalry
Lin, Zhicheng; He, Sheng
2009-01-01
When an image is presented to one eye and a very different image is presented to the corresponding location of the other eye, they compete for perceptual dominance, such that only one image is visible at a time while the other is suppressed. Called binocular rivalry, this phenomenon and its variants have been extensively exploited to study the mechanism and neural correlates of consciousness. In this paper, we propose a framework, the unconscious binding hypothesis, to distinguish unconscious and conscious processing. According to this framework, the unconscious mind not only encodes individual features but also temporally binds distributed features to give rise to cortical representation, but unlike conscious binding, such unconscious binding is fragile. Under this framework, we review evidence from psychophysical and neuroimaging studies, which suggests that: (1) for invisible low level features, prolonged exposure to visual pattern and simple translational motion can alter the appearance of subsequent visible features (i.e. adaptation); for invisible high level features, although complex spiral motion cannot produce adaptation, nor can objects/words enhance subsequent processing of related stimuli (i.e. priming), images of objects such as tools can nevertheless activate the dorsal pathway; and (2) although invisible central cues cannot orient attention, invisible erotic pictures in the periphery can nevertheless guide attention, likely through emotional arousal; reciprocally, the processing of invisible information can be modulated by attention at perceptual and neural levels. PMID:18824061
Real-time FPGA architectures for computer vision
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar
2000-03-01
This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low level image processing. The FPGA-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped on an FPGA, but it can be implemented on a dedicated VLSI to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
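As a behavioural reference for the hardware, the operation the FPGA pipeline computes can be expressed in a few lines of Python; the hardware specifics (line buffers, internal pipelining, single-pass memory access) are of course not modelled here.

```python
import numpy as np
from scipy.signal import convolve2d

def generic_convolution(image, mask):
    """Software reference for the FPGA's generic mask-image convolution.
    On the hardware, window pixels come from on-chip registers so each
    image pixel is fetched from memory only once; this model only
    reproduces the arithmetic result."""
    return convolve2d(image.astype(float), mask.astype(float),
                      mode='same', boundary='symm')
```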
Basha, Dudekula Althaf; Rosalie, Julian M; Somekawa, Hidetoshi; Miyawaki, Takashi; Singh, Alok; Tsuchiya, Koichi
2016-01-01
Microstructural investigation of extremely strained samples, such as severely plastically deformed (SPD) materials, by using conventional transmission electron microscopy techniques is very challenging due to strong image contrast resulting from the high defect density. In this study, low angle annular dark field (LAADF) imaging mode of scanning transmission electron microscope (STEM) has been applied to study the microstructure of a Mg-3Zn-0.5Y (at%) alloy processed by high pressure torsion (HPT). LAADF imaging advantages for observation of twinning, grain fragmentation, nucleation of recrystallized grains and precipitation on second phase particles in the alloy processed by HPT are highlighted. By using STEM-LAADF imaging with a range of incident angles, various microstructural features have been imaged, such as nanoscale subgrain structure and recrystallization nucleation even from the thicker region of the highly strained matrix. It is shown that nucleation of recrystallized grains starts at a strain level of revolution [Formula: see text] (earlier than detected by conventional bright field imaging). Occurrence of recrystallization of grains by nucleating heterogeneously on quasicrystalline particles is also confirmed. Minimizing all strain effects by LAADF imaging facilitated grain size measurement of [Formula: see text] nm in fully recrystallized HPT specimen after [Formula: see text].
Vicente, Justo Serrano; Gómez, Alejandro Lorente; Moreno, Rafael Lorente; Torre, Jose Rafael Infante; Bernardo, Lucía García; Madrid, Juan Ignacio Rayo
2018-01-01
Gout is a common metabolic disorder, typically diagnosed in peripheral joints. Tophaceous deposits in the lumbar spine are a very rare condition with very few cases reported in the literature. The following is a case report of a 52-year-old patient with low back pain, left leg pain, and numbness. The serum uric acid level was in the normal range. Magnetic resonance imaging, bone scan, and gallium-67 images suggested a focus of an inflammatory-infectious process at L4. After a decompressive laminectomy at the L4–L5 level, histological examination showed a chalky material with extensive deposition of amorphous gouty material surrounded by macrophages and foreign-body giant cells (tophaceous deposits). PMID:29643682
Feature and contrast enhancement of mammographic image based on multiscale analysis and morphology.
Wu, Shibin; Yu, Shaode; Yang, Yuhan; Xie, Yaoqin
2013-01-01
A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transform and mathematical morphology. First, the Laplacian Gaussian pyramid operator is applied to decompose the mammogram into subband images at different scales. The detail (high-frequency) subimages are equalized by contrast limited adaptive histogram equalization (CLAHE), and the low-pass subimages are processed by mathematical morphology. Finally, the feature- and contrast-enhanced image is reconstructed from the Laplacian Gaussian pyramid coefficients modified at one or more levels by CLAHE and mathematical morphology, respectively, and the result is processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion, signal-to-noise ratio (SNR), and contrast improvement index (CII).
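A rough sketch of the enhancement chain (pyramid decomposition, CLAHE on the detail layers, morphology on the low-pass residual, then reconstruction) is shown below using OpenCV; the pyramid depth, CLAHE settings, and structuring element are illustrative assumptions, and the final global nonlinear operator is omitted.

```python
import cv2
import numpy as np

def enhance_mammogram(img_u8, levels=3):
    """Sketch only: Laplacian-pyramid decomposition with CLAHE applied to
    rescaled detail layers and morphological opening of the coarsest
    low-pass image, followed by reconstruction."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    gauss = [img_u8.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    details = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
        d = gauss[i] - up                      # detail (high-frequency) layer
        lo, hi = d.min(), d.max()
        d_u8 = np.uint8(255 * (d - lo) / (hi - lo + 1e-6))
        d_eq = clahe.apply(d_u8).astype(np.float32) * (hi - lo) / 255.0 + lo
        details.append(d_eq)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    base = cv2.morphologyEx(gauss[-1], cv2.MORPH_OPEN, kernel)   # low-pass processing
    out = base
    for i in reversed(range(levels)):
        out = cv2.pyrUp(out, dstsize=(details[i].shape[1], details[i].shape[0])) + details[i]
    return np.clip(out, 0, 255).astype(np.uint8)
```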
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well-known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and then the location of each point in three-dimensional space can be calculated from the offset (disparity) of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics to relieve the computational intensity of the low level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
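For context, once corresponding features have been registered in the two images, the depth of a point follows from the standard rectified-stereo triangulation relation; this is general background, not part of the Queen Victoria Algorithm itself.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Z = f * B / d for rectified cameras with parallel optical axes.
    x_left/x_right are the horizontal image coordinates (pixels) of the
    same scene point, focal_px is the focal length in pixels, and
    baseline_m is the camera separation in metres."""
    disparity = x_left - x_right
    return focal_px * baseline_m / disparity
```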
Girard, Pascal; Koenig-Robert, Roger
2011-01-01
Background: Comparative studies of cognitive processes find similarities between humans and apes but also monkeys. Even high-level processes, like the ability to categorize classes of object from any natural scene under ultra-rapid time constraints, seem to be present in rhesus macaque monkeys (despite a smaller brain and the lack of language and a cultural background). An interesting and still open question concerns the degree to which the same images are treated with the same efficacy by humans and monkeys when a low level cue, the spatial frequency content, is controlled. Methodology/Principal Findings: We used a set of natural images equalized in Fourier spectrum and asked whether it is still possible to categorize them as containing an animal and at what speed. One rhesus macaque monkey performed a forced-choice saccadic task with a good accuracy (67.5% and 76% for new and familiar images respectively) although performance was lower than with non-equalized images. Importantly, the minimum reaction time was still very fast (100 ms). We compared the performances of human subjects with the same setup and the same set of (new) images. Overall mean performance of humans was also lower than with original images (64% correct) but the minimum reaction time was still short (140 ms). Conclusion: Performances on individual images (% correct but not reaction times) for both humans and the monkey were significantly correlated suggesting that both species use similar features to perform the task. A similar advantage for full-face images was seen for both species. The results also suggest that local low spatial frequency information could be important, a finding that fits the theory that fast categorization relies on a rapid feedforward magnocellular signal. PMID:21326600
A high-level 3D visualization API for Java and ImageJ.
Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin
2010-05-21
Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.
NASA Astrophysics Data System (ADS)
Kuvychko, Igor
2001-10-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on these principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks, and the human brain appears able to emulate similar graph/network models. This implies an important paradigm shift in our understanding of the brain, from neural networks to cortical "software". Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes like clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract structures that represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena like shape from shading and occlusion are results of this analysis. This approach offers an opportunity not only to explain otherwise puzzling results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the "what" and "where" visual pathways. Such systems can open new horizons for the robotics and computer vision industries.
NASA Technical Reports Server (NTRS)
Wang, Yansen; Tao, W.-K.; Lau, K.-M.; Wetzel, Peter J.
2004-01-01
The onset of the southeast Asian monsoon during 1997 and 1998 was simulated by coupling a mesoscale atmospheric model (MM5) and a detailed land surface model, PLACE (the Parameterization for Land-Atmosphere-Cloud Exchange). The rainfall results from the simulations were compared with observed satellite data from the TRMM (Tropical Rainfall Measuring Mission) TMI (TRMM Microwave Imager) and GPCP (Global Precipitation Climatology Project). The control simulation with the PLACE land surface model and variable sea surface temperature captured the basic signatures of the monsoon onset processes and associated rainfall statistics. Sensitivity tests indicated that simulations were significantly improved by including the PLACE land surface model. The mechanism by which the land surface processes affect the moisture transport and the convection during the onset of the southeast Asian monsoon was analyzed. The results indicated that land surface processes played an important role in modifying the low-level wind field over two major branches of the circulation: the southwest low-level flow over the Indo-China peninsula and the northern, cold frontal intrusion from southern China. The surface sensible and latent heat fluxes modified the low-level temperature distribution and gradient, and therefore the low-level wind due to the thermal wind effect. The more realistic forcing of the sensible and latent heat fluxes from the detailed land surface model improved the low-level wind simulation and associated moisture transport and convection.
Finding regions of interest in pathological images: an attentional model approach
NASA Astrophysics Data System (ADS)
Gómez, Francisco; Villalón, Julio; Gutierrez, Ricardo; Romero, Eduardo
2009-02-01
This paper introduces an automated method for finding diagnostic regions-of-interest (RoIs) in histopathological images. This method is based on the cognitive process of visual selective attention that arises during a pathologist's image examination. Specifically, it emulates the first examination phase, which consists of a coarse search for tissue structures at a "low zoom" to separate the image into relevant regions. The pathologist's cognitive performance depends on inherent image visual cues (bottom-up information) and on acquired clinical knowledge (top-down mechanisms). Our pathologist's visual attention model integrates these two components. The selected bottom-up information includes local low level features such as intensity, color, orientation and texture information. Top-down information is related to the anatomical and pathological structures known by the expert. A coarse approximation to these structures is achieved by an oversegmentation algorithm, inspired by psychological grouping theories. The algorithm parameters are learned from an expert pathologist's segmentation. Top-down and bottom-up integration is achieved by calculating a unique index for each of the low level characteristics inside the region. Relevancy is estimated as a simple average of these indexes. Finally, a binary decision rule defines whether or not a region is interesting. The method was evaluated on a set of 49 images using a perceptually-weighted evaluation criterion, finding a quality gain of 3 dB compared with a classical bottom-up model of attention.
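The scoring step described above (one normalized index per low-level feature inside each oversegmented region, averaged and thresholded) might look roughly like the sketch below; the feature maps, normalization, and threshold are assumptions for illustration rather than the authors' learned parameters.

```python
import numpy as np

def region_relevance(labels, feature_maps, threshold=0.5):
    """For each oversegmented region (integer label image), average one
    normalized index per low-level feature map (e.g. intensity, colour,
    orientation, texture) and apply a binary decision rule."""
    relevant = {}
    for r in np.unique(labels):
        mask = labels == r
        indexes = []
        for fmap in feature_maps:
            rng = fmap.max() - fmap.min() + 1e-9
            indexes.append((fmap[mask].mean() - fmap.min()) / rng)  # index in [0, 1]
        score = float(np.mean(indexes))          # simple average of the indexes
        relevant[r] = score >= threshold         # binary decision rule
    return relevant
```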
Optic disc segmentation for glaucoma screening system using fundus images.
Almazroa, Ahmed; Sun, Weiwei; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan
2017-01-01
Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head pathologies such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for automatic screening of optic nerve head abnormalities. The main contribution of this paper is a novel OD segmentation algorithm based on applying a level set method to a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique was applied. A further contribution is the incorporation of variations in opinion among ophthalmologists in detecting the disc boundaries and diagnosing glaucoma. Most of the previous studies were trained and tested based on only one opinion, which can be assumed to be biased toward that ophthalmologist. In addition, the accuracy was calculated based on the number of images that coincided with the ophthalmologists' agreed-upon images, and not only on the overlapping images as in previous studies. The ultimate goal of this project is to develop an automated image processing system for glaucoma screening. The disc algorithm is evaluated using a new retinal fundus image dataset called RIGA (retinal images for glaucoma analysis). In the case of low-quality images, a double level set was applied, in which the first level set served to localize the OD. Five hundred and fifty images are used to test the algorithm accuracy as well as the agreement among the manual markings of six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid was 83.9%, and the best agreement was observed between the results of the algorithm and manual markings in 379 images.
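A loose sketch of the main idea (suppress vessels by inpainting, then evolve a level-set-style contour on the localized disc region) is given below; morphological Chan-Vese from scikit-image stands in for the paper's level set, and the vessel mask, localization, and iteration count are assumed inputs.

```python
import cv2
import numpy as np
from skimage.segmentation import morphological_chan_vese

def segment_optic_disc(od_patch_bgr, vessel_mask):
    """Inpaint vessels so they do not distort the disc boundary, then run a
    morphological active contour on the (already localized) OD patch."""
    red = od_patch_bgr[:, :, 2]                    # disc is brightest in the red channel
    inpainted = cv2.inpaint(red, vessel_mask.astype(np.uint8), 5, cv2.INPAINT_TELEA)
    seg = morphological_chan_vese(inpainted.astype(float), 100, smoothing=3)
    return seg.astype(np.uint8)                    # binary disc mask
```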
Method for extracting long-equivalent wavelength interferometric information
NASA Technical Reports Server (NTRS)
Hochberg, Eric B. (Inventor)
1991-01-01
A process for extracting long-equivalent wavelength interferometric information from a two-wavelength polychromatic or achromatic interferometer. The process comprises the steps of simultaneously recording a non-linear sum of two different frequency visible light interferograms on a high resolution film and then placing the developed film in an optical train for Fourier transformation, low pass spatial filtering and inverse transformation of the film image to produce low spatial frequency fringes corresponding to a long-equivalent wavelength interferogram. The recorded non-linear sum irradiance derived from the two-wavelength interferometer is obtained by controlling the exposure so that the average interferogram irradiance is set at either the noise level threshold or the saturation level threshold of the film.
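For context, the "long equivalent wavelength" obtained from two visible wavelengths in such an interferometer follows the standard two-wavelength relation (general background, not quoted from the abstract):

$$\Lambda_{\mathrm{eq}} = \frac{\lambda_1 \lambda_2}{\lvert \lambda_1 - \lambda_2 \rvert}$$

Two closely spaced visible wavelengths therefore yield an equivalent wavelength many times longer than either, which is what the low-spatial-frequency fringes isolated by the filtering step encode.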
Fully Convolutional Architecture for Low-Dose CT Image Noise Reduction
NASA Astrophysics Data System (ADS)
Badretale, S.; Shaker, F.; Babyn, P.; Alirezaie, J.
2017-10-01
One of the critical topics in medical low-dose Computed Tomography (CT) imaging is how best to maintain image quality. As image quality decreases when the X-ray radiation dose is lowered, improving image quality is extremely important and challenging. We have proposed a novel approach to denoise low-dose CT images. Our algorithm directly learns an end-to-end mapping from low-dose CT images to normal-dose CT images for denoising. Our method is based on a deep convolutional neural network with rectified linear units. By learning various low-level to high-level features from a low-dose image, the proposed algorithm is capable of creating a high-quality denoised image. We demonstrate the superiority of our technique by comparing the results with two other state-of-the-art methods in terms of the peak signal to noise ratio, root mean square error, and a structural similarity index.
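A minimal sketch of the kind of fully convolutional ReLU denoiser described above is shown below in PyTorch; the depth, channel widths, and the residual (noise-predicting) output are our assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    """Fully convolutional denoiser with rectified linear units, in the
    spirit of the paper: the network predicts the noise component, which is
    subtracted from the low-dose input."""
    def __init__(self, channels=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, low_dose):
        return low_dose - self.net(low_dose)

# training sketch: minimise nn.MSELoss()(model(low_dose_batch), normal_dose_batch)
```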
Texture Feature Analysis for Different Resolution Level of Kidney Ultrasound Images
NASA Astrophysics Data System (ADS)
Kairuddin, Wan Nur Hafsha Wan; Mahmud, Wan Mahani Hafizah Wan
2017-08-01
Image feature extraction is a technique to identify the characteristics of an image. The objective of this work is to discover the texture features that best describe the tissue characteristics of a healthy kidney in ultrasound (US) images. Three ultrasound machines with different specifications are used in order to obtain images of different quality (different resolution). Initially, the acquired images are pre-processed to remove speckle noise so that the pixels in the region of interest (ROI) are preserved for further extraction; a Gaussian low-pass filter is chosen as the filtering method in this work. 150 enhanced images are then segmented into foreground and background, where a mask is created to eliminate unwanted intensity values. Statistics-based texture feature methods are used, namely the Intensity Histogram (IH), Gray-Level Co-Occurrence Matrix (GLCM), and Gray-Level Run-Length Matrix (GLRLM); these methods depend on the spatial distribution of intensity values or gray levels in the kidney region. Using one-way ANOVA in SPSS, the results indicated that three GLCM features (Contrast, Difference Variance, and Inverse Difference Moment Normalized) do not differ statistically significantly across the machines; this suggests that these three features describe healthy kidney characteristics regardless of ultrasound image quality.
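As an illustration of the GLCM stage, the sketch below extracts co-occurrence features from a kidney ROI with scikit-image (function names as in skimage 0.19 and later); contrast is available directly, while the paper's Difference Variance and IDM Normalized would need custom formulas computed from the same matrix.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_u8):
    """Compute a gray-level co-occurrence matrix for an 8-bit kidney ROI
    and return a few standard texture measures averaged over directions."""
    glcm = graycomatrix(roi_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {
        'contrast': graycoprops(glcm, 'contrast').mean(),
        'homogeneity': graycoprops(glcm, 'homogeneity').mean(),
        'energy': graycoprops(glcm, 'energy').mean(),
    }
```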
When the brain does not adequately feel the body: Links between low resilience and interoception.
Haase, Lori; Stewart, Jennifer L; Youssef, Brittany; May, April C; Isakovic, Sara; Simmons, Alan N; Johnson, Douglas C; Potterat, Eric G; Paulus, Martin P
2016-01-01
This study examined neural processes of resilience during aversive interoceptive processing. Forty-six individuals were divided into three resilience groups: low (LowRes), high (HighRes), and normal (NormRes), based on the Connor-Davidson Resilience Scale (2003). Participants then completed a task involving anticipation and experience of loaded breathing during functional magnetic resonance imaging (fMRI) recording. Compared to HighRes and NormRes groups, LowRes self-reported lower levels of interoceptive awareness and demonstrated higher insular and thalamic activation across anticipation and breathing load conditions. Thus, individuals with lower resilience show reduced attention to bodily signals but greater neural processing of aversive bodily perturbations. In low-resilience individuals, this mismatch between attention to and processing of interoceptive afferents may result in poor adaptation in stressful situations. Copyright © 2015 Elsevier B.V. All rights reserved.
Rotation covariant image processing for biomedical applications.
Skibbe, Henrik; Reisert, Marco
2013-01-01
With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands an automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on mathematical concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied on a variety of different 3D data modalities stemming from medical and biological sciences.
Using compressive measurement to obtain images at ultra low-light-level
NASA Astrophysics Data System (ADS)
Ke, Jun; Wei, Ping
2013-08-01
In this paper, a compressive imaging architecture is used for ultra low-light-level imaging. In such a system, features, instead of object pixels, are imaged onto a photocathode, and then magnified by an image intensifier. By doing so, system measurement SNR is increased significantly. Therefore, the new system can image objects at ultra low light levels where a conventional system has difficulty. PCA projection is used to collect feature measurements in this work. A linear Wiener operator and a nonlinear method based on the FoE model are used to reconstruct objects. Root mean square error (RMSE) is used to quantify system reconstruction quality.
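The linear Wiener (LMMSE) reconstruction from PCA feature measurements can be sketched as below; the training statistics (mean, covariance, noise variance) are assumed known, as in a simulation setting, and this is not the paper's FoE-based nonlinear method.

```python
import numpy as np

def wiener_reconstruct(y, P, mean_x, cov_x, noise_var):
    """LMMSE (linear Wiener) estimate of the object x from compressive PCA
    feature measurements y = P x + n, where the rows of P are the top-k
    PCA eigenvectors (k features much smaller than the number of pixels)."""
    S = P @ cov_x @ P.T + noise_var * np.eye(P.shape[0])
    W = cov_x @ P.T @ np.linalg.inv(S)
    return mean_x + W @ (y - P @ mean_x)
```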
The comparison between SVD-DCT and SVD-DWT digital image watermarking
NASA Astrophysics Data System (ADS)
Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas
2018-03-01
With the internet, anyone can publish their creations as digital data simply and inexpensively, and the data are easily accessed by everyone. However, a problem appears when someone else claims that the creation is their property or modifies part of it. This makes copyright protection necessary; one example is watermarking of digital images. Applying a watermarking technique to digital data, especially images, enables near-total invisibility when the watermark is inserted into a carrier image: the carrier image suffers little quality loss, and the inserted image should remain intact under attack. In this paper, watermarking is implemented on digital images using Singular Value Decomposition (SVD) combined with the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. A trade-off arises between the invisibility and robustness of the image watermark. In the embedding process, the watermarked image quality is good for scaling factors < 0.1. The quality of the watermarked image at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in low-frequency subbands is robust against Gaussian blur, rescaling, and JPEG compression, whereas high-frequency embedding is robust against Gaussian noise.
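A compact sketch of one common SVD-DWT embedding variant (watermark singular values added to those of the cover's low-frequency subband) is shown below with PyWavelets; the wavelet, subband choice, and extraction step are illustrative assumptions, and the paper's DCT variant would use a block DCT in place of the DWT.

```python
import numpy as np
import pywt

def embed_watermark(cover, mark, alpha=0.05):
    """SVD-DWT embedding sketch: add scaled watermark singular values to the
    singular values of the cover's LL subband; alpha is the scaling factor
    (the abstract reports good invisibility for scaling factors < 0.1)."""
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), 'haar')
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    _, Sw, _ = np.linalg.svd(mark.astype(float), full_matrices=False)
    n = min(len(S), len(Sw))
    S_marked = S.copy()
    S_marked[:n] += alpha * Sw[:n]
    LL_marked = (U * S_marked) @ Vt
    return pywt.idwt2((LL_marked, (LH, HL, HH)), 'haar')
```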
Human body region enhancement method based on Kinect infrared imaging
NASA Astrophysics Data System (ADS)
Yang, Lei; Fan, Yubo; Song, Xiaowei; Cai, Wenjing
2016-10-01
To effectively improve the low contrast of the human body region in infrared images, a combination of several enhancement methods is used. First, for the infrared images acquired by the Kinect, an Optimal Contrast-Tone Mapping (OCTM) method with multiple iterations is applied to balance the contrast of low-luminosity infrared images and improve their overall contrast. Second, a Level Set algorithm is employed to improve the contour edges of the human body region. Finally, Laplacian Pyramid decomposition is adopted to further enhance the contour-improved human body region, while the background area is processed by bilateral filtering to improve the overall effect. Theoretical analysis and experimental verification show that the proposed method effectively enhances the human body region of such infrared images.
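A rough sketch of the final enhancement stages (detail boost inside an already-segmented body region, bilateral smoothing of the background) is given below with OpenCV; the OCTM and level-set contour steps are omitted, and the gain and filter parameters are assumptions.

```python
import cv2
import numpy as np

def enhance_body_region(ir_u8, body_mask):
    """Laplacian-pyramid detail amplification inside the body mask and
    bilateral filtering of the background, then recombination."""
    img = ir_u8.astype(np.float32)
    down = cv2.pyrDown(img)
    up = cv2.pyrUp(down, dstsize=(img.shape[1], img.shape[0]))
    detail = img - up
    boosted = np.clip(up + 2.0 * detail, 0, 255)            # amplify body detail
    background = cv2.bilateralFilter(ir_u8, 9, 75, 75).astype(np.float32)
    out = np.where(body_mask > 0, boosted, background)
    return out.astype(np.uint8)
```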
Internal curvature signal and noise in low- and high-level vision
Grabowecky, Marcia; Kim, Yee Joon; Suzuki, Satoru
2011-01-01
How does internal processing contribute to visual pattern perception? By modeling visual search performance, we estimated internal signal and noise relevant to perception of curvature, a basic feature important for encoding of three-dimensional surfaces and objects. We used isolated, sparse, crowded, and face contexts to determine how internal curvature signal and noise depended on image crowding, lateral feature interactions, and level of pattern processing. Observers reported the curvature of a briefly flashed segment, which was presented alone (without lateral interaction) or among multiple straight segments (with lateral interaction). Each segment was presented with no context (engaging low-to-intermediate-level curvature processing), embedded within a face context as the mouth (engaging high-level face processing), or embedded within an inverted-scrambled-face context as a control for crowding. Using a simple, biologically plausible model of curvature perception, we estimated internal curvature signal and noise as the mean and standard deviation, respectively, of the Gaussian-distributed population activity of local curvature-tuned channels that best simulated behavioral curvature responses. Internal noise was increased by crowding but not by face context (irrespective of lateral interactions), suggesting prevention of noise accumulation in high-level pattern processing. In contrast, internal curvature signal was unaffected by crowding but modulated by lateral interactions. Lateral interactions (with straight segments) increased curvature signal when no contextual elements were added, but equivalent interactions reduced curvature signal when each segment was presented within a face. These opposing effects of lateral interactions are consistent with the phenomena of local-feature contrast in low-level processing and global-feature averaging in high-level processing. PMID:21209356
Deschrijver, Eliane; Wiersema, Jan R; Brass, Marcel
2017-04-01
For more than 15 years, motor interference paradigms have been used to investigate the influence of action observation on action execution. Most research on so-called automatic imitation has focused on variables that play a modulating role or investigated potential confounding factors. Furthermore, a number of functional magnetic resonance imaging (fMRI) studies have tried to shed light on the functional mechanisms and neural correlates involved in imitation inhibition. However, these fMRI studies, presumably due to poor temporal resolution, have primarily focused on high-level processes and have neglected the potential role of low-level motor and perceptual processes. In the current EEG study, we therefore aimed to disentangle the influence of low-level perceptual and motoric mechanisms from high-level cognitive mechanisms. We focused on potential congruency differences in the visual N190 (a component related to the processing of biological motion), the Readiness Potential (a component related to motor preparation), and the high-level P3 component. Interestingly, we detected congruency effects in each of these components, suggesting that the interference effect in an automatic imitation paradigm is not only related to high-level processes such as self-other distinction but also to more low-level influences of perception on action and action on perception. Moreover, we documented relationships of the neural effects with (autistic) behavior.
Mechanical and Electrical Characterization of Novel Carbon Nano Fiber Ultralow Density Foam
2013-12-01
...permitted SEM images to be acquired of CFF samples under specified levels of stress... ...identified the need for a material that could detect diverse levels of both static and dynamic loads, with enough sensitivity but meant for low levels of...
USC orthogonal multiprocessor for image processing with neural networks
NASA Astrophysics Data System (ADS)
Hwang, Kai; Panda, Dhabaleswar K.; Haddadi, Navid
1990-07-01
This paper presents the architectural features and imaging applications of the Orthogonal MultiProcessor (OMP) system, which is under construction at the University of Southern California with research funding from NSF and assistance from several industrial partners. The prototype OMP is being built with 16 Intel i860 RISC microprocessors and 256 parallel memory modules using custom-designed spanning buses, which are 2-D interleaved and orthogonally accessed without conflicts. The 16-processor OMP prototype is targeted to achieve 430 MIPS and 600 Mflops, which have been verified by simulation experiments based on the design parameters used. The prototype OMP machine will be initially applied for image processing, computer vision, and neural network simulation applications. We summarize important vision and imaging algorithms that can be restructured with neural network models. These algorithms can efficiently run on the OMP hardware with linear speedup. The ultimate goal is to develop a high-performance Visual Computer (Viscom) for integrated low- and high-level image processing and vision tasks.
A research on radiation calibration of high dynamic range based on the dual channel CMOS
NASA Astrophysics Data System (ADS)
Ma, Kai; Shi, Zhan; Pan, Xiaodong; Wang, Yongsheng; Wang, Jianghua
2017-10-01
A dual-channel complementary metal-oxide semiconductor (CMOS) sensor can produce a high dynamic range (HDR) image by extending the gray-level range through fusion of the high-gain and low-gain channel images of the same frame. During dual-channel fusion, the radiation response coefficients of each pixel in both channels are used to calculate the pixel's gray level in the HDR image. Because these response coefficients play a crucial role in the fusion, an effective method is needed to acquire them. This article investigates radiometric calibration for high dynamic range imaging with a dual-channel CMOS sensor and designs an experiment to calibrate the radiation response coefficients of the sensor used. Finally, the calibrated response parameters are applied to the dual-channel CMOS sensor, verifying the correctness and feasibility of the proposed method.
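The fusion step described above can be sketched as a per-pixel choice between the two calibrated channels; the linear responses and saturation threshold below are our assumptions, not the calibrated values from the paper's experiment.

```python
import numpy as np

def fuse_dual_gain(high_gain, low_gain, resp_high, resp_low, sat_level=4000):
    """Convert each channel to radiance using its calibrated response
    coefficient; saturated high-gain pixels fall back to the low-gain
    channel, extending the usable dynamic range."""
    radiance_h = high_gain / resp_high
    radiance_l = low_gain / resp_low
    use_low = high_gain >= sat_level            # high-gain channel saturated
    return np.where(use_low, radiance_l, radiance_h)
```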
High-performance electronic image stabilisation for shift and rotation correction
NASA Astrophysics Data System (ADS)
Parker, Steve C. J.; Hickman, D. L.; Wu, F.
2014-06-01
A novel low size, weight and power (SWaP) video stabiliser called HALO™ is presented that uses a SoC to combine the high processing bandwidth of an FPGA, with the signal processing flexibility of a CPU. An image based architecture is presented that can adapt the tiling of frames to cope with changing scene dynamics. A real-time implementation is then discussed that can generate several hundred optical flow vectors per video frame, to accurately calculate the unwanted rigid body translation and rotation of camera shake. The performance of the HALO™ stabiliser is comprehensively benchmarked against the respected Deshaker 3.0 off-line stabiliser plugin to VirtualDub. Eight different videos are used for benchmarking, simulating: battlefield, surveillance, security and low-level flight applications in both visible and IR wavebands. The results show that HALO™ rivals the performance of Deshaker within its operating envelope. Furthermore, HALO™ may be easily reconfigured to adapt to changing operating conditions or requirements; and can be used to host other video processing functionality like image distortion correction, fusion and contrast enhancement.
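A software sketch of the per-frame correction (sparse optical-flow vectors, then a rigid translation-plus-rotation fit) is given below with OpenCV; this only mirrors the algorithmic idea, not HALO's FPGA tiling or throughput, and the corner and RANSAC parameters are assumptions.

```python
import cv2
import numpy as np

def estimate_shake(prev_gray, curr_gray, max_corners=300):
    """Track sparse corners with pyramidal Lucas-Kanade optical flow and fit
    a partial affine (rotation + translation) transform; applying the
    inverse of this transform stabilises the current frame."""
    pts = cv2.goodFeaturesToTrack(prev_gray, max_corners, 0.01, 10)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_prev = pts[status.flatten() == 1]
    good_next = nxt[status.flatten() == 1]
    M, _ = cv2.estimateAffinePartial2D(good_prev, good_next, method=cv2.RANSAC)
    dx, dy = M[0, 2], M[1, 2]
    angle = np.arctan2(M[1, 0], M[0, 0])
    return dx, dy, angle
```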
The effects of presence and influence in nature images in a simulated hospital patient room.
Vincent, Ellen; Battisto, Dina; Grimes, Larry
2010-01-01
Nature images are frequently used for therapeutic purposes in hospital settings. Nature images may distract people from pain and promote psychological and physiological well-being, yet limited research is available to guide the selection process of nature images. The hypothesis is that higher degrees of presence and/or influence in the still photograph make it more effective at holding the viewer's attention, which therefore may distract the viewer from pain, and therefore be considered therapeutic. Research questions include: (1) Is there a significant difference in the level of perceived presence among the selected images? (2) Is there a significant difference in the level of perceived influence among the selected images? (3) Is there a correlation between levels of presence and levels of influence? 109 college students were randomly assigned to one of four different image categories defined by Appleton's prospect refuge theory of landscape preference. Categories included prospect, refuge, hazard, and mixed prospect and refuge. A control group was also included. Each investigation was divided into five periods: prereporting, rest, a pain stressor (hand in ice water for up to 120 seconds), recovery, and postreporting. Physiological readings (vital signs) were measured repeatedly using a Dinamap automatic vital sign tracking machine. Psychological responses (mood) to the image were collected using a reliable instrument, the Profile of Mood States. No significant statistical difference in levels of presence was found among the four image categories. However, levels of influence differed and the hazard nature image category had significantly higher influence ratings and lower diastolic blood pressure readings during the pain treatment. A correlation (r = .62) between presence and influence was identified; as one rose, so did the other. Mood state was significantly low for the hazard nature image after the pain stressor experience. Though the hazard image caused distraction from pain, it is nontherapeutic because of the low mood ratings it received. These preliminary findings contribute methodology to the research field and stimulate interest for additional research into the visual effects of nature images on pain.
Scene and human face recognition in the central vision of patients with glaucoma
Aptel, Florent; Attye, Arnaud; Guyader, Nathalie; Boucart, Muriel; Chiquet, Christophe; Peyrin, Carole
2018-01-01
Primary open-angle glaucoma (POAG) firstly mainly affects peripheral vision. Current behavioral studies support the idea that visual defects of patients with POAG extend into parts of the central visual field classified as normal by static automated perimetry analysis. This is particularly true for visual tasks involving processes of a higher level than mere detection. The purpose of this study was to assess visual abilities of POAG patients in central vision. Patients were assigned to two groups following a visual field examination (Humphrey 24–2 SITA-Standard test). Patients with both peripheral and central defects and patients with peripheral but no central defect, as well as age-matched controls, participated in the experiment. All participants had to perform two visual tasks where low-contrast stimuli were presented in the central 6° of the visual field. A categorization task of scene images and human face images assessed high-level visual recognition abilities. In contrast, a detection task using the same stimuli assessed low-level visual function. The difference in performance between detection and categorization revealed the cost of high-level visual processing. Compared to controls, patients with a central visual defect showed a deficit in both detection and categorization of all low-contrast images. This is consistent with the abnormal retinal sensitivity as assessed by perimetry. However, the deficit was greater for categorization than detection. Patients without a central defect showed similar performances to the controls concerning the detection and categorization of faces. However, while the detection of scene images was well-maintained, these patients showed a deficit in their categorization. This suggests that the simple loss of peripheral vision could be detrimental to scene recognition, even when the information is displayed in central vision. This study revealed subtle defects in the central visual field of POAG patients that cannot be predicted by static automated perimetry assessment using Humphrey 24–2 SITA-Standard test. PMID:29481572
NASA Astrophysics Data System (ADS)
Jantzen, Connie; Slagle, Rick
1997-05-01
The distinction between exposure time and sample rate is often the first point raised in any discussion of high speed imaging. Many high speed events require exposure times considerably shorter than those that can be achieved solely by the sample rate of the camera, where exposure time equals 1/sample rate. Gating, a method of achieving short exposure times in digital cameras, is often difficult to achieve for exposure time requirements shorter than 100 microseconds. This paper discusses the advantages and limitations of using the short duration light pulse of a near infrared laser with high speed digital imaging systems. By closely matching the output wavelength of the pulsed laser to the peak near infrared response of current sensors, high speed image capture can be accomplished at very low (visible) light levels of illumination. By virtue of the short duration light pulse, adjustable to as short as two microseconds, image capture of very high speed events can be achieved at relatively low sample rates of less than 100 pictures per second, without image blur. For our initial investigations, we chose a ballistic subject. The results of early experimentation revealed the limitations of applying traditional ballistic imaging methods when using a pulsed infrared light source with a digital imaging system. These early disappointing results clarified the need to further identify the unique system characteristics of the digital imager and pulsed infrared combination. It was also necessary to investigate how the infrared reflectance and transmittance of common materials affect the imaging process. This experimental work yielded a surprising, successful methodology which will prove useful in imaging ballistic and weapons tests, as well as forensics, flow visualizations, spray pattern analyses, and nocturnal animal behavioral studies.
A computer-generated animated face stimulus set for psychophysiological research
Naples, Adam; Nguyen-Phuc, Alyssa; Coffman, Marika; Kresse, Anna; Faja, Susan; Bernier, Raphael; McPartland, James
2014-01-01
Human faces are fundamentally dynamic, but experimental investigations of face perception traditionally rely on static images of faces. While naturalistic videos of actors have been used with success in some contexts, much research in neuroscience and psychophysics demands carefully controlled stimuli. In this paper, we describe a novel set of computer generated, dynamic, face stimuli. These grayscale faces are tightly controlled for low- and high-level visual properties. All faces are standardized in terms of size, luminance, and location and size of facial features. Each face begins with a neutral pose and transitions to an expression over the course of 30 frames. Altogether there are 222 stimuli spanning 3 different categories of movement: (1) an affective movement (fearful face); (2) a neutral movement (close-lipped, puffed cheeks with open eyes); and (3) a biologically impossible movement (upward dislocation of eyes and mouth). To determine whether early brain responses sensitive to low-level visual features differed between expressions, we measured the occipital P100 event related potential (ERP), which is known to reflect differences in early stages of visual processing and the N170, which reflects structural encoding of faces. We found no differences between faces at the P100, indicating that different face categories were well matched on low-level image properties. This database provides researchers with a well-controlled set of dynamic faces controlled on low-level image characteristics that are applicable to a range of research questions in social perception. PMID:25028164
Minimizing the semantic gap in biomedical content-based image retrieval
NASA Astrophysics Data System (ADS)
Guan, Haiying; Antani, Sameer; Long, L. Rodney; Thoma, George R.
2010-03-01
A major challenge in biomedical Content-Based Image Retrieval (CBIR) is to achieve meaningful mappings that minimize the semantic gap between the high-level biomedical semantic concepts and the low-level visual features in images. This paper presents a comprehensive learning-based scheme toward meeting this challenge and improving retrieval quality. The article presents two algorithms: a learning-based feature selection and fusion algorithm and the Ranking Support Vector Machine (Ranking SVM) algorithm. The feature selection algorithm aims to select 'good' features and fuse them using different similarity measurements to provide a better representation of the high-level concepts with the low-level image features. Ranking SVM is applied to learn the retrieval rank function and associate the selected low-level features with query concepts, given the ground-truth ranking of the training samples. The proposed scheme addresses four major issues in CBIR to improve the retrieval accuracy: image feature extraction, selection and fusion, similarity measurements, the association of the low-level features with high-level concepts, and the generation of the rank function to support high-level semantic image retrieval. It models the relationship between semantic concepts and image features, and enables retrieval at the semantic level. We apply it to the problem of vertebra shape retrieval from a digitized spine x-ray image set collected by the second National Health and Nutrition Examination Survey (NHANES II). The experimental results show an improvement of up to 41.92% in the mean average precision (MAP) over conventional image similarity computation methods.
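The Ranking SVM step can be sketched with the standard pairwise transform (train a linear SVM on feature differences of unequally ranked pairs); this generic formulation, shown below, stands in for the authors' training setup, and the query grouping and regularization constant are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fit_rank_svm(X, relevance, query_ids):
    """Pairwise Ranking SVM sketch: for every pair of samples from the same
    query with different relevance, train a linear SVM on the feature
    difference; score(x) = w . x then serves as the learned rank function."""
    diffs, labels = [], []
    for q in np.unique(query_ids):
        idx = np.where(query_ids == q)[0]
        for i in idx:
            for j in idx:
                if relevance[i] > relevance[j]:
                    diffs.append(X[i] - X[j]); labels.append(1)
                    diffs.append(X[j] - X[i]); labels.append(-1)
    svm = LinearSVC(C=1.0).fit(np.array(diffs), np.array(labels))
    return svm.coef_.ravel()

# retrieval sketch: sort candidate vertebra shapes by w @ features, descending
```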
A hexagonal orthogonal-oriented pyramid as a model of image representation in visual cortex
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J., Jr.
1989-01-01
Retinal ganglion cells represent the visual image with a spatial code, in which each cell conveys information about a small region in the image. In contrast, cells of the primary visual cortex use a hybrid space-frequency code in which each cell conveys information about a region that is local in space, spatial frequency, and orientation. A mathematical model for this transformation is described. The hexagonal orthogonal-oriented quadrature pyramid (HOP) transform, which operates on a hexagonal input lattice, uses basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The basis functions, which are generated from seven basic types through a recursive process, form an image code of the pyramid type. The seven basis functions, six bandpass and one low-pass, occupy a point and a hexagon of six nearest neighbors on a hexagonal lattice. The six bandpass basis functions consist of three with even symmetry, and three with odd symmetry. At the lowest level, the inputs are image samples. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing square root of 7 larger than the previous level, so that the number of coefficients is reduced by a factor of seven at each level. In the biological model, the input lattice is the retinal ganglion cell array. The resulting scheme provides a compact, efficient code of the image and generates receptive fields that resemble those of the primary visual cortex.
Enhancement of low light level images using color-plus-mono dual camera.
Jung, Yong Ju
2017-05-15
In digital photography, the improvement of imaging quality in low light shooting is one of the users' needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separate image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies an adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental system of color-plus-mono camera, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving the image quality during low light shooting.
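A simplified sketch of the selective detail transfer (add the mono camera's high-frequency detail to the color image's luminance only where fusion is judged reliable) is shown below; the reliability mask abstracts the paper's dissimilarity/BJND analysis, the mono image is assumed registered to the color view, and the blur scale is an assumption.

```python
import cv2
import numpy as np

def fuse_color_mono(color_bgr, mono, reliable_mask, sigma=15):
    """Transfer mono detail into the color image's luminance channel only at
    pixels flagged reliable; unreliable pixels keep the original luminance
    to avoid fusion artifacts."""
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    luma = ycrcb[:, :, 0]
    mono_f = mono.astype(np.float32)
    detail = mono_f - cv2.GaussianBlur(mono_f, (0, 0), sigma)   # mono detail layer
    fused_luma = np.where(reliable_mask > 0, luma + detail, luma)
    ycrcb[:, :, 0] = np.clip(fused_luma, 0, 255)
    return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2BGR)
```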
Remote hardware-reconfigurable robotic camera
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.
2001-10-01
In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.
NASA Astrophysics Data System (ADS)
Yan, Hao; Cervino, Laura; Jia, Xun; Jiang, Steve B.
2012-04-01
While compressed sensing (CS)-based algorithms have been developed for the low-dose cone beam CT (CBCT) reconstruction, a clear understanding of the relationship between the image quality and imaging dose at low-dose levels is needed. In this paper, we qualitatively investigate this subject in a comprehensive manner with extensive experimental and simulation studies. The basic idea is to plot both the image quality and imaging dose together as functions of the number of projections and mAs per projection over the whole clinically relevant range. On this basis, a clear understanding of the tradeoff between the image quality and imaging dose can be achieved and optimal low-dose CBCT scan protocols can be developed to maximize the dose reduction while minimizing the image quality loss for various imaging tasks in image-guided radiation therapy (IGRT). Main findings of this work include (1) under the CS-based reconstruction framework, image quality has little degradation over a large range of dose variation. Image quality degradation becomes evident when the imaging dose (approximated with the x-ray tube load) is decreased below 100 total mAs. An imaging dose lower than 40 total mAs leads to a dramatic image degradation, and thus should be used cautiously. Optimal low-dose CBCT scan protocols likely fall in the dose range of 40-100 total mAs, depending on the specific IGRT applications. (2) Among different scan protocols at a constant low-dose level, the super sparse-view reconstruction with the projection number less than 50 is the most challenging case, even with strong regularization. Better image quality can be acquired with low mAs protocols. (3) The optimal scan protocol is the combination of a medium number of projections and a medium level of mAs/view. This is more evident when the dose is around 72.8 total mAs or below and when the ROI is a low-contrast or high-resolution object. Based on our results, the optimal number of projections is around 90 to 120. (4) The clinically acceptable lowest imaging dose level is task dependent. In our study, 72.8 mAs is a safe dose level for visualizing low-contrast objects, while 12.2 total mAs is sufficient for detecting high-contrast objects of diameter greater than 3 mm.
A selective deficit in the appreciation and recognition of brightness: brightness agnosia?
Nijboer, Tanja C W; Nys, Gudrun M S; van der Smagt, Maarten J; de Haan, Edward H F
2009-01-01
We report a patient with extensive brain damage in the right hemisphere who demonstrated a severe impairment in the appreciation of brightness. Acuity, contrast sensitivity as well as luminance discrimination were normal, suggesting her brightness impairment is not a mere consequence of low-level sensory impairments. The patient was not able to indicate the darker or the lighter of two grey squares, even though she was able to see that they differed. In addition, she could not indicate whether the lights in a room were switched on or off, nor was she able to differentiate between normal greyscale images and inverted greyscale images. As the patient recognised objects, colours, and shapes correctly, the impairment is specific for brightness. As low-level, sensory processing is normal, this specific deficit in the recognition and appreciation of brightness appears to be of a higher, cognitive level, the level of semantic knowledge. This appears to be the first report of 'brightness agnosia'.
NASA Astrophysics Data System (ADS)
Schuck, Miller Harry
Automotive head-up displays require compact, bright, and inexpensive imaging systems. In this thesis, a compact head-up display (HUD) utilizing liquid-crystal-on-silicon microdisplay technology is presented from concept to implementation. The thesis comprises three primary areas of HUD research: the specification, design and implementation of a compact HUD optical system; the development of a wafer planarization process to enhance reflective device brightness and light immunity; and the design, fabrication and testing of an inexpensive 640 x 512 pixel active matrix backplane intended to meet the HUD requirements. The thesis addresses the HUD problem at three levels: the systems level, the device level, and the materials level. At the systems level, the optical design of an automotive HUD must meet several competing requirements, including high image brightness, compact packaging, video-rate performance, and low cost. An optical system design which meets the competing requirements has been developed utilizing a fully-reconfigurable reflective microdisplay. The design consists of two optical stages, the first a projector stage which magnifies the display, and a second stage which forms the virtual image eventually seen by the driver. A key component of the optical system is a diffraction grating/field lens which forms a large viewing eyebox while reducing the optical system complexity. Image quality, biocular disparity, and luminous efficacy were analyzed, and results of the optical implementation are presented. At the device level, the automotive HUD requires a reconfigurable, video-rate, high resolution image source for applications such as navigation and night vision. The design of a 640 x 512 pixel active matrix backplane which meets the requirements of the HUD is described. The backplane was designed to produce digital field sequential color images at video rates utilizing fast switching liquid crystal as the modulation layer. The design methodology is discussed, and the example of a clock generator is described from design to implementation. Electrical and optical test results of the fabricated backplane are presented. At the materials level, a planarization method was developed to meet the stringent brightness requirements of automotive HUDs. The research efforts described here have resulted in a simple, low cost post-processing method for planarizing microdisplay substrates based on a spin-cast polymeric resin, benzocyclobutene (BCB). Six-fold reductions in substrate step height were accomplished with a single coating. Via masking and dry etching methods were developed. High reflectivity metal was deposited and patterned over the planarized substrate to produce high aperture pixel mirrors. The process is simple, rapid, and results in microdisplays better able to meet the stringent requirements of high brightness display systems. Methods and results of the post-processing are described.
A low-cost vector processor boosting compute-intensive image processing operations
NASA Technical Reports Server (NTRS)
Adorf, Hans-Martin
1992-01-01
Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP-boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation is presented of the standard Tarasko-Richardson-Lucy restoration algorithm on an Intel i860-based VP-board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.
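The Richardson-Lucy iteration named above is compact enough to sketch directly. The NumPy outline below shows the core multiplicative update using FFT-based convolution (echoing the standalone FFT benchmarks mentioned); the function and parameter names are illustrative rather than taken from the paper, and boundary handling is deliberately simplified.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    """Minimal Richardson-Lucy deconvolution via FFT-based convolution.

    observed : blurred, noisy image (2D array)
    psf      : point spread function, same shape as `observed` (padded, centered)
    """
    psf_ft = np.fft.rfft2(np.fft.ifftshift(psf))
    psf_ft_conj = np.conj(psf_ft)
    estimate = np.full_like(observed, observed.mean(), dtype=float)

    for _ in range(n_iter):
        # Blur the current estimate with the PSF
        blurred = np.fft.irfft2(np.fft.rfft2(estimate) * psf_ft, s=observed.shape)
        # Multiplicative update: back-project the ratio image with the mirrored PSF
        ratio = observed / (blurred + eps)
        correction = np.fft.irfft2(np.fft.rfft2(ratio) * psf_ft_conj, s=observed.shape)
        estimate *= correction
    return estimate
```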
Sepehrband, Farshid; Choupan, Jeiran; Caruyer, Emmanuel; Kurniawan, Nyoman D; Gal, Yaniv; Tieng, Quang M; McMahon, Katie L; Vegh, Viktor; Reutens, David C; Yang, Zhengyi
2014-01-01
We describe and evaluate a pre-processing method based on a periodic spiral sampling of diffusion-gradient directions for high angular resolution diffusion magnetic resonance imaging. Our pre-processing method incorporates prior knowledge about the acquired diffusion-weighted signal, facilitating noise reduction. Periodic spiral sampling of gradient direction encodings results in an acquired signal in each voxel that is pseudo-periodic, with characteristics that allow separation of low-frequency signal from high-frequency noise. Consequently, it enhances local reconstruction of the orientation distribution function used to define fiber tracks in the brain. Denoising with periodic spiral sampling was tested using synthetic data and in vivo human brain images. Both the signal-to-noise ratio and the accuracy of local reconstruction of fiber tracks were significantly improved using our method.
SU-E-I-01: Iterative CBCT Reconstruction with a Feature-Preserving Penalty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyu, Q; Li, B; Southern Medical University, Guangzhou
2015-06-15
Purpose: Low-dose CBCT is desired in various clinical applications. Iterative image reconstruction algorithms have shown advantages in suppressing noise in low-dose CBCT. However, due to the smoothness constraint enforced during the reconstruction process, edges may be blurred and image features may be lost in the reconstructed image. In this work, we proposed a new penalty design to preserve image features in images reconstructed by iterative algorithms. Methods: Low-dose CBCT is reconstructed by minimizing the penalized weighted least-squares (PWLS) objective function. Binary Robust Independent Elementary Features (BRIEF) of the image were integrated into the penalty of PWLS. BRIEF is a general-purpose point descriptor that can be used to identify important features of an image. In this work, the BRIEF distance of two neighboring pixels was used to weight the smoothing parameter in PWLS. For pixels with a large BRIEF distance, a weaker smoothness constraint is enforced. Image features will be better preserved through such a design. The performance of the PWLS algorithm with the BRIEF penalty was evaluated with a CatPhan 600 phantom. Results: The quality of the image reconstructed by the proposed PWLS-BRIEF algorithm is superior to that of the conventional PWLS method and the standard FDK method. At matched noise levels, edges in the PWLS-BRIEF reconstructed image are better preserved. Conclusion: This study demonstrated that the proposed PWLS-BRIEF algorithm has great potential for preserving image features in low-dose CBCT.
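To make the penalty design concrete, here is a small NumPy sketch of a PWLS-style update with a feature-weighted smoothness term. It is a stand-in only: it denoises an image rather than reconstructing from projection data, and it uses a generic per-pixel edge measure in place of BRIEF distances; all names and parameters are illustrative.

```python
import numpy as np

def pwls_denoise(noisy, weights, feature_dist, beta=0.2, sigma=0.5,
                 n_iter=100, step=0.1):
    """Sketch of a PWLS-style update with a feature-weighted smoothness penalty.

    noisy        : noisy image (stand-in for the real projection-domain problem)
    weights      : statistical weights of the data-fidelity term
    feature_dist : per-pixel feature distance to neighbours (large at edges);
                   the paper uses BRIEF distances, here any edge measure will do
    """
    x = noisy.astype(float).copy()
    # Large feature distance -> small smoothing weight -> edges smoothed less
    smooth_w = np.exp(-feature_dist / sigma)

    for _ in range(n_iter):
        grad = 2.0 * weights * (x - noisy)           # data-fidelity gradient
        for axis in (0, 1):                          # neighbour differences (wrap-around edges)
            fwd = np.roll(x, -1, axis=axis) - x
            bwd = x - np.roll(x, 1, axis=axis)
            grad += 2.0 * beta * smooth_w * (bwd - fwd)
        x -= step * grad
    return x
```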
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2003-08-01
Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding that is an interpretation of visual information in terms of such knowledge models. The ability of the human brain to emulate knowledge structures in the form of network-symbolic models is found. This implies an important paradigm shift in our knowledge about the brain, from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes like clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze by higher-level knowledge structures. Higher-level vision phenomena are results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification, and analogy together with higher-level model-based reasoning into a single framework. Such models do not require supercomputers. Based on such principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.
Image Processing Occupancy Sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Image Processing Occupancy Sensor, or IPOS, is a novel sensor technology developed at the National Renewable Energy Laboratory (NREL). The sensor is based on low-cost embedded microprocessors widely used by the smartphone industry and leverages mature open-source computer vision software libraries. Compared to traditional passive infrared and ultrasonic-based motion sensors currently used for occupancy detection, IPOS has shown the potential for improved accuracy and a richer set of feedback signals for occupant-optimized lighting, daylighting, temperature setback, ventilation control, and other occupancy and location-based uses. Unlike traditional passive infrared (PIR) or ultrasonic occupancy sensors, which infer occupancy based only on motion, IPOS uses digital image-based analysis to detect and classify various aspects of occupancy, including the presence of occupants regardless of motion, their number, location, and activity levels, as well as the illuminance properties of the monitored space. The IPOS software leverages the recent availability of low-cost embedded computing platforms, computer vision software libraries, and camera elements.
A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi
1997-01-01
A low-power high-speed smart sensor system based on a large format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design that is composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in a SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, such as image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation-per-second of computing power, which is a two-order-of-magnitude increase over that of state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.
NASA Astrophysics Data System (ADS)
Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias
2012-06-01
A Wireless Visual Sensor Network (WVSN) is an emerging field which combines an image sensor, an on-board computation unit, a communication component, and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. The energy budget in these networks is limited to the batteries, because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSN) and communication from the VSN to the server should consume as little energy as possible. Transmitting raw images wirelessly consumes a lot of energy and requires high communication bandwidth. Data compression methods reduce data efficiently and hence are effective in reducing communication cost in a WVSN. In this paper, we have compared the compression efficiency and complexity of six well known bi-level image compression methods. The focus is to determine the compression algorithms which can efficiently compress bi-level images and whose computational complexity is suitable for the computational platforms used in WVSNs. These results can be used as a road map for the selection of compression methods for different sets of constraints in WVSN.
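The trade-off being measured, compressed size versus algorithmic complexity, can be illustrated with a generic comparison harness. The sketch below does not reproduce the six methods evaluated in the paper; it simply compares a packed bi-level image against DEFLATE and a naive run-length encoder, which is the kind of tabulation such a study produces.

```python
import numpy as np
import zlib

def run_length_encode(bits):
    """Very simple 1D run-length encoder for a flattened bi-level image."""
    runs, count, current = [], 0, bits[0]
    for b in bits:
        if b == current and count < 255:
            count += 1
        else:
            runs.extend((current, count))
            current, count = b, 1
    runs.extend((current, count))
    return bytes(runs)

def compare_bilevel_compression(image):
    """Report sizes (in bytes) for a binary image under a few generic schemes."""
    bits = (image > 0).astype(np.uint8).ravel()
    packed = np.packbits(bits).tobytes()          # raw 1 bit/pixel baseline
    return {
        "raw_packed": len(packed),
        "deflate": len(zlib.compress(packed, level=9)),
        "rle": len(run_length_encode(bits)),
    }

# Example: a synthetic 256x256 bi-level image with a filled square
img = np.zeros((256, 256), dtype=np.uint8)
img[64:192, 64:192] = 1
print(compare_bilevel_compression(img))
```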
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Mathew; Marshall, Matthew J.; Miller, Erin A.
2014-08-26
Understanding the interactions of structured communities known as "biofilms" and other complex matrices is possible through X-ray micro tomography imaging of the biofilms. Feature detection and image processing for this type of data focus on efficiently identifying and segmenting biofilms and bacteria in the datasets. The datasets are very large and often require manual intervention due to low contrast between objects and high noise levels. Thus new software is required for the effective interpretation and analysis of the data. This work describes the development and application of software to analyze and visualize high resolution X-ray micro tomography datasets.
NASA Astrophysics Data System (ADS)
Mazza, F.; Da Silva, M. P.; Le Callet, P.; Heynderickx, I. E. J.
2015-03-01
Multimedia quality assessment has been an important research topic during the last decades. The original focus on artifact visibility has been extended over the years to aspects such as image aesthetics, interestingness and memorability. More recently, Fedorovskaya proposed the concept of 'image psychology': this concept focuses on additional quality dimensions related to human content processing. While these additional dimensions are very valuable in understanding preferences, it is very hard to define, isolate and measure their effect on quality. In this paper we continue our research on face pictures, investigating which image factors influence context perception. We collected perceived fit of a set of images to various content categories. These categories were selected based on current typologies in social networks. Logistic regression was adopted to model category fit based on image features. In this model we used both low level and high level features, the latter focusing on complex features related to image content. In order to extract these high level features, we relied on crowdsourcing, since computer vision algorithms are not yet sufficiently accurate for the features we needed. Our results underline the importance of some high level content features, e.g. the dress of the portrayed person and the scene setting, in categorizing images.
Szarka, Mate; Guttman, Andras
2017-10-17
We present the application of a smartphone anatomy based technology in the field of liquid phase bioseparations, particularly in capillary electrophoresis. A simple capillary electrophoresis system was built with LED induced fluorescence detection and a credit card sized minicomputer to prove the concept of real time fluorescent imaging (zone adjustable time-lapse fluorescence image processor) and separation controller. The system was evaluated by analyzing under- and overloaded aminopyrenetrisulfonate (APTS)-labeled oligosaccharide samples. The open source software based image processing tool allowed undistorted signal modulation (reprocessing) if the signal was inappropriate for the actual detection system settings (too low or too high). The novel smart detection tool for fluorescently labeled biomolecules greatly expands dynamic range and enables retrospective correction for injections with unsuitable signal levels without the necessity to repeat the analysis.
Raspberry Pi-powered imaging for plant phenotyping.
Tovar, Jose C; Hoyer, J Steen; Lin, Andy; Tielking, Allison; Callen, Steven T; Elizabeth Castillo, S; Miller, Michael; Tessman, Monica; Fahlgren, Noah; Carrington, James C; Nusinow, Dmitri A; Gehan, Malia A
2018-03-01
Image-based phenomics is a powerful approach to capture and quantify plant diversity. However, commercial platforms that make consistent image acquisition easy are often cost-prohibitive. To make high-throughput phenotyping methods more accessible, low-cost microcomputers and cameras can be used to acquire plant image data. We used low-cost Raspberry Pi computers and cameras to manage and capture plant image data. Detailed here are three different applications of Raspberry Pi-controlled imaging platforms for seed and shoot imaging. Images obtained from each platform were suitable for extracting quantifiable plant traits (e.g., shape, area, height, color) en masse using open-source image processing software such as PlantCV. This protocol describes three low-cost platforms for image acquisition that are useful for quantifying plant diversity. When coupled with open-source image processing tools, these imaging platforms provide viable low-cost solutions for incorporating high-throughput phenomics into a wide range of research programs.
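For readers without access to a full phenotyping pipeline, the kind of traits mentioned above (shape, area, height, color) can be pulled from a single image with generic tools. The sketch below uses OpenCV rather than PlantCV, and the green HSV threshold is a hypothetical stand-in for a properly calibrated segmentation.

```python
import cv2
import numpy as np

def simple_plant_traits(image_path):
    """Toy trait extraction (area, height, mean colour) from one RGB image.

    The protocol itself uses PlantCV; this is only a generic OpenCV stand-in to
    illustrate the kind of traits that can be pulled from Raspberry Pi images.
    """
    bgr = cv2.imread(image_path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Assume the plant is the dominant green region (hypothetical threshold)
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return {
        "area_px": int(np.count_nonzero(mask)),
        "height_px": int(ys.max() - ys.min()),
        "mean_bgr": bgr[mask > 0].mean(axis=0).tolist(),
    }
```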
1988-01-19
approach for the analysis of aerial images. In this approach image analysis is performed at three levels of abstraction, namely iconic or low-level... image analysis, symbolic or medium-level image analysis, and semantic or high-level image analysis. Domain dependent knowledge about prototypical urban
Comparison of lifetime-based methods for 2D phosphor thermometry in high-temperature environment
NASA Astrophysics Data System (ADS)
Peng, Di; Liu, Yingzheng; Zhao, Xiaofeng; Kim, Kyung Chun
2016-09-01
This paper discusses the currently available techniques for 2D phosphor thermometry, and compares the performance of two lifetime-based methods: high-speed imaging and the dual-gate method. High-speed imaging resolves luminescent decay with a fast frame rate, and has become a popular method for phosphor thermometry in recent years. But it has disadvantages such as high equipment cost and long data processing time, and it would fail at sufficiently high temperature due to a low signal-to-noise ratio and short lifetime. The dual-gate method only requires two images on the decay curve and therefore greatly reduces cost in hardware and processing time. A dual-gate method for phosphor thermometry has been developed and compared with the high-speed imaging method through both calibration and a jet impingement experiment. Measurement uncertainty has been evaluated for a temperature range of 473-833 K. The effects of several key factors on uncertainty have been discussed, including the luminescent signal level, the decay lifetime and the temperature sensitivity. The results show that both methods are valid for 2D temperature sensing within the given range. The high-speed imaging method shows less uncertainty at low temperatures, where the signal level and the lifetime are both sufficient, but its performance is degraded at higher temperatures due to a rapidly reduced signal and lifetime. For T > 750 K, the dual-gate method outperforms the high-speed imaging method thanks to its superiority in signal-to-noise ratio and temperature sensitivity. The dual-gate method has great potential for applications in high-temperature environments where the high-speed imaging method is not applicable.
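For a single-exponential decay sampled by two gates of equal width at delays t1 < t2, the ratio of the gated intensities depends only on the delay difference and the lifetime, I1/I2 = exp((t2 - t1)/tau), so tau = (t2 - t1)/ln(I1/I2). The sketch below applies this relation per pixel; the gate times and the calibration lookup are hypothetical, and real data would need background subtraction first.

```python
import numpy as np

def dual_gate_lifetime(img_gate1, img_gate2, t1, t2):
    """Per-pixel decay lifetime from two gated images of a single-exponential decay.

    Assumes both gates have the same width and start at delays t1 < t2 after the
    excitation pulse; then I1/I2 = exp((t2 - t1)/tau), independent of gate width.
    """
    ratio = img_gate1.astype(float) / np.maximum(img_gate2.astype(float), 1e-9)
    tau = (t2 - t1) / np.log(np.maximum(ratio, 1.0 + 1e-9))
    return tau

# Hypothetical use: lifetime map converted to temperature via a calibration curve
# (np.interp needs the calibration lifetimes in increasing order):
# tau_map = dual_gate_lifetime(I1, I2, t1=5e-6, t2=25e-6)
# T_map = np.interp(tau_map, tau_calibration, temperature_calibration)
```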
Local wavelet transform: a cost-efficient custom processor for space image compression
NASA Astrophysics Data System (ADS)
Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier
2002-11-01
Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that results in the same transformed images as those obtained by the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. The features of the LWT make it appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.
Uncooled infrared imaging using bimaterial microcantilever arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grbovic, Dragoslav; Lavrik, Nickolay V; Rajic, Slobodan
2006-01-01
We report on the fabrication and characterization of a microcantilever based uncooled focal plane array (FPA) for infrared imaging. By combining a streamlined design of microcantilever thermal transducers with a highly efficient optical readout, we minimized the fabrication complexity while achieving a competitive level of imaging performance. The microcantilever FPAs were fabricated using a straightforward fabrication process that involved only three photolithographic steps (i.e. three masks). A prototype IR imager was designed and constructed employing a simple optical readout based on a noncoherent low-power light source. The main figures of merit of the IR imager were found to be comparable to those of uncooled MEMS infrared detectors with a substantially higher degree of fabrication complexity. In particular, the NETD and the response time of the implemented MEMS IR detector were measured to be as low as 0.5 K and 6 ms, respectively. The potential of the implemented designs can also be concluded from the fact that the constructed prototype enabled IR imaging of close to room temperature objects without the use of any advanced data processing. The most unique and practically valuable feature of the implemented FPAs, however, is their scalability to high resolution formats, such as 2000x2000, without progressively growing device complexity and cost.
SU-F-J-54: Towards Real-Time Volumetric Imaging Using the Treatment Beam and KV Beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, M; Rozario, T; Liu, A
Purpose: Existing real-time imaging uses dual (orthogonal) kV beam fluoroscopies and may result in a significant amount of extra radiation to patients, especially for prolonged treatment cases. In addition, kV projections only provide 2D information, which is insufficient for in vivo dose reconstruction. We propose real-time volumetric imaging using prior knowledge of pre-treatment 4D images and real-time 2D transit data of the treatment beam and kV beam. Methods: The pre-treatment multi-snapshot volumetric images are used to simulate 2D projections of both the treatment beam and the kV beam, respectively, for each treatment field defined by the control point. During radiation delivery, the transit signals acquired by the electronic portal imaging device (EPID) are processed for every projection and compared with the pre-calculation by cross-correlation for phase matching and thus 3D snapshot identification, i.e., real-time volumetric imaging. The data processing involves taking logarithmic ratios of EPID signals with respect to the air scan to reduce modeling uncertainties in head scatter fluence and EPID response. Simulated 2D projections are also used to pre-calculate confidence levels in phase matching. Treatment beam projections that have a low confidence level either in pre-calculation or real-time acquisition will trigger kV beams so that complementary information can be exploited. In case both the treatment beam and the kV beam return low confidence in phase matching, a predicted phase based on linear regression will be generated. Results: Simulation studies indicated treatment beams provide sufficient confidence in phase matching for most cases. At times of low confidence from treatment beams, kV imaging provides sufficient confidence in phase matching due to its complementary configuration. Conclusion: The proposed real-time volumetric imaging utilizes the treatment beam and triggers kV beams for complementary information when the treatment beam alone does not provide sufficient confidence for phase matching. This strategy minimizes the use of extra radiation to patients. This project is partially supported by a Varian MRA grant.
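The per-projection matching step, a log-ratio against the air scan followed by normalized cross-correlation against the pre-computed phase projections, can be sketched as follows. Function and variable names are illustrative; a confidence below threshold is where the proposed strategy would trigger a kV acquisition.

```python
import numpy as np

def phase_match(measured, air_scan, precalc_projections, conf_threshold=0.9):
    """Match one measured EPID transit image to pre-computed phase projections.

    measured / air_scan  : 2D EPID frames
    precalc_projections  : dict mapping phase label -> pre-simulated log-ratio projection
    Returns (best_phase, score, confident); low confidence would trigger kV imaging.
    """
    # Log-ratio against the air scan reduces head-scatter / EPID-response modelling error
    signal = np.log(air_scan + 1e-6) - np.log(measured + 1e-6)
    signal = (signal - signal.mean()) / (signal.std() + 1e-12)

    scores = {}
    for phase, ref in precalc_projections.items():
        ref_n = (ref - ref.mean()) / (ref.std() + 1e-12)
        scores[phase] = float((signal * ref_n).mean())   # zero-lag normalized cross-correlation

    best_phase = max(scores, key=scores.get)
    confident = scores[best_phase] >= conf_threshold      # below this, fall back to kV imaging
    return best_phase, scores[best_phase], confident
```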
Modes of Visual Recognition and Perceptually Relevant Sketch-based Coding for Images
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.
1991-01-01
A review of visual recognition studies is used to define two levels of information requirements. These two levels are related to two primary subdivisions of the spatial frequency domain of images and reflect two distinct different physical properties of arbitrary scenes. In particular, pathologies in recognition due to cerebral dysfunction point to a more complete split into two major types of processing: high spatial frequency edge based recognition vs. low spatial frequency lightness (and color) based recognition. The former is more central and general while the latter is more specific and is necessary for certain special tasks. The two modes of recognition can also be distinguished on the basis of physical scene properties: the highly localized edges associated with reflectance and sharp topographic transitions vs. smooth topographic undulation. The extreme case of heavily abstracted images is pursued to gain an understanding of the minimal information required to support both modes of recognition. Here the intention is to define the semantic core of transmission. This central core of processing can then be fleshed out with additional image information and coding and rendering techniques.
High resolution image processing on low-cost microcomputers
NASA Technical Reports Server (NTRS)
Miller, R. L.
1993-01-01
Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide-range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.
Motmot, an open-source toolkit for realtime video acquisition and analysis.
Straw, Andrew D; Dickinson, Michael H
2009-07-22
Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Motmot enables realtime image processing and display using the Python computer language. In addition to the provided complete applications, the architecture allows the user to write relatively simple plugins, which can accomplish a variety of computer vision tasks and be integrated within larger software systems. The software is available at http://code.astraw.com/projects/motmot.
Low Voltage Low Light Imager and Photodetector
NASA Technical Reports Server (NTRS)
Nikzad, Shouleh (Inventor); Martin, Chris (Inventor); Hoenk, Michael E. (Inventor)
2013-01-01
Highly efficient, low energy, low light level imagers and photodetectors are provided. In particular, a novel class of Delta-Doped Electron Bombarded Array (DDEBA) photodetectors that will reduce the size, mass, power, complexity, and cost of conventional imaging systems while improving performance by using a thinned imager that is capable of detecting low-energy electrons, has high gain, and is of low noise.
NASA Astrophysics Data System (ADS)
Plaza, Antonio; Plaza, Javier; Paz, Abel
2010-10-01
Latest generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we have reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve the scalability and parallel performance. In this paper, we propose a new framework based on intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.
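As a concrete illustration of the kind of wavelet-based reduction applied to an individual pixel vector before it is exchanged between nodes, the sketch below uses PyWavelets to drop small detail coefficients. The wavelet, decomposition level, and keep fraction are illustrative assumptions, not the adaptive settings of the paper's framework.

```python
import numpy as np
import pywt

def compress_pixel_vector(spectrum, wavelet="db2", level=3, keep=0.1):
    """Lossy wavelet compression of one hyperspectral pixel vector before exchange.

    Keeps the approximation coefficients plus the largest `keep` fraction of
    detail coefficients; a receiving node reconstructs with pywt.waverec.
    """
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    details = np.concatenate([np.abs(c) for c in coeffs[1:]])
    thresh = np.quantile(details, 1.0 - keep)
    return [coeffs[0]] + [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs[1:]]

def decompress_pixel_vector(compressed, wavelet="db2", length=None):
    """Approximate reconstruction of the pixel vector on the receiving node."""
    rec = pywt.waverec(compressed, wavelet)
    return rec[:length] if length is not None else rec
```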
Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei
2015-01-01
Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low contrast images adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors, which are the threshold, the entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like the backtracking search algorithm, the differential search algorithm, the genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928
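The transformation function named above is easy to sketch in isolation: map intensities to [0, 1], apply the regularized incomplete Beta function, and rescale. In the paper the parameters (a, b) are searched by CS-PSO against a composite quality criterion; here they are fixed inputs and only the entropy ingredient of such a criterion is shown, so treat the snippet as illustrative.

```python
import numpy as np
from scipy.special import betainc

def beta_transform(image, a, b):
    """Global grey-level transform via the regularized incomplete Beta function."""
    lo, hi = float(image.min()), float(image.max())
    x = (image.astype(float) - lo) / (hi - lo + 1e-12)   # normalize to [0, 1]
    y = betainc(a, b, x)                                  # S-shaped intensity remapping
    return (y * 255.0).astype(np.uint8)

def entropy_fitness(enhanced):
    """One ingredient of a quality criterion: grey-level entropy of the result."""
    hist, _ = np.histogram(enhanced, bins=256, range=(0, 255), density=True)
    p = hist[hist > 0]
    return -np.sum(p * np.log2(p))
```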
Rotation Covariant Image Processing for Biomedical Applications
Reisert, Marco
2013-01-01
With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from the medical and biological sciences. PMID:23710255
NASA Astrophysics Data System (ADS)
Jain, Ameet K.; Taylor, Russell H.
2004-04-01
The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). Although US has many advantages over others, tracked US for Orthopedic Surgery has been researched by only a few authors. An important factor limiting the accuracy of tracked US to CT registration (1-3mm) has been the difficulty in determining the exact location of the bone surfaces in the US images (the response could range from 2-4mm). Thus it is crucial to localize the bone surface accurately from these images. Moreover conventional US imaging systems are known to have certain inherent inaccuracies, mainly due to the fact that the imaging model is assumed planar. This creates the need to develop a bone segmentation framework that can couple information from various post-processed spatially separated US images (of the bone) to enhance the localization of the bone surface. In this paper we discuss the various reasons that cause inherent uncertainties in the bone surface localization (in B-mode US images) and suggest methods to account for these. We also develop a method for automatic bone surface detection. To do so, we account objectively for the high-level understanding of the various bone surface features visible in typical US images. A combination of these features would finally decide the surface position. We use a Bayesian probabilistic framework, which strikes a fair balance between high level understanding from features in an image and the low level number crunching of standard image processing techniques. It also provides us with a mathematical approach that facilitates combining multiple images to augment the bone surface estimate.
Optimization of super-resolution processing using incomplete image sets in PET imaging.
Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R
2008-12-01
Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N x N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve similar performance in contrast and signal-to-noise ratio (SNR) as the SR image generated from a complete set of low-resolution images (CSR) using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purpose, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. Line profiles of the point source study showed that the three SR images exhibited similar signal amplitudes and FWHM. The NEMA/IEC study showed that the average difference in SNR among the three SR images was 2.1% with respect to one another and they contained similar noise structure. ISR-1 and ISR-2 can be used to replace CSR, thereby reducing the total SR processing time and memory storage while maintaining similar contrast, resolution, SNR, and noise structure.
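The combination step itself, stitching shifted low-resolution reconstructions onto a finer grid, can be illustrated for the simplest 2 x 2 case. This sketch is only didactic: it interleaves samples rather than performing the full SR estimation studied in the paper, and the data layout (a dict keyed by half-pixel shift) is our own convention.

```python
import numpy as np

def interleave_2x2(lowres_images):
    """Combine a 2x2 set of half-pixel-shifted low-resolution images.

    lowres_images[(dy, dx)] is the reconstruction whose grid is shifted by
    (dy, dx) half pixels; the samples are interleaved onto the doubled grid.
    """
    h, w = lowres_images[(0, 0)].shape
    hires = np.zeros((2 * h, 2 * w), dtype=float)
    for (dy, dx), img in lowres_images.items():
        hires[dy::2, dx::2] = img
    return hires
```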
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Editor); Schenker, Paul (Editor)
1987-01-01
The papers presented in this volume provide an overview of current research in both optical and digital pattern recognition, with a theme of identifying overlapping research problems and methodologies. Topics discussed include image analysis and low-level vision, optical system design, object analysis and recognition, real-time hybrid architectures and algorithms, high-level image understanding, and optical matched filter design. Papers are presented on synthetic estimation filters for a control system; white-light correlator character recognition; optical AI architectures for intelligent sensors; interpreting aerial photographs by segmentation and search; and optical information processing using a new photopolymer.
Combining image-processing and image compression schemes
NASA Technical Reports Server (NTRS)
Greenspan, H.; Lee, M.-C.
1995-01-01
An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.
Chao, Jerry; Ram, Sripad; Ward, E. Sally; Ober, Raimund J.
2014-01-01
The extraction of information from images acquired under low light conditions represents a common task in diverse disciplines. In single molecule microscopy, for example, techniques for superresolution image reconstruction depend on the accurate estimation of the locations of individual particles from generally low light images. In order to estimate a quantity of interest with high accuracy, however, an appropriate model for the image data is needed. To this end, we previously introduced a data model for an image that is acquired using the electron-multiplying charge-coupled device (EMCCD) detector, a technology of choice for low light imaging due to its ability to amplify weak signals significantly above its readout noise floor. Specifically, we proposed the use of a geometrically multiplied branching process to model the EMCCD detector’s stochastic signal amplification. Geometric multiplication, however, can be computationally expensive and challenging to work with analytically. We therefore describe here two approximations for geometric multiplication that can be used instead. The high gain approximation is appropriate when a high level of signal amplification is used, a scenario which corresponds to the typical usage of an EMCCD detector. It is an accurate approximation that is computationally more efficient, and can be used to perform maximum likelihood estimation on EMCCD image data. In contrast, the Gaussian approximation is applicable at all levels of signal amplification, but is only accurate when the initial signal to be amplified is relatively large. As we demonstrate, it can importantly facilitate the analysis of an information-theoretic quantity called the noise coefficient. PMID:25075263
Fan, Desheng; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Pan, Xuemei; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2015-04-10
A multiple-image authentication method with a cascaded multilevel architecture in the Fresnel domain is proposed, in which a synthetic encoded complex amplitude is first fabricated, and its real amplitude component is generated by iterative amplitude encoding, random sampling, and space multiplexing for the low-level certification images, while the phase component of the synthetic encoded complex amplitude is constructed by iterative phase information encoding and multiplexing for the high-level certification images. Then the synthetic encoded complex amplitude is iteratively encoded into two phase-type ciphertexts located in two different transform planes. During high-level authentication, when the two phase-type ciphertexts and the high-level decryption key are presented to the system and then the Fresnel transform is carried out, a meaningful image with good quality and a high correlation coefficient with the original certification image can be recovered in the output plane. Similar to the procedure of high-level authentication, in the case of low-level authentication with the aid of a low-level decryption key, no significant or meaningful information is retrieved, but it can result in a remarkable peak output in the nonlinear correlation coefficient of the output image and the corresponding original certification image. Therefore, the method realizes different levels of accessibility to the original certification image for different authority levels with the same cascaded multilevel architecture.
Image processing and data reduction of Apollo low light level photographs
NASA Technical Reports Server (NTRS)
Alvord, G. C.
1975-01-01
The removal of the lens-induced vignetting from a selected sample of the Apollo low light level photographs is discussed. The methods used were developed earlier. A study of the effect of noise on vignetting removal and of the comparability of the Apollo 35mm Nikon lens vignetting was also undertaken. The vignetting removal was successful to about 10% photometry, and noise has a severe effect on the useful photometric output data. Separate vignetting functions must be used for different flights, since the vignetting function varies from camera to camera in size and shape.
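The vignetting-removal step itself is a per-pixel division by the lens's relative-illumination function, with the caveat, noted above, that noise is amplified where the correction is large. A minimal sketch, with a hypothetical masking floor:

```python
import numpy as np

def remove_vignetting(image, vignetting, floor=0.05):
    """Divide out a lens vignetting function from a photometric image.

    vignetting : relative transmission map in (0, 1], normalized to 1 on axis
    floor      : pixels where the model falls below this are flagged, since the
                 correction (and hence the amplified noise) becomes very large there
    """
    corrected = image.astype(float) / np.maximum(vignetting, floor)
    corrected[vignetting < floor] = np.nan   # mark unusable corners
    return corrected
```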
Statistical distributions of ultra-low dose CT sinograms and their fundamental limits
NASA Astrophysics Data System (ADS)
Lee, Tzu-Cheng; Zhang, Ruoqiao; Alessio, Adam M.; Fu, Lin; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Low dose CT imaging is typically constrained to be diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime, where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically-principled image reconstruction methods that make full use of the raw data information. Since most statistical iterative reconstruction methods are based on the assumption that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data with different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We first used phantom measurements and calibrated simulation to reveal how the noise distribution deviates from the normal assumption under the ULD CT flux environment. In summary, our results indicate that there are three general regimes: (1) Diagnostic CT, where post-log data are well modeled by a normal distribution. (2) Low-dose CT, where the normal distribution remains a reasonable approximation and statistically-principled (post-log) methods that assume a normal distribution have an advantage. (3) An ULD regime that is photon-starved and where the quadratic approximation is no longer effective. For instance, a total integral density of 4.8 (ideal pi for 24 cm of water) for a 120 kVp, 0.5 mAs radiation source is the maximum pi value for which a definitive maximum likelihood value could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.
Detection of bone disease by hybrid SST-watershed x-ray image segmentation
NASA Astrophysics Data System (ADS)
Sanei, Saeid; Azron, Mohammad; Heng, Ong Sim
2001-07-01
Detection of diagnostic features from X-ray images is favorable due to the low cost of these images. Accurate detection of the bone metastasis region greatly assists physicians to monitor the treatment and to remove the cancerous tissue by surgery. A hybrid SST-watershed algorithm, here, efficiently detects the boundary of the diseased regions. The Shortest Spanning Tree (SST), based on graph theory, is one of the most powerful tools in grey level image segmentation. The method converts the images into arbitrary-shape closed segments of distinct grey levels. To do that, the image is initially mapped to a tree. Then, using the RSST algorithm, the image is segmented into a certain number of arbitrary-shaped regions. However, in fine segmentation, over-segmentation causes loss of objects of interest. In coarse segmentation, on the other hand, the SST-based method suffers from merging regions belonging to different objects. By applying the watershed algorithm, the large segments are divided into smaller regions based on the number of catchment basins for each segment. The process exploits a bi-level watershed concept to separate each multi-lobe region into a number of areas, each corresponding to an object (in our case a cancerous region of the bone), disregarding their homogeneity in grey level.
Researching on the process of remote sensing video imagery
NASA Astrophysics Data System (ADS)
Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan
Unmanned air vehicle remotely-sensed imagery at low altitude has the advantages of higher resolution, easy shooting, real-time access, etc. It has been widely used in mapping, target identification, and other fields in recent years. However, because of operational limitations, the video images are unstable, the targets move fast, and the shooting background is complex, so it is difficult to process the video images in this situation. In other fields, especially in the field of computer vision, research on video images is more extensive, which is very helpful for processing remotely-sensed imagery at low altitude. Based on this, this paper analyzes and summarizes a large body of video image processing work in different fields, including research purposes, data sources, and the pros and cons of the technology. Meanwhile, this paper explores the technical methods most suitable for low-altitude video image processing in remote sensing.
Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization
Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.
2014-01-01
High radiation dose in CT scans increases a lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, the low contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term is proposed to preferentially perform smoothing only on non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight to the original total variation norm. During the reconstruction process, the pixels at edges would be gradually identified and given small penalty weight. Our iterative algorithm is implemented on GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without edge preserving penalty are also presented for comparison purpose. The experimental results illustrate that both TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing the streaking artifacts and image noise under the low dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it can preserve more information of low contrast structures and therefore maintain acceptable spatial resolution. PMID:21860076
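For illustration, the edge-preserving idea described above, down-weighting the TV penalty where a pixel has been identified as an edge, can be sketched as a single gradient step of a weighted-TV denoising problem. This is a simplified 2D stand-in: the actual algorithm operates on undersampled projection data with a data-fidelity term posed by the x-ray projections, and the gradual edge-identification schedule is not reproduced here.

```python
import numpy as np

def edge_preserving_tv_step(x, data, edge_weight, beta=0.1, step=0.2, eps=1e-8):
    """One gradient step of a weighted-TV penalized least-squares denoising.

    edge_weight : per-pixel penalty weight in (0, 1]; pixels identified as edges
                  get a small weight so they are smoothed less.
    """
    gx = np.roll(x, -1, axis=1) - x
    gy = np.roll(x, -1, axis=0) - x
    mag = np.sqrt(gx**2 + gy**2 + eps)
    # Divergence of the weighted, normalized gradient field (TV subgradient)
    px, py = edge_weight * gx / mag, edge_weight * gy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    grad = (x - data) - beta * div
    return x - step * grad
```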
Embedded image processing engine using ARM cortex-M4 based STM32F407 microcontroller
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samaiya, Devesh, E-mail: samaiya.devesh@gmail.com
2014-10-06
Due to advances in low-cost, easily available, yet powerful hardware and the revolution in open source software, the urge to make newer, more interactive machines and electronic systems has increased manifold among engineers. To make systems more interactive, designers need easy-to-use sensor systems. Giving the boon of vision to machines was never easy; though it is not impossible these days, it is still difficult and expensive. This work presents a low cost, moderate performance, programmable image processing engine. This image processing engine is able to capture real time images, can store the images in permanent storage, and can perform preprogrammed image processing operations on the captured images.
NASA Astrophysics Data System (ADS)
Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting
2018-02-01
Image segmentation plays an important role in medical science. One application is multimodality imaging, especially the fusion of structural imaging with functional imaging, which includes CT, MRI and new types of imaging technology such as optical imaging to obtain functional images. The fusion process requires precisely extracted structural information in order to register the image to it. Here we used image enhancement and morphometry methods to extract accurate contours of different tissues such as skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM) on 5 fMRI head image datasets. Then we utilized a convolutional neural network to realize automatic segmentation of the images in a deep learning way. Such an approach greatly reduced the processing time compared to manual and semi-automatic segmentation and is of great importance for improving speed and accuracy as more and more samples are learned. The contours of the borders of different tissues on all images were accurately extracted and 3D visualized. This can be used in low-level light therapy and optical simulation software such as MCVM. We obtained a precise three-dimensional distribution of the brain, which offers doctors and researchers quantitative volume data and detailed morphological characterization for personal precision medicine of cerebral atrophy/expansion. We hope this technique can bring convenience to medical visualization and personalized medicine.
Waldinger, Robert J; Kensinger, Elizabeth A; Schulz, Marc S
2011-09-01
This study examines whether differences in late-life well-being are linked to how older adults encode emotionally valenced information. Using fMRI with 39 older adults varying in life satisfaction, we examined how viewing positive and negative images would affect activation and connectivity of an emotion-processing network. Participants engaged most regions within this network more robustly for positive than for negative images, but within the PFC this effect was moderated by life satisfaction, with individuals higher in satisfaction showing lower levels of activity during the processing of positive images. Participants high in satisfaction showed stronger correlations among network regions-particularly between the amygdala and other emotion processing regions-when viewing positive, as compared with negative, images. Participants low in satisfaction showed no valence effect. Findings suggest that late-life satisfaction is linked with how emotion-processing regions are engaged and connected during processing of valenced information. This first demonstration of a link between neural recruitment and late-life well-being suggests that differences in neural network activation and connectivity may account for the preferential encoding of positive information seen in some older adults.
Principal curve detection in complicated graph images
NASA Astrophysics Data System (ADS)
Liu, Yuncai; Huang, Thomas S.
2001-09-01
Finding principal curves in an image is an important low-level processing step in computer vision and pattern recognition. Principal curves are those curves in an image that represent boundaries or contours of objects of interest. In general, a principal curve should be smooth with a certain length constraint and allow either smooth or sharp turning. In this paper, we present a method that can efficiently detect principal curves in complicated map images. For a given feature image, obtained from edge detection of an intensity image or a thinning operation on a pictorial map image, the feature image is first converted to a graph representation. In the graph image domain, the operation of principal curve detection is performed to identify useful image features. Shortest path and directional deviation schemes are used in our algorithm for principal curve detection, which is proven to be very efficient when working with real graph images.
Real-time field programmable gate array architecture for computer vision
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar
2001-01-01
This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low-level image processing. The field programmable gate array (FPGA)-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped in an FPGA, but it can be implemented on dedicated very-large-scale-integration (VLSI) devices to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
Efficient Use of Video for 3d Modelling of Cultural Heritage Objects
NASA Astrophysics Data System (ADS)
Alsadik, B.; Gerke, M.; Vosselman, G.
2015-03-01
Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply than still image shooting in IBM techniques, because the latter needs thorough planning and proficiency. However, one is faced with three main problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient for decreasing the processing time and for creating a reliable textured 3D model compared with models produced by still imaging. Two experiments, modelling a building and a monument, are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find out the final predicted accuracy and the model level of detail. Depending on the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
Milewski, Robert J; Kumagai, Yutaro; Fujita, Katsumasa; Standley, Daron M; Smith, Nicholas I
2010-11-19
Macrophages represent the front lines of our immune system; they recognize and engulf pathogens or foreign particles thus initiating the immune response. Imaging macrophages presents unique challenges, as most optical techniques require labeling or staining of the cellular compartments in order to resolve organelles, and such stains or labels have the potential to perturb the cell, particularly in cases where incomplete information exists regarding the precise cellular reaction under observation. Label-free imaging techniques such as Raman microscopy are thus valuable tools for studying the transformations that occur in immune cells upon activation, both on the molecular and organelle levels. Due to extremely low signal levels, however, Raman microscopy requires sophisticated image processing techniques for noise reduction and signal extraction. To date, efficient, automated algorithms for resolving sub-cellular features in noisy, multi-dimensional image sets have not been explored extensively. We show that hybrid z-score normalization and standard regression (Z-LSR) can highlight the spectral differences within the cell and provide image contrast dependent on spectral content. In contrast to typical Raman imaging processing methods using multivariate analysis, such as singular value decomposition (SVD), our implementation of the Z-LSR method can operate nearly in real-time. In spite of its computational simplicity, Z-LSR can automatically remove background and bias in the signal, improve the resolution of spatially distributed spectral differences and enable sub-cellular features to be resolved in Raman microscopy images of mouse macrophage cells. Significantly, the Z-LSR processed images automatically exhibited subcellular architectures whereas SVD, in general, requires human assistance in selecting the components of interest. The computational efficiency of Z-LSR enables automated resolution of sub-cellular features in large Raman microscopy data sets without compromise in image quality or information loss in associated spectra. These results motivate further use of label free microscopy techniques in real-time imaging of live immune cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Sondrup; Gail Heath; Trent Armstrong
2011-04-01
This report presents the seismic refraction results from the depth to bed rock surveys for two areas being considered for the Remote-Handled Low-Level Waste (RH-LLW) disposal facility at the Idaho National Laboratory. The first area (Site 5) surveyed is located southwest of the Advanced Test Reactor Complex and the second (Site 34) is located west of Lincoln Boulevard near the southwest corner of the Idaho Nuclear Technology and Engineering Center (INTEC). At Site 5, large area and smaller-scale detailed surveys were performed. At Site 34, a large area survey was performed. The purpose of the surveys was to define the topography of the interface between the surficial alluvium and underlying basalt. Seismic data were first collected and processed using seismic refraction tomographic inversion. Three-dimensional images for both sites were rendered from the data to image the depth and velocities of the subsurface layers. Based on the interpreted top of basalt data at Site 5, a more detailed survey was conducted to refine depth to basalt. This report briefly covers relevant issues in the collection, processing and inversion of the seismic refraction data and in the imaging process. Included are the parameters for inversion and result rendering and visualization such as the inclusion of physical features. Results from the processing effort presented in this report include fence diagrams of the earth model for the large area surveys, and iso-velocity surfaces and cross sections from the detailed survey.
Image Feature Types and Their Predictions of Aesthetic Preference and Naturalness
Ibarra, Frank F.; Kardan, Omid; Hunter, MaryCarol R.; Kotabe, Hiroki P.; Meyer, Francisco A. C.; Berman, Marc G.
2017-01-01
Previous research has investigated ways to quantify visual information of a scene in terms of a visual processing hierarchy, i.e., making sense of the visual environment by segmentation and integration of elementary sensory input. Guided by this research, studies have developed categories for low-level visual features (e.g., edges, colors), high-level visual features (scene-level entities that convey semantic information such as objects), and how models of those features predict aesthetic preference and naturalness. For example, in Kardan et al. (2015a), 52 participants provided aesthetic preference and naturalness ratings, which are used in the current study, for 307 images of mixed natural and urban content. Kardan et al. (2015a) then developed a model using low-level features to predict aesthetic preference and naturalness and could do so with high accuracy. What has yet to be explored is the ability of higher-level visual features (e.g., horizon line position relative to viewer, geometry of building distribution relative to visual access) to predict aesthetic preference and naturalness of scenes, and whether higher-level features mediate some of the association between the low-level features and aesthetic preference or naturalness. In this study we investigated these relationships and found that low- and high-level features explain 68.4% of the variance in aesthetic preference ratings and 88.7% of the variance in naturalness ratings. Additionally, several high-level features mediated the relationship between the low-level visual features and aesthetic preference. In a multiple mediation analysis, the high-level feature mediators accounted for over 50% of the variance in predicting aesthetic preference. These results show that high-level visual features play a prominent role in predicting aesthetic preference, but do not completely eliminate the predictive power of the low-level visual features. These strong predictors provide powerful insights for future research relating to landscape and urban design with the aim of maximizing subjective well-being, which could lead to improved health outcomes on a larger scale. PMID:28503158
Researches on Position Detection for Vacuum Switch Electrode
NASA Astrophysics Data System (ADS)
Dong, Huajun; Guo, Yingjie; Li, Jie; Kong, Yihan
2018-03-01
The form and transformation of the vacuum arc are important factors influencing vacuum switch performance, and the dynamic separation of the electrodes is the chief factor affecting the transformation of vacuum arc forms. Consequently, detecting the position of the electrodes in arc images in order to calculate the separation is of great significance. However, the gray-level distribution of vacuum arc images is uneven: the gray level of the burning arc is high while that of the electrodes is low, and the form of the vacuum arc changes sharply; these problems hinder precise detection of the electrode position. In this paper, an algorithm for detecting the electrode position from vacuum arc images is proposed. Digital image processing techniques were used to analyse the vacuum switch arc images: the upper and lower edges were detected respectively, and linear fitting was then applied to the edge-detection results, the fitted lines giving the electrode positions; thus, accurate detection of the electrode position was realized. The experimental results show that the algorithm described in this paper successfully detected the upper and lower edges of the arc, and the electrode position was obtained through calculation.
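A minimal sketch of the edge-detection-plus-line-fitting idea described above, assuming a simple per-column intensity threshold stands in for the paper's edge detector; the threshold value and function name are illustrative, not the authors' implementation.

```python
import numpy as np

def electrode_positions(arc_img, thresh=50):
    # For each column, take the first and last pixels above the threshold as
    # the upper and lower boundaries, then fit a straight line to each set.
    h, w = arc_img.shape
    cols, upper, lower = [], [], []
    for x in range(w):
        rows = np.nonzero(arc_img[:, x] > thresh)[0]
        if rows.size:
            cols.append(x)
            upper.append(rows[0])    # first bright row from the top
            lower.append(rows[-1])   # last bright row (bottom edge)
    cols = np.asarray(cols, dtype=float)
    upper_fit = np.polyfit(cols, np.asarray(upper, dtype=float), 1)  # y = a*x + b
    lower_fit = np.polyfit(cols, np.asarray(lower, dtype=float), 1)
    # Mean vertical distance between the fitted lines approximates the separation
    separation = np.mean(np.polyval(lower_fit, cols) - np.polyval(upper_fit, cols))
    return upper_fit, lower_fit, separation
```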
Imaging atomic-level random walk of a point defect in graphene
NASA Astrophysics Data System (ADS)
Kotakoski, Jani; Mangler, Clemens; Meyer, Jannik C.
2014-05-01
Deviations from the perfect atomic arrangements in crystals play an important role in affecting their properties. Similarly, diffusion of such deviations is behind many microstructural changes in solids. However, observation of point defect diffusion is hindered both by the difficulties related to direct imaging of non-periodic structures and by the timescales involved in the diffusion process. Here, instead of imaging thermal diffusion, we stimulate and follow the migration of a divacancy through graphene lattice using a scanning transmission electron microscope operated at 60 kV. The beam-activated process happens on a timescale that allows us to capture a significant part of the structural transformations and trajectory of the defect. The low voltage combined with ultra-high vacuum conditions ensure that the defect remains stable over long image sequences, which allows us for the first time to directly follow the diffusion of a point defect in a crystalline material.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, W; Jung, J; Kang, Y
Purpose: To quantitatively analyze the influence that image processing for Moire elimination has in digital radiography by comparing images acquired with an optimized anti-scatter grid only and images acquired with software processing paired with a misaligned low-frequency grid. Methods: A special phantom, which does not create scattered radiation, was used to acquire non-grid reference images; these were acquired without any grids. One set of images was acquired with an optimized grid aligned to the detector pixels, and another set was acquired with a misaligned low-frequency grid paired with a Moire elimination processing algorithm. The X-ray technique used was based on consideration of the Bucky factor derived from the non-grid reference images. For evaluation, we compared the pixel intensity of the images acquired with grids to that of the reference images. Results: Compared to the images acquired with the optimized grid, images acquired with the Moire elimination processing algorithm showed a 10 to 50% lower mean contrast value in the ROI. Severe image distortion was found when the object's thickness measured 7 pixels or less. In this case, the contrast value measured from images acquired with the Moire elimination processing algorithm was under 30% of that taken from the reference image. Conclusion: This study shows the potential risk of Moire-compensated images in diagnosis. Images acquired with a misaligned low-frequency grid result in Moire noise, and the Moire compensation processing algorithm used to remove this noise actually caused image distortion. As a result, fractures and/or calcifications that are presented in only a few pixels may not be diagnosed properly. In future work, we plan to evaluate images acquired without a grid but based entirely on image processing, and the potential risks this possesses.
Molecular imaging and the unification of multilevel mechanisms and data in medical physics.
Nikiforidis, George C; Sakellaropoulos, George C; Kagadis, George C
2008-08-01
Molecular imaging (MI) constitutes a recently developed approach of imaging, where modalities and agents have been reinvented and used in novel combinations in order to expose and measure biologic processes occurring at molecular and cellular levels. It is an approach that bridges the gap between modalities acquiring data from high (e.g., computed tomography, magnetic resonance imaging, and positron-emitting isotopes) and low (e.g., PCR, microarrays) levels of biological organization. While data integration methodologies will lead to improved diagnostic and prognostic performance, interdisciplinary collaboration, triggered by MI, will result in a better understanding of the underlying biological mechanisms. Toward the development of a unifying theory describing these mechanisms, medical physicists can formulate new hypotheses, provide the physical constraints bounding them, and consequently design appropriate experiments. Their new scientific and working environment calls for interventions in their syllabi to educate scientists with enhanced capabilities for holistic views and synthesis.
NASA Astrophysics Data System (ADS)
Rigos, Anastasios; Vaiopoulos, Aristidis; Skianis, George; Tsekouras, George; Drakopoulos, Panos
2015-04-01
Tracking coastline changes is a crucial task in the context of coastal management, and synoptic remotely sensed data have become an essential tool for this purpose. In this work, and within the framework of the BeachTour project, we introduce a new method for shoreline extraction from high resolution satellite images. It was applied on two images taken by the WorldView-2 satellite (7 channels, 2 m resolution) during July 2011 and August 2014. The location is the well-known tourist destination of Laganas beach spanning 5 km along the southern part of Zakynthos Island, Greece. The atmospheric correction was performed with the ENVI FLAASH procedure and the final images were validated against hyperspectral field measurements. Using three channels (CH2=blue, CH3=green and CH7=near infrared) the Modified Redness Index image was calculated according to: MRI = (CH7)²/[CH2×(CH3)³]. MRI has the property that its value keeps increasing as the water becomes shallower. This is followed by an abrupt reduction trend at the location of the wet sand up to the point where the dry shore face begins. After that it remains low-valued throughout the beach zone. Images based on this index were used for the shoreline extraction process that included the following steps: a) On the MRI-based image, only an area near the shoreline was kept (this process is known as image masking). b) On the masked image the Canny edge detector operator was applied. c) Of all edges discovered in step (b) only the biggest was kept. d) If the line revealed in step (c) was unacceptable, i.e. not defining the shoreline or defining only part of it, then either more than one area from step (c) was kept, or on the MRI image the pixel values were bounded in a particular interval [B_low, B_high] and only the ones belonging in this interval were kept. Then, steps (a)-(d) were repeated. Using this method, which is still under development, we were able to extract the shoreline position and reveal its changes during the 3-year period. Although simple, with minimal human interaction and low-level programming, this method can provide precise coastlines with just one-pixel resolution. Images covering more locations are currently under consideration.
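The index computation and extraction steps above lend themselves to a short sketch. The following Python code assumes the band roles given in the abstract and uses illustrative Canny thresholds, so it is a sketch of the described workflow rather than the authors' implementation.

```python
import numpy as np
import cv2

def extract_shoreline(ch2, ch3, ch7, shore_mask, low=None, high=None):
    eps = 1e-6
    # Modified Redness Index: MRI = CH7^2 / (CH2 * CH3^3)
    mri = (ch7.astype(np.float64) ** 2) / (ch2.astype(np.float64) * ch3.astype(np.float64) ** 3 + eps)
    if low is not None and high is not None:
        mri = np.where((mri >= low) & (mri <= high), mri, 0.0)  # optional [B_low, B_high] bounding, step (d)
    mri = np.where(shore_mask, mri, 0.0)                        # step (a): image masking near the shore
    mri_u8 = cv2.normalize(mri, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(mri_u8, 50, 150)                          # step (b): Canny edge detection
    n, labels, stats, _ = cv2.connectedComponentsWithStats(edges, connectivity=8)
    if n <= 1:
        return np.zeros_like(edges)
    biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])        # step (c): keep only the biggest edge
    return (labels == biggest).astype(np.uint8)
```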
The effect of image processing on the detection of cancers in digital mammography.
Warren, Lucy M; Given-Wilson, Rosalind M; Wallis, Matthew G; Cooke, Julie; Halling-Brown, Mark D; Mackenzie, Alistair; Chakraborty, Dev P; Bosmans, Hilde; Dance, David R; Young, Kenneth C
2014-08-01
OBJECTIVE. The objective of our study was to investigate the effect of image processing on the detection of cancers in digital mammography images. MATERIALS AND METHODS. Two hundred seventy pairs of breast images (both breasts, one view) were collected from eight systems using Hologic amorphous selenium detectors: 80 image pairs showed breasts containing subtle malignant masses; 30 image pairs, biopsy-proven benign lesions; 80 image pairs, simulated calcification clusters; and 80 image pairs, no cancer (normal). The 270 image pairs were processed with three types of image processing: standard (full enhancement), low contrast (intermediate enhancement), and pseudo-film-screen (no enhancement). Seven experienced observers inspected the images, locating and rating regions they suspected to be cancer for likelihood of malignancy. The results were analyzed using a jackknife-alternative free-response receiver operating characteristic (JAFROC) analysis. RESULTS. The detection of calcification clusters was significantly affected by the type of image processing: The JAFROC figure of merit (FOM) decreased from 0.65 with standard image processing to 0.63 with low-contrast image processing (p = 0.04) and from 0.65 with standard image processing to 0.61 with film-screen image processing (p = 0.0005). The detection of noncalcification cancers was not significantly different among the image-processing types investigated (p > 0.40). CONCLUSION. These results suggest that image processing has a significant impact on the detection of calcification clusters in digital mammography. For the three image-processing versions and the system investigated, standard image processing was optimal for the detection of calcification clusters. The effect on cancer detection should be considered when selecting the type of image processing in the future.
A Study of Light Level Effect on the Accuracy of Image Processing-based Tomato Grading
NASA Astrophysics Data System (ADS)
Prijatna, D.; Muhaemin, M.; Wulandari, R. P.; Herwanto, T.; Saukat, M.; Sugandi, W. K.
2018-05-01
Image processing methods have been used in non-destructive tests of agricultural products. Compared to manual methods, image processing may produce more objective and consistent results. The image capturing box installed in the currently used tomato grading machine (TEP-4) is equipped with four fluorescent lamps to illuminate the processed tomatoes. Since the performance of any lamp decreases once its service time has exceeded its lifetime, it is predicted that this will affect tomato classification. The objective of this study was to determine the minimum light levels that affect classification accuracy. The study was conducted by varying the light level from minimum to maximum on tomatoes in the image capturing box and then investigating its effects on image characteristics. The results showed that light intensity affects two variables that are important for classification, namely the area and color of the captured image. The image processing program was able to correctly determine the weight and classification of tomatoes when the light level was 30 lx to 140 lx.
Modulation of microsaccades by spatial frequency during object categorization.
Craddock, Matt; Oppermann, Frank; Müller, Matthias M; Martinovic, Jasna
2017-01-01
The organization of visual processing into a coarse-to-fine information processing based on the spatial frequency properties of the input forms an important facet of the object recognition process. During visual object categorization tasks, microsaccades occur frequently. One potential functional role of these eye movements is to resolve high spatial frequency information. To assess this hypothesis, we examined the rate, amplitude and speed of microsaccades in an object categorization task in which participants viewed object and non-object images and classified them as showing either natural objects, man-made objects or non-objects. Images were presented unfiltered (broadband; BB) or filtered to contain only low (LSF) or high spatial frequency (HSF) information. This allowed us to examine whether microsaccades were modulated independently by the presence of a high-level feature - the presence of an object - and by low-level stimulus characteristics - spatial frequency. We found a bimodal distribution of saccades based on their amplitude, with a split between smaller and larger microsaccades at 0.4° of visual angle. The rate of larger saccades (⩾0.4°) was higher for objects than non-objects, and higher for objects with high spatial frequency content (HSF and BB objects) than for LSF objects. No effects were observed for smaller microsaccades (<0.4°). This is consistent with a role for larger microsaccades in resolving HSF information for object identification, and previous evidence that more microsaccades are directed towards informative image regions. Copyright © 2016 Elsevier Ltd. All rights reserved.
InGaAs focal plane array developments at III-V Lab
NASA Astrophysics Data System (ADS)
Rouvié, Anne; Reverchon, Jean-Luc; Huet, Odile; Djedidi, Anis; Robo, Jean-Alexandre; Truffer, Jean-Patrick; Bria, Toufiq; Pires, Mauricio; Decobert, Jean; Costard, Eric
2012-06-01
The SWIR detection band benefits from natural (sun, night glow, thermal radiation) or artificial (eye-safe lasers) photon sources combined with low atmospheric absorption and specific contrast compared to visible wavelengths. It gives the opportunity to address a large spectrum of applications such as defense and security (night vision, active imaging), space (earth observation), transport (automotive safety) or industry (non-destructive process control). InGaAs appears to be a good candidate to satisfy SWIR detection needs. The lattice matching with InP constitutes a double advantage for this material: attractive production capacity, and uncooled operation thanks to the low dark current level afforded by high-quality material. For a few years, III-V Lab has been studying InGaAs imagery, gathering expertise in InGaAs material growth and imaging technology from Alcatel-Lucent and Thales respectively, its two mother companies. This work has led to quickly bringing to market a 320x256 InGaAs module exhibiting high performance in terms of dark current, uniformity and quantum efficiency. In this paper, we present the latest developments achieved in our laboratory, mainly focused on increasing the pixel count to VGA format associated with a pixel pitch decrease (15 μm) and broadening the detection spectrum toward visible wavelengths. Depending on the targeted applications, different Read Out Integrated Circuits (ROICs) have been used. Low-noise ROICs have been developed by CEA LETI to fit the requirements of low light level imaging, whereas a logarithmic ROIC designed by NIT allows high-dynamic-range imaging adapted for automotive safety.
Multispectral imaging approach for simplified non-invasive in-vivo evaluation of gingival erythema
NASA Astrophysics Data System (ADS)
Eckhard, Timo; Valero, Eva M.; Nieves, Juan L.; Gallegos-Rueda, José M.; Mesa, Francisco
2012-03-01
Erythema is a common visual sign of gingivitis. In this work, a new and simple low-cost image capture and analysis method for erythema assessment is proposed. The method is based on digital still images of gingivae and applied on a pixel-by-pixel basis. Multispectral images are acquired with a conventional digital camera and multiplexed LED illumination panels at 460 nm and 630 nm peak wavelengths. An automatic workflow segments teeth from gingiva regions in the images and creates a map of local blood oxygenation levels, which relates to the presence of erythema. The map is computed from the ratio of the two spectral images. An advantage of the proposed approach is that the whole process is easy for dental health care professionals to manage in a clinical environment.
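A minimal sketch of the two-band ratio map, assuming the 630 nm/460 nm ratio direction and a pre-computed gingiva mask (calibration and the segmentation workflow itself are omitted):

```python
import numpy as np

def erythema_map(img_630nm, img_460nm, gingiva_mask):
    # Pixel-by-pixel ratio of the two spectral images; higher values are taken
    # to indicate stronger erythema in this simplified illustration.
    eps = 1e-6
    ratio = img_630nm.astype(np.float64) / (img_460nm.astype(np.float64) + eps)
    ratio[~gingiva_mask] = np.nan   # keep only gingiva pixels (teeth masked out)
    return ratio
```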
Modular low-light microscope for imaging cellular bioluminescence and radioluminescence
Kim, Tae Jin; Türkcan, Silvan; Pratx, Guillem
2017-01-01
Low-light microscopy methods are receiving increased attention as new applications have emerged. One such application is to allow longitudinal imaging of light-sensitive cells with no phototoxicity and no photobleaching of fluorescent biomarkers. Another application is for imaging signals that are inherently dim and undetectable using standard microscopy, such as bioluminescence, chemiluminescence, or radioluminescence. In this protocol, we provide instructions on how to build a modular low-light microscope (1-4 d) by coupling two microscope objective lenses, back-to-back from each other, using standard optomechanical components. We also provide directions on how to image dim signals such as radioluminescence (1-1.5 h), bioluminescence (∼30 min) and low-excitation fluorescence (∼15 min). In particular, radioluminescence microscopy is explained in detail as it is a newly developed technique, which enables the study of small molecule transport (e.g., radiolabeled drugs, metabolic precursors, and nuclear medicine contrast agents) by single cells without perturbing endogenous biochemical processes. In this imaging technique, a scintillator crystal (e.g., CdWO4) is placed in close proximity to the radiolabeled cells, where it converts the radioactive decays into optical flashes detectable using a sensitive camera. Using the image reconstruction toolkit provided in this protocol, the flashes can be reconstructed to yield a high-resolution image of the radiotracer distribution. With appropriate timing, the three aforementioned imaging modalities may be performed altogether on a population of live cells, allowing the user to perform parallel functional studies of cell heterogeneity at the single-cell level. PMID:28426025
Adlington, Rebecca L; Laws, Keith R; Gale, Tim M
2009-10-01
It has been suggested that object recognition in patients with Alzheimer's disease (AD) may be strongly influenced both by image format (e.g. colour vs. line-drawn) and by low-level visual impairments. To examine these notions, we tested basic visual functioning and picture naming in 41 AD patients and 40 healthy elderly controls. Picture naming was examined using 105 images representing a wide range of living and nonliving subcategories (from the Hatfield image test [HIT]: [Adlington, R. A., Laws, K. R., & Gale, T. M. (in press). The Hatfield image test (HIT): A new picture test and norms for experimental and clinical use. Journal of Clinical and Experimental Neuropsychology]), with each item presented in colour, greyscale, or line-drawn formats. Whilst naming for elderly controls improved linearly with the addition of surface detail and colour, AD patients showed no benefit from the addition of either surface information or colour. Additionally, controls showed a significant category by format interaction; however, the same profile did not emerge for AD patients. Finally, AD patients showed widespread and significant impairment on tasks of visual functioning, and low-level visual impairment was predictive of patient naming.
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Lau, W.; Baker, R.
2004-01-01
The onset of the southeast Asian monsoon during 1997 and 1998 was simulated with a coupled mesoscale atmospheric model (MM5) and a detailed land surface model. The rainfall results from the simulations were compared with observed satellite data from the TRMM (Tropical Rainfall Measuring Mission) TMI (TRMM Microwave Imager) and GPCP (Global Precipitation Climatology Project). The simulation with the land surface model captured basic signatures of the monsoon onset processes and associated rainfall statistics. The sensitivity tests indicated that land surface processes had a greater impact on the simulated rainfall results than that of a small sea surface temperature change during the onset period. In both the 1997 and 1998 cases, the simulations were significantly improved by including the land surface processes. The results indicated that land surface processes played an important role in modifying the low-level wind field over two major branches of the circulation: the southwest low-level flow over the Indo-China peninsula and the northern cold front intrusion from southern China. The surface sensible and latent heat exchange between the land and atmosphere modified the low-level temperature distribution and gradient, and therefore the low-level wind field. The more realistic forcing of the sensible and latent heat from the detailed land surface model improved the monsoon rainfall and associated wind simulation. The model results will be compared to the simulation of the 6-7 May 2000 Missouri flash flood event. In addition, the impact of model initialization and land surface treatment on timing, intensity, and location of extreme precipitation will be examined.
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Wang, Y.; Lau, W.; Baker, R. D.
2004-01-01
The onset of the southeast Asian monsoon during 1997 and 1998 was simulated with a coupled mesoscale atmospheric model (MM5) and a detailed land surface model. The rainfall results from the simulations were compared with observed satellite data from the TRMM (Tropical Rainfall Measuring Mission) TMI (TRMM Microwave Imager) and GPCP (Global Precipitation Climatology Project). The simulation with the land surface model captured basic signatures of the monsoon onset processes and associated rainfall statistics. The sensitivity tests indicated that land surface processes had a greater impact on the simulated rainfall results than that of a small sea surface temperature change during the onset period. In both the 1997 and 1998 cases, the simulations were significantly improved by including the land surface processes. The results indicated that land surface processes played an important role in modifying the low-level wind field over two major branches of the circulation: the southwest low-level flow over the Indo-China peninsula and the northern cold front intrusion from southern China. The surface sensible and latent heat exchange between the land and atmosphere modified the low-level temperature distribution and gradient, and therefore the low-level wind field. The more realistic forcing of the sensible and latent heat from the detailed land surface model improved the monsoon rainfall and associated wind simulation. The model results will be compared to the simulation of the 6-7 May 2000 Missouri flash flood event. In addition, the impact of model initialization and land surface treatment on timing, intensity, and location of extreme precipitation will be examined.
Ageing and proton irradiation damage of a low voltage EMCCD in a CMOS process
NASA Astrophysics Data System (ADS)
Dunford, A.; Stefanov, K.; Holland, A.
2018-02-01
Electron Multiplying Charge Coupled Devices (EMCCDs) have revolutionised low light level imaging, providing highly sensitive detection capabilities. Implementing Electron Multiplication (EM) in Charge Coupled Devices (CCDs) can increase the Signal to Noise Ratio (SNR) and lead to further developments in low light level applications such as improvements in image contrast and single photon imaging. Demand has grown for EMCCD devices with properties traditionally restricted to Complementary Metal-Oxide-Semiconductor (CMOS) image sensors, such as lower power consumption and higher radiation tolerance. However, EMCCDs are known to experience an ageing effect, such that the gain gradually decreases with time. This paper presents results detailing EM ageing in an Electron Multiplying Complementary Metal-Oxide-Semiconductor (EMCMOS) device and its effect on several device characteristics such as Charge Transfer Inefficiency (CTI) and thermal dark signal. When operated at room temperature an average decrease in gain of over 20% after an operational period of 175 hours was detected. With many image sensors deployed in harsh radiation environments, the radiation hardness of the device following proton irradiation was also tested. This paper presents the results of a proton irradiation completed at the Paul Scherrer Institut (PSI) at a 10 MeV equivalent fluence of 4.15×10¹⁰ protons/cm². The pre-irradiation characterisation, irradiation methodology and post-irradiation results are detailed, demonstrating an increase in dark current and a decrease in its activation energy. Finally, this paper presents a comparison of the damage caused by EM gain ageing and proton irradiation.
Software-based high-level synthesis design of FPGA beamformers for synthetic aperture imaging.
Amaro, Joao; Yiu, Billy Y S; Falcao, Gabriel; Gomes, Marco A C; Yu, Alfred C H
2015-05-01
Field-programmable gate arrays (FPGAs) can potentially be configured as beamforming platforms for ultrasound imaging, but a long design time and skilled expertise in hardware programming are typically required. In this article, we present a novel approach to the efficient design of FPGA beamformers for synthetic aperture (SA) imaging via the use of software-based high-level synthesis techniques. Software kernels (coded in OpenCL) were first developed to stage-wise handle SA beamforming operations, and their corresponding FPGA logic circuitry was emulated through a high-level synthesis framework. After design space analysis, the fine-tuned OpenCL kernels were compiled into register transfer level descriptions to configure an FPGA as a beamformer module. The processing performance of this beamformer was assessed through a series of offline emulation experiments that sought to derive beamformed images from SA channel-domain raw data (40-MHz sampling rate, 12-bit resolution). With 128 channels, our FPGA-based SA beamformer can achieve 41 frames per second (fps) processing throughput (3.44×10⁸ pixels per second for a frame size of 256 × 256 pixels) at 31.5 W power consumption (1.30 fps/W power efficiency). It utilized 86.9% of the FPGA fabric and operated at a 196.5 MHz clock frequency (after optimization). Based on these findings, we anticipate that FPGA and high-level synthesis can together foster rapid prototyping of real-time ultrasound processor modules at low power consumption budgets.
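The beamforming operation that the OpenCL kernels stage-wise implement can be written as a plain delay-and-sum reference in Python. This simplified version (single transmit element, no apodization or interpolation, assumed geometry and sampling parameters) is for illustration only and is not the authors' FPGA pipeline.

```python
import numpy as np

def sa_beamform(rf, fs, c, tx_pos, rx_pos, xs, zs):
    """Delay-and-sum for one synthetic-aperture transmission.

    rf: (n_rx, n_samples) channel data, fs: sampling rate [Hz], c: sound speed [m/s],
    tx_pos: transmit element x-position [m], rx_pos: (n_rx,) receive x-positions [m],
    xs, zs: lateral and axial pixel coordinates [m].
    """
    n_rx, n_samp = rf.shape
    image = np.zeros((zs.size, xs.size))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            d_tx = np.hypot(x - tx_pos, z)                 # transmit path length
            d_rx = np.hypot(x - rx_pos, z)                 # receive path per channel
            idx = np.round((d_tx + d_rx) / c * fs).astype(int)
            valid = idx < n_samp
            image[iz, ix] = rf[np.arange(n_rx)[valid], idx[valid]].sum()
    return image
```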
Improving human object recognition performance using video enhancement techniques
NASA Astrophysics Data System (ADS)
Whitman, Lucy S.; Lewis, Colin; Oakley, John P.
2004-12-01
Atmospheric scattering causes significant degradation in the quality of video images, particularly when imaging over long distances. The principal problem is the reduction in contrast due to scattered light. It is known that when the scattering particles are not too large compared with the imaging wavelength (i.e. Mie scattering) then high spatial resolution information may be contained within a low-contrast image. Unfortunately this information is not easily perceived by a human observer, particularly when using a standard video monitor. A secondary problem is the difficulty of achieving a sharp focus since automatic focus techniques tend to fail in such conditions. Recently several commercial colour video processing systems have become available. These systems use various techniques to improve image quality in low contrast conditions whilst retaining colour content. These systems produce improvements in subjective image quality in some situations, particularly in conditions of haze and light fog. There is also some evidence that video enhancement leads to improved ATR performance when used as a pre-processing stage. Psychological literature indicates that low contrast levels generally lead to a reduction in the performance of human observers in carrying out simple visual tasks. The aim of this paper is to present the results of an empirical study on object recognition in adverse viewing conditions. The chosen visual task was vehicle number plate recognition at long ranges (500 m and beyond). Two different commercial video enhancement systems are evaluated using the same protocol. The results show an increase in effective range with some differences between the different enhancement systems.
Animal Detection in Natural Images: Effects of Color and Image Database
Zhu, Weina; Drewes, Jan; Gegenfurtner, Karl R.
2013-01-01
The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a bigger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used. PMID:24130744
Influence of signal intensity non-uniformity on brain volumetry using an atlas-based method.
Goto, Masami; Abe, Osamu; Miyati, Tosiaki; Kabasawa, Hiroyuki; Takao, Hidemasa; Hayashi, Naoto; Kurosu, Tomomi; Iwatsubo, Takeshi; Yamashita, Fumio; Matsuda, Hiroshi; Mori, Harushi; Kunimatsu, Akira; Aoki, Shigeki; Ino, Kenji; Yano, Keiichi; Ohtomo, Kuni
2012-01-01
Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials.
Influence of Signal Intensity Non-Uniformity on Brain Volumetry Using an Atlas-Based Method
Abe, Osamu; Miyati, Tosiaki; Kabasawa, Hiroyuki; Takao, Hidemasa; Hayashi, Naoto; Kurosu, Tomomi; Iwatsubo, Takeshi; Yamashita, Fumio; Matsuda, Hiroshi; Mori, Harushi; Kunimatsu, Akira; Aoki, Shigeki; Ino, Kenji; Yano, Keiichi; Ohtomo, Kuni
2012-01-01
Objective Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Materials and Methods Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. Results A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. Conclusion The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials. PMID:22778560
Design of signal reception and processing system of embedded ultrasonic endoscope
NASA Astrophysics Data System (ADS)
Li, Ming; Yu, Feng; Zhang, Ruiqiang; Li, Yan; Chen, Xiaodong; Yu, Daoyin
2009-11-01
The embedded ultrasonic endoscope, based on an embedded microprocessor and an embedded real-time operating system, sends a micro ultrasonic probe into the coelom through the biopsy channel of the electronic endoscope to obtain the histological features of lesions in the digestive organs by rotary scanning, and acquires pictures of the alimentary canal mucosal surface. At the same time, ultrasonic signals are processed by the signal reception and processing system, forming images of the full histology of the digestive organs. The signal reception and processing system is an important component of the embedded ultrasonic endoscope. However, the traditional design, which uses multi-level amplifiers and special digital processing circuits to implement signal reception and processing, no longer satisfies the high-performance, miniaturization and low-power requirements of embedded systems, and the high noise introduced by the multi-level amplifiers makes the extraction of small signals difficult. Therefore, this paper presents a method of signal reception and processing based on a double variable-gain amplifier and an FPGA, increasing the flexibility and dynamic range of the signal reception and processing system, improving the system noise level, and reducing power consumption. Finally, we set up the embedded experimental system, using a transducer with a center frequency of 8 MHz to scan membrane samples, and display the image of the ultrasonic echo reflected by each layer of the membrane at a frame rate of 5 Hz, verifying the correctness of the system.
Imaging Arrays With Improved Transmit Power Capability
Zipparo, Michael J.; Bing, Kristin F.; Nightingale, Kathy R.
2010-01-01
Bonded multilayer ceramics and composites incorporating low-loss piezoceramics have been applied to arrays for ultrasound imaging to improve acoustic transmit power levels and to reduce internal heating. Commercially available hard PZT from multiple vendors has been characterized for microstructure, ability to be processed, and electroacoustic properties. Multilayers using the best materials demonstrate the tradeoffs compared with the softer PZT5-H typically used for imaging arrays. Three-layer PZT4 composites exhibit an effective dielectric constant that is three times that of single layer PZT5H, a 50% higher mechanical Q, a 30% lower acoustic impedance, and only a 10% lower coupling coefficient. Application of low-loss multilayers to linear phased and large curved arrays results in equivalent or better element performance. A 3-layer PZT4 composite array achieved the same transmit intensity at 40% lower transmit voltage and with a 35% lower face temperature increase than the PZT-5 control. Although B-mode images show similar quality, acoustic radiation force impulse (ARFI) images show increased displacement for a given drive voltage. An increased failure rate for the multilayers following extended operation indicates that further development of the bond process will be necessary. In conclusion, bonded multilayer ceramics and composites allow additional design freedom to optimize arrays and improve the overall performance for increased acoustic output while maintaining image quality. PMID:20875996
Model-Based Building Detection from Low-Cost Optical Sensors Onboard Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Karantzalos, K.; Koutsourakis, P.; Kalisperakis, I.; Grammatikopoulos, L.
2015-08-01
Automated and cost-effective building detection at ultra-high spatial resolution is of major importance for various engineering and smart city applications. To this end, in this paper, a model-based building detection technique has been developed that is able to extract and reconstruct buildings from UAV aerial imagery acquired with low-cost imaging sensors. In particular, the developed approach, through advanced structure from motion, bundle adjustment and dense image matching, computes a DSM and a true orthomosaic from the numerous GoPro images, which are characterised by important geometric distortions and a fish-eye effect. An unsupervised multi-region graph-cut segmentation and a rule-based classification are responsible for delivering the initial multi-class classification map. The DTM is then calculated based on inpainting and a mathematical morphology process. A data fusion process between the buildings detected from the DSM/DTM and the classification map feeds a grammar-based building reconstruction, and the scene buildings are extracted and reconstructed. Preliminary experimental results appear quite promising, with the quantitative evaluation indicating detection rates at object level of 88% regarding correctness and above 75% regarding detection completeness.
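One step in such a pipeline, deriving a DTM (and a normalized height layer for building candidates) from the DSM, can be sketched with a grey-scale morphological opening; the structuring-element size and height threshold below are assumptions, and the inpainting step mentioned above is omitted.

```python
import numpy as np
from scipy.ndimage import grey_opening

def dtm_from_dsm(dsm, window=25):
    # Opening suppresses objects (buildings, trees) narrower than the window,
    # leaving an approximation of the bare-earth surface.
    return grey_opening(dsm, size=(window, window))

def building_candidates(dsm, window=25, height_thresh=2.5):
    # Normalized DSM: heights above the estimated ground surface.
    above_ground = dsm - dtm_from_dsm(dsm, window)
    return above_ground > height_thresh
```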
Fast algorithm of low power image reformation for OLED display
NASA Astrophysics Data System (ADS)
Lee, Myungwoo; Kim, Taewhan
2014-04-01
We propose a fast algorithm of low-power image reformation for organic light-emitting diode (OLED) displays. The proposed algorithm scales the image histogram so as to reduce power consumption on an OLED display by remapping the gray levels of the pixels based on a fast analysis of the histogram of the input image, while maintaining the contrast of the image. The key idea is that a large number of gray levels are never used in the images, and these unused gray levels can be effectively exploited to reduce power consumption. On the other hand, to maintain the image contrast, the gray-level remapping takes into account the size of the objects in the image to which each gray level is applied, that is, remapping only slightly the gray levels belonging to objects of large size. Through experiments with 24 Kodak images, it is shown that our proposed algorithm is able to reduce the power consumption by 10% even with 9% contrast enhancement. Our algorithm runs in linear time, so it can be applied to moving pictures with high resolution.
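A hedged sketch of the histogram-driven remapping idea follows; the paper's object-size analysis is approximated here by per-gray-level pixel counts, and the maximum shift is an assumed parameter, so this illustrates the principle rather than reproducing the published algorithm.

```python
import numpy as np

def remap_gray_levels(img, max_shift=16):
    # img: 8-bit grayscale image. Gray levels that cover many pixels (large
    # objects) are shifted little; rarely used or unused levels are shifted
    # more, lowering emissive (OLED) power while limiting visible contrast change.
    hist = np.bincount(img.ravel(), minlength=256)
    weight = 1.0 - hist / (hist.max() + 1e-9)        # 0 for the most used level, ~1 for unused levels
    lut = np.clip(np.arange(256, dtype=np.float64) - max_shift * weight, 0, 255)
    return lut.astype(np.uint8)[img]
```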
Low-power laser effects at the single-cell level: a confocal microscopy study
NASA Astrophysics Data System (ADS)
Alexandratou, Eleni; Yova, Dido M.; Atlamazoglou, Vassilis; Handris, Panagiotis; Kletsas, Dimitris; Loukas, Spyros
2000-11-01
Confocal microscopy was used for irradiation and observation of the same area of interest, allowing the imaging of low-power laser effects on subcellular components and functions at the single-cell level. Coverslip cultures of human fetal foreskin fibroblasts (HFFF2) were placed in a small incubation chamber for in vivo microscopic observation. Cells were stimulated by the 647 nm line of the argon-krypton laser of the confocal microscope (0.1 mW/cm²). Membrane permeability, mitochondrial membrane potential (ΔΨm), intracellular pHi, calcium alterations and nuclear chromatin accessibility were monitored at different times after irradiation using specific fluorescent vital probes. Images were stored on the computer and quantitative evaluation was performed using image-processing software. After irradiation, influx and efflux of the appropriate dyes monitored changes in cell membrane permeability. Laser irradiation caused alkalization of the cytosolic pHi and an increase of the mitochondrial membrane potential (ΔΨm). Temporary global Ca2+ responses were also observed. No such effects were noted in microscopic fields other than the irradiated ones. No toxic effects were observed during the time course of the experiment.
Leube, Dirk T; Yoon, Hyo Woon; Rapp, Alexander; Erb, Michael; Grodd, Wolfgang; Bartels, Mathias; Kircher, Tilo T J
2003-05-22
Perception of upright faces relies on configural processing. Therefore, recognition of inverted faces, compared to upright faces, is impaired. In a functional magnetic resonance imaging experiment we investigated the neural correlate of a face inversion task. Thirteen healthy subjects were presented with an equal number of upright and inverted faces, alternating with a low-level baseline consisting of an upright and an inverted picture of an abstract symbol. Brain activation was calculated for upright minus inverted faces. For this differential contrast, we found a signal change in the right superior temporal sulcus and right insula. Configural properties are processed in a network comprising the right superior temporal and insular cortex.
Sun, Xiaofei; Shi, Lin; Luo, Yishan; Yang, Wei; Li, Hongpeng; Liang, Peipeng; Li, Kuncheng; Mok, Vincent C T; Chu, Winnie C W; Wang, Defeng
2015-07-28
Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters may be used for scanning different subjects or the same subject at different times, which may result in large intensity variations. This intensity variation will greatly undermine the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. In this work, we proposed a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was determined and treated as the high-quality reference image. Then the histogram of the low-quality image was normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities of the image are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality input image is stretched to match the histogram of the reference image, so that the intensity range in the normalized image will also lie between LIR and HIR. We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared this with the existing intensity normalization method. The results validate that our histogram normalization framework achieves better results in all the experiments. It is also demonstrated that the brain template with normalization preprocessing is of higher quality than the template with no normalization processing. We have proposed a histogram-based MRI intensity normalization method. The method can normalize scans which were acquired on different MRI units. We have validated that the method can greatly improve image analysis performance. Furthermore, it is demonstrated that with the help of our normalization method, we can create a higher quality Chinese brain template.
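The two normalization steps can be sketched as follows, assuming standard CDF-based histogram matching for the HN step; the LIR and HIR values are supplied by the caller.

```python
import numpy as np

def intensity_scaling(ref, lir, hir):
    # Step 1 (IS): rescale the reference image intensities to [LIR, HIR].
    ref = ref.astype(np.float64)
    return lir + (ref - ref.min()) * (hir - lir) / (ref.max() - ref.min())

def histogram_normalization(src, ref_scaled):
    # Step 2 (HN): map each source intensity to the reference intensity with
    # the same cumulative histogram value (classic histogram matching).
    src = src.astype(np.float64)
    s_vals, s_counts = np.unique(src.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(ref_scaled.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref_scaled.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return np.interp(src.ravel(), s_vals, mapped).reshape(src.shape)
```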
New solutions and technologies for uncooled infrared imaging
NASA Astrophysics Data System (ADS)
Rollin, Joël.; Diaz, Frédéric; Fontaine, Christophe; Loiseaux, Brigitte; Lee, Mane-Si Laure; Clienti, Christophe; Lemonnier, Fabrice; Zhang, Xianghua; Calvez, Laurent
2013-06-01
The military uncooled infrared market is driven by the continued cost reduction of focal plane arrays whilst maintaining high standards of sensitivity and steering towards smaller pixel sizes. As a consequence, new optical solutions are called for. Two approaches can come into play: the bottom-up option consists in allocating improvements to each contributor, while the top-down process relies instead on an overall optimization of the complete imaging chain. The University of Rennes I, with Thales Angénieux alongside, has been working over the past decade, through French MOD funding, on low-cost alternative infrared materials based upon chalcogenide glasses. Special care has been devoted to the enhancement of their mechanical properties and their ability to be moulded into complex shapes. The development of new manufacturing processes capable of better yields for the raw materials will be addressed, too. Beyond mere lens budget cuts, a wavefront coding process can ease a global optimization. This technique gives a way of relaxing optical constraints or upgrading thermal device performance through an increase in depth of focus and desensitization against temperature drifts: it combines image processing and the use of smart optical components. Thales achievements in these topics will be highlighted, and the trade-off between image quality correction levels and low-consumption, real-time processing, as might be required in hands-free night vision devices, will be emphasized. It is worth mentioning that both approaches lean deeply on each other.
SkySat-1: very high-resolution imagery from a small satellite
NASA Astrophysics Data System (ADS)
Murthy, Kiran; Shearn, Michael; Smiley, Byron D.; Chau, Alexandra H.; Levine, Josh; Robinson, M. Dirk
2014-10-01
This paper presents details of the SkySat-1 mission, which is the first microsatellite-class commercial earth-observation system to generate sub-meter resolution panchromatic imagery, in addition to sub-meter resolution 4-band pan-sharpened imagery. SkySat-1 was built and launched for an order of magnitude lower cost than similarly performing missions. The low-cost design enables the deployment of a large imaging constellation that can provide imagery with both high temporal resolution and high spatial resolution. One key enabler of the SkySat-1 mission was simplifying the spacecraft design and instead relying on ground-based image processing to achieve high performance at the system level. The imaging instrument consists of a custom-designed high-quality optical telescope and commercially available high frame rate CMOS image sensors. While each individually captured raw image frame shows moderate quality, ground-based image processing algorithms improve the raw data by combining data from multiple frames to boost image signal-to-noise ratio (SNR) and decrease the ground sample distance (GSD) in a process Skybox calls "digital TDI". Careful quality assessment and tuning of the spacecraft, payload, and algorithms was necessary to generate high-quality panchromatic, multispectral, and pan-sharpened imagery. Furthermore, the framing sensor configuration enabled the first commercial High-Definition full-frame rate panchromatic video to be captured from space, with approximately 1 meter ground sample distance. Details of the SkySat-1 imaging instrument and ground-based image processing system are presented, as well as an overview of the work involved with calibrating and validating the system. Examples of raw and processed imagery are shown, and the raw imagery is compared to pre-launch simulated imagery used to tune the image processing algorithms.
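The core benefit of combining frames ("digital TDI") can be illustrated with a toy frame-stacking sketch; the real pipeline also registers and resamples the moving frames, which is omitted here, so this only shows the noise-averaging effect.

```python
import numpy as np

def stack_frames(frames):
    """frames: (N, H, W) array of co-registered raw frames."""
    # Averaging N frames reduces zero-mean noise by roughly sqrt(N).
    return frames.astype(np.float64).mean(axis=0)

# Toy demonstration of the SNR gain
rng = np.random.default_rng(0)
truth = np.full((128, 128), 100.0)
frames = truth + rng.normal(0, 10, size=(16, 128, 128))    # sigma = 10 per frame
stacked = stack_frames(frames)
print(np.std(frames[0] - truth), np.std(stacked - truth))  # ~10 vs ~2.5 (10/sqrt(16))
```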
Modeling global scene factors in attention
NASA Astrophysics Data System (ADS)
Torralba, Antonio
2003-07-01
Models of visual attention have focused predominantly on bottom-up approaches that ignored structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. 2003 Optical Society of America
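As a rough illustration of how a global, low-level summary of an image can be computed before any object search, the sketch below pools oriented-gradient energy over a coarse spatial grid. This is only a stand-in for the Gabor/spectral "gist" features used in contextual-cueing models; the resulting vector could be fed to a regressor trained to predict likely object presence, location and scale.

```python
import numpy as np

def gist_descriptor(img, grid=4, n_orient=8):
    """Coarse global descriptor: oriented-gradient energy pooled on a grid.

    A simplified stand-in for the spectral/Gabor features of gist-style
    models; the vector summarizes the global scene configuration.
    """
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientation in [0, pi)
    h, w = img.shape
    desc = []
    for i in range(grid):
        for j in range(grid):
            m = mag[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            a = ang[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            hist, _ = np.histogram(a, bins=n_orient, range=(0, np.pi), weights=m)
            desc.extend(hist)
    desc = np.asarray(desc)
    return desc / (np.linalg.norm(desc) + 1e-9)
```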
NASA Astrophysics Data System (ADS)
Watanabe, Takara; Enomoto, Ryoji; Muraishi, Hiroshi; Katagiri, Hideaki; Kagaya, Mika; Fukushi, Masahiro; Kano, Daisuke; Satoh, Wataru; Takeda, Tohoru; Tanaka, Manobu M.; Tanaka, Souichi; Uchida, Tomohisa; Wada, Kiyoto; Wakamatsu, Ryo
2018-02-01
We have developed an omnidirectional gamma-ray imaging Compton camera for environmental monitoring at low levels of radiation. The camera consisted of only six 3.5-cm CsI(Tl) scintillator cubes, each of which was read out by a super-bialkali photomultiplier tube (PMT). Our camera enables the visualization of the position of gamma-ray sources in all directions (∼4π sr) over a wide energy range between 300 and 1400 keV. The angular resolution (σ) was found to be ∼11°, which was realized using an image-sharpening technique. A high detection efficiency of 18 cps/(µSv/h) for 511 keV (1.6 cps/MBq at 1 m) was achieved, indicating the capability of this camera to visualize hotspots in areas with low-radiation-level contamination from the order of µSv/h down to natural background levels. Our proposed technique can easily be used as a low-radiation-level imaging monitor in radiation control areas, such as medical and accelerator facilities.
Shortcomings of low-cost imaging systems for viewing computed radiographs.
Ricke, J; Hänninen, E L; Zielinski, C; Amthauer, H; Stroszczynski, C; Liebig, T; Wolf, M; Hosten, N
2000-01-01
To assess potential advantages of a new PC-based viewing tool featuring image post-processing for viewing computed radiographs on low-cost hardware (PC) with a common display card and color monitor, and to evaluate the effect of using color versus monochrome monitors. Computed radiographs of a statistical phantom were viewed on a PC, with and without post-processing (spatial frequency and contrast processing), employing a monochrome or a color monitor. Findings were compared with viewing on a radiological workstation and evaluated with ROC analysis. Image post-processing significantly improved the perception of low-contrast details irrespective of the monitor used. No significant difference in perception was observed between monochrome and color monitors. Review at the radiological workstation was superior to review on the PC with image processing. The lower-quality hardware (graphics card and monitor) used in low-cost PCs negatively affects the perception of low-contrast details in computed radiographs. In this situation, the use of spatial frequency and contrast processing is highly recommended. No significant quality gain was observed for the high-end monochrome monitor compared to the color display; however, the color monitor was more strongly affected by high ambient illumination.
Duncum, A J F; Atkins, K J; Beilharz, F L; Mundy, M E
2016-01-01
Individuals with body dysmorphic disorder (BDD) and clinically concerning body-image concern (BIC) appear to possess abnormalities in the way they perceive visual information in the form of a bias towards local visual processing. As inversion interrupts normal global processing, forcing individuals to process locally, an upright-inverted stimulus discrimination task was used to investigate this phenomenon. We examined whether individuals with nonclinical, yet high levels of BIC would show signs of this bias, in the form of reduced inversion effects (i.e., increased local processing). Furthermore, we assessed whether this bias appeared for general visual stimuli or specifically for appearance-related stimuli, such as faces and bodies. Participants with high-BIC (n = 25) and low-BIC (n = 30) performed a stimulus discrimination task with upright and inverted faces, scenes, objects, and bodies. Unexpectedly, the high-BIC group showed an increased inversion effect compared to the low-BIC group, indicating perceptual abnormalities may not be present as local processing biases, as originally thought. There was no significant difference in performance across stimulus types, signifying that any visual processing abnormalities may be general rather than appearance-based. This has important implications for whether visual processing abnormalities are predisposing factors for BDD or develop throughout the disorder.
Neurons in the human hippocampus and amygdala respond to both low- and high-level image properties
Cabrales, Elaine; Wilson, Michael S.; Baker, Christopher P.; Thorp, Christopher K.; Smith, Kris A.; Treiman, David M.
2011-01-01
A large number of studies have demonstrated that structures within the medial temporal lobe, such as the hippocampus, are intimately involved in declarative memory for objects and people. Although these items are abstractions of the visual scene, specific visual details can change the speed and accuracy of their recall. By recording from 415 neurons in the hippocampus and amygdala of human epilepsy patients as they viewed images drawn from 10 image categories, we showed that the firing rates of 8% of these neurons encode image illuminance and contrast, low-level properties not directly pertinent to task performance, whereas in 7% of the neurons, firing rates encode the category of the item depicted in the image, a high-level property pertinent to the task. This simultaneous representation of high- and low-level image properties within the same brain areas may serve to bind separate aspects of visual objects into a coherent percept and allow episodic details of objects to influence mnemonic performance. PMID:21471400
Broadband seismic: case study modeling and data processing
NASA Astrophysics Data System (ADS)
Cahyaningtyas, M. B.; Bahar, A.
2018-03-01
Seismic data with a wide range of frequencies is needed because bandwidth is closely related to resolution and to the depth of the target. Low frequencies provide deeper penetration for the imaging of deep targets. In addition, the wider the frequency bandwidth, the sharper the wavelet. A sharp wavelet is responsible for high-resolution imaging and is very helpful for resolving thin beds. As a result, the demand for broadband seismic data is rising, and it spurs the development of broadband seismic technology in the oil and gas industry. An obstacle frequently encountered in marine seismic data is the ghost, which limits the frequency bandwidth contained in the data and renders it band-limited. To reduce the ghost effect and acquire broadband seismic data, many approaches are employed, both in acquisition and in processing. One acquisition technique is the multi-level streamer, in which streamers are towed at several depths. A multi-level streamer yields data whose ghost notches appear at different positions in the frequency domain. If the ghost notches do not overlap, summation of the multi-level streamer data reduces the ghost effect. The result of the multi-level streamer data processing shows that the ghost notches in the frequency domain are indeed reduced.
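For vertical incidence and a free-surface reflection coefficient of -1, the receiver ghost of a streamer towed at depth d multiplies the spectrum by |1 - exp(-i2πf·2d/c)|, producing notches at f = nc/2d. The minimal sketch below shows why summing data from two tow depths fills in the notches; the depths and water velocity are illustrative assumptions.

```python
import numpy as np

def ghost_response(freq_hz, depth_m, c=1500.0):
    """Receiver-ghost amplitude spectrum for a streamer towed at depth_m.

    Assumes vertical incidence and a free-surface reflection coefficient
    of -1, giving notches at f = n * c / (2 * depth_m).
    """
    delay = 2.0 * depth_m / c
    return np.abs(1.0 - np.exp(-2j * np.pi * freq_hz * delay))

freqs = np.linspace(0.0, 125.0, 1000)
shallow = ghost_response(freqs, 15.0)   # notches every 50 Hz
deep = ghost_response(freqs, 25.0)      # notches every 30 Hz
combined = 0.5 * (shallow + deep)       # summation of multi-level data
# Where the notches do not coincide, 'combined' stays well above zero,
# illustrating how multi-level streamer summation reduces the ghost effect.
```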
NASA Astrophysics Data System (ADS)
Turpin, Terry M.; Lafuse, James L.
1993-02-01
ImSyn™ is an image synthesis technology, developed and patented by Essex Corporation. ImSyn™ can provide compact, low cost, and low power solutions to some of the most difficult image synthesis problems existing today. The inherent simplicity of ImSyn™ enables the manufacture of low cost and reliable photonic systems for imaging applications ranging from airborne reconnaissance to doctor's office ultrasound. The initial application of ImSyn™ technology has been to SAR processing; however, it has a wide range of applications such as: image correlation, image compression, acoustic imaging, x-ray tomography (CAT, PET, SPECT), magnetic resonance imaging (MRI), microscopy, and range-doppler mapping (extended TDOA/FDOA). This paper describes ImSyn™ in terms of synthetic aperture microscopy and then shows how the technology can be extended to ultrasound and synthetic aperture radar. The synthetic aperture microscope (SAM) enables high resolution three dimensional microscopy with greater dynamic range than real aperture microscopes. SAM produces complex image data, enabling the use of coherent image processing techniques. Most importantly, SAM produces the image data in a form that is easily manipulated by a digital image processing workstation.
Babu, Harish; Lagman, Carlito; Kim, Terrence T.; Grode, Marshall; Johnson, J. Patrick; Drazin, Doniel
2017-01-01
Background: Bertolotti's syndrome is characterized by enlargement of the transverse process at the most caudal lumbar vertebra with a pseudoarticulation between the transverse process and sacral ala. Here, we describe the use of intraoperative three-dimensional image-guided navigation in the resection of anomalous transverse processes in two patients with Bertolotti's syndrome. Case Descriptions: Two patients diagnosed with Bertolotti's syndrome who had undergone the above-mentioned procedure were identified. The patients were 17- and 38-years-old, and presented with severe, chronic low back pain that was resistant to conservative treatment. Imaging revealed lumbosacral transitional vertebrae at the level of L5-S1, which was consistent with Bertolotti's syndrome. Injections of the pseudoarticulations resulted in only temporary symptomatic relief. Thus, the patients subsequently underwent O-arm neuronavigational resection of the bony defects. Both patients experienced immediate pain resolution (documented on the postoperative notes) and remained asymptomatic 1 year later. Conclusion: Intraoperative three-dimensional imaging and navigation guidance facilitated the resection of anomalous transverse processes in two patients with Bertolotti's syndrome. Excellent outcomes were achieved in both patients. PMID:29026672
Novelli, M D; Barreto, E; Matos, D; Saad, S S; Borra, R C
1997-01-01
The authors present experimental results on the computerized quantification of tissue structures involved in the reparative process of colonic anastomoses performed with manual suture and with a biofragmentable ring. The quantified variables in this study were: oedema fluid, myofiber tissue, blood vessels and cellular nuclei. Image processing software developed at the Laboratório de Informática Dedicado à Odontologia (LIDO) was used to quantify the pathognomonic alterations of the inflammatory process in colonic anastomoses performed in 14 dogs. The results were compared with those obtained through traditional diagnosis by two pathologists as a counterproof. The criteria for these diagnoses were defined in levels (absent, light, moderate and intense) which were compared with the analysis performed by the computer. There was a statistically significant difference between the two techniques: the biofragmentable ring technique exhibited less oedema fluid, more organized myofiber tissue and a higher number of elongated cellular nuclei than the manual suture technique. The analysis of histometric variables through computational image processing was considered efficient and powerful for quantifying the main tissular inflammatory and reparative changes.
NASA Technical Reports Server (NTRS)
Wang, Yansen; Tao, W.-K.; Lau, K.-M.; Wetzel, Peter J.
2003-01-01
The onset of the southeast Asian monsoon during 1997 and 1998 was simulated with a coupled mesoscale atmospheric model (MM5) and a detailed land surface model. The rainfall results from the simulations were compared with observed satellite data from the TRMM (Tropical Rainfall Measuring Mission) TMI (TRMM Microwave Imager) and GPCP (Global Precipitation Climatology Project). The simulation with the land surface model captured the basic signatures of the monsoon onset processes and the associated rainfall statistics. The sensitivity tests indicated that land surface processes had a greater impact on the simulated rainfall results than a small sea surface temperature change during the onset period. In both the 1997 and 1998 cases, the simulations were significantly improved by including the land surface processes. The results indicated that land surface processes played an important role in modifying the low-level wind field over two major branches of the circulation: the southwest low-level flow over the Indo-China peninsula and the northern cold front intrusion from southern China. The surface sensible and latent heat exchange between the land and atmosphere modified the low-level temperature distribution and gradient, and therefore the low-level wind field. The more realistic forcing of the sensible and latent heat from the detailed land surface model improved the monsoon rainfall and associated wind simulation.
Vonasek, Erica
2015-01-01
Microbial pathogen infiltration in fresh leafy greens is a significant food safety risk factor. In various postharvest operations, vacuum cooling is a critical process for maintaining the quality of fresh produce. The overall goal of this study was to evaluate the risk of vacuum cooling-induced infiltration of Escherichia coli O157:H7 into lettuce using multiphoton microscopy. Multiphoton imaging was chosen as the method to locate E. coli O157:H7 within an intact lettuce leaf due to its high spatial resolution, low background fluorescence, and near-infrared (NIR) excitation source compared to those of conventional confocal microscopy. The variables vacuum cooling, surface moisture, and leaf side were evaluated in a three-way factorial study with E. coli O157:H7 on lettuce. A total of 188 image stacks were collected. The images were analyzed for E. coli O157:H7 association with stomata and E. coli O157:H7 infiltration. The quantitative imaging data were statistically analyzed using analysis of variance (ANOVA). The results indicate that the low-moisture condition led to an increased risk of microbial association with stomata (P < 0.05). Additionally, the interaction between vacuum cooling levels and moisture levels led to an increased risk of infiltration (P < 0.05). This study also demonstrates the potential of multiphoton imaging for improving sensitivity and resolution of imaging-based measurements of microbial interactions with intact leaf structures, including infiltration. PMID:26475109
Recognition of Roasted Coffee Bean Levels using Image Processing and Neural Network
NASA Astrophysics Data System (ADS)
Nasution, T. H.; Andayani, U.
2017-03-01
Roasted coffee beans exhibit characteristic appearances at each roast level, yet many people cannot distinguish the roast levels reliably. In this research, we propose a method to recognize the coffee bean roast level from digital images by processing the image and classifying it with a backpropagation neural network. The steps consist of image acquisition, pre-processing, feature extraction using the Gray Level Co-occurrence Matrix (GLCM) method, and finally normalization of the extracted features using decimal scaling. The decimal-scaled feature values serve as the input to the backpropagation neural network classifier. The results show that the proposed method is able to identify the coffee bean roast level with an accuracy of 97.5%.
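A minimal sketch of the feature side of such a pipeline — a gray-level co-occurrence matrix computed for one pixel offset, a few Haralick-style statistics, and decimal-scaling normalization — is shown below. The quantization level, offset and chosen statistics are assumptions; the paper's exact settings and network architecture are not reproduced here.

```python
import numpy as np

def glcm(gray, dx=1, dy=0, levels=8):
    """Gray Level Co-occurrence Matrix for one offset, on a quantized image."""
    q = (gray.astype(np.float64) / 256.0 * levels).astype(int).clip(0, levels - 1)
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = q.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """A few Haralick-style texture statistics from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

def decimal_scaling(features):
    """Decimal-scaling normalization: divide each feature column by the
    smallest power of ten that brings its values into (-1, 1)."""
    features = np.atleast_2d(np.asarray(features, dtype=np.float64))
    k = np.ceil(np.log10(np.abs(features).max(axis=0) + 1e-12))
    return features / (10.0 ** np.maximum(k, 0))
```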
Retina Image Vessel Segmentation Using a Hybrid CGLI Level Set Method
Chen, Meizhu; Li, Jichun; Zhang, Encai
2017-01-01
As a nonintrusive method, retina imaging provides a better way to diagnose ophthalmologic diseases. Extracting the vessel profile automatically from the retina image is an important step in analyzing retina images. A novel hybrid active contour model is proposed in this paper to segment the fundus image automatically. It combines the signed pressure force function introduced by the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) model with the local intensity property introduced by the Local Binary Fitting (LBF) model to overcome the difficulty posed by low contrast in the segmentation process. It is more robust to the initial condition than traditional methods and is more easily implemented than supervised vessel extraction methods. The proposed segmentation method was evaluated on two public datasets, DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (Structured Analysis of the Retina), with an average accuracy of 0.9390 (0.7358 sensitivity, 0.9680 specificity) on DRIVE and an average accuracy of 0.9409 (0.7449 sensitivity, 0.9690 specificity) on STARE. The experimental results show that our method is effective and is also robust to some kinds of pathological images compared with traditional level set methods. PMID:28840122
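The SBGFRLS-style evolution that the hybrid model builds on can be sketched in a few lines: the signed pressure force (SPF) is formed from the mean intensities inside and outside the current contour, the level set is advanced by spf·α·|∇φ|, and Gaussian smoothing replaces re-initialization. This is a generic sketch of that ingredient under assumed parameter values, not the authors' full hybrid CGLI formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sbgfrls_step(phi, img, alpha=20.0, dt=1.0, sigma=1.5):
    """One iteration of an SBGFRLS-style level-set evolution.

    spf(I) is the signed pressure force built from the mean intensities
    inside (c1) and outside (c2) the current contour; the level set is
    regularized by Gaussian smoothing instead of re-initialization.
    """
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0
    c2 = img[~inside].mean() if (~inside).any() else 0.0
    spf = img - 0.5 * (c1 + c2)
    spf = spf / (np.abs(spf).max() + 1e-9)
    gy, gx = np.gradient(phi)
    grad_mag = np.hypot(gx, gy)
    phi = phi + dt * alpha * spf * grad_mag
    phi = np.sign(phi)                  # selective binary step
    return gaussian_filter(phi, sigma)  # Gaussian regularization
```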
Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.
Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe
2015-07-01
Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored with current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) are sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, both in terms of accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprosthesis. We suggest that this method-that is, localization of targets of interest in the scene-may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer.
Ashtiani, Matin N; Kheradpisheh, Saeed R; Masquelier, Timothée; Ganjtabesh, Mohammad
2017-01-01
The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., superordinate, basic, and subordinate levels. One important question is to identify the "entry" level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over two others. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects in each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that the different spatial frequency information had different effects on object categorization in each level. In the absence of high frequency information, subordinate and basic level categorization are performed less accurately, while the superordinate level is performed well. This means that low frequency information is sufficient for superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid the ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level to basic (resp. subordinate) level is mainly due to the computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization in finer levels depends more on these higher spatial frequencies).
Advancing Bag-of-Visual-Words Representations for Lesion Classification in Retinal Images
Pires, Ramon; Jelinek, Herbert F.; Wainer, Jacques; Valle, Eduardo; Rocha, Anderson
2014-01-01
Diabetic Retinopathy (DR) is a complication of diabetes that can lead to blindness if not readily discovered. Automated screening algorithms have the potential to improve identification of patients who need further medical attention. However, the identification of lesions must be accurate to be useful for clinical application. The bag-of-visual-words (BoVW) algorithm employs a maximum-margin classifier in a flexible framework that is able to detect the most common DR-related lesions such as microaneurysms, cotton-wool spots and hard exudates. BoVW makes it possible to bypass the need for pre- and post-processing of the retinographic images, as well as the need for specific ad hoc techniques for the identification of each type of lesion. An extensive evaluation of the BoVW model, using three large retinograph datasets (DR1, DR2 and Messidor) with different resolutions and collected by different healthcare personnel, was performed. The results demonstrate that the BoVW classification approach can identify different lesions within an image without having to utilize different algorithms for each lesion, reducing processing time and providing a more flexible diagnostic system. Our BoVW scheme is based on sparse low-level feature detection with a Speeded-Up Robust Features (SURF) local descriptor, and mid-level features based on semi-soft coding with max pooling. The best BoVW representation for retinal image classification achieved an area under the receiver operating characteristic curve (AUC-ROC) of 97.8% (exudates) and 93.5% (red lesions), applying a cross-dataset validation protocol. To assess the accuracy for detecting cases that require referral within one year, the sparse extraction technique associated with semi-soft coding and max pooling obtained an AUC of 94.2±2.0%, outperforming current methods. Those results indicate that, for retinal image classification tasks in clinical practice, BoVW equals and, in some instances, surpasses results obtained using dense detection (widely believed to be the best choice in many vision problems) for the low-level descriptors. PMID:24886780
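A bare-bones BoVW pipeline in the spirit described — local descriptors, a learned codebook, soft-assignment coding with max pooling, and a maximum-margin classifier — might look like the sketch below. ORB is used as a freely available stand-in for SURF, and the coding and pooling details are simplified relative to the paper's semi-soft coding; all names and parameters are illustrative.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def orb_descriptors(gray):
    # ORB used here as a freely available stand-in for SURF.
    kp, desc = cv2.ORB_create(nfeatures=500).detectAndCompute(gray, None)
    return np.zeros((0, 32), np.float32) if desc is None else desc.astype(np.float32)

def build_codebook(all_descriptors, k=200):
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(all_descriptors)

def bovw_vector(desc, codebook, sigma=100.0):
    """Soft-assignment coding followed by max pooling over the image."""
    d = np.linalg.norm(desc[:, None, :] - codebook.cluster_centers_[None], axis=2)
    codes = np.exp(-d / sigma)   # soft assignment of each descriptor to each word
    return codes.max(axis=0)     # max pooling -> one mid-level vector per image

# Training sketch (train_imgs and labels assumed to be supplied by the caller):
# codebook = build_codebook(np.vstack([orb_descriptors(i) for i in train_imgs]))
# X = np.array([bovw_vector(orb_descriptors(i), codebook) for i in train_imgs])
# clf = LinearSVC().fit(X, labels)   # maximum-margin classifier
```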
A method for smoothing segmented lung boundary in chest CT images
NASA Astrophysics Data System (ADS)
Yim, Yeny; Hong, Helen
2007-03-01
To segment low-density lung regions in chest CT images, most methods use the difference in gray-level values of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using a scan line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice. We propose a scan line search to track the points on the lung contour and efficiently find rapidly changing curvature. Finally, to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. Our method has been evaluated with respect to visual inspection, accuracy and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.
Lewis, James W.; Talkington, William J.; Tallaksen, Katherine C.; Frum, Chris A.
2012-01-01
Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remain poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of every-day, real-world action sounds. PMID:22582038
Ryu, Young Jin; Choi, Young Hun; Cheon, Jung-Eun; Ha, Seongmin; Kim, Woo Sun; Kim, In-One
2016-03-01
CT of pediatric phantoms can provide useful guidance for the optimization of knowledge-based iterative reconstruction CT. To compare radiation dose and image quality of CT images obtained at different radiation doses and reconstructed with knowledge-based iterative reconstruction, hybrid iterative reconstruction and filtered back-projection, we scanned a 5-year-old anthropomorphic phantom at seven levels of radiation. We then reconstructed the CT data with knowledge-based iterative reconstruction (iterative model reconstruction [IMR] levels 1, 2 and 3; Philips Healthcare, Andover, MA), hybrid iterative reconstruction (iDose(4), levels 3 and 7; Philips Healthcare, Andover, MA) and filtered back-projection. The noise, signal-to-noise ratio and contrast-to-noise ratio were calculated. We evaluated low-contrast resolution and detectability using low-contrast targets, and subjective and objective spatial resolution using line pairs and a wire. With radiation at 100 peak kVp and 100 mAs (3.64 mSv), the relative doses ranged from 5% (0.19 mSv) to 150% (5.46 mSv). Lower noise and higher signal-to-noise ratio, contrast-to-noise ratio and objective spatial resolution were generally achieved in ascending order of filtered back-projection, iDose(4) levels 3 and 7, and IMR levels 1, 2 and 3, at all radiation dose levels. Compared with filtered back-projection at 100% dose, similar noise levels were obtained with IMR level 2 images at 24% dose and iDose(4) level 3 images at 50% dose, respectively. Regarding low-contrast resolution, low-contrast detectability and objective spatial resolution, IMR level 2 images at 24% dose showed image quality comparable with filtered back-projection at 100% dose. Subjective spatial resolution was not greatly affected by the reconstruction algorithm. Reduced-dose IMR obtained at 0.92 mSv (24%) showed image quality similar to routine-dose filtered back-projection obtained at 3.64 mSv (100%), as did half-dose iDose(4) obtained at 1.81 mSv.
Wavelet imaging cleaning method for atmospheric Cherenkov telescopes
NASA Astrophysics Data System (ADS)
Lessard, R. W.; Cayón, L.; Sembroski, G. H.; Gaidos, J. A.
2002-07-01
We present a new method of image cleaning for imaging atmospheric Cherenkov telescopes. The method is based on the utilization of wavelets to identify noise pixels in images of gamma-ray and hadronic induced air showers. This method selects more signal pixels with Cherenkov photons than traditional image processing techniques. In addition, the method is equally efficient at rejecting pixels with noise alone. The inclusion of more signal pixels in an image of an air shower allows for a more accurate reconstruction, especially at lower gamma-ray energies that produce low levels of light. We present the results of Monte Carlo simulations of gamma-ray and hadronic air showers which show improved angular resolution using this cleaning procedure. Data from the Whipple Observatory's 10-m telescope are utilized to show the efficacy of the method for extracting a gamma-ray signal from the background of hadronic generated images.
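The underlying operation — transforming a camera image into the wavelet domain, suppressing coefficients consistent with the known noise level, and keeping the pixels that survive reconstruction — can be sketched as below. The wavelet family, number of decomposition levels and threshold factor are assumptions, not the authors' tuned values.

```python
import numpy as np
import pywt

def wavelet_clean(image, noise_sigma, wavelet="db2", nsigma=3.0, levels=3):
    """Suppress pixels whose wavelet coefficients are consistent with noise.

    Detail coefficients below nsigma * noise_sigma are zeroed; the image is
    then reconstructed, and the pixels that survive form the cleaned image.
    """
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=levels)
    cleaned = [coeffs[0]]
    for details in coeffs[1:]:
        cleaned.append(tuple(np.where(np.abs(d) > nsigma * noise_sigma, d, 0.0)
                             for d in details))
    recon = pywt.waverec2(cleaned, wavelet)
    return recon[:image.shape[0], :image.shape[1]]
```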
Automatic anatomy recognition in whole-body PET/CT images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Huiqian; Udupa, Jayaram K., E-mail: jay@mail.med.upenn.edu; Odhner, Dewey
Purpose: Whole-body positron emission tomography/computed tomography (PET/CT) has become a standard method of imaging patients with various disease conditions, especially cancer. Body-wide accurate quantification of disease burden in PET/CT images is important for characterizing lesions, staging disease, prognosticating patient outcome, planning treatment, and evaluating disease response to therapeutic interventions. However, body-wide anatomy recognition in PET/CT is a critical first step for accurately and automatically quantifying disease body-wide, body-region-wise, and organwise. This latter process, however, has remained a challenge due to the lower quality of the anatomic information portrayed in the CT component of this imaging modality and the paucity of anatomic details in the PET component. In this paper, the authors demonstrate the adaptation of a recently developed automatic anatomy recognition (AAR) methodology [Udupa et al., "Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images," Med. Image Anal. 18, 752–771 (2014)] to PET/CT images. Their goal was to test what level of object localization accuracy can be achieved on PET/CT compared to that achieved on diagnostic CT images. Methods: The authors advance the AAR approach in this work on three fronts: (i) from body-region-wise treatment in the work of Udupa et al. to whole body; (ii) from the use of image intensity in optimal object recognition in the work of Udupa et al. to intensity plus object-specific texture properties; and (iii) from the intramodality model-building-recognition strategy to the intermodality approach. The whole-body approach allows consideration of relationships among objects in different body regions, which was previously not possible. Consideration of object texture allows generalizing the previous optimal threshold-based fuzzy model recognition method from intensity images to any derived fuzzy membership image and, in the process, brings performance to the level achieved on diagnostic CT and MR images in body-region-wise approaches. The intermodality approach fosters the use of already existing fuzzy models, previously created from diagnostic CT images, on PET/CT and other derived images, thus truly separating the modality-independent object assembly anatomy from modality-specific tissue property portrayal in the image. Results: Key ways of combining the above three basic ideas lead to 15 different strategies for recognizing objects in PET/CT images. Utilizing 50 diagnostic CT image data sets from the thoracic and abdominal body regions and 16 whole-body PET/CT image data sets, the authors compare the recognition performance among these 15 strategies on 18 objects from the thorax, abdomen, and pelvis in terms of object localization error and size estimation error. Particularly on texture membership images, object localization is within three voxels on whole-body low-dose CT images and two voxels on body-region-wise low-dose images of known true locations. Surprisingly, even on direct body-region-wise PET images, localization error within three voxels seems possible. Conclusions: The previous body-region-wise approach can be extended to the whole-body torso with similar object localization performance. Combined use of image texture and intensity properties yields the best object localization accuracy. In both body-region-wise and whole-body approaches, recognition performance on low-dose CT images reaches levels previously achieved on diagnostic CT images. The best object recognition strategy varies among objects; the proposed framework, however, allows employing a strategy that is optimal for each object.
Harris, Joseph A; Wu, Chien-Te; Woldorff, Marty G
2011-06-07
It is generally agreed that considerable amounts of low-level sensory processing of visual stimuli can occur without conscious awareness. On the other hand, the degree of higher level visual processing that occurs in the absence of awareness is as yet unclear. Here, event-related potential (ERP) measures of brain activity were recorded during a sandwich-masking paradigm, a commonly used approach for attenuating conscious awareness of visual stimulus content. In particular, the present study used a combination of ERP activation contrasts to track both early sensory-processing ERP components and face-specific N170 ERP activations, in trials with versus without awareness. The electrophysiological measures revealed that the sandwich masking abolished the early face-specific N170 neural response (peaking at ~170 ms post-stimulus), an effect that paralleled the abolition of awareness of face versus non-face image content. Furthermore, however, the masking appeared to render a strong attenuation of earlier feedforward visual sensory-processing signals. This early attenuation presumably resulted in insufficient information being fed into the higher level visual system pathways specific to object category processing, thus leading to unawareness of the visual object content. These results support a coupling of visual awareness and neural indices of face processing, while also demonstrating an early low-level mechanism of interference in sandwich masking.
A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF
Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan
2016-01-01
With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
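The integration step itself amounts to building one codebook per descriptor family and concatenating the per-image visual-word histograms. In the sketch below ORB stands in for SURF (which is patent-encumbered and absent from many OpenCV builds), the codebooks are assumed to be pre-fitted k-means models, and each image is assumed to yield at least a few keypoints; none of these choices come from the paper itself.

```python
import numpy as np
import cv2

def word_histogram(desc, codebook):
    """Hard-assignment histogram of visual words for one image."""
    words = codebook.predict(desc.astype(np.float64))
    h = np.bincount(words, minlength=codebook.n_clusters).astype(np.float64)
    return h / (h.sum() + 1e-9)

def integrated_representation(gray, sift_codebook, orb_codebook):
    """Concatenate visual-word histograms from two descriptor families.

    SIFT contributes scale/rotation robustness; ORB is used here only as a
    stand-in for SURF (illumination robustness in the paper). Each family
    has its own pre-fitted k-means codebook.
    """
    _, sift_desc = cv2.SIFT_create().detectAndCompute(gray, None)
    _, orb_desc = cv2.ORB_create().detectAndCompute(gray, None)
    return np.concatenate([word_histogram(sift_desc, sift_codebook),
                           word_histogram(orb_desc, orb_codebook)])
```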
Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass.
Ayoola, Idowu; Chen, Wei; Feijs, Loe
2015-09-18
A major problem related to chronic health is patients' "compliance" with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions due to their hemodynamic status. A holistic approach to managing fluid imbalance will incorporate the monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need to develop a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera to obtain images using an Arduino microcontroller. The images are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fitting (using a least-squares criterion) between the derived pixel measurements and the real measurements shows a low covariance between the estimated measurement and the mean. The correlation between the estimated results and ground truth produced a variation of 3% from the mean.
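Once the liquid-level row has been located in the image and converted to a physical height, the volume and its change follow from the geometry of a conically shaped (frustum) glass. The sketch below shows only that geometric step plus a crude horizontal-edge cue for the level row; the paper's Sobel segmentation and ellipsoidal cup-size fitting are not reproduced, and radius_at is an assumed calibration function.

```python
import numpy as np
from scipy.ndimage import sobel

def level_row(gray):
    """Locate the strongest horizontal edge row (a crude liquid-level cue)."""
    horizontal_edges = np.abs(sobel(gray.astype(np.float64), axis=0))
    return int(horizontal_edges.sum(axis=1).argmax())

def frustum_volume(h, r_bottom, r_top):
    """Volume of liquid filling a conical (frustum-shaped) glass to height h."""
    return np.pi * h * (r_bottom**2 + r_bottom * r_top + r_top**2) / 3.0

def volume_change(h_before, h_after, r_bottom, radius_at):
    """Volume difference between two liquid levels; radius_at(h) maps a
    height to the glass radius at that height (linear for a true cone)."""
    v_before = frustum_volume(h_before, r_bottom, radius_at(h_before))
    v_after = frustum_volume(h_after, r_bottom, radius_at(h_after))
    return v_before - v_after
```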
Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion
Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang
2016-01-01
Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to the various disturbances present in the background of the image. The bottleneck to robust fruit recognition is reducing the influence of two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, which combined the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to obtain the optimal threshold. The final segmentation result was processed by a morphology operation to reduce a small amount of noise. In the detection tests, 93% of target tomatoes were recognized out of 200 overall samples. This indicates that the proposed tomato recognition method is suitable for robotic tomato harvesting in an uncontrolled environment at low cost. PMID:26840313
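A compact sketch of the described chain — a* and I feature images, pixel-level wavelet fusion, and adaptive thresholding — is given below. The Haar wavelet, the fusion rule (average the approximations, keep the larger-magnitude detail coefficient) and the use of Otsu's threshold are assumptions standing in for the paper's exact choices.

```python
import numpy as np
import pywt
from skimage import color
from skimage.filters import threshold_otsu

def fused_tomato_mask(rgb):
    """a* (L*a*b*) and I (YIQ) feature images, wavelet fusion, Otsu threshold.

    rgb : H x W x 3 RGB image (float values in [0, 1]).
    """
    a_chan = color.rgb2lab(rgb)[..., 1]      # a*: green-red component
    i_chan = color.rgb2yiq(rgb)[..., 1]      # I: in-phase (orange-blue) component
    a_chan = (a_chan - a_chan.min()) / (a_chan.max() - a_chan.min() + 1e-9)
    i_chan = (i_chan - i_chan.min()) / (i_chan.max() - i_chan.min() + 1e-9)

    # Pixel-level wavelet fusion: average approximations, keep the
    # larger-magnitude detail coefficient from either feature image.
    (a_lo, a_hi), (i_lo, i_hi) = pywt.dwt2(a_chan, "haar"), pywt.dwt2(i_chan, "haar")
    fused_lo = 0.5 * (a_lo + i_lo)
    fused_hi = tuple(np.where(np.abs(da) >= np.abs(di), da, di)
                     for da, di in zip(a_hi, i_hi))
    fused = pywt.idwt2((fused_lo, fused_hi), "haar")

    return fused > threshold_otsu(fused)     # adaptive threshold segmentation
```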
A strategy to optimize CT pediatric dose with a visual discrimination model
NASA Astrophysics Data System (ADS)
Gutierrez, Daniel; Gudinchet, François; Alamo-Maestre, Leonor T.; Bochud, François O.; Verdun, Francis R.
2008-03-01
Technological developments in computed tomography (CT) have led to a drastic increase in its clinical utilization, creating concerns about patient exposure. To better control dose to patients, we propose a methodology to find an objective compromise between dose and image quality by means of a visual discrimination model. A GE LightSpeed-Ultra scanner was used to perform the acquisitions. A QRM 3D low-contrast resolution phantom (QRM, Germany) was scanned using CTDIvol values in the range of 1.7 to 103 mGy. Raw data obtained with the highest CTDIvol were afterwards processed to simulate dose reductions by white noise addition. The realism of the simulated noise was verified by comparing the shape and amplitude of the normalized noise power spectra (NNPS) and standard deviation measurements. Patient images were acquired using the Diagnostic Reference Levels (DRL) proposed in Switzerland. Dose reduction was then simulated, as for the QRM phantom, to obtain five different CTDIvol levels, down to 3.0 mGy. Image quality of the phantom images was assessed with the Sarnoff JNDmetrix visual discrimination model and compared to an assessment made by means of the ROC methodology, taken as a reference. For the patient images a similar approach was taken, but using the Visual Grading Analysis (VGA) method as the reference. A relationship between Sarnoff JNDmetrix and ROC results was established for low-contrast detection in phantom images, demonstrating that the Sarnoff JNDmetrix can be used for qualification of images with highly correlated noise. Patient image assessment showed a threshold of conspicuity loss only for children over 35 kg.
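Simulating a dose reduction by white noise addition rests on the approximation that CT noise variance scales inversely with dose, so the missing noise power can be injected as zero-mean Gaussian noise. The sketch below shows only that first-order idea; it ignores the correlated nature of real CT noise, which is why the paper checks the simulated images against measured noise power spectra. The function name and parameters are illustrative.

```python
import numpy as np

def simulate_dose_reduction(high_dose_img, sigma_high, dose_fraction, rng=None):
    """Degrade a high-dose CT image to an approximate lower-dose version.

    Assumes noise standard deviation scales as 1/sqrt(dose), so the target
    sigma is sigma_high / sqrt(dose_fraction); the missing noise power is
    added as white Gaussian noise (real CT noise is correlated, so this is
    only a first-order approximation).
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma_target = sigma_high / np.sqrt(dose_fraction)
    sigma_add = np.sqrt(max(sigma_target**2 - sigma_high**2, 0.0))
    return high_dose_img + rng.normal(0.0, sigma_add, high_dose_img.shape)
```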
How facial attractiveness affects sustained attention.
Li, Jie; Oksama, Lauri; Hyönä, Jukka
2016-10-01
The present study investigated whether and how facial attractiveness affects sustained attention. We adopted a multiple-identity tracking paradigm, using attractive and unattractive faces as stimuli. Participants were required to track moving target faces amid distractor faces and report the final location of each target. In Experiment 1, the attractive and unattractive faces differed in both the low-level properties (i.e., luminance, contrast, and color saturation) and high-level properties (i.e., physical beauty and age). The results showed that the attractiveness of both the target and distractor faces affected the tracking performance: The attractive target faces were tracked better than the unattractive target faces; when the targets and distractors were both unattractive male faces, the tracking performance was poorer than when they were of different attractiveness. In Experiment 2, the low-level properties of the facial images were equalized. The results showed that the attractive target faces were still tracked better than unattractive targets while the effects related to distractor attractiveness ceased to exist. Taken together, the results indicate that during attentional tracking the high-level properties related to the attractiveness of the target faces can be automatically processed, and then they can facilitate the sustained attention on the attractive targets, either with or without the supplement of low-level properties. On the other hand, only low-level properties of the distractor faces can be processed. When the distractors share similar low-level properties with the targets, they can be grouped together, so that it would be more difficult to sustain attention on the individual targets. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Uav Borne Low Altitude Photogrammetry System
NASA Astrophysics Data System (ADS)
Lin, Z.; Su, G.; Xie, F.
2012-07-01
In this paper, the three major aspects of an Unmanned Aerial Vehicle (UAV) system for low-altitude aerial photogrammetry, i.e., the flying platform, the imaging sensor system and the data processing software, are discussed. First of all, according to the technical requirements on the minimum cruising speed, the shortest taxiing distance, the level of flight control and the performance in turbulent flight, the performance and suitability of the available UAV platforms (e.g., fixed-wing UAVs, unmanned helicopters and unmanned airships) are compared and analyzed. Secondly, considering the restrictions on the payload weight of a platform and the resolution of a sensor, together with the exposure equation and the theory of optical information, emphasis is placed on the principles of designing self-calibrating and self-stabilizing combined wide-angle digital cameras (e.g., double-combined and four-combined cameras). Finally, a software package named MAP-AT, which takes into account the specifics of UAV platforms and sensors, is developed and introduced. Apart from the common functions of aerial image processing, MAP-AT puts more effort into automatic extraction, automatic checking and operator-assisted adding of tie points for images with large tilt angles. Based on the process for low-altitude photogrammetry with UAVs recommended in this paper, more than ten aerial photogrammetry missions have been accomplished, and the accuracies of the aerial triangulation, digital orthophotos (DOM) and digital line graphs (DLG) meet the standard requirements for 1:2000, 1:1000 and 1:500 mapping.
High-performance image processing on the desktop
NASA Astrophysics Data System (ADS)
Jordan, Stephen D.
1996-04-01
The suitability of computers to the task of medical image visualization for the purposes of primary diagnosis and treatment planning depends on three factors: speed, image quality, and price. To be widely accepted the technology must increase the efficiency of the diagnostic and planning processes. This requires processing and displaying medical images of various modalities in real-time, with accuracy and clarity, on an affordable system. Our approach to meeting this challenge began with market research to understand customer image processing needs. These needs were translated into system-level requirements, which in turn were used to determine which image processing functions should be implemented in hardware. The result is a computer architecture for 2D image processing that is both high-speed and cost-effective. The architectural solution is based on the high-performance PA-RISC workstation with an HCRX graphics accelerator. The image processing enhancements are incorporated into the image visualization accelerator (IVX) which attaches to the HCRX graphics subsystem. The IVX includes a custom VLSI chip which has a programmable convolver, a window/level mapper, and an interpolator supporting nearest-neighbor, bi-linear, and bi-cubic modes. This combination of features can be used to enable simultaneous convolution, pan, zoom, rotate, and window/level control on 1 k by 1 k by 16-bit medical images at 40 frames/second.
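Of the IVX stages listed, the window/level mapper is the simplest to illustrate: a linear lookup that stretches a chosen intensity window of a 16-bit image onto the display range and clips everything outside it. The sketch below is a generic software equivalent, not the hardware implementation; the example parameter values in the comment are illustrative.

```python
import numpy as np

def window_level(img16, window, level, out_max=255):
    """Map a 16-bit medical image to display range via window/level.

    Pixels below (level - window/2) clip to black, pixels above
    (level + window/2) clip to white, and values in between are
    stretched linearly -- the same mapping a hardware LUT would apply.
    """
    lo = level - window / 2.0
    scaled = (img16.astype(np.float64) - lo) / float(window) * out_max
    return np.clip(scaled, 0, out_max).astype(np.uint8)

# Example: a typical CT lung window of 1500 HU centred at -600 HU
# display = window_level(ct_slice, window=1500, level=-600)
```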
NASA Astrophysics Data System (ADS)
Badshah, Amir; Choudhry, Aadil Jaleel; Ullah, Shan
2017-03-01
Industries are moving towards automation in order to increase productivity and ensure quality. A variety of electronic and electromagnetic systems are being employed to assist human operators in fast and accurate quality inspection of products. The majority of these systems are equipped with cameras and rely on diverse image processing algorithms. Information is lost in a 2D image, so acquiring accurate 3D data from 2D images remains an open issue. FAST, SURF and SIFT are well-known spatial-domain techniques for feature extraction and hence image registration to find correspondences between images. The efficiency of these methods is measured in terms of the number of perfect matches found. A novel fast and robust technique for stereo-image processing is proposed. It is based on non-rigid registration using modified normalized phase correlation. The proposed method registers two images in a hierarchical fashion using a quad-tree structure. The registration process works from the global to the local level, resulting in robust matches even in the presence of blur and noise. The computed matches can further be utilized to determine disparity and depth for industrial product inspection. The same can be used in driver assistance systems. Preliminary tests on the Middlebury dataset produced satisfactory results. The execution time for a 413 x 370 stereo pair is approximately 500 ms on a low-cost DSP.
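Basic normalized phase correlation, on which the proposed modified quad-tree hierarchical registration is built, recovers the translational offset between two equally sized patches from the peak of the inverse FFT of the normalized cross-power spectrum. A minimal sketch is given below; the hierarchical, non-rigid refinement of the paper is not shown.

```python
import numpy as np

def phase_correlation(patch_a, patch_b):
    """Estimate the (dy, dx) shift between two equally sized patches.

    Normalized cross-power spectrum -> inverse FFT -> peak location.
    A hierarchical quad-tree scheme would apply this patch by patch,
    from coarse to fine levels.
    """
    fa = np.fft.fft2(patch_a)
    fb = np.fft.fft2(patch_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12          # normalization (phase only)
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the patch size to negative offsets.
    h, w = corr.shape
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)
```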
Ambrus, Géza Gergely; Dotzer, Maria; Schweinberger, Stefan R; Kovács, Gyula
2017-12-01
Transcranial magnetic stimulation (TMS) and neuroimaging studies suggest a role of the right occipital face area (rOFA) in early facial feature processing. However, the degree to which rOFA is necessary for the encoding of facial identity has been less clear. Here we used a state-dependent TMS paradigm, where stimulation preferentially facilitates attributes encoded by less active neural populations, to investigate the role of the rOFA in face perception and specifically in image-independent identity processing. Participants performed a familiarity decision task for famous and unknown target faces, preceded by brief (200 ms) or longer (3500 ms) exposures to primes which were either an image of a different identity (DiffID), another image of the same identity (SameID), the same image (SameIMG), or a Fourier-randomized noise pattern (NOISE) while either the rOFA or the vertex as control was stimulated by single-pulse TMS. Strikingly, TMS to the rOFA eliminated the advantage of SameID over DiffID condition, thereby disrupting identity-specific priming, while leaving image-specific priming (better performance for SameIMG vs. SameID) unaffected. Our results suggest that the role of rOFA is not limited to low-level feature processing, and emphasize its role in image-independent facial identity processing and the formation of identity-specific memory traces.
Low-count PET image restoration using sparse representation
NASA Astrophysics Data System (ADS)
Li, Tao; Jiang, Changhui; Gao, Juan; Yang, Yongfeng; Liang, Dong; Liu, Xin; Zheng, Hairong; Hu, Zhanli
2018-04-01
In the field of positron emission tomography (PET), reconstructed images are often blurry and contain noise. These problems are primarily caused by the low resolution of the projection data. Solving this problem by improving hardware is an expensive solution, and therefore, we attempted to develop a solution based on optimizing several related algorithms in both the reconstruction and image post-processing domains. As sparse technology is widely used, sparse prediction is increasingly applied to solve this problem. In this paper, we propose a new sparse method to process low-resolution PET images. Two dictionaries (D1 for low-resolution PET images and D2 for high-resolution PET images) are learned from a group of real PET image data sets. Of these two dictionaries, D1 is used to obtain a sparse representation for each patch of the input PET image. Then, a high-resolution PET image is generated from this sparse representation using D2. Experimental results indicate that the proposed method exhibits a stable and superior ability to enhance image resolution and recover image details. Quantitatively, this method achieves better performance than traditional methods. The proposed strategy is a new and efficient approach for improving the quality of PET images.
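The coupled-dictionary idea can be sketched patch by patch: find a sparse code for a low-resolution patch over D1, then synthesize the restored patch from D2 using the same coefficients. The sketch below uses orthogonal matching pursuit for the sparse coding and assumes the two dictionaries are already learned with a shared atom ordering; it illustrates the general technique rather than the paper's exact algorithm.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def restore_patch(lr_patch, D1, D2, n_nonzero=5):
    """Sparse-representation restoration of one PET patch.

    D1 : (d_lr, n_atoms) dictionary learned from low-resolution patches
    D2 : (d_hr, n_atoms) coupled dictionary learned from high-resolution
         patches (same atom ordering). The sparse code found over D1 is
         reused to synthesize the restored patch from D2.
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D1, lr_patch.ravel())
    return D2 @ omp.coef_
```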
Multiscale vector fields for image pattern recognition
NASA Technical Reports Server (NTRS)
Low, Kah-Chan; Coggins, James M.
1990-01-01
A uniform processing framework for low-level vision computing in which a bank of spatial filters maps the image intensity structure at each pixel into an abstract feature space is proposed. Some properties of the filters and the feature space are described. Local orientation is measured by a vector sum in the feature space as follows: each filter's preferred orientation along with the strength of the filter's output determine the orientation and the length of a vector in the feature space; the vectors for all filters are summed to yield a resultant vector for a particular pixel and scale. The orientation of the resultant vector indicates the local orientation, and the magnitude of the vector indicates the strength of the local orientation preference. Limitations of the vector sum method are discussed. Investigations show that the processing framework provides a useful, redundant representation of image structure across orientation and scale.
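The vector-sum rule can be sketched with a small bank of Gabor filters: each filter contributes a vector whose angle encodes its preferred orientation and whose length is its output strength, and the resultant gives the local orientation and the strength of the orientation preference. The filter frequency, the number of orientations and the double-angle encoding (used here to handle the 180-degree ambiguity) are illustrative assumptions, not the paper's exact filter bank.

```python
import numpy as np
from skimage import data, filters

image = data.camera().astype(float)
thetas = np.linspace(0, np.pi, 8, endpoint=False)    # preferred orientations of the bank

vx = np.zeros_like(image)
vy = np.zeros_like(image)
for theta in thetas:
    real, imag = filters.gabor(image, frequency=0.2, theta=theta)
    strength = np.hypot(real, imag)                  # filter output strength per pixel
    vx += strength * np.cos(2 * theta)               # double angle: orientation is mod 180 deg
    vy += strength * np.sin(2 * theta)

orientation = 0.5 * np.arctan2(vy, vx)               # dominant local orientation per pixel
preference = np.hypot(vx, vy)                        # strength of the orientation preference
```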
NASA Astrophysics Data System (ADS)
Ramirez, Andres; Rahnemoonfar, Maryam
2017-04-01
A hyperspectral image provides a multidimensional data cube rich in information, consisting of hundreds of spectral bands. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in high computational times. In order to overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that can help analyze a hyperspectral image through the usage of parallel hardware and a parallel programming model, which is simpler to handle compared to other low-level parallel programming models. Additionally, Hadoop was used as an open-source version of the MapReduce parallel programming model. This research compared classification accuracy and timing results between the Hadoop and GPU systems and tested them against the following cases: a combined CPU and GPU test case, a CPU-only test case, and a test case where no dimensionality reduction was applied.
Natural image statistics mediate brightness 'filling in'.
Dakin, Steven C; Bex, Peter J
2003-11-22
Although the human visual system can accurately estimate the reflectance (or lightness) of surfaces under enormous variations in illumination, two equiluminant grey regions can be induced to appear quite different simply by placing a light-dark luminance transition between them. This illusion, the Craik-Cornsweet-O'Brien (CCOB) effect, has been taken as evidence for a low-level 'filling-in' mechanism subserving lightness perception. Here, we present evidence that the mechanism responsible for the CCOB effect operates not via propagation of a neural signal across space but by amplification of the low spatial frequency (SF) structure of the image. We develop a simple computational model that relies on the statistics of natural scenes actively to reconstruct the image that is most likely to have caused an observed series of responses across SF channels. This principle is tested psychophysically by deriving classification images (CIs) for subjects' discrimination of the contrast polarity of CCOB stimuli masked with noise. CIs resemble 'filled-in' stimuli; i.e. observers rely on portions of the stimuli that contain no information per se but that correspond closely to the reported perceptual completion. As predicted by the model, the filling-in process is contingent on the presence of appropriate low SF structure.
Parametric dense stereovision implementation on a system-on chip (SoC).
Gardel, Alfredo; Montejo, Pablo; García, Jorge; Bravo, Ignacio; Lázaro, José L
2012-01-01
This paper proposes a novel hardware implementation of a dense recovery of stereovision 3D measurements. Traditionally, 3D stereo systems have imposed a maximum number of stereo correspondences, introducing a large restriction on artificial vision algorithms. The proposed system-on-chip (SoC) provides great performance and efficiency, with a scalable architecture available for many different situations, addressing real-time processing of the stereo image flow. Using double buffering techniques properly combined with pipelined processing, the use of reconfigurable hardware achieves a parametrisable SoC which gives the designer the opportunity to decide its right dimension and features. The proposed architecture does not need any external memory because the processing is done as the image flow arrives. Our SoC provides 3D data directly without the storage of whole stereo images. Our goal is to obtain high processing speed while maintaining the accuracy of 3D data using minimum resources. Configurable parameters may be controlled by later/parallel stages of the vision algorithm executed on an embedded processor. Considering an FPGA clock frequency of 100 MHz, image flows of up to 50 frames per second (fps) of dense stereo maps with more than 30,000 depth points can be obtained for 2 Mpix images, with a minimum initial latency. The implementation of computer vision algorithms on reconfigurable hardware, especially low-level processing, opens up the prospect of their use in autonomous systems, where they can act as a coprocessor to reconstruct 3D images with high-density information in real time.
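For reference, a brute-force software sketch of the block matching that such stereo pipelines implement is shown below; the window size, disparity range and sum-of-absolute-differences cost are illustrative, and the SoC described here computes correspondences on the fly from the image stream rather than from stored images.

```python
import numpy as np

def sad_disparity(left, right, block=9, max_disp=64):
    # Dense disparity by brute-force SAD block matching on a rectified pair
    # (left image as reference); returns integer disparities in pixels.
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(ref - right[y - half:y + half + 1,
                                        x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```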
NASA Astrophysics Data System (ADS)
Velicu, S.; Buurma, C.; Bergeson, J. D.; Kim, Tae Sung; Kubby, J.; Gupta, N.
2014-05-01
Imaging spectrometry can be utilized in the midwave infrared (MWIR) and long wave infrared (LWIR) bands to detect, identify and map complex chemical agents based on their rotational and vibrational emission spectra. Hyperspectral datasets are typically obtained using grating or Fourier transform spectrometers to separate the incoming light into spectral bands. At present, these spectrometers are large, cumbersome, slow and expensive, and their resolution is limited by bulky mechanical components such as mirrors and gratings. As such, low-cost, miniaturized imaging spectrometers are of great interest. Microfabrication of micro-electro-mechanical systems (MEMS)-based components opens the door for producing low-cost, reliable optical systems. We present here our work on developing a miniaturized IR imaging spectrometer by coupling a mercury cadmium telluride (HgCdTe)-based infrared focal plane array (FPA) with a MEMS-based Fabry-Perot filter (FPF). The two membranes are fabricated from silicon-on-insulator (SOI) wafers using bulk micromachining technology. The fixed membrane is a standard silicon membrane, fabricated using back etching processes. The movable membrane is implemented as an X-beam structure to improve mechanical stability. The geometries of the distributed Bragg reflector (DBR)-based tunable FPFs are modeled to achieve the desired spectral resolution and wavelength range. Additionally, acceptable fabrication tolerances are determined by modeling the spectral performance of the FPFs as a function of DBR surface roughness and membrane curvature. These fabrication non-idealities are then mitigated by developing an optimized DBR process flow yielding high-performance FPF cavities. Zinc Sulfide (ZnS) and Germanium (Ge) are chosen as the low and the high index materials, respectively, and are deposited using an electron beam process. Simulations are presented showing the impact of these changes and non-idealities at both the device and system levels.
Synergistic Instance-Level Subspace Alignment for Fine-Grained Sketch-Based Image Retrieval.
Li, Ke; Pang, Kaiyue; Song, Yi-Zhe; Hospedales, Timothy M; Xiang, Tao; Zhang, Honggang
2017-08-25
We study the problem of fine-grained sketch-based image retrieval. By performing instance-level (rather than category-level) retrieval, it embodies a timely and practical application, particularly with the ubiquitous availability of touchscreens. Three factors contribute to the challenging nature of the problem: (i) free-hand sketches are inherently abstract and iconic, making visual comparisons with photos difficult, (ii) sketches and photos are in two different visual domains, i.e. black and white lines vs. color pixels, and (iii) fine-grained distinctions are especially challenging when executed across domain and abstraction-level. To address these challenges, we propose to bridge the image-sketch gap both at the high-level via parts and attributes, as well as at the low-level, via introducing a new domain alignment method. More specifically, (i) we contribute a dataset with 304 photos and 912 sketches, where each sketch and image is annotated with its semantic parts and associated part-level attributes. With the help of this dataset, we investigate (ii) how strongly-supervised deformable part-based models can be learned that subsequently enable automatic detection of part-level attributes, and provide pose-aligned sketch-image comparisons. To reduce the sketch-image gap when comparing low-level features, we also (iii) propose a novel method for instance-level domain-alignment, that exploits both subspace and instance-level cues to better align the domains. Finally (iv) these are combined in a matching framework integrating aligned low-level features, mid-level geometric structure and high-level semantic attributes. Extensive experiments conducted on our new dataset demonstrate effectiveness of the proposed method.
Wolf, Robert Christian; Walter, Henrik; Vasic, Nenad
2010-01-01
Using a parametric version of a modified item-recognition paradigm with three different load levels and by means of event-related functional magnetic resonance imaging, this study tested the hypothesis that cerebral activation associated with intratrial proactive interference (PI) during working memory retrieval is influenced by increased context processing. We found activation of left BA 45 during interference trials across all levels of cognitive processing, and left lateralized activation of the dorsolateral prefrontal cortex (DLPFC, BA 9/46) and the frontopolar cortex (FPC, BA 10) with increasing contextual load. Compared with high susceptibility to PI, low susceptibility was associated with activation of the left DLPFC. These results suggest that an intratrial PI effect can be modulated by increasing context processing of a transiently relevant stimulus set. Moreover, PI resolution associated with increasing context load involves multiple prefrontal regions including the ventro- and dorsolateral prefrontal cortex as well as frontopolar brain areas. Furthermore, low susceptibility to PI might be influenced by increased executive control exerted by the DLPFC.
Zhang, Hao; Zeng, Dong; Zhang, Hua; Wang, Jing; Liang, Zhengrong
2017-01-01
Low-dose X-ray computed tomography (LDCT) imaging is highly recommended for use in the clinic because of growing concerns over excessive radiation exposure. However, the CT images reconstructed by the conventional filtered back-projection (FBP) method from low-dose acquisitions may be severely degraded with noise and streak artifacts due to excessive X-ray quantum noise, or with view-aliasing artifacts due to insufficient angular sampling. In 2005, the nonlocal means (NLM) algorithm was introduced as a non-iterative edge-preserving filter to denoise natural images corrupted by additive Gaussian noise, and showed superior performance. It has since been adapted and applied to many other image types and various inverse problems. This paper specifically reviews the applications of the NLM algorithm in LDCT image processing and reconstruction, and explicitly demonstrates its improving effects on the reconstructed CT image quality from low-dose acquisitions. The effectiveness of these applications on LDCT and their relative performance are described in detail. PMID:28303644
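As a concrete illustration of the filter being reviewed, the sketch below applies scikit-image's non-local means implementation to a simulated noisy phantom slice; the noise level and filter parameters are generic defaults, not values recommended by the reviewed LDCT studies.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.restoration import denoise_nl_means, estimate_sigma

rng = np.random.default_rng(0)
ct_slice = shepp_logan_phantom()                      # stand-in for an FBP reconstruction
noisy = ct_slice + rng.normal(scale=0.08, size=ct_slice.shape)

sigma = float(np.mean(estimate_sigma(noisy)))         # rough estimate of the noise level
denoised = denoise_nl_means(noisy, h=1.15 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6, fast_mode=True)
```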
Transmission of digital images within the NTSC analog format
Nickel, George H.
2004-06-15
HDTV and NTSC compatible image communication is done in a single NTSC channel bandwidth. Luminance and chrominance image data of a scene to be transmitted is obtained. The image data is quantized and digitally encoded to form digital image data in HDTV transmission format having low-resolution terms and high-resolution terms. The low-resolution digital image data terms are transformed to a voltage signal corresponding to NTSC color subcarrier modulation with retrace blanking and color bursts to form a NTSC video signal. The NTSC video signal and the high-resolution digital image data terms are then transmitted in a composite NTSC video transmission. In a NTSC receiver, the NTSC video signal is processed directly to display the scene. In a HDTV receiver, the NTSC video signal is processed to invert the color subcarrier modulation to recover the low-resolution terms, where the recovered low-resolution terms are combined with the high-resolution terms to reconstruct the scene in a high definition format.
Perceptual load-dependent neural correlates of distractor interference inhibition.
Xu, Jiansong; Monterosso, John; Kober, Hedy; Balodis, Iris M; Potenza, Marc N
2011-01-18
The load theory of selective attention hypothesizes that distractor interference is suppressed after perceptual processing (i.e., in the later stage of central processing) at low perceptual load of the central task, but in the early stage of perceptual processing at high perceptual load. Consistently, studies on the neural correlates of attention have found a smaller distractor-related activation in the sensory cortex at high relative to low perceptual load. However, it is not clear whether the distractor-related activation in brain regions linked to later stages of central processing (e.g., in the frontostriatal circuits) is also smaller at high rather than low perceptual load, as might be predicted based on the load theory. We studied 24 healthy participants using functional magnetic resonance imaging (fMRI) during a visual target identification task with two perceptual loads (low vs. high). Participants showed distractor-related increases in activation in the midbrain, striatum, occipital and medial and lateral prefrontal cortices at low load, but distractor-related decreases in activation in the midbrain ventral tegmental area and substantia nigra (VTA/SN), striatum, thalamus, and extensive sensory cortices at high load. Multiple levels of central processing involving midbrain and frontostriatal circuits participate in suppressing distractor interference at either low or high perceptual load. For suppressing distractor interference, the processing of sensory inputs in both early and late stages of central processing are enhanced at low load but inhibited at high load.
Excimer laser processing of backside-illuminated CCDS
NASA Technical Reports Server (NTRS)
Russell, S. D.
1993-01-01
An excimer laser is used to activate previously implanted dopants on the backside of a backside-illuminated CCD. The controlled ion implantation of the backside and subsequent thin layer heating and recrystallization by the short wavelength pulsed excimer laser simultaneously activates the dopant and anneals out implant damage. This improves the dark current response, repairs defective pixels and improves spectral response. This process heats a very thin layer of the material to high temperatures on a nanosecond time scale while the bulk of the delicate CCD substrate remains at low temperature. Excimer laser processing of backside-illuminated CCDs enables salvage and utilization of otherwise nonfunctional components by bringing their dark current response to within an acceptable range. This process is particularly useful for solid state imaging detectors used in commercial, scientific and government applications requiring a wide spectral response and low light level detection.
Singleton, Clarence J; Ashwin, Chris; Brosnan, Mark
2014-12-01
Researchers have suggested that the two primary cognitive features of autism spectrum disorder (ASD), a drive toward nonsocial processing and a reduced drive toward social processing, may be unrelated to each other in the neurotypical (NT) population and may therefore require separate explanations. Drive toward types of processing may be related to physiological arousal to categories of stimuli, such as social (e.g., faces) or nonsocial (e.g., trains). This study investigated how autistic traits in an NT population might relate to differences in physiological responses to nonsocial compared with social stimuli. NT participants were recruited to examine these differences in those with high vs. low degrees of ASD traits. Forty-six participants (21 male, 25 female) completed the Autism Spectrum Quotient (AQ) to measure ASD traits before viewing a series of 24 images while skin conductance response (SCR) was recorded. Images included six nonsocial, six social, six face-like cartoons, and six nonsocial (relating to participants' personal interests). Analysis revealed that those with a higher AQ had significantly greater SCR arousal to nonsocial stimuli than those with a low AQ, and the higher the AQ, the greater the difference between SCR arousal to nonsocial and social stimuli. This is the first study to identify the relationship between AQ and physiological response to nonsocial stimuli, and a relationship between physiological response to both social and nonsocial stimuli, suggesting that physiological response may underlie the atypical drive toward nonsocial processing seen in ASD, and that at the physiological level at least the social and nonsocial in ASD may be related to one another. © 2014 International Society for Autism Research, Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Baltazart, Vincent; Moliard, Jean-Marc; Amhaz, Rabih; Wright, Dean; Jethwa, Manish
2015-04-01
Monitoring road surface conditions is an important issue in many countries. Several projects have looked into this issue in recent years, including TRIMM 2011-2014. The objective of such projects has been to detect surface distresses, like cracking, raveling and water ponding, in order to plan effective road maintenance and to improve the sustainability of the pavement. The monitoring of cracking conventionally focuses on open cracks on the surface of the pavement, as opposed to reflexive cracks embedded in the pavement materials. For monitoring surface condition, in situ human visual inspection has been gradually replaced by automatic image data collection at traffic speed. Off-line image processing techniques have been developed for monitoring surface condition in support of human visual control. Full automation of crack monitoring has been approached with caution, and depends on a proper manual assessment of the performance. This work firstly presents some aspects of the current state of monitoring that have been reported so far in the literature and in previous projects: imaging technology and image processing techniques. Then, the work presents the two image processing techniques that have been developed within the scope of the TRIMM project to automatically detect pavement cracking from images. The first technique is a heuristic approach (HA) based on the search for gradients within the image. It was originally developed to process pavement images from the French imaging device, Aigle-RN. The second technique, the Minimal Path Selection (MPS) method, has been developed within an ongoing PhD work at IFSTTAR. The proposed new technique provides a fine and accurate segmentation of the crack pattern along with an estimation of the crack width. HA has been assessed against the field data collection provided by Yotta and TRL with the imaging device Tempest 2. The performance assessment has been threefold: first it was performed against the reference data set including 130 km of pavement images over UK roads, second over a few selected short sections of contiguous pavement images, and finally over a few sample images as a case study. The performance of MPS has been assessed against an older image database. A pixel-based ground truth (PGT) was available to provide the most sensitive performance assessment. MPS has shown its ability to provide a very accurate cracking pattern without reducing the image resolution of the segmented images. Thus, it allows measurement of the crack width; it is found to behave more robustly against image texture and to be better suited to dealing with low-contrast pavement images. The benchmarking of seven automatic segmentation techniques has been provided at both the pixel and the grid levels. The performance assessment includes three minimal path selection algorithms, namely MPS, Free Form Anisotropy (FFA) and a geodesic contour method with automatic selection of points of interest (GC-POI), as well as HA and two Markov-based methods. Among others, the MPS approach reached the best performance at the pixel level, while it is matched by the FFA approach at the grid level. Finally, the project has emphasized the need for a reliable ground truth data collection. Owing to its accuracy, MPS may serve as a reference benchmark for other methods to provide the automatic segmentation of pavement images at the pixel level and beyond. As a counterpart, MPS requires a reduction in computing time.
Keywords: cracking, automatic segmentation, image processing, pavement, surface distress, monitoring, DICE, performance
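The minimal-path idea behind MPS can be illustrated with scikit-image's shortest-path routine: dark (crack-like) pixels are made cheap to traverse and the lowest-cost path between two candidate endpoints is extracted. The file name and endpoint coordinates below are placeholders, and MPS itself additionally selects endpoints automatically and estimates the crack width.

```python
import numpy as np
from skimage import io
from skimage.graph import route_through_array

img = io.imread('pavement.png', as_gray=True)   # hypothetical pavement image, values in [0, 1]
cost = img + 1e-3                               # dark crack pixels become cheap to traverse
start, end = (10, 5), (200, 310)                # candidate crack endpoints (placeholders)

path, total_cost = route_through_array(cost, start, end,
                                       fully_connected=True, geometric=True)
crack = np.zeros(img.shape, dtype=bool)
crack[tuple(np.array(path).T)] = True           # one-pixel-wide crack trace
```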
A memory learning framework for effective image retrieval.
Han, Junwei; Ngan, King N; Li, Mingjing; Zhang, Hong-Jiang
2005-04-01
Most current content-based image retrieval systems are still incapable of providing users with their desired results. The major difficulty lies in the gap between low-level image features and high-level image semantics. To address the problem, this study reports a framework for effective image retrieval by employing a novel idea of memory learning. It forms a knowledge memory model to store the semantic information by simply accumulating user-provided interactions. A learning strategy is then applied to predict the semantic relationships among images according to the memorized knowledge. Image queries are finally performed based on a seamless combination of low-level features and learned semantics. One important advantage of our framework is its ability to efficiently annotate images and also propagate the keyword annotation from the labeled images to unlabeled images. The presented algorithm has been integrated into a practical image retrieval system. Experiments on a collection of 10,000 general-purpose images demonstrate the effectiveness of the proposed framework.
Amplitude image processing by diffractive optics.
Cagigal, Manuel P; Valle, Pedro J; Canales, V F
2016-02-22
In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, like low-pass or high-pass filtering, is carried out using diffractive optical elements (DOE), since this makes it possible to operate on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we analyze the performance of amplitude image processing. In particular, a DOE Laplacian filter is applied to simulated astronomical images for detecting two stars one Airy ring apart. We also check by numerical simulations that the use of a Laplacian amplitude filter produces less noisy images than standard digital image processing.
Fluorescent Microscopy Enhancement Using Imaging
NASA Astrophysics Data System (ADS)
Conrad, Morgan P.; Recktenwald, Diether J.; Woodhouse, Bryan S.
1986-06-01
To enhance our capabilities for observing fluorescent stains in biological systems, we are developing a low cost imaging system based around an IBM AT microcomputer and a commercial image capture board compatible with a standard RS-170 format video camera. The image is digitized in real time with 256 grey levels, while being displayed and also stored in memory. The software allows for interactive processing of the data, such as histogram equalization or pseudocolor enhancement of the display. The entire image, or a quadrant thereof, can be averaged over time to improve the signal to noise ratio. Images may be stored to disk for later use or comparison. The camera may be selected for better response in the UV or near IR. Combined with signal averaging, this increases the sensitivity relative to that of the human eye, while still allowing for the fluorescence distribution on either the surface or internal cytoskeletal structure to be observed.
Photon Counting Imaging with an Electron-Bombarded Pixel Image Sensor
Hirvonen, Liisa M.; Suhling, Klaus
2016-01-01
Electron-bombarded pixel image sensors, where a single photoelectron is accelerated directly into a CCD or CMOS sensor, allow wide-field imaging at extremely low light levels as they are sensitive enough to detect single photons. This technology allows the detection of up to hundreds or thousands of photon events per frame, depending on the sensor size, and photon event centroiding can be employed to recover resolution lost in the detection process. Unlike photon events from electron-multiplying sensors, the photon events from electron-bombarded sensors have a narrow, acceleration-voltage-dependent pulse height distribution. Thus a gain voltage sweep during exposure in an electron-bombarded sensor could allow photon arrival time determination from the pulse height with sub-frame exposure time resolution. We give a brief overview of our work with electron-bombarded pixel image sensor technology and recent developments in this field for single photon counting imaging, and examples of some applications. PMID:27136556
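A minimal sketch of photon-event centroiding on a single frame is given below, assuming events are isolated blobs above a threshold; the threshold and the plain intensity-weighted center-of-mass estimator are illustrative choices.

```python
import numpy as np
from scipy import ndimage

def centroid_events(frame, threshold):
    # Label connected pixels above threshold and compute intensity-weighted
    # centroids, recovering sub-pixel event positions lost to charge spreading.
    mask = frame > threshold
    labels, n_events = ndimage.label(mask)
    centroids = ndimage.center_of_mass(frame, labels, index=np.arange(1, n_events + 1))
    return np.asarray(centroids)                # (n_events, 2) array of (row, col) positions
```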
Villa, Carlo E.; Caccia, Michele; Sironi, Laura; D'Alfonso, Laura; Collini, Maddalena; Rivolta, Ilaria; Miserocchi, Giuseppe; Gorletta, Tatiana; Zanoni, Ivan; Granucci, Francesca; Chirico, Giuseppe
2010-01-01
The basic research in cell biology and in medical sciences makes large use of imaging tools mainly based on confocal fluorescence and, more recently, on non-linear excitation microscopy. Essentially, the aim is the recognition of selected targets in the image and their tracking in time. We have developed a particle tracking algorithm optimized for low signal/noise images with a minimum set of requirements on the target size and with no a priori knowledge of the type of motion. The image segmentation, based on a combination of size-sensitive filters, does not rely on edge detection and is tailored for targets acquired at low resolution as in most in-vivo studies. The particle tracking is performed by building, from a stack of Accumulative Difference Images, a single 2D image in which the motion of the whole set of particles is coded in time by a color level. This algorithm, tested here on solid-lipid nanoparticles diffusing within cells and on lymphocytes diffusing in lymph nodes, appears to be particularly useful for cellular and in-vivo microscopy image processing, in which few a priori assumptions on the type, the extent and the variability of particle motions can be made. PMID:20808918
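A simplified sketch of the time-coding step is shown below: differences with respect to a reference frame are thresholded and each moving pixel is stamped with the index of the last frame in which it changed, so a single 2D map encodes motion in time by a level that can be displayed as colour. The threshold and the use of the first frame as reference are assumptions, not the authors' exact accumulation scheme.

```python
import numpy as np

def time_coded_motion_map(stack, threshold):
    # stack: (T, H, W) image sequence; returns a 2D map whose value at each
    # pixel is the index of the last frame in which that pixel changed.
    ref = stack[0].astype(float)
    motion = np.zeros(stack.shape[1:], dtype=np.int32)
    for t in range(1, stack.shape[0]):
        changed = np.abs(stack[t].astype(float) - ref) > threshold
        motion[changed] = t
    return motion
```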
Cnn Based Retinal Image Upscaling Using Zero Component Analysis
NASA Astrophysics Data System (ADS)
Nasonov, A.; Chesnakov, K.; Krylov, A.
2017-05-01
The aim of the paper is to obtain high-quality image upscaling for noisy images that are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods like DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures like blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential in establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
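A compact numpy sketch of Zero Component Analysis (ZCA whitening) applied to flattened training patches is given below; the epsilon regularizer is an illustrative value, and how the whitened patches are fed to the network is not specified here.

```python
import numpy as np

def zca_whiten(patches, eps=1e-2):
    # patches: (n_samples, n_features) matrix of flattened image patches.
    X = patches - patches.mean(axis=0)
    cov = X.T @ X / X.shape[0]
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T   # zero-phase whitening transform
    return X @ W, W
```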
Multi-Resolution Analysis of MODIS and ASTER Satellite Data for Water Classification
2006-09-01
spectral bands, but also with different pixel resolutions. The overall goal... the total water surface. Due to the constraint that high spatial resolution satellite images have low temporal resolution, one needs a reliable method... at 15 m resolution, were processed. We used MODIS reflectance data from MOD02 Level 1B data. Even the spatial resolution of the 1240 nm
Discriminative feature representation: an effective postprocessing solution to low dose CT imaging
NASA Astrophysics Data System (ADS)
Chen, Yang; Liu, Jin; Hu, Yining; Yang, Jian; Shi, Luyao; Shu, Huazhong; Gui, Zhiguo; Coatrieux, Gouenou; Luo, Limin
2017-03-01
This paper proposes a concise and effective approach termed discriminative feature representation (DFR) for low dose computerized tomography (LDCT) image processing, which is currently a challenging problem in the medical imaging field. The DFR method models LDCT images as the superposition of desirable high dose CT (HDCT) 3D features and undesirable noise-artifact 3D features (the combined term of noise and artifact features induced by low dose scan protocols), and the decomposed HDCT features are used to provide the processed LDCT images with higher quality. The target HDCT features are solved via the DFR algorithm using a featured dictionary composed of atoms representing HDCT features and noise-artifact features. In this study, the featured dictionary is efficiently built using physical phantom images collected from the same CT scanner as the target clinical LDCT images to process. The proposed DFR method also has good robustness in parameter setting for different CT scanner types. This DFR method can be directly applied to process DICOM-formatted LDCT images, and has good applicability to current CT systems. Comparative experiments with abdomen LDCT data validate the good performance of the proposed approach. This research was supported by the National Natural Science Foundation under grants (81370040, 81530060), the Fundamental Research Funds for the Central Universities, and the Qing Lan Project in Jiangsu Province.
Non-rigid CT/CBCT to CBCT registration for online external beam radiotherapy guidance
NASA Astrophysics Data System (ADS)
Zachiu, Cornel; de Senneville, Baudouin Denis; Tijssen, Rob H. N.; Kotte, Alexis N. T. J.; Houweling, Antonetta C.; Kerkmeijer, Linda G. W.; Lagendijk, Jan J. W.; Moonen, Chrit T. W.; Ries, Mario
2018-01-01
Image-guided external beam radiotherapy (EBRT) allows radiation dose deposition with a high degree of accuracy and precision. Guidance is usually achieved by estimating the displacements, via image registration, between cone beam computed tomography (CBCT) and computed tomography (CT) images acquired at different stages of the therapy. The resulting displacements are then used to reposition the patient such that the location of the tumor at the time of treatment matches its position during planning. Moreover, ongoing research aims to use CBCT-CT image registration for online plan adaptation. However, CBCT images are usually acquired using a small number of x-ray projections and/or low beam intensities. This often leads to the images being subject to low contrast, low signal-to-noise ratio and artifacts, which ends up hampering the image registration process. Previous studies addressed this by integrating additional image processing steps into the registration procedure. However, these steps are usually designed for particular image acquisition schemes, therefore limiting their use on a case-by-case basis. In the current study we address CT to CBCT and CBCT to CBCT registration by means of the recently proposed EVolution registration algorithm. Contrary to previous approaches, EVolution does not require the integration of additional image processing steps in the registration scheme. Moreover, the algorithm requires a low number of input parameters, is easily parallelizable and provides an elastic deformation on a point-by-point basis. Results have shown that, relative to a pure CT-based registration, the intrinsic artifacts present in typical CBCT images only have a sub-millimeter impact on the accuracy and precision of the estimated deformation. In addition, the algorithm has low computational requirements, which are compatible with online image-based guidance of EBRT treatments.
Afterimages are biased by top-down information.
Utz, Sandra; Carbon, Claus-Christian
2015-01-01
The afterimage illusion refers to a complementary colored image continuing to appear in the observer's vision after the exposure to the original image has ceased. It is assumed to be a phenomenon of the primary visual pathway, caused by overstimulation of photoreceptors of the retina. The aim of the present study was to investigate the nature of afterimage perceptions; mainly whether it is a mere physical, that is, low-level effect or whether it can be modulated by top-down processes, that is, high-level processes. Participants were first exposed to five either strongly female or male faces (Experiment 1), objects highly associated with female or male gender (Experiment 2) or female versus male names (Experiment 3), followed by a negativated image of a gender-neutral face which had to be fixated for 20s to elicit an afterimage. Participants had to rate their afterimages according to sexual dimorphism, showing that the afterimage of the gender-neutral face was perceived as significantly more female in the female priming condition compared with the male priming condition, independently of the priming quality (faces, objects, and names). Our results documented, in addition to previously presumed bottom-up mechanisms, a prominent influence of top-down processing on the perception of afterimages via priming mechanisms (female primes led to more female afterimage perception). © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Cline, Julia Elaine
2011-12-01
Ultra-high temperature deformation measurements are required to characterize the thermo-mechanical response of material systems for thermal protection systems for aerospace applications. The use of conventional surface-contacting strain measurement techniques is not practical in elevated temperature conditions. Technological advancements in digital imaging provide impetus to measure full-field displacement and determine strain fields with sub-pixel accuracy by image processing. In this work, an Instron electromechanical axial testing machine with a custom-designed high temperature gripping mechanism is used to apply quasi-static tensile loads to graphite specimens heated to 2000°F (1093°C). Specimen heating via Joule effect is achieved and maintained with a custom-designed temperature control system. Images are captured at monotonically increasing load levels throughout the test duration using an 18 megapixel Canon EOS Rebel T2i digital camera with a modified Schneider Kreutznach telecentric lens and a combination of blue light illumination and narrow band-pass filter system. Images are processed using an open-source Matlab-based digital image correlation (DIC) code. Validation of source code is performed using Mathematica generated images with specified known displacement fields in order to gain confidence in accurate software tracking capabilities. Room temperature results are compared with extensometer readings. Ultra-high temperature strain measurements for graphite are obtained at low load levels, demonstrating the potential for non-contacting digital image correlation techniques to accurately determine full-field strain measurements at ultra-high temperature. Recommendations are given to improve the experimental set-up to achieve displacement field measurements accurate to 1/10 pixel and strain field accuracy of less than 2%.
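The subset-tracking core of digital image correlation can be sketched with a normalized cross-correlation search, as below; the subset and search-window sizes are illustrative, and the open-source code used in this work additionally performs sub-pixel interpolation and strain computation.

```python
import numpy as np
from skimage.feature import match_template

def track_subset(ref, deformed, center, half=15, search=10):
    # Track one speckle subset from the reference to the deformed image by
    # locating the peak of the normalized cross-correlation in a search window.
    # The subset center must lie at least (half + search) pixels from the border.
    y, x = center
    subset = ref[y - half:y + half + 1, x - half:x + half + 1]
    window = deformed[y - half - search:y + half + search + 1,
                      x - half - search:x + half + search + 1]
    ncc = match_template(window, subset)
    dy, dx = np.unravel_index(np.argmax(ncc), ncc.shape)
    return dy - search, dx - search     # integer-pixel displacement of the subset
```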
Hyperspectral image denoising and anomaly detection based on low-rank and sparse representations
NASA Astrophysics Data System (ADS)
Zhuang, Lina; Gao, Lianru; Zhang, Bing; Bioucas-Dias, José M.
2017-10-01
The very high spectral resolution of Hyperspectral Images (HSIs) enables the identification of materials with subtle differences and the extraction of subpixel information. However, an increase in spectral resolution often implies an increase in the noise linked with the image formation process. This degradation mechanism limits the quality of the extracted information and its potential applications. Since HSIs represent natural scenes and their spectral channels are highly correlated, they are characterized by a high level of self-similarity and are well approximated by low-rank representations. These characteristics underlie the state-of-the-art in HSI denoising. However, in the presence of rare pixels, the denoising performance of those methods is not optimal and, in addition, it may compromise the future detection of those pixels. To address these hurdles, we introduce RhyDe (Robust hyperspectral Denoising), a powerful HSI denoiser, which implements explicit low-rank representation, promotes self-similarity, and, by using a form of collaborative sparsity, preserves rare pixels. The denoising and detection effectiveness of the proposed robust HSI denoiser is illustrated using semi-real data.
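A minimal sketch of the plain low-rank projection that underlies such denoisers is given below (this is not RhyDe itself, which additionally exploits self-similarity and preserves rare pixels through collaborative sparsity); the subspace rank is a user choice.

```python
import numpy as np

def lowrank_denoise(cube, rank=10):
    # cube: (rows, cols, bands) hyperspectral image. Project the spectra onto
    # a rank-k subspace estimated from the SVD of the unfolded data matrix.
    r, c, b = cube.shape
    X = cube.reshape(-1, b)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    Xk = (U[:, :rank] * S[:rank]) @ Vt[:rank] + mean
    return Xk.reshape(r, c, b)
```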
FAST: framework for heterogeneous medical image computing and visualization.
Smistad, Erik; Bozorgi, Mohammadmehdi; Lindseth, Frank
2015-11-01
Computer systems are becoming increasingly heterogeneous in the sense that they consist of different processors, such as multi-core CPUs and graphic processing units. As the amount of medical image data increases, it is crucial to exploit the computational power of these processors. However, this is currently difficult due to several factors, such as driver errors, processor differences, and the need for low-level memory handling. This paper presents a novel FrAmework for heterogeneouS medical image compuTing and visualization (FAST). The framework aims to make it easier to simultaneously process and visualize medical images efficiently on heterogeneous systems. FAST uses common image processing programming paradigms and hides the details of memory handling from the user, while enabling the use of all processors and cores on a system. The framework is open-source, cross-platform and available online. Code examples and performance measurements are presented to show the simplicity and efficiency of FAST. The results are compared to the insight toolkit (ITK) and the visualization toolkit (VTK) and show that the presented framework is faster with up to 20 times speedup on several common medical imaging algorithms. FAST enables efficient medical image computing and visualization on heterogeneous systems. Code examples and performance evaluations have demonstrated that the toolkit is both easy to use and performs better than existing frameworks, such as ITK and VTK.
Smet, M H; Breysem, L; Mussen, E; Bosmans, H; Marshall, N W; Cockmartin, L
2018-07-01
To evaluate the impact of digital detector, dose level and post-processing on neonatal chest phantom X-ray image quality (IQ). A neonatal phantom was imaged using four different detectors: a CR powder phosphor (PIP), a CR needle phosphor (NIP) and two wireless CsI DR detectors (DXD and DRX). Five different dose levels were studied for each detector and two post-processing algorithms were evaluated for each vendor. Three paediatric radiologists scored the images using European quality criteria plus additional questions on vascular lines, noise and disease simulation. Visual grading characteristics and ordinal regression statistics were used to evaluate the effect of detector type, post-processing and dose on the VGA score (VGAS). No significant differences were found between the NIP, DXD and DRX detectors (p > 0.05), whereas the PIP detector had a significantly lower VGAS (p < 0.0001). Processing did not influence VGAS (p = 0.819). Increasing dose resulted in significantly higher VGAS (p < 0.0001). Visual grading analysis (VGA) identified a detector air kerma/image (DAK/image) of ~2.4 μGy as an ideal working point for the NIP, DXD and DRX detectors. VGAS tracked IQ differences between detectors and dose levels but not image post-processing changes. VGA showed a DAK/image value above which perceived IQ did not improve, potentially useful for commissioning. • A VGA study detects IQ differences between detectors and dose levels. • The NIP detector matched the VGAS of the CsI DR detectors. • VGA data are useful in setting the initial detector air kerma level. • Differences in NNPS were consistent with changes in VGAS.
The reliability of continuous brain responses during naturalistic listening to music.
Burunat, Iballa; Toiviainen, Petri; Alluri, Vinoo; Bogert, Brigitte; Ristaniemi, Tapani; Sams, Mikko; Brattico, Elvira
2016-01-01
Low-level (timbral) and high-level (tonal and rhythmical) musical features during continuous listening to music, studied by functional magnetic resonance imaging (fMRI), have been shown to elicit large-scale responses in cognitive, motor, and limbic brain networks. Using a similar methodological approach and a similar group of participants, we aimed to study the replicability of previous findings. Participants' fMRI responses during continuous listening of a tango Nuevo piece were correlated voxelwise against the time series of a set of perceptually validated musical features computationally extracted from the music. The replicability of previous results and the present study was assessed by two approaches: (a) correlating the respective activation maps, and (b) computing the overlap of active voxels between datasets at variable levels of ranked significance. Activity elicited by timbral features was better replicable than activity elicited by tonal and rhythmical ones. These results indicate more reliable processing mechanisms for low-level musical features as compared to more high-level features. The processing of such high-level features is probably more sensitive to the state and traits of the listeners, as well as of their background in music. Copyright © 2015 Elsevier Inc. All rights reserved.
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale based fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the thus obtained fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focusing areas and then a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878
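The Sum-Modified-Laplacian focus measure used in the fusion rules can be sketched as below; the step and window sizes are typical illustrative values, and a simple rule would then keep, at each location, the coefficient from whichever source image gives the larger SML response.

```python
import numpy as np
from scipy import ndimage

def sum_modified_laplacian(img, step=1, window=5):
    # Modified Laplacian: |2I - I(x-s) - I(x+s)| + |2I - I(y-s) - I(y+s)|;
    # border wrap-around introduced by np.roll is ignored for brevity.
    ml = (np.abs(2 * img - np.roll(img, step, axis=1) - np.roll(img, -step, axis=1)) +
          np.abs(2 * img - np.roll(img, step, axis=0) - np.roll(img, -step, axis=0)))
    # Sum over a local window to obtain the per-pixel focus measure.
    return ndimage.uniform_filter(ml, size=window) * window * window
```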
NASA Astrophysics Data System (ADS)
Wu, Bo; Chung Liu, Wai; Grumpe, Arne; Wöhler, Christian
2016-06-01
Lunar topographic information, e.g., a lunar DEM (Digital Elevation Model), is very important for lunar exploration missions and scientific research. Lunar DEMs are typically generated from photogrammetric image processing or laser altimetry, of which photogrammetric methods require multiple stereo images of an area. DEMs generated by these methods usually rely on various interpolation techniques, leading to interpolation artifacts in the resulting DEM. On the other hand, photometric shape reconstruction, e.g., SfS (Shape from Shading), extensively studied in the field of Computer Vision, has been introduced for pixel-level resolution DEM refinement. SfS methods have the ability to reconstruct pixel-wise terrain details that explain a given image of the terrain. If the terrain and its corresponding pixel-wise albedo are to be estimated simultaneously, this becomes a SAfS (Shape and Albedo from Shading) problem, which is under-determined without additional information. Previous works show strong statistical regularities in the albedo of natural objects, and this is even more valid in the case of the lunar surface due to its lower albedo complexity compared with the Earth. In this paper we suggest a method that refines a lower-resolution DEM to pixel-level resolution given a monocular image of the coverage with a known light source; at the same time, we also estimate the corresponding pixel-wise albedo map. We regulate the behaviour of albedo and shape such that the optimized terrain and albedo are the likely solutions that explain the corresponding image. The parameters in the approach are optimized through a kernel-based relaxation framework to gain computational advantages. In this research we experimentally employ the Lunar-Lambertian model for reflectance modelling; the framework of the algorithm is expected to be independent of a specific reflectance model. Experiments are carried out using the monocular images from the Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) (0.5 m spatial resolution), constrained by the SELENE and LRO Elevation Model (SLDEM 2015) of 60 m spatial resolution. The results indicate that local details are largely recovered by the algorithm, while low frequency topographic consistency is affected by the low-resolution DEM.
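As a simplified stand-in for the Lunar-Lambertian model employed in the paper, the forward model below renders the image predicted by a purely Lambertian surface for a given DEM and a distant light source; SAfS-style refinement iteratively adjusts the DEM (and albedo) so that this prediction matches the observed image. The illumination parametrization and pixel size are illustrative.

```python
import numpy as np

def lambertian_image(dem, sun_azimuth_deg, sun_elevation_deg, albedo=1.0, pixel_size=1.0):
    # Surface normals from DEM gradients, then the cosine of the incidence angle.
    az, el = np.radians(sun_azimuth_deg), np.radians(sun_elevation_deg)
    sun = np.array([np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el)])
    dz_dy, dz_dx = np.gradient(dem, pixel_size)
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(dem)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return albedo * np.clip(normals @ sun, 0.0, None)   # predicted shaded image
```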
NASA Astrophysics Data System (ADS)
Linares, Rodrigo; Vergara, German; Gutiérrez, Raúl; Fernández, Carlos; Villamayor, Víctor; Gómez, Luis; González-Camino, Maria; Baldasano, Arturo; Castro, G.; Arias, R.; Lapido, Y.; Rodríguez, J.; Romero, Pablo
2015-05-01
The combination of flexibility, productivity, precision and zero-defect manufacturing in future laser-based equipment is a major challenge that faces this enabling technology. New sensors for online monitoring and real-time control of laser-based processes are necessary for improving product quality and increasing manufacturing yields. New approaches to fully automate processes towards zero-defect manufacturing demand smarter heads where lasers, optics, actuators, sensors and electronics will be integrated in a single compact and affordable device. Many defects arising in laser-based manufacturing processes come from instabilities in the dynamics of the laser process. Temperature and heat dynamics are key parameters to be monitored. Low cost infrared imagers with a high speed of response will constitute the next generation of sensors to be implemented in future monitoring and control systems for laser-based processes, capable of providing simultaneous information about heat dynamics and spatial distribution. This work describes the results of using an innovative low-cost high-speed infrared imager based on the first quantum infrared imager monolithically integrated with a Si-CMOS ROIC on the market. The sensor is able to provide low resolution images at frame rates up to 10 kHz in uncooled operation at the same cost as traditional infrared spot detectors. In order to demonstrate the capabilities of the new sensor technology, a low-cost camera was assembled on a standard production laser welding head, making it possible to record melt pool images at frame rates of 10 kHz. In addition, specific software was developed for defect detection and classification. Multiple laser welding processes were recorded with the aim of studying the performance of the system and its application to the real-time monitoring of laser welding processes. During the experiments, different types of defects were produced and monitored. The classifier was fed with the experimental images obtained. Self-learning strategies were implemented with very promising results, demonstrating the feasibility of using low-cost high-speed infrared imagers in advancing towards real-time/in-line zero-defect production systems.
Prabha, S; Suganthi, S S; Sujatha, C M
2015-01-01
Breast thermography is a potential imaging method for the early detection of breast cancer. The pathological conditions can be determined by measuring temperature variations in the abnormal breast regions. Accurate delineation of breast tissues is reported as a challenging task due to inherent limitations of infrared images such as low contrast, low signal to noise ratio and absence of clear edges. Segmentation technique is attempted to delineate the breast tissues by detecting proper lower breast boundaries and inframammary folds. Characteristic features are extracted to analyze the asymmetrical thermal variations in normal and abnormal breast tissues. An automated analysis of thermal variations of breast tissues is attempted using nonlinear adaptive level sets and Riesz transform. Breast thermal images are initially subjected to Stein's unbiased risk estimate based orthonormal wavelet denoising. These denoised images are enhanced using contrast-limited adaptive histogram equalization method. The breast tissues are then segmented using non-linear adaptive level set method. The phase map of enhanced image is integrated into the level set framework for final boundary estimation. The segmented results are validated against the corresponding ground truth images using overlap and regional similarity metrics. The segmented images are further processed with Riesz transform and structural texture features are derived from the transformed coefficients to analyze pathological conditions of breast tissues. Results show that the estimated average signal to noise ratio of denoised images and average sharpness of enhanced images are improved by 38% and 6% respectively. The interscale consideration adopted in the denoising algorithm is able to improve signal to noise ratio by preserving edges. The proposed segmentation framework could delineate the breast tissues with high degree of correlation (97%) between the segmented and ground truth areas. Also, the average segmentation accuracy and sensitivity are found to be 98%. Similarly, the maximum regional overlap between segmented and ground truth images obtained using volume similarity measure is observed to be 99%. Directionality as a feature, showed a considerable difference between normal and abnormal tissues which is found to be 11%. The proposed framework for breast thermal image analysis that is aided with necessary preprocessing is found to be useful in assisting the early diagnosis of breast abnormalities.
Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints
NASA Astrophysics Data System (ADS)
Lee, Ho; Xing, Lei; Davidi, Ran; Li, Ruijiang; Qian, Jianguo; Lee, Rena
2012-04-01
Volumetric cone-beam CT (CBCT) images are acquired repeatedly during a course of radiation therapy and a natural question to ask is whether CBCT images obtained earlier in the process can be utilized as prior knowledge to reduce patient imaging dose in subsequent scans. The purpose of this work is to develop an adaptive prior image constrained compressed sensing (APICCS) method to solve this problem. Reconstructed images using full projections are taken on the first day of radiation therapy treatment and are used as prior images. The subsequent scans are acquired using a protocol of sparse projections. In the proposed APICCS algorithm, the prior images are utilized as an initial guess and are incorporated into the objective function in the compressed sensing (CS)-based iterative reconstruction process. Furthermore, the prior information is employed to detect any possible mismatched regions between the prior and current images for improved reconstruction. For this purpose, the prior images and the reconstructed images are classified into three anatomical regions: air, soft tissue and bone. Mismatched regions are identified by local differences of the corresponding groups in the two classified sets of images. A distance transformation is then introduced to convert the information into an adaptive voxel-dependent relaxation map. In constructing the relaxation map, the matched regions (unchanged anatomy) between the prior and current images are assigned with smaller weight values, which are translated into less influence on the CS iterative reconstruction process. On the other hand, the mismatched regions (changed anatomy) are associated with larger values and the regions are updated more by the new projection data, thus avoiding any possible adverse effects of prior images. The APICCS approach was systematically assessed by using patient data acquired under standard and low-dose protocols for qualitative and quantitative comparisons. The APICCS method provides an effective way for us to enhance the image quality at the matched regions between the prior and current images compared to the existing PICCS algorithm. Compared to the current CBCT imaging protocols, the APICCS algorithm allows an imaging dose reduction of 10-40 times due to the greatly reduced number of projections and lower x-ray tube current level coming from the low-dose protocol.
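A hedged sketch of the voxel-dependent relaxation map follows: mismatched anatomy between the classified prior and current images is detected and a distance transform converts it into smoothly varying weights, large near changed anatomy and small in matched regions. The label convention, weight range and decay scale are illustrative assumptions, not the exact APICCS formulation.

```python
import numpy as np
from scipy import ndimage

def relaxation_map(prior_labels, current_labels, lam_min=0.2, lam_max=1.0, scale=20.0):
    # prior_labels / current_labels: integer class maps (e.g. 0=air, 1=soft tissue, 2=bone).
    mismatch = prior_labels != current_labels
    # Distance (in voxels) from each voxel to the nearest mismatched region.
    dist = ndimage.distance_transform_edt(~mismatch)
    # Large weights near changed anatomy (updated mostly by the new projections),
    # small weights in matched regions (which rely more on the prior image).
    return lam_min + (lam_max - lam_min) * np.exp(-dist / scale)
```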
White-Light Optical Information Processing and Holography.
1982-05-03
artifact noise. However, the deblurring spatial filter that we used was for a narrow spectral band centered at 5154 A green light. To compensate for the scaling... Processing, White-Light Holography, Image Processing, Optical Signal Processing, Image Subtraction, Image Deblurring. ...optical processing technique, we had shown that the incoherent source technique provides better image quality and very low coherent artifact noise
Visual information processing of faces in body dysmorphic disorder.
Feusner, Jamie D; Townsend, Jennifer; Bystritsky, Alexander; Bookheimer, Susan
2007-12-01
Body dysmorphic disorder (BDD) is a severe psychiatric condition in which individuals are preoccupied with perceived appearance defects. Clinical observation suggests that patients with BDD focus on details of their appearance at the expense of configural elements. This study examines abnormalities in visual information processing in BDD that may underlie clinical symptoms. To determine whether patients with BDD have abnormal patterns of brain activation when visually processing others' faces with high, low, or normal spatial frequency information. Case-control study. University hospital. Twelve right-handed, medication-free subjects with BDD and 13 control subjects matched by age, sex, and educational achievement. Intervention: Functional magnetic resonance imaging while performing matching tasks of face stimuli. Stimuli were neutral-expression photographs of others' faces that were unaltered, altered to include only high spatial frequency visual information, or altered to include only low spatial frequency visual information. Blood oxygen level-dependent functional magnetic resonance imaging signal changes in the BDD and control groups during tasks with each stimulus type. Subjects with BDD showed greater left hemisphere activity relative to controls, particularly in lateral prefrontal cortex and lateral temporal lobe regions for all face tasks (and dorsal anterior cingulate activity for the low spatial frequency task). Controls recruited left-sided prefrontal and dorsal anterior cingulate activity only for the high spatial frequency task. Subjects with BDD demonstrate fundamental differences from controls in visually processing others' faces. The predominance of left-sided activity for low spatial frequency and normal faces suggests detail encoding and analysis rather than holistic processing, a pattern evident in controls only for high spatial frequency faces. These abnormalities may be associated with apparent perceptual distortions in patients with BDD. The fact that these findings occurred while subjects viewed others' faces suggests differences in visual processing beyond distortions of their own appearance.
Age effects on visual-perceptual processing and confrontation naming.
Gutherie, Audrey H; Seely, Peter W; Beacham, Lauren A; Schuchard, Ronald A; De l'Aune, William A; Moore, Anna Bacon
2010-03-01
The impact of age-related changes in visual-perceptual processing on naming ability has not been reported. The present study investigated the effects of 6 levels of spatial frequency and 6 levels of contrast on accuracy and latency to name objects in 14 young and 13 older neurologically normal adults with intact lexical-semantic functioning. Spatial frequency and contrast manipulations were made independently. Consistent with the hypotheses, variations in these two visual parameters impact naming ability in young and older subjects differently. The results from the spatial frequency manipulations revealed that, in general, young vs. older subjects are faster and more accurate to name. However, this age-related difference is dependent on the spatial frequency of the image; differences were only seen for images presented at low (e.g., 0.25-1 c/deg) or high (e.g., 8-16 c/deg) spatial frequencies. Contrary to predictions, the results from the contrast manipulations revealed that overall older vs. young adults are more accurate to name. Again, however, differences were only seen for images presented at the lower levels of contrast (i.e., 1.25%). Both age groups had shorter latencies on the second exposure of the contrast-manipulated images, but this possible advantage of exposure was not seen for spatial frequency. Category analyses conducted on the data from this study indicate that older vs. young adults exhibit a stronger nonliving-object advantage for naming spatial frequency-manipulated images. Moreover, the findings suggest that bottom-up visual-perceptual variables integrate with top-down category information in different ways. Potential implications for the aging and naming (and recognition) literature are discussed.
Influence of Emotion on the Control of Low-Level Force Production
ERIC Educational Resources Information Center
Naugle, Kelly M.; Coombes, Stephen A.; Cauraugh, James H.; Janelle, Christopher M.
2012-01-01
The accuracy and variability of a sustained low-level force contraction (2% of maximum voluntary contraction) was measured while participants viewed unpleasant, pleasant, and neutral images during a feedback occluded force control task. Exposure to pleasant and unpleasant images led to a relative increase in force production but did not alter the…
A pipeline for comprehensive and automated processing of electron diffraction data in IPLT.
Schenk, Andreas D; Philippsen, Ansgar; Engel, Andreas; Walz, Thomas
2013-05-01
Electron crystallography of two-dimensional crystals allows the structural study of membrane proteins in their native environment, the lipid bilayer. Determining the structure of a membrane protein at near-atomic resolution by electron crystallography remains, however, a very labor-intense and time-consuming task. To simplify and accelerate the data processing aspect of electron crystallography, we implemented a pipeline for the processing of electron diffraction data using the Image Processing Library and Toolbox (IPLT), which provides a modular, flexible, integrated, and extendable cross-platform, open-source framework for image processing. The diffraction data processing pipeline is organized as several independent modules implemented in Python. The modules can be accessed either from a graphical user interface or through a command line interface, thus meeting the needs of both novice and expert users. The low-level image processing algorithms are implemented in C++ to achieve optimal processing performance, and their interface is exported to Python using a wrapper. For enhanced performance, the Python processing modules are complemented with a central data managing facility that provides a caching infrastructure. The validity of our data processing algorithms was verified by processing a set of aquaporin-0 diffraction patterns with the IPLT pipeline and comparing the resulting merged data set with that obtained by processing the same diffraction patterns with the classical set of MRC programs. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Da; Li, Xinhua; Liu, Bob
2012-03-01
Since the introduction of ASiR, its potential in noise reduction has been reported in various clinical applications. However, the influence of different scan and reconstruction parameters on the trade-off between ASiR's blurring effect and noise reduction in low contrast imaging has not been fully studied. Simple measurements on low contrast images, such as CNR or phantom scores, could not capture the nuanced nature of this problem. We tackled this topic using a method which compares the performance of ASiR in low contrast helical imaging based on an assumed filter layer on top of the FBP reconstruction. Transfer functions of this filter layer were obtained from the noise power spectra (NPS) of corresponding FBP and ASiR images that share the same scan and reconstruction parameters. 2D transfer functions were calculated as sqrt[NPS_ASiR(u, v)/NPS_FBP(u, v)]. Synthesized ACR phantom images were generated by filtering the FBP images with the transfer functions of specific (FBP, ASiR) pairs, and were compared with the ASiR images. It is shown that the transfer functions could predict the deterministic blurring effect of ASiR on low contrast objects, as well as the degree of noise reduction. Using this method, the influence of dose, scan field of view (SFOV), display field of view (DFOV), ASiR level, and Recon Mode on the behavior of ASiR in low contrast imaging was studied. It was found that ASiR level, dose level, and DFOV play more important roles in determining the behavior of ASiR than the other two parameters.
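A minimal sketch of this NPS-ratio transfer function, assuming noise-only realizations are obtained by subtracting pairs of repeated phantom scans, is shown below. The normalization details and function names are illustrative assumptions.

```python
# Hypothetical sketch of the transfer function described above:
# T(u, v) = sqrt(NPS_ASiR(u, v) / NPS_FBP(u, v)), applied to an FBP image in the
# Fourier domain to synthesize an ASiR-like result.
import numpy as np

def nps_2d(noise_images):
    """Average 2D noise power spectrum from a stack of noise-only images."""
    spectra = [np.abs(np.fft.fft2(n - n.mean()))**2 for n in noise_images]
    return np.mean(spectra, axis=0) / noise_images[0].size

def synthesize_asir(fbp_image, nps_fbp, nps_asir, eps=1e-12):
    transfer = np.sqrt(nps_asir / (nps_fbp + eps))   # unshifted FFT ordering
    return np.real(np.fft.ifft2(np.fft.fft2(fbp_image) * transfer))
```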
High dynamic range CMOS-based mammography detector for FFDM and DBT
NASA Astrophysics Data System (ADS)
Peters, Inge M.; Smit, Chiel; Miller, James J.; Lomako, Andrey
2016-03-01
Digital Breast Tomosynthesis (DBT) requires excellent image quality in a dynamic mode at very low dose levels while Full Field Digital Mammography (FFDM) is a static imaging modality that requires high saturation dose levels. These opposing requirements can only be met by a dynamic detector with a high dynamic range. This paper will discuss a wafer-scale CMOS-based mammography detector with 49.5 μm pixels and a CsI scintillator. Excellent image quality is obtained for FFDM as well as DBT applications, comparing favorably with a-Se detectors that dominate the X-ray mammography market today. The typical dynamic range of a mammography detector is not high enough to accommodate both the low noise and the high saturation dose requirements for DBT and FFDM applications, respectively. An approach based on gain switching does not provide the signal-to-noise benefits in the low-dose DBT conditions. The solution to this is to add frame summing functionality to the detector. In one X-ray pulse several image frames will be acquired and summed. The requirements to implement this into a detector are low noise levels, high frame rates and low lag performance, all of which are unique characteristics of CMOS detectors. Results are presented to prove that excellent image quality is achieved, using a single detector for both DBT as well as FFDM dose conditions. This method of frame summing gave the opportunity to optimize the detector noise and saturation level for DBT applications, to achieve high DQE level at low dose, without compromising the FFDM performance.
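The benefit of frame summing can be illustrated with a small worked example using assumed numbers (they are not taken from the paper): the signal of k summed frames adds linearly, while uncorrelated read noise adds in quadrature, so the electronic-noise penalty of splitting one exposure into k frames grows only as sqrt(k).

```python
# Illustrative arithmetic only; the frame count, signal and noise values are assumptions.
import numpy as np

k = 4                      # frames summed per X-ray pulse
signal_per_frame = 500.0   # mean signal electrons per frame
read_noise = 2.0           # read noise electrons rms per frame

total_signal = k * signal_per_frame
total_noise = np.sqrt(k * signal_per_frame + k * read_noise**2)  # shot + read noise
print(f"SNR with frame summing: {total_signal / total_noise:.1f}")
```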
Barca, Patrizio; Giannelli, Marco; Fantacci, Maria Evelina; Caramella, Davide
2018-06-01
Computed tomography (CT) is a useful and widely employed imaging technique, which represents the largest source of population exposure to ionizing radiation in industrialized countries. Adaptive Statistical Iterative Reconstruction (ASIR) is an iterative reconstruction algorithm with the potential to allow reduction of radiation exposure while preserving diagnostic information. The aim of this phantom study was to assess the performance of ASIR, in terms of a number of image quality indices, when different reconstruction blending levels are employed. CT images of the Catphan-504 phantom were reconstructed using conventional filtered back-projection (FBP) and ASIR with reconstruction blending levels of 20, 40, 60, 80, and 100%. Noise, noise power spectrum (NPS), contrast-to-noise ratio (CNR) and modulation transfer function (MTF) were estimated for different scanning parameters and contrast objects. Noise decreased and CNR increased non-linearly up to 50 and 100%, respectively, with increasing blending level of reconstruction. Also, ASIR has proven to modify the NPS curve shape. The MTF of ASIR reconstructed images depended on tube load/contrast and decreased with increasing blending level of reconstruction. In particular, for low radiation exposure and low contrast acquisitions, ASIR showed lower performance than FBP, in terms of spatial resolution for all blending levels of reconstruction. CT image quality varies substantially with the blending level of reconstruction. ASIR has the potential to reduce noise whilst maintaining diagnostic information in low radiation exposure CT imaging. Given the opposite variation of CNR and spatial resolution with the blending level of reconstruction, it is recommended to use an optimal value of this parameter for each specific clinical application.
Noise Reduction Techniques and Scaling Effects towards Photon Counting CMOS Image Sensors
Boukhayma, Assim; Peizerat, Arnaud; Enz, Christian
2016-01-01
This paper presents an overview of the read noise in CMOS image sensors (CISs) based on four-transistor (4T) pixels, column-level amplification and correlated multiple sampling. Starting from the input-referred noise analytical formula, process level optimizations, device choices and circuit techniques at the pixel and column level of the readout chain are derived and discussed. The noise reduction techniques that can be implemented at the column and pixel level are verified by transient noise simulations, measurement and results from recently-published low noise CIS. We show how recently-reported process refinement, leading to the reduction of the sense node capacitance, can be combined with an optimal in-pixel source follower design to reach a sub-0.3 e-rms read noise at room temperature. This paper also discusses the impact of technology scaling on the CIS read noise. It shows how designers can take advantage of scaling and how the Metal-Oxide-Semiconductor (MOS) transistor gate leakage tunneling current appears as a challenging limitation. For this purpose, both simulation results of the gate leakage current and 1/f noise data reported from different foundries and technology nodes are used.
NASA Astrophysics Data System (ADS)
Tian, J.; Krauß, T.; d'Angelo, P.
2017-05-01
Automatic rooftop extraction is one of the most challenging problems in remote sensing image analysis. Classical 2D image processing techniques are expensive due to the large number of features required to locate buildings. This problem can be avoided when 3D information is available. In this paper, we show how to fuse the spectral and height information of stereo imagery to achieve an efficient and robust rooftop extraction. In the first step, the digital terrain model (DTM) and in turn the normalized digital surface model (nDSM) is generated by using a new step-edge approach. In the second step, the initial building locations and rooftop boundaries are derived by removing low-level pixels and those high-level pixels that are more likely to be trees or shadows. This boundary then serves as the initial level set function, which is further refined to fit the best possible boundaries through distance regularized level-set curve evolution. During the fitting procedure, the edge-based active contour model is adopted and implemented using the edge indicators extracted from the panchromatic image. The performance of the proposed approach is tested using WorldView-2 satellite data captured over Munich.
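The two-step idea can be sketched as follows, assuming scikit-image is available; the morphological geodesic active contour is used here as an approximation of the distance-regularized level-set evolution, and the height threshold is an assumption.

```python
# Hypothetical sketch: initial mask from the normalized DSM, refined by an
# edge-based active contour driven by gradients of the panchromatic image.
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

def extract_rooftops(dsm, dtm, panchromatic, min_height_m=3.0, iterations=100):
    ndsm = dsm - dtm                          # normalized digital surface model
    init_mask = ndsm > min_height_m           # elevated objects (buildings, trees)
    edge_map = inverse_gaussian_gradient(panchromatic)  # edge indicator function
    return morphological_geodesic_active_contour(edge_map, iterations,
                                                 init_level_set=init_mask,
                                                 smoothing=2, balloon=-1)
```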
Low-level image properties in facial expressions.
Menzel, Claudia; Redies, Christoph; Hayn-Leichsenring, Gregor U
2018-06-04
We studied low-level image properties of face photographs and analyzed whether they change with different emotional expressions displayed by an individual. Differences in image properties were measured in three databases that depicted a total of 167 individuals. Face images were used either in their original form, cut to a standard format or superimposed with a mask. Image properties analyzed were: brightness, redness, yellowness, contrast, spectral slope, overall power and relative power in low, medium and high spatial frequencies. Results showed that image properties differed significantly between expressions within each individual image set. Further, specific facial expressions corresponded to patterns of image properties that were consistent across all three databases. In order to experimentally validate our findings, we equalized the luminance histograms and spectral slopes of three images from a given individual who showed two expressions. Participants were significantly slower in matching the expression in an equalized compared to an original image triad. Thus, existing differences in these image properties (i.e., spectral slope, brightness or contrast) facilitate emotion detection in particular sets of face images. Copyright © 2018. Published by Elsevier B.V.
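Several of the properties analyzed above can be computed with a few lines of NumPy; the sketch below covers brightness, RMS contrast and the spectral slope (fit to the radially averaged power spectrum in log-log space). The binning details are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of low-level image properties: brightness, RMS contrast, spectral slope.
import numpy as np

def image_properties(gray):
    gray = gray.astype(float)
    brightness = gray.mean()
    contrast = gray.std()                       # RMS contrast
    # Radially averaged power spectrum
    power = np.abs(np.fft.fftshift(np.fft.fft2(gray - brightness)))**2
    cy, cx = np.array(power.shape) // 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    radial = np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, min(cy, cx))           # skip the DC component
    slope = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)[0]
    return brightness, contrast, slope
```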
A coloured oil level indicator detection method based on simple linear iterative clustering
NASA Astrophysics Data System (ADS)
Liu, Tianli; Li, Dongsong; Jiao, Zhiming; Liang, Tao; Zhou, Hao; Yang, Guoqing
2017-12-01
A detection method for coloured oil level indicators is put forward. The method is applied to an inspection robot in a substation, realizing automatic inspection and recognition of oil level indicators. Firstly, the image of the oil level indicator is collected, then clustered and segmented to obtain the label matrix of the image. Secondly, the image is processed by colour space transformation, and the feature matrix of the image is obtained. Finally, the label matrix and feature matrix are used to locate and segment the detected image, and the upper edge of the recognized region is obtained. If the upper edge line exceeds the preset oil level threshold, an alarm alerts the station staff. Through the above-mentioned image processing, the inspection robot can independently recognize the oil level of the indicator instead of relying on manual inspection, reflecting the automation and intelligence of unattended operation.
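A minimal sketch of the reading step is given below, assuming scikit-image: SLIC superpixels provide the label matrix, an HSV transform provides the colour features, and the upper edge row of the coloured (assumed red) region is compared against a preset threshold row. The colour rule and the convention that "exceeds the threshold" means a smaller row index are assumptions.

```python
# Hypothetical sketch of locating the coloured indicator region and reading its level.
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2hsv

def read_oil_level(image_rgb, level_threshold_row, n_segments=200):
    labels = slic(image_rgb, n_segments=n_segments, compactness=10)  # label matrix
    hsv = rgb2hsv(image_rgb)                                         # feature matrix
    # Assume the indicator column is red-ish: high saturation, hue near 0 or 1
    red_mask = (hsv[..., 1] > 0.5) & ((hsv[..., 0] < 0.05) | (hsv[..., 0] > 0.95))
    # Keep superpixels that are mostly red, then find the topmost (upper edge) row
    keep = [l for l in np.unique(labels) if red_mask[labels == l].mean() > 0.5]
    region = np.isin(labels, keep)
    upper_edge_row = int(np.argmax(region.any(axis=1)))
    alarm = upper_edge_row < level_threshold_row   # level above the preset threshold
    return upper_edge_row, alarm
```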
NASA Astrophysics Data System (ADS)
Wang, Jing; Wang, Su; Li, Lihong; Fan, Yi; Lu, Hongbing; Liang, Zhengrong
2008-10-01
Computed tomography colonography (CTC) or CT-based virtual colonoscopy (VC) is an emerging tool for detection of colonic polyps. Compared to the conventional fiber-optic colonoscopy, VC has demonstrated the potential to become a mass screening modality in terms of safety, cost, and patient compliance. However, current CTC delivers excessive X-ray radiation to the patient during data acquisition. The radiation is a major concern for screening application of CTC. In this work, we performed a simulation study to demonstrate a possible ultra low-dose CT technique for VC. The ultra low-dose abdominal CT images were simulated by adding noise to the sinograms of the patient CTC images acquired with normal dose scans at 100 mA s levels. The simulated noisy sinogram or projection data were first processed by a Karhunen-Loeve domain penalized weighted least-squares (KL-PWLS) restoration method and then reconstructed by a filtered backprojection algorithm for the ultra low-dose CT images. The patient-specific virtual colon lumen was constructed and navigated by a VC system after electronic colon cleansing of the orally tagged residual stool and fluid. With the KL-PWLS noise reduction, the colon lumen can successfully be constructed and the colonic polyp can be detected at an ultra low-dose level below 50 mA s. Polyps can be detected more easily with the KL-PWLS noise reduction than with conventional noise filters, such as the Hanning filter. These promising results indicate the feasibility of an ultra low-dose CTC pipeline for colon screening with less-stressful bowel preparation by fecal tagging with oral contrast.
Brand, John; Johnson, Aaron P
2014-01-01
In four experiments, we investigated how attention to local and global levels of hierarchical Navon figures affected the selection of diagnostic spatial scale information used in scene categorization. We explored this issue by asking observers to classify hybrid images (i.e., images that contain low spatial frequency (LSF) content of one image, and high spatial frequency (HSF) content from a second image) immediately following global and local Navon tasks. Hybrid images can be classified according to either their LSF or HSF content, making them ideal for investigating diagnostic spatial scale preference. Although observers were sensitive to both spatial scales (Experiment 1), they overwhelmingly preferred to classify hybrids based on LSF content (Experiment 2). In Experiment 3, we demonstrated that LSF based hybrid categorization was faster following global Navon tasks, suggesting that LSF processing associated with global Navon tasks primed the selection of LSFs in hybrid images. Experiment 4 examined this hypothesis by replicating Experiment 3 but suppressing the LSF information in the Navon letters through contrast balancing of the stimuli. Similar to Experiment 3, observers preferred to classify hybrids based on LSF content; however, in contrast, LSF-based hybrid categorization was slower following global than local Navon tasks.
Adaptive nonlocal means filtering based on local noise level for CT denoising
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.
2014-01-15
Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to the local noise level of CT images and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). A graphics processing unit (GPU) implementation of the noise map calculation and the adaptive NLM filtering was developed to meet demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the shape and peak frequency of the noise power spectrum better than commercial smoothing kernels, and indicate that the spatial resolution at low contrast levels is not significantly degraded. Both the subjective evaluation using the ACR phantom and the objective evaluation on a low-contrast detection task using a CHO model observer demonstrate an improvement in low-contrast performance. The GPU implementation can process and transfer 300 slice images within 5 min. On patient data, the adaptive NLM algorithm provides more effective denoising of CT data throughout a volume than standard NLM, and may allow significant lowering of radiation dose. After a two-week pilot study of lower dose CT urography and CT enterography exams, both GI and GU radiology groups elected to proceed with permanent implementation of adaptive NLM in their GI and GU CT practices. Conclusions: This work describes and validates a computationally efficient technique for noise map estimation directly from CT images, and an adaptive NLM filtering based on this noise map, on phantom and patient data. Both the noise map calculation and the adaptive NLM filtering can be performed in times that allow integration with clinical workflow. The adaptive NLM algorithm provides effective denoising of CT data throughout a volume, and may allow significant lowering of radiation dose.
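The noise-adaptive filtering idea can be sketched as below, assuming scikit-image: the local filtering strength h is tied to a precomputed noise map and applied block-by-block as a coarse approximation of the per-voxel adaptation described above. Block size, the h scaling and function names are assumptions, not the published implementation.

```python
# Hypothetical sketch of noise-map-driven adaptive NLM filtering.
import numpy as np
from skimage.restoration import denoise_nl_means

def adaptive_nlm(image, noise_map, block=64, strength=1.0):
    out = np.zeros_like(image, dtype=float)
    for r in range(0, image.shape[0], block):
        for c in range(0, image.shape[1], block):
            sl = (slice(r, r + block), slice(c, c + block))
            sigma = float(np.mean(noise_map[sl]))       # local noise level
            out[sl] = denoise_nl_means(image[sl].astype(float),
                                       h=strength * sigma, sigma=sigma,
                                       patch_size=5, patch_distance=6,
                                       fast_mode=True)
    return out
```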
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, F; Yang, Y; Young, L
Purpose: Radiomic texture features derived from the oncologic PET have recently been brought under intense investigation within the context of patient stratification and treatment outcome prediction in a variety of cancer types; however, their validity has not yet been examined. This work aims to validate radiomic PET texture metrics through the use of realistic simulations in the ground truth setting. Methods: Simulation of FDG-PET was conducted by applying the Zubal phantom as an attenuation map to the SimSET software package that employs Monte Carlo techniques to model the physical process of emission imaging. A total of 15 irregularly-shaped lesions featuring heterogeneous activity distribution were simulated. For each simulated lesion, 28 texture features in relation to the intensity histograms (GLIH), grey-level co-occurrence matrices (GLCOM), neighborhood difference matrices (GLNDM), and zone size matrices (GLZSM) were evaluated and compared with their respective values extracted from the ground truth activity map. Results: In reference to the values from the ground truth images, texture parameters appearing on the simulated data varied with a range of 0.73–3026.2% for GLIH-based, 0.02–100.1% for GLCOM-based, 1.11–173.8% for GLNDM-based, and 0.35–66.3% for GLZSM-based. For the majority of the examined texture metrics (16/28), their values on the simulated data differed significantly from those from the ground truth images (P-value ranges from <0.0001 to 0.04). Features not exhibiting a significant difference comprised GLIH-based standard deviation, GLCOM-based energy and entropy, GLNDM-based coarseness and contrast, and GLZSM-based low gray-level zone emphasis, high gray-level zone emphasis, short zone low gray-level emphasis, long zone low gray-level emphasis, long zone high gray-level emphasis, and zone size nonuniformity. Conclusion: The extent to which PET imaging disturbs texture appearance is feature-dependent and could be substantial. It is thus advised that use of PET texture parameters for predictive and prognostic measurements in the oncologic setting awaits further systematic and critical evaluation.
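A minimal sketch of grey-level co-occurrence (GLCOM-type) texture metrics of the kind validated above is shown below, using scikit-image (the functions are spelled greycomatrix/greycoprops in older releases). The quantization to 32 grey levels and the choice of a 2D lesion slice are illustrative assumptions.

```python
# Hypothetical sketch of GLCM texture features on a quantized 2D lesion slice.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(lesion_slice, levels=32):
    # Quantize the lesion intensities to a small number of grey levels
    edges = np.linspace(lesion_slice.min(), lesion_slice.max(), levels)
    q = (np.digitize(lesion_slice, edges) - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```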
Autonomous target tracking of UAVs based on low-power neural network hardware
NASA Astrophysics Data System (ADS)
Yang, Wei; Jin, Zhanpeng; Thiem, Clare; Wysocki, Bryant; Shen, Dan; Chen, Genshe
2014-05-01
Detecting and identifying targets in unmanned aerial vehicle (UAV) images and videos have been challenging problems due to various types of image distortion. Moreover, the significantly high processing overhead of existing image/video processing techniques and the limited computing resources available on UAVs force most of the processing tasks to be performed by the ground control station (GCS) in an off-line manner. In order to achieve fast and autonomous target identification on UAVs, it is thus imperative to investigate novel processing paradigms that can fulfill the real-time processing requirements, while fitting the size, weight, and power (SWaP) constrained environment. In this paper, we present a new autonomous target identification approach on UAVs, leveraging the emerging neuromorphic hardware which is capable of massively parallel pattern recognition processing and demands only a limited level of power consumption. A proof-of-concept prototype was developed based on a micro-UAV platform (Parrot AR Drone) and the CogniMem™ neural network chip, for processing the video data acquired from a UAV camera on the fly. The aim of this study was to demonstrate the feasibility and potential of incorporating emerging neuromorphic hardware into next-generation UAVs and their superior performance and power advantages towards real-time, autonomous target tracking.
A thermophone on porous polymeric substrate
NASA Astrophysics Data System (ADS)
Chitnis, G.; Kim, A.; Song, S. H.; Jessop, A. M.; Bolton, J. S.; Ziaie, B.
2012-07-01
In this Letter, we present a simple, low-temperature method for fabricating a wide-band (>80 kHz) thermo-acoustic sound generator on a porous polymeric substrate. We were able to achieve up to 80 dB of sound pressure level with an input power of 0.511 W. No significant surface temperature increase was observed in the device even at an input power level of 2.5 W. Wide-band ultrasonic performance, simplicity of structure, and scalability of the fabrication process make this device suitable for many ranging and imaging applications.
Sund, T; Olsen, J B
2006-09-01
To investigate whether sliding window adaptive histogram equalization (SWAHE) of digital mammograms improves the detection of simulated calcifications, as compared to images normalized by global histogram equalization (GHE). Direct digital mammograms were obtained from mammary tissue phantoms superimposed with different frames. Each frame was divided into forty squares by a wire mesh, and contained granular calcifications randomly positioned in about 50% of the squares. Three radiologists read the mammograms on a display monitor. They classified their confidence in the presence of microcalcifications in each square on a scale of 1 to 5. Images processed with GHE were first read and used as a reference. In a later session, the same images processed with SWAHE were read. The results were compared using ROC methodology. When the total areas AZ were compared, the results were completely equivocal. When comparing the high-specificity partial ROC area AZ,0.2 below false-positive fraction (FPF) 0.20, two of the three observers performed best with the images processed with SWAHE. The difference was not statistically significant. When the reader's confidence threshold in malignancy is set at a high level, increasing the contrast of mammograms with SWAHE may enhance the visibility of microcalcifications without adversely affecting the false-positive rate. When the reader's confidence threshold is set at a low level, the effect of SWAHE is an increase of false positives. Further investigation is needed to confirm the validity of the conclusions.
Lee, In-Seon; Preissl, Hubert; Giel, Katrin; Schag, Kathrin; Enck, Paul
2018-01-23
The food-related behavior of functional dyspepsia has been attracting more interest of late. This pilot study aims to provide evidence of the physiological, emotional, and attentional aspects of food processing in functional dyspepsia patients. The study was performed in 15 functional dyspepsia patients and 17 healthy controls after a standard breakfast. We measured autonomic nervous system activity using skin conductance response and heart rate variability, emotional response using facial electromyography, and visual attention using eyetracking during the visual stimuli of food/non-food images. In comparison to healthy controls, functional dyspepsia patients showed a greater craving for food, a decreased intake of food, more dyspeptic symptoms, lower pleasantness rating of food images (particularly of high fat), decreased low frequency/high frequency ratio of heart rate variability, and suppressed total processing time of food images. There were no significant differences of skin conductance response and facial electromyography data between groups. The results suggest that high level cognitive functions rather than autonomic and emotional mechanisms are more liable to function differently in functional dyspepsia patients. Abnormal dietary behavior, reduced subjective rating of pleasantness and visual attention to food should be considered as important pathophysiological characteristics in functional dyspepsia.
Harness That S.O.B.: Distributing Remote Sensing Analysis in a Small Office/Business
NASA Astrophysics Data System (ADS)
Kramer, J.; Combe, J.; McCord, T. B.
2009-12-01
Researchers in a small office/business (SOB) operate with limited funding, equipment, and software availability. To mitigate these issues, we developed a distributed computing framework that: 1) leverages open source software to implement functionality otherwise reliant on proprietary software and 2) harnesses the unused power of (semi-)idle office computers with mixed operating systems (OSes). This abstract outlines some reasons for the effort, its conceptual basis and implementation, and provides brief speedup results. The Multiple-Endmember Linear Spectral Unmixing Model (MELSUM) [1] processes remote-sensing (hyper-)spectral images. The algorithm is computationally expensive, sometimes taking a full week or more for a 1 million pixel/100 wavelength image. Analysis of pixels is independent, so a large benefit can be gained from parallel processing techniques. Job concurrency is limited by the number of active processing units. MELSUM was originally written in the Interactive Data Language (IDL). Despite its multi-threading capabilities, an IDL instance executes on a single machine, and so concurrency is limited by the machine's number of central processing units (CPUs). Network distribution can access more CPUs to provide a greater speedup, while also taking advantage of (often) underutilized extant equipment. Appropriately integrating open-source software magnifies the impact by avoiding the purchase of additional licenses. Our method of distribution breaks into four conceptual parts: 1) the top- or task-level user interface; 2) a mid-level program that manages hosts and jobs, called the distribution server; 3) a low-level executable for individual pixel calculations; and 4) a control program to synchronize sequential sub-tasks. Each part is a separate OS process, passing information via shell commands and/or temporary files. While the control and low-level executables are short-lived, the top-level program and distribution server run (at least) for the entirety of a task. While any language that supports "spawning" of OS processes can serve as the top-level interface, our solution, d-MELSUM, has been integrated with the IDL code. Doing so extracts the core calculation from IDL, but otherwise preserves IDL features and functionality. The distribution server is an extension of the ADE [2] mobile robot software, written in Java. Network connections rely on a secure shell (SSH) implementation, whether natively available (e.g., Linux or OS X) or user installed (e.g., OpenSSH available via Cygwin on Windows). Both the low-level and control programs are relatively small C++ programs (~54K, or 1500 lines, total) that were developed in-house, and use GNU's g++ compiler. The low-level code also relies on Linear Algebra PACKage (LAPACK) libraries for pixel calculations. Despite performance being contingent on data size, CPU speed, and network communication rate and latency to some degree, results have generally demonstrated a time reduction of a factor proportional to the number of open connections (one per CPU). For example, the task mentioned above requiring a week to process took 18 hours with d-MELSUM, using 10 CPUs on 2 computers. [1] J.-Ph. Combe et al., PSS 56, 2008. [2] J. Kramer and M. Scheutz, IROS 2006, 2006.
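The general pattern of farming independent pixel-block jobs out to idle machines over SSH can be sketched in a few lines; the host names, remote command and file layout below are illustrative assumptions and not the d-MELSUM implementation.

```python
# Hypothetical sketch: a coordinator splits work into blocks and dispatches each
# block to a (semi-)idle host over SSH, one job per available connection.
import subprocess
from concurrent.futures import ThreadPoolExecutor

HOSTS = ["lab-pc1", "lab-pc2", "lab-pc3"]                # assumed idle office machines
REMOTE_CMD = "./pixel_worker --in {src} --out {dst}"     # assumed low-level worker

def run_block(args):
    host, src, dst = args
    subprocess.run(["scp", src, f"{host}:{src}"], check=True)         # ship input
    subprocess.run(["ssh", host, REMOTE_CMD.format(src=src, dst=dst)], check=True)
    subprocess.run(["scp", f"{host}:{dst}", dst], check=True)         # fetch result

def distribute(blocks):
    """blocks: list of (input_file, output_file) pixel-block pairs."""
    jobs = [(HOSTS[i % len(HOSTS)], src, dst) for i, (src, dst) in enumerate(blocks)]
    with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
        list(pool.map(run_block, jobs))
```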
Performance of InGaAs short wave infrared avalanche photodetector for low flux imaging
NASA Astrophysics Data System (ADS)
Singh, Anand; Pal, Ravinder
2017-11-01
Opto-electronic performance of the InGaAs/i-InGaAs/InP short wavelength infrared focal plane array suitable for high resolution imaging under low flux conditions and ranging is presented. More than 85% quantum efficiency is achieved in the optimized detector structure. The isotropic nature of the wet etching process poses a challenge in maintaining the required control in the small pitch high density detector array. An etching process is developed to achieve a low dark current density of 1 nA/cm2 in the detector array with 25 µm pitch at 298 K. A noise-equivalent photon level of less than one is achievable, showing single-photon detection capability. The reported photodiode is suitable for low-photon-flux active as well as passive imaging, optical information processing and quantum computing applications.
Sensor and computing resource management for a small satellite
NASA Astrophysics Data System (ADS)
Bhatia, Abhilasha; Goehner, Kyle; Sand, John; Straub, Jeremy; Mohammad, Atif; Korvald, Christoffer; Nervold, Anders Kose
A small satellite in a low-Earth orbit (e.g., approximately a 300 to 400 km altitude) has an orbital velocity of approximately 8.5 km/s and completes an orbit approximately every 90 minutes. For a satellite with minimal attitude control, this presents a significant challenge in obtaining multiple images of a target region. Presuming an inclination in the range of 50 to 65 degrees, a limited number of opportunities to image a given target or communicate with a given ground station are available over the course of a 24-hour period. For imaging needs (where solar illumination is required), the number of opportunities is further reduced. Given these short windows of opportunity for imaging, data transfer, and sending commands, scheduling must be optimized. In addition to the high-level scheduling performed for spacecraft operations, payload-level scheduling is also required. The mission requires that images be post-processed to maximize spatial resolution and minimize data transfer (through removing overlapping regions). The payload unit includes GPS and inertial measurement unit (IMU) hardware to aid in image alignment for this post-processing. The payload scheduler must, thus, split its energy and computing-cycle budgets between determining an imaging sequence (required to capture the highly-overlapping data required for super-resolution and adjacent areas required for mosaicking), processing the imagery (to perform the super-resolution and mosaicking) and preparing the data for transmission (compressing it, etc.). This paper presents an approach for satellite control, scheduling and operations that allows the cameras, GPS and IMU to be used in conjunction to acquire higher-resolution imagery of a target region.
High Tech Aids Low Vision: A Review of Image Processing for the Visually Impaired.
Moshtael, Howard; Aslam, Tariq; Underwood, Ian; Dhillon, Baljean
2015-08-01
Recent advances in digital image processing provide promising methods for maximizing the residual vision of the visually impaired. This paper seeks to introduce this field to the readership and describe its current state as found in the literature. A systematic search revealed 37 studies that measure the value of image processing techniques for subjects with low vision. The techniques used are categorized according to their effect and the principal findings are summarized. The majority of participants preferred enhanced images over the original for a wide range of enhancement types. Adapting the contrast and spatial frequency content often improved performance at object recognition and reading speed, as did techniques that attenuate the image background and a technique that induced jitter. A lack of consistency in preference and performance measures was found, as well as a lack of independent studies. Nevertheless, the promising results should encourage further research in order to allow their widespread use in low-vision aids.
Kriete, A; Schäffer, R; Harms, H; Aus, H M
1987-06-01
Nuclei of the cells from the thyroid gland were analyzed in a transmission electron microscope by direct TV scanning and on-line image processing. The method uses the advantages of a visual-perception model to detect structures in noisy and low-contrast images. The features analyzed include area, a form factor and texture parameters from the second derivative stage. Three tumor-free thyroid tissues, three follicular adenomas, three follicular carcinomas and three papillary carcinomas were studied. The computer-aided cytophotometric method showed that the most significant differences were the statistics of the chromatin texture features of homogeneity and regularity. These findings document the possibility of an automated differentiation of tumors at the ultrastructural level.
Corredor, Germán; Whitney, Jon; Arias, Viviana; Madabhushi, Anant; Romero, Eduardo
2017-01-01
Computational histomorphometric approaches typically use low-level image features for building machine learning classifiers. However, these approaches usually ignore high-level expert knowledge. A computational model (M_im) combines low-, mid-, and high-level image information to predict the likelihood of cancer in whole slide images. Handcrafted low- and mid-level features are computed from area, color, and spatial nuclei distributions. High-level information is implicitly captured from the recorded navigations of pathologists while exploring whole slide images during diagnostic tasks. This model was validated by predicting the presence of cancer in a set of unseen fields of view. The available database was composed of 24 cases of basal-cell carcinoma, from which 17 served to estimate the model parameters and the remaining 7 comprised the evaluation set. A total of 274 fields of view of size 1024×1024 pixels were extracted from the evaluation set. Then 176 patches from this set were used to train a support vector machine classifier to predict the presence of cancer on a patch-by-patch basis while the remaining 98 image patches were used for independent testing, ensuring that the training and test sets do not comprise patches from the same patient. A baseline model (M_ex) estimated the cancer likelihood for each of the image patches. M_ex uses the same visual features as M_im, but its weights are estimated from nuclei manually labeled as cancerous or noncancerous by a pathologist. M_im achieved an accuracy of 74.49% and an F-measure of 80.31%, while M_ex yielded corresponding accuracy and F-measures of 73.47% and 77.97%, respectively. PMID:28382314
McLoughlin, L C; Inder, S; Moran, D; O'Rourke, C; Manecksha, R P; Lynch, T H
2018-02-01
The diagnostic evaluation of a PSA recurrence after RP in the Irish hospital setting involves multimodality imaging with MRI, CT, and bone scanning, despite the low diagnostic yield from imaging at low PSA levels. We aim to investigate the value of multimodality imaging in PC patients after RP with a PSA recurrence. Forty-eight patients with a PSA recurrence after RP who underwent multimodality imaging were evaluated. Demographic data, postoperative PSA levels, and imaging studies performed at those levels were evaluated. Eight (21%) MRIs, 6 (33%) CTs, and 4 (9%) bone scans had PCa-specific findings. Three (12%) patients had a positive MRI with a PSA <1.0 ng/ml, while 5 (56%) were positive at PSA ≥1.1 ng/ml (p = 0.05). No patients had a positive CT TAP at a PSA level <1.0 ng/ml, while 5 (56%) were positive at levels ≥1.1 ng/ml (p = 0.03). No patients had a positive bone scan at PSA levels <1.0 ng/ml, while 4 (27%) were positive at levels ≥1.1 ng/ml (p = 0.01). The diagnostic yield from multimodality imaging, and isotope bone scanning in particular, at PSA levels <1.0 ng/ml is low. There is a statistically significant increase in the frequency of positive findings on CT and bone scanning at PSA levels ≥1.1 ng/ml. MRI alone is of investigative value at PSA <1.0 ng/ml. The indication for CT, MRI, or isotope bone scanning should be carefully correlated with the clinical question and how it will affect further management.
Image-algebraic design of multispectral target recognition algorithms
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.
1994-06-01
In this paper, we discuss methods for multispectral ATR (Automated Target Recognition) of small targets that are sensed under suboptimal conditions, such as haze, smoke, and low light levels. In particular, we discuss our ongoing development of algorithms and software that effect intelligent object recognition by selecting ATR filter parameters according to ambient conditions. Our algorithms are expressed in terms of IA (image algebra), a concise, rigorous notation that unifies linear and nonlinear mathematics in the image processing domain. IA has been implemented on a variety of parallel computers, with preprocessors available for the Ada and FORTRAN languages. An image algebra C++ class library has recently been made available. Thus, our algorithms are both feasible implementationally and portable to numerous machines. Analyses emphasize the aspects of image algebra that aid the design of multispectral vision algorithms, such as parameterized templates that facilitate the flexible specification of ATR filters.
Edge co-occurrences can account for rapid categorization of natural versus animal images
NASA Astrophysics Data System (ADS)
Perrinet, Laurent U.; Bednar, James A.
2015-06-01
Making a judgment about the semantic category of a visual scene, such as whether it contains an animal, is typically assumed to involve high-level associative brain areas. Previous explanations require progressively analyzing the scene hierarchically at increasing levels of abstraction, from edge extraction to mid-level object recognition and then object categorization. Here we show that the statistics of edge co-occurrences alone are sufficient to perform a rough yet robust (translation, scale, and rotation invariant) scene categorization. We first extracted the edges from images using a scale-space analysis coupled with a sparse coding algorithm. We then computed the “association field” for different categories (natural, man-made, or containing an animal) by computing the statistics of edge co-occurrences. These differed strongly, with animal images having more curved configurations. We show that this geometry alone is sufficient for categorization, and that the pattern of errors made by humans is consistent with this procedure. Because these statistics could be measured as early as the primary visual cortex, the results challenge widely held assumptions about the flow of computations in the visual system. The results also suggest new algorithms for image classification and signal processing that exploit correlations between low-level structure and the underlying semantic category.
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Aleksandr I.; Lazarev, Alexander A.; Magas, Taras E.
2010-04-01
The advantages of equivalence models (EMs) for neural networks (NNs) are shown in this paper. EMs are based on vector-matrix procedures with basic operations of continuous neuro-logic: normalized vector operations "equivalence", "nonequivalence", "autoequivalence", "autononequivalence". The capacity of NNs based on EMs and their modifications, including auto- and heteroassociative memories for 2D images, exceeds the number of neurons several-fold. Such neuroparadigms are very promising for processing, recognizing, and storing large, strongly correlated images. A family of "normalized equivalence-nonequivalence" neuro-fuzzy logic operations is elaborated on the basis of generalized fuzzy-negation, t-norm, and s-norm operations. A biologically motivated concept and time-pulse encoding principles of continuous-logic photocurrent reflections and sample-storage devices with pulse-width photoconverters have allowed us to design generalized structures for realizing the family of normalized linear vector operations "equivalence"-"nonequivalence". Simulation results show that the processing time in such circuits does not exceed a few microseconds. The circuits are simple, with a low supply voltage (1-3 V), low power consumption (milliwatts), low input signal levels (microwatts), and an integrated construction, and they address the problems of interconnection and cascading.
SU-F-I-41: Calibration-Free Material Decomposition for Dual-Energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, W; Xing, L; Zhang, Q
2016-06-15
Purpose: To eliminate tedious phantom calibration or manual region of interest (ROI) selection as required in dual-energy CT material decomposition, we establish a new projection-domain material decomposition framework with incorporation of the energy spectrum. Methods: Similar to the case of dual-energy CT, the integral of the basis material image in our model is expressed as a linear combination of basis functions, which are polynomials of the high- and low-energy raw projection data. To yield the unknown coefficients of the linear combination, the proposed algorithm minimizes the quadratic error between the high- and low-energy raw projection data and the projection calculated using material images. We evaluate the algorithm with an iodine concentration numerical phantom at different dose and iodine concentration levels. The x-ray energy spectra of the high- and low-energy beams are estimated using an indirect transmission method. The derived monochromatic images are compared with the high- and low-energy CT images to demonstrate beam hardening artifact reduction. Quantitative results were measured and compared to the true values. Results: The differences between the true density values used for simulation and those obtained from the monochromatic images are 1.8%, 1.3%, 2.3%, and 2.9% for the dose levels from standard dose to 1/8 dose, and are 0.4%, 0.7%, 1.5%, and 1.8% for the four iodine concentration levels from 6 mg/mL to 24 mg/mL. For all of the cases, beam hardening artifacts, especially streaks shown between dense inserts, are almost completely removed in the monochromatic images. Conclusion: The proposed algorithm provides an effective way to yield material images and artifact-free monochromatic images at different dose levels without the need for phantom calibration or ROI selection. Furthermore, the approach also yields accurate results when the concentration of the iodine concentration insert is very low, suggesting the algorithm is robust with respect to the low-contrast scenario.
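The projection-domain model can be sketched as follows: each basis material line integral is written as a polynomial in the two raw projections, and the polynomial coefficients are found by minimizing the quadratic error between measured and model-predicted projections. The forward model interface, polynomial order and optimizer below are illustrative assumptions, not the published algorithm.

```python
# Hypothetical sketch of polynomial projection-domain material decomposition.
import numpy as np
from scipy.optimize import least_squares

def poly_basis(p_low, p_high, order=2):
    """Polynomial terms in the two raw projections: 1, pL, pH, pL^2, pL*pH, pH^2, ..."""
    terms = [np.ones_like(p_low)]
    for i in range(1, order + 1):
        for j in range(i + 1):
            terms.append(p_low**(i - j) * p_high**j)
    return np.stack(terms, axis=-1)

def fit_coefficients(p_low, p_high, forward_model, order=2):
    """forward_model(A1, A2) -> predicted (p_low, p_high) from material line integrals."""
    basis = poly_basis(p_low, p_high, order)
    n = basis.shape[-1]

    def residual(c):
        A1 = basis @ c[:n]          # basis material 1 line integrals
        A2 = basis @ c[n:]          # basis material 2 line integrals
        pl_hat, ph_hat = forward_model(A1, A2)
        return np.concatenate([(pl_hat - p_low).ravel(), (ph_hat - p_high).ravel()])

    return least_squares(residual, x0=np.zeros(2 * n)).x
```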
Low-cost digital image processing at the University of Oklahoma
NASA Technical Reports Server (NTRS)
Harrington, J. A., Jr.
1981-01-01
Computer assisted instruction in remote sensing at the University of Oklahoma involves two separate approaches and is dependent upon initial preprocessing of a LANDSAT computer compatible tape using software developed for an IBM 370/158 computer. In-house generated preprocessing algorithms permit students or researchers to select a subset of a LANDSAT scene for subsequent analysis using either general purpose statistical packages or color graphic image processing software developed for Apple II microcomputers. Procedures for preprocessing the data and image analysis using either of the two approaches for low-cost LANDSAT data processing are described.
Pérez, Alberto J; González-Peña, Rolando J; Braga, Roberto; Perles, Ángel; Pérez-Marín, Eva; García-Diego, Fernando J
2018-01-11
Dynamic laser speckle (DLS) is used as a reliable sensor of activity for all types of materials. Traditional applications are based on high-rate captures (usually greater than 10 frames-per-second, fps). Even for drying processes in conservation treatments, where there is a high level of activity in the first moments after the application and slower activity after some minutes or hours, the process is based on the acquisition of images at a time rate that is the same in moments of high and low activity. In this work, we present an alternative approach to track the drying process of protective layers and other painting conservation processes that take a long time to reduce their levels of activity. We illuminate, using three different wavelength lasers, a temporary protector (cyclododecane) and a varnish, and monitor them using a low fps rate during long-term drying. The results are compared to the traditional method. This work also presents a monitoring method that uses portable equipment. The results present the feasibility of using the portable device and show the improved sensitivity of the dynamic laser speckle when sensing the long-term process for drying cyclododecane and varnish in conservation.
Luo, Jiebo; Boutell, Matthew
2005-05-01
Automatic image orientation detection for natural images is a useful, yet challenging research topic. Humans use scene context and semantic object recognition to identify the correct image orientation. However, it is difficult for a computer to perform the task in the same way because current object recognition algorithms are extremely limited in their scope and robustness. As a result, existing orientation detection methods were built upon low-level vision features such as spatial distributions of color and texture. Discrepant detection rates have been reported for these methods in the literature. We have developed a probabilistic approach to image orientation detection via confidence-based integration of low-level and semantic cues within a Bayesian framework. Our current accuracy is 90 percent for unconstrained consumer photos, impressive given the findings of a psychophysical study conducted recently. The proposed framework is an attempt to bridge the gap between computer and human vision systems and is applicable to other problems involving semantic scene content understanding.
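One way to illustrate confidence-based cue integration over the four candidate orientations (0, 90, 180, 270 degrees) is sketched below; the shrink-to-uniform weighting is an assumption for illustration, not the authors' Bayesian formulation.

```python
# Illustrative sketch of confidence-weighted integration of orientation cues.
import numpy as np

def integrate_cues(cue_likelihoods, cue_confidences, prior=None):
    """cue_likelihoods: list of length-4 arrays; cue_confidences: values in [0, 1]."""
    posterior = np.full(4, 0.25) if prior is None else np.asarray(prior, float)
    for lik, conf in zip(cue_likelihoods, cue_confidences):
        lik = np.asarray(lik, float)
        # Low-confidence cues are shrunk toward a uniform distribution
        tempered = conf * lik / lik.sum() + (1.0 - conf) * 0.25
        posterior = posterior * tempered
    return posterior / posterior.sum()
```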
Volta phase plate data collection facilitates image processing and cryo-EM structure determination.
von Loeffelholz, Ottilie; Papai, Gabor; Danev, Radostin; Myasnikov, Alexander G; Natchiar, S Kundhavai; Hazemann, Isabelle; Ménétret, Jean-François; Klaholz, Bruno P
2018-06-01
A current bottleneck in structure determination of macromolecular complexes by cryo electron microscopy (cryo-EM) is the large amount of data needed to obtain high-resolution 3D reconstructions, including through sorting into different conformations and compositions with advanced image processing. Additionally, it may be difficult to visualize small ligands that bind at sub-stoichiometric levels. Volta phase plates (VPP) introduce a phase shift in the contrast transfer and drastically increase the contrast of the recorded low-dose cryo-EM images while preserving high frequency information. Here we present a comparative study to address the behavior of different data sets during image processing and quantify important parameters during structure refinement. The automated data collection was done from the same human ribosome sample either as a conventional defocus range dataset or with a Volta phase plate close to focus (cfVPP) or with a small defocus (dfVPP). The analysis of image processing parameters shows that dfVPP data behave more robustly during cryo-EM structure refinement because particle alignments, Euler angle assignments and 2D & 3D classifications behave more stably and converge faster. In particular, fewer particle images are required to reach the same resolution in the 3D reconstructions. Finally, we find that defocus range data collection is also applicable to VPP. This study shows that data processing and cryo-EM map interpretation, including atomic model refinement, are facilitated significantly by performing VPP cryo-EM, which will have an important impact on structural biology. Copyright © 2018 Elsevier Inc. All rights reserved.
Real-time optical image processing techniques
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang
1988-01-01
Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse frequency modulated halftone screens and the modification of micro-channel spatial light modulators (MSLMs). Micro-channel spatial light modulators are modified via the Fabry-Perot method to achieve the high gamma operation required for non-linear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness of the thresholding and also showed the need for higher SBP for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low frequency and low bias mode. Cascading of two LCLVs should also provide enough gamma for nonlinear processing. In this case, the SBP of the LCLV is sufficient but the uniformity of the LCLV needs improvement. Further applications pursued include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for dynamic range compression of an input image in real-time by using GaAs photorefractive crystals is reported. Finally, a new architecture for nonlinear multiple-sensory neural processing has been suggested.
Ultrasound imaging based on nonlinear pressure field properties
NASA Astrophysics Data System (ADS)
Bouakaz, Ayache; Frinking, Peter J. A.; de Jong, Nico
2000-07-01
Ultrasound image quality has experienced a significant improvement over the past years with the utilization of harmonic frequencies. This brings the need to understand the physical processes involved in the propagation of finite amplitude sound beams, and the issues for redesigning and optimizing the phased array transducers. New arrays with higher imaging performances are essential for tissue imaging and contrast imaging as well. This study presents measurements and simulations on a 4.6 MHz square transducer. The numerical scheme used solves the KZK equation in the time domain. Comparison of measured and computed data showed good agreement for low and high excitation levels. In a similar way, a numerical simulation was performed on a linear array with five elements. The simulation showed that the second harmonic beam is narrower than the fundamental with less energy in the near field. In addition, the grating lobes are significantly lower. Accordingly, selective harmonic imaging shows less near field artifacts and will lower the clutter, resulting in much cleaner images.
Double-image storage optimized by cross-phase modulation in a cold atomic system
NASA Astrophysics Data System (ADS)
Qiu, Tianhui; Xie, Min
2017-09-01
A tripod-type cold atomic system driven by double-probe fields and a coupling field is explored to store double images based on electromagnetically induced transparency (EIT). During the storage time, an intensity-dependent signal field is further applied to extend the system to a fifth level, and cross-phase modulation is then introduced to coherently manipulate the stored images. Both analytical and numerical results clearly demonstrate that a tunable phase shift with low nonlinear absorption can be imprinted on the stored images, which can effectively improve the visibility of the reconstructed images. The phase shift and the energy retrieval rate of the probe fields are immune to the coupling intensity and the atomic optical density. The proposed scheme can easily be extended to the simultaneous storage of multiple images. This work may be exploited toward EIT-based multiple-image storage devices for all-optical classical and quantum information processing.
Frequency analysis of gaze points with CT colonography interpretation using eye gaze tracking system
NASA Astrophysics Data System (ADS)
Tsutsumi, Shoko; Tamashiro, Wataru; Sato, Mitsuru; Okajima, Mika; Ogura, Toshihiro; Doi, Kunio
2017-03-01
It is important to investigate the eye tracking gaze points of experts in order to assist trainees in understanding the image interpretation process. We investigated gaze points during the CT colonography (CTC) interpretation process and analyzed the difference in gaze points between experts and trainees. In this study, we attempted to understand how trainees can reach the level achieved by experts in viewing CTC. We used an eye gaze point sensing system, Gazefineder (JVCKENWOOD Corporation, Tokyo, Japan), which can detect the pupil point and corneal reflection point by dark pupil eye tracking. This system can provide gaze point images and Excel file data. The subjects were radiological technologists either experienced or inexperienced in reading CTC. We performed observer studies in reading virtual pathology images and examined the observers' image interpretation process using gaze point data. Furthermore, we performed a frequency analysis of the eye tracking data using the Fast Fourier Transform (FFT). We were able to understand the difference in gaze points between experts and trainees by use of the frequency analysis. The trainee's result contained a large amount of both high-frequency and low-frequency components. In contrast, both components for the expert were relatively low. Regarding the amount of eye movement in each 0.02-second interval, we found that the expert tended to interpret images slowly and calmly. On the other hand, the trainee moved their eyes quickly and scanned wide areas. We can assess the difference in gaze points on CTC between experts and trainees by use of the eye gaze point sensing system and the frequency analysis. The potential improvements in CTC interpretation for trainees can be evaluated by using gaze point data.
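A minimal sketch of the kind of frequency analysis described above, assuming gaze coordinates sampled every 0.02 s: compute the eye-movement amplitude per interval and split its FFT spectrum into low- and high-frequency parts. The sampling rate, synthetic gaze traces and the 2 Hz split are illustrative assumptions.

```python
# Hedged sketch: frequency analysis of gaze-point displacements sampled at 50 Hz.
import numpy as np

fs = 50.0                          # samples per second (0.02 s interval)
t = np.arange(0, 60, 1 / fs)       # one minute of gaze data
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=t.size))   # synthetic horizontal gaze trace
y = np.cumsum(rng.normal(size=t.size))   # synthetic vertical gaze trace

step = np.hypot(np.diff(x), np.diff(y))  # movement amplitude per 0.02 s
spectrum = np.abs(np.fft.rfft(step - step.mean()))
freqs = np.fft.rfftfreq(step.size, d=1 / fs)

low = spectrum[freqs < 2.0].sum()        # illustrative 2 Hz split
high = spectrum[freqs >= 2.0].sum()
print(f"low-frequency power {low:.1f}, high-frequency power {high:.1f}")
```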
A Survey of Memristive Threshold Logic Circuits.
Maan, Akshay Kumar; Jayadevi, Deepthi Anirudhan; James, Alex Pappachen
2017-08-01
In this paper, we review different memristive threshold logic (MTL) circuits that are inspired from the synaptic action of the flow of neurotransmitters in the biological brain. The brainlike generalization ability and the area minimization of these threshold logic circuits aim toward crossing Moore's law boundaries at the device, circuit, and system levels. Fast switching memory, signal processing, control systems, programmable logic, image processing, reconfigurable computing, and pattern recognition are identified as some of the potential applications of MTL systems. The physical realization of nanoscale devices with memristive behavior from materials such as TiO2, ferroelectrics, silicon, and polymers has accelerated research effort in these application areas, inspiring the scientific community to pursue the design of high-speed, low-cost, low-power, and high-density neuromorphic architectures.
Towards Portable Large-Scale Image Processing with High-Performance Computing.
Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A
2018-05-03
High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The system was initially deployed natively (i.e., installed directly on bare-metal servers) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein we describe recent innovations using containerization techniques with XNAT/DAX, which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
Affective ERP Processing in a Visual Oddball Task: Arousal, Valence, and Gender
Rozenkrants, Bella; Polich, John
2008-01-01
Objective To assess affective event-related brain potentials (ERPs) using visual pictures that were highly distinct on arousal level/valence category ratings and a response task. Methods Images from the International Affective Pictures System (IAPS) were selected to obtain distinct affective arousal (low, high) and valence (negative, positive) rating levels. The pictures were used as target stimuli in an oddball paradigm, with a visual pattern as the standard stimulus. Participants were instructed to press a button whenever a picture occurred and to ignore the standard. Task performance and response time did not differ across conditions. Results High-arousal compared to low-arousal stimuli produced larger amplitudes for the N2, P3, early slow wave, and late slow wave components. Valence amplitude effects were weak overall and originated primarily from the later waveform components and interactions with electrode position. Gender differences were negligible. Conclusion The findings suggest that arousal level is the primary determinant of affective oddball processing, and valence minimally influences ERP amplitude. Significance Affective processing engages selective attentional mechanisms that are primarily sensitive to the arousal properties of emotional stimuli. The application and nature of task demands are important considerations for interpreting these effects. PMID:18783987
Star Formation in Dwarf-Dwarf Mergers: Fueling Hierarchical Assembly
NASA Astrophysics Data System (ADS)
Stierwalt, Sabrina; Johnson, K. E.; Kallivayalil, N.; Patton, D. R.; Putman, M. E.; Besla, G.; Geha, M. C.
2014-01-01
We present early results from the first systematic study of a sample of isolated interacting dwarf pairs and the mechanisms governing their star formation. Low mass dwarf galaxies are ubiquitous in the local universe, yet the efficiency of gas removal and the enhancement of star formation in dwarfs via pre-processing (i.e. dwarf-dwarf interactions occurring before the accretion by a massive host) are currently unconstrained. Studies of Local Group dwarfs credit stochastic internal processes for their complicated star formation histories, but a few intriguing examples suggest interactions among dwarfs may produce enhanced star formation. We combine archival UV imaging from GALEX with deep optical broad- and narrow-band (Halpha) imaging taken with the pre- One Degree Imager (pODI) on the WIYN 3.5-m telescope and with the 2.3-m Bok telescope at Steward Observatory to confirm the presence of stellar bridges and tidal tails and to determine whether dwarf-dwarf interactions alone can trigger significant levels of star formation. We investigate star formation rates and global galaxy colors as a function of dwarf pair separation (i.e. the dwarf merger sequence) and dwarf-dwarf mass ratio. This project is a precursor to an ongoing effort to obtain high spatial resolution HI imaging to assess the importance of sequential triggering caused by dwarf-dwarf interactions and the subsequent effect on the more massive hosts that later accrete the low mass systems.
Perceptual Load-Dependent Neural Correlates of Distractor Interference Inhibition
Xu, Jiansong; Monterosso, John; Kober, Hedy; Balodis, Iris M.; Potenza, Marc N.
2011-01-01
Background The load theory of selective attention hypothesizes that distractor interference is suppressed after perceptual processing (i.e., in the later stage of central processing) at low perceptual load of the central task, but in the early stage of perceptual processing at high perceptual load. Consistently, studies on the neural correlates of attention have found a smaller distractor-related activation in the sensory cortex at high relative to low perceptual load. However, it is not clear whether the distractor-related activation in brain regions linked to later stages of central processing (e.g., in the frontostriatal circuits) is also smaller at high rather than low perceptual load, as might be predicted based on the load theory. Methodology/Principal Findings We studied 24 healthy participants using functional magnetic resonance imaging (fMRI) during a visual target identification task with two perceptual loads (low vs. high). Participants showed distractor-related increases in activation in the midbrain, striatum, occipital and medial and lateral prefrontal cortices at low load, but distractor-related decreases in activation in the midbrain ventral tegmental area and substantia nigra (VTA/SN), striatum, thalamus, and extensive sensory cortices at high load. Conclusions Multiple levels of central processing involving midbrain and frontostriatal circuits participate in suppressing distractor interference at either low or high perceptual load. For suppressing distractor interference, the processing of sensory inputs in both early and late stages of central processing are enhanced at low load but inhibited at high load. PMID:21267080
Research-grade CMOS image sensors for remote sensing applications
NASA Astrophysics Data System (ADS)
Saint-Pe, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Martin-Gonthier, Philippe; Corbiere, Franck; Belliot, Pierre; Estribeau, Magali
2004-11-01
Imaging detectors are key elements for optical instruments and sensors on board space missions dedicated to Earth observation (high resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market has long been dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs for consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA and ESA). Throughout the 90s, thanks to their steadily improving performance, CIS started to be successfully used for more and more demanding space applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made possible the manufacturing of research-grade CIS that are able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this paper will present the existing and foreseen ways to reach high-level electro-optical performance for CIS. The development and performance of CIS prototypes built using an imaging CMOS process will be presented in the corresponding section.
Monitoring Pest Insect Traps by Means of Low-Power Image Sensor Technologies
López, Otoniel; Rach, Miguel Martinez; Migallon, Hector; Malumbres, Manuel P.; Bonastre, Alberto; Serrano, Juan J.
2012-01-01
Monitoring pest insect populations is currently a key issue in agriculture and forestry protection. At the farm level, human operators typically must perform periodical surveys of the traps disseminated through the field. This is a labor-, time- and cost-consuming activity, in particular for large plantations or large forestry areas, so it would be of great advantage to have an affordable system capable of doing this task automatically in a more accurate and efficient way. This paper proposes an autonomous monitoring system based on a low-cost image sensor that is able to capture and send images of the trap contents to a remote control station with the periodicity demanded by the trapping application. Our autonomous monitoring system will be able to cover large areas with very low energy consumption. This is the key point of our study, since the operational life of the overall monitoring system should extend to months of continuous operation without any kind of maintenance (i.e., battery replacement). The images delivered by image sensors would be time-stamped and processed in the control station to get the number of individuals found at each trap. All the information would be conveniently stored at the control station, and accessible via the Internet by means of the network services available at the control station (WiFi, WiMax, 3G/4G, etc.). PMID:23202232
Monitoring pest insect traps by means of low-power image sensor technologies.
López, Otoniel; Rach, Miguel Martinez; Migallon, Hector; Malumbres, Manuel P; Bonastre, Alberto; Serrano, Juan J
2012-11-13
Monitoring pest insect populations is currently a key issue in agriculture and forestry protection. At the farm level, human operators typically must perform periodical surveys of the traps disseminated through the field. This is a labor-, time- and cost-consuming activity, in particular for large plantations or large forestry areas, so it would be of great advantage to have an affordable system capable of doing this task automatically in a more accurate and efficient way. This paper proposes an autonomous monitoring system based on a low-cost image sensor that is able to capture and send images of the trap contents to a remote control station with the periodicity demanded by the trapping application. Our autonomous monitoring system will be able to cover large areas with very low energy consumption. This is the key point of our study, since the operational life of the overall monitoring system should extend to months of continuous operation without any kind of maintenance (i.e., battery replacement). The images delivered by image sensors would be time-stamped and processed in the control station to get the number of individuals found at each trap. All the information would be conveniently stored at the control station, and accessible via the Internet by means of the network services available at the control station (WiFi, WiMax, 3G/4G, etc.).
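The control-station step of counting individuals per trap image could, under simple assumptions, be done by thresholding and labelling connected components, as in the hedged sketch below; the threshold, minimum blob size and synthetic trap image are illustrative, not the authors' processing.

```python
# Hedged sketch: count dark blobs in a trap image as individuals.
import numpy as np
from scipy import ndimage

def count_insects(image, threshold=80, min_area=20):
    """image: 2-D grayscale array; insects assumed darker than the trap."""
    mask = image < threshold
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int((areas >= min_area).sum())

rng = np.random.default_rng(2)
trap = np.full((240, 320), 200, dtype=np.uint8)
trap[50:60, 50:58] = 30      # synthetic insect
trap[120:128, 200:210] = 25  # synthetic insect
print(count_insects(trap))   # -> 2
```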
Quantitative Aspects of Single Molecule Microscopy
Ober, Raimund J.; Tahmasbi, Amir; Ram, Sripad; Lin, Zhiping; Ward, E. Sally
2015-01-01
Single molecule microscopy is a relatively new optical microscopy technique that allows the detection of individual molecules such as proteins in a cellular context. This technique has generated significant interest among biologists, biophysicists and biochemists, as it holds the promise to provide novel insights into subcellular processes and structures that otherwise cannot be gained through traditional experimental approaches. Single molecule experiments place stringent demands on experimental and algorithmic tools due to the low signal levels and the presence of significant extraneous noise sources. Consequently, this has necessitated the use of advanced statistical signal and image processing techniques for the design and analysis of single molecule experiments. In this tutorial paper, we provide an overview of single molecule microscopy from early works to current applications and challenges. Specific emphasis will be on the quantitative aspects of this imaging modality, in particular single molecule localization and resolvability, which will be discussed from an information theoretic perspective. We review the stochastic framework for image formation, different types of estimation techniques and expressions for the Fisher information matrix. We also discuss several open problems in the field that demand highly non-trivial signal processing algorithms. PMID:26167102
Integration of heterogeneous features for remote sensing scene classification
NASA Astrophysics Data System (ADS)
Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang
2018-01-01
Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) could provide various properties for RS images, and then propose a heterogeneous feature framework to extract and integrate heterogeneous features with different types for RS scene classification. The proposed method is composed of three modules (1) heterogeneous features extraction, where three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated, (2) heterogeneous features fusion, where the multiple kernel learning (MKL) is utilized to integrate the heterogeneous features, and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that the proposed method leads to good classification performance. It produces good informative features to describe the RS image scenes. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.
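As a simplified stand-in for the fusion module, the sketch below combines kernels computed on three hypothetical heterogeneous feature types with fixed weights and trains an SVM on the precomputed kernel; genuine multiple kernel learning would learn those weights, and the feature arrays, weights and kernel choice are assumptions, not the paper's DS-SURF-LLC/mean-Std-LLC/MS-CLBP pipeline.

```python
# Simplified stand-in for heterogeneous feature fusion with a fixed-weight
# kernel combination fed to an SVM with a precomputed kernel.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(3)
f_shape = rng.normal(size=(60, 32))     # e.g. SURF-like descriptors (placeholder)
f_spectral = rng.normal(size=(60, 16))  # e.g. mean/std spectral features (placeholder)
f_texture = rng.normal(size=(60, 54))   # e.g. LBP-like texture features (placeholder)
y = rng.integers(0, 2, size=60)

weights = [0.4, 0.3, 0.3]               # fixed combination weights (assumed)
K = sum(w * rbf_kernel(f) for w, f in
        zip(weights, (f_shape, f_spectral, f_texture)))

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                  # training accuracy of the sketch
```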
Potential medical applications of TAE
NASA Technical Reports Server (NTRS)
Fahy, J. Ben; Kaucic, Robert; Kim, Yongmin
1986-01-01
In cooperation with scientists in the University of Washington Medical School, a microcomputer-based image processing system for quantitative microscopy, called DMD1 (Digital Microdensitometer 1) was constructed. In order to make DMD1 transportable to different hosts and image processors, we have been investigating the possibility of rewriting the lower level portions of DMD1 software using Transportable Applications Executive (TAE) libraries and subsystems. If successful, we hope to produce a newer version of DMD1, called DMD2, running on an IBM PC/AT under the SCO XENIX System 5 operating system, using any of seven target image processors available in our laboratory. Following this implementation, copies of the system will be transferred to other laboratories with biomedical imaging applications. By integrating those applications into DMD2, we hope to eventually expand our system into a low-cost general purpose biomedical imaging workstation. This workstation will be useful not only as a self-contained instrument for clinical or research applications, but also as part of a large scale Digital Imaging Network and Picture Archiving and Communication System, (DIN/PACS). Widespread application of these TAE-based image processing and analysis systems should facilitate software exchange and scientific cooperation not only within the medical community, but between the medical and remote sensing communities as well.
Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S
2012-02-23
We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on the other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massive parallel processing implementation. Thus for an algorithm which has a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation and display window and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented, to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame.
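Because the listed operations are pixel independent, each can be written as an element-wise array expression, which is what makes a GPU implementation attractive. The NumPy sketch below illustrates flat-field correction, a recursive temporal filter and image subtraction under assumed gain/offset frames and filter weight; a GPU version could, for example, swap NumPy for CuPy, which is an assumption rather than the CAPIDS implementation.

```python
# Hedged sketch of pixel-independent operations: flat-field correction,
# recursive temporal filtering, and image subtraction.
import numpy as np

def flat_field(raw, dark, flat):
    gain = flat.mean() / np.maximum(flat - dark, 1e-6)
    return (raw - dark) * gain

def temporal_filter(prev_filtered, new_frame, alpha=0.25):
    # simple recursive (IIR) temporal filter, one multiply-add per pixel
    return alpha * new_frame + (1.0 - alpha) * prev_filtered

rng = np.random.default_rng(4)
dark = rng.normal(10, 1, (512, 512))
flat = rng.normal(200, 5, (512, 512))
frame1 = rng.normal(120, 10, (512, 512))
frame2 = rng.normal(121, 10, (512, 512))
corr1 = flat_field(frame1, dark, flat)
corr2 = flat_field(frame2, dark, flat)
filtered = temporal_filter(corr1, corr2)   # corr1 initialises the filter state
subtracted = corr2 - filtered              # e.g. a DSA-style subtraction
print(round(float(subtracted.std()), 2))
```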
NASA Astrophysics Data System (ADS)
Sirakov, Nikolay M.; Suh, Sang; Attardo, Salvatore
2011-06-01
This paper presents a further step of research toward the development of a quick and accurate weapons identification methodology and system. A basic stage of this methodology is the automatic acquisition and updating of a weapons ontology as a source for deriving high-level weapons information. The present paper outlines the main ideas used to approach the goal. In the next stage, a clustering approach is suggested on the basis of a hierarchy of concepts. An inherent slot of every node of the proposed ontology is a low-level feature vector (LLFV), which facilitates the search through the ontology. Part of the LLFV is the information about the object's parts. To partition an object, a new approach is presented that is capable of defining the object's concavities, which are used to mark the end points of weapon parts, considered as convexities. Further, an existing matching approach is optimized to determine whether an ontological object matches the objects from an input image. Objects from derived ontological clusters will be considered for the matching process. Image resizing is studied and applied to decrease the runtime of the matching approach and to investigate its rotational and scaling invariance. A set of experiments is performed to validate the theoretical concepts.
Surface conversion techniques for low energy neutral atom imagers
NASA Technical Reports Server (NTRS)
Quinn, J. M.
1995-01-01
This investigation has focused on development of key technology elements for low energy neutral atom imaging. More specifically, we have investigated the conversion of low energy neutral atoms to negatively charged ions upon reflection from specially prepared surfaces. This 'surface conversion' technique appears to offer a unique capability of detecting, and thus imaging, neutral atoms at energies of 0.01 - 1 keV with high enough efficiencies to make practical its application to low energy neutral atom imaging in space. Such imaging offers the opportunity to obtain the first instantaneous global maps of macroscopic plasma features and their temporal variation. Through previous in situ plasma measurements, we have a statistical picture of large scale morphology and local measurements of dynamic processes. However, with in situ techniques it is impossible to characterize or understand many of the global plasma transport and energization processes. A series of global plasma images would greatly advance our understanding of these processes and would provide the context for interpreting previous and future in situ measurements. Fast neutral atoms, created from ions that are neutralized in collisions with exospheric neutrals, offer the means for remotely imaging plasma populations. Energy and mass analysis of these neutrals provides critical information about the source plasma distribution. The flux of neutral atoms available for imaging depends upon a convolution of the ambient plasma distribution with the charge exchange cross section for the background neutral population. Some of the highest signals are at relatively low energies (well below 1 keV). This energy range also includes some of the most important plasma populations to be imaged, for example the base of the cleft ion fountain.
Wang, An-Li; Shi, Zhenhao; Fairchild, Victoria P; Aronowitz, Catherine A; Langleben, Daniel D
2018-04-26
Graphic warning labels (GWLs) on cigarette packages, that combine textual warnings with emotionally salient images depicting the adverse health consequences of smoking, have been adopted in most European countries. In the US, the courts deemed the evidence justifying the inclusion of emotionally salient images in GWLs insufficient and put the implementation on hold. We conducted a controlled experimental study examining the effect of emotional salience of GWL's images on the recall of their text component. Seventy-three non-treatment-seeking daily smokers received cigarette packs carrying GWLs for a period of 4 weeks. Participants were randomly assigned to receive packs with GWLs previously rated as eliciting high or low level of emotional reaction (ER). The two conditions differed in respect to images but used the same textual warning statements. Participants' recognition of GWL images and statements were tested separately at baseline and again after the 4-week repetitive exposure. Textual warning statements were recognized more accurately when paired with high ER images than when paired with low ER images, both at baseline and after daily exposure to GWLs over a 4-week period. The results suggest that emotional salience of GWLs facilitates cognitive processing of the textual warnings, resulting in better remembering of the information about the health hazards of smoking. Thus, high emotional salience of the pictorial component of GWLs is essential for their overall effectiveness.
Image processing system design for microcantilever-based optical readout infrared arrays
NASA Astrophysics Data System (ADS)
Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu
2012-12-01
Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory predicts that the technology offers high thermal detection sensitivity. It therefore has very broad application prospects in the field of high-performance infrared detection. The paper mainly focuses on an image capturing and processing system for this new type of optical-readout uncooled infrared imaging technology based on MEMS. The image capturing and processing system consists of software and hardware. We build our image processing core hardware platform around TI's high-performance DSP chip, the TMS320DM642, and then design our image capture board around the MT9P031, Micron's high-frame-rate, low-power CMOS sensor. Finally, we use Intel's LXT971A network transceiver device to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design our video capture driver based on TI's class mini-driver and the network output program based on the NDK kit, for image capturing, processing, and transmission. The experiment shows that the system has the advantages of high capturing resolution and fast processing speed. The speed of the network transmission is up to 100 Mbps.
Miss-distance indicator for tank main guns
NASA Astrophysics Data System (ADS)
Bornstein, Jonathan A.; Hillis, David B.
1996-06-01
Tank main gun systems must possess extremely high levels of accuracy to perform successfully in battle. Under some circumstances, the first round fired in an engagement may miss the intended target, and it becomes necessary to rapidly correct fire. A breadboard automatic miss-distance indicator system was previously developed to assist in this process. The system, which would be mounted on a 'wingman' tank, consists of a charged-coupled device (CCD) camera and computer-based image-processing system, coupled with a separate infrared sensor to detect muzzle flash. For the system to be successfully employed with current generation tanks, it must be reliable, be relatively low cost, and respond rapidly maintaining current firing rates. Recently, the original indicator system was developed further in an effort to assist in achieving these goals. Efforts have focused primarily upon enhanced image-processing algorithms, both to improve system reliability and to reduce processing requirements. Intelligent application of newly refined trajectory models has permitted examination of reduced areas of interest and enhanced rejection of false alarms, significantly improving system performance.
A CANDLE for a deeper in vivo insight
Coupé, Pierrick; Munz, Martin; Manjón, Jose V; Ruthazer, Edward S; Louis Collins, D.
2012-01-01
A new Collaborative Approach for eNhanced Denoising under Low-light Excitation (CANDLE) is introduced for the processing of 3D laser scanning multiphoton microscopy images. CANDLE is designed to be robust for low signal-to-noise ratio (SNR) conditions typically encountered when imaging deep in scattering biological specimens. Based on an optimized non-local means filter involving the comparison of filtered patches, CANDLE locally adapts the amount of smoothing in order to deal with the noise inhomogeneity inherent to laser scanning fluorescence microscopy images. An extensive validation on synthetic data, images acquired on microspheres and in vivo images is presented. These experiments show that the CANDLE filter obtained competitive results compared to a state-of-the-art method and a locally adaptive optimized nonlocal means filter, especially under low SNR conditions (PSNR<8dB). Finally, the deeper imaging capabilities enabled by the proposed filter are demonstrated on deep tissue in vivo images of neurons and fine axonal processes in the Xenopus tadpole brain. PMID:22341767
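For readers who want a feel for patch-based denoising of photon-limited images, the sketch below applies a generic non-local means filter after an Anscombe-style variance-stabilising transform. This is a simplified stand-in, not CANDLE itself, which additionally compares pre-filtered patches and adapts the smoothing locally; the parameters and synthetic image are assumptions.

```python
# Hedged sketch: generic non-local means denoising of a low-SNR image after a
# variance-stabilising square-root transform (not the CANDLE algorithm).
import numpy as np
from skimage.restoration import denoise_nl_means

rng = np.random.default_rng(14)
clean = np.zeros((64, 64))
clean[24:40, 24:40] = 20.0                                   # dim structure
noisy = rng.poisson(clean + 2.0).astype(np.float64)          # photon-limited image

stabilised = 2.0 * np.sqrt(noisy + 3.0 / 8.0)                # Anscombe transform
denoised = denoise_nl_means(stabilised, patch_size=5, patch_distance=7, h=1.2)
restored = (denoised / 2.0) ** 2 - 3.0 / 8.0                 # approximate inverse
print(np.abs(restored - clean).mean())
```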
MRI Superresolution Using Self-Similarity and Image Priors
Manjón, José V.; Coupé, Pierrick; Buades, Antonio; Collins, D. Louis; Robles, Montserrat
2010-01-01
In typical clinical Magnetic Resonance Imaging settings, both low- and high-resolution images of different types are routinely acquired. In some cases, the acquired low-resolution images have to be upsampled to match other high-resolution images for subsequent analysis or postprocessing such as registration or multimodal segmentation. However, classical interpolation techniques are not able to recover the high-frequency information lost during the acquisition process. In the present paper, a new superresolution method is proposed to reconstruct high-resolution images from the low-resolution ones using information from coplanar high resolution images acquired of the same subject. Furthermore, the reconstruction process is constrained to be physically plausible with the MR acquisition model, which allows a meaningful interpretation of the results. Experiments on synthetic and real data are supplied to show the effectiveness of the proposed approach. A comparison with classical state-of-the-art interpolation techniques is presented to demonstrate the improved performance of the proposed methodology. PMID:21197094
Pandey, Anil Kumar; Sharma, Param Dev; Dheer, Pankaj; Parida, Girish Kumar; Goyal, Harish; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
99mTc-methylene diphosphonate (99mTc-MDP) bone scan images have a limited number of counts per pixel and hence inferior image quality compared to X-rays. Theoretically, the global histogram equalization (GHE) technique can improve the contrast of a given image, though the practical benefits of doing so have gained only limited acceptance. In this study, we have investigated the effect of the GHE technique on 99mTc-MDP bone scan images. A set of 89 low contrast 99mTc-MDP whole-body bone scan images was included in this study. These images were acquired with parallel hole collimation on a Symbia E gamma camera. The images were then processed with the histogram equalization technique. The image quality of the input and processed images was reviewed by two nuclear medicine physicians on a 5-point scale, where a score of 1 denotes very poor and 5 the best image quality. A statistical test was applied to find the significance of the difference between the mean scores assigned to input and processed images. This technique improves the contrast of the images; however, oversaturation was noticed in the processed images. Student's t-test was applied, and a statistically significant difference in the input and processed image quality was found at P < 0.001 (with α = 0.05). However, further improvement in image quality is needed as per the requirements of nuclear medicine physicians. GHE techniques can be used on low contrast bone scan images. In some of the cases, a histogram equalization technique in combination with some other postprocessing technique is useful.
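Global histogram equalization itself is a short computation: build the grey-level histogram, form the cumulative distribution and use it as the intensity mapping. The sketch below shows this on a synthetic low-contrast image standing in for a bone scan; the number of grey levels and the test image are assumptions.

```python
# Minimal sketch of global histogram equalization (GHE) on an 8-bit image.
import numpy as np

def global_hist_eq(img, levels=256):
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())    # normalise to [0, 1]
    mapping = np.round(cdf * (levels - 1)).astype(np.uint8)
    return mapping[img]                                  # apply grey-level mapping

rng = np.random.default_rng(5)
low_contrast = rng.integers(90, 130, size=(256, 256)).astype(np.uint8)
equalized = global_hist_eq(low_contrast)
print(low_contrast.std(), equalized.std())   # contrast stretched after GHE
```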
Symmetric Phase Only Filtering for Improved DPIV Data Processing
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
2006-01-01
The standard approach in Digital Particle Image Velocimetry (DPIV) data processing is to use Fast Fourier Transforms to obtain the cross-correlation of two single exposure subregions, where the location of the cross-correlation peak is representative of the most probable particle displacement across the subregion. This standard DPIV processing technique is analogous to Matched Spatial Filtering, a technique commonly used in optical correlators to perform the cross-correlation operation. Phase only filtering is a well known variation of Matched Spatial Filtering, which when used to process DPIV image data yields correlation peaks which are narrower and up to an order of magnitude larger than those obtained using traditional DPIV processing. In addition to possessing desirable correlation plane features, phase only filters also provide superior performance in the presence of DC noise in the correlation subregion. When DPIV image subregions contaminated with surface flare light or high background noise levels are processed using phase only filters, the correlation peak pertaining only to the particle displacement is readily detected above any signal stemming from the DC objects. Tedious image masking or background image subtraction is not required. Both theoretical and experimental analyses of the signal-to-noise ratio performance of the filter functions are presented. In addition, a new Symmetric Phase Only Filtering (SPOF) technique, which is a variation on the traditional phase only filtering technique, is described and demonstrated. The SPOF technique exceeds the performance of the traditionally accepted phase only filtering techniques and is easily implemented in standard DPIV FFT based correlation processing with no significant computational performance penalty. An "Automatic" SPOF algorithm is presented which determines when the SPOF is able to provide better signal to noise results than traditional PIV processing. The SPOF based optical correlation processing approach is presented as a new paradigm for more robust cross-correlation processing of low signal-to-noise ratio DPIV image data.
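The family of filters discussed above can be illustrated with a plain phase-only (phase correlation) version of the FFT-based cross-correlation; the SPOF itself applies a different, symmetric normalisation, so the sketch below should be read as the generic idea rather than the paper's filter. The synthetic particle images and imposed shift are placeholders.

```python
# Hedged sketch: FFT cross-correlation of two subregions with phase-only
# normalisation, returning the displacement of the correlation peak.
import numpy as np

def phase_only_correlation(a, b, eps=1e-12):
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # convert the peak location to a signed displacement
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return shift, corr

rng = np.random.default_rng(6)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=(3, -5), axis=(0, 1))   # known displacement
shift, _ = phase_only_correlation(frame2, frame1)
print(shift)   # expected [3, -5]
```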
Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction
Jian, Y; Planeta, B; Carson, R E
2016-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR. PMID:25479254
Evaluation of bias and variance in low-count OSEM list mode reconstruction
NASA Astrophysics Data System (ADS)
Jian, Y.; Planeta, B.; Carson, R. E.
2015-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR.
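The ROI-based bias and noise evaluation can be summarised in a few lines: compare replicate low-count reconstructions against a reference inside an ROI at a matched effective iteration number. The sketch below is a hypothetical version with synthetic arrays; the ROI definition, replicate count and bias magnitude are illustrative, not the study's data.

```python
# Hedged sketch: percentage bias and between-replicate variability in an ROI.
import numpy as np

def roi_bias_and_noise(replicates, reference, roi_mask):
    """replicates: (R, H, W) reconstructions; reference: (H, W); roi_mask: bool (H, W)."""
    roi_means = replicates[:, roi_mask].mean(axis=1)          # one value per replicate
    ref_mean = reference[roi_mask].mean()
    bias_pct = 100.0 * (roi_means.mean() - ref_mean) / ref_mean
    cov_pct = 100.0 * roi_means.std(ddof=1) / roi_means.mean()
    return bias_pct, cov_pct

rng = np.random.default_rng(7)
reference = np.full((64, 64), 10.0)
replicates = reference + rng.normal(0, 0.5, size=(20, 64, 64)) - 0.2  # slight negative bias
roi = np.zeros((64, 64), dtype=bool)
roi[20:30, 20:30] = True
print(roi_bias_and_noise(replicates, reference, roi))
```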
Miyamoto, Naoki; Ishikawa, Masayori; Sutherland, Kenneth; Suzuki, Ryusuke; Matsuura, Taeko; Toramatsu, Chie; Takao, Seishin; Nihongi, Hideaki; Shimizu, Shinichi; Umegaki, Kikuo; Shirato, Hiroki
2015-01-01
In the real-time tumor-tracking radiotherapy system, a surrogate fiducial marker inserted in or near the tumor is detected by fluoroscopy to realize respiratory-gated radiotherapy. The imaging dose caused by fluoroscopy should be minimized. In this work, an image processing technique is proposed for tracing a moving marker in low-dose imaging. The proposed tracking technique is a combination of a motion-compensated recursive filter and template pattern matching. The proposed image filter can reduce motion artifacts resulting from the recursive process based on the determination of the region of interest for the next frame according to the current marker position in the fluoroscopic images. The effectiveness of the proposed technique and the expected clinical benefit were examined by phantom experimental studies with actual tumor trajectories generated from clinical patient data. It was demonstrated that the marker motion could be traced in low-dose imaging by applying the proposed algorithm with acceptable registration error and high pattern recognition score in all trajectories, although some trajectories were not able to be tracked with the conventional spatial filters or without image filters. The positional accuracy is expected to be kept within ±2 mm. The total computation time required to determine the marker position is a few milliseconds. The proposed image processing technique is applicable for imaging dose reduction. PMID:25129556
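The two ingredients named above, a recursive temporal filter restricted to a region of interest around the last marker position and template matching inside that region, can be sketched as follows. The window size, filter weight, normalised cross-correlation matcher and synthetic marker are assumptions, not the authors' implementation.

```python
# Hedged sketch: ROI-limited recursive filtering followed by template matching.
import numpy as np

def match_template(roi, template):
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best, best_pos = -np.inf, (0, 0)
    for r in range(roi.shape[0] - th + 1):
        for c in range(roi.shape[1] - tw + 1):
            w = roi[r:r + th, c:c + tw]
            score = np.mean(t * (w - w.mean()) / (w.std() + 1e-9))  # normalised correlation
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

def track(frame, prev_filtered, marker_pos, template, half=24, alpha=0.3):
    r0, c0 = marker_pos
    sl = (slice(max(r0 - half, 0), r0 + half), slice(max(c0 - half, 0), c0 + half))
    roi = alpha * frame[sl] + (1 - alpha) * prev_filtered[sl]   # recursive filter on ROI only
    (dr, dc), score = match_template(roi, template)
    new_pos = (sl[0].start + dr + template.shape[0] // 2,
               sl[1].start + dc + template.shape[1] // 2)
    filtered = prev_filtered.copy()
    filtered[sl] = roi
    return new_pos, score, filtered

rng = np.random.default_rng(8)
frame = rng.normal(100, 5, (200, 200))
template = np.full((9, 9), 100.0)
template[3:6, 3:6] = 200.0                 # bright synthetic marker
frame[95:104, 95:104] = template
pos, score, _ = track(frame, frame.copy(), (100, 100), template)
print(pos, round(score, 2))                # marker found near (99, 99)
```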
Using deep learning for content-based medical image retrieval
NASA Astrophysics Data System (ADS)
Sun, Qinpei; Yang, Yuanyuan; Sun, Jianyong; Yang, Zhiming; Zhang, Jianguo
2017-03-01
Content-based medical image retrieval (CBMIR) has been a highly active research area for the past few years. The retrieval performance of a CBMIR system crucially depends on the feature representation, which has been extensively studied by researchers for decades. Although a variety of techniques have been proposed, it remains one of the most challenging problems in current CBMIR research, which is mainly due to the well-known "semantic gap" issue that exists between low-level image pixels captured by machines and high-level semantic concepts perceived by humans [1]. Recent years have witnessed some important advances of new techniques in machine learning. One important breakthrough technique is known as "deep learning". Unlike conventional machine learning methods that often use "shallow" architectures, deep learning mimics the human brain, which is organized in a deep architecture and processes information through multiple stages of transformation and representation. This means that we do not need to spend enormous effort to extract features manually. In this presentation, we propose a novel framework which uses deep learning for medical image retrieval to improve the accuracy and speed of CBMIR in an integrated RIS/PACS.
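A minimal sketch of deep-feature retrieval, assuming a pretrained CNN from torchvision (version 0.13 or later for the weights API) used as a fixed feature extractor and cosine similarity for ranking; the network choice, placeholder images and absence of any RIS/PACS integration are assumptions rather than the proposed framework.

```python
# Hedged sketch: rank database images by cosine similarity of CNN features.
import torch
import torch.nn as nn
import torchvision.models as models

extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor.fc = nn.Identity()          # drop the classifier, keep 512-d features
extractor.eval()

def embed(images):
    """images: float tensor (N, 3, 224, 224), already normalised."""
    with torch.no_grad():
        f = extractor(images)
    return nn.functional.normalize(f, dim=1)

database = embed(torch.rand(16, 3, 224, 224))   # placeholder image database
query = embed(torch.rand(1, 3, 224, 224))
ranking = torch.argsort(database @ query.T, dim=0, descending=True)
print(ranking[:5].flatten().tolist())           # indices of the top-5 matches
```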
NASA Astrophysics Data System (ADS)
Zender, J.; Berghmans, D.; Bloomfield, D. S.; Cabanas Parada, C.; Dammasch, I.; De Groof, A.; D'Huys, E.; Dominique, M.; Gallagher, P.; Giordanengo, B.; Higgins, P. A.; Hochedez, J.-F.; Yalim, M. S.; Nicula, B.; Pylyser, E.; Sanchez-Duarte, L.; Schwehm, G.; Seaton, D. B.; Stanger, A.; Stegen, K.; Willems, S.
2013-08-01
The PROBA2 Science Centre (P2SC) is a small-scale science operations centre supporting the Sun observation instruments onboard PROBA2: the EUV imager Sun Watcher using APS detectors and image Processing (SWAP) and Large-Yield Radiometer (LYRA). PROBA2 is one of ESA's small, low-cost Projects for Onboard Autonomy (PROBA) and part of ESA's In-Orbit Technology Demonstration Programme. The P2SC is hosted at the Royal Observatory of Belgium, co-located with both Principal Investigator teams. The P2SC tasks cover science planning, instrument commanding, instrument monitoring, data processing, support of outreach activities, and distribution of science data products. PROBA missions aim for a high degree of autonomy at mission and system level, including the science operations centre. The autonomy and flexibility of the P2SC is reached by a set of web-based interfaces allowing the operators as well as the instrument teams to monitor quasi-continuously the status of the operations, allowing a quick reaction to solar events. In addition, several new concepts are implemented at instrument, spacecraft, and ground-segment levels allowing a high degree of flexibility in the operations of the instruments. This article explains the key concepts of the P2SC, emphasising the automation and the flexibility achieved in the commanding as well as the data-processing chain.
Xia, Jingjing; Tsui, Po-Hsiang; Liu, Hao-Li
2016-01-01
Burst-mode focused ultrasound (FUS) exposure has been shown to induce transient blood-brain barrier (BBB) opening for potential CNS drug delivery. FUS-BBB opening requires imaging guidance during the intervention, yet current imaging technology only enables postoperative outcome confirmation. In this study, we propose an approach to visualize the short-burst low-pressure focal beam distribution that can be applied in FUS-BBB opening interventions on small animals. A backscattered acoustic-wave reconstruction method, based on synchronization among focused ultrasound emission, diagnostic ultrasound receiving, and passively beamformed processing, was developed. We observed that the focal beam could be successfully visualized for in vitro FUS exposure at 0.5–2 MHz without involvement of microbubbles. The detectable level of FUS exposure was 0.467 MPa in pressure and 0.05 ms in burst length. The signal intensity (SI) of the reconstructions was linearly correlated with the FUS exposure level both in-vitro (r2 = 0.9878) and in-vivo (r2 = 0.9943), and the SI level of the reconstructed focal beam also correlated with the success and level of BBB-opening. The proposed approach provides a feasible way to perform real-time and closed-loop control of FUS-based brain drug delivery. PMID:27295608
NASA Astrophysics Data System (ADS)
Xia, Jingjing; Tsui, Po-Hsiang; Liu, Hao-Li
2016-06-01
Burst-mode focused ultrasound (FUS) exposure has been shown to induce transient blood-brain barrier (BBB) opening for potential CNS drug delivery. FUS-BBB opening requires imaging guidance during the intervention, yet current imaging technology only enables postoperative outcome confirmation. In this study, we propose an approach to visualize the short-burst low-pressure focal beam distribution that can be applied in FUS-BBB opening interventions on small animals. A backscattered acoustic-wave reconstruction method, based on synchronization among focused ultrasound emission, diagnostic ultrasound receiving, and passively beamformed processing, was developed. We observed that the focal beam could be successfully visualized for in vitro FUS exposure at 0.5-2 MHz without involvement of microbubbles. The detectable level of FUS exposure was 0.467 MPa in pressure and 0.05 ms in burst length. The signal intensity (SI) of the reconstructions was linearly correlated with the FUS exposure level both in-vitro (r2 = 0.9878) and in-vivo (r2 = 0.9943), and the SI level of the reconstructed focal beam also correlated with the success and level of BBB-opening. The proposed approach provides a feasible way to perform real-time and closed-loop control of FUS-based brain drug delivery.
Vision-based system for the control and measurement of wastewater flow rate in sewer systems.
Nguyen, L S; Schaeli, B; Sage, D; Kayal, S; Jeanbourquin, D; Barry, D A; Rossi, L
2009-01-01
Combined sewer overflows and stormwater discharges represent an important source of contamination to the environment. However, the harsh environment inside sewers and particular hydraulic conditions during rain events reduce the reliability of traditional flow measurement probes. In the following, we present and evaluate an in situ system for the monitoring of water flow in sewers based on video images. This paper focuses on the measurement of the water level based on image-processing techniques. The developed image-based water level algorithms identify the wall/water interface from sewer images and measure its position with respect to real world coordinates. A web-based user interface and a 3-tier system architecture enable the remote configuration of the cameras and the image-processing algorithms. Images acquired and processed by our system were found to reliably measure water levels and thereby to provide crucial information leading to better understand particular hydraulic behaviors. In terms of robustness and accuracy, the water level algorithm provided equal or better results compared to traditional water level probes in three different in situ configurations.
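One simple way to picture the wall/water interface measurement is to scan a vertical probe column for the strongest intensity change and convert the detected row to a level through a linear pixel-to-height calibration, as in the hedged sketch below; the probe column, calibration constants and synthetic sewer frame are assumptions, not the deployed algorithm.

```python
# Hedged sketch: water level from the wall/water interface along a probe column.
import numpy as np

def water_level(image, column, px_to_cm, level_at_row0_cm):
    profile = image[:, column].astype(np.float64)
    gradient = np.abs(np.diff(profile))
    interface_row = int(np.argmax(gradient)) + 1         # wall/water boundary row
    return level_at_row0_cm - interface_row * px_to_cm, interface_row

rng = np.random.default_rng(9)
frame = np.vstack([np.full((120, 320), 180.0),           # bright wall above the water
                   np.full((120, 320), 60.0)])           # darker water below
frame += rng.normal(0, 5, frame.shape)
level_cm, row = water_level(frame, column=160, px_to_cm=0.5, level_at_row0_cm=120.0)
print(row, level_cm)   # interface near row 120 -> roughly 60 cm of water
```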
Violante, Inês R; Ribeiro, Maria J; Cunha, Gil; Bernardino, Inês; Duarte, João V; Ramos, Fabiana; Saraiva, Jorge; Silva, Eduardo; Castelo-Branco, Miguel
2012-01-01
Neurofibromatosis type 1 (NF1) is one of the most common single gene disorders affecting the human nervous system with a high incidence of cognitive deficits, particularly visuospatial. Nevertheless, neurophysiological alterations in low-level visual processing that could be relevant to explain the cognitive phenotype are poorly understood. Here we used functional magnetic resonance imaging (fMRI) to study early cortical visual pathways in children and adults with NF1. We employed two distinct stimulus types differing in contrast and spatial and temporal frequencies to evoke relatively different activation of the magnocellular (M) and parvocellular (P) pathways. Hemodynamic responses were investigated in retinotopically-defined regions V1, V2 and V3 and then over the acquired cortical volume. Relative to matched control subjects, patients with NF1 showed deficient activation of the low-level visual cortex to both stimulus types. Importantly, this finding was observed for children and adults with NF1, indicating that low-level visual processing deficits do not ameliorate with age. Moreover, only during M-biased stimulation patients with NF1 failed to deactivate or even activated anterior and posterior midline regions of the default mode network. The observation that the magnocellular visual pathway is impaired in NF1 in early visual processing and is specifically associated with a deficient deactivation of the default mode network may provide a neural explanation for high-order cognitive deficits present in NF1, particularly visuospatial and attentional. A link between magnocellular and default mode network processing may generalize to neuropsychiatric disorders where such deficits have been separately identified.
Low-complexity camera digital signal imaging for video document projection system
NASA Astrophysics Data System (ADS)
Hsia, Shih-Chang; Tsai, Po-Shien
2011-04-01
We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
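To make one of the listed DSP stages concrete, the sketch below implements gray-world white balance as a per-channel gain applied to every pixel; the other stages (interpolation, binarisation, gain control, enhancement) follow the same element-wise style. The test image and simulated colour cast are placeholders, and this is an assumed textbook variant rather than the proposed low-complexity design.

```python
# Hedged sketch: gray-world white balance as a simple per-pixel DSP stage.
import numpy as np

def gray_world_white_balance(rgb):
    rgb = rgb.astype(np.float64)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means       # one gain per colour channel
    return np.clip(rgb * gains, 0, 255).astype(np.uint8)

rng = np.random.default_rng(10)
img = rng.integers(0, 256, size=(480, 640, 3)).astype(np.uint8)
img[..., 2] = (img[..., 2] * 0.6).astype(np.uint8)      # simulate a blue deficit
balanced = gray_world_white_balance(img)
print(balanced.reshape(-1, 3).mean(axis=0))             # channel means roughly equal
```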
A 256×256 low-light-level CMOS imaging sensor with digital CDS
NASA Astrophysics Data System (ADS)
Zou, Mei; Chen, Nan; Zhong, Shengyou; Li, Zhengfen; Zhang, Jicun; Yao, Li-bin
2016-10-01
In order to achieve high sensitivity for low-light-level CMOS image sensors (CIS), a capacitive transimpedance amplifier (CTIA) pixel circuit with a small integration capacitor is used. As the pixel and the column area are highly constrained, it is difficult to achieve analog correlated double sampling (CDS) to remove the noise for low-light-level CIS. So a digital CDS is adopted, which realizes the subtraction algorithm between the reset signal and pixel signal off-chip. The pixel reset noise and part of the column fixed-pattern noise (FPN) can be greatly reduced. A 256×256 CIS with CTIA array and digital CDS is implemented in the 0.35μm CMOS technology. The chip size is 7.7mm×6.75mm, and the pixel size is 15μm×15μm with a fill factor of 20.6%. The measured pixel noise is 24LSB with digital CDS in RMS value at dark condition, which shows 7.8× reduction compared to the image sensor without digital CDS. Running at 7fps, this low-light-level CIS can capture recognizable images with the illumination down to 0.1lux.
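The off-chip digital CDS step amounts to a per-pixel subtraction of the reset frame from the signal frame, which removes the pixel reset offset and much of the column fixed-pattern noise. The sketch below illustrates this with synthetic frames; the noise levels and frame contents are assumptions.

```python
# Hedged sketch of off-chip digital CDS: signal frame minus reset frame.
import numpy as np

def digital_cds(signal_frame, reset_frame):
    return signal_frame.astype(np.int32) - reset_frame.astype(np.int32)

rng = np.random.default_rng(11)
column_fpn = rng.normal(0, 20, size=(1, 256))                 # per-column offset
reset = 512 + column_fpn + rng.normal(0, 5, size=(256, 256))  # reset sample
signal = reset + 40 + rng.normal(0, 5, size=(256, 256))       # photo signal on top
cds = digital_cds(signal.astype(np.uint16), reset.astype(np.uint16))
print(cds.mean(), cds.std())   # reset offset and column FPN largely cancelled
```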
Markov Processes in Image Processing
NASA Astrophysics Data System (ADS)
Petrov, E. P.; Kharina, N. L.
2018-05-01
Digital images are used as information carriers in many sciences and technologies, and there is a trend toward increasing the number of bits per pixel in order to capture more information. In this paper, methods of compression and contour detection based on a two-dimensional Markov chain are proposed. Increasing the number of bits per pixel allows fine object details to be resolved more precisely, but it significantly complicates image processing. The proposed methods do not concede efficiency to well-known analogues and surpass them in processing speed. An image is separated into binary bit-plane images that are processed in parallel, so processing time does not grow as the number of bits per pixel increases. A further advantage of the methods is low energy consumption: only logical procedures are used, with no arithmetic operations. The methods can be useful for processing images of any class and purpose in systems with limited time and energy resources.
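The bit-plane separation idea can be illustrated directly: split an 8-bit image into binary planes, apply a purely logical operation to each plane independently, and reassemble. In the sketch below the per-plane operation is a trivial placeholder, not the Markov-chain processing described in the paper.

```python
# Hedged sketch: bit-plane decomposition with logic-only per-plane processing.
import numpy as np

def to_bit_planes(img):
    return [(img >> b) & 1 for b in range(8)]           # list of binary images

def from_bit_planes(planes):
    return sum(p.astype(np.uint8) << b for b, p in enumerate(planes))

def logical_smooth(plane):
    # AND of the plane with itself shifted one pixel (logic only, no arithmetic)
    return plane & np.roll(plane, 1, axis=1)

rng = np.random.default_rng(12)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
planes = [logical_smooth(p) for p in to_bit_planes(image)]   # independent per-plane work
restored = from_bit_planes(planes)
print(image.shape, restored.shape)
```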
Scalable software architecture for on-line multi-camera video processing
NASA Astrophysics Data System (ADS)
Camplani, Massimo; Salgado, Luis
2011-03-01
In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
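A hypothetical sketch of the PU/Central Unit split described above, using Python multiprocessing; the acquisition and "detection" functions are stand-ins invented for illustration and do not reflect the authors' implementation:

```python
import multiprocessing as mp
import numpy as np

def acquire_frame(camera_id: int, index: int) -> np.ndarray:
    # Stand-in for the PU acquisition phase (a real PU would grab a camera frame).
    rng = np.random.default_rng(camera_id * 1000 + index)
    return rng.integers(0, 256, size=(120, 160), dtype=np.uint8)

def processing_unit(camera_id: int, out: mp.Queue, n_frames: int = 5) -> None:
    # Each PU alternates acquisition and processing, then reports to the Central Unit.
    for i in range(n_frames):
        frame = acquire_frame(camera_id, i)
        detections = int((frame > 200).sum())  # toy stand-in for a 2D detection module
        out.put((camera_id, i, detections))

def central_unit(num_cameras: int = 3, n_frames: int = 5) -> None:
    # The Central Unit supervises the running PUs and gathers their results.
    queue = mp.Queue()
    pus = [mp.Process(target=processing_unit, args=(c, queue, n_frames))
           for c in range(num_cameras)]
    for p in pus:
        p.start()
    for _ in range(num_cameras * n_frames):
        print(queue.get())
    for p in pus:
        p.join()

if __name__ == "__main__":
    central_unit()
```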
Image Statistics and the Representation of Material Properties in the Visual Cortex
Baumgartner, Elisabeth; Gegenfurtner, Karl R.
2016-01-01
We explored perceived material properties (roughness, texturedness, and hardness) with a novel approach that compares perception, image statistics and brain activation, as measured with fMRI. We initially asked participants to rate 84 material images with respect to the above mentioned properties, and then scanned 15 of the participants with fMRI while they viewed the material images. The images were analyzed with a set of image statistics capturing their spatial frequency and texture properties. Linear classifiers were then applied to the image statistics as well as the voxel patterns of visually responsive voxels and early visual areas to discriminate between images with high and low perceptual ratings. Roughness and texturedness could be classified above chance level based on image statistics. Roughness and texturedness could also be classified based on the brain activation patterns in visual cortex, whereas hardness could not. Importantly, the agreement in classification based on image statistics and brain activation was also above chance level. Our results show that information about visual material properties is to a large degree contained in low-level image statistics, and that these image statistics are also partially reflected in brain activity patterns induced by the perception of material images. PMID:27582714
Joshi, Anuja; Gislason-Lee, Amber J; Keeble, Claire; Sivananthan, Uduvil M
2017-01-01
Objective: The aim of this research was to quantify the reduction in radiation dose facilitated by image processing alone for percutaneous coronary intervention (PCI) patient angiograms, without reducing the perceived image quality required to confidently make a diagnosis. Methods: Incremental amounts of image noise were added to five PCI angiograms, simulating the angiogram as having been acquired at corresponding lower dose levels (10–89% dose reduction). 16 observers with relevant experience scored the image quality of these angiograms in 3 states—with no image processing and with 2 different modern image processing algorithms applied. These algorithms are used on state-of-the-art and previous generation cardiac interventional X-ray systems. Ordinal regression allowing for random effects and the delta method were used to quantify the dose reduction possible by the processing algorithms, for equivalent image quality scores. Results: Observers rated the quality of the images processed with the state-of-the-art and previous generation image processing with a 24.9% and 15.6% dose reduction, respectively, as equivalent in quality to the unenhanced images. The dose reduction facilitated by the state-of-the-art image processing relative to previous generation processing was 10.3%. Conclusion: Results demonstrate that statistically significant dose reduction can be facilitated with no loss in perceived image quality using modern image enhancement; the most recent processing algorithm was more effective in preserving image quality at lower doses. Advances in knowledge: Image enhancement was shown to maintain perceived image quality in coronary angiography at a reduced level of radiation dose using computer software to produce synthetic images from real angiograms simulating a reduction in dose. PMID:28124572
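A hedged sketch of the common way lower-dose acquisitions are simulated by noise injection, as in the study design above: quantum-noise variance scales inversely with dose, so emulating a dose fraction f requires adding zero-mean noise with variance sigma^2 (1/f - 1). This is a simplification (real simulations also model system noise and detector response) and is not necessarily the authors' exact procedure:

```python
import numpy as np

def simulate_lower_dose(image: np.ndarray, dose_fraction: float,
                        sigma_full: float, rng=None) -> np.ndarray:
    """Add Gaussian noise so the total quantum-noise variance matches an acquisition
    at `dose_fraction` of the original dose (sigma_full = full-dose noise std)."""
    assert 0.0 < dose_fraction <= 1.0
    rng = np.random.default_rng() if rng is None else rng
    extra_sigma = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return image + rng.normal(0.0, extra_sigma, image.shape)
```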
NASA Astrophysics Data System (ADS)
Hewawasam, Kuravi; Mendillo, Christopher B.; Howe, Glenn A.; Martel, Jason; Finn, Susanna C.; Cook, Timothy A.; Chakrabarti, Supriya
2017-09-01
The Planetary Imaging Concept Testbed Using a Recoverable Experiment - Coronagraph (PICTURE-C) mission will directly image debris disks and exozodiacal dust around nearby stars from a high-altitude balloon using a vector vortex coronagraph. The PICTURE-C low-order wavefront control (LOWC) system will be used to correct time-varying low-order aberrations due to pointing jitter, gravity sag, thermal deformation, and the gondola pendulum motion. We present the hardware and software implementation of the low-order Shack-Hartmann and reflective Lyot stop sensors. Development of the high-speed image acquisition and processing system is discussed with emphasis on the reduction of hardware and computational latencies through the use of a real-time operating system and optimized data handling. By characterizing all of the LOWC latencies, we describe techniques to achieve a frame rate of 200 Hz with a mean latency of ~378 μs.
Application of automatic threshold in dynamic target recognition with low contrast
NASA Astrophysics Data System (ADS)
Miao, Hua; Guo, Xiaoming; Chen, Yu
2014-11-01
A hybrid photoelectric joint transform correlator can realize automatic real-time recognition with high precision through the combination of optical and electronic devices. When recognizing low-contrast targets with a photoelectric joint transform correlator, differences in attitude, brightness and grayscale between target and template mean that only four to five frames of a dynamic target can be recognized without any processing. A CCD camera is used to capture the dynamic target images at 25 frames per second. Automatic thresholding has many advantages, such as fast processing speed, effective shielding of noise interference, enhancement of the diffraction energy of useful information and better preservation of the outlines of target and template, so it plays a very important role in target recognition by optical correlation. However, the threshold obtained automatically by the program cannot achieve the best recognition results for dynamic targets, because outline information is broken to some extent; in most cases the optimal threshold has to be obtained by manual intervention. Considering the characteristics of dynamic targets, an improved automatic thresholding procedure is implemented by multiplying the Otsu threshold of target and template by a scale coefficient of the processed image and combining the result with mathematical morphology. The optimal threshold for dynamic low-contrast target images can then be obtained automatically. The recognition rate of dynamic targets is improved through a decreased background-noise effect and increased correlation information. A series of dynamic tank images moving at about 70 km/h is adopted as target images. Without any processing, the 1st frame of this series can correlate only with the 3rd frame. With Otsu thresholding, the 80th frame can be recognized, and with the improved automatic thresholding of the joint images this number increases to 89 frames. Experimental results show that the improved automatic thresholding has particular application value for the recognition of dynamic targets with low contrast.
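A minimal sketch of the preprocessing idea described above: binarize with an Otsu threshold scaled by a coefficient, then clean the outline with mathematical morphology. The scale coefficient and structuring-element sizes are illustrative choices, not the values used in the paper:

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, binary_closing, disk

def preprocess_joint_image(image: np.ndarray, scale: float = 0.9) -> np.ndarray:
    """Binarize with a scaled Otsu threshold, then clean the outline with morphology.

    scale < 1 keeps more of the faint target/template outline in low-contrast frames.
    """
    t = scale * threshold_otsu(image)
    binary = image > t
    binary = binary_opening(binary, disk(1))   # suppress isolated noise pixels
    binary = binary_closing(binary, disk(2))   # reconnect broken contour segments
    return binary
```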
Vannucci, Manila; Pelagatti, Claudia; Chiorri, Carlo; Mazzoni, Giuliana
2016-01-01
In the present study we examined whether higher levels of object imagery, a stable characteristic that reflects the ability and preference in generating pictorial mental images of objects, facilitate involuntary and voluntary retrieval of autobiographical memories (ABMs). Individuals with high (High-OI) and low (Low-OI) levels of object imagery were asked to perform an involuntary and a voluntary ABM task in the laboratory. Results showed that High-OI participants generated more involuntary and voluntary ABMs than Low-OI, with faster retrieval times. High-OI also reported more detailed memories compared to Low-OI and retrieved memories as visual images. Theoretical implications of these findings for research on voluntary and involuntary ABMs are discussed.
NASA Technical Reports Server (NTRS)
Chien, S.
1994-01-01
This paper describes work on the Multimission VICAR Planner (MVP) system to automatically construct executable image processing procedures for custom image processing requests for the JPL Multimission Image Processing Lab (MIPL). This paper focuses on two issues. First, large search spaces caused by complex plans required the use of hand encoded control information. In order to address this in a manner similar to that used by human experts, MVP uses a decomposition-based planner to implement hierarchical/skeletal planning at the higher level and then uses a classical operator based planner to solve subproblems in contexts defined by the high-level decomposition.
Discussion and a new method of optical cryptosystem based on interference
NASA Astrophysics Data System (ADS)
Lu, Dajiang; He, Wenqi; Liao, Meihua; Peng, Xiang
2017-02-01
A discussion and an objective security analysis of the well-known optical image encryption based on interference are presented in this paper. A new method is also proposed to eliminate the security risk of the original cryptosystem. For a possible practical application, we expand this new method into a hierarchical authentication scheme. In this authentication system, with a pre-generated and fixed random phase lock, different target images indicating different authentication levels are analytically encoded into corresponding phase-only masks (phase keys) and amplitude-only masks (amplitude keys). For the authentication process, a legal user can obtain a specified target image at the output plane if his/her phase key and amplitude key, which should be placed close against the fixed internal phase lock, are respectively illuminated by two coherent beams. By comparing the target image with all the standard certification images in the database, the system can verify not only the user's legality but also his/her identity level. Moreover, despite the internal phase lock of this system being fixed, the crosstalk between different pairs of keys held by different users is low. Theoretical analysis and numerical simulation are both provided to demonstrate the validity of this method.
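A heavily simplified numerical sketch of the core idea behind interference-based encryption: a target complex field is split analytically into two phase-only masks whose coherent sum reproduces it. Here a single FFT stands in for the optical propagation, and the fixed internal phase lock and the amplitude keys of the scheme above are omitted; this illustrates the classical principle being analyzed, not the authors' improved method:

```python
import numpy as np

def split_into_phase_masks(target: np.ndarray, rng=None):
    """Encode a non-negative target image into two phase-only masks t1, t2.

    The field to reproduce is sqrt(target) * exp(i*phi_rand); an inverse FFT stands in
    for back-propagation to the mask plane. Analytically,
    exp(i*t1) + exp(i*t2) = D  when  t1, t2 = arg(D) +/- arccos(|D| / 2).
    """
    rng = np.random.default_rng() if rng is None else rng
    amp = np.sqrt(target / target.max())
    field = amp * np.exp(1j * 2 * np.pi * rng.random(target.shape))
    D = np.fft.ifft2(field)
    D = D / (np.abs(D).max() / 2)                    # normalize so |D| <= 2
    half = np.arccos(np.clip(np.abs(D) / 2, -1, 1))
    return np.angle(D) + half, np.angle(D) - half

def reconstruct(t1: np.ndarray, t2: np.ndarray) -> np.ndarray:
    """Coherent superposition of the two phase keys, propagated to the output plane."""
    return np.abs(np.fft.fft2(np.exp(1j * t1) + np.exp(1j * t2))) ** 2
```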
What difference reveals about similarity.
Sagi, Eyal; Gentner, Dedre; Lovett, Andrew
2012-08-01
Detecting that two images are different is faster for highly dissimilar images than for highly similar images. Paradoxically, we showed that the reverse occurs when people are asked to describe how two images differ--that is, to state a difference between two images. Following structure-mapping theory, we propose that this dissociation arises from the multistage nature of the comparison process. Detecting that two images are different can be done in the initial (local-matching) stage, but only for pairs with low overlap; thus, "different" responses are faster for low-similarity than for high-similarity pairs. In contrast, identifying a specific difference generally requires a full structural alignment of the two images, and this alignment process is faster for high-similarity pairs. We described four experiments that demonstrate this dissociation and show that the results can be simulated using the Structure-Mapping Engine. These results pose a significant challenge for nonstructural accounts of similarity comparison and suggest that structural alignment processes play a significant role in visual comparison. Copyright © 2012 Cognitive Science Society, Inc.
Development of a semi-automated combined PET and CT lung lesion segmentation framework
NASA Astrophysics Data System (ADS)
Rossi, Farli; Mokri, Siti Salasiah; Rahni, Ashrani Aizzuddin Abd.
2017-03-01
Segmentation is one of the most important steps in automated medical diagnosis applications, which affects the accuracy of the overall system. In this paper, we propose a semi-automated segmentation method for extracting lung lesions from thoracic PET/CT images by combining low level processing and active contour techniques. The lesions are first segmented in PET images which are first converted to standardised uptake values (SUVs). The segmented PET images then serve as an initial contour for subsequent active contour segmentation of corresponding CT images. To evaluate its accuracy, the Jaccard Index (JI) was used as a measure of the accuracy of the segmented lesion compared to alternative segmentations from the QIN lung CT segmentation challenge, which is possible by registering the whole body PET/CT images to the corresponding thoracic CT images. The results show that our proposed technique has acceptable accuracy in lung lesion segmentation with JI values of around 0.8, especially when considering the variability of the alternative segmentations.
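A hedged sketch of the two-stage idea described above, using scikit-image's morphological Chan-Vese as the active-contour stage initialized from an SUV-thresholded PET mask. The SUV threshold and iteration count are illustrative, and the original work may use a different active-contour formulation:

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def segment_lesion(pet_suv: np.ndarray, ct_slice: np.ndarray,
                   suv_threshold: float = 2.5, iterations: int = 100) -> np.ndarray:
    """Threshold the PET SUV map to get an initial lesion mask, then evolve an
    active contour on the corresponding (co-registered) CT slice."""
    init_mask = pet_suv >= suv_threshold                       # rough lesion region from PET
    ct = (ct_slice - ct_slice.min()) / (np.ptp(ct_slice) + 1e-12)  # normalize CT to [0, 1]
    return morphological_chan_vese(ct, iterations, init_level_set=init_mask.astype(np.int8))
```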
A Hybrid Probabilistic Model for Unified Collaborative and Content-Based Image Tagging.
Zhou, Ning; Cheung, William K; Qiu, Guoping; Xue, Xiangyang
2011-07-01
The increasing availability of large quantities of user contributed images with labels has provided opportunities to develop automatic tools to tag images to facilitate image search and retrieval. In this paper, we present a novel hybrid probabilistic model (HPM) which integrates low-level image features and high-level user provided tags to automatically tag images. For images without any tags, HPM predicts new tags based solely on the low-level image features. For images with user provided tags, HPM jointly exploits both the image features and the tags in a unified probabilistic framework to recommend additional tags to label the images. The HPM framework makes use of the tag-image association matrix (TIAM). However, since the number of images is usually very large and user-provided tags are diverse, TIAM is very sparse, thus making it difficult to reliably estimate tag-to-tag co-occurrence probabilities. We developed a collaborative filtering method based on nonnegative matrix factorization (NMF) for tackling this data sparsity issue. Also, an L1 norm kernel method is used to estimate the correlations between image features and semantic concepts. The effectiveness of the proposed approach has been evaluated using three databases containing 5,000 images with 371 tags, 31,695 images with 5,587 tags, and 269,648 images with 5,018 tags, respectively.
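A hedged sketch of the collaborative-filtering idea above: factor the sparse tag-image association matrix (TIAM) with NMF and use the low-rank reconstruction to estimate smoothed tag-to-tag co-occurrence. The rank and normalization are illustrative choices, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import NMF

def smoothed_tag_cooccurrence(tiam: np.ndarray, n_components: int = 32) -> np.ndarray:
    """tiam: (num_tags x num_images) binary association matrix, typically very sparse."""
    model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
    W = model.fit_transform(tiam)          # tags x components
    H = model.components_                  # components x images
    dense = W @ H                          # smoothed tag-image associations
    cooc = dense @ dense.T                 # tag-to-tag co-occurrence estimate
    return cooc / (cooc.sum(axis=1, keepdims=True) + 1e-12)  # rows as probabilities
```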
NASA Astrophysics Data System (ADS)
Carlsohn, Matthias F.; Kemmling, André; Petersen, Arne; Wietzke, Lennart
2016-04-01
Cerebral aneurysms require endovascular treatment to eliminate potentially lethal hemorrhagic rupture by hemostasis of blood flow within the aneurysm. Devices (e.g. coils and flow diverters) promote hemostasis; however, measurement of blood flow within an aneurysm or cerebral vessel before and after device placement at a microscopic level has not been possible so far. Such measurements would allow better individualized treatment planning and improve the design of devices. For experimental analysis, direct measurement of real-time microscopic cerebrovascular flow in micro-structures may be an alternative to computed flow simulations. Applying microscopic aneurysm flow measurement on a regular basis, to empirically assess a large number of different anatomic shapes and the corresponding effects of different devices, would require a fast and reliable method for high-throughput assessment at low cost. Transparent three-dimensional (3D) models of brain vessels and aneurysms may be used for microscopic flow measurements by particle image velocimetry (PIV); however, up to now the size of the structures has set the limits for conventional 3D-imaging camera set-ups. On-line flow assessment requires additional computational power to cope with processing the large amounts of data generated by sequences of multi-view stereo images, e.g. generated by a light field camera capturing the 3D information of complex flow processes by plenoptic imaging. Recently, a fast and low-cost workflow for producing patient-specific three-dimensional models of cerebral arteries has been established by stereo-lithographic (SLA) 3D printing. These 3D arterial models are transparent and exhibit a replication precision within the submillimeter range required for accurate flow measurements under physiological conditions. We therefore test the feasibility of microscopic flow measurements by PIV analysis using a plenoptic camera system capturing light field image sequences. Averaging across a sequence of single, double or triple shots of flashed images enables reconstruction of the real-time corpuscular flow through the vessel system before and after device placement. This approach could enable 3D insight into microscopic flow within blood vessels and aneurysms at submillimeter resolution. We present an approach that allows real-time assessment of 3D particle flow by high-speed light field image analysis, including a solution that addresses the high computational load of the image processing. The imaging set-up accomplishes fast and reliable PIV analysis in transparent 3D models of brain aneurysms at low cost. High-throughput microscopic flow assessment of different shapes of brain aneurysms, as required for patient-specific device designs, may therefore become possible.
Study on polarization image methods in turbid medium
NASA Astrophysics Data System (ADS)
Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong
2014-11-01
In addition to conventional intensity information, polarization imaging provides multi-dimensional polarization information, which improves the probability of target detection and recognition. In the study of polarization imaging of targets in turbid media, image fusion is helpful for obtaining high-quality images. Based on laser polarization imaging at visible wavelengths, the corresponding linearly polarized intensity images are obtained by rotating the angle of a polarizer, and the polarization parameters of targets in turbid media with concentrations ranging from 5% to 10% are acquired. Image fusion processing is then introduced: the main focus of the research is processing the acquired polarization images with different polarization image fusion methods, identifying which fusion methods perform best for turbid media, and providing the processing results and tables of analysis data. Pixel-level, feature-level and decision-level fusion algorithms are then used to fuse the DOLP (degree of linear polarization) images at these three levels of information fusion. The results show that as the polarization angle increases, the polarization images become increasingly blurred and their quality deteriorates, whereas the contrast of the fused image is clearly improved compared with a single image. Finally, the reasons for the increase in contrast of the fused polarization images are analyzed.
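For reference, the DOLP image mentioned above is conventionally computed from intensity images captured behind a polarizer at four orientations via the linear Stokes parameters; a minimal sketch of that standard formulation (not necessarily the authors' exact pipeline):

```python
import numpy as np

def dolp_image(i0, i45, i90, i135):
    """Degree of linear polarization from intensities at 0, 45, 90 and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + 1e-12)
```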
Quantification of chromatin condensation level by image processing.
Irianto, Jerome; Lee, David A; Knight, Martin M
2014-03-01
The level of chromatin condensation is related to the silencing/activation of chromosomal territories and therefore impacts on gene expression. Chromatin condensation changes during cell cycle progression and differentiation, and is influenced by various physicochemical and epigenetic factors. This study describes a validated experimental technique to quantify chromatin condensation. A novel image processing procedure is developed using Sobel edge detection to quantify the level of chromatin condensation from nuclei images taken by confocal microscopy. The algorithm was developed in MATLAB and used to quantify different levels of chromatin condensation in chondrocyte nuclei achieved through alteration in osmotic pressure. The resulting chromatin condensation parameter (CCP) is in good agreement with independent multi-observer qualitative visual assessment. This image processing technique thereby provides a validated unbiased parameter for rapid and highly reproducible quantification of the level of chromatin condensation. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
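A hedged sketch of the Sobel-edge-based quantification idea: here the condensation parameter is taken as the density of strong edge pixels inside the nucleus mask. The edge threshold and normalization are illustrative and may differ from the published CCP definition:

```python
import numpy as np
from skimage.filters import sobel

def chromatin_condensation_parameter(nucleus: np.ndarray, mask: np.ndarray,
                                     edge_threshold: float = 0.1) -> float:
    """Approximate CCP: fraction of nuclear pixels lying on strong Sobel edges.

    Condensed chromatin produces more sharp intensity transitions inside the nucleus,
    so a higher edge density indicates a higher condensation level.
    """
    grad = sobel(nucleus.astype(float) / (nucleus.max() + 1e-12))
    edges = (grad > edge_threshold) & mask
    return float(edges.sum()) / float(mask.sum())
```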
A photon recycling approach to the denoising of ultra-low dose X-ray sequences.
Hariharan, Sai Gokul; Strobel, Norbert; Kaethner, Christian; Kowarschik, Markus; Demirci, Stefanie; Albarqouni, Shadi; Fahrig, Rebecca; Navab, Nassir
2018-06-01
Clinical procedures that make use of fluoroscopy may expose patients as well as the clinical staff (throughout their career) to non-negligible doses of radiation. The potential consequences of such exposures fall under two categories, namely stochastic (mostly cancer) and deterministic risks (skin injury). According to the "as low as reasonably achievable" principle, the radiation dose can be lowered only if the necessary image quality can be maintained. Our work improves upon the existing patch-based denoising algorithms by utilizing a more sophisticated noise model to exploit non-local self-similarity better and this in turn improves the performance of low-rank approximation. The novelty of the proposed approach lies in its properly designed and parameterized noise model and the elimination of initial estimates. This reduces the computational cost significantly. The algorithm has been evaluated on 500 clinical images (7 patients, 20 sequences, 3 clinical sites), taken at ultra-low dose levels, i.e. 50% of the standard low dose level, during electrophysiology procedures. An average improvement in the contrast-to-noise ratio (CNR) by a factor of around 3.5 has been found. This is associated with an image quality achieved at around 12 (square of 3.5) times the ultra-low dose level. Qualitative evaluation by X-ray image quality experts suggests that the method produces denoised images that comply with the required image quality criteria. The results are consistent with the number of patches used, and they demonstrate that it is possible to use motion estimation techniques and "recycle" photons from previous frames to improve the image quality of the current frame. Our results are comparable in terms of CNR to Video Block Matching 3D-a state-of-the-art denoising method. But qualitative analysis by experts confirms that the denoised ultra-low dose X-ray images obtained using our method are more realistic with respect to appearance.
Designing a stable feedback control system for blind image deconvolution.
Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan
2018-05-01
Blind image deconvolution is one of the main low-level vision problems with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to the undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which determine the sparsity and the sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to keep the image restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors, so that the kernel estimate used for image restoration becomes more precise. Experimental results show that our system is effective for image propagation and performs favorably against state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.
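For reference, the joint maximum a posteriori estimation that can collapse to the trivial solution is usually written as follows (a standard formulation with generic priors and weights, not the authors' feedback controller):

```latex
\min_{x,\,k} \; \tfrac{1}{2}\,\lVert k \ast x - y \rVert_2^2
  \;+\; \lambda\, \Phi(x) \;+\; \gamma\, \Psi(k),
\qquad \text{s.t. } k \ge 0,\; \textstyle\sum_i k_i = 1,
```

where y is the blurred observation, x the latent sharp image, k the blur kernel, and Φ, Ψ the image and kernel priors with weights λ, γ. The trivial solution is x = y with k the delta (identity) kernel, which the feedback-control formulation above is designed to avoid.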
NASA Astrophysics Data System (ADS)
Cogliati, M.; Tonelli, E.; Battaglia, D.; Scaioni, M.
2017-12-01
Archive aerial photos represent a valuable heritage that provides information about land content and topography in past years. Today, the availability of low-cost and open-source solutions for the photogrammetric processing of close-range and drone images offers the chance to produce outputs such as DEMs and orthoimages in an easy way. This paper is aimed at demonstrating how, and with what level of accuracy, digitized archive aerial photos may be used within this kind of low-cost software (Agisoft Photoscan Professional®) to generate photogrammetric outputs. The different steps of the photogrammetric processing workflow are presented and discussed. The main conclusion is that this procedure may provide some final products which, however, do not feature the high accuracy and resolution that may be obtained using high-end photogrammetric software packages specifically designed for aerial survey projects. In the last part a case study is presented on the use of a four-epoch archive of aerial images to analyze the area where a tunnel is to be excavated.
NASA Astrophysics Data System (ADS)
Zhang, Liandong; Bai, Xiaofeng; Song, De; Fu, Shencheng; Li, Ye; Duanmu, Qingduo
2015-03-01
Low-light-level night vision technology amplifies a low-light-level signal, using photons and photoelectrons as the information carriers, until it is large enough to be seen by the naked eye. Only with the invention of the micro-channel plate did it become possible to realize high-performance, miniaturized low-light-level night vision devices. The device considered here is a double-proximity-focusing low-light-level image intensifier, in which a micro-channel plate is placed close to the photocathode and the phosphor screen. The advantages of proximity-focusing low-light-level night vision are small size, light weight, low power consumption, absence of distortion, fast response speed, wide dynamic range and so on. The micro-channel plate (with metal electrodes on both sides), the photocathode and the phosphor screen are placed parallel to each other. When the image intensifier works, a voltage is applied between the photocathode and the input of the micro-channel plate. Electrons emitted from the photocathode upon photon absorption move towards the micro-channel plate under the electric field in the first proximity-focusing region and are then multiplied inside the micro-channels. The trajectories of the emitted electrons can be calculated and simulated once the distributions of the electrostatic field and equipotential lines in the first proximity-focusing region are determined, and from these the resolution of the image tube can be derived. However, the distributions of the electrostatic field and equipotential lines are complex because of the large number of micro-channels in the micro-channel plate. This paper simulates the electrostatic distribution of the first proximity region of a double-proximity-focusing low-light-level image intensifier with the finite-element simulation software Ansoft Maxwell 3D. The electrostatic field distributions of the first proximity region are compared as the pore size, spacing and inclination angle of the micro-channel plate are varied. We believe that the electron-beam trajectories in the first proximity region will be better simulated once these electrostatic fields have been accurately computed.
A Comprehensive Analysis of the Physical Properties of Advanced GaAs/AlGaAs Junctions
NASA Technical Reports Server (NTRS)
Menkara, Hicham M.
1996-01-01
Extensive studies have been performed on MQW junctions and structures because of their potential applications as avalanche photodetectors in optical communications and imaging systems. The role of the avalanche photodiode is to provide for the conversion of an optical signal into charge. Knowledge of junction physics, and the various carrier generation/recombination mechanisms, is crucial for effectively optimizing the conversion process and increasing the structure's quantum efficiency. In addition, the recent interest in the use of APDs in imaging systems has necessitated the development of semiconductor junctions with low dark currents and high gains for low light applications. Because of the high frame rate and high pixel density requirements in new imaging applications, it is necessary to provide some front-end gain in the imager to allow operation under reasonable light conditions. Understanding the electron/hole impact ionization process, as well as diffusion and surface leakage effects, is needed to help maintain low dark currents and high gains for such applications. In addition, the APD must be capable of operating with low power, and low noise. Knowledge of the effects of various doping configurations and electric field profiles, as well as the excess noise resulting from the avalanche process, are needed to help maintain low operating bias and minimize the noise output.
Design of a new type synchronous focusing mechanism
NASA Astrophysics Data System (ADS)
Zhang, Jintao; Tan, Ruijun; Chen, Zhou; Zhang, Yongqi; Fu, Panlong; Qu, Yachen
2018-05-01
This work addresses a dual-channel telescopic imaging system composed of an infrared imaging system, a low-light-level imaging system and an image fusion module. In the fusion of low-light-level and infrared images, it is clearly easier to obtain high-definition fused images from sharp source images. When the target is imaged at distances from 15 m to infinity, focusing is needed to ensure the imaging quality of the dual-channel imaging system; therefore, a new type of synchronous focusing mechanism is designed. The synchronous focusing mechanism realizes the focusing function by translating the two imaging devices synchronously, and mainly comprises a lead-screw and nut structure, a shaft-hole fit structure and a spring-loaded steel-ball structure for eliminating clearance. Starting from the synchronous focusing function of the two imaging devices, the structural characteristics of the synchronous focusing mechanism are introduced in detail and the focusing range is analyzed. The experimental results show that the synchronous focusing mechanism has the advantages of ingenious design, high focusing accuracy and stable, reliable operation.
Rapid Waterborne Pathogen Detection with Mobile Electronics.
Wu, Tsung-Feng; Chen, Yu-Chen; Wang, Wei-Chung; Kucknoor, Ashwini S; Lin, Che-Jen; Lo, Yu-Hwa; Yao, Chun-Wei; Lian, Ian
2017-06-09
Pathogen detection in water samples, without complex and time-consuming procedures such as fluorescent labeling or culture-based incubation, is essential to public safety. We propose an immunoagglutination-based protocol together with a microfluidic device to quantify pathogen levels directly from water samples. Utilizing the ubiquitous complementary metal-oxide-semiconductor (CMOS) imagers from mobile electronics, a low-cost and one-step reaction detection protocol is developed to enable field detection of waterborne pathogens. 10 mL of pathogen-containing water sample was processed using the developed protocol, including filtration enrichment, immunoreaction detection and image processing. A limit of detection of 10 E. coli O157:H7 cells/10 mL has been demonstrated within a 10 min turnaround time. The protocol can readily be integrated into mobile electronics such as smartphones for rapid and reproducible field detection of waterborne pathogens.
Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method
NASA Astrophysics Data System (ADS)
Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan
2018-04-01
Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise-suppression imaging technique will have great applications in remote sensing and security areas.
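A minimal sketch of conventional correlation ghost imaging with the kind of bucket-signal thresholding the modified method relies on; the threshold choice is illustrative, and the compressive-sensing solver used in the paper is not reproduced here:

```python
import numpy as np

def ghost_image(patterns: np.ndarray, bucket: np.ndarray, noise_threshold=None) -> np.ndarray:
    """Correlation ghost imaging: G(x, y) = <B * S(x, y)> - <B><S(x, y)>.

    patterns: (N, H, W) speckle patterns sent to the object arm.
    bucket:   (N,) bucket-detector measurements (signal plus detector noise).
    Measurements below noise_threshold (if given) are zeroed first, suppressing the
    background contributed by additive detector noise.
    """
    b = bucket.astype(float).copy()
    if noise_threshold is not None:
        b[b < noise_threshold] = 0.0
    centered_patterns = patterns - patterns.mean(axis=0)
    return np.tensordot(b - b.mean(), centered_patterns, axes=(0, 0)) / len(b)
```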
Finding the bottom and using it
Sandoval, Ruben M.; Wang, Exing; Molitoris, Bruce A.
2014-01-01
Maximizing the 2-photon parameters used in acquiring images for quantitative intravital microscopy, especially when high sensitivity is required, remains an open area of investigation. Here we present data on correctly setting the black level of the photomultiplier-tube amplifier by adjusting the offset to allow for accurate quantitation of low-intensity processes. When the black level is set too high, some low-intensity pixel values become zero and a nonlinear degradation in sensitivity occurs, rendering otherwise quantifiable low-intensity values virtually undetectable. Initial studies using a series of increasing offsets for a sequence of concentrations of fluorescent albumin in vitro revealed a loss of sensitivity for higher offsets at lower albumin concentrations. A similar decrease in sensitivity, and therefore in the ability to correctly determine the glomerular permeability coefficient of albumin, occurred in vivo at higher offsets. Finding the offset that yields accurate and linear data is essential for quantitative analysis when high sensitivity is required. PMID:25313346
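A small numerical illustration of the effect described: when the black-level offset is set too high, low-intensity pixel values clip to zero and the measured mean no longer scales linearly with the true signal. All values are made up for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([2.0, 4.0, 8.0, 16.0])   # arbitrary low-intensity levels

for offset in (0.0, 3.0, 6.0):                 # black-level offsets (illustrative)
    measured = []
    for mu in true_means:
        pixels = rng.normal(mu, 2.0, 100_000) - offset
        pixels[pixels < 0] = 0.0               # values below the black level clip to zero
        measured.append(pixels.mean())
    print(f"offset={offset}: measured means {np.round(measured, 2)}")
# With offset 0 the means remain proportional to the true levels; with larger offsets
# the lowest-intensity points are disproportionately suppressed (nonlinear response).
```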
Effects of quantum noise in 4D-CT on deformable image registration and derived ventilation data
NASA Astrophysics Data System (ADS)
Latifi, Kujtim; Huang, Tzung-Chi; Feygelman, Vladimir; Budzevich, Mikalai M.; Moros, Eduardo G.; Dilling, Thomas J.; Stevens, Craig W.; van Elmpt, Wouter; Dekker, Andre; Zhang, Geoffrey G.
2013-11-01
Quantum noise is common in CT images and is a persistent problem in accurate ventilation imaging using 4D-CT and deformable image registration (DIR). This study focuses on the effects of noise in 4D-CT on DIR and thereby derived ventilation data. A total of six sets of 4D-CT data with landmarks delineated in different phases, called point-validated pixel-based breathing thorax models (POPI), were used in this study. The DIR algorithms, including diffeomorphic morphons (DM), diffeomorphic demons (DD), optical flow and B-spline, were used to register the inspiration phase to the expiration phase. The DIR deformation matrices (DIRDM) were used to map the landmarks. Target registration errors (TRE) were calculated as the distance errors between the delineated and the mapped landmarks. Noise of Gaussian distribution with different standard deviations (SD), from 0 to 200 Hounsfield Units (HU) in amplitude, was added to the POPI models to simulate different levels of quantum noise. Ventilation data were calculated using the ΔV algorithm which calculates the volume change geometrically based on the DIRDM. The ventilation images with different added noise levels were compared using Dice similarity coefficient (DSC). The root mean square (RMS) values of the landmark TRE over the six POPI models for the four DIR algorithms were stable when the noise level was low (SD <150 HU) and increased with added noise when the level is higher. The most accurate DIR was DD with a mean RMS of 1.5 ± 0.5 mm with no added noise and 1.8 ± 0.5 mm with noise (SD = 200 HU). The DSC values between the ventilation images with and without added noise decreased with the noise level, even when the noise level was relatively low. The DIR algorithm most robust with respect to noise was DM, with mean DSC = 0.89 ± 0.01 and 0.66 ± 0.02 for the top 50% ventilation volumes, as compared between 0 added noise and SD = 30 and 200 HU, respectively. Although the landmark TRE were stable with low noise, the differences between ventilation images increased with noise level, even when the noise was low, indicating ventilation imaging from 4D-CT was sensitive to image noise. Therefore, high quality 4D-CT is essential for accurate ventilation images.
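As a hedged stand-in for the ΔV algorithm named above (whose exact formulation is not reproduced here), a common geometric way to obtain a ventilation map from a DIR deformation field is the Jacobian determinant of the transformation, i.e. the local volume change:

```python
import numpy as np

def jacobian_ventilation(dx: np.ndarray, dy: np.ndarray, dz: np.ndarray) -> np.ndarray:
    """Voxel-wise volume change from a displacement field (dz, dy, dx in voxel units),
    each array shaped (Z, Y, X). Ventilation is taken as J - 1, where J is the
    determinant of the Jacobian of x + u(x); J > 1 indicates local expansion."""
    grads = [np.gradient(d) for d in (dz, dy, dx)]   # du_i/dx_j along (z, y, x)
    g = np.array(grads)                              # shape (3, 3, Z, Y, X)
    a = np.eye(3)[:, :, None, None, None] + g        # I + grad(u), voxel-wise
    J = (a[0, 0] * (a[1, 1] * a[2, 2] - a[1, 2] * a[2, 1])
         - a[0, 1] * (a[1, 0] * a[2, 2] - a[1, 2] * a[2, 0])
         + a[0, 2] * (a[1, 0] * a[2, 1] - a[1, 1] * a[2, 0]))
    return J - 1.0
```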
Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc
2014-05-01
Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in case of low dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithm yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the HDTV algorithm shows the best performance. At 50 mGy, the deviation from the reference obtained at 500 mGy were less than 4%. Also the LDPC algorithm provides reasonable results with deviation less than 10% at 50 mGy while PCF and MKB reconstruction show larger deviations even at higher dose levels. LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes exemplarily illustrate that cardiac parameters can be accurately estimated at lowest dose levels if sophisticated algorithms are used. This allows to reduce dose by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.
Self-amplified CMOS image sensor using a current-mode readout circuit
NASA Astrophysics Data System (ADS)
Santos, Patrick M.; de Lima Monteiro, Davies W.; Pittet, Patrick
2014-05-01
The feature size of CMOS processes has decreased during the past few years, and problems such as reduced dynamic range have become more significant in voltage-mode pixels, even though the integration of more functionality inside the pixel has become easier. This work makes a contribution on both sides: the possibility of a high signal excursion range using current-mode circuits, together with added functionality by performing signal amplification inside the pixel. The classic 3T pixel architecture was rebuilt with small modifications to integrate a transconductance amplifier providing a current as the output. A matrix of these new pixels operates as a single large transistor sourcing an amplified current that is used for signal processing. This current is controlled by the intensity of the light received by the matrix, modulated pixel by pixel. The output current can be controlled by the biasing circuits to achieve a very large range of output signal levels. It can also be controlled through the matrix size, which gives a very high degree of freedom in the signal level while observing the current densities inside the integrated circuit. In addition, the matrix can operate at very small integration times. Its applications would be those in which fast image processing and high signal amplification are required and low resolution is not a major problem, such as UV image sensors. Simulation results are presented to support the operation, control, design, signal excursion levels and linearity of a matrix of pixels conceived using this new concept of sensor.
Live visualization of genomic loci with BiFC-TALE
Hu, Huan; Zhang, Hongmin; Wang, Sheng; Ding, Miao; An, Hui; Hou, Yingping; Yang, Xiaojing; Wei, Wensheng; Sun, Yujie; Tang, Chao
2017-01-01
Tracking the dynamics of genomic loci is important for understanding the mechanisms of fundamental intracellular processes. However, fluorescent labeling and imaging of such loci in live cells have been challenging. One of the major reasons is the low signal-to-background ratio (SBR) of images mainly caused by the background fluorescence from diffuse full-length fluorescent proteins (FPs) in the living nucleus, hampering the application of live cell genomic labeling methods. Here, combining bimolecular fluorescence complementation (BiFC) and transcription activator-like effector (TALE) technologies, we developed a novel method for labeling genomic loci (BiFC-TALE), which largely reduces the background fluorescence level. Using BiFC-TALE, we demonstrated a significantly improved SBR by imaging telomeres and centromeres in living cells in comparison with the methods using full-length FP. PMID:28074901
Liyanage, Kishan Andre; Steward, Christopher; Moffat, Bradford Armstrong; Opie, Nicholas Lachlan; Rind, Gil Simon; John, Sam Emmanuel; Ronayne, Stephen; May, Clive Newton; O'Brien, Terence John; Milne, Marjorie Eileen; Oxley, Thomas James
2016-01-01
Segmentation is the process of partitioning an image into subdivisions and can be applied to medical images to isolate anatomical or pathological areas for further analysis. This process can be done manually or automated by the use of image processing computer packages. Atlas-based segmentation automates this process by the use of a pre-labelled template and a registration algorithm. We developed an ovine brain atlas that can be used as a model for neurological conditions such as Parkinson's disease and focal epilepsy. 17 female Corriedale ovine brains were imaged in-vivo in a 1.5T (low-resolution) MRI scanner. 13 of the low-resolution images were combined using a template construction algorithm to form a low-resolution template. The template was labelled to form an atlas and tested by comparing manual with atlas-based segmentations against the remaining four low-resolution images. The comparisons were in the form of similarity metrics used in previous segmentation research. Dice Similarity Coefficients were utilised to determine the degree of overlap between eight independent, manual and atlas-based segmentations, with values ranging from 0 (no overlap) to 1 (complete overlap). For 7 of these 8 segmented areas, we achieved a Dice Similarity Coefficient of 0.5-0.8. The amygdala was difficult to segment due to its variable location and similar intensity to surrounding tissues resulting in Dice Coefficients of 0.0-0.2. We developed a low resolution ovine brain atlas with eight clinically relevant areas labelled. This brain atlas performed comparably to prior human atlases described in the literature and to intra-observer error providing an atlas that can be used to guide further research using ovine brains as a model and is hosted online for public access.
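For reference, the Dice similarity coefficient used throughout the evaluation above is a simple overlap measure; a minimal helper (editorial illustration, not code from the study):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 0 = no overlap, 1 = identical."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```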
Three-dimensional image analysis as a tool for embryology
NASA Astrophysics Data System (ADS)
Verweij, Andre
1992-06-01
In the study of cell fate, cell lineage, and morphogenetic transformation it is necessary to obtain 3-D data. Serial sections of glutaraldehyde fixed and glycol methacrylate embedded material provide high resolution data. Clonal spread during germ layer formation in the mouse embryo has been followed by labeling a progenitor epiblast cell with horseradish peroxidase and staining its descendants one or two days later, followed by histological processing. Reconstruction of a 3-D image from histological sections must provide a solution for the alignment problem. As we want to study images at different magnification levels, we have chosen a method in which the sections are aligned under the microscope. Positioning is possible through a translation and a rotation stage. The first step for reconstruction is a coarse alignment on the basis of the moments in a binary, low magnification image of the embedding block. Thereafter, images of higher magnification levels are aligned by optimizing a similarity measure between the images. To analyze, first a global 3-D second order surface is fitted on the image to obtain the orientation of the embryo. The coefficients of this fit are used to normalize the size of the different embryos. Thereafter, the image is resampled with respect to the surface to create a 2-D mapping of the embryo and to guide the segmentation of the different cell layers which make up the embryo.
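A hedged sketch of the moment-based coarse alignment step mentioned above: the centroid of the binary low-magnification image gives the translation and the principal-axis orientation (from second-order central moments) gives the rotation. This illustrates the general idea rather than the exact procedure used:

```python
import numpy as np

def coarse_alignment(binary: np.ndarray):
    """Return (centroid_row, centroid_col, orientation_radians) of a binary section image."""
    rows, cols = np.nonzero(binary)
    r0, c0 = rows.mean(), cols.mean()                 # centroid (translation estimate)
    mu20 = ((rows - r0) ** 2).mean()                  # second-order central moments
    mu02 = ((cols - c0) ** 2).mean()
    mu11 = ((rows - r0) * (cols - c0)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # principal-axis orientation
    return r0, c0, theta
```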
Joint level-set and spatio-temporal motion detection for cell segmentation.
Boukari, Fatima; Makrogiannis, Sokratis
2016-08-10
Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan-Vese techniques, and 4 % compared to the nonlinear spatio-temporal diffusion method. Despite the wide variation in cell shape, density, mitotic events, and image quality among the datasets, our proposed method produced promising segmentation results. These results indicate the efficiency and robustness of this method especially for mitotic events and low SNR imaging, enabling the application of subsequent quantification tasks.
NASA Technical Reports Server (NTRS)
Masuoka, E.; Rose, J.; Quattromani, M.
1981-01-01
Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.
Nonlinear Spike-And-Slab Sparse Coding for Interpretable Image Encoding
Shelton, Jacquelyn A.; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg
2015-01-01
Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros for e.g. the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process. PMID:25954947
Fusion of Deep Learning and Compressed Domain features for Content Based Image Retrieval.
Liu, Peizhong; Guo, Jing-Ming; Wu, Chi-Yi; Cai, Danlin
2017-08-29
This paper presents an effective image retrieval method by combining high-level features from Convolutional Neural Network (CNN) model and low-level features from Dot-Diffused Block Truncation Coding (DDBTC). The low-level features, e.g., texture and color, are constructed by VQ-indexed histogram from DDBTC bitmap, maximum, and minimum quantizers. Conversely, high-level features from CNN can effectively capture human perception. With the fusion of the DDBTC and CNN features, the extended deep learning two-layer codebook features (DL-TLCF) is generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate (APR) and average recall rate (ARR), are employed to examine various datasets. As documented in the experimental results, the proposed schemes can achieve superior performance compared to the state-of-the-art methods with either low- or high-level features in terms of the retrieval rate. Thus, it can be a strong candidate for various image retrieval related applications.
NASA Astrophysics Data System (ADS)
Kröhnert, M.; Meichsner, R.
2017-09-01
Global environmental issues have gained increasing relevance in recent years, and the trend is still rising. Disastrous floods in particular can cause serious damage within very short times. Although conventional gauging stations provide reliable information about prevailing water levels, they are highly cost-intensive and thus only sparsely installed. Smartphones with built-in cameras, powerful processing units and low-cost positioning systems are very suitable, widespread measurement devices that could be used for geo-crowdsourcing purposes. We therefore aim to develop a versatile mobile water level measurement system to establish a densified hydrological network of water levels with high spatial and temporal resolution. This paper addresses a key issue of the entire system: the detection of running-water shore lines in smartphone images. Flowing water never appears the same in close-range images, even if the camera extrinsics remain unchanged. Its non-rigid behavior impedes the use of standard image segmentation practices as a prerequisite for water line detection. Consequently, instead of a single image we use a hand-held time-lapse image sequence, which provides the time component needed to compute a spatio-temporal texture image. Using a region-growing concept, the texture is analyzed for immutable shore and dynamic water areas, and the prevailing shore line is then derived from the resulting shapes. For method validation, various study areas are observed from several distances, covering urban and rural flowing waters with different characteristics. Future work will transform the detected water line into object space by image-to-geometry intersection.
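The core idea above (per-pixel temporal variability separating moving water from static shore) can be sketched in a few lines. The following is a minimal numpy/scipy illustration, not the authors' implementation; the threshold value, the use of the temporal standard deviation as the texture measure, and the largest-connected-component step standing in for region growing are all illustrative assumptions.

    # Minimal sketch: classify "water" vs. "shore" pixels from a co-registered
    # time-lapse sequence by per-pixel temporal variability, then keep the
    # largest connected static (shore) region as a crude region-growing surrogate.
    import numpy as np
    from scipy import ndimage

    def shore_water_mask(frames, var_thresh=0.01):
        """frames: (T, H, W) co-registered grayscale frames in [0, 1]."""
        stack = np.asarray(frames, dtype=np.float64)
        texture = stack.std(axis=0)                # spatio-temporal texture image
        water = texture > var_thresh               # dynamic pixels -> water
        shore = ~water
        labels, n = ndimage.label(shore)           # connected static regions
        if n > 0:
            sizes = ndimage.sum(shore, labels, np.arange(1, n + 1))
            shore = labels == (np.argmax(sizes) + 1)
        return shore, water

    # The water line can then be traced along the boundary between the two masks.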
Evidence-Based Imaging Guidelines and Medicare Payment Policy
Sistrom, Christopher L; McKay, Niccie L
2008-01-01
Objective This study examines the relationship between evidence-based appropriateness criteria for neurologic imaging procedures and Medicare payment determinations. The primary research question is whether Medicare is more likely to pay for imaging procedures as the level of appropriateness increases. Data Sources The American College of Radiology Appropriateness Criteria (ACRAC) for neurological imaging, ICD-9-CM codes, CPT codes, and payment determinations by the Medicare Part B carrier for Florida and Connecticut. Study Design Cross-sectional study of appropriateness criteria and Medicare Part B payment policy for neurological imaging. In addition to descriptive and bivariate statistics, multivariate logistic regression on payment determination (yes or no) was performed. Data Collection Methods The American College of Radiology Appropriateness Criteria (ACRAC) documents specific to neurological imaging, ICD-9-CM codes, and CPT codes were used to create 2,510 medical condition/imaging procedure combinations, with associated appropriateness scores (coded as low/middle/high). Principal Findings As the level of appropriateness increased, more medical condition/imaging procedure combinations were payable (low = 61 percent, middle = 70 percent, and high = 74 percent). Logistic regression indicated that the odds of a medical condition/imaging procedure combination with a middle level of appropriateness being payable was 48 percent higher than for an otherwise similar combination with a low appropriateness score (95 percent CI on odds ratio=1.19–1.84). The odds ratio for being payable between high and low levels of appropriateness was 2.25 (95 percent CI: 1.66–3.04). Conclusions Medicare could improve its payment determinations by taking advantage of existing clinical guidelines, appropriateness criteria, and other authoritative resources for evidence-based practice. Such an approach would give providers a financial incentive that is aligned with best-practice medicine. In particular, Medicare should review and update its payment policies to reflect current information on the appropriateness of alternative imaging procedures for the same medical condition. PMID:18454778
Guo, Kun; Soornack, Yoshi; Settle, Rebecca
2018-03-05
Our capability of recognizing facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. As natural visual signals are often distorted and our perceptual strategy changes with external noise level, it is essential to understand how expression perception is susceptible to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (experiment 1) and blur (experiment 2), and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. Our analysis revealed a reasonable tolerance to face distortion in expression perception. Reducing image resolution up to 48 × 64 pixels or increasing image blur up to 15 cycles/image had little impact on expression assessment and associated gaze behaviour. Further distortion led to decreased expression categorization accuracy and intensity rating, increased reaction time and fixation duration, and stronger central fixation bias which was not driven by distortion-induced changes in local image saliency. Interestingly, the observed distortion effects were expression-dependent with less deterioration impact on happy and surprise expressions, suggesting this distortion-invariant facial expression perception might be achieved through the categorical model involving a non-linear configural combination of local facial features. Copyright © 2018 Elsevier Ltd. All rights reserved.
Petersen, Lars J.; Nielsen, Julie B.; Dettmann, Katja; Fisker, Rune V.; Haberkorn, Uwe; Stenholt, Louise; Zacho, Helle D.
2017-01-01
Localization of prostate cancer recurrence, particularly in the bones, is a major challenge with standard-of-care imaging in patients with biochemical recurrence following curatively intended treatment. Gallium-68-labeled prostate specific membrane antigen positron emission tomography/computed tomography (68Ga-PSMA PET/CT) is a novel and promising method for imaging in prostate cancer. The present study reports two cases of patients with prostate cancer with biochemical recurrence, with evidence of bone metastases on 68Ga-PSMA PET/CT images and low prostate specific antigen (PSA) levels (<2 ng/ml) and a PSA doubling time >6 months. The bone metastases were verified by supplementary imaging with 18F-sodium fluoride PET/CT and magnetic resonance imaging as well as biochemical responses to androgen deprivation therapy. Therefore, 68Ga-PSMA PET/CT is promising for the restaging of patients with prostate cancer with biochemical recurrence, including patients with low PSA levels and low PSA kinetics. PMID:28685078
Pandey, Anil Kumar; Sharma, Param Dev; Dheer, Pankaj; Parida, Girish Kumar; Goyal, Harish; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
Purpose of the Study: 99mTechnetium-methylene diphosphonate (99mTc-MDP) bone scan images have a limited number of counts per pixel and hence inferior image quality compared to X-rays. In theory, the global histogram equalization (GHE) technique can improve the contrast of a given image, though the practical benefits of doing so have gained only limited acceptance. In this study, we investigated the effect of the GHE technique on 99mTc-MDP bone scan images. Materials and Methods: A set of 89 low-contrast 99mTc-MDP whole-body bone scan images was included in this study. These images were acquired with parallel-hole collimation on a Symbia E gamma camera and then processed with the histogram equalization technique. The image quality of the input and processed images was reviewed by two nuclear medicine physicians on a 5-point scale, where a score of 1 denotes very poor and 5 the best image quality. A statistical test was applied to assess the significance of the difference between the mean scores assigned to input and processed images. Results: The technique improves the contrast of the images; however, oversaturation was noticed in the processed images. Student's t-test was applied, and a statistically significant difference between input and processed image quality was found at P < 0.001 (with α = 0.05). However, further improvement in image quality is needed to meet the requirements of nuclear medicine physicians. Conclusion: GHE can be used on low-contrast bone scan images. In some cases, a histogram equalization technique in combination with another postprocessing technique is useful. PMID:29142344
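For reference, global histogram equalization as described above amounts to remapping pixel values through the normalized cumulative histogram. A minimal numpy sketch follows; the 256-bin quantization and 8-bit output range are assumptions for illustration, not details of the study.

    # Minimal sketch of global histogram equalization (GHE) for a low-count image.
    import numpy as np

    def global_histogram_equalization(img, n_bins=256):
        """img: 2D array of counts; returns an equalized 8-bit image."""
        flat = img.ravel()
        hist, bin_edges = np.histogram(flat, bins=n_bins)
        cdf = hist.cumsum().astype(np.float64)
        cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalized CDF in [0, 1]
        # map each pixel through the CDF of its histogram bin
        bin_idx = np.clip(np.digitize(flat, bin_edges[:-1]) - 1, 0, n_bins - 1)
        equalized = (cdf[bin_idx] * 255.0).reshape(img.shape)
        return equalized.astype(np.uint8)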
Sparse representation-based image restoration via nonlocal supervised coding
NASA Astrophysics Data System (ADS)
Li, Ao; Chen, Deyun; Sun, Guanglu; Lin, Kezheng
2016-10-01
Sparse representation (SR) and the nonlocal technique (NLT) have shown great potential in low-level image processing. However, due to the degradation of the observed image, SR and NLT may not be accurate enough to obtain a faithful restoration result when used independently. To improve the performance, this paper proposes a nonlocal supervised coding strategy-based NLT for image restoration. The method makes three main contributions. First, to exploit useful nonlocal patches, a nonnegative sparse representation is introduced, whose coefficients can be utilized as the supervised weights among patches. Second, a novel objective function is proposed that integrates supervised weight learning and nonlocal sparse coding to guarantee a more promising solution. Finally, to make the minimization tractable and convergent, a numerical scheme based on iterative shrinkage thresholding is developed to solve the underdetermined inverse problem. Extensive experiments validate the effectiveness of the proposed method.
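The iterative shrinkage thresholding scheme mentioned above is, in its generic form, a gradient step followed by soft thresholding. The sketch below shows plain ISTA for the l1-regularized coding subproblem only; the paper's full objective, which also involves the nonlocal supervised weights, is not reproduced, and the parameter values are illustrative.

    # Generic ISTA sketch for  min_x 0.5*||y - D x||_2^2 + lam*||x||_1
    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(D, y, lam=0.1, n_iter=200):
        """D: (m, k) dictionary, y: (m,) observed patch; returns sparse code x."""
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ x - y)           # gradient of the data-fit term
            x = soft_threshold(x - grad / L, lam / L)
        return x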
NASA Astrophysics Data System (ADS)
Pi, Shiqiang; Liu, Wenzhong; Jiang, Tao
2018-03-01
The magnetic transparency of biological tissue allows magnetic nanoparticles (MNPs) to serve as promising functional sensors and contrast agents. The complex susceptibility of MNPs, strongly influenced by particle concentration, the excitation magnetic field and the surrounding microenvironment, has significant implications for biomedical applications. Therefore, magnetic susceptibility imaging of high spatial resolution will give more detailed information during the process of MNP-aided diagnosis and therapy. In this study, we present a novel spatial magnetic susceptibility extraction method for MNPs under a gradient magnetic field, a low-frequency drive magnetic field, and a weak high-frequency magnetic field. Based on this method, magnetic particle susceptibility imaging (MPSI) with millimeter-level spatial resolution (<3 mm) was achieved using our homemade imaging system. Corroborated by the experimental results, MPSI provides real-time (1 s per frame acquisition) and quantitative imaging with isotropic high resolution.
Color reproduction software for a digital still camera
NASA Astrophysics Data System (ADS)
Lee, Bong S.; Park, Du-Sik; Nam, Byung D.
1998-04-01
We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and performing color matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased by the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for estimating the unknown monitor gamma was also proposed. As a result, the level-processed values were adjusted by a look-up table created from the camera and monitor gamma corrections. For the color transformation matrix of the camera, a 3 by 3 or 3 by 4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images, generated according to four illuminations for the camera and three color temperatures for the monitor, were displayed in a dialogue box implemented in our software. A user can easily choose the best reproduced image by comparing them.
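Two of the stages described above (gamma correction via a look-up table, and a 3 by 3 color matrix fitted by regression between gamma-corrected camera values and measured tristimulus values) can be sketched as follows. This is an illustrative reconstruction, not the authors' software; the function names and the plain least-squares fit are assumptions.

    # Illustrative sketch of gamma correction + 3x3 colour matrix calibration.
    import numpy as np

    def gamma_lut(gamma, levels=256):
        """Look-up table applying x -> x^(1/gamma) on the [0, levels-1] code range."""
        x = np.arange(levels) / (levels - 1)
        return (x ** (1.0 / gamma) * (levels - 1)).astype(np.uint8)

    def fit_color_matrix(rgb_corrected, xyz_measured):
        """rgb_corrected: (N, 3) camera values, xyz_measured: (N, 3) targets."""
        M, _, _, _ = np.linalg.lstsq(rgb_corrected, xyz_measured, rcond=None)
        return M.T                                 # 3x3 matrix so that XYZ = M @ RGB

    def apply_pipeline(img_rgb, camera_gamma, M):
        """img_rgb: (H, W, 3) uint8 image; returns colour-transformed float image."""
        lut = gamma_lut(camera_gamma)
        corrected = lut[img_rgb]                   # gamma-corrected image via LUT
        return corrected.astype(np.float64) @ M.T  # per-pixel colour transformation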
Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing
NASA Astrophysics Data System (ADS)
Fan, Lei
Hyperspectral imaging provides the capability of increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. This abundant spectral knowledge allows all available information in the data to be mined. These qualities give hyperspectral imaging wide applications such as mineral exploration, agriculture monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. In addition, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely taking advantage of the sampled spectrum of individual HSI data points may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference. Fusion strategies are widely adopted in data processing to achieve better performance, especially in the fields of classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that is expected to be complementary or cooperative. Intermediate-level feature fusion aims at the selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g. Very High Resolution (VHR) optical sensors, LiDAR, etc., fusion of multi-source data can in principle produce more detailed information than each single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features from the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector with coordinates corresponding to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations from linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature and the vector representation may lose contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows for convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection and data alignment.
In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
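As a concrete but generic illustration of the graph-based framework referred to above, the sketch below builds a k-nearest-neighbour similarity graph over fused per-pixel feature vectors and forms the normalized graph Laplacian commonly used for clustering and embedding. The parameters, the Gaussian edge weighting and the k-NN construction are illustrative choices, not those of the thesis.

    # Generic sketch: k-NN similarity graph and normalized Laplacian for fused
    # per-pixel features (e.g. spectra stacked with LiDAR-derived height).
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy import sparse

    def knn_graph_laplacian(features, k=10, sigma=1.0):
        """features: (N, d) fused feature vectors; returns sparse normalized Laplacian."""
        tree = cKDTree(features)
        dists, idx = tree.query(features, k=k + 1)   # first neighbour is the point itself
        n = features.shape[0]
        rows = np.repeat(np.arange(n), k)
        cols = idx[:, 1:].ravel()
        w = np.exp(-(dists[:, 1:].ravel() ** 2) / (2 * sigma ** 2))
        W = sparse.coo_matrix((w, (rows, cols)), shape=(n, n)).tocsr()
        W = W.maximum(W.T)                           # symmetrize the adjacency
        d = np.asarray(W.sum(axis=1)).ravel()
        D_inv_sqrt = sparse.diags(1.0 / np.sqrt(np.maximum(d, 1e-12)))
        return sparse.identity(n) - D_inv_sqrt @ W @ D_inv_sqrt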
Sensitivity of Attitude Determination on the Model Assumed for ISAR Radar Mappings
NASA Astrophysics Data System (ADS)
Lemmens, S.; Krag, H.
2013-09-01
Inverse synthetic aperture radars (ISAR) are valuable instruments for assessing the state of a large object in low Earth orbit. The images generated by these radars can reach sufficient quality to be used during launch support or contingency operations, e.g. for confirming the deployment of structures, determining structural integrity, or analysing the dynamic behaviour of an object. However, the direct interpretation of ISAR images can be a demanding task due to the nature of the range-Doppler space in which these images are produced. Recently, a tool has been developed by the European Space Agency's Space Debris Office to generate radar mappings of a target in orbit. Such mappings are a 3D-model-based simulation of how an ideal ISAR image would be generated by a ground-based radar under given processing conditions. These radar mappings can be used to support the data interpretation process. For example, by processing predefined attitude scenarios during an observation sequence and comparing them with actual observations, one can detect non-nominal behaviour. Vice versa, one can also estimate the attitude states of the target by fitting the radar mappings to the observations. It has been demonstrated for the latter use case that a coarse approximation of the target by a 3D model is already sufficient to derive attitude information from the generated mappings. The level of detail required for the 3D model is determined by the process of generating ISAR images, which is based on the theory of scattering bodies. A complex surface can therefore return an intrinsically noisy ISAR image; for example, when many instruments on a satellite are visible to the observer, the ISAR image can suffer from multipath reflections. In this paper, we further analyse the sensitivity of the attitude fitting algorithms to variations in the dimensions and the level of detail of the underlying 3D model. Moreover, we investigate the ability to estimate the orientations of different spacecraft components with respect to each other from the fitting procedure.
Tremsin, Anton S.; Makowska, Małgorzata G.; Perrodin, Didier; ...
2016-04-12
Neutrons are known to be unique probes in situations where other types of radiation fail to penetrate samples and their surrounding structures. In this paper it is demonstrated how thermal and cold neutron radiography can provide time-resolved imaging of materials while they are being processed (e.g. while growing single crystals). The processing equipment, in this case furnaces, and the scintillator materials are opaque to conventional X-ray interrogation techniques. The distribution of the europium activator within a BaBrCl:Eu scintillator (0.1 and 0.5% nominal doping concentrations per mole) is studied in situ during the melting and solidification processes with a temporal resolution of 5–7 s. The strong tendency of the Eu dopant to segregate during the solidification process is observed in repeated cycles, with Eu forming clusters on multiple length scales (only for clusters larger than ~50 µm, as limited by the resolution of the present experiments). It is also demonstrated that the dopant concentration can be quantified even for very low concentration levels (~0.1%) in 10 mm thick samples. The interface between the solid and liquid phases can also be imaged, provided there is a sufficient change in concentration of one of the elements with a sufficient neutron attenuation cross section. Tomographic imaging of the BaBrCl:0.1%Eu sample reveals a strong correlation between crystal fractures and Eu-deficient clusters. The results of these experiments demonstrate the unique capabilities of neutron imaging for in situ diagnostics and the optimization of crystal-growth procedures.
Hare, Dominic J.; Kysenius, Kai; Paul, Bence; Knauer, Beate; Hutchinson, Robert W.; O'Connor, Ciaran; Fryer, Fred; Hennessey, Tom P.; Bush, Ashley I.; Crouch, Peter J.; Doble, Philip A.
2017-01-01
Metals are found ubiquitously throughout an organism, with their biological role dictated by both their chemical reactivity and abundance within a specific anatomical region. Within the brain, metals have a highly compartmentalized distribution, depending on the primary function they play within the central nervous system. Imaging the spatial distribution of metals has provided unique insight into the biochemical architecture of the brain, allowing direct correlation between neuroanatomical regions and their known function with regard to metal-dependent processes. In addition, several age-related neurological disorders feature disrupted metal homeostasis, which is often confined to small regions of the brain that are otherwise difficult to analyze. Here, we describe a comprehensive method for quantitatively imaging metals in the mouse brain, using laser ablation - inductively coupled plasma - mass spectrometry (LA-ICP-MS) and specially designed image processing software. Focusing on iron, copper and zinc, which are three of the most abundant and disease-relevant metals within the brain, we describe the essential steps in sample preparation, analysis, quantitative measurements and image processing to produce maps of metal distribution within the low micrometer resolution range. This technique, applicable to any cut tissue section, is capable of demonstrating the highly variable distribution of metals within an organ or system, and can be used to identify changes in metal homeostasis and absolute levels within fine anatomical structures. PMID:28190025
Imaging model for the scintillator and its application to digital radiography image enhancement.
Wang, Qian; Zhu, Yining; Li, Hongwei
2015-12-28
Digital radiography (DR) images obtained by an OCD-based (optical coupling detector) micro-CT system usually suffer from low contrast. In this paper, a mathematical model is proposed to describe the image formation process in the scintillator. By solving the corresponding inverse problem, the quality of DR images is improved, i.e. higher contrast and spatial resolution are obtained. By analyzing the radiative transfer process of visible light in the scintillator, scattering is identified as the main factor leading to low contrast. The associated blurring effect is also considered and described by a point spread function (PSF). Based on these physical processes, the scintillator imaging model is established. To solve the inverse problem, pre-correction of the x-ray intensity, a dark-channel-prior-based haze removal technique, and an effective blind deblurring approach are employed. Experiments on a variety of DR images show that the proposed approach can improve the contrast of DR images dramatically as well as effectively reduce blurring. Compared with traditional contrast enhancement methods, such as CLAHE, our method preserves the relative absorption values well.
McCarthy, Laura Mary; Kalinyak-Fliszar, Michelene; Kohen, Francine; Martin, Nadine
2017-01-01
Deep dysphasia is a relatively rare subcategory of aphasia, characterised by word repetition impairment and a profound auditory-verbal short-term memory (STM) limitation. Repetition of words is better than nonwords (lexicality effect) and better for high-image than low-image words (imageability effect). Another related language impairment profile is phonological dysphasia, which includes all of the characteristics of deep dysphasia except for the occurrence of semantic errors in single word repetition. The overlap in symptoms of deep and phonological dysphasia has led to the hypothesis that they share the same root cause, impaired maintenance of activated representation of words, but that they differ in severity of that impairment, with deep dysphasia being more severe. We report a single-subject multiple baseline, multiple probe treatment study of a person who presented with a pattern of repetition that was consistent with the continuum of deep-phonological dysphasia: imageability and lexicality effects in repetition of single and multiple words and semantic errors in repetition of multiple-word utterances. The aim of this treatment study was to improve access to and repetition of low-imageability words by embedding them in modifier-noun phrases that enhanced their imageability. The treatment involved repetition of abstract noun pairs. We created modifier-abstract noun phrases that increased the semantic and syntactic cohesiveness of the words in the pair. For example, the phrases "long distance" and "social exclusion" were developed to improve repetition of the abstract pair "distance-exclusion". The goal of this manipulation was to increase the probability of accessing lexical and semantic representations of abstract words in repetition by enriching their semantic -syntactic context. We predicted that this increase in accessibility would be maintained when the words were repeated as pairs, but without the contextual phrase. Treatment outcomes indicated that increasing the semantic and syntactic cohesiveness of low-imageability and low-frequency words later improved this participant's ability to repeat those words when presented in isolation. This treatment approach to improving access to abstract word pairs for repetition was successful for our participant with phonological dysphasia. The approach exemplifies the potential value in manipulating linguistic characteristics of stimuli in ways that improve access between phonological and lexical-semantic levels of representation. Additionally, this study demonstrates how principles of a cognitive model of word processing can be used to guide treatment of word processing impairments in aphasia.
Chao, Linda L.; Rothlind, Johannes C.; Cardenas, Valerie A.; Meyerhoff, Dieter J.; Weiner, Michael W.
2010-01-01
Background: Potentially more than 100,000 US troops may have been exposed to the organophosphate chemical warfare agents sarin (GB) and cyclosarin (GF) when a munitions dump at Khamisiyah, Iraq was destroyed during the Gulf War (GW) in 1991. Although little is known about the long-term neurobehavioral or neurophysiological effects of low-dose exposure to GB/GF in humans, recent studies of GW veterans from the Devens Cohort suggest that decrements in certain cognitive domains and atrophy in brain white matter occur in individuals with higher estimated levels of presumed GB/GF exposure. The goal of the current study is to determine the generalizability of these findings in another cohort of GW veterans with suspected GB/GF exposure. Methods: Neurobehavioral and imaging data collected in a study on Gulf War Illness between 2002 and 2007 were used in this study. We focused on the data of 40 GW-deployed veterans categorized as having been exposed to GB/GF at Khamisiyah, Iraq and 40 matched controls. Magnetic resonance images (MRI) of the brain were analyzed using automated and semi-automated image processing techniques that produced volumetric measurements of gray matter (GM), white matter (WM), cerebrospinal fluid (CSF) and hippocampus. Results: GW veterans with suspected GB/GF exposure had reduced total GM and hippocampal volumes compared to their unexposed peers (p≤0.01). Although there were no group differences in measures of cognitive function or total WM volume, there were significant, positive correlations between total WM volume and measures of executive function and visuospatial abilities in veterans with suspected GB/GF exposure. Conclusions: These findings suggest that low-level exposure to GB/GF can have deleterious effects on brain structure and brain function more than a decade later. PMID:20580739
NASA Astrophysics Data System (ADS)
Skala, Melissa C.; Crow, Matthew J.; Wax, Adam; Izatt, Joseph A.
2009-02-01
Molecular imaging is a powerful tool for investigating disease processes and potential therapies in both in vivo and in vitro systems. However, high resolution molecular imaging has been limited to relatively shallow penetration depths that can be accessed with microscopy. Optical coherence tomography (OCT) is an optical analogue to ultrasound with relatively good penetration depth (1-2 mm) and resolution (~1-10 μm). We have developed and characterized photothermal OCT as a molecular contrast mechanism that allows for high resolution molecular imaging at deeper penetration depths than microscopy. Our photothermal system consists of an amplitude-modulated heating beam that spatially overlaps with the focused spot of the sample arm of a spectral-domain OCT microscope. Validation experiments in tissue-like phantoms containing gold nanospheres that absorb at 532 nm revealed a sensitivity of 14 parts per million nanospheres (weight/weight) in a tissue-like environment. The nanospheres were then conjugated to anti-EGFR, and molecular targeting was confirmed in cells that over-express EGFR (MDA-MB-468) and cells that express low levels of EGFR (MDA-MB-435). Molecular imaging in three-dimensional tissue constructs was confirmed with a significantly lower photothermal signal (p<0.0001) from the constructs composed of cells that express low levels of EGFR compared to the over-expressing cell constructs (300% signal increase). This technique could potentially augment confocal and multiphoton microscopy as a method for deep-tissue, depth-resolved molecular imaging with relatively high resolution and target sensitivity, without photobleaching or cytotoxicity.
NASA Astrophysics Data System (ADS)
Li, Liangliang; Si, Yujuan; Jia, Zhenhong
2018-03-01
In this paper, a novel microscopy mineral image enhancement method based on adaptive thresholding in the non-subsampled shearlet transform (NSST) domain is proposed. First, the image is decomposed into one low-frequency sub-band and several high-frequency sub-bands. Second, gamma correction is applied to the low-frequency sub-band coefficients, and an improved adaptive threshold is adopted to suppress the noise in the high-frequency sub-band coefficients. Third, the processed coefficients are reconstructed with the inverse NSST. Finally, an unsharp filter is used to enhance the details of the reconstructed image. Experimental results on various microscopy mineral images demonstrate that the proposed approach achieves a better enhancement effect in terms of both objective and subjective metrics.
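The four-step pipeline above relies on an NSST implementation. As a hedged, simplified stand-in, the sketch below replaces the NSST sub-bands with a single Gaussian low-pass/residual split so the structure (gamma on the low band, thresholding the high band, reconstruction, unsharp filtering) can be illustrated; all parameter values and the Gaussian decomposition are assumptions, not the paper's method.

    # Simplified stand-in for the NSST-domain pipeline described above.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def enhance(img, gamma=0.8, thresh=0.02, unsharp_amount=0.5):
        """img: 2D float image in [0, 1]."""
        low = gaussian_filter(img, sigma=3.0)                 # "low-frequency sub-band"
        high = img - low                                       # "high-frequency" residual
        low_g = np.power(np.clip(low, 0, 1), gamma)            # gamma correction
        high_t = np.sign(high) * np.maximum(np.abs(high) - thresh, 0)  # soft threshold
        recon = np.clip(low_g + high_t, 0, 1)                  # reconstruction
        blurred = gaussian_filter(recon, sigma=1.0)
        return np.clip(recon + unsharp_amount * (recon - blurred), 0, 1)  # unsharp filter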
NASA Astrophysics Data System (ADS)
González-Jorge, Higinio; Riveiro, Belén; Varela, María; Arias, Pedro
2012-07-01
A low-cost image orthorectification tool based on the use of compact cameras and scale bars is developed to obtain the main geometric parameters of masonry bridges for inventory and routine inspection purposes. The technique is validated on three different bridges by comparison with laser scanning data. The surveying process is delicate and must strike a balance between working distance and angle. Three different cameras are used in the study to establish the relationship between the error and the camera model. Results show that the error does not depend on the length of the bridge element, the type of bridge, or the type of element. Error values for all the cameras are below 4 percent (for 95 percent of the data). A compact Canon camera, the model with the best technical specifications, shows an error level ranging from 0.5 to 1.5 percent.
Dust-penetrating (DUSPEN) see-through lidar for helicopter situational awareness in DVE
NASA Astrophysics Data System (ADS)
Murray, James T.; Seely, Jason; Plath, Jeff; Gotfredson, Eric; Engel, John; Ryder, Bill; Van Lieu, Neil; Goodwin, Ron; Wagner, Tyler; Fetzer, Greg; Kridler, Nick; Melancon, Chris; Panici, Ken; Mitchell, Anthony
2013-10-01
Areté Associates recently developed and flight tested a next-generation low-latency near real-time dust-penetrating (DUSPEN) imaging lidar system. These tests were accomplished for Naval Air Warfare Center (NAWC) Aircraft Division (AD) 4.5.6 (EO/IR Sensor Division) under the Office of Naval Research (ONR) Future Naval Capability (FNC) Helicopter Low-Level Operations (HELO) Product 2 program. Areté's DUSPEN system captures full lidar waveforms and uses sophisticated real-time detection and filtering algorithms to discriminate hard target returns from dust and other obscurants. Down-stream 3D image processing methods are used to enhance pilot visualization of threat objects and ground features during severe DVE conditions. This paper presents results from these recent flight tests in full brown-out conditions at Yuma Proving Grounds (YPG) from a CH-53E Super Stallion helicopter platform.
Dust-Penetrating (DUSPEN) "see-through" lidar for helicopter situational awareness in DVE
NASA Astrophysics Data System (ADS)
Murray, James T.; Seely, Jason; Plath, Jeff; Gotfreson, Eric; Engel, John; Ryder, Bill; Van Lieu, Neil; Goodwin, Ron; Wagner, Tyler; Fetzer, Greg; Kridler, Nick; Melancon, Chris; Panici, Ken; Mitchell, Anthony
2013-05-01
Areté Associates recently developed and flight tested a next-generation low-latency near real-time dust-penetrating (DUSPEN) imaging lidar system. These tests were accomplished for Naval Air Warfare Center (NAWC) Aircraft Division (AD) 4.5.6 (EO/IR Sensor Division) under the Office of Naval Research (ONR) Future Naval Capability (FNC) Helicopter Low-Level Operations (HELO) Product 2 program. Areté's DUSPEN system captures full lidar waveforms and uses sophisticated real-time detection and filtering algorithms to discriminate hard target returns from dust and other obscurants. Down-stream 3D image processing methods are used to enhance pilot visualization of threat objects and ground features during severe DVE conditions. This paper presents results from these recent flight tests in full brown-out conditions at Yuma Proving Grounds (YPG) from a CH-53E Super Stallion helicopter platform.
Ghodrati, Masoud; Ghodousi, Mahrad; Yoonessi, Ali
2016-01-01
Humans are fast and accurate in categorizing complex natural images. It is, however, unclear what features of visual information are exploited by brain to perceive the images with such speed and accuracy. It has been shown that low-level contrast statistics of natural scenes can explain the variance of amplitude of event-related potentials (ERP) in response to rapidly presented images. In this study, we investigated the effect of these statistics on frequency content of ERPs. We recorded ERPs from human subjects, while they viewed natural images each presented for 70 ms. Our results showed that Weibull contrast statistics, as a biologically plausible model, explained the variance of ERPs the best, compared to other image statistics that we assessed. Our time-frequency analysis revealed a significant correlation between these statistics and ERPs' power within theta frequency band (~3–7 Hz). This is interesting, as theta band is believed to be involved in context updating and semantic encoding. This correlation became significant at ~110 ms after stimulus onset, and peaked at 138 ms. Our results show that not only the amplitude but also the frequency of neural responses can be modulated with low-level contrast statistics of natural images and highlights their potential role in scene perception. PMID:28018197
A Nonlinear Diffusion Equation-Based Model for Ultrasound Speckle Noise Removal
NASA Astrophysics Data System (ADS)
Zhou, Zhenyu; Guo, Zhichang; Zhang, Dazhi; Wu, Boying
2018-04-01
Ultrasound images are contaminated by speckle noise, which hinders further image analysis and clinical diagnosis. In this paper, we address this problem from the viewpoint of nonlinear diffusion equation theory. We develop a nonlinear diffusion equation-based model that takes into account not only the gradient information of the image but also its gray-level information. By using a region indicator as the variable exponent, we can adaptively control the diffusion type, which alternates between Perona-Malik diffusion and Charbonnier diffusion according to the image gray levels. Furthermore, we analyze the theoretical and numerical properties of the proposed model. Experiments show that the proposed method achieves much better speckle suppression and edge preservation than traditional despeckling methods, especially in low gray level and low-contrast regions.
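For orientation, classical Perona-Malik diffusion, one of the two diffusion types the model alternates between, can be sketched as follows. The gray-level-driven variable exponent and the Charbonnier branch of the proposed model are not reproduced; the parameters and the wrap-around boundary handling are illustrative only.

    # Sketch of classical Perona-Malik anisotropic diffusion.
    import numpy as np

    def perona_malik(img, n_iter=50, kappa=0.05, dt=0.2):
        """img: 2D float image in [0, 1]; returns diffused image."""
        u = img.astype(np.float64).copy()

        def g(d):
            # Perona-Malik conductance: small across strong edges, ~1 in flat regions
            return 1.0 / (1.0 + (d / kappa) ** 2)

        for _ in range(n_iter):
            # differences toward the four neighbours (periodic boundaries for brevity)
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u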
Favazza, Christopher P; Ferrero, Andrea; Yu, Lifeng; Leng, Shuai; McMillan, Kyle L; McCollough, Cynthia H
2017-07-01
The use of iterative reconstruction (IR) algorithms in CT generally decreases image noise and enables dose reduction. However, the amount of dose reduction possible using IR without sacrificing diagnostic performance is difficult to assess with conventional image quality metrics. Through this investigation, achievable dose reduction using a commercially available IR algorithm without loss of low contrast spatial resolution was determined with a channelized Hotelling observer (CHO) model and used to optimize a clinical abdomen/pelvis exam protocol. A phantom containing 21 low contrast disks (three different contrast levels and seven different diameters) was imaged at different dose levels. Images were created with filtered backprojection (FBP) and IR. The CHO was tasked with detecting the low contrast disks. CHO performance indicated dose could be reduced by 22% to 25% without compromising low contrast detectability (as compared to full-dose FBP images), whereas 50% or more dose reduction significantly reduced detection performance. Importantly, default settings for the scanner and protocol investigated reduced dose by upward of 75%. Subsequently, CHO-based changes to the default protocol yielded images of higher quality and doses more consistent with values from a larger, dose-optimized scanner fleet. CHO assessment provided objective data to successfully optimize a clinical CT acquisition protocol.
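A channelized Hotelling observer reduces each image to a small vector of channel responses and computes a prewhitened matched-filter detectability index in that channel space. The sketch below is a generic illustration with difference-of-Gaussians channels centered on the known signal location; the channel set and all parameters are assumptions rather than those used in the study.

    # Generic channelized Hotelling observer (CHO) detectability sketch.
    import numpy as np

    def dog_channels(shape, sigmas=(1, 2, 4, 8)):
        """Difference-of-Gaussians channel templates centered on the signal location."""
        h, w = shape
        y, x = np.mgrid[0:h, 0:w]
        r2 = (y - h / 2) ** 2 + (x - w / 2) ** 2
        chans = []
        for s in sigmas:
            g1 = np.exp(-r2 / (2 * s ** 2)) / (2 * np.pi * s ** 2)
            g2 = np.exp(-r2 / (2 * (2 * s) ** 2)) / (2 * np.pi * (2 * s) ** 2)
            chans.append((g1 - g2).ravel())
        return np.stack(chans, axis=1)                  # (H*W, n_channels)

    def cho_detectability(signal_present, signal_absent):
        """signal_present/absent: (N, H, W) arrays of images; returns d'."""
        U = dog_channels(signal_present.shape[1:])
        vp = signal_present.reshape(len(signal_present), -1) @ U   # channel outputs
        va = signal_absent.reshape(len(signal_absent), -1) @ U
        dv = vp.mean(axis=0) - va.mean(axis=0)
        S = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
        w = np.linalg.solve(S, dv)                      # Hotelling template (channel space)
        return float(np.sqrt(dv @ w))                   # d' = sqrt(dv^T S^-1 dv)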
Salient region detection by fusing bottom-up and top-down features extracted from a single image.
Tian, Huawei; Fang, Yuming; Zhao, Yao; Lin, Weisi; Ni, Rongrong; Zhu, Zhenfeng
2014-10-01
Recently, some global contrast-based salient region detection models have been proposed based on only the low-level feature of color. It is necessary to consider both color and orientation features to overcome their limitations, and thus improve the performance of salient region detection for images with low contrast in color and high contrast in orientation. In addition, the existing fusion methods for different feature maps, such as simple averaging and selective fusion, are not sufficiently effective. To overcome these limitations of existing salient region detection models, we propose a novel salient region model based on bottom-up and top-down mechanisms: color contrast and orientation contrast are adopted to calculate the bottom-up feature maps, while the top-down cue of depth-from-focus from the same single image is used to guide the generation of the final salient regions, since depth-from-focus reflects the photographer's preference and knowledge of the task. A more general and effective fusion method is designed to combine the bottom-up feature maps. According to the degree of scattering and the eccentricities of the feature maps, the proposed fusion method assigns adaptive weights to different feature maps to reflect the confidence level of each. The depth-from-focus of the image, as a significant top-down feature for visual attention, is used to guide the salient regions during the fusion process; with its aid, the proposed fusion method can filter out the background and highlight salient regions. Experimental results show that the proposed model outperforms the state-of-the-art models on three publicly available data sets.
Learning a No-Reference Quality Assessment Model of Enhanced Images With Big Data.
Gu, Ke; Tao, Dacheng; Qiao, Jun-Fei; Lin, Weisi
2018-04-01
In this paper, we investigate the problem of image quality assessment (IQA) and enhancement via machine learning. This issue has long attracted a wide range of attention in the computational intelligence and image processing communities, since, for many practical applications, e.g., object detection and recognition, raw images usually need to be appropriately enhanced to raise the visual quality (e.g., visibility and contrast). In fact, proper enhancement can noticeably improve the quality of input images, even beyond the originally captured images, which are generally thought to be of the best quality. This paper makes two main contributions. The first is a new no-reference (NR) IQA model. Given an image, our quality measure first extracts 17 features through analysis of contrast, sharpness, brightness and more, and then yields a measure of visual quality using a regression module, which is learned with big-data training samples that are much larger than the relevant image data sets. Experiments on nine data sets validate the superiority and efficiency of our blind metric compared with typical state-of-the-art full-reference, reduced-reference and NR IQA methods. The second contribution is a robust image enhancement framework based on quality optimization. For an input image, guided by the proposed NR-IQA measure, we conduct histogram modification to successively rectify image brightness and contrast to a proper level. Thorough tests demonstrate that our framework can effectively enhance natural images, low-contrast images, low-light images, and dehazed images. The source code will be released at https://sites.google.com/site/guke198701/publications.
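The feature-plus-regression structure described above can be illustrated schematically: hand-crafted features are extracted per image and mapped to a quality score by a learned regressor. The toy sketch below uses only three features (the paper uses 17) and a random-forest regressor; both choices, and the feature definitions, are assumptions made purely for illustration.

    # Toy illustration of a feature-extraction + learned-regression quality measure.
    import numpy as np
    from scipy.ndimage import laplace
    from sklearn.ensemble import RandomForestRegressor

    def toy_features(img):
        """img: 2D float image in [0, 1] -> [brightness, contrast, sharpness]."""
        return np.array([img.mean(), img.std(), np.abs(laplace(img)).mean()])

    def train_quality_model(images, subjective_scores):
        """images: list of 2D arrays; subjective_scores: matching quality ratings."""
        X = np.vstack([toy_features(im) for im in images])
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X, subjective_scores)
        return model

    # usage: score = train_quality_model(train_imgs, ratings).predict([toy_features(test_img)])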
Meermeier, Annegret; Gremmler, Svenja; Richert, Kerstin; Eckermann, Til; Lappe, Markus
2017-10-01
Saccadic adaptation is an oculomotor learning process that maintains the accuracy of eye movements to ensure effective perception of the environment. Although saccadic adaptation is commonly considered an automatic, low-level motor calibration in the cerebellum, we recently found that the strength of adaptation is influenced by the visual content of the target: pictures of humans produced stronger adaptation than noise stimuli. This suggests that meaningful images may be considered rewarding or valuable in oculomotor learning. Here we report three experiments that establish the boundaries of this effect. In the first, we tested whether stimuli associated with high and low value following long-term self-administered reinforcement learning produce stronger adaptation. Twenty-eight expert gamers participated in two sessions of adaptation to game-related high- and low-reward stimuli, but revealed no difference in saccadic adaptation (Bayes factor BF01 = 5.49). In the second experiment, we tested whether cognitive (literate) meaning could induce stronger adaptation by comparing targets consisting of words and nonwords. The results from twenty subjects revealed no difference in adaptation strength (BF01 = 3.21). The third experiment compared images of human figures to noise patterns for reactive saccades. Twenty-two subjects adapted significantly more toward images of human figures than toward noise (p < 0.001). We conclude that only primary reinforcement (human vs. noise), but not secondary reinforcement (words vs. nonwords, high- vs. low-value video game images), affects saccadic adaptation.
Low-level contrast statistics are diagnostic of invariance of natural textures
Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven
2012-01-01
Texture may provide important clues for real world object and scene perception. To be reliable, these clues should ideally be invariant to common viewing variations such as changes in illumination and orientation. In a large image database of natural materials, we found textures with low-level contrast statistics that varied substantially under viewing variations, as well as textures that remained relatively constant. This led us to ask whether textures with constant contrast statistics give rise to more invariant representations compared to other textures. To test this, we selected natural texture images with either high (HV) or low (LV) variance in contrast statistics and presented these to human observers. In two distinct behavioral categorization paradigms, participants more often judged HV textures as “different” compared to LV textures, showing that textures with constant contrast statistics are perceived as being more invariant. In a separate electroencephalogram (EEG) experiment, evoked responses to single texture images (single-image ERPs) were collected. The results show that differences in contrast statistics correlated with both early and late differences in occipital ERP amplitude between individual images. Importantly, ERP differences between images of HV textures were mainly driven by illumination angle, which was not the case for LV images: there, differences were completely driven by texture membership. These converging neural and behavioral results imply that some natural textures are surprisingly invariant to illumination changes and that low-level contrast statistics are diagnostic of the extent of this invariance. PMID:22701419
NASA Technical Reports Server (NTRS)
Lorre, J. J.; Lynn, D. J.; Benton, W. D.
1976-01-01
Several digital image-processing techniques are illustrated which have proved useful in visual analysis of astronomical pictorial data. Processed digital scans of photographic plates of Stephan's Quintet and NGC 4151 are used as examples to show how faint nebulosity is enhanced by high-pass filtering, how foreground stars are suppressed by linear interpolation, and how relative color differences between two images recorded on plates with different spectral sensitivities can be revealed by generating ratio images. Analyses are outlined which are intended to partially compensate for the blurring effects of the atmosphere on images of Stephan's Quintet and to obtain more detailed information about Saturn's ring structure from low- and high-resolution scans of the planet and its ring system. The use of a correlation picture to determine the tilt angle of an average spectral line in a low-quality spectrum is demonstrated for a section of the spectrum of Uranus.
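Two of the operations mentioned (high-pass filtering to bring out faint nebulosity, and ratio images between co-registered plates of different spectral sensitivity) reduce to very small routines, sketched below for illustration only; the filter width and the stabilizing epsilon are assumptions, not values from the paper.

    # Illustrative high-pass enhancement and ratio-image routines.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def high_pass_enhance(img, sigma=20.0):
        """Suppress the smooth background to emphasize faint structure."""
        return img - gaussian_filter(img, sigma=sigma)

    def ratio_image(img_a, img_b, eps=1e-6):
        """Relative colour differences between two registered plates."""
        return img_a / (img_b + eps)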
Graafen, Dirk; Franzoni, María Belén; Schreiber, Laura M; Spiess, Hans W; Münnemann, Kerstin
2016-01-01
Hyperpolarization is a powerful tool to overcome the low sensitivity of nuclear magnetic resonance (NMR). However, applications are limited due to the short lifetime of this non equilibrium spin state caused by relaxation processes. This issue can be addressed by storing hyperpolarization in slowly decaying singlet spin states which was so far mostly demonstrated for non-proton spin pairs, e.g. (13)C-(13)C. Protons hyperpolarized by parahydrogen induced polarization (PHIP) in symmetrical molecules, are very well suited for this strategy because they naturally exhibit a long-lived singlet state. The conversion of the NMR silent singlet spin state to observable magnetization can be achieved by making use of singlet-triplet level anticrossings. In this study, a low-power radiofrequency pulse sequence is used for this purpose, which allows multiple successive singlet-triplet conversions. The generated magnetization is used to record proton images in a clinical magnetic resonance imaging (MRI) system, after 3min waiting time. Our results may open unprecedented opportunities to use the standard MRI nucleus (1)H for e.g. metabolic imaging in the future. Copyright © 2016. Published by Elsevier Inc.
Research-grade CMOS image sensors for demanding space applications
NASA Astrophysics Data System (ADS)
Saint-Pé, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Belliot, Pierre
2004-06-01
Imaging detectors are key elements for optical instruments and sensors on board space missions dedicated to Earth observation (high resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market has long been dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in more and more consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA, and ESA). Throughout the 90s, thanks to their steadily improving performance, CIS started to be used successfully for more and more demanding applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made possible the manufacturing of research-grade CIS that are able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this talk will present the existing and foreseen ways to reach high-level electro-optic performance for CIS. The development of CIS prototypes built using an imaging CMOS process and of devices based on improved designs will be presented.
Image and Sensor Data Processing for Target Acquisition and Recognition.
1980-11-01
technological, cost, size and weight constraints. The critical problem seems to be detecting the target with an acceptable false alarm rate, rather than... for its operation, loss of detail can result from adjacent differing pixels being forced into the same displayed grey-level. This may be detected in...
[Fig. 1 (panels a, b): high contrast scene demonstrating loss of detail in a local area of low contrast.]
MT6415CA: a 640×512-15µm CTIA ROIC for SWIR InGaAs detector arrays
NASA Astrophysics Data System (ADS)
Eminoglu, Selim; Isikhan, Murat; Bayhan, Nusret; Gulden, M. Ali; Incedere, O. Samet; Soyer, S. Tuncer; Kocak, Serhat; Yilmaz, Gokhan S.; Akin, Tayfun
2013-06-01
This paper reports the development of a new low-noise CTIA ROIC (MT6415CA) suitable for SWIR InGaAs detector arrays for low-light imaging applications. MT6415CA is the second product in the MT6400 series of ROICs from Mikro-Tasarim Ltd., a fabless IC design house specialized in the development of monolithic imaging sensors and ROICs for hybrid imaging sensors. MT6415CA is a low-noise snapshot CTIA ROIC with a format of 640 × 512 and a pixel pitch of 15 µm, developed with a system-on-chip architecture in mind: all timing and biasing are generated on-chip without requiring any external inputs. MT6415CA is a highly configurable ROIC, where many features can be programmed through a 3-wire serial interface, allowing on-the-fly configuration. It performs snapshot operation in both Integrate-Then-Read (ITR) and Integrate-While-Read (IWR) modes. The CTIA pixel input circuitry has three gain modes with programmable full-well-capacity (FWC) values of 10,000 e-, 20,000 e-, and 350,000 e- in the very-high-gain (VHG), high-gain (HG), and low-gain (LG) modes, respectively. MT6415CA has an input-referred noise level of less than 5 e- in the very-high-gain (VHG) mode, suitable for very low-noise SWIR imaging applications. MT6415CA has 8 analog video outputs that can be programmed in 8-, 4-, or 2-output modes with a selectable analog reference for pseudo-differential operation. The ROIC runs at 10 MHz and supports frame rates up to 200 fps in the 8-output mode. The integration time can be programmed up to 1 s in steps of 0.1 µs. The ROIC uses 3.3 V and 1.8 V supply voltages and dissipates less than 150 mW in the 4-output mode. MT6415CA is fabricated using a modern mixed-signal CMOS process on 200 mm CMOS wafers, and tested parts are available at wafer or die level with test reports and wafer maps. A compact USB 3.0 camera and imaging software have been developed to demonstrate the imaging performance of SWIR sensors built with the MT6415CA ROIC.
Mori, Yutaka; Nomura, Takanori
2013-06-01
In holographic displays, it is undesirable to observe speckle noise in the reconstructed images. A method for improving reconstructed image quality by synthesizing low-coherence digital holograms is proposed. Low-coherence digital holography makes it possible to obtain speckle-free reconstructions of holograms. An image sensor records low-coherence digital holograms, and the holograms are synthesized by computational calculation. Two approaches, a threshold-processing method and a picking-a-peak method, are proposed in order to reduce the random noise of low-coherence digital holograms. The reconstructed image quality obtained by the proposed methods is compared with the case of high-coherence digital holography. A quantitative evaluation is given to confirm the proposed methods. In addition, a visual evaluation by 15 people is also presented.
Silva, Paolo S; Walia, Saloni; Cavallerano, Jerry D; Sun, Jennifer K; Dunn, Cheri; Bursell, Sven-Erik; Aiello, Lloyd M; Aiello, Lloyd Paul
2012-09-01
To compare agreement between diagnosis of clinical level of diabetic retinopathy (DR) and diabetic macular edema (DME) derived from nonmydriatic fundus images using a digital camera back optimized for low-flash image capture (MegaVision) compared with standard seven-field Early Treatment Diabetic Retinopathy Study (ETDRS) photographs and dilated clinical examination. Subject comfort and image acquisition time were also evaluated. In total, 126 eyes from 67 subjects with diabetes underwent Joslin Vision Network nonmydriatic retinal imaging. ETDRS photographs were obtained after pupillary dilation, and fundus examination was performed by a retina specialist. There was near-perfect agreement between MegaVision and ETDRS photographs (κ=0.81, 95% confidence interval [CI] 0.73-0.89) for clinical DR severity levels. Substantial agreement was observed with clinical examination (κ=0.71, 95% CI 0.62-0.80). For DME severity level there was near-perfect agreement with ETDRS photographs (κ=0.92, 95% CI 0.87-0.98) and moderate agreement with clinical examination (κ=0.58, 95% CI 0.46-0.71). The wider MegaVision 45° field led to identification of nonproliferative changes in areas not imaged by the 30° field of ETDRS photos. Field area unique to ETDRS photographs identified proliferative changes not visualized with MegaVision. Mean MegaVision acquisition time was 9:52 min. After imaging, 60% of subjects preferred the MegaVision lower flash settings. When evaluated using a rigorous protocol, images captured using a low-light digital camera compared favorably with ETDRS photography and clinical examination for grading level of DR and DME. Furthermore, these data suggest the importance of more extensive peripheral images and suggest that utilization of wide-field retinal imaging may further improve accuracy of DR assessment.
Reducing Error Rates for Iris Image using higher Contrast in Normalization process
NASA Astrophysics Data System (ADS)
Aminu Ghali, Abdulrahman; Jamel, Sapiee; Abubakar Pindar, Zahraddeen; Hasssan Disina, Abdulkadir; Mat Daris, Mustafa
2017-08-01
Iris recognition is among the most secure and fastest means of identification and authentication. However, iris recognition systems suffer from blurring, low contrast and poor illumination due to low quality images, which compromise the accuracy of the system. The acceptance or rejection rate of a verified user depends solely on the quality of the image. In many cases, an iris recognition system with low image contrast could falsely accept or reject a user. Therefore, this paper adopts the histogram equalization technique to address the problems of False Rejection Rate (FRR) and False Acceptance Rate (FAR) by enhancing the contrast of the iris image. Histogram equalization enhances the image quality and compensates for the low contrast of the image at the normalization stage. The experimental results show that the histogram equalization technique reduces FRR and FAR compared to existing techniques.
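A minimal sketch of the contrast-enhancement step described above, assuming an 8-bit grayscale normalized iris strip; the function name and the commented file name are illustrative, not from the paper:

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Histogram equalization for an 8-bit grayscale image."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_masked = np.ma.masked_equal(cdf, 0)          # ignore empty bins
    cdf_min = cdf_masked.min()
    # Map each grey level so the cumulative distribution becomes uniform.
    lut = (cdf_masked - cdf_min) * 255 / (cdf_masked.max() - cdf_min)
    lut = np.ma.filled(lut, 0).astype(np.uint8)
    return lut[img]

# Example: enhance a normalized iris strip before feature encoding.
# iris = cv2.imread("iris_normalized.png", cv2.IMREAD_GRAYSCALE)
# enhanced = equalize_histogram(iris)
```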
Proteus: a reconfigurable computational network for computer vision
NASA Astrophysics Data System (ADS)
Haralick, Robert M.; Somani, Arun K.; Wittenbrink, Craig M.; Johnson, Robert; Cooper, Kenneth; Shapiro, Linda G.; Phillips, Ihsin T.; Hwang, Jenq N.; Cheung, William; Yao, Yung H.; Chen, Chung-Ho; Yang, Larry; Daugherty, Brian; Lorbeski, Bob; Loving, Kent; Miller, Tom; Parkins, Larye; Soos, Steven L.
1992-04-01
The Proteus architecture is a highly parallel MIMD (multiple-instruction, multiple-data) machine optimized for large-granularity tasks such as machine vision and image processing. The system can achieve 20 Giga-flops (80 Giga-flops peak). It accepts data via multiple serial links at a rate of up to 640 megabytes/second. The system employs a hierarchical reconfigurable interconnection network whose highest level is a circuit-switched Enhanced Hypercube serial interconnection network for internal data transfers. The system is designed to use 256 to 1,024 RISC processors. The processors use one-megabyte external read/write allocating caches for reduced multiprocessor contention. The system detects, locates, and replaces faulty subsystems using redundant hardware to facilitate fault tolerance. The parallelism is directly controllable through an advanced software system for partitioning, scheduling, and development. System software includes a translator for the INSIGHT language, a parallel debugger, low- and high-level simulators, and a message passing system for all control needs. Image processing application software includes a variety of point operators, neighborhood operators, convolution, and the mathematical morphology operations of binary and gray-scale dilation, erosion, opening, and closing.
CMOS image sensor with contour enhancement
NASA Astrophysics Data System (ADS)
Meng, Liya; Lai, Xiaofeng; Chen, Kun; Yuan, Xianghui
2010-10-01
Imitating the signal acquisition and processing of the vertebrate retina, a CMOS image sensor with a bionic pre-processing circuit is designed. Integration of the signal-processing circuit on-chip can reduce the bandwidth and precision requirements of the subsequent interface circuit and simplify the design of the computer-vision system. This signal pre-processing circuit consists of an adaptive photoreceptor, a spatial filtering resistive network and an Op-Amp calculation circuit. The adaptive photoreceptor unit, with a dynamic range of approximately 100 dB, adapts to transient changes in light intensity rather than to the intensity level itself. The spatial low-pass filtering resistive network, used to mimic the function of horizontal cells, is composed of the horizontal resistor (HRES) circuit and an OTA (Operational Transconductance Amplifier) circuit. The HRES circuit, imitating the dendrite of the neuron cell, comprises two series MOS transistors operated in the weak inversion region. Appending two diode-connected n-channel transistors to a simple transconductance amplifier forms the OTA Op-Amp circuit, which provides a stable bias voltage for the gates of the MOS transistors in the HRES circuit while serving as an OTA voltage follower to provide the input voltage for the network nodes. The Op-Amp calculation circuit, with a simple two-stage Op-Amp, achieves image contour enhancement. By adjusting the bias voltage of the resistive network, the smoothing effect can be tuned to change the degree of contour enhancement. Simulations of the cell circuit and a 16×16 2D circuit array are implemented using the CSMC 0.5 μm DPTM CMOS process.
Towards Guided Underwater Survey Using Light Visual Odometry
NASA Astrophysics Data System (ADS)
Nawaf, M. M.; Drap, P.; Royer, J. P.; Merad, D.; Saccone, M.
2017-02-01
A lightweight distributed visual odometry method adapted to an embedded hardware platform is proposed. The aim is to guide underwater surveys in real time. We rely on an image stream captured using a portable stereo rig attached to the embedded system. The captured images are analyzed on the fly to assess image quality in terms of sharpness and lightness, so that immediate actions can be taken accordingly. Images are then transferred over the network to another processing unit to compute the odometry. Relying on a standard ego-motion estimation approach, we speed up point matching between image quadruplets using a low-level point matching scheme based on the fast Harris operator and template matching that is invariant to illumination changes. We benefit from having the light source attached to the hardware platform to estimate an a priori rough depth belief following the law of light divergence over distance. The rough depth is used to limit the point-correspondence search zone, as it depends linearly on disparity. A stochastic relative bundle adjustment is applied to minimize re-projection errors. The evaluation of the proposed method demonstrates the gain in computation time with respect to other approaches that use more sophisticated feature descriptors. The built system opens promising areas for further development and integration of embedded computer vision techniques.
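A minimal sketch of the low-level matching step the abstract describes (Harris corners followed by illumination-invariant template matching along a disparity-limited band), using OpenCV; the window size, corner count, disparity limit and correlation threshold are illustrative assumptions:

```python
import cv2
import numpy as np

def match_harris_points(img_left, img_right, max_disparity=64, win=11):
    """Detect Harris corners in the left image and match them in the right
    image with normalized cross-correlation (robust to illumination changes)."""
    corners = cv2.goodFeaturesToTrack(img_left, maxCorners=500,
                                      qualityLevel=0.01, minDistance=7,
                                      useHarrisDetector=True)
    if corners is None:
        return []
    half = win // 2
    matches = []
    for x, y in corners.reshape(-1, 2).astype(int):
        if not (half <= y < img_left.shape[0] - half and
                half <= x < img_left.shape[1] - half):
            continue
        template = img_left[y - half:y + half + 1, x - half:x + half + 1]
        # Restrict the search to a horizontal band bounded by the expected disparity.
        x0 = max(half, x - max_disparity)
        strip = img_right[y - half:y + half + 1, x0 - half:x + half + 1]
        if strip.shape[1] <= win:
            continue
        score = cv2.matchTemplate(strip, template, cv2.TM_CCOEFF_NORMED)
        _, best, _, loc = cv2.minMaxLoc(score)
        if best > 0.8:
            matches.append(((x, y), (x0 + loc[0], y)))
    return matches
```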
Pérez, Alberto J.; Braga, Roberto; Perles, Ángel; Pérez–Marín, Eva; García-Diego, Fernando J.
2018-01-01
Dynamic laser speckle (DLS) is used as a reliable sensor of activity for all types of materials. Traditional applications are based on high-rate captures (usually greater than 10 frames-per-second, fps). Even for drying processes in conservation treatments, where there is a high level of activity in the first moments after the application and slower activity after some minutes or hours, the process is based on the acquisition of images at a time rate that is the same in moments of high and low activity. In this work, we present an alternative approach to track the drying process of protective layers and other painting conservation processes that take a long time to reduce their levels of activity. We illuminate, using three different wavelength lasers, a temporary protector (cyclododecane) and a varnish, and monitor them using a low fps rate during long-term drying. The results are compared to the traditional method. This work also presents a monitoring method that uses portable equipment. The results present the feasibility of using the portable device and show the improved sensitivity of the dynamic laser speckle when sensing the long-term process for drying cyclododecane and varnish in conservation. PMID:29324692
Logarithmic profile mapping multi-scale Retinex for restoration of low illumination images
NASA Astrophysics Data System (ADS)
Shi, Haiyan; Kwok, Ngaiming; Wu, Hongkun; Li, Ruowei; Liu, Shilong; Lin, Ching-Feng; Wong, Chin Yeow
2018-04-01
Images are valuable information sources for many scientific and engineering applications. However, images captured in poor illumination conditions would have a large portion of dark regions that could heavily degrade the image quality. In order to improve the quality of such images, a restoration algorithm is developed here that transforms the low input brightness to a higher value using a modified Multi-Scale Retinex approach. The algorithm is further improved by an entropy-based weighting of the input and the processed results to refine the necessary amplification at regions of low brightness. Moreover, fine details in the image are preserved by applying the Retinex principles to extract and then re-insert object edges to obtain an enhanced image. Results from experiments using low and normal illumination images have shown satisfactory performances with regard to the improvement in information contents and the mitigation of viewing artifacts.
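A minimal sketch of the classic multi-scale Retinex transform that the paper builds on; the logarithmic-mapping modification and entropy weighting described above are not reproduced, and the Gaussian surround scales are typical values assumed for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250), eps=1.0):
    """Classic multi-scale Retinex: average of log(image) - log(blurred image)
    over several Gaussian surround scales."""
    img = img.astype(np.float64) + eps            # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        surround = gaussian_filter(img, sigma)
        msr += np.log(img) - np.log(surround)
    msr /= len(sigmas)
    # Stretch the result back to an 8-bit display range.
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-12)
    return (255 * msr).astype(np.uint8)
```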
The correlation study of parallel feature extractor and noise reduction approaches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dewi, Deshinta Arrova; Sundararajan, Elankovan; Prabuwono, Anton Satria
2015-05-15
This paper presents a literature review of techniques for developing a parallel feature extractor and of its correlation with noise reduction approaches for low light intensity images. Low light intensity images typically appear dark and have low contrast. Without proper handling, such images regularly lead to misperception of objects and textures and an inability to segment them; the resulting visual ambiguity contributes to disorientation, user fatigue, and poor detection and classification performance of both humans and computer algorithms. Noise reduction (NR) is therefore an essential precursor to other image processing steps such as edge detection, image segmentation, and image compression. The Parallel Feature Extractor (PFE), which is meant to capture the visual content of images, involves partitioning images into segments, detecting image overlaps if any, and controlling distributed and redistributed segments to extract features. Working on low light intensity images challenges the PFE and makes it depend closely on the quality of its pre-processing steps. Many papers have proposed well-established NR and PFE strategies, but few have suggested or examined the correlation between them. This paper reviews the best NR and PFE approaches with a detailed explanation of the suggested correlation. The findings may suggest relevant strategies for PFE development. With the help of knowledge-based reasoning, computational approaches and algorithms, we present a correlation study between NR and PFE that can be useful for the development and enhancement of existing PFEs.
Progressive cone beam CT dose control in image-guided radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan Hao; Cervino, Laura; Jiang, Steve B.
2013-06-15
Purpose: Cone beam CT (CBCT) in image-guided radiotherapy (IGRT) offers a tremendous advantage for treatment guidance. The associated imaging dose is a clinical concern. One unique feature of CBCT-based IGRT is that the same patient is repeatedly scanned during a treatment course, and the contents of CBCT images at different fractions are similar. The authors propose a progressive dose control (PDC) scheme to utilize this temporal correlation for imaging dose reduction. Methods: A dynamic CBCT scan protocol, as opposed to the static one in the current clinical practice, is proposed to gradually reduce the imaging dose in each treatment fraction. The CBCT image from each fraction is processed by a prior-image based nonlocal means (PINLM) module to enhance its quality. The increasing amount of prior information from previous CBCT images prevents degradation of image quality due to the reduced imaging dose. Two proof-of-principle experiments have been conducted using measured phantom data and Monte Carlo simulated patient data with deformation. Results: In the measured phantom case, utilizing a prior image acquired at 0.4 mAs, PINLM is able to improve the image quality of a CBCT acquired at 0.2 mAs by reducing the noise level from 34.95 to 12.45 HU. In the synthetic patient case, acceptable image quality is maintained at four consecutive fractions with gradually decreasing exposure levels of 0.4, 0.1, 0.07, and 0.05 mAs. When compared with the standard low-dose protocol of 0.4 mAs for each fraction, an overall imaging dose reduction of more than 60% is achieved. Conclusions: PINLM-PDC is able to reduce CBCT imaging dose in IGRT utilizing the temporal correlations among the sequence of CBCT images while maintaining the quality.
Can spectro-temporal complexity explain the autistic pattern of performance on auditory tasks?
Samson, Fabienne; Mottron, Laurent; Jemel, Boutheina; Belin, Pascal; Ciocca, Valter
2006-01-01
To test the hypothesis that the level of neural complexity explains the relative level of performance and brain activity in autistic individuals, available behavioural, ERP and imaging findings related to the perception of increasingly complex auditory material under various processing tasks in autism were reviewed. Tasks involving simple material (pure tones) and/or low-level operations (detection, labelling, chord disembedding, detection of pitch changes) show a superior level of performance and shorter ERP latencies. In contrast, tasks involving spectrally- and temporally-dynamic material and/or complex operations (evaluation, attention) are poorly performed by autistics, or generate inferior ERP activity or brain activation. The neural complexity required to perform auditory tasks may therefore explain the pattern of performance and activation of autistic individuals during auditory tasks.
What you see is what you expect: rapid scene understanding benefits from prior experience.
Greene, Michelle R; Botros, Abraham P; Beck, Diane M; Fei-Fei, Li
2015-05-01
Although we are able to rapidly understand novel scene images, little is known about the mechanisms that support this ability. Theories of optimal coding assert that prior visual experience can be used to ease the computational burden of visual processing. A consequence of this idea is that more probable visual inputs should be facilitated relative to more unlikely stimuli. In three experiments, we compared the perceptions of highly improbable real-world scenes (e.g., an underwater press conference) with common images matched for visual and semantic features. Although the two groups of images could not be distinguished by their low-level visual features, we found profound deficits related to the improbable images: Observers wrote poorer descriptions of these images (Exp. 1), had difficulties classifying the images as unusual (Exp. 2), and even had lower sensitivity to detect these images in noise than to detect their more probable counterparts (Exp. 3). Taken together, these results place a limit on our abilities for rapid scene perception and suggest that perception is facilitated by prior visual experience.
A digital pixel cell for address event representation image convolution processing
NASA Astrophysics Data System (ADS)
Camunas-Mesa, Luis; Acosta-Jimenez, Antonio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows for real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate events according to their information levels. Neurons with more information (activity, derivative of activities, contrast, motion, edges,...) generate more events per unit time, and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae,... Also, there has been a proposal for realizing programmable kernel image convolution chips. Such convolution chips would contain an array of pixels that perform weighted addition of events. Once a pixel has added sufficient event contributions to reach a fixed threshold, the pixel fires an event, which is then routed out of the chip for further processing. Such convolution chips have been proposed to be implemented using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events. This way, for a given technology, there is a fully digital implementation reference against which to compare the mixed-signal implementations. We have designed, implemented and tested a fully digital AER convolution pixel. This pixel will be used to implement a full AER convolution chip for programmable kernel image convolution processing.
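A minimal behavioural sketch of the pixel operation described above (weighted accumulation of incoming address events with an output event fired on reaching a threshold); the threshold and the event weights are illustrative assumptions, not the chip's actual parameters:

```python
class AERConvolutionPixel:
    """Behavioural model of a digital AER convolution pixel: each incoming
    event adds a signed kernel weight; when the accumulator crosses the
    threshold, the pixel emits an output event and resets."""

    def __init__(self, threshold=64):
        self.threshold = threshold
        self.accumulator = 0

    def receive_event(self, weight):
        self.accumulator += weight
        if abs(self.accumulator) >= self.threshold:
            self.accumulator = 0
            return True          # output event routed off-chip
        return False

# Example: a stream of weighted input events.
pixel = AERConvolutionPixel(threshold=64)
output_events = [pixel.receive_event(w) for w in [20, 30, 20, -5, 40, 30]]
print(output_events)   # the third and sixth events push the accumulator past threshold
```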
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maier, Joscha, E-mail: joscha.maier@dkfz.de; Sawall, Stefan; Kachelrieß, Marc
2014-05-15
Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in case of low dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels, which were simulated for real mouse data sets, the HDTV algorithm shows the best performance. At 50 mGy, the deviation from the reference obtained at 500 mGy was less than 4%. The LDPC algorithm also provides reasonable results, with deviations of less than 10% at 50 mGy, while the PCF and MKB reconstructions show larger deviations even at higher dose levels. Conclusions: LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes exemplarily illustrate that cardiac parameters can be accurately estimated at the lowest dose levels if sophisticated algorithms are used. This allows the dose to be reduced by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.
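A minimal sketch of the contrast-to-noise ratio figure of merit used above, computed from a signal ROI and a background ROI; the ROI placement, HU values and the specific CNR definition (background standard deviation as the noise term) are illustrative assumptions:

```python
import numpy as np

def contrast_to_noise_ratio(image, signal_roi, background_roi):
    """CNR = |mean(signal) - mean(background)| / std(background).

    signal_roi and background_roi are index expressions or boolean masks."""
    signal = image[signal_roi]
    background = image[background_roi]
    return abs(signal.mean() - background.mean()) / background.std()

# Example with a synthetic 2D slice (values in HU).
img = np.full((64, 64), -100.0) + np.random.normal(0, 12.0, (64, 64))
img[24:40, 24:40] += 200.0                       # insert a contrasting structure
cnr = contrast_to_noise_ratio(img,
                              np.s_[24:40, 24:40],
                              np.s_[0:16, 0:16])
print(f"CNR = {cnr:.1f}")
```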
Time-resolved polarization imaging by pump-probe (stimulated emission) fluorescence microscopy.
Buehler, C; Dong, C Y; So, P T; French, T; Gratton, E
2000-01-01
We report the application of pump-probe fluorescence microscopy in time-resolved polarization imaging. We derived the equations governing the pump-probe stimulated emission process and characterized the pump and probe laser power levels for signal saturation. Our emphasis is to use this novel methodology to image polarization properties of fluorophores across entire cells. As a feasibility study, we imaged a 15-microm orange latex sphere and found that there is depolarization that is possibly due to energy transfer among fluorescent molecules inside the sphere. We also imaged a mouse fibroblast labeled with CellTracker Orange CMTMR (5-(and-6)-(((4-chloromethyl)benzoyl)amino)tetramethyl-rhodamine). We observed that Orange CMTMR complexed with gluthathione rotates fast, indicating the relatively low fluid-phase viscosity of the cytoplasmic microenvironment as seen by Orange CMTMR. The measured rotational correlation time ranged from approximately 30 to approximately 150 ps. This work demonstrates the effectiveness of stimulated emission measurements in acquiring high-resolution, time-resolved polarization information across the entire cell. PMID:10866979
Obstacle Detection Algorithms for Rotorcraft Navigation
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia I.; Huang, Ying; Narasimhamurthy, Anand; Pande, Nitin; Ahumada, Albert (Technical Monitor)
2001-01-01
In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcrafts. Since they are very thin, their detection early enough so that the pilot has enough time to take evasive action is difficult, as their images can be less than one or two pixels wide. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post processing is performed to remove false alarms due to clutter.
Real-time 3D adaptive filtering for portable imaging systems
NASA Astrophysics Data System (ADS)
Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark
2015-03-01
Portable imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often not able to run with sufficient performance on a portable platform. In recent years, advanced multicore DSPs have been introduced that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms like 3D adaptive filtering, improving the image quality of portable medical imaging devices. In this study, the performance of a 3D adaptive filtering algorithm on a digital signal processor (DSP) is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a pace of 10 MVoxels/sec.
Volumetric segmentation of range images for printed circuit board inspection
NASA Astrophysics Data System (ADS)
Van Dop, Erik R.; Regtien, Paul P. L.
1996-10-01
Conventional computer vision approaches towards object recognition and pose estimation employ 2D grey-value or color imaging. As a consequence these images contain information about projections of a 3D scene only. The subsequent image processing will then be difficult, because the object coordinates are represented with just image coordinates. Only complicated low-level vision modules like depth from stereo or depth from shading can recover some of the surface geometry of the scene. Recent advances in fast range imaging have however paved the way towards 3D computer vision, since range data of the scene can now be obtained with sufficient accuracy and speed for object recognition and pose estimation purposes. This article proposes the coded-light range-imaging method together with superquadric segmentation to approach this task. Superquadric segments are volumetric primitives that describe global object properties with 5 parameters, which provide the main features for object recognition. In addition, the principal axes of a superquadric segment determine the pose of an object in the scene. The volumetric segmentation of a range image can be used to detect missing, false or badly placed components on assembled printed circuit boards. Furthermore, this approach will be useful to recognize and extract valuable or toxic electronic components on printed circuit board scrap that currently burdens the environment during electronic waste processing. Results on synthetic range images with errors constructed according to a verified noise model illustrate the capabilities of this approach.
Different source image fusion based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Xiao; Piao, Yan
2016-03-01
Video image fusion uses technical means to make video streams obtained by different image sensors complement each other, so as to obtain video that is rich in information and well suited to the human visual system. Infrared cameras penetrate harsh environments such as smoke, fog and low light, but their ability to capture image detail is poor and the result does not suit the human visual system. Visible-light imaging alone can provide detailed, high-resolution images suited to the visual system, but visible images are easily affected by the external environment. The fusion of infrared and visible video involves algorithms of high complexity and computational load, occupies considerable memory resources, and demands high clock rates; most existing implementations are in software (C++, C, etc.), with few based on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible images, software and hardware are combined: the registration parameters are obtained in MATLAB, and the gray-level weighted average method is implemented on the hardware (FPGA) platform to perform the information fusion, so that the fused image effectively increases the amount of information acquired.
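A minimal sketch of the grey-level weighted-average fusion rule mentioned above, applied to pre-registered infrared and visible frames; the fixed weights are illustrative assumptions, and the paper implements this step in FPGA hardware rather than software:

```python
import numpy as np

def weighted_average_fusion(visible, infrared, w_vis=0.6, w_ir=0.4):
    """Fuse two co-registered 8-bit grayscale frames with a fixed
    per-pixel weighted average."""
    fused = w_vis * visible.astype(np.float32) + w_ir * infrared.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Example with synthetic frames of the same registered geometry.
vis = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
ir = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
fused = weighted_average_fusion(vis, ir)
```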
Kawahito, Shoji; Seo, Min-Woong
2016-11-06
This paper discusses the noise reduction effect of multiple-sampling-based signal readout circuits for implementing ultra-low-noise image sensors. The correlated multiple sampling (CMS) technique has recently become an important technology for high-gain column readout circuits in low-noise CMOS image sensors (CISs). This paper reveals how the column CMS circuits, together with a pixel having a high-conversion-gain charge detector and low-noise transistor, realize deep sub-electron read noise levels based on the analysis of noise components in the signal readout chain from a pixel to the column analog-to-digital converter (ADC). The noise measurement results of experimental CISs are compared with the noise analysis and the effect of noise reduction as a function of the sampling number is discussed at the deep sub-electron level. Images taken with three CMS gains of two, 16, and 128 show a distinct advantage in image contrast for the gain of 128 (noise (median): 0.29 e- rms) when compared with the CMS gain of two (2.4 e- rms) or 16 (1.1 e- rms).
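A minimal numerical sketch of why correlated multiple sampling suppresses read noise: averaging M uncorrelated samples of the reset and signal levels reduces the white-noise component by roughly sqrt(M). The noise value and trial count are illustrative, not the paper's measurements; in a real sensor the improvement saturates because 1/f noise is not white, which is part of the analysis in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cms_read_noise(read_noise_e, m_samples, n_trials=100_000):
    """Simulate correlated multiple sampling: average M samples of the reset
    level and M samples of the signal level, then take their difference."""
    reset = rng.normal(0.0, read_noise_e, (n_trials, m_samples)).mean(axis=1)
    signal = rng.normal(0.0, read_noise_e, (n_trials, m_samples)).mean(axis=1)
    return (signal - reset).std()

for m in (1, 2, 16, 128):
    print(f"M = {m:3d}: effective read noise ~ {cms_read_noise(2.0, m):.2f} e- rms")
```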
Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex.
Malach, R; Reppas, J B; Benson, R R; Kwong, K K; Jiang, H; Kennedy, W A; Ledden, P J; Brady, T J; Rosen, B R; Tootell, R B
1995-01-01
The stages of integration leading from local feature analysis to object recognition were explored in human visual cortex by using the technique of functional magnetic resonance imaging. Here we report evidence for object-related activation. Such activation was located at the lateral-posterior aspect of the occipital lobe, just abutting the posterior aspect of the motion-sensitive area MT/V5, in a region termed the lateral occipital complex (LO). LO showed preferential activation to images of objects, compared to a wide range of texture patterns. This activation was not caused by a global difference in the Fourier spatial frequency content of objects versus texture images, since object images produced enhanced LO activation compared to textures matched in power spectra but randomized in phase. The preferential activation to objects also could not be explained by different patterns of eye movements: similar levels of activation were observed when subjects fixated on the objects and when they scanned the objects with their eyes. Additional manipulations such as spatial frequency filtering and a 4-fold change in visual size did not affect LO activation. These results suggest that the enhanced responses to objects were not a manifestation of low-level visual processing. A striking demonstration that activity in LO is uniquely correlated to object detectability was produced by the "Lincoln" illusion, in which blurring of objects digitized into large blocks paradoxically increases their recognizability. Such blurring led to significant enhancement of LO activation. Despite the preferential activation to objects, LO did not seem to be involved in the final, "semantic," stages of the recognition process. Thus, objects varying widely in their recognizability (e.g., famous faces, common objects, and unfamiliar three-dimensional abstract sculptures) activated it to a similar degree. These results are thus evidence for an intermediate link in the chain of processing stages leading to object recognition in human visual cortex. PMID:7667258
Diether, S; Schaeffel, F
1999-07-01
Experiments in animal models have shown that the retina analyzes the image to identify the position of the plane of focus and fine-tunes the growth of the underlying sclera. It is fundamental to the understanding of the development of refractive errors to know which image features are processed. Since the position of the image plane fluctuates continuously with accommodative status and viewing distance, a meaningful control of refractive development can only occur by an averaging procedure with a long time constant. As a candidate for a retinal signal for enhanced eye growth and myopia, we propose the level of contrast adaptation, which varies with the average amount of defocus. Using a behavioural paradigm, we have found in chickens (1) that contrast adaptation (CA, here referred to as an increase in contrast sensitivity) occurs at low spatial frequencies (0.2 cyc/deg) already after 1.5 h of wearing frosted goggles which cause deprivation myopia, (2) that CA also occurs with negative lenses (-7.4D) and positive lenses (+6.9D) after 1.5 h, at least if accommodation is paralyzed and, (3) that CA occurs at a retinal level or has, at least, a retinal component. Furthermore, we have studied the effects of atropine and reserpine, which both suppress myopia development, on CA. Quisqualate, which causes retinal degeneration but leaves emmetropization functional, was also tested. We found that both atropine and reserpine increase contrast sensitivity to a level where no further CA could be induced by frosted goggles. Quisqualate increased only the variability of refractive development and of contrast sensitivity. Taken together, CA occurring during extended periods of defocus is a possible candidate for a retinal error signal for myopia development. However, the situation is complicated by the fact that there must be a second image processing mode generating a powerful inhibitory growth signal if the image is in front of the retina, even with poor images (Diether, S., & Schaeffel, F. (1999).
Low-Light Image Enhancement Using Adaptive Digital Pixel Binning
Yoo, Yoonjong; Im, Jaehyun; Paik, Joonki
2015-01-01
This paper presents an image enhancement algorithm for low-light scenes in an environment with insufficient illumination. Simple amplification of intensity exhibits various undesired artifacts: noise amplification, intensity saturation, and loss of resolution. In order to enhance low-light images without undesired artifacts, a novel digital binning algorithm is proposed that considers brightness, context, noise level, and anti-saturation of a local region in the image. The proposed algorithm does not require any modification of the image sensor or additional frame-memory; it needs only two line-memories in the image signal processor (ISP). Since the proposed algorithm does not use an iterative computation, it can be easily embedded in an existing digital camera ISP pipeline containing a high-resolution image sensor. PMID:26121609
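A minimal sketch of the general idea of brightness-adaptive binning: dark regions are blended toward a local average (gaining SNR at the cost of resolution) while bright regions keep the original pixel, which limits saturation. The blending rule, window size and threshold are illustrative assumptions, not the paper's two-line-memory algorithm:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_binning(img, dark_level=64):
    """Blend each pixel with its local 3x3 average; the darker the region,
    the stronger the binning, which suppresses noise in dark areas without
    saturating bright ones."""
    img = img.astype(np.float32)
    binned = uniform_filter(img, size=3)                # local average (soft binning)
    weight = np.clip(binned / dark_level, 0.0, 1.0)     # 0 = dark, 1 = bright
    out = weight * img + (1.0 - weight) * binned
    return np.clip(out, 0, 255).astype(np.uint8)
```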
Liu, Qi; Tawackoli, Wafa; Pelled, Gadi; Fan, Zhaoyang; Jin, Ning; Natsuaki, Yutaka; Bi, Xiaoming; Gart, Avrom; Bae, Hyun; Gazit, Dan; Li, Debiao
2015-03-01
Low pH is associated with intervertebral disc (IVD)-generated low back pain (LBP). The purpose of this work was to develop an in vivo pH level-dependent magnetic resonance imaging (MRI) method for detecting discogenic LBP, without using exogenous contrast agents. The ratio of R1ρ dispersion and chemical exchange saturation transfer (CEST) (RROC) was used for pH-level dependent imaging of the IVD while eliminating the effect of labile proton concentration. The technique was validated by numerical simulations and studies on phantoms and ex vivo porcine spines. Four male (ages 42.8 ± 18.3) and two female patients (ages 55.5 ± 2.1) with LBP and scheduled for discography were examined with the method on a 3.0 Tesla MR scanner. RROC measurements were compared with discography outcomes using paired t-test. Simulation and phantom results indicated RROC is a concentration independent and pH level-dependent technique. Porcine spine study results found higher RROC value was related to lower pH level. Painful discs based on discography had significant higher RROC values than those with negative diagnosis (P < 0.05). RROC imaging is a promising pH level dependent MRI technique that has the potential to be a noninvasive imaging tool to detect painful IVDs in vivo. © 2014 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
2012-01-01
Topics include: Computational Ghost Imaging for Remote Sensing; Digital Architecture for a Trace Gas Sensor Platform; Dispersed Fringe Sensing Analysis - DFSA; Indium Tin Oxide Resistor-Based Nitric Oxide Microsensors; Gas Composition Sensing Using Carbon Nanotube Arrays; Sensor for Boundary Shear Stress in Fluid Flow; Model-Based Method for Sensor Validation; Qualification of Engineering Camera for Long-Duration Deep Space Missions; Remotely Powered Reconfigurable Receiver for Extreme Environment Sensing Platforms; Bump Bonding Using Metal-Coated Carbon Nanotubes; In Situ Mosaic Brightness Correction; Simplex GPS and InSAR Inversion Software; Virtual Machine Language 2.1; Multi-Scale Three-Dimensional Variational Data Assimilation System for Coastal Ocean Prediction; Pandora Operation and Analysis Software; Fabrication of a Cryogenic Bias Filter for Ultrasensitive Focal Plane; Processing of Nanosensors Using a Sacrificial Template Approach; High-Temperature Shape Memory Polymers; Modular Flooring System; Non-Toxic, Low-Freezing, Drop-In Replacement Heat Transfer Fluids; Materials That Enhance Efficiency and Radiation Resistance of Solar Cells; Low-Cost, Rugged High-Vacuum System; Static Gas-Charging Plug; Floating Oil-Spill Containment Device; Stemless Ball Valve; Improving Balance Function Using Low Levels of Electrical Stimulation of the Balance Organs; Oxygen-Methane Thruster; Lunar Navigation Determination System - LaNDS; Launch Method for Kites in Low-Wind or No-Wind Conditions; Supercritical CO2 Cleaning System for Planetary Protection and Contamination Control Applications; Design and Performance of a Wideband Radio Telescope; Finite Element Models for Electron Beam Freeform Fabrication Process; Autonomous Information Unit for Fine-Grain Data Access Control and Information Protection in a Net-Centric System; Vehicle Detection for RCTA/ANS (Autonomous Navigation System); Image Mapping and Visual Attention on the Sensory Ego-Sphere; HyDE Framework for Stochastic and Hybrid Model-Based Diagnosis; and IMAGESEER - IMAGEs for Education and Research.
Semantic classification of business images
NASA Astrophysics Data System (ADS)
Erol, Berna; Hull, Jonathan J.
2006-01-01
Digital cameras are becoming increasingly common for capturing information in business settings. In this paper, we describe a novel method for classifying images into the following semantic classes: document, whiteboard, business card, slide, and regular images. Our method is based on combining low-level image features, such as text color, layout, and handwriting features with high-level OCR output analysis. Several Support Vector Machine Classifiers are combined for multi-class classification of input images. The system yields 95% accuracy in classification.
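A minimal sketch of combining several SVM classifiers for the five-way decision described above, using scikit-learn's one-vs-rest wrapper; the feature vectors here are random placeholders standing in for the paper's text-color, layout, handwriting and OCR-derived features, and the class list follows the abstract:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier

CLASSES = ["document", "whiteboard", "business_card", "slide", "regular"]

# Placeholder feature vectors; a real pipeline would concatenate low-level
# image features with OCR output statistics.
X_train = np.random.rand(200, 32)
y_train = np.random.randint(0, len(CLASSES), 200)

clf = OneVsRestClassifier(SVC(kernel="rbf", probability=True))
clf.fit(X_train, y_train)
pred = clf.predict(np.random.rand(5, 32))
print([CLASSES[i] for i in pred])
```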
Wilming, Niklas; Kietzmann, Tim C; Jutras, Megan; Xue, Cheng; Treue, Stefan; Buffalo, Elizabeth A; König, Peter
2017-01-01
Oculomotor selection exerts a fundamental impact on our experience of the environment. To better understand the underlying principles, researchers typically rely on behavioral data from humans, and electrophysiological recordings in macaque monkeys. This approach rests on the assumption that the same selection processes are at play in both species. To test this assumption, we compared the viewing behavior of 106 humans and 11 macaques in an unconstrained free-viewing task. Our data-driven clustering analyses revealed distinct human and macaque clusters, indicating species-specific selection strategies. Yet, cross-species predictions were found to be above chance, indicating some level of shared behavior. Analyses relying on computational models of visual saliency indicate that such cross-species commonalities in free viewing are largely due to similar low-level selection mechanisms, with only a small contribution by shared higher level selection mechanisms and with consistent viewing behavior of monkeys being a subset of the consistent viewing behavior of humans. © The Author 2017. Published by Oxford University Press.
Efficient fuzzy C-means architecture for image segmentation.
Li, Hui-Ya; Hwang, Wen-Jyi; Chang, Chia-Yen
2011-01-01
This paper presents a novel VLSI architecture for image segmentation. The architecture is based on the fuzzy c-means algorithm with spatial constraint for reducing the misclassification rate. In the architecture, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to evade the large storage requirement. In addition, an efficient pipelined circuit is used for the updating process for accelerating the computational speed. Experimental results show that the proposed circuit is an effective alternative for real-time image segmentation with low area cost and low misclassification rate.
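A minimal NumPy sketch of the standard fuzzy c-means updates that the architecture merges into a single pipelined pass; the spatial-constraint term of the paper is omitted, and the fuzzifier m, cluster count and iteration count are illustrative assumptions:

```python
import numpy as np

def fuzzy_c_means(pixels, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """Standard FCM on a grayscale image: alternate membership and centroid
    updates (the hardware merges these into one updating process)."""
    rng = np.random.default_rng(seed)
    x = pixels.reshape(-1, 1).astype(np.float64)            # (N, 1)
    centroids = rng.choice(x.ravel(), n_clusters).reshape(1, -1)
    for _ in range(n_iter):
        dist = np.abs(x - centroids) + 1e-9                  # (N, C)
        u = dist ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                    # membership matrix
        um = u ** m
        centroids = ((um * x).sum(axis=0) / um.sum(axis=0)).reshape(1, -1)
    labels = u.argmax(axis=1).reshape(pixels.shape)
    return labels, centroids.ravel()

# Example on a synthetic 8-bit image.
img = np.random.randint(0, 256, (64, 64))
labels, centers = fuzzy_c_means(img)
```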
Bühnemann, Claudia; Li, Simon; Yu, Haiyue; Branford White, Harriet; Schäfer, Karl L; Llombart-Bosch, Antonio; Machado, Isidro; Picci, Piero; Hogendoorn, Pancras C W; Athanasou, Nicholas A; Noble, J Alison; Hassan, A Bassim
2014-01-01
Driven by genomic somatic variation, tumour tissues are typically heterogeneous, yet unbiased quantitative methods are rarely used to analyse heterogeneity at the protein level. Motivated by this problem, we developed automated image segmentation of images of multiple biomarkers in Ewing sarcoma to generate distributions of biomarkers between and within tumour cells. We further integrate high dimensional data with patient clinical outcomes utilising random survival forest (RSF) machine learning. Using material from cohorts of genetically diagnosed Ewing sarcoma with EWSR1 chromosomal translocations, confocal images of tissue microarrays were segmented with level sets and watershed algorithms. Each cell nucleus and cytoplasm were identified in relation to DAPI and CD99, respectively, and protein biomarkers (e.g. Ki67, pS6, Foxo3a, EGR1, MAPK) localised relative to nuclear and cytoplasmic regions of each cell in order to generate image feature distributions. The image distribution features were analysed with RSF in relation to known overall patient survival from three separate cohorts (185 informative cases). Variation in pre-analytical processing resulted in elimination of a high number of non-informative images that had poor DAPI localisation or biomarker preservation (67 cases, 36%). The distribution of image features for biomarkers in the remaining high quality material (118 cases, 104 features per case) were analysed by RSF with feature selection, and performance assessed using internal cross-validation, rather than a separate validation cohort. A prognostic classifier for Ewing sarcoma with low cross-validation error rates (0.36) was comprised of multiple features, including the Ki67 proliferative marker and a sub-population of cells with low cytoplasmic/nuclear ratio of CD99. Through elimination of bias, the evaluation of high-dimensionality biomarker distribution within cell populations of a tumour using random forest analysis in quality controlled tumour material could be achieved. Such an automated and integrated methodology has potential application in the identification of prognostic classifiers based on tumour cell heterogeneity.
The fusion of satellite and UAV data: simulation of high spatial resolution band
NASA Astrophysics Data System (ADS)
Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata
2017-10-01
Remote sensing techniques used in precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but rather low spectral resolution; therefore, the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors propose the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In this research, the authors present the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral satellite imagery acquired with satellite sensors, i.e. Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, panchromatic bands were simulated from the RGB data as linear combinations of the spectral channels. Next, for the simulated bands and the multispectral satellite images, the Gram-Schmidt pansharpening method was applied. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
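A minimal sketch of the panchromatic-band simulation step (a weighted linear combination of the UAV RGB channels); the channel weights are illustrative assumptions, and the subsequent Gram-Schmidt fusion performed in the paper is not reproduced here:

```python
import numpy as np

def simulate_panchromatic(rgb, weights=(0.3, 0.4, 0.3)):
    """Simulate a high-resolution panchromatic band as a linear combination
    of the UAV RGB channels (weights are illustrative)."""
    w = np.asarray(weights, dtype=np.float64)
    w /= w.sum()
    return (rgb.astype(np.float64) * w).sum(axis=-1)

# rgb: H x W x 3 orthophoto from the UAV; pan: H x W simulated panchromatic band.
rgb = np.random.randint(0, 256, (1024, 1024, 3), dtype=np.uint8)
pan = simulate_panchromatic(rgb)
```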
A sub-sampled approach to extremely low-dose STEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, A.; Luzi, L.; Yang, H.
The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤ 1 e-/Å2) without changing either the operation of the microscope or the physics of the imaging process. We show that 1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and 2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam sensitive materials and in-situ dynamic processes at the resolution limit of the aberration corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).
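A minimal sketch of the random (non-adaptive) sub-sampling idea: acquire a random fraction of pixel positions and fill the rest from the measured pixels. Simple interpolation stands in here for the paper's inpainting algorithm, which is not reproduced; the sampling fraction is an illustrative assumption:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

def subsample_and_inpaint(image, fraction=0.2):
    """Keep a random fraction of pixels and fill the rest by cubic interpolation."""
    h, w = image.shape
    mask = rng.random((h, w)) < fraction          # acquired pixel positions
    ys, xs = np.nonzero(mask)
    values = image[mask]
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    filled = griddata((ys, xs), values, (grid_y, grid_x),
                      method="cubic", fill_value=values.mean())
    return mask, filled

image = np.random.rand(128, 128)
mask, reconstruction = subsample_and_inpaint(image)
```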
Gintautas, Vadas; Ham, Michael I.; Kunsberg, Benjamin; Barr, Shawn; Brumby, Steven P.; Rasmussen, Craig; George, John S.; Nemenman, Ilya; Bettencourt, Luís M. A.; Kenyon, Garret T.
2011-01-01
Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas. PMID:21998562
Superiority of terahertz over infrared transmission through bandages and burn wound ointments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suen, Jonathan Y., E-mail: j.suen@duke.edu; Padilla, Willie J.
Terahertz electromagnetic waves have long been proposed to be ideal for spectroscopy and imaging through non-polar dielectric materials that contain no water. Terahertz radiation may thus be useful for monitoring burn and wound injury recovery, as common care treatments involve application of both a clinical dressing and topical ointment. Here, we investigate the optical properties of typical care treatments in the millimeter wave (150–300 GHz), terahertz (0.3–3 THz), and infrared (14.5–0.67 μm) ranges of the electromagnetic spectrum. We find that THz radiation realizes low absorption coefficients and high levels of transmission compared to infrared wavelengths, which were strongly attenuated. Terahertz imaging can enable safe, non-ionizing, noninvasive monitoring of the healing process directly through clinical dressings and recovery ointments, minimizing the frequency of dressing changes and thus increasing the rate of the healing process.
Metal artifact reduction using a patch-based reconstruction for digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Borges, Lucas R.; Bakic, Predrag R.; Maidment, Andrew D. A.; Vieira, Marcelo A. C.
2017-03-01
Digital breast tomosynthesis (DBT) is rapidly emerging as the main clinical tool for breast cancer screening. Although several reconstruction methods for DBT are described by the literature, one common issue is the interplane artifacts caused by out-of-focus features. For breasts containing highly attenuating features, such as surgical clips and large calcifications, the artifacts are even more apparent and can limit the detection and characterization of lesions by the radiologist. In this work, we propose a novel method of combining backprojected data into tomographic slices using a patch-based approach, commonly used in denoising. Preliminary tests were performed on a geometry phantom and on an anthropomorphic phantom containing metal inserts. The reconstructed images were compared to a commercial reconstruction solution. Qualitative assessment of the reconstructed images provides evidence that the proposed method reduces artifacts while maintaining low noise levels. Objective assessment supports the visual findings. The artifact spread function shows that the proposed method is capable of suppressing artifacts generated by highly attenuating features. The signal difference to noise ratio shows that the noise levels of the proposed and commercial methods are comparable, even though the commercial method applies post-processing filtering steps, which were not implemented on the proposed method. Thus, the proposed method can produce tomosynthesis reconstructions with reduced artifacts and low noise levels.
Single-cell imaging tools for brain energy metabolism: a review
San Martín, Alejandro; Sotelo-Hitschfeld, Tamara; Lerchundi, Rodrigo; Fernández-Moncada, Ignacio; Ceballo, Sebastian; Valdebenito, Rocío; Baeza-Lehnert, Felipe; Alegría, Karin; Contreras-Baeza, Yasna; Garrido-Gerter, Pamela; Romero-Gómez, Ignacio; Barros, L. Felipe
2014-01-01
Abstract. Neurophotonics comes to light at a time in which advances in microscopy and improved calcium reporters are paving the way toward high-resolution functional mapping of the brain. This review relates to a parallel revolution in metabolism. We argue that metabolism needs to be approached both in vitro and in vivo, and that it does not just exist as a low-level platform but is also a relevant player in information processing. In recent years, genetically encoded fluorescent nanosensors have been introduced to measure glucose, glutamate, ATP, NADH, lactate, and pyruvate in mammalian cells. Reporting relative metabolite levels, absolute concentrations, and metabolic fluxes, these sensors are instrumental for the discovery of new molecular mechanisms. Sensors continue to be developed, which together with a continued improvement in protein expression strategies and new imaging technologies, herald an exciting era of high-resolution characterization of metabolism in the brain and other organs. PMID:26157964
NASA Astrophysics Data System (ADS)
Chen, Tsing-Chang; Yen, Ming-Cheng; Wu, Kuang-Der; Ng, Thomas
1992-08-01
The time evolution of the Indian monsoon is closely related to locations of the northward migrating monsoon troughs and ridges, which can be well depicted with the 30-60-day filtered 850-mb streamfunction. Thus, long-range forecasts of the large-scale low-level monsoon can be obtained from those of the filtered 850-mb streamfunction. These long-range forecasts were made in this study in terms of the Auto-Regressive (AR) Moving-Average process. The historical series of the AR model were constructed with the 30-60-day filtered 850-mb streamfunction [ψ̃(850 mb)] time series of 4 months. However, the phase of the last low-frequency cycle in the ψ̃(850 mb) time series can be skewed by the bandpass filtering. To reduce this phase skewness, a simple scheme is introduced. With this phase modification of the filtered 850-mb streamfunction, we performed the pilot forecast experiments of three summers with the AR forecast process. The forecast errors in the positions of the northward propagating monsoon troughs and ridges at Day 20 are generally within the range of 1-2 days behind the observed, except in some extreme cases.
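As a concrete illustration of the forecasting machinery, here is a minimal sketch of a univariate AR fit-and-forecast for a single band-pass-filtered time series, using the Yule-Walker equations and a 20-step extension. The toy series, model order, and step count are illustrative assumptions; the streamfunction data and the phase-modification scheme from the study are not reproduced.

```python
"""Univariate AR forecast of a band-pass-filtered time series (toy sketch)."""
import numpy as np


def yule_walker(x, order):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # biased autocovariance estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])                  # AR coefficients a_1..a_p


def ar_forecast(x, coeffs, steps):
    history = list(x[-len(coeffs):])                  # oldest .. newest
    out = []
    for _ in range(steps):
        nxt = np.dot(coeffs, history[::-1])           # a_1 multiplies the newest value
        out.append(nxt)
        history = history[1:] + [nxt]
    return np.array(out)


if __name__ == "__main__":
    t = np.arange(120)
    series = np.sin(2 * np.pi * t / 45) + 0.1 * np.random.default_rng(1).normal(size=t.size)
    a = yule_walker(series, order=6)
    print(ar_forecast(series, a, steps=20).round(2))
```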
Analysis of x-ray tomography data of an extruded low density styrenic foam: an image analysis study
NASA Astrophysics Data System (ADS)
Lin, Jui-Ching; Heeschen, William
2016-10-01
Extruded styrenic foams are low density foams that are widely used for thermal insulation. It is difficult to precisely characterize the cell structure of low density foams by traditional cross-section viewing because of the frailty of the cell walls. X-ray computed tomography (CT) is a non-destructive, three dimensional structure characterization technique with great potential for characterizing styrenic foams. Unfortunately, the intrinsic artifacts of the data and the artifacts generated during image reconstruction are often comparable in size and shape to the thin walls of the foam, making robust and reliable analysis of cell sizes challenging. We explored three image processing approaches to clean up artifacts in the reconstructed images - an intensity based approach, an intensity variance based approach, and a machine learning based approach - thus allowing quantitative three dimensional determination of cell size in a low density styrenic foam; the machine learning image feature classification method was shown to be the best. After the images were cleaned up with each of the three methods, individual cells were segmented and the cell sizes were measured and compared. Although the collected data did not yield enough measurements for good cell-size statistics, this can be resolved by measuring multiple samples or by increasing the imaging field of view.
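To make the first two clean-up approaches concrete, the sketch below applies a plain intensity threshold and a local-variance filter to a reconstructed slice. The thresholds, filter size, and toy data are assumptions for illustration, and the machine-learning classifier that performed best in the study is not shown.

```python
"""Two simple clean-up passes for a reconstructed CT slice of a foam (sketch)."""
import numpy as np
from scipy.ndimage import uniform_filter


def intensity_mask(slice_img, threshold):
    """Keep only voxels brighter than a fixed intensity threshold (cell walls)."""
    return slice_img > threshold


def variance_mask(slice_img, size=5, var_threshold=1e-3):
    """Keep voxels with high local variance; flat noise regions are dropped."""
    mean = uniform_filter(slice_img, size)
    mean_sq = uniform_filter(slice_img ** 2, size)
    local_var = np.clip(mean_sq - mean ** 2, 0.0, None)
    return local_var > var_threshold


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.normal(0.02, 0.01, size=(128, 128))     # toy noise floor
    img[40, :] += 0.2                                  # a thin "cell wall"
    walls = intensity_mask(img, 0.1) | variance_mask(img)
    print(walls.sum(), "candidate wall pixels")
```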
Chen, Shuo; Luo, Chenggao; Wang, Hongqiang; Deng, Bin; Cheng, Yongqiang; Zhuang, Zhaowen
2018-04-26
As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporally independent signals with coded apertures. However, there are still two problems in three-dimensional (3D) TCAI. Firstly, the large-scale reference-signal matrix based on meshing the 3D imaging area creates a heavy computational burden, leading to unsatisfactory efficiency. Secondly, it is difficult to resolve the target under low signal-to-noise ratio (SNR). In this paper, we propose a 3D imaging method based on matched filtering (MF) and a convolutional neural network (CNN), which can reduce the computational burden and achieve high-resolution imaging for low-SNR targets. For the frequency-hopping (FH) signal, the original echo is processed with MF. By extracting the processed echo in different spike pulses separately, targets in different imaging planes are reconstructed simultaneously to decompose the global computational complexity, and are then synthesized together to reconstruct the 3D target. Based on the conventional TCAI model, we derive and build a new TCAI model based on MF. Furthermore, the CNN is designed to teach MF-TCAI how to better reconstruct low-SNR targets. The experimental results demonstrate that MF-TCAI achieves impressive imaging ability and efficiency under low SNR. Moreover, MF-TCAI has learned to better resolve low-SNR 3D targets with the help of the CNN. In summary, the proposed 3D TCAI can achieve: (1) low-SNR high-resolution imaging by using MF; (2) efficient 3D imaging by downsizing the large-scale reference-signal matrix; and (3) intelligent imaging with the CNN. Therefore, TCAI based on MF and CNN has great potential in applications such as security screening, nondestructive detection, and medical diagnosis.
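The matched-filtering step can be illustrated with a one-dimensional toy analogue: correlating a noisy echo with the known transmitted frequency-hopping waveform concentrates target energy into spike pulses. This is only a sketch of the MF idea; the coded-aperture geometry, the plane-by-plane reconstruction, and the CNN stage are not modelled, and all signal parameters are assumptions.

```python
"""Matched filtering of a frequency-hopping echo (1-D toy analogue)."""
import numpy as np


def fh_waveform(freqs, hop_len, fs):
    t = np.arange(hop_len) / fs
    return np.concatenate([np.cos(2 * np.pi * f * t) for f in freqs])


if __name__ == "__main__":
    fs, hop_len = 1e4, 128
    tx = fh_waveform([500, 1200, 2100, 900], hop_len, fs)
    echo = np.zeros(4096)
    echo[1000:1000 + tx.size] += 0.3 * tx              # weak target return at delay 1000
    echo += np.random.default_rng(3).normal(0, 0.5, echo.size)  # low SNR
    mf_out = np.correlate(echo, tx, mode="valid")      # matched filter
    print("peak (true delay 1000):", int(np.argmax(np.abs(mf_out))))
```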
Hierarchical layered and semantic-based image segmentation using ergodicity map
NASA Astrophysics Data System (ADS)
Yadegar, Jacob; Liu, Xiaoqing
2010-04-01
Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision remains far superior to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, and edge) by utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space-filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogeneous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for generating contextual topological object/region relationships. Experiments have been conducted in the maritime image environment, where the segmented layered semantic objects include basic level objects (i.e., sky/land/water) and deeper level objects in the sky/land/water surfaces. Experimental results demonstrate that the proposed algorithm can robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.
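The top-down decomposition idea can be sketched in simplified form: recursively split a region until a dissimilarity measure falls below a tolerance. The sketch below substitutes a quadtree split for the Peano-Cesaro triangulation and plain intensity variance for the ergodicity measure, so it illustrates the decomposition strategy rather than the published algorithm; the tolerance and minimum block size are assumptions.

```python
"""Top-down region decomposition driven by a dissimilarity measure (sketch)."""
import numpy as np


def split(img, y, x, h, w, tol, min_size, regions):
    block = img[y:y + h, x:x + w]
    if block.var() <= tol or min(h, w) <= min_size:
        regions.append((y, x, h, w))                  # homogeneous enough: a leaf region
        return
    h2, w2 = h // 2, w // 2
    for dy, dx, hh, ww in [(0, 0, h2, w2), (0, w2, h2, w - w2),
                           (h2, 0, h - h2, w2), (h2, w2, h - h2, w - w2)]:
        split(img, y + dy, x + dx, hh, ww, tol, min_size, regions)


if __name__ == "__main__":
    img = np.zeros((128, 128))
    img[:, 64:] = 1.0                                  # two flat "semantic" regions
    leaves = []
    split(img, 0, 0, 128, 128, tol=1e-3, min_size=4, regions=leaves)
    print(len(leaves), "leaf regions")
```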
InGaAs focal plane arrays for low-light-level SWIR imaging
NASA Astrophysics Data System (ADS)
MacDougal, Michael; Hood, Andrew; Geske, Jon; Wang, Jim; Patel, Falgun; Follman, David; Manzo, Juan; Getty, Jonathan
2011-06-01
Aerius Photonics will present their latest developments in large InGaAs focal plane arrays, which are used for low light level imaging in the short wavelength infrared (SWIR) regime. Aerius will present imaging in both 1280x1024 and 640x512 formats, along with characterization of the FPAs, including dark current measurements. Aerius will also show the results of developing SWIR FPAs for high temperatures, including imagery and dark current data. Finally, Aerius will show results of using the SWIR camera with Aerius' SWIR illuminators based on VCSEL technology.
Spot restoration for GPR image post-processing
Paglieroni, David W; Beer, N. Reginald
2014-05-20
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signals to suppress certain undesirable effects. It then generates synthetic aperture radar images from real aperture radar images formed from the pre-processed return signals, and post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicate the presence of a subsurface object.
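The peak-identification step described above can be sketched as picking local maxima of the pixel energy that exceed a global threshold. This is a stand-in for the patented detector, not a reproduction of it; the neighbourhood size and threshold rule are illustrative assumptions.

```python
"""Peak picking in an energy map of a post-processed GPR image frame (sketch)."""
import numpy as np
from scipy.ndimage import maximum_filter


def detect_peaks(frame, neighborhood=9, k_sigma=4.0):
    energy = frame ** 2
    local_max = maximum_filter(energy, size=neighborhood) == energy
    strong = energy > energy.mean() + k_sigma * energy.std()
    return np.argwhere(local_max & strong)            # (row, col) candidate objects


if __name__ == "__main__":
    rng = np.random.default_rng(4)
    frame = rng.normal(size=(200, 200))
    frame[120, 80] = 12.0                              # buried-object response
    print(detect_peaks(frame))
```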
NASA Astrophysics Data System (ADS)
Wang, Xiaohui; Couwenhoven, Mary E.; Foos, David H.; Doran, James; Yankelevitz, David F.; Henschke, Claudia I.
2008-03-01
An image-processing method has been developed to improve the visibility of tube and catheter features in portable chest x-ray (CXR) images captured in the intensive care unit (ICU). The method is based on a multi-frequency approach: the input image is decomposed into different spatial frequency bands, and the bands that contain the tube and catheter signals are individually enhanced by nonlinear boosting functions. Using a random sampling strategy, 50 cases were retrospectively selected for the study from a large database of portable CXR images collected from multiple institutions over a two-year period. All images used in the study were captured using photo-stimulable, storage phosphor computed radiography (CR) systems. Each image was processed in two ways: with default image-processing parameters such as those used in clinical settings (control), and with the new tube and catheter enhancement algorithm (test). Three board-certified radiologists participated in a reader study to assess differences in both detection-confidence performance and diagnostic efficiency between the control and test images. Images were evaluated on a diagnostic-quality, 3-megapixel monochrome monitor. Two scenarios were studied: a baseline scenario representative of today's workflow (a single control image presented with window/level adjustments enabled) and a test scenario (a control/test image pair presented with a toggle enabled and the window/level settings disabled). The radiologists were asked to read the images in each scenario as they normally would for clinical diagnosis. Trend analysis indicates that the test scenario offers improved reading efficiency while providing detection capability as good as or better than the baseline scenario.
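The general multi-frequency idea can be sketched as follows: split the image into low-, mid-, and high-frequency bands with Gaussian filters, boost the band expected to carry thin-line detail with a soft nonlinear gain, and recombine. The sigmas, gain curve, and toy data are assumptions, not the algorithm evaluated in the study.

```python
"""Band-wise enhancement of thin tube/catheter structure in a CXR image (sketch)."""
import numpy as np
from scipy.ndimage import gaussian_filter


def enhance_tubes(img, sigma_fine=1.0, sigma_coarse=4.0, gain=2.0):
    low = gaussian_filter(img, sigma_coarse)
    mid = gaussian_filter(img, sigma_fine) - low       # band expected to carry thin-line detail
    high = img - gaussian_filter(img, sigma_fine)
    scale = np.abs(mid).max() + 1e-12
    boosted_mid = gain * scale * np.tanh(mid / scale)  # nonlinear boost that saturates extremes
    return low + boosted_mid + high


if __name__ == "__main__":
    rng = np.random.default_rng(5)
    cxr = rng.normal(0.5, 0.05, size=(256, 256))
    cxr[:, 128] += 0.05                                # faint catheter-like line
    out = enhance_tubes(cxr)
    print(out.shape, float(out[:, 128].mean() - out[:, 100].mean()))
```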
MToS: A Tree of Shapes for Multivariate Images.
Carlinet, Edwin; Géraud, Thierry
2015-12-01
The topographic map of a gray-level image, also called the tree of shapes, provides a high-level hierarchical representation of the image contents. This representation, invariant to contrast changes and to contrast inversion, has proved very useful for many image processing and pattern recognition tasks. Its definition relies on the total ordering of pixel values, so this representation does not exist for color images or, more generally, multivariate images. Common workarounds, such as marginal processing or imposing a total order on the data, are not satisfactory and raise many problems. This paper presents a method to build a tree-based representation of multivariate images that marginally features the same properties as the gray-level tree of shapes. Briefly put, we do not impose an arbitrary ordering on values; we rely only on the inclusion relationship between shapes in the image definition domain. The interest of having a contrast-invariant and self-dual representation of multivariate images is illustrated through several applications (filtering, segmentation, and object recognition) on different types of data: color natural images, document images, satellite hyperspectral imaging, multimodal medical imaging, and videos.
Face recognition system for set-top box-based intelligent TV.
Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Park, Kang Ryoung
2014-11-18
Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of an STB is quite low, the smart TV functionalities that can be implemented in an STB are very limited. For this reason, little research has been conducted on face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted for smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high-magnification zoom lenses, or camera systems with panning and tilting devices that enable face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of limitations of size and cost, and only small, low-cost web-cameras can be used. The resulting face recognition performance is degraded because of the limited resolution and quality of the images. We therefore propose a new face recognition system for intelligent TVs that overcomes the limitations of a low-resource set-top box and low-cost web-cameras. We implement the face recognition system as a software algorithm that does not require special devices or cameras. Our research has the following four novelties: first, candidate regions of the viewer's face are detected in an image captured by a camera connected to the STB via low-complexity background subtraction and face color filtering; second, the detected candidate face regions are transmitted to a server with high processing power to detect face regions accurately; third, in-plane rotations of the face regions are compensated based on similarities between the left and right half sub-regions of the face regions; fourth, various poses of the viewer's face region are identified using five templates obtained during the initial user registration stage and multi-level local binary pattern matching. Experimental results indicate that the recall, precision, and genuine acceptance rate were about 95.7%, 96.2%, and 90.2%, respectively.
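The first, STB-side stage can be sketched as background subtraction plus a crude skin-colour filter and connected-component grouping. This is only an illustration of that stage; the server-side detector, rotation compensation, and multi-level LBP matching from the paper are not reproduced, and the colour rule and thresholds are assumptions.

```python
"""Low-cost candidate face-region detection for a set-top box (sketch)."""
import numpy as np
from scipy.ndimage import label, find_objects


def candidate_regions(frame_rgb, background_rgb, diff_thr=25, min_area=400):
    moving = np.abs(frame_rgb.astype(int) - background_rgb.astype(int)).max(axis=2) > diff_thr
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (abs(r - g) > 15)
    labels, _ = label(moving & skin)
    boxes = []
    for sl in find_objects(labels):
        if sl is not None and (sl[0].stop - sl[0].start) * (sl[1].stop - sl[1].start) >= min_area:
            boxes.append(sl)                           # candidate region to send to the server
    return boxes


if __name__ == "__main__":
    bg = np.zeros((240, 320, 3), dtype=np.uint8)
    frame = bg.copy()
    frame[60:140, 100:170] = (200, 140, 120)           # skin-toned moving blob
    print(len(candidate_regions(frame, bg)), "candidate region(s)")
```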
Dual-energy digital mammography for calcification imaging: scatter and nonuniformity corrections.
Kappadath, S Cheenu; Shaw, Chris C
2005-11-01
Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean-glandular dose from the low- and high-energy images were constrained to be similar to screening-examination levels. To evaluate the DE calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 µm) overlaid with 5 cm of breast-tissue-equivalent material with a continuously varying glandular-tissue ratio from 0% to 100%. We report on the effects of scatter radiation and of nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correction of scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, using the mammography system and dose level tested, calcifications in the 300-355 µm size range were clearly visible in DE calcification images. The calcification threshold size decreased to the 250-280 µm range when the visibility criterion was lowered to barely visible. Calcifications smaller than approximately 250 µm were usually not visible. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise.
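Two of the processing steps mentioned above, flat-field (nonuniformity) normalization and dual-energy combination, can be sketched as follows. The study used a calibrated nonlinear mapping function; the weighted log subtraction below is a simplified stand-in, no scatter correction is shown, and the toy attenuation model is an assumption.

```python
"""Flat-field normalisation and a simplified dual-energy combination (sketch)."""
import numpy as np


def flat_field(img, reference, eps=1e-6):
    return img / (reference + eps) * reference.mean()  # remove detector/x-ray nonuniformity


def dual_energy_combine(low, high, w):
    # weighted log subtraction; w chosen to cancel fibroglandular tissue contrast
    return np.log(high + 1e-6) - w * np.log(low + 1e-6)


if __name__ == "__main__":
    rng = np.random.default_rng(6)
    ref = np.linspace(0.8, 1.2, 256)[None, :] * np.ones((256, 256))  # heel-effect-like gradient
    tissue = rng.normal(1.0, 0.05, size=(256, 256))
    low = flat_field(tissue ** 2.0 * ref, ref)          # toy low-energy image
    high = flat_field(tissue ** 1.0 * ref, ref)         # toy high-energy image
    de = dual_energy_combine(low, high, w=0.5)          # tissue term cancels for this toy model
    print("residual tissue std:", float(de.std()))
```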
Enhancement of structure images of interstellar diamond microcrystals by image processing
NASA Technical Reports Server (NTRS)
O'Keefe, Michael A.; Hetherington, Crispin; Turner, John; Blake, David; Freund, Friedemann
1988-01-01
Image-processed high-resolution TEM images of diamond crystals found in oxidized acid residues of carbonaceous chondrites are presented. Two models of the origin of the diamonds are discussed. The model proposed by Lewis et al. (1987) supposes that the diamonds formed under low pressure conditions, whereas that of Blake et al. (1988) suggests that the diamonds formed through particle-particle collisions behind supernova shock waves. The TEM images of the diamonds presented here support the high pressure model.
SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, P; Mao, T; Gong, S
2016-06-15
Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, owing to the sparsifiability of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise-constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image contains a relatively uniform distribution of CT numbers. This knowledge is incorporated into the proposed reconstruction by using an image segmentation technique to generate a piecewise-constant template from the first-pass, low-quality CT image reconstructed with an analytical algorithm. The template image is used as the initial value in the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by about 40% overall. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001) and the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
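The segmentation-assisted initialization can be sketched under strong simplifications: a first-pass image is segmented by quantile thresholding into a piecewise-constant template, which then seeds a plain gradient-descent loop on a data-fidelity plus TV objective with a toy two-view projector. This illustrates the idea of the abstract, not the authors' CBCT implementation; the projector, step size, and class count are assumptions, and the TV gradient is a rough discretization.

```python
"""Segmentation-assisted initialisation for TV-regularised iterative recon (sketch)."""
import numpy as np


def piecewise_constant_template(img, n_classes=3):
    edges = np.quantile(img, np.linspace(0, 1, n_classes + 1))
    template = img.copy()
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (img >= lo) & (img <= hi)
        if mask.any():
            template[mask] = img[mask].mean()          # one uniform value per tissue class
    return template


def forward(x):                                        # toy 2-view projector: row and column sums
    return np.concatenate([x.sum(axis=0), x.sum(axis=1)])


def backward(r, shape):                                # adjoint of the toy projector
    h, w = shape
    return np.tile(r[:w], (h, 1)) + np.tile(r[w:, None], (1, w))


def tv_grad(x, eps=1e-6):
    gx = np.diff(x, axis=1, append=x[:, -1:])
    gy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    div = (np.diff(gx / mag, axis=1, prepend=(gx / mag)[:, :1]) +
           np.diff(gy / mag, axis=0, prepend=(gy / mag)[:1, :]))
    return -div                                        # approximate gradient of the TV term


if __name__ == "__main__":
    truth = np.zeros((32, 32))
    truth[8:24, 8:24] = 1.0
    b = forward(truth)                                 # noiseless toy measurements
    first_pass = truth + np.random.default_rng(7).normal(0, 0.2, truth.shape)
    x = piecewise_constant_template(first_pass)        # segmentation-based initial value
    for _ in range(50):
        x -= 1e-3 * (backward(forward(x) - b, x.shape) + 0.5 * tv_grad(x))
    print("final data misfit:", float(np.abs(forward(x) - b).mean()))
```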
Java Library for Input and Output of Image Data and Metadata
NASA Technical Reports Server (NTRS)
Deen, Robert; Levoe, Steven
2003-01-01
A Java-language library supports input and output (I/O) of image data and metadata (label data) in the format of the Video Image Communication and Retrieval (VICAR) image-processing software and in several similar formats, including a subset of the Planetary Data System (PDS) image file format. The library does the following: It provides a low-level, direct-access layer, enabling an application subprogram to read and write specific image files, lines, or pixels, and to manipulate metadata directly. Two coding/decoding subprograms ("codecs" for short) based on the Java Advanced Imaging (JAI) software provide access to VICAR and PDS images in a file-format-independent manner. The VICAR and PDS codecs enable any program that conforms to the specification of the JAI codec to use VICAR or PDS images automatically, without specific knowledge of the VICAR or PDS format. The library also includes Image I/O plug-in subprograms for the VICAR and PDS formats. Application programs that conform to the Image I/O specification of Java version 1.4 can utilize any image format for which such a plug-in subprogram exists, without specific knowledge of the format itself. Like the aforementioned codecs, the VICAR and PDS Image I/O plug-in subprograms support reading and writing of metadata.