Larkin, J D; Publicover, N G; Sutko, J L
2011-01-01
In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
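The image-formation step described above (summing a per-photon uncertainty distribution rather than binning photons into pixels) can be sketched as follows; the function name, the choice of an isotropic Gaussian kernel, and all parameter values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def peds_image(positions, sigma, grid_shape):
    """Form an image by summing a normalized Gaussian uncertainty
    distribution centred at each detected photon position — a sketch of
    the photon event distribution sampling idea (names are ours)."""
    ny, nx = grid_shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    img = np.zeros(grid_shape)
    norm = 1.0 / (2.0 * np.pi * sigma**2)  # normalize each photon's kernel
    for (y, x) in positions:
        img += norm * np.exp(-((yy - y)**2 + (xx - x)**2) / (2.0 * sigma**2))
    return img
```

Because each kernel integrates to one, the total image intensity tracks the photon count, and the image peaks at the maximum-likelihood photon positions.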
Field-Portable Pixel Super-Resolution Colour Microscope
Greenbaum, Alon; Akbari, Najva; Feizi, Alborz; Luo, Wei; Ozcan, Aydogan
2013-01-01
Based on partially-coherent digital in-line holography, we report a field-portable microscope that can render lensfree colour images over a wide field-of-view of e.g., >20 mm2. This computational holographic microscope weighs less than 145 grams with dimensions smaller than 17×6×5 cm, making it especially suitable for field settings and point-of-care use. In this lensfree imaging design, we merged a colorization algorithm with a source-shifting-based multi-height pixel super-resolution technique to mitigate ‘rainbow’-like colour artefacts that are typical in holographic imaging. This image processing scheme is based on transforming the colour components of an RGB image into YUV colour space, which separates colour information from the brightness component of an image. The resolution of our super-resolution colour microscope was characterized using a USAF test chart to confirm sub-micron spatial resolution, even for reconstructions that employ multi-height phase recovery to handle dense and connected objects. To further demonstrate the performance of this colour microscope, Papanicolaou (Pap) smears were also successfully imaged. This field-portable and wide-field computational colour microscope could be useful for tele-medicine applications in resource-poor settings. PMID:24086742
Portable and cost-effective pixel super-resolution on-chip microscope for telemedicine applications.
Bishara, Waheb; Sikora, Uzair; Mudanyali, Onur; Su, Ting-Wei; Yaglidere, Oguzhan; Luckhart, Shirley; Ozcan, Aydogan
2011-01-01
We report a field-portable lensless on-chip microscope with a lateral resolution of <1 μm and a large field-of-view of ~24 mm(2). This microscope is based on digital in-line holography and a pixel super-resolution algorithm to process multiple lensfree holograms and obtain a single high-resolution hologram. In its compact and cost-effective design, we utilize 23 light emitting diodes butt-coupled to 23 multi-mode optical fibers, and a simple optical filter, with no moving parts. Weighing only ~95 grams, we demonstrate the performance of this field-portable microscope by imaging various objects including human malaria parasites in thin blood smears.
Dynamic full-field infrared imaging with multiple synchrotron beams
Stavitski, Eli; Smith, Randy J.; Bourassa, Megan W.; Acerbo, Alvin S.; Carr, G. L.; Miller, Lisa M.
2013-01-01
Microspectroscopic imaging in the infrared (IR) spectral region allows for the examination of spatially resolved chemical composition on the microscale. More than a decade ago, it was demonstrated that diffraction-limited spatial resolution can be achieved when an apertured, single-pixel IR microscope is coupled to the high brightness of a synchrotron light source. Nowadays, many IR microscopes are equipped with multi-pixel Focal Plane Array (FPA) detectors, which dramatically improve data acquisition times for imaging large areas. Recently, progress has been made toward efficiently coupling synchrotron IR beamlines to multi-pixel detectors, but existing approaches utilize expensive and highly customized optical schemes. Here we demonstrate the development and application of a simple optical configuration that can be implemented on most existing synchrotron IR beamlines in order to achieve full-field IR imaging with diffraction-limited spatial resolution. Specifically, the synchrotron radiation fan is extracted from the bending magnet and split into four beams that are combined on the sample, allowing it to fill a large section of the FPA. With this optical configuration, we are able to oversample an image by more than a factor of two, even at the shortest wavelengths, making image restoration through deconvolution algorithms possible. High chemical sensitivity, rapid acquisition times, and superior signal-to-noise characteristics of the instrument are demonstrated. The unique characteristics of this setup enabled the real-time study of heterogeneous chemical dynamics with diffraction-limited spatial resolution for the first time. PMID:23458231
A multi-focus image fusion method via region mosaicking on Laplacian pyramids
Kou, Liang; Zhang, Liguo; Sun, Jianguo; Han, Qilong; Jin, Zilong
2018-01-01
In this paper, a method named Region Mosaicking on Laplacian Pyramids (RMLP) is proposed to fuse multi-focus images captured by a microscope. First, the Sum-Modified-Laplacian is applied to measure the focus of the multi-focus images. Then a density-based region growing algorithm is utilized to segment the focused region mask of each image. Finally, the mask is decomposed into a mask pyramid to supervise region mosaicking on a Laplacian pyramid. The region-level pyramid keeps more of the original information than the pixel level. The experimental results show that RMLP has the best performance in quantitative comparison with other methods. In addition, RMLP is insensitive to noise and can reduce the color distortion of the fused images on two datasets. PMID:29771912
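The Sum-Modified-Laplacian used in the first step can be sketched as below. This is a generic form of the focus measure (the window summing and thresholding that the full measure involves are omitted), and the function name and `step` parameter are our own, not taken from the paper:

```python
import numpy as np

def modified_laplacian(img, step=1):
    """Per-pixel modified Laplacian: |2I - I_up - I_down| + |2I - I_left - I_right|.
    The Sum-Modified-Laplacian (SML) focus measure sums these values over a
    local window; RMLP's exact window and threshold are not reproduced here."""
    img = img.astype(float)
    ml = np.zeros_like(img)
    c = img[step:-step, step:-step]  # interior pixels with full neighbourhoods
    ml[step:-step, step:-step] = (
        np.abs(2 * c - img[:-2 * step, step:-step] - img[2 * step:, step:-step])
        + np.abs(2 * c - img[step:-step, :-2 * step] - img[step:-step, 2 * step:])
    )
    return ml
```

A flat region yields zero response, while sharp detail yields large values, which is what lets the measure pick the in-focus source image at each region.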
Rapid biodiagnostic ex vivo imaging at 1 μm pixel resolution with thermal source FTIR FPA.
Findlay, C R; Wiens, R; Rak, M; Sedlmair, J; Hirschmugl, C J; Morrison, Jason; Mundy, C J; Kansiz, M; Gough, K M
2015-04-07
A recent upgrade to the optics configuration of a thermal source FTIR microscope equipped with a focal plane array detector has enabled rapid acquisition of high magnification spectrochemical images, in transmission, with an effective geometric pixel size of ∼1 × 1 μm(2) at the sample plane. Examples, including standard imaging targets for scale and accuracy, as well as biomedical tissues and microorganisms, have been imaged with the new system and contrasted with data acquired at normal magnification and with a high magnification multi-beam synchrotron instrument. With this optics upgrade, one can now conduct rapid biodiagnostic ex vivo tissue imaging in-house, with images collected over larger areas, in less time (minutes) and with comparable quality and resolution to the best synchrotron source FTIR imaging capabilities.
Dynamically re-configurable CMOS imagers for an active vision system
NASA Technical Reports Server (NTRS)
Yang, Guang (Inventor); Pain, Bedabrata (Inventor)
2005-01-01
A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.
Imaging properties and its improvements of scanning/imaging x-ray microscope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeuchi, Akihisa; Uesugi, Kentaro; Suzuki, Yoshio
A scanning/imaging X-ray microscope (SIXM) system has been developed at SPring-8. The SIXM consists of a scanning X-ray microscope with a one-dimensional (1D) X-ray focusing device and an imaging (full-field) X-ray microscope with a 1D X-ray objective. The motivation of the SIXM system is to realize quantitative and highly sensitive multimodal 3D X-ray tomography by taking advantage of both the scanning X-ray microscope using a multi-pixel detector and the imaging X-ray microscope. The data acquisition process for a 2D image is completely different in the horizontal and vertical directions: a 1D signal is obtained with the linear scanning, while the signal in the other dimension is obtained with the imaging optics. This condition has caused a serious problem for the imaging properties, in that the imaging quality in the vertical direction has been much worse than that in the horizontal direction. In this paper, two approaches to solve this problem are presented. One is introducing a Fourier transform method for phase retrieval from one phase-derivative image; the other is to develop and employ a 1D diffuser to produce an asymmetrical coherent illumination.
Pollen Image Recognition Based on DGDB-LBP Descriptor
NASA Astrophysics Data System (ADS)
Han, L. P.; Xie, Y. H.
2018-01-01
In this paper, we propose DGDB-LBP, a local binary pattern descriptor based on the pixel blocks in the dominant gradient direction. Differing from traditional LBP and its variants, DGDB-LBP encodes by comparing the main gradient magnitude of each block rather than the single pixel value or the average of pixel blocks; in doing so, it reduces the influence of noise on pollen images and eliminates redundant and non-informative features. In order to fully describe the texture features of pollen images and analyze them at multiple scales, we propose a new sampling strategy, which uses three types of operators to extract the radial, angular and multiple texture features at different scales. Considering that pollen images have some degree of rotation under the microscope, we propose an adaptive encoding direction, which is determined by the texture distribution of the local region. Experimental results on the Pollenmonitor dataset show that the average correct recognition rate of our method is superior to other pollen recognition methods of recent years.
Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth
2009-01-01
Utilizing a Compact Color Microscope Imaging System (CCMIS), a unique algorithm has been developed that combines human intelligence with machine vision techniques to produce an autonomous microscope tool for biomedical, industrial, and space applications. This technique is based on an adaptive, morphological, feature-based mapping function comprising 24 mutually inclusive feature metrics that are used to characterize complex cells/objects derived from color image analysis. Some of the features include: Area (total number of non-background pixels inside and including the perimeter); Bounding Box (smallest rectangle that bounds an object); CenterX (x-coordinate of the intensity-weighted center-of-mass of an entire object or multi-object blob); CenterY (y-coordinate of the intensity-weighted center-of-mass of an entire object or multi-object blob); Circumference (a measure of circumference that takes into account whether neighboring pixels are diagonal, which is a longer distance than horizontally or vertically joined pixels); Elongation (a measure of particle elongation given as a number between 0 and 1; if equal to 1, the particle bounding box is square, and as the elongation decreases from 1, the particle becomes more elongated); Ext_vector (extremal vector); Major Axis (the length of the major axis of the smallest ellipse encompassing an object); Minor Axis (the length of the minor axis of the smallest ellipse encompassing an object); Partial (indicates whether the particle extends beyond the field of view); Perimeter Points (points that make up a particle perimeter); Roundness ((4(pi) × area)/perimeter(squared); the result is a measure of object roundness, or compactness, given as a value between 0 and 1, where the greater the ratio, the rounder the object); Thin in Center (determines whether an object becomes thin in the center, i.e. figure-eight-shaped); and Theta (orientation of the major axis).
Smoothness and color metrics: for each component (red, green, blue), the minimum, maximum, average, and standard deviation within the particle are tracked. These metrics can be used for autonomous analysis of color images from a microscope, video camera, or digital still image. The algorithm can also automatically identify tumor morphology in stained images and has been used to detect stained-cell phenomena (see figure).
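Several of the listed metrics are simple closed-form expressions; for instance, the roundness metric given above can be computed as a minimal sketch (the function name is ours):

```python
import math

def roundness(area, perimeter):
    """Roundness metric from the feature list: (4*pi*area) / perimeter**2.
    Equals 1.0 for a perfect circle and decreases for less compact shapes."""
    return 4.0 * math.pi * area / perimeter**2
```

A circle of radius r (area pi*r**2, perimeter 2*pi*r) gives exactly 1, while a unit square (area 1, perimeter 4) gives pi/4 ≈ 0.785, illustrating how the metric separates compact from elongated objects.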
Microscope mode secondary ion mass spectrometry imaging with a Timepix detector.
Kiss, Andras; Jungmann, Julia H; Smith, Donald F; Heeren, Ron M A
2013-01-01
In-vacuum active pixel detectors enable high sensitivity, highly parallel time- and space-resolved detection of ions from complex surfaces. For the first time, a Timepix detector assembly was combined with a secondary ion mass spectrometer for microscope mode secondary ion mass spectrometry (SIMS) imaging. Time resolved images from various benchmark samples demonstrate the imaging capabilities of the detector system. The main advantages of the active pixel detector are the higher signal-to-noise ratio and parallel acquisition of arrival time and position. Microscope mode SIMS imaging of biomolecules is demonstrated from tissue sections with the Timepix detector.
Ah Lee, Seung; Ou, Xiaoze; Lee, J Eugene; Yang, Changhuei
2013-06-01
We demonstrate a silo-filter (SF) complementary metal-oxide semiconductor (CMOS) image sensor for a chip-scale fluorescence microscope. The extruded pixel design with metal walls between neighboring pixels guides fluorescence emission through the thick absorptive filter to the photodiode of a pixel. Our prototype device achieves 13 μm resolution over a wide field of view (4.8 mm × 4.4 mm). We demonstrate bright-field and fluorescence longitudinal imaging of living cells in a compact, low-cost configuration.
Multi-spectral confocal microendoscope for in-vivo imaging
NASA Astrophysics Data System (ADS)
Rouse, Andrew Robert
The concept of in-vivo multi-spectral confocal microscopy is introduced. A slit-scanning multi-spectral confocal microendoscope (MCME) was built to demonstrate the technique. The MCME employs a flexible fiber-optic catheter coupled to a custom-built slit-scan confocal microscope fitted with a custom-built imaging spectrometer. The catheter consists of a fiber-optic imaging bundle linked to a miniature objective and focus assembly. The design and performance of the miniature objective and focus assembly are discussed. The 3 mm diameter catheter may be used on its own or routed through the instrument channel of a commercial endoscope. The confocal nature of the system provides optical sectioning with 3 μm lateral resolution and 30 μm axial resolution. The prism-based multi-spectral detection assembly is typically configured to collect 30 spectral samples over the visible chromatic range. The spectral sampling rate varies from 4 nm/pixel at 490 nm to 8 nm/pixel at 660 nm, and the minimum resolvable wavelength difference varies from 7 nm to 18 nm over the same spectral range. Each of these characteristics is primarily dictated by the dispersive power of the prism. The MCME is designed to examine cellular structures during optical biopsy and to exploit the diagnostic information contained within the spectral domain. The primary applications for the system include diagnosis of disease in the gastro-intestinal tract and female reproductive system. Recent data from the grayscale imaging mode are presented. Preliminary multi-spectral results from phantoms, cell cultures, and excised human tissue are presented to demonstrate the potential of in-vivo multi-spectral imaging.
A new algorithm to reduce noise in microscopy images implemented with a simple program in python.
Papini, Alessio
2012-03-01
All microscopical images contain noise, which increases as a microscope (e.g., a transmission electron microscope or a light microscope) approaches its resolution limit. Many methods are available to reduce noise; one of the most commonly used is image averaging. We propose here to use the mode of pixel values instead. Simple Python programs process a given number of images, recorded consecutively from the same subject, and calculate the mode of the pixel values in a given position (a, b). The result is a new image containing in (a, b) the mode of those values; the final pixel value therefore corresponds to one read in at least two of the pixels in position (a, b). Application of the program to a set of images corrupted with salt-and-pepper noise and GIMP hurl noise at 10-90% standard deviation showed that the mode performs better than averaging with three to eight images. The data suggest that the mode would be more efficient (in the sense of requiring fewer recorded images to reduce noise below a given limit) for a lower number of total noisy pixels and a high standard deviation (as with impulse noise and salt-and-pepper noise), while averaging would be more efficient when the number of varying pixels is high and the standard deviation is low, as in many cases of Gaussian-noise-affected images. The two methods may be used serially. Copyright © 2011 Wiley Periodicals, Inc.
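A minimal sketch of the per-pixel mode described above, assuming integer-valued frames of identical size (the function and variable names are ours, not the paper's program):

```python
import numpy as np

def stack_mode(frames):
    """Per-pixel mode across a stack of consecutively recorded frames.
    At each position the most frequent value across the stack wins,
    suppressing impulse noise that hits only one frame at that pixel."""
    stack = np.stack(frames).astype(np.int64)   # shape (n_frames, H, W)
    n, h, w = stack.shape
    flat = stack.reshape(n, -1)
    out = np.empty(flat.shape[1], dtype=np.int64)
    for i in range(flat.shape[1]):
        vals, counts = np.unique(flat[:, i], return_counts=True)
        out[i] = vals[counts.argmax()]          # most frequent value at this pixel
    return out.reshape(h, w)
```

With three frames, any pixel corrupted in only one frame is restored exactly, which is why the mode copes well with salt-and-pepper-type noise at small stack sizes.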
Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan
2015-11-01
To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedented high resolution makes it possible to see details that remain invisible to any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera which is attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make it suitable for self-study. © 2015 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
NASA Astrophysics Data System (ADS)
Isono, Hiroshi; Hirata, Shinnosuke; Hachiya, Hiroyuki
2015-07-01
In medical ultrasonic images of liver disease, a texture with a speckle pattern indicates a microscopic structure such as nodules surrounded by fibrous tissues in hepatitis or cirrhosis. We have been applying texture analysis based on a co-occurrence matrix to ultrasonic images of fibrotic liver for quantitative tissue characterization. A co-occurrence matrix consists of the probability distribution of brightness of pixel pairs specified with spatial parameters and gives new information on liver disease. Ultrasonic images of different types of fibrotic liver were simulated and the texture-feature contrast was calculated to quantify the co-occurrence matrices generated from the images. The results show that the contrast converges with a value that can be theoretically estimated using a multi-Rayleigh model of echo signal amplitude distribution. We also found that the contrast value increases as liver fibrosis progresses and fluctuates depending on the size of fibrotic structure.
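The texture-feature contrast of a co-occurrence matrix mentioned above follows the standard definition, the sum over brightness pairs (i, j) of (i - j)^2 weighted by their joint probability P(i, j). A minimal sketch, assuming small integer grey levels and a single displacement vector (function names and parameters are ours):

```python
import numpy as np

def glcm(img, dy, dx, levels):
    """Grey-level co-occurrence matrix for one displacement (dy, dx):
    counts of brightness pairs at pixels separated by that offset,
    normalized to a joint probability distribution."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def contrast(g):
    """Texture-feature contrast: sum over (i, j) of (i - j)**2 * P(i, j)."""
    i, j = np.indices(g.shape)
    return ((i - j) ** 2 * g).sum()
```

A uniform region gives zero contrast, while alternating bright/dark columns sampled with a horizontal offset give the maximum, matching the intuition that contrast grows with local brightness variation such as speckle.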
Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy.
Zhang, Jialin; Sun, Jiasong; Chen, Qian; Li, Jiaji; Zuo, Chao
2017-09-18
High-resolution wide field-of-view (FOV) microscopic imaging plays an essential role in various fields of biomedicine, engineering, and physical sciences. As an alternative to conventional lens-based scanning techniques, lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and FOV of conventional microscopes. Unfortunately, due to the limited sensor pixel-size, unpredictable disturbance during image acquisition, and sub-optimum solution to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). Here, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method which can solve, or at least partially alleviate, these limitations. Our approach addresses the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. An automatic positional error correction algorithm and adaptive relaxation strategy are introduced to enhance the robustness and SNR of reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target (~29.85 mm2) and achieve half-pitch lateral resolution of 770 nm, surpassing 2.17 times the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel-size (1.67 µm). A full-FOV imaging result of a typical dicot root is also provided to demonstrate its promising potential applications in biological imaging.
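The quoted factor of 2.17 is simply the ratio of the pixel-size-imposed Nyquist half-pitch limit to the achieved half-pitch resolution, which can be checked directly:

```python
# Both values are quoted in the abstract; the Nyquist half-pitch limit
# of a bare sensor equals its pixel pitch.
pixel_pitch_um = 1.67            # sensor pixel size
achieved_half_pitch_um = 0.77    # resolved half-pitch after super-resolution
gain = pixel_pitch_um / achieved_half_pitch_um
# gain ≈ 2.17, matching the factor stated in the abstract
```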
Coltelli, Primo; Barsanti, Laura; Evangelista, Valter; Frassanito, Anna Maria; Gualtieri, Paolo
2016-12-01
A novel procedure for deriving the absorption spectrum of an object spot from the colour values of the corresponding pixel(s) in its image is presented. Any digital image acquired by a microscope can be used; typical applications are the analysis of cellular/subcellular metabolic processes under physiological conditions and in response to environmental stressors (e.g. heavy metals), and the measurement of chromophore composition, distribution and concentration in cells. In this paper, we challenged the procedure with images of algae, acquired by means of a CCD camera mounted onto a microscope. The many colours algae display result from combinations of chromophores whose spectroscopic information is limited to organic-solvent extracts, which suffer from displacements, amplifications, and contractions/dilatations with respect to spectra recorded inside the cell. Hence, preliminary processing is necessary, which consists of in vivo measurement of the absorption spectra of photosynthetic compartments of algal cells and determination of the spectra of the single chromophores inside the cell. The final step of the procedure consists in the reconstruction of the absorption spectrum of the cell spot from the colour values of the corresponding pixel(s) in its digital image by minimization of a system of transcendental equations based on the absorption spectra of the chromophores under physiological conditions. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
Motion immune diffusion imaging using augmented MUSE (AMUSE) for high-resolution multi-shot EPI
Guhaniyogi, Shayan; Chu, Mei-Lan; Chang, Hing-Chiu; Song, Allen W.; Chen, Nan-kuei
2015-01-01
Purpose: To develop new techniques for reducing the effects of microscopic and macroscopic patient motion in diffusion imaging acquired with high-resolution multi-shot EPI. Theory: The previously reported Multiplexed Sensitivity Encoding (MUSE) algorithm is extended to account for macroscopic pixel misregistrations as well as motion-induced phase errors in a technique called Augmented MUSE (AMUSE). Furthermore, to obtain more accurate quantitative DTI measures in the presence of subject motion, we also account for the altered diffusion encoding among shots arising from macroscopic motion. Methods: MUSE and AMUSE were evaluated on simulated and in vivo motion-corrupted multi-shot diffusion data. Evaluations were made both on the resulting image quality and on the estimated diffusion tensor metrics. Results: AMUSE was found to reduce image blurring resulting from macroscopic subject motion compared to MUSE, but yielded inaccurate tensor estimations when neglecting the altered diffusion encoding. Including the altered diffusion encoding in AMUSE produced better estimations of diffusion tensors. Conclusion: The use of AMUSE allows for improved image quality and diffusion tensor accuracy in the presence of macroscopic subject motion during multi-shot diffusion imaging. These techniques should facilitate future high-resolution diffusion imaging. PMID:25762216
Biological tissue imaging with a position and time sensitive pixelated detector.
Jungmann, Julia H; Smith, Donald F; MacAleese, Luke; Klinkert, Ivo; Visser, Jan; Heeren, Ron M A
2012-10-01
We demonstrate the capabilities of a highly parallel, active pixel detector for large-area, mass spectrometric imaging of biological tissue sections. A bare Timepix assembly (512 × 512 pixels) is combined with chevron microchannel plates on an ion microscope matrix-assisted laser desorption time-of-flight mass spectrometer (MALDI TOF-MS). The detector assembly registers position- and time-resolved images of multiple m/z species in every measurement frame. We prove the applicability of the detection system to biomolecular mass spectrometry imaging on biologically relevant samples by mass-resolved images from Timepix measurements of a peptide-grid benchmark sample and mouse testis tissue slices. Mass-spectral and localization information of analytes at physiologic concentrations is measured in MALDI-TOF-MS imaging experiments. We show a high spatial resolution (pixel size down to 740 × 740 nm(2) on the sample surface) and a spatial resolving power of 6 μm with a microscope mode laser field of view of 100-335 μm. Automated, large-area imaging is demonstrated and the Timepix's potential for fast, large-area image acquisition is highlighted.
NASA Astrophysics Data System (ADS)
Zhang, Jialin; Chen, Qian; Sun, Jiasong; Li, Jiaji; Zuo, Chao
2018-01-01
Lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and field-of-view (FOV) of conventional lens-based microscopes. Unfortunately, due to the limited sensor pixel-size, unpredictable disturbance during image acquisition, and sub-optimum solution to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). In this paper, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method to address the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. Furthermore, an automatic positional error correction algorithm and adaptive relaxation strategy are introduced to enhance the robustness and SNR of reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target across a wide imaging area of ~29.85 mm2 and achieve half-pitch lateral resolution of 770 nm, surpassing 2.17 times the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel-size (1.67 μm). A full-FOV imaging result of a typical dicot root is also provided to demonstrate its promising potential applications in biological imaging.
Scanning Microscopes Using X Rays and Microchannels
NASA Technical Reports Server (NTRS)
Wang, Yu
2003-01-01
Scanning microscopes that would be based on microchannel filters and advanced electronic image sensors and that utilize x-ray illumination have been proposed. Because the finest resolution attainable in a microscope is determined by the wavelength of the illumination, the x-ray illumination in the proposed microscopes would make it possible, in principle, to achieve resolutions of the order of nanometers, about a thousand times finer than the resolution of a visible-light microscope. Heretofore, it has been necessary to use scanning electron microscopes to obtain such fine resolution. In comparison with scanning electron microscopes, the proposed microscopes would likely be smaller, less massive, and less expensive. Moreover, unlike in scanning electron microscopes, it would not be necessary to place specimens under vacuum. The proposed microscopes are closely related to the ones described in several prior NASA Tech Briefs articles; namely, Miniature Microscope Without Lenses (NPO-20218), NASA Tech Briefs, Vol. 22, No. 8 (August 1998), page 43; and Reflective Variants of Miniature Microscope Without Lenses (NPO-20610), NASA Tech Briefs, Vol. 26, No. 9 (September 2002), page 6a. In all of these microscopes, the basic principle of design and operation is the same: the focusing optics of a conventional visible-light microscope are replaced by a combination of a microchannel filter and a charge-coupled-device (CCD) image detector. A microchannel plate containing parallel, microscopic-cross-section holes much longer than they are wide is placed between a specimen and an image sensor, which is typically the CCD. The microchannel plate must be made of a material that absorbs the illuminating radiation reflected or scattered from the specimen. The microchannels must be positioned and dimensioned so that each one is registered with a pixel on the image sensor.
Because most of the radiation incident on the microchannel walls becomes absorbed, the radiation that reaches the image sensor consists predominantly of radiation that was launched along the longitudinal direction of the microchannels. Therefore, most of the radiation arriving at each pixel on the sensor must have traveled along a straight line from a corresponding location on the specimen. Thus, there is a one-to-one mapping from a point on a specimen to a pixel in the image sensor, so that the output of the image sensor contains image information equivalent to that from a microscope.
Multimodal microscopy and the stepwise multi-photon activation fluorescence of melanin
NASA Astrophysics Data System (ADS)
Lai, Zhenhua
The author's work is divided into three aspects: multimodal microscopy, stepwise multi-photon activation fluorescence (SMPAF) of melanin, and customized-profile lenses (CPL) for on-axis laser scanners, each of which is introduced in turn. A multimodal microscope provides the ability to image samples with multiple modalities on the same stage, which incorporates the benefits of all modalities. The multimodal microscopes developed in this dissertation are the Keck 3D fusion multimodal microscope 2.0 (3DFM 2.0), upgraded from the old 3DFM with improved performance and flexibility, and the multimodal microscope for targeting small particles (the "Target" system). The control systems developed for both microscopes are low-cost and easy to build, with all components off-the-shelf. The control systems have not only significantly decreased the complexity and size of the microscopes, but also increased the pixel resolution and flexibility. The SMPAF of melanin, activated by a continuous-wave (CW) near-infrared (NIR) laser, has potential applications as a low-cost and reliable method of detecting melanin. The photophysics of melanin SMPAF has been studied by theoretical analysis of the excitation process and by investigation of the spectra, activation threshold, and photon number absorption of melanin SMPAF. SMPAF images of melanin in mouse hair and skin, mouse melanoma, and human black and white hairs are compared with images taken by conventional multi-photon fluorescence microscopy (MPFM) and confocal reflectance microscopy (CRM). SMPAF images significantly increase specificity and demonstrate the potential to increase sensitivity for melanin detection compared to MPFM and CRM images. Employing melanin SMPAF imaging to detect melanin inside human skin in vivo has been demonstrated, which proves the effectiveness of SMPAF-based melanin detection for medical purposes. Selective melanin ablation with micrometer resolution has been demonstrated using the Target system.
Compared to traditional selective photothermolysis, this method demonstrates higher precision, higher specificity, and deeper penetration. Therefore, SMPAF-guided selective ablation of melanin is a promising tool for removing melanin for both medical and cosmetic purposes. Three CPLs have been designed: for low-cost linear-motion scanners, low-cost fast-spinning scanners, and high-precision fast-spinning scanners. Each design has been tailored to industrial manufacturing capabilities and market demands.
Super-pixel extraction based on multi-channel pulse coupled neural network
NASA Astrophysics Data System (ADS)
Xu, GuangZhu; Hu, Song; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun
2018-04-01
Super-pixel extraction techniques group pixels to form over-segmented image blocks according to the similarity among pixels. Compared with traditional pixel-based methods, image description based on super-pixels requires less computation and is easier to interpret, and it has been widely used in image processing and computer vision applications. The pulse coupled neural network (PCNN) is a biologically inspired model which stems from the phenomenon of synchronous pulse release in the visual cortex of cats. Each PCNN neuron can correspond to a pixel of an input image, and the dynamic firing pattern of each neuron contains both the pixel's feature information and its contextual spatial-structural information. In this paper, a new color super-pixel extraction algorithm based on a multi-channel pulse coupled neural network (MPCNN) is proposed. The algorithm adopts the block-dividing idea of the SLIC algorithm: the image is first divided into blocks of the same size. Then, for each image block, the adjacent pixels with color similar to each seed are classified as a group, named a super-pixel. Finally, post-processing is applied to those pixels or pixel blocks which have not been grouped. Experiments show that the proposed method can adjust the number of super-pixels and the segmentation precision by setting parameters, and has good potential for super-pixel extraction.
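As a rough illustration of the block-seeded grouping idea (not the authors' MPCNN model, whose firing dynamics are replaced here by a plain colour-distance rule), the following sketch divides the image into equal blocks, takes each block centre as a seed, and assigns each pixel to the nearest seed of sufficiently similar colour, with a fallback step for ungrouped pixels:

```python
import numpy as np

def block_seeded_superpixels(img, block=16, thresh=30.0):
    """Toy block-seeded superpixel grouping. Each block centre acts as a
    seed; every pixel joins the spatially nearest seed whose colour lies
    within `thresh` (Euclidean RGB distance). Pixels with no admissible
    seed are given to the nearest seed regardless of colour, mimicking
    the post-processing step for ungrouped pixels."""
    h, w, _ = img.shape
    ys, xs = np.mgrid[block // 2:h:block, block // 2:w:block]
    seeds = np.stack([ys.ravel(), xs.ravel()], axis=1)
    seed_colors = img[seeds[:, 0], seeds[:, 1]].astype(float)

    yy, xx = np.mgrid[0:h, 0:w]
    # Squared spatial distance of every pixel to every seed: (h, w, n_seeds)
    d_sp = (yy[..., None] - seeds[:, 0]) ** 2 + (xx[..., None] - seeds[:, 1]) ** 2
    d_col = np.linalg.norm(img[..., None, :].astype(float) - seed_colors, axis=-1)

    gated = np.where(d_col <= thresh, d_sp, np.inf)
    labels = np.argmin(gated, axis=-1)
    orphan = ~np.isfinite(gated.min(axis=-1))       # no colour-admissible seed
    labels[orphan] = np.argmin(d_sp, axis=-1)[orphan]
    return labels
```

On a uniform image this degenerates to a regular grid of blocks, one label per seed; colour contrast is what bends the super-pixel boundaries.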
Muir, Ryan D.; Pogranichney, Nicholas R.; Muir, J. Lewis; Sullivan, Shane Z.; Battaile, Kevin P.; Mulichak, Anne M.; Toth, Scott J.; Keefe, Lisa J.; Simpson, Garth J.
2014-01-01
Experiments and modeling are described to perform spectral fitting of multi-threshold counting measurements on a pixel-array detector. An analytical model was developed for describing the probability density function of detected voltage in X-ray photon-counting arrays, utilizing fractional photon counting to account for edge/corner effects from voltage plumes that spread across multiple pixels. Each pixel was mathematically calibrated by fitting the detected voltage distributions to the model at both 13.5 keV and 15.0 keV X-ray energies. The model and established pixel responses were then exploited to statistically recover images of X-ray intensity as a function of X-ray energy in a simulated multi-wavelength and multi-counting threshold experiment. PMID:25178010
Brodusch, Nicolas; Demers, Hendrix; Gauvin, Raynald
2015-01-01
Dark-field (DF) images were acquired in the scanning electron microscope with an offline procedure based on electron backscatter diffraction (EBSD) patterns (EBSPs). These EBSD-DF images were generated by selecting a particular reflection on the electron backscatter diffraction pattern and reporting the intensity of one or several pixels around this point at each pixel of the EBSD-DF image. Unlike in previous studies, the diffraction information of the sample is the basis of the final image contrast, with pixel-scale resolution at the EBSP providing DF imaging in the scanning electron microscope. The offline nature of this technique permits the selection of any diffraction condition available in the diffraction pattern and the display of the corresponding image. The high number of available diffraction-based images allows better monitoring of deformation structures compared to electron channeling contrast imaging (ECCI), which is generally limited to a few images of the same area. This technique was applied to steel and iron specimens and demonstrated a high capability for describing the deformation structures around micro-hardness indents more rigorously. Due to the offline relation between the reference EBSP and the EBSD-DF images, this new technique will greatly improve our knowledge of deformation mechanisms and help improve our understanding of the ECCI contrast mechanisms. Copyright © 2014 Elsevier B.V. All rights reserved.
Brain vascular image segmentation based on fuzzy local information C-means clustering
NASA Astrophysics Data System (ADS)
Hu, Chaoen; Liu, Xia; Liang, Xiao; Hui, Hui; Yang, Xin; Tian, Jie
2017-02-01
Light sheet fluorescence microscopy (LSFM) is a powerful optical-resolution fluorescence microscopy technique that enables observation of the mouse brain vascular network at cellular resolution. However, micro-vessel structures exhibit intensity inhomogeneity in LSFM images, which complicates the extraction of line structures. In this work, we developed a vascular image segmentation method that enhances vessel details, which should be useful for estimating statistics like micro-vessel density. Since the eigenvalues of the Hessian matrix and their signs describe different geometric structures in images, enabling the construction of a vascular similarity function and the enhancement of line signals, the main idea of our method is to cluster the pixel values of the enhanced image. Our method contains three steps: 1) calculate the multi-scale gradients and the differences between the eigenvalues of the Hessian matrix; 2) to generate the enhanced micro-vessel structures, train a feed-forward neural network on 2.26 million pixels to model the correlations between the multi-scale gradients and the eigenvalue differences; 3) use fuzzy local information c-means clustering (FLICM) to cluster the pixel values of the enhanced image. To verify the feasibility and effectiveness of this method, mouse brain vascular images were acquired with a commercial light-sheet microscope in our lab. The segmentation experiments showed that the Dice similarity coefficient can reach 85%. The results illustrate that our approach to extracting the line structures of blood vessels dramatically improves the vascular image and enables accurate extraction of blood vessels in LSFM images.
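Step 1 rests on the fact that, at a bright tubular structure, the Hessian has one strongly negative eigenvalue across the vessel and one near zero along it. A minimal Frangi-style sketch of such an eigenvalue-based line filter (illustrative parameter values; this is a generic vesselness filter, not the authors' trained-network enhancement):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness2d(img, sigma=2.0, beta=0.5, c=0.5):
    """Minimal Frangi-style 2D line filter. Bright tubular structures
    give one large negative Hessian eigenvalue and one near zero."""
    # Scale-normalised second derivatives via Gaussian derivative filters
    Hxx = gaussian_filter(img, sigma, order=(0, 2)) * sigma**2
    Hyy = gaussian_filter(img, sigma, order=(2, 0)) * sigma**2
    Hxy = gaussian_filter(img, sigma, order=(1, 1)) * sigma**2
    # Closed-form eigenvalues of the 2x2 symmetric Hessian
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy**2)
    l1 = 0.5 * (Hxx + Hyy + tmp)
    l2 = 0.5 * (Hxx + Hyy - tmp)
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)  # |l1| <= |l2|
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)   # blob-vs-line ratio
    s = np.sqrt(l1**2 + l2**2)               # overall structure strength
    v = np.exp(-rb**2 / (2 * beta**2)) * (1 - np.exp(-s**2 / (2 * c**2)))
    v[l2 > 0] = 0.0                          # keep only bright-on-dark lines
    return v
```

The FLICM step of the paper would then cluster the values of this enhanced image rather than the raw intensities.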
High Dynamic Range Pixel Array Detector for Scanning Transmission Electron Microscopy.
Tate, Mark W; Purohit, Prafull; Chamberlain, Darol; Nguyen, Kayla X; Hovden, Robert; Chang, Celesta S; Deb, Pratiti; Turgut, Emrah; Heron, John T; Schlom, Darrell G; Ralph, Daniel C; Fuchs, Gregory D; Shanks, Katherine S; Philipp, Hugh T; Muller, David A; Gruner, Sol M
2016-02-01
We describe a hybrid pixel array detector (electron microscope pixel array detector, or EMPAD) adapted for use in electron microscope applications, especially as a universal detector for scanning transmission electron microscopy. The 128×128 pixel detector consists of a 500 µm thick silicon diode array bump-bonded pixel-by-pixel to an application-specific integrated circuit. The in-pixel circuitry provides a 1,000,000:1 dynamic range within a single frame, allowing the direct electron beam to be imaged while still maintaining single-electron sensitivity. A 1.1 kHz framing rate enables rapid data collection and minimizes sample-drift distortions while scanning. By capturing the entire unsaturated diffraction pattern in scanning mode, one can simultaneously capture bright-field, dark-field, and phase-contrast information, as well as analyze the full scattering distribution, allowing true center-of-mass imaging. The scattering is recorded on an absolute scale, so that information such as local sample thickness can be directly determined. This paper describes the detector architecture, data acquisition system, and preliminary results from experiments with 80-200 keV electron beams.
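Center-of-mass imaging from such data reduces to a first-moment computation: for each scan position, the centroid of the recorded diffraction pattern gives the beam deflection. A minimal sketch, assuming a 4D array ordered (scan_y, scan_x, det_y, det_x):

```python
import numpy as np

def center_of_mass_images(data4d):
    """Given a 4D scan (scan_y, scan_x, det_y, det_x) of unsaturated
    diffraction patterns, return the y and x centre-of-mass shift of the
    scattered intensity (relative to the detector centre) at each scan
    position."""
    sy, sx, dy, dx = data4d.shape
    yy, xx = np.mgrid[0:dy, 0:dx]
    total = data4d.sum(axis=(2, 3))
    com_y = (data4d * yy).sum(axis=(2, 3)) / total - (dy - 1) / 2
    com_x = (data4d * xx).sum(axis=(2, 3)) / total - (dx - 1) / 2
    return com_y, com_x
```

The two returned maps can be interpreted as the components of the momentum transfer imparted by the specimen at each probe position.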
Wavelength scanning achieves pixel super-resolution in holographic on-chip microscopy
NASA Astrophysics Data System (ADS)
Luo, Wei; Göröcs, Zoltan; Zhang, Yibo; Feizi, Alborz; Greenbaum, Alon; Ozcan, Aydogan
2016-03-01
Lensfree holographic on-chip imaging is a potent solution for high-resolution and field-portable bright-field imaging over a wide field-of-view. Previous lensfree imaging approaches utilize a pixel super-resolution technique, which relies on sub-pixel lateral displacements between the lensfree diffraction patterns and the image sensor's pixel-array, to achieve sub-micron resolution under unit magnification using state-of-the-art CMOS imager chips, commonly used in e.g., mobile phones. Here we report, for the first time, a wavelength scanning based pixel super-resolution technique in lensfree holographic imaging. We developed an iterative super-resolution algorithm, which generates high-resolution reconstructions of the specimen from low-resolution (i.e., under-sampled) diffraction patterns recorded at multiple wavelengths within a narrow spectral range (e.g., 10-30 nm). Compared with lateral shift-based pixel super-resolution, this wavelength scanning approach does not require any physical shifts in the imaging setup, and the resolution improvement is uniform in all directions across the sensor-array. Our wavelength scanning super-resolution approach can also be integrated with multi-height and/or multi-angle on-chip imaging techniques to obtain even higher resolution reconstructions. For example, using wavelength scanning together with multi-angle illumination, we achieved a half-pitch resolution of 250 nm, corresponding to a numerical aperture of 1. In addition to pixel super-resolution, the small scanning steps in wavelength also enable us to robustly unwrap phase, revealing the specimen's optical path length in our reconstructed images. We believe that this new wavelength scanning based pixel super-resolution approach can provide competitive microscopy solutions for high-resolution and field-portable imaging needs, potentially impacting tele-pathology applications in resource-limited settings.
A fast and efficient segmentation scheme for cell microscopic image.
Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H
2007-04-27
Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial, since it accounts for most of the processing time needed to segment an image. The main contribution of this work is focused on how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used to reduce the inherent redundancy present in huge pixel databases (i.e. images with expert pixel segmentation). Hybrid color space design is also used to improve the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between recognition rate and processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probabilities are easy to compute with Platt's method. A new segmentation scheme using probabilistic pixel classification has therefore been developed. This scheme has several free parameters whose selection must be automated, but existing criteria for evaluating segmentation quality are not well adapted for cell segmentation, especially when comparison with an expert pixel segmentation must be achieved. Another important contribution of this paper is the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of the new cell segmentation quality criterion produces efficient cell segmentation.
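The vector-quantization-plus-SVM idea can be sketched with scikit-learn: compress each class's pixel set to a few k-means prototypes, then train the SVM on prototypes only, so the decision function has few support vectors and per-pixel classification stays cheap. This is a generic stand-in, not the authors' exact quantizer or hybrid colour space; `n_proto` is an illustrative parameter:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def train_pixel_svm(features, labels, n_proto=50):
    """Vector-quantise each class's pixels to `n_proto` k-means
    prototypes, then fit an SVM on the prototypes only. This bounds the
    training set size, and hence the decision-function complexity."""
    protos, proto_labels = [], []
    for cls in np.unique(labels):
        k = min(n_proto, int((labels == cls).sum()))
        km = KMeans(n_clusters=k, n_init=4, random_state=0)
        km.fit(features[labels == cls])
        protos.append(km.cluster_centers_)
        proto_labels.append(np.full(k, cls))
    # probability=True enables Platt scaling for posterior estimates
    svm = SVC(kernel="rbf", probability=True)
    svm.fit(np.vstack(protos), np.concatenate(proto_labels))
    return svm

def segment(svm, image):
    """Classify every pixel of an H x W x C feature image."""
    h, w, c = image.shape
    return svm.predict(image.reshape(-1, c)).reshape(h, w)
```

The `probability=True` option corresponds to the Platt posterior estimation mentioned in the abstract.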
Dual-mode optical microscope based on single-pixel imaging
NASA Astrophysics Data System (ADS)
Rodríguez, A. D.; Clemente, P.; Tajahuerce, E.; Lancis, J.
2016-07-01
We demonstrate an inverted microscope that can image specimens in both reflection and transmission modes simultaneously with a single light source. The microscope utilizes a digital micromirror device (DMD) for patterned illumination together with two single-pixel photosensors for efficient light detection. The system, a scan-less device with no moving parts, works by sequentially projecting onto the sample a set of binary intensity patterns codified on a modified commercial DMD. Data to be displayed are geometrically transformed before being written into a memory cell to cancel optical artifacts arising from the diamond-shaped structure of the micromirror array. The 24-bit color depth of the display is fully exploited to increase the frame rate by a factor of 24, which makes the technique practicable for real samples. Our commercial DMD-based LED illumination is cost-effective and can easily be coupled as an add-on module to existing inverted microscopes. The reflection and transmission information provided by our dual microscope complement each other and can be useful for imaging non-uniform samples and for preventing self-shadowing effects.
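The single-pixel principle itself can be illustrated with orthogonal Hadamard patterns (a standard choice for structured illumination; the paper's actual pattern set and DMD details are not specified here). Each binary pattern is projected in turn, one bucket-detector value is recorded per pattern, and the image is recovered by inverting the orthogonal transform:

```python
import numpy as np
from scipy.linalg import hadamard

def simulate_and_reconstruct(image):
    """Single-pixel imaging sketch: project binary (0/1) Hadamard
    patterns, record one bucket-detector value per pattern, and invert
    the orthogonal Hadamard transform to recover the image."""
    n = image.size
    H = hadamard(n)                  # +1/-1 Sylvester matrix (symmetric)
    patterns = (H + 1) // 2          # binary on/off DMD patterns
    x = image.ravel().astype(float)
    y = patterns @ x                 # simulated bucket measurements
    # Row 0 of H is all ones, so y[0] equals the total intensity sum(x);
    # this converts the 0/1 measurements back to the +1/-1 transform:
    hx = 2 * y - y[0]
    return (H @ hx / n).reshape(image.shape)
```

Because H is orthogonal (H Hᵀ = nI), the reconstruction is exact for noiseless measurements; in practice compressive variants use fewer patterns than pixels.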
Bishara, Waheb; Sikora, Uzair; Mudanyali, Onur; Su, Ting-Wei; Yaglidere, Oguzhan; Luckhart, Shirley; Ozcan, Aydogan
2011-04-07
We report a portable lensless on-chip microscope that can achieve <1 µm resolution over a wide field-of-view of ∼24 mm² without the use of any mechanical scanning. This compact on-chip microscope weighs ∼95 g and is based on partially coherent digital in-line holography. Multiple fiber-optic waveguides are butt-coupled to light-emitting diodes, which are controlled by a low-cost micro-controller to sequentially illuminate the sample. The resulting lensfree holograms are then captured by a digital sensor-array and are rapidly processed using a pixel super-resolution algorithm to generate much higher resolution holographic images (both phase and amplitude) of the objects. This wide-field and high-resolution on-chip microscope, being compact and light-weight, would be important for global health problems such as diagnosis of infectious diseases in remote locations. Toward this end, we validate the performance of this field-portable microscope by imaging human malaria parasites (Plasmodium falciparum) in thin blood smears. Our results constitute the first time that a lensfree on-chip microscope has successfully imaged malaria parasites.
Spatially resolved D-T2 correlation NMR of porous media.
Zhang, Yan; Blümich, Bernhard
2014-05-01
Within the past decade, 2D Laplace nuclear magnetic resonance (NMR) has been developed to analyze pore geometry and diffusion of fluids in porous media on the micrometer scale. Many objects like rocks and concrete are heterogeneous on the macroscopic scale, and an integral analysis of microscopic properties provides volume-averaged information. Magnetic resonance imaging (MRI) resolves this spatial average on the contrast scale set by the particular MRI technique. Desirable contrast parameters for studies of fluid transport in porous media derive from the pore-size distribution and the pore connectivity. These microscopic parameters are accessed by 1D and 2D Laplace NMR techniques. It is therefore desirable to combine MRI and 2D Laplace NMR to image functional information on fluid transport in porous media. Because 2D Laplace-resolved MRI demands excessive measuring time, this study investigates the possibility of restricting the 2D Laplace analysis to the sum signals from low-resolution pixels which correspond to pixels of similar amplitude in high-resolution images. In this exploratory study, spatially resolved D-T2 correlation maps from glass beads and mortar are analyzed. Regions of similar contrast are first identified in high-resolution images to locate corresponding pixels in low-resolution images generated with D-T2-resolved MRI, for subsequent pixel summation to improve the signal-to-noise ratio of contrast-specific D-T2 maps. This method is expected to contribute valuable information on correlated sample heterogeneity across the macroscopic and microscopic scales in various types of porous materials, including building materials and rock. Copyright © 2014 Elsevier Inc. All rights reserved.
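The pixel-summation step can be sketched as follows, assuming the high-resolution contrast image has already been averaged down to the low-resolution grid and that simple quantile thresholds (an illustrative choice, not the authors' region-identification procedure) define the contrast classes:

```python
import numpy as np

def sum_by_contrast_class(highres, lowres_stack, n_classes=3):
    """Group low-resolution pixels by the contrast class of the
    corresponding high-resolution region, then sum the signal stack
    (e.g. echo trains for D-T2 analysis) over each group to raise SNR.

    highres:      (ny, nx) contrast image on the low-resolution grid
    lowres_stack: (n_points, ny, nx) signal per low-resolution pixel
    """
    edges = np.quantile(highres, np.linspace(0, 1, n_classes + 1))
    labels = np.clip(np.digitize(highres, edges[1:-1]), 0, n_classes - 1)
    return [lowres_stack[:, labels == k].sum(axis=1) for k in range(n_classes)]
```

Each returned summed signal would then feed a single 2D inverse Laplace inversion, one per contrast class, instead of one per pixel.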
Thrombus segmentation by texture dynamics from microscopic image sequences
NASA Astrophysics Data System (ADS)
Brieu, Nicolas; Serbanovic-Canic, Jovana; Cvejic, Ana; Stemple, Derek; Ouwehand, Willem; Navab, Nassir; Groher, Martin
2010-03-01
The genetic factors of thrombosis are commonly explored by microscopically imaging the coagulation of blood cells induced by injuring a vessel of mice or of zebrafish mutants. The latter species is particularly interesting since its skin transparency permits non-invasive acquisition of microscopic images of the scene with a CCD camera and estimation of the parameters characterizing thrombus development. These parameters are currently determined by manual outlining, which is both error-prone and extremely time-consuming. Even though a technique for automatic thrombus extraction would be highly valuable for gene analysts, little work can be found, mainly due to very low image contrast and spurious structures. In this work, we propose to semi-automatically segment the thrombus over time from microscopic image sequences of wild-type zebrafish larvae. To compensate for the lack of valuable spatial information, our main idea consists of exploiting the temporal information by modeling the variations of the pixel intensities over successive temporal windows with a linear Markov-based dynamic texture formalization. We then derive, from the estimated model parameters, an image which represents the probability of a pixel belonging to the thrombus. We employ this probability image to accurately estimate the thrombus position via an active contour segmentation that also incorporates prior and spatial information from the underlying intensity images. The performance of our approach is tested on three microscopic image sequences. We show that the thrombus is accurately tracked over time in each sequence if the respective parameters controlling prior influence and contour stiffness are correctly chosen.
CHAMP (Camera, Handlens, and Microscope Probe)
NASA Technical Reports Server (NTRS)
Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.
2005-01-01
CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution from infinity imaging down to diffraction-limited microscopy (3 microns/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We overview CHAMP's instrument performance and basic design considerations below.
Single-photon counting multicolor multiphoton fluorescence microscope.
Buehler, Christof; Kim, Ki H; Greuter, Urs; Schlumpf, Nick; So, Peter T C
2005-01-01
We present a multicolor multiphoton fluorescence microscope with single-photon counting sensitivity. The system integrates a standard multiphoton fluorescence microscope, an optical grating spectrograph operating in the UV-Vis wavelength region, and a 16-anode photomultiplier tube (PMT). The major technical innovation is the development of a multichannel photon counting card (mC-PhCC) for direct signal collection from multi-anode PMTs. The electronic design of the mC-PhCC employs a high-throughput, fully parallel, single-photon counting scheme along with a high-speed electrical or fiber-optical link interface to the data acquisition computer. There is no electronic crosstalk among the detection channels of the mC-PhCC. The collected signal remains linear up to an incident photon rate of 10^8 counts per second. The high-speed data interface offers ample bandwidth for real-time readout: 2-MByte lambda-stacks, composed of 16 spectral channels of 256 × 256 pixel images with 12-bit dynamic range, can be transferred at 30 frames per second. The modular design of the mC-PhCC can readily be extended to accommodate PMTs with more anodes; data acquisition from a 64-anode PMT has been verified. As a demonstration of system performance, spectrally resolved images of fluorescent latex spheres and ex-vivo human skin are reported. The multicolor multiphoton microscope is suitable for highly sensitive, real-time, spectrally resolved three-dimensional imaging in biomedical applications.
Overview of Athena Microscopic Imager Results
NASA Technical Reports Server (NTRS)
Herkenhoff, K.; Squyres, S.; Arvidson, R.; Bass, D.; Bell, J., III; Bertelsen, P.; Cabrol, N.; Ehlmann, B.; Farrand, W.; Gaddis, L.
2005-01-01
The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on an extendable arm, the Instrument Deployment Device (IDD). The MI acquires images at a spatial resolution of 31 microns/pixel over a broad spectral range (400-700 nm). The MI uses the same electronics design as the other MER cameras, but its optics yield a field of view of 32 × 32 mm across a 1024 × 1024 pixel CCD image. The MI acquires images using only solar or skylight illumination of the target surface. The MI science objectives, instrument design and calibration, operation, and data processing were described by Herkenhoff et al. Initial results of the MI experiment on both MER rovers (Spirit and Opportunity) have been published previously. Highlights of these and more recent results are described.
Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji
2016-02-22
In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
Curiosity's Mars Hand Lens Imager (MAHLI): Initial Observations and Activities
NASA Technical Reports Server (NTRS)
Edgett, K. S.; Yingst, R. A.; Minitti, M. E.; Robinson, M. L.; Kennedy, M. R.; Lipkaman, L. J.; Jensen, E. H.; Anderson, R. C.; Bean, K. M.; Beegle, L. W.;
2013-01-01
MAHLI (Mars Hand Lens Imager) is a 2-megapixel focusable macro lens color camera on the turret on Curiosity's robotic arm. The investigation centers on stratigraphy, grain-scale texture, structure, mineralogy, and morphology of geologic materials at Curiosity's Gale robotic field site. MAHLI acquires focused images at working distances of 2.1 cm to infinity; for reference, at 2.1 cm the scale is 14 microns/pixel; at 6.9 cm it is 31 microns/pixel, like the Spirit and Opportunity Microscopic Imager (MI) cameras.
Multi-frame partially saturated images blind deconvolution
NASA Astrophysics Data System (ADS)
Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2016-12-01
When blurred images contain saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and introduce local ringing artifacts. In this paper, we propose a method to deal with this problem under a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light-streak detection scheme using multi-frame blurred images is incorporated into the regularization constraint. Second, we treat image regions affected by saturated pixels separately by modeling a weighted matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated and that the restored images have richer detail and fewer negative effects compared to state-of-the-art methods.
Modular Scanning Confocal Microscope with Digital Image Processing.
Ye, Xianjun; McCluskey, Matthew D
2016-01-01
In conventional confocal microscopy, a physical pinhole is placed at the image plane prior to the detector to limit the observation volume. In this work, we present a modular design of a scanning confocal microscope which uses a CCD camera to replace the physical pinhole for materials science applications. Experimental scans were performed on a microscope resolution target, a semiconductor chip carrier, and a piece of etched silicon wafer. The data collected by the CCD were processed to yield images of the specimen. By selecting effective pixels in the recorded CCD images, a virtual pinhole is created. By analyzing the image moments of the imaging data, a lateral resolution enhancement is achieved using a 20×/NA = 0.4 microscope objective at 532 nm laser wavelength.
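The virtual pinhole amounts to summing only the CCD pixels near the detected spot at each scan position. A minimal sketch (the centroid-plus-radius selection rule and the radius value are illustrative assumptions, not the authors' exact pixel-selection procedure):

```python
import numpy as np

def virtual_pinhole_signal(frame, radius=2):
    """Sum only the CCD pixels within `radius` of the spot centroid,
    emulating a physical pinhole. One such value per scan position
    becomes one pixel of the reconstructed confocal image."""
    yy, xx = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    total = frame.sum()
    cy = (frame * yy).sum() / total     # intensity-weighted spot centre
    cx = (frame * xx).sum() / total
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return frame[mask].sum()
```

Shrinking `radius` tightens the effective pinhole, trading signal for optical sectioning, exactly as moving to a smaller physical pinhole would.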
NASA Technical Reports Server (NTRS)
Mungas, Greg S.; Gursel, Yekta; Sepulveda, Cesar A.; Anderson, Mark; La Baw, Clayton; Johnson, Kenneth R.; Deans, Matthew; Beegle, Luther; Boynton, John
2008-01-01
Conducting high-resolution field microscopy with coupled laser spectroscopy that can selectively analyze the surface chemistry of individual pixels in a scene is an enabling capability for next-generation robotic and manned spaceflight missions, as well as civil and military applications. In the laboratory, we use a range of imaging and surface preparation tools that provide in-focus images, context imaging for identifying features to investigate at high magnification, and surface-optical coupling that allows us to apply optical spectroscopic techniques for analyzing surface chemistry, particularly at high magnifications. The camera, hand lens, and microscope probe with scannable laser spectroscopy (CHAMP-SLS) is an imaging/spectroscopy instrument capable of imaging continuously from infinity down to high-resolution microscopy (approx. 1 micron/pixel in the final camera format); the closer CHAMP-SLS is placed to a feature, the higher the resultant magnification. At hand-lens to microscopic magnifications, the imaged scene can be selectively interrogated with point spectroscopic techniques such as Raman spectroscopy, microscopic laser-induced breakdown spectroscopy (micro-LIBS), laser ablation mass spectrometry, fluorescence spectroscopy, and/or reflectance spectroscopy. This paper summarizes the optical design, development, and testing of the CHAMP-SLS optics.
The Physics of Imaging with Remote Sensors : Photon State Space & Radiative Transfer
NASA Technical Reports Server (NTRS)
Davis, Anthony B.
2012-01-01
Standard (mono-pixel/steady-source) retrieval methodology is reaching its fundamental limit, even with access to multi-angle/multi-spectral photo-polarimetry. Two emerging classes of retrieval algorithm are worth nurturing next: multi-pixel and time-domain methods, including wave-radiometry transition regimes and cross-fertilization with bio-medical imaging. This physics-based view of remote sensing addresses three questions: What is "photon state space"? What is "radiative transfer"? Is "the end" in sight? Two wide-open frontiers are surveyed, with examples and variations.
NASA Astrophysics Data System (ADS)
Tyliszczak, T.; Hitchcock, P.; Kilcoyne, A. L. D.; Ade, H.; Hitchcock, A. P.; Fakra, S.; Steele, W. F.; Warwick, T.
2002-03-01
Two new scanning x-ray transmission microscopes are being built at beamline 5.3.2 and beamline 7.0 of the Advanced Light Source that have novel aspects in their control and acquisition systems. Both microscopes use multiaxis laser interferometry to improve the precision of pixel location during imaging and energy scans as well as to remove image distortions. Beamline 5.3.2 is a new beamline where the new microscope will be dedicated to studies of polymers in the 250-600 eV energy range. Since this is a bending magnet beamline with lower x-ray brightness than undulator beamlines, special attention is given to the design not only to minimize distortions and vibrations but also to optimize the controls and acquisition to improve data collection efficiency. The 5.3.2 microscope control and acquisition system is based on a PC running Windows 2000. All mechanical stages are moved by stepper motors with rack-mounted controllers. A dedicated counter board is used for counting and timing, and a multi-input/output board is used for analog acquisition and control of the focusing mirror. A three-axis differential laser interferometer is used to improve stability and precision by careful tracking of the relative positions of the sample and zone plate. Each axis measures the relative distance between a mirror placed on the sample stage and a mirror attached to the zone plate holder. Agilent Technologies HP 10889A servo-axis interferometer boards are used. While they were designed to control servo motors, our tests show that they can be used to directly control the piezo stage. The use of the interferometer servo-axis boards provides excellent point stability for spectral measurements. The interferometric feedback also provides active vibration isolation, which reduces the deleterious impact of mechanical vibrations up to 20-30 Hz. It can also improve the speed and precision of image scans.
Custom C++ software has been written to provide user friendly control of the microscope and integration with visual light microscopy indexing of the samples. The beamline 7.0 microscope upgrade is a new design which will replace the existing microscope. The design is similar to that of beamline 5.3.2, including interferometric position encoding. However, the acquisition and control are based on VXI systems, a Sun computer, and LabVIEW™ software. The main objective of the BL 7.0 microscope upgrade is to achieve precise image scans at very high speed (pixel dwells as short as 10 μs) to take full advantage of the high brightness of the 7.0 undulator beamline. Results of tests and a discussion of the benefits of our scanning microscope designs will be presented.
Evidence from Opportunity's Microscopic Imager for water on Meridiani Planum.
Herkenhoff, K E; Squyres, S W; Arvidson, R; Bass, D S; Bell, J F; Bertelsen, P; Ehlmann, B L; Farrand, W; Gaddis, L; Greeley, R; Grotzinger, J; Hayes, A G; Hviid, S F; Johnson, J R; Jolliff, B; Kinch, K M; Knoll, A H; Madsen, M B; Maki, J N; McLennan, S M; McSween, H Y; Ming, D W; Rice, J W; Richter, L; Sims, M; Smith, P H; Soderblom, L A; Spanovich, N; Sullivan, R; Thompson, S; Wdowiak, T; Weitz, C; Whelley, P
2004-12-03
The Microscopic Imager on the Opportunity rover analyzed textures of soils and rocks at Meridiani Planum at a scale of 31 micrometers per pixel. The uppermost millimeter of some soils is weakly cemented, whereas other soils show little evidence of cohesion. Rock outcrops are laminated on a millimeter scale; image mosaics of cross-stratification suggest that some sediments were deposited by flowing water. Vugs in some outcrop faces are probably molds formed by dissolution of relatively soluble minerals during diagenesis. Microscopic images support the hypothesis that hematite-rich spherules observed in outcrops and soils also formed diagenetically as concretions.
NASA Technical Reports Server (NTRS)
Scott, Peter (Inventor); Sridhar, Ramalingam (Inventor); Bandera, Cesar (Inventor); Xia, Shu (Inventor)
2002-01-01
A foveal image sensor integrated circuit comprising a plurality of CMOS active pixel sensors arranged both within and about a central fovea region of the chip. The pixels in the central fovea region have a smaller size than the pixels arranged in peripheral rings about the central region. A new photocharge normalization scheme and associated circuitry normalizes the output signals from the different size pixels in the array. The pixels are assembled into a multi-resolution rectilinear foveal image sensor chip using a novel access scheme to reduce the number of analog RAM cells needed. Localized spatial resolution declines monotonically with offset from the imager's optical axis, analogous to biological foveal vision.
RAMTaB: Robust Alignment of Multi-Tag Bioimages
Raza, Shan-e-Ahmed; Humayun, Ahmad; Abouna, Sylvie; Nattkemper, Tim W.; Epstein, David B. A.; Khan, Michael; Rajpoot, Nasir M.
2012-01-01
Background In recent years, new microscopic imaging techniques have evolved to allow us to visualize several different proteins (or other biomolecules) in a visual field. Analysis of protein co-localization becomes viable because molecules can interact only when they are located close to each other. We present a novel approach to align images in a multi-tag fluorescence image stack. The proposed approach is applicable to multi-tag bioimaging systems which (a) acquire fluorescence images by sequential staining and (b) simultaneously capture a phase contrast image corresponding to each of the fluorescence images. To the best of our knowledge, there is no existing method in the literature, which addresses simultaneous registration of multi-tag bioimages and selection of the reference image in order to maximize the overall overlap between the images. Methodology/Principal Findings We employ a block-based method for registration, which yields a confidence measure to indicate the accuracy of our registration results. We derive a shift metric in order to select the Reference Image with Maximal Overlap (RIMO), in turn minimizing the total amount of non-overlapping signal for a given number of tags. Experimental results show that the Robust Alignment of Multi-Tag Bioimages (RAMTaB) framework is robust to variations in contrast and illumination, yields sub-pixel accuracy, and successfully selects the reference image resulting in maximum overlap. The registration results are also shown to significantly improve any follow-up protein co-localization studies. Conclusions For the discovery of protein complexes and of functional protein networks within a cell, alignment of the tag images in a multi-tag fluorescence image stack is a key pre-processing step. The proposed framework is shown to produce accurate alignment results on both real and synthetic data. 
Our future work will use the aligned multi-channel fluorescence image data for normal and diseased tissue specimens to analyze molecular co-expression patterns and functional protein networks. PMID:22363510
Modular Scanning Confocal Microscope with Digital Image Processing
McCluskey, Matthew D.
2016-01-01
In conventional confocal microscopy, a physical pinhole is placed at the image plane prior to the detector to limit the observation volume. In this work, we present a modular design of a scanning confocal microscope which uses a CCD camera to replace the physical pinhole for materials science applications. Experimental scans were performed on a microscope resolution target, a semiconductor chip carrier, and a piece of etched silicon wafer. The data collected by the CCD were processed to yield images of the specimen. By selecting effective pixels in the recorded CCD images, a virtual pinhole is created. By analyzing the image moments of the imaging data, a lateral resolution enhancement is achieved with a 20×/NA = 0.4 microscope objective at a 532 nm laser wavelength. PMID:27829052
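The virtual-pinhole idea above, selecting effective pixels from each recorded CCD frame instead of using a physical aperture, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the (scan-y, scan-x, frame-h, frame-w) array layout, and the circular mask are all assumptions.

```python
import numpy as np

def virtual_pinhole_image(ccd_stack, radius=2):
    """Form a confocal image from a stack of CCD frames (one frame per scan
    position) by summing only pixels within `radius` of the frame centre,
    where the focused spot is assumed to land.

    ccd_stack: array of shape (ny, nx, h, w). Returns an (ny, nx) image.
    """
    ny, nx, h, w = ccd_stack.shape
    cy, cx = h // 2, w // 2
    # Boolean mask acting as the "virtual pinhole" around the spot image.
    yy, xx = np.ogrid[:h, :w]
    pinhole = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return ccd_stack[:, :, pinhole].sum(axis=-1)
```

Shrinking `radius` tightens the effective pinhole (better sectioning, less signal), which is the trade-off a physical pinhole fixes at alignment time but a virtual one can revisit after acquisition.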
Sobieranski, Antonio C; Inci, Fatih; Tekin, H Cumhur; Yuksekkaya, Mehmet; Comunello, Eros; Cobra, Daniel; von Wangenheim, Aldo; Demirci, Utkan
2017-01-01
In this paper, an irregular displacement-based lensless wide-field microscopy imaging platform is presented, combining digital in-line holography and computational pixel super-resolution using multi-frame processing. The samples are illuminated by a nearly coherent illumination system, and the hologram shadows are projected onto a complementary metal-oxide semiconductor-based imaging sensor. To increase the resolution, a multi-frame pixel super-resolution approach is employed to produce a single holographic image from multiple frame observations of the scene with small planar displacements. Displacements are resolved by a hybrid approach: (i) alignment of the low-resolution (LR) images by a fast feature-based registration method, and (ii) fine adjustment of the sub-pixel information using a continuous optimization approach designed to find the globally optimal solution. A numerical phase-retrieval method is applied to decode the signal and reconstruct the morphological details of the analyzed sample. The presented approach was evaluated with various biological samples, including sperm and platelets, whose dimensions are on the order of a few microns. The obtained results demonstrate a spatial resolution of 1.55 µm over a field-of-view of ≈30 mm2. PMID:29657866
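The multi-frame pixel super-resolution step can be illustrated with a naive shift-and-add sketch. This assumes the sub-pixel displacements are already known (the paper recovers them with registration plus continuous optimization, which is not reproduced here); the function name and nearest-cell rounding are illustrative.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Naive shift-and-add pixel super-resolution.

    frames: list of (h, w) low-resolution images.
    shifts: list of (dy, dx) sub-pixel displacements in low-res pixel units.
    factor: integer upsampling factor.
    Returns an (h*factor, w*factor) high-resolution estimate.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(frames, shifts):
        # Round each frame's displacement to the nearest high-res grid cell.
        oy = int(round(dy * factor)) % factor
        ox = int(round(dx * factor)) % factor
        acc[oy::factor, ox::factor] += img
        cnt[oy::factor, ox::factor] += 1
    cnt[cnt == 0] = 1  # leave unobserved high-res cells at zero
    return acc / cnt
```

With enough frames whose shifts cover distinct sub-pixel offsets, every high-resolution cell is observed and the effective pixel pitch shrinks by `factor`.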
Li, Yongxiao; Montague, Samantha J; Brüstle, Anne; He, Xuefei; Gillespie, Cathy; Gaus, Katharina; Gardiner, Elizabeth E; Lee, Woei Ming
2018-02-28
In this study, we introduce two key improvements that overcome limitations of existing polygon scanning microscopes while maintaining high spatial and temporal imaging resolution over a large field of view (FOV). First, we propose a simple and straightforward means to control the scanning angle of the polygon mirror to carry out photomanipulation without resorting to high speed optical modulators. Second, we devise a flexible data sampling method that directly yields over 2-fold higher image contrast and digital images with 100 megapixels (10 240 × 10 240) per frame at 0.25 Hz. This generates sub-diffraction-limited pixels (60 nm per pixel over a FOV of 512 μm), which increases the degrees of freedom to extract signals computationally. The combined optical and digital control recorded fine fluorescence recovery after localized photobleaching (r ~10 μm) within fluorescent giant unilamellar vesicles, and micro-vascular dynamics after laser-induced injury during thrombus formation in vivo. These improvements expand the quantitative biological-imaging capacity of any polygon scanning microscope system. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ultra-fast bright field and fluorescence imaging of the dynamics of micrometer-sized objects
NASA Astrophysics Data System (ADS)
Chen, Xucai; Wang, Jianjun; Versluis, Michel; de Jong, Nico; Villanueva, Flordeliza S.
2013-06-01
High speed imaging has applications in a wide range of industrial and scientific research. In medical research, high speed imaging has the potential to reveal insight into mechanisms of action of various therapeutic interventions. Examples include ultrasound assisted thrombolysis, drug delivery, and gene therapy. Visual observation of the ultrasound, microbubble, and biological cell interaction may help the understanding of the dynamic behavior of microbubbles and may eventually lead to better design of such delivery systems. We present the development of a high speed bright field and fluorescence imaging system that incorporates external mechanical waves such as ultrasound. Through collaborative design and contract manufacturing, a high speed imaging system has been successfully developed at the University of Pittsburgh Medical Center. We named the system "UPMC Cam," to refer to the integrated imaging system that includes the multi-frame camera and its unique software control, the customized modular microscope, the customized laser delivery system, its auxiliary ultrasound generator, and the combined ultrasound and optical imaging chamber for in vitro and in vivo observations. This system is capable of imaging microscopic bright field and fluorescence movies at 25 × 10⁶ frames per second for 128 frames, with a frame size of 920 × 616 pixels. Example images of microbubbles under ultrasound are shown to demonstrate the potential application of the system.
3D imaging of optically cleared tissue using a simplified CLARITY method and on-chip microscopy
Zhang, Yibo; Shin, Yoonjung; Sung, Kevin; Yang, Sam; Chen, Harrison; Wang, Hongda; Teng, Da; Rivenson, Yair; Kulkarni, Rajan P.; Ozcan, Aydogan
2017-01-01
High-throughput sectioning and optical imaging of tissue samples using traditional immunohistochemical techniques can be costly and inaccessible in resource-limited areas. We demonstrate three-dimensional (3D) imaging and phenotyping in optically transparent tissue using lens-free holographic on-chip microscopy as a low-cost, simple, and high-throughput alternative to conventional approaches. The tissue sample is passively cleared using a simplified CLARITY method and stained using 3,3′-diaminobenzidine to target cells of interest, enabling bright-field optical imaging and 3D sectioning of thick samples. The lens-free computational microscope uses pixel super-resolution and multi-height phase recovery algorithms to digitally refocus throughout the cleared tissue and obtain a 3D stack of complex-valued images of the sample, containing both phase and amplitude information. We optimized the tissue-clearing and imaging system by finding the optimal illumination wavelength, tissue thickness, sample preparation parameters, and the number of heights of the lens-free image acquisition and implemented a sparsity-based denoising algorithm to maximize the imaging volume and minimize the amount of the acquired data while also preserving the contrast-to-noise ratio of the reconstructed images. As a proof of concept, we achieved 3D imaging of neurons in a 200-μm-thick cleared mouse brain tissue over a wide field of view of 20.5 mm2. The lens-free microscope also achieved more than an order-of-magnitude reduction in raw data compared to a conventional scanning optical microscope imaging the same sample volume. Being low cost, simple, high-throughput, and data-efficient, we believe that this CLARITY-enabled computational tissue imaging technique could find numerous applications in biomedical diagnosis and research in low-resource settings. PMID:28819645
3D on-chip microscopy of optically cleared tissue
NASA Astrophysics Data System (ADS)
Zhang, Yibo; Shin, Yoonjung; Sung, Kevin; Yang, Sam; Chen, Harrison; Wang, Hongda; Teng, Da; Rivenson, Yair; Kulkarni, Rajan P.; Ozcan, Aydogan
2018-02-01
Traditional pathology relies on tissue biopsy, micro-sectioning, immunohistochemistry and microscopic imaging, which are relatively expensive and labor-intensive, and therefore are less accessible in resource-limited areas. Low-cost tissue clearing techniques, such as the simplified CLARITY method (SCM), are promising to potentially reduce the cost of disease diagnosis by providing 3D imaging and phenotyping of thicker tissue samples with simpler preparation steps. However, the mainstream imaging approach for cleared tissue, fluorescence microscopy, suffers from high cost, photobleaching and signal fading. As an alternative approach to fluorescence, here we demonstrate 3D imaging of SCM-cleared tissue using on-chip holography, which is based on pixel super-resolution and multi-height phase recovery algorithms to digitally compute the sample's amplitude and phase images at various z-slices/depths through the sample. The tissue clearing procedures and the lens-free imaging system were jointly optimized to find the best illumination wavelength, tissue thickness, staining solution pH, and number of hologram heights to maximize the imaged tissue volume and minimize the amount of acquired data, while maintaining a high contrast-to-noise ratio for the imaged cells. After this optimization, we achieved 3D imaging of a 200-μm thick cleared mouse brain tissue over a field-of-view of <20 mm2, and the resulting 3D z-stack agrees well with the images acquired with a scanning lens-based microscope (20×/0.75NA). Moreover, the lens-free microscope achieves an order-of-magnitude better data efficiency compared to its lens-based counterparts for volumetric imaging of samples. The presented low-cost and high-throughput lens-free tissue imaging technique enabled by CLARITY can be used in various biomedical applications in low-resource settings.
Elliott, Amicia D.; Gao, Liang; Ustione, Alessandro; Bedard, Noah; Kester, Robert; Piston, David W.; Tkaczyk, Tomasz S.
2012-01-01
Summary The development of multi-colored fluorescent proteins, nanocrystals and organic fluorophores, along with the resulting engineered biosensors, has revolutionized the study of protein localization and dynamics in living cells. Hyperspectral imaging has proven to be a useful approach for such studies, but this technique is often limited by low signal and insufficient temporal resolution. Here, we present an implementation of a snapshot hyperspectral imaging device, the image mapping spectrometer (IMS), which acquires full spectral information simultaneously from each pixel in the field without scanning. The IMS is capable of real-time signal capture from multiple fluorophores with high collection efficiency (∼65%) and image acquisition rate (up to 7.2 fps). To demonstrate the capabilities of the IMS in cellular applications, we have combined fluorescent protein (FP)-FRET and [Ca2+]i biosensors to measure simultaneously intracellular cAMP and [Ca2+]i signaling in pancreatic β-cells. Additionally, we have compared quantitatively the IMS detection efficiency with a laser-scanning confocal microscope. PMID:22854044
CHAMP - Camera, Handlens, and Microscope Probe
NASA Technical Reports Server (NTRS)
Mungas, G. S.; Beegle, L. W.; Boynton, J.; Sepulveda, C. A.; Balzer, M. A.; Sobel, H. R.; Fisher, T. A.; Deans, M.; Lee, P.
2005-01-01
CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As an arm-mounted imager, CHAMP supports stereo-imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision range-finding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP is currently designed with a four-position filter wheel so that color and black-and-white images can be obtained over the entire field of view; future designs will increase the number of filter positions to eight. Finally, CHAMP incorporates controlled white and UV illumination so that images can be obtained regardless of sun position and any potential fluorescent species can be detected, allowing the most astrobiologically interesting samples to be identified.
Terahertz Array Receivers with Integrated Antennas
NASA Technical Reports Server (NTRS)
Chattopadhyay, Goutam; Llombart, Nuria; Lee, Choonsup; Jung, Cecile; Lin, Robert; Cooper, Ken B.; Reck, Theodore; Siles, Jose; Schlecht, Erich; Peralta, Alessandro;
2011-01-01
Highly sensitive terahertz heterodyne receivers have mostly been single-pixel. However, there is now a real need for multi-pixel array receivers at these frequencies, driven by science and instrument requirements. In this paper we explore various receiver front-end and antenna architectures for use in multi-pixel integrated arrays at terahertz frequencies. Development of wafer-level integrated terahertz receiver front-ends using advanced semiconductor fabrication technologies has progressed very well over the past few years. Novel stacking of micro-machined silicon wafers, which allows for the 3-dimensional integration of various terahertz receiver components in extremely small packages, has made it possible to design multi-pixel heterodyne arrays. One of the critical technologies for achieving a fully integrated system is an antenna array compatible with the receiver array architecture. In this paper we explore different receiver and antenna architectures for multi-pixel heterodyne and direct detector arrays for various applications such as multi-pixel high resolution spectrometers and imaging radar at terahertz frequencies.
NASA Astrophysics Data System (ADS)
Chan, Heang-Ping; Helvie, Mark A.; Petrick, Nicholas; Sahiner, Berkman; Adler, Dorit D.; Blane, Caroline E.; Joynt, Lynn K.; Paramagul, Chintana; Roubidoux, Marilyn A.; Wilson, Todd E.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.
1999-05-01
A receiver operating characteristic (ROC) experiment was conducted to evaluate the effects of pixel size on the characterization of mammographic microcalcifications. Digital mammograms were obtained by digitizing screen-film mammograms with a laser film scanner. One hundred twelve two-view mammograms with biopsy-proven microcalcifications were digitized at a pixel size of 35 micrometers × 35 micrometers. A region of interest (ROI) containing the microcalcifications was extracted from each image. ROI images with pixel sizes of 70 micrometers, 105 micrometers, and 140 micrometers were derived from the ROI of 35 micrometer pixel size by averaging 2 × 2, 3 × 3, and 4 × 4 neighboring pixels, respectively. The ROI images were printed on film with a laser imager. Seven MQSA-approved radiologists participated as observers. The likelihood of malignancy of the microcalcifications was rated on a 10-point confidence rating scale and analyzed with ROC methodology. The classification accuracy was quantified by the area, Az, under the ROC curve. The statistical significance of the differences in the Az values for different pixel sizes was estimated with the Dorfman-Berbaum-Metz (DBM) method for multi-reader, multi-case ROC data. It was found that five of the seven radiologists demonstrated a higher classification accuracy with the 70 micrometer or 105 micrometer images. The average Az also showed a higher classification accuracy in the range of 70 to 105 micrometer pixel size. However, the differences in Az between different pixel sizes did not achieve statistical significance. The low specificity of image features of microcalcifications and the large interobserver and intraobserver variabilities may have contributed to the relatively weak dependence of classification accuracy on pixel size.
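The derivation of the larger-pixel ROI images described above (averaging 2 × 2, 3 × 3, and 4 × 4 neighbours of the 35 micrometer scan) amounts to simple block averaging; a minimal sketch, with an illustrative function name:

```python
import numpy as np

def coarsen(roi, block):
    """Average block x block neighbouring pixels, as used to derive the
    70/105/140 micrometer images from the 35 micrometer scan (block = 2, 3, 4).
    """
    h, w = roi.shape
    h2, w2 = h - h % block, w - w % block  # trim so the image tiles evenly
    r = roi[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    return r.mean(axis=(1, 3))
```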
Image Segmentation Method Using Fuzzy C Mean Clustering Based on Multi-Objective Optimization
NASA Astrophysics Data System (ADS)
Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li
2018-04-01
Image segmentation is not only one of the hottest topics in digital image processing, but also an important part of computer vision applications. Fuzzy C-means (FCM) clustering is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve this problem, this paper designs a novel fuzzy C-means clustering algorithm based on multi-objective optimization. We add a parameter λ to the fuzzy distance measurement formula, where λ adjusts the weight of the pixel local information. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the clustering centers. Two different experimental results show that the novel fuzzy C-means approach achieves efficient performance and computational time while segmenting images corrupted by different types of noise.
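The λ-weighted local term can be illustrated with a small fuzzy C-means variant. The abstract does not give the exact objective, so the distance d² = |x − c|² + λ·|x̄ − c|² and the matching centre update below follow an FCM_S-style formulation and should be read as an assumption; all names are illustrative.

```python
import numpy as np

def fcm_local(x, xbar, k=2, lam=0.5, m=2.0, iters=50):
    """Fuzzy C-means on pixel intensities with a lambda-weighted local term.

    x:    (n,) pixel values; xbar: (n,) local neighbourhood means.
    Returns the cluster centres and the (n, k) fuzzy membership matrix.
    """
    c = np.quantile(x, np.linspace(0.1, 0.9, k))  # deterministic init
    p = 1.0 / (m - 1.0)
    for _ in range(iters):
        # Modified distance: data term plus lambda-weighted local term.
        d2 = (x[:, None] - c) ** 2 + lam * (xbar[:, None] - c) ** 2
        d2 = np.maximum(d2, 1e-12)
        inv = d2 ** (-p)
        u = inv / inv.sum(axis=1, keepdims=True)  # memberships sum to 1
        w = u ** m
        # Centre update consistent with the modified objective.
        c = (w * (x[:, None] + lam * xbar[:, None])).sum(0) / ((1 + lam) * w.sum(0))
    return c, u
```

Larger λ pulls each pixel's assignment toward its neighbourhood mean, which is what gives the method its noise robustness.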
Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki
2014-01-01
The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enable diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels exceeding 100 Hounsfield units (HU) in both the brain and bone images took the CT values of the bone images, while all other pixels took the CT values of the brain images. Three radiologists compared the improved multi-kernel images with the bone images. The improved multi-kernel images and the brain images appeared identical at brain window settings. All three radiologists agreed that the improved multi-kernel images at bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and fewer stored images can be expected.
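The combination rule above has a very direct implementation; a minimal sketch assuming co-registered brain- and bone-kernel reconstructions in Hounsfield units (the function name is illustrative):

```python
import numpy as np

def merge_kernels(brain, bone, threshold=100):
    """Combine brain- and bone-kernel CT reconstructions: pixels above
    `threshold` HU in BOTH images take the bone-kernel value; all other
    pixels take the brain-kernel value."""
    both_high = (brain > threshold) & (bone > threshold)
    return np.where(both_high, bone, brain)
```

Because soft tissue rarely exceeds 100 HU in both reconstructions, the merged image looks like the brain image at brain window settings while preserving bone-kernel detail in bone.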
Wave analysis of a plenoptic system and its applications
NASA Astrophysics Data System (ADS)
Shroff, Sapna A.; Berkner, Kathrin
2013-03-01
Traditional imaging systems directly image a 2D object plane onto the sensor. Plenoptic imaging systems contain a lenslet array at the conventional image plane and a sensor at the back focal plane of the lenslet array. In this configuration the data captured at the sensor is not a direct image of the object. Each lenslet effectively images the aperture of the main imaging lens at the sensor. Therefore the sensor data retains angular light-field information which can be used for a posteriori digital computation of multi-angle images and axially refocused images. If a filter array, containing spectral filters or neutral density or polarization filters, is placed at the pupil aperture of the main imaging lens, then each lenslet images the filters onto the sensor. This enables the digital separation of multiple filter modalities, giving single-snapshot, multi-modal images. Interest in plenoptic systems is increasing due to the diversity of their potential applications. As the application space moves towards microscopes and other complex systems, and as pixel sizes become smaller, the consideration of diffraction effects in these systems becomes increasingly important. We discuss a plenoptic system and its wave propagation analysis for both coherent and incoherent imaging. We simulate a system response using our analysis and discuss various applications of the system response pertaining to plenoptic system design, implementation and calibration.
NASA Astrophysics Data System (ADS)
Zhang, Ka; Sheng, Yehua; Wang, Meizhen; Fu, Suxia
2018-05-01
The traditional multi-view vertical line locus (TMVLL) matching method is an object-space-based method that is commonly used to directly acquire spatial 3D coordinates of ground objects in photogrammetry. However, the TMVLL method can only obtain one elevation and lacks an accurate means of validating the matching results. In this paper, we propose an enhanced multi-view vertical line locus (EMVLL) matching algorithm based on positioning consistency for aerial or space images. The algorithm involves three components: confirming candidate pixels of the ground primitive in the base image, multi-view image matching based on the object space constraints for all candidate pixels, and validating the consistency of the object space coordinates with the multi-view matching result. The proposed algorithm was tested using actual aerial images and space images. Experimental results show that the EMVLL method successfully solves the problems associated with the TMVLL method, and has greater reliability, accuracy and computing efficiency.
NASA Astrophysics Data System (ADS)
Salama, Paul
2008-02-01
Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately, deep tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images, it is sometimes necessary to first enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising the image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution, it is assumed that the noise is Poisson distributed, that the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and, since the observed data also assume finite values because of low photon counts, that the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function under these assumptions.
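The per-pixel MAP rule implied above, a Poisson likelihood weighted by a finitely supported prior, can be sketched directly. Estimating the prior itself by constrained least squares, as the paper does, is not reproduced; the function and argument names are illustrative.

```python
import numpy as np
from math import exp, factorial

def map_pixel(y, values, pmf):
    """MAP estimate of a pixel's true intensity from its Poisson count y.

    values: candidate non-negative true pixel values.
    pmf:    their prior probabilities, e.g. taken from the histogram of
            the acquired pixel values (which may sum to less than one).
    """
    # Posterior up to a constant: prior times Poisson(y | v).
    post = [p * (v ** y) * exp(-v) / factorial(y) for v, p in zip(values, pmf)]
    return values[int(np.argmax(post))]
```

Applied independently to each pixel, this shrinks noisy low counts toward prior-plausible intensity levels.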
A detailed comparison of single-camera light-field PIV and tomographic PIV
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.
2018-03-01
This paper presents a comprehensive comparison of single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examine the differences between the two techniques by varying key parameters such as the pixel to microlens ratio (PMR), the light-field camera to Tomo-camera pixel ratio (LTPR), particle seeding density, and tomographic camera number. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.
Fast processing of microscopic images using object-based extended depth of field.
Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Pannarut, Montri; Shaw, Philip J; Tongsima, Sissades
2016-12-22
Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie at different depths of field, necessitating the capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depth of field (OEDoF) to address this issue. The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast, followed by identification of foreground pixels. A composite image is constructed using only these foreground pixels, which dramatically reduces the computational time. We used 250 images obtained from 45 specimens of confirmed malaria infections to test our proposed algorithm. The resulting composite images with all in-focus objects were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required a quarter of the processing time.
This work presents a modification of the extended depth of field approach for efficiently enhancing microscopic images. The selective object processing scheme used in OEDoF can significantly reduce the overall processing time while maintaining the clarity of important image features. The empirical results from parasite-infected red blood cell images show that the proposed method efficiently and effectively produces in-focus composite images. With its speed improvement, OEDoF is suitable for processing large numbers of microscope images, e.g., as required for medical diagnosis.
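The core of any focal-stack merge is a per-pixel focus measure followed by an argmax across slices. The sketch below uses a simple Laplacian-magnitude contrast measure with wrap-around borders; it illustrates the merging step only, not the OEDoF pipeline's color conversion or foreground masking.

```python
import numpy as np

def focus_stack(stack):
    """Merge a focal stack (n_slices, H, W) into one composite by keeping,
    at each pixel, the slice with the strongest local contrast."""
    def contrast(img):
        # discrete Laplacian magnitude as a focus measure (periodic borders)
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        return np.abs(lap)
    focus = np.stack([contrast(s) for s in stack])
    best = np.argmax(focus, axis=0)                 # (H, W) best-slice index map
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Restricting `best` to foreground pixels, as OEDoF does, is what yields the reported speed-up: background pixels never enter the per-slice contrast computation.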
NASA Astrophysics Data System (ADS)
McMackin, Lenore; Herman, Matthew A.; Weston, Tyler
2016-02-01
We present the design of a multi-spectral imager built using the architecture of the single-pixel camera. The architecture is enabled by the novel sampling theory of compressive sensing implemented optically using the Texas Instruments DLP™ micro-mirror array. The array not only implements spatial modulation necessary for compressive imaging but also provides unique diffractive spectral features that result in a multi-spectral, high-spatial resolution imager design. The new camera design provides multi-spectral imagery in a wavelength range that extends from the visible to the shortwave infrared without reduction in spatial resolution. In addition to the compressive imaging spectrometer design, we present a diffractive model of the architecture that allows us to predict a variety of detailed functional spatial and spectral design features. We present modeling results, architectural design and experimental results that prove the concept.
Demosaiced pixel super-resolution for multiplexed holographic color imaging
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2016-01-01
To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual sets of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging under simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. The D-PSR method is broadly applicable to holographic microscopy applications where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242
Frequency division multiplexed multi-color fluorescence microscope system
NASA Astrophysics Data System (ADS)
Le, Vu Nam; Yang, Huai Dong; Zhang, Si Chun; Zhang, Xin Rong; Jin, Guo Fan
2017-10-01
A grayscale camera can only record intensity images, whereas multicolor imaging captures the color information needed to distinguish sample structures that have the same shape but different colors. In fluorescence microscopy, current methods of multicolor imaging are flawed: they reduce the efficiency of fluorescence imaging, lower the effective sampling rate of the CCD, and so on. In this paper, we propose a novel multicolor fluorescence microscopy imaging method based on frequency division multiplexing (FDM), which modulates the excitation lights and demodulates the fluorescence signal in the frequency domain. The method uses periodic functions of different frequencies to modulate the amplitude of each excitation light, and then combines these beams for illumination in a fluorescence microscopy imaging system. The imaging system records a multicolor fluorescence image with a grayscale camera. During data processing, the signal obtained by each pixel of the camera is processed with a discrete Fourier transform, decomposed by color in the frequency domain, and then transformed back with an inverse discrete Fourier transform. After applying this process to the signals from all pixels, monochrome images of each color on the image plane are obtained and a multicolor image is thereby acquired. Based on this method, we constructed a two-color fluorescence microscope system with excitation wavelengths of 488 nm and 639 nm. By using this system to observe the linear movement of two kinds of fluorescent microspheres, we obtained, after data processing, a two-color fluorescence video consistent with the original scene. This experiment shows that dynamic phenomena in multicolor fluorescent biological samples can be observed by this method.
Compared with current methods, this approach acquires the image signals of each color simultaneously, and the color video's frame rate equals the frame rate of the camera. The optical system is simpler and needs no extra color-separation element. In addition, the method effectively filters out ambient light and other light signals that are not subject to the modulation.
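The per-pixel demodulation step can be sketched in a few lines. This is a simplified illustration assuming sinusoidal amplitude modulation at known frequencies that fall on exact DFT bins; the channel amplitude is read directly off the corresponding frequency bin of the pixel's time trace.

```python
import numpy as np

def demultiplex_pixel(trace, fs, channel_freqs):
    """Recover each colour channel's amplitude from one pixel's time trace by
    reading the DFT bin at that channel's modulation frequency.
    trace: samples at rate fs; channel_freqs: modulation frequency per channel."""
    n = len(trace)
    spectrum = np.fft.rfft(trace) / n
    # factor 2 converts a one-sided bin magnitude back to a cosine amplitude
    return [2.0 * abs(spectrum[int(round(f * n / fs))]) for f in channel_freqs]
```

Running this over every pixel of the recorded frame sequence yields one monochrome image per modulation frequency, which is the paper's multicolor decomposition.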
Spatial-spectral blood cell classification with microscopic hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ran, Qiong; Chang, Lan; Li, Wei; Xu, Xiaofeng
2017-10-01
Microscopic hyperspectral images provide a new way to examine blood cells, and the hyperspectral imagery can greatly facilitate the classification of different blood cells. In this paper, microscopic hyperspectral images are acquired by coupling a hyperspectral imager to a microscope and are then used to test blood cell classification. For combined use of the spectral and spatial information provided by hyperspectral images, a spatial-spectral classification method is derived from the classical extreme learning machine (ELM) by integrating spatial context into the image classification task with a Markov random field (MRF) model. Comparisons are made among the ELM, ELM-MRF, support vector machine (SVM) and SVM-MRF methods. Results show that the spatial-spectral classification methods (ELM-MRF, SVM-MRF) perform better than the pixel-based methods (ELM, SVM), and the proposed ELM-MRF has higher precision and localizes cells more accurately.
Polarimetric analysis of a CdZnTe spectro-imager under multi-pixel irradiation conditions
NASA Astrophysics Data System (ADS)
Pinto, M.; da Silva, R. M. Curado; Maia, J. M.; Simões, N.; Marques, J.; Pereira, L.; Trindade, A. M. F.; Caroli, E.; Auricchio, N.; Stephen, J. B.; Gonçalves, P.
2016-12-01
So far, polarimetry in high-energy astrophysics has been insufficiently explored due to the complexity of the required detection, electronic and signal processing systems. However, its importance is today largely recognized by the astrophysical community, and the next generation of high-energy space instruments will therefore certainly provide polarimetric observations contemporaneously with spectroscopy and imaging. We have been participating in high-energy observatory proposals submitted to ESA Cosmic Vision calls, such as GRI (Gamma-Ray Imager), DUAL and ASTROGAM, where the main instrument was a spectro-imager with polarimetric capabilities. More recently, the H2020 AHEAD project was launched with the objective of promoting more coherent and mature future high-energy space mission proposals. In this context of high-energy proposal development, we have tested a CdZnTe detection plane prototype polarimeter under a partially polarized gamma-ray beam generated from an aluminum target irradiated by a ²²Na (511 keV) radioactive source. The polarized beam cross section was 1 cm², allowing the irradiation of a wide multi-pixelated area where all the pixels operate simultaneously as scatterers and absorbers. The methods implemented to analyze such multi-pixel irradiation are similar to those required to analyze a spectro-imager polarimeter operating in space, since celestial source photons should irradiate its full pixelated area. Correction methods to mitigate systematic errors inherent to CdZnTe and to the experimental conditions were also implemented. The polarization level (~40%) and the polarization angle (precision of ±5° up to ±9°) obtained under multi-pixel irradiation conditions are presented and compared with simulated data.
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, in which each pixel that corresponds to an edge pixel has a value equal to the value of a pixel of the image array selected in response to that edge pixel, and each pixel that does not correspond to an edge pixel has a value that is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques are also described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each image according to the approach selected, and transmitting each image as compressed, together with an indication of the approach selected for it.
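The filling step, solving Laplace's equation with edge pixels held fixed, can be sketched with plain Jacobi relaxation. The patent uses a multi-grid solver, which converges far faster; this simplified version uses periodic borders for brevity.

```python
import numpy as np

def laplace_fill(values, fixed, iters=5000):
    """Fill non-fixed pixels by relaxing toward the 4-neighbour average
    (Jacobi iteration for Laplace's equation); 'fixed' pixels keep their
    values throughout and act as boundary conditions."""
    u = values.astype(float).copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(fixed, u, avg)   # only non-fixed pixels are updated
    return u
```

Subtracting the filled array from the original then yields the low-energy difference array that compresses well.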
Microscopy with multimode fibers
NASA Astrophysics Data System (ADS)
Moser, Christophe; Papadopoulos, Ioannis; Farahi, Salma; Psaltis, Demetri
2013-04-01
Microscopes are usually thought of as comprising imaging elements such as objectives and eyepiece lenses. A different type of microscope, used for endoscopy, consists of waveguiding elements such as fiber bundles, in which each fiber transports the light corresponding to one pixel of the image. Recently, a new type of microscope has emerged that exploits the large number of propagating modes in a single multimode fiber. We have successfully produced fluorescence images of neural cells with sub-micrometer resolution through a 200-micrometer-core multimode fiber. Imaging is achieved by using digital phase conjugation to reproduce a focal spot at the tip of the multimode fiber. The image is formed by scanning the focal spot digitally and collecting the fluorescence point by point.
SD-SEM: sparse-dense correspondence for 3D reconstruction of microscopic samples.
Baghaie, Ahmadreza; Tafti, Ahmad P; Owen, Heather A; D'Souza, Roshan M; Yu, Zeyun
2017-06-01
Scanning electron microscopy (SEM) imaging has been a principal component of many studies in biomedical, mechanical, and materials sciences since its emergence. Despite the high resolution of captured images, they remain two-dimensional (2D). In this work, a novel framework using sparse-dense correspondence is introduced and investigated for 3D reconstruction of stereo SEM images. SEM micrographs from microscopic samples are captured by tilting the specimen stage by a known angle. The pair of SEM micrographs is then rectified using sparse scale invariant feature transform (SIFT) features/descriptors and a contrario RANSAC for matching outlier removal to ensure a gross horizontal displacement between corresponding points. This is followed by dense correspondence estimation using dense SIFT descriptors and employing a factor graph representation of the energy minimization functional and loopy belief propagation (LBP) as means of optimization. Given the pixel-by-pixel correspondence and the tilt angle of the specimen stage during the acquisition of micrographs, depth can be recovered. Extensive tests reveal the strength of the proposed method for high-quality reconstruction of microscopic samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sparse sampling and reconstruction for electron and scanning probe microscope imaging
Anderson, Hyrum; Helms, Jovana; Wheeler, Jason W.; Larson, Kurt W.; Rohrer, Brandon R.
2015-07-28
Systems and methods for conducting electron or scanning probe microscopy are provided herein. In a general embodiment, the systems and methods for conducting electron or scanning probe microscopy with an undersampled data set include: driving an electron beam or probe to scan across a sample and visit a subset of pixel locations of the sample that are randomly or pseudo-randomly designated; determining actual pixel locations on the sample that are visited by the electron beam or probe; and processing data collected by detectors from the visits of the electron beam or probe at the actual pixel locations and recovering a reconstructed image of the sample.
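The acquisition side of the claim, visiting a pseudo-random subset of pixel locations, is easy to sketch. The reconstruction below is a deliberately naive nearest-sample fill standing in for the compressive-sensing recovery the patent describes; it only illustrates the undersampled-scan data flow.

```python
import numpy as np

def scan_and_reconstruct(sample, fraction, seed=0):
    """Visit a pseudo-random subset of pixel locations of 'sample', then
    reconstruct by giving every pixel the value of its nearest visited
    location (a naive stand-in for compressive-sensing recovery)."""
    h, w = sample.shape
    rng = np.random.default_rng(seed)
    idx = rng.choice(h * w, size=int(fraction * h * w), replace=False)
    ys, xs = np.unravel_index(idx, (h, w))       # the pseudo-random visit list
    recon = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            nearest = np.argmin((ys - y) ** 2 + (xs - x) ** 2)
            recon[y, x] = sample[ys[nearest], xs[nearest]]
    return recon
```

In the patented system the "actual" visited locations are measured rather than assumed, and a sparsity-promoting solver replaces the nearest-sample fill.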
The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design
NASA Astrophysics Data System (ADS)
Riza, Nabeel A.
2017-02-01
Multi-pixel imaging devices such as CCD, CMOS and focal plane array (FPA) photo-sensors dominate the imaging world. These photo-detector array (PDA) devices certainly have their merits, including increasingly high pixel counts and shrinking pixel sizes; nevertheless, they are hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full-well capacity, signal-to-noise ratio, sensitivity, spectral flexibility and, in some cases, imager response time. The recently invented Coded Access Optical Sensor (CAOS) camera platform works in unison with current PDA technology to counter the fundamental limitations of PDA-based imagers while providing sufficiently high imaging spatial resolution and pixel counts. Engineering the CAOS camera platform with, for example, the Texas Instruments (TI) Digital Micromirror Device (DMD) ushers in a paradigm change in advanced imager design, particularly for extreme dynamic range applications.
A Multi-Modality CMOS Sensor Array for Cell-Based Assay and Drug Screening.
Chi, Taiyun; Park, Jong Seok; Butts, Jessica C; Hookway, Tracy A; Su, Amy; Zhu, Chengjie; Styczynski, Mark P; McDevitt, Todd C; Wang, Hua
2015-12-01
In this paper, we present a fully integrated multi-modality CMOS cellular sensor array with four sensing modalities to characterize different cell physiological responses, including extracellular voltage recording, cellular impedance mapping, optical detection with shadow imaging and bioluminescence sensing, and thermal monitoring. The sensor array consists of nine parallel pixel groups and nine corresponding signal conditioning blocks. Each pixel group comprises one temperature sensor and 16 tri-modality sensor pixels, while each tri-modality sensor pixel can be independently configured for extracellular voltage recording, cellular impedance measurement (voltage excitation/current sensing), and optical detection. This sensor array supports multi-modality cellular sensing at the pixel level, which enables holistic cell characterization and joint-modality physiological monitoring on the same cellular sample with a pixel resolution of 80 μm × 100 μm. Comprehensive biological experiments with different living cell samples demonstrate the functionality and benefit of the proposed multi-modality sensing in cell-based assay and drug screening.
NASA Technical Reports Server (NTRS)
Skakun, Sergii; Roger, Jean-Claude; Vermote, Eric F.; Masek, Jeffrey G.; Justice, Christopher O.
2017-01-01
This study investigates misregistration issues between Landsat-8/OLI and Sentinel-2A/MSI at 30 m resolution, and between multi-temporal Sentinel-2A images at 10 m resolution, using a phase correlation approach and multiple transformation functions. Co-registration of 45 Landsat-8 to Sentinel-2A pairs and 37 Sentinel-2A to Sentinel-2A pairs was analyzed. Phase correlation proved to be a robust approach that allowed us to identify hundreds to thousands of control points on images acquired more than 100 days apart. Overall, misregistration of up to 1.6 pixels at 30 m resolution between Landsat-8 and Sentinel-2A images, and of 1.2 pixels and 2.8 pixels at 10 m resolution between multi-temporal Sentinel-2A images from the same and different orbits, respectively, was observed. The non-linear Random Forest regression used for constructing the mapping function showed the best results in terms of root mean square error (RMSE), yielding an average RMSE of 0.07±0.02 pixels at 30 m resolution, and 0.09±0.05 and 0.15±0.06 pixels at 10 m resolution for the same and adjacent Sentinel-2A orbits, respectively, over multiple tiles and multiple conditions. A simpler 1st order polynomial function (affine transformation) yielded an RMSE of 0.08±0.02 pixels at 30 m resolution and 0.12±0.06 (same Sentinel-2A orbits) and 0.20±0.09 (adjacent orbits) pixels at 10 m resolution.
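The phase correlation at the heart of this control-point matching can be sketched for the integer-pixel case (the study refines to sub-pixel precision, which this minimal version omits): normalize the cross-power spectrum to keep only phase, and the inverse FFT peaks at the translation.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the circular shift (dy, dx) such that b ≈ np.roll(a, (dy, dx))."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    # whiten: keep phase only, so the inverse transform is a sharp delta
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    size = np.array(a.shape)
    wrap = peak > size // 2
    peak[wrap] -= size[wrap]          # map upper-half peaks to negative shifts
    return tuple(int(p) for p in peak)
```

Because only the phase is retained, the estimate is robust to the large radiometric differences between images acquired months apart, which is why the study could match scenes more than 100 days apart.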
NASA Astrophysics Data System (ADS)
Steinbach, G.; Pawlak, K.; Pomozi, I.; Tóth, E. A.; Molnár, A.; Matkó, J.; Garab, G.
2014-03-01
Elucidation of the molecular architecture of complex, highly organized molecular macro-assemblies is an important, basic task for biology. Differential polarization (DP) measurements, such as linear (LD) and circular dichroism (CD) or the anisotropy of the fluorescence emission (r), which can be carried out in a dichrograph or spectrofluorimeter, respectively, carry unique, spatially averaged information about the molecular organization of the sample. For inhomogeneous samples, e.g. cells and tissues, measurements on the macroscopic scale are not satisfactory, and in some cases not feasible, and thus microscopic techniques must be applied. The microscopic DP-imaging technique, when based on a confocal laser scanning microscope (LSM), allows pixel-by-pixel mapping of the anisotropy of a sample in 2D and 3D. The first DP-LSM configuration, which, in fluorescence mode, allowed confocal imaging of different DP quantities in real-time without interfering with the 'conventional' imaging, was built on a Zeiss LSM410. It was demonstrated to be capable of determining non-confocally the linear birefringence (LB) or LD of a sample and, confocally, its FDLD (fluorescence detected LD), the degree of polarization (P) and the anisotropy of the fluorescence emission (r), following polarized and non-polarized excitation, respectively (Steinbach et al 2009 Acta Histochem. 111 316-25). This DP-LSM configuration, however, cannot simply be adopted to new-generation microscopes with considerably more compact structures. As shown here for an Olympus FV500, we designed an easy-to-install DP attachment to determine LB, LD, FDLD and r in new-generation confocal microscopes; the attachment can in principle be complemented with a P-imaging unit, but is tailored specifically to the brand and type of LSM.
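The anisotropy r mapped pixel-by-pixel by such a DP-LSM is the standard steady-state fluorescence anisotropy, computed from the parallel- and perpendicular-polarized detection channels; a minimal per-pixel sketch (ignoring the instrument G-factor correction a real system would apply):

```python
import numpy as np

def anisotropy_map(i_par, i_perp):
    """Pixel-by-pixel fluorescence anisotropy:
    r = (I_par - I_perp) / (I_par + 2 * I_perp),
    with zero assigned where the total intensity vanishes."""
    total = i_par + 2.0 * i_perp
    safe = np.where(total > 0, total, 1.0)          # avoid division by zero
    return np.where(total > 0, (i_par - i_perp) / safe, 0.0)
```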
A smartphone-based chip-scale microscope using ambient illumination.
Lee, Seung Ah; Yang, Changhuei
2014-08-21
Portable chip-scale microscopy devices can potentially address various imaging needs in mobile healthcare and environmental monitoring. Here, we demonstrate the adaptation of a smartphone's camera to function as a compact lensless microscope. Unlike other chip-scale microscopy schemes, this method uses ambient illumination as its light source and does not require the incorporation of a dedicated light source. The method is based on the shadow imaging technique where the sample is placed on the surface of the image sensor, which captures direct shadow images under illumination. To improve the image resolution beyond the pixel size, we perform pixel super-resolution reconstruction with multiple images at different angles of illumination, which are captured while the user is manually tilting the device around any ambient light source, such as the sun or a lamp. The lensless imaging scheme allows for sub-micron resolution imaging over an ultra-wide field-of-view (FOV). Image acquisition and reconstruction are performed on the device using a custom-built Android application, constructing a stand-alone imaging device for field applications. We discuss the construction of the device using a commercial smartphone and demonstrate the imaging capabilities of our system.
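The pixel super-resolution reconstruction can be illustrated with a basic shift-and-add sketch. This is a simplification of the full reconstruction: it assumes the sub-pixel shifts between frames are already known (the device estimates them from the manual tilting), and it simply places each low-resolution frame onto a finer grid and averages overlapping samples.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Shift-and-add pixel super-resolution.
    frames: list of (h, w) low-res images; shifts: known sub-pixel (dy, dx)
    per frame, in low-res pixel units; factor: integer upsampling factor."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(frames, shifts):
        ys = (np.arange(h)[:, None] * factor + round(dy * factor)) % (h * factor)
        xs = (np.arange(w)[None, :] * factor + round(dx * factor)) % (w * factor)
        acc[ys, xs] += img
        cnt[ys, xs] += 1.0
    return acc / np.maximum(cnt, 1.0)   # average where samples overlap
```

With four frames shifted by half a pixel in each direction and `factor=2`, the frames interleave to tile the fine grid exactly, which is the idea behind recovering detail below the sensor's pixel size.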
NASA Astrophysics Data System (ADS)
Lei, Sen; Zou, Zhengxia; Liu, Dunge; Xia, Zhenghuan; Shi, Zhenwei
2018-06-01
Sea-land segmentation is a key step in the information processing of ocean remote sensing images. Traditional sea-land segmentation algorithms ignore the local-similarity prior of sea and land, and thus fail in complex scenarios. In this paper, we propose a new sea-land segmentation method for infrared remote sensing images based on superpixels and multi-scale features. Considering the connectivity and local similarity of sea and land, we cast the sea-land segmentation task in terms of superpixels rather than pixels, so that similar pixels are clustered and local similarity is exploited. Moreover, multi-scale features are elaborately designed, comprising a gray histogram and multi-scale total variation. Experimental results on the infrared bands of Landsat-8 satellite images demonstrate that the proposed method obtains more accurate and more robust sea-land segmentation results than traditional algorithms.
Kiani, M A; Sim, K S; Nia, M E; Tso, C P
2015-05-01
A new noise reduction technique for scanning electron microscope (SEM) images is developed, based on cubic spline interpolation with Savitzky-Golay smoothing and a weighted least squares error filter. A diversity of sample images was captured, and the performance is found to be better than that of the moving average and standard median filters with respect to eliminating noise. The technique can be implemented efficiently on real-time SEM images, with all the data required for processing obtained from a single image. We apply the combined technique to single-image signal-to-noise ratio estimation and noise reduction for an SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimate of the corresponding original autocorrelation. In test cases involving different images, the efficiency of the developed noise reduction filter proved significantly better than that of the other methods. Noise can be reduced efficiently from real-time SEM images with an appropriate choice of scan rate, without generating corruption or increasing scanning time. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
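The autocorrelation idea above can be shown in miniature. This sketch simplifies the paper's estimator by extrapolating the noise-free autocovariance at zero lag from the lag-1 value: because white noise is uncorrelated pixel-to-pixel, it contributes only to the zero-lag term, so the difference estimates the noise variance.

```python
import numpy as np

def noise_variance(img):
    """Estimate white-noise variance as zero-lag autocovariance minus the
    mean lag-1 autocovariance (signal assumed correlated over a few pixels,
    noise assumed uncorrelated pixel to pixel)."""
    x = img.astype(float) - img.mean()
    c0 = (x * x).mean()                              # signal + noise power
    c1 = 0.5 * ((x[:, 1:] * x[:, :-1]).mean()        # horizontal lag-1
                + (x[1:, :] * x[:-1, :]).mean())     # vertical lag-1
    return max(c0 - c1, 0.0)
```

Dividing the signal part `c1` by this estimate gives the single-image SNR figure the combined technique needs before filtering.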
From Planetary Imaging to Enzyme Screening
NASA Technical Reports Server (NTRS)
2006-01-01
Based in San Diego, KAIROS Scientific develops molecular biology methods, instrumentation, and computer algorithms to solve challenging problems in the medical and chemical industries. The company's pioneering efforts in digital imaging spectroscopy (DIS) enable researchers to obtain spectral and/or time-dependent information for each pixel or group of pixels in a two-dimensional scene. In addition to having Yang's NASA experience at its foundation, KAIROS Scientific was established with the support of many government grants and contracts. Its first was a NASA Small Business Innovation Research (SBIR) grant, from Ames Research Center, to develop HIRIM, a high-resolution imaging microscope embodying both novel hardware and software that can be used to simultaneously acquire hundreds of individual absorbance spectra from microscopic features. Using HIRIM's graphical user interface, MicroDIS, scientists and engineers have a revolutionary new tool that enables them to point to a feature in an image and recall its associated spectrum in real time.
Cluster secondary ion mass spectrometry microscope mode mass spectrometry imaging.
Kiss, András; Smith, Donald F; Jungmann, Julia H; Heeren, Ron M A
2013-12-30
Microscope mode imaging for secondary ion mass spectrometry is a technique with the promise of simultaneous high spatial resolution and high-speed imaging of biomolecules from complex surfaces. Technological developments such as new position-sensitive detectors, in combination with polyatomic primary ion sources, are required to exploit the full potential of microscope mode mass spectrometry imaging, i.e. to efficiently push the limits of ultra-high spatial resolution, sample throughput and sensitivity. In this work, a C60 primary source was combined with a commercial mass microscope for microscope mode secondary ion mass spectrometry imaging. The detector setup is a pixelated detector from the Medipix/Timepix family with high-voltage post-acceleration capabilities. The system's mass spectral and imaging performance is tested with various benchmark samples and thin tissue sections. The high secondary ion yield (with respect to 'traditional' monatomic primary ion sources) of the C60 primary ion source and the increased sensitivity of the high voltage detector setup improve microscope mode secondary ion mass spectrometry imaging. The analysis time and the signal-to-noise ratio are improved compared with other microscope mode imaging systems, all at high spatial resolution. We have demonstrated the unique capabilities of a C60 ion microscope with a Timepix detector for high spatial resolution microscope mode secondary ion mass spectrometry imaging. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Mir, J. A.; Plackett, R.; Shipsey, I.; dos Santos, J. M. F.
2017-11-01
Hybrid pixel sensor technology such as the Medipix3 represents a unique tool for electron imaging. We have investigated its performance as a direct imaging detector using a transmission electron microscope (TEM) incorporating a Medipix3 detector with a 300 μm thick silicon layer comprising 256×256 pixels at 55 μm pixel pitch. We present results taken with the Medipix3 in Single Pixel Mode (SPM) at electron beam energies in the range 60-200 keV. The Modulation Transfer Function (MTF) and the Detective Quantum Efficiency (DQE) were measured. At a given beam energy, the MTF data were acquired using the established knife-edge technique. Similarly, the experimental data required to determine the DQE were obtained by acquiring a stack of images of a focused beam and of free space (flat field) to determine the Noise Power Spectrum (NPS).
Field-portable lensfree tomographic microscope.
Isikman, Serhan O; Bishara, Waheb; Sikora, Uzair; Yaglidere, Oguzhan; Yeah, John; Ozcan, Aydogan
2011-07-07
We present a field-portable lensfree tomographic microscope, which can achieve sectional imaging of a large volume (∼20 mm³) on a chip with an axial resolution of <7 μm. In this compact tomographic imaging platform (weighing only ∼110 grams), 24 light-emitting diodes (LEDs) that are each butt-coupled to a fibre-optic waveguide are controlled through a cost-effective micro-processor to sequentially illuminate the sample from different angles to record lensfree holograms of the sample that is placed on the top of a digital sensor array. In order to generate pixel super-resolved (SR) lensfree holograms and hence digitally improve the achievable lateral resolution, multiple sub-pixel shifted holograms are recorded at each illumination angle by electromagnetically actuating the fibre-optic waveguides using compact coils and magnets. These SR projection holograms obtained over an angular range of ±50° are rapidly reconstructed to yield projection images of the sample, which can then be back-projected to compute tomograms of the objects on the sensor-chip. The performance of this compact and light-weight lensfree tomographic microscope is validated by imaging micro-beads of different dimensions as well as a Hymenolepis nana egg, which is an infectious parasitic flatworm. Achieving a decent three-dimensional spatial resolution, this field-portable on-chip optical tomographic microscope might provide a useful toolset for telemedicine and high-throughput imaging applications in resource-poor settings. This journal is © The Royal Society of Chemistry 2011
Note: A three-dimensional calibration device for the confocal microscope.
Jensen, K E; Weitz, D A; Spaepen, F
2013-01-01
Modern confocal microscopes enable high-precision measurement in three dimensions by collecting stacks of 2D (x-y) images that can be assembled digitally into a 3D image. It is difficult, however, to ensure position accuracy, particularly along the optical (z) axis where scanning is performed by a different physical mechanism than in x-y. We describe a simple device to calibrate simultaneously the x, y, and z pixel-to-micrometer conversion factors for a confocal microscope. By taking a known 2D pattern and positioning it at a precise angle with respect to the microscope axes, we created a 3D reference standard. The device is straightforward to construct and easy to use.
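The tilted-pattern idea above yields the z calibration from simple geometry: a flat pattern tilted by a known angle places each feature at a height proportional to its in-plane position, so regressing focus-slice index against known lateral position gives micrometres per z-step. A hypothetical worked example (the numbers and function are illustrative, not the authors' device):

```python
import numpy as np

def z_calibration(feature_x_um, feature_slice, tilt_deg):
    """Micrometres per z-slice from a tilted planar reference.

    A 2D pattern tilted by `tilt_deg` about the y axis places each feature
    at height z = x * tan(tilt). Regressing that height against the stack
    slice in which each feature comes into focus (x already calibrated in
    micrometres) yields the z conversion factor as the slope.
    """
    z_um = np.asarray(feature_x_um, float) * np.tan(np.radians(tilt_deg))
    slope, _ = np.polyfit(np.asarray(feature_slice, float), z_um, 1)
    return slope  # micrometres per z-slice

# Hypothetical data: features spaced 10 um apart on a 30-degree tilt,
# observed to come into focus 20 z-slices apart.
x = np.array([0.0, 10.0, 20.0, 30.0])
slices = np.array([0.0, 20.0, 40.0, 60.0])
um_per_slice = z_calibration(x, slices, 30.0)
```

Here 10 μm of lateral spacing at 30° corresponds to 10·tan(30°) ≈ 5.77 μm of height over 20 slices, i.e. about 0.289 μm per slice.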
High-resolution confocal Raman microscopy using pixel reassignment.
Roider, Clemens; Ritsch-Marte, Monika; Jesacher, Alexander
2016-08-15
We present a practical modification of fiber-coupled confocal Raman scanning microscopes that is able to provide high confocal resolution in conjunction with high light collection efficiency. For this purpose, the single detection fiber is replaced by a hexagonal lenslet array in combination with a hexagonally packed round-to-linear multimode fiber bundle. A multiline detector is used to collect individual Raman spectra for each fiber. Data post-processing based on pixel reassignment allows one to improve the lateral resolution by up to 41% compared to a single fiber of equal light collection efficiency. We present results from an experimental implementation featuring seven collection fibers, yielding a resolution improvement of about 30%. We believe that our implementation represents an attractive upgrade for existing confocal Raman microscopes that employ multi-line detectors.
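Pixel reassignment rests on the observation that light collected by a detection fibre displaced by d from the optical axis most likely originated halfway between excitation and detection spots, so each fibre's image is shifted by half its offset before summation. A minimal integer-pixel sketch of that summation step (a real implementation would interpolate sub-pixel shifts; names are ours):

```python
import numpy as np

def pixel_reassign(images, offsets):
    """Sum per-fibre images after shifting each by half its offset.

    In pixel reassignment, the image from a fibre displaced by (dy, dx)
    is shifted back by (dy/2, dx/2) before summation, sharpening the
    result relative to a plain sum. Integer shifts via np.roll here.
    """
    out = np.zeros_like(images[0], dtype=float)
    for img, (dy, dx) in zip(images, offsets):
        out += np.roll(np.roll(img, -(dy // 2), axis=0), -(dx // 2), axis=1)
    return out

# Synthetic check: a point source seen by three fibres; each raw fibre
# image peaks half a fibre-offset away from the true position (8, 8).
imgs, offs = [], [(0, 0), (0, 2), (2, 0)]
for dy, dx in offs:
    im = np.zeros((16, 16))
    im[8 + dy // 2, 8 + dx // 2] = 1.0
    imgs.append(im)
summed = pixel_reassign(imgs, offs)
```

After reassignment all three contributions land on the same pixel, which is the mechanism behind the resolution gain reported above.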
Correction of image drift and distortion in a scanning electron microscopy.
Jin, P; Li, X
2015-12-01
Continuous research on small-scale mechanical structures and systems has created strong demand for ultrafine deformation and strain measurements. Conventional optical microscopes cannot meet such requirements owing to their lower spatial resolution. The high-resolution scanning electron microscope has therefore become the preferred system for high-spatial-resolution imaging and measurement. However, scanning electron microscope images are usually contaminated by distortion and drift aberrations, which cause serious errors in precise imaging and measurement of tiny structures. This paper develops a new method to correct the drift and distortion aberrations of scanning electron microscope images, and evaluates the effect of the correction by comparing corrected images with a scanning electron microscope image of a standard sample. The drift correction is based on an interpolation scheme, in which a series of images is captured at one location of the sample and image correlation is performed between the first image and the subsequent images to interpolate the drift-time relationship of scanning electron microscope images. The distortion correction applies the axial symmetry model of charged-particle imaging theory to two images sharing the same location of one object under different imaging fields of view; the difference between these two images, apart from rigid displacement, gives the distortion parameters. Third-order precision is considered in the model, and experiments show that a maximum correction of one pixel is obtained for the high-resolution electron microscope system employed. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
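The core of the drift-correction scheme above is estimating the translation between the first image and each subsequent image. A minimal sketch of that step using FFT-based circular cross-correlation (integer-pixel shifts only; a full implementation would refine to sub-pixel and then interpolate drift over time):

```python
import numpy as np

def shift_by_xcorr(ref, img):
    """Integer-pixel translation of `img` relative to `ref` via FFT
    cross-correlation; wrap-around is unwrapped assuming small drifts."""
    cc = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    py, px = np.unravel_index(np.argmax(cc), cc.shape)

    def unwrap(p, n):
        return p - n if p > n // 2 else p

    return unwrap(py, cc.shape[0]), unwrap(px, cc.shape[1])

# Synthetic drift: shift a random texture by (+3, -2) pixels.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
img = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)
dy, dx = shift_by_xcorr(ref, img)
```

Repeating this for each image in the time series gives the sampled drift-time relationship that the paper interpolates and subtracts.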
Highly Reflective Multi-stable Electrofluidic Display Pixels
NASA Astrophysics Data System (ADS)
Yang, Shu
Electronic papers (E-papers) are displays that mimic the appearance of printed paper while retaining the features of conventional electronic displays, such as the ability to browse websites and play videos. The motivation for creating paper-like displays comes from the facts that reading on paper causes the least eye fatigue, owing to paper's reflective and light-diffusive nature, and that, unlike existing commercial displays, no energy of any form is needed to sustain the displayed image. To achieve the visual effect of a paper print, an ideal E-paper has to be highly reflective, with good contrast ratio and full-color capability. To sustain an image with zero power consumption, the display pixels need to be bistable, meaning that the "on" and "off" states are both lowest-energy states; a pixel can change its state only when sufficient external energy is supplied. Many emerging technologies are competing to demonstrate the first ideal E-paper device; however, none has achieved satisfactory visual effect, bistability and video speed at the same time. Challenges come either from inherent physical/chemical properties or from the fabrication process. Electrofluidic display is one of the most promising E-paper technologies. It has demonstrated high reflectivity, brilliant color and video-speed operation by moving a colored pigment dispersion between visible and invisible locations with an electrowetting force. However, the pixel design did not allow image bistability. Presented in this dissertation are multi-stable electrofluidic display pixels that can sustain grayscale levels without any power consumption while keeping the favorable features of the previous-generation electrofluidic display. The pixel design, a fabrication method using multiple-layer dry-film photoresist lamination, and physical/optical characterizations are discussed in detail.
Based on this pixel structure, preliminary results of a simplified design and fabrication method are demonstrated. As advanced research topics concerning the device's optical performance, an optical model for evaluating the light out-coupling efficiency of reflective displays is first established to guide the pixel design; furthermore, aluminum surface diffusers are analytically modeled and then fabricated onto multi-stable electrofluidic display pixels to demonstrate truly "white" multi-stable electrofluidic display modules. The achieved results establish the multi-stable electrofluidic display as an excellent candidate for the ultimate E-paper device, especially for large-scale signage applications.
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.
1999-01-01
Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart show a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
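The fractal-dimension-versus-pixel-size analysis above hinges on estimating a fractal dimension from an image at each resolution. As an illustration of the idea (using generic box counting on a binary set, not the isarithm/triangular-prism estimators typically used on NDVI surfaces):

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Box-counting fractal dimension of a binary image.

    Counts occupied s-by-s boxes at several box sizes and regresses
    log(count) on log(1/s); the slope estimates the fractal dimension.
    """
    mask = np.asarray(mask, bool)
    counts = []
    for s in sizes:
        h, w = mask.shape
        trimmed = mask[: h - h % s, : w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s,
                                trimmed.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes, float)),
                          np.log(np.asarray(counts, float)), 1)
    return slope

# Sanity check: a filled square is a 2-dimensional set.
d = box_count_dimension(np.ones((64, 64)))
```

Coarsening the image (increasing pixel size) before estimating the dimension reproduces the paper's resolution dependence: smoother covers lose roughness under aggregation, while self-similar covers keep roughly the same slope.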
NASA Astrophysics Data System (ADS)
Quintavalla, M.; Pozzi, P.; Verhaegen, Michelle; Bijlsma, Hielke; Verstraete, Hans; Bonora, S.
2018-02-01
Adaptive Optics (AO) has emerged as a very promising technique for high-resolution microscopy, where the presence of optical aberrations can easily compromise image quality. Typical AO systems, however, are almost impossible to implement on commercial microscopes. We propose a simple approach using a Multi-actuator Adaptive Lens (MAL) that can be inserted right after the objective and works in conjunction with image-optimization software, allowing wavefront-sensorless correction. We present the results obtained on several commercial microscopes, among them a confocal microscope, a fluorescence microscope, a light-sheet microscope and a multiphoton microscope.
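Wavefront-sensorless correction of this kind typically searches the lens's actuation modes for the settings that maximise an image-quality metric. A generic mode-by-mode parabolic search is sketched below (this is a common sensorless strategy, not necessarily the authors' optimisation software; the metric and mode count are illustrative):

```python
import numpy as np

def sensorless_correct(metric, n_modes=3, amplitudes=(-1.0, 0.0, 1.0)):
    """Mode-by-mode wavefront-sensorless optimisation.

    For each lens mode, evaluate the image-quality metric at a few trial
    amplitudes, fit a parabola, and move to its maximum if one exists.
    """
    coeffs = np.zeros(n_modes)
    for m in range(n_modes):
        vals = []
        for a in amplitudes:
            trial = coeffs.copy()
            trial[m] = a
            vals.append(metric(trial))
        p = np.polyfit(amplitudes, vals, 2)
        if p[0] < 0:                      # concave: a maximum exists
            coeffs[m] = -p[1] / (2 * p[0])
    return coeffs

# Toy quadratic metric peaking at a known aberration setting.
target = np.array([0.3, -0.5, 0.2])
best = sensorless_correct(lambda c: -np.sum((c - target) ** 2))
```

In practice the metric would be computed from live camera frames (e.g. image sharpness), and each mode would correspond to one deformation pattern of the adaptive lens.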
Evaluation of registration accuracy between Sentinel-2 and Landsat 8
NASA Astrophysics Data System (ADS)
Barazzetti, Luigi; Cuca, Branka; Previtali, Mattia
2016-08-01
Starting in June 2015, Sentinel-2A has been delivering high-resolution optical images (ground resolution up to 10 meters) to provide global coverage of the Earth's land surface every 10 days. The planned launch of Sentinel-2B, along with the integration of Landsat images, will provide time series with an unprecedented revisit time, indispensable for numerous monitoring applications in which high-resolution multi-temporal information is required. These include agriculture, water bodies and natural hazards, to name a few. However, the combined use of multi-temporal images requires accurate geometric registration, i.e. pixel-to-pixel correspondence for terrain-corrected products. This paper presents an analysis of spatial co-registration accuracy for several datasets of Sentinel-2 and Landsat 8 images distributed around the world. Images were compared with digital correlation techniques for image matching, obtaining an evaluation of registration accuracy with an affine transformation as the geometric model. Results demonstrate that sub-pixel accuracy was achieved between the 10 m resolution Sentinel-2 band (band 3) and the 15 m resolution panchromatic Landsat band (band 8).
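The registration-accuracy figure in such a study comes from fitting the affine model to tie points found by image correlation and reporting the residual. A minimal least-squares sketch with hypothetical tie points (the correlation step that produces them is omitted):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 6-parameter affine transform mapping src -> dst.

    `src`/`dst` are matched tie points (here hypothetical; in the study
    they would come from digital correlation between scenes). Returns
    the parameter matrix and the RMS residual in pixel units, which is
    the co-registration accuracy figure.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    resid = dst - A @ params
    rms = np.sqrt((resid ** 2).sum(axis=1).mean())
    return params, rms

# Hypothetical tie points related by a pure sub-pixel shift of (0.4, -0.3).
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]])
dst = src + np.array([0.4, -0.3])
params, rms = fit_affine(src, dst)
```

The last row of `params` is the translation; an RMS residual well below 1 pixel is what "sub-pixel accuracy" means in this context.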
In-plane "superresolution" MRI with phaseless sub-pixel encoding.
Hennel, Franciszek; Tian, Rui; Engel, Maria; Pruessmann, Klaas P
2018-04-15
We aim to acquire high-resolution imaging data using multiple excitations without sensitivity to fluctuations of the transverse magnetization phase, which is a major problem of multi-shot MRI. The concept of superresolution MRI based on microscopic tagging is analyzed using an analogy with the optical method of structured illumination. Sinusoidal tagging is shown to provide subpixel resolution by mixing neighboring spatial-frequency (k-space) bands. It represents a phaseless modulation added on top of the standard Fourier encoding, which allows the phase fluctuations to be discarded at an intermediate reconstruction step. Improvements are proposed to correct for tag distortions due to magnetic field inhomogeneity and to avoid the propagation of Gibbs ringing from intermediate low-resolution images to the final image. The method was applied to diffusion-weighted EPI. Artifact-free superresolution images can be obtained, despite the finite duration of the tagging sequence and the related pattern distortions, by a field-map-based phase correction of band-wise reconstructed images. The ringing present in the intermediate images can be suppressed by partial overlapping of the mixed k-space bands in combination with an adapted filter. High-resolution diffusion-weighted images of the human head were obtained with a three-shot EPI sequence despite motion-related phase fluctuations between shots. Due to its phaseless character, tagging-based sub-pixel encoding is an alternative to k-space segmentation in the presence of unknown phase fluctuations, in particular those due to motion under strong diffusion gradients. The proposed improvements render the method practicable under realistic conditions. © 2018 International Society for Magnetic Resonance in Medicine.
Lew, Matthew D; von Diezmann, Alexander R S; Moerner, W E
2013-02-25
Automated processing of double-helix (DH) microscope images of single molecules (SMs) streamlines the protocol required to obtain super-resolved three-dimensional (3D) reconstructions of ultrastructures in biological samples by single-molecule active control microscopy. Here, we present a suite of MATLAB subroutines, bundled with an easy-to-use graphical user interface (GUI), that facilitates 3D localization of single emitters (e.g. SMs, fluorescent beads, or quantum dots) with precisions of tens of nanometers in multi-frame movies acquired using a wide-field DH epifluorescence microscope. The algorithmic approach is based upon template matching for SM recognition and least-squares fitting for 3D position measurement, both of which are computationally expedient and precise. Overlapping images of SMs are ignored, and the precision of least-squares fitting is not as high as maximum likelihood-based methods. However, once calibrated, the algorithm can fit 15-30 molecules per second on a 3 GHz Intel Core 2 Duo workstation, thereby producing a 3D super-resolution reconstruction of 100,000 molecules over a 20×20×2 μm field of view (processing 128×128 pixels × 20000 frames) in 75 min.
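In double-helix localization, once the two lobes of a molecule's PSF have been found (by template matching and fitting), the lateral position is the lobe midpoint and the axial position follows from the rotation angle of the lobe pair, which turns with defocus. A minimal sketch of that final geometric step, assuming a linear angle-to-z calibration (real calibrations are measured by scanning a bead in z; names and numbers are illustrative):

```python
import numpy as np

def dh_localize(lobe1, lobe2, z_per_degree):
    """x, y, z from the two fitted lobes of a double-helix PSF.

    Lateral position is the midpoint of the lobe centroids (row, col);
    axial position is the lobe-pair angle times a calibration slope.
    """
    (y1, x1), (y2, x2) = lobe1, lobe2
    x = (x1 + x2) / 2.0
    y = (y1 + y2) / 2.0
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    return x, y, angle * z_per_degree

# Two lobe centroids at 45 degrees, hypothetical 10 nm-per-degree slope.
x, y, z = dh_localize((10.0, 20.0), (14.0, 24.0), z_per_degree=10.0)
```

The suite described above additionally rejects overlapping molecules before this step and repeats the measurement across all frames to build the 3D reconstruction.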
Development of a fast multi-line x-ray CT detector for NDT
NASA Astrophysics Data System (ADS)
Hofmann, T.; Nachtrab, F.; Schlechter, T.; Neubauer, H.; Mühlbauer, J.; Schröpfer, S.; Ernst, J.; Firsching, M.; Schweiger, T.; Oberst, M.; Meyer, A.; Uhlmann, N.
2015-04-01
Typical X-ray detectors for non-destructive testing (NDT) are line detectors or area detectors, such as flat panel detectors. Multi-line detectors are currently only available in medical Computed Tomography (CT) scanners. Compared to flat panel detectors, line and multi-line detectors can achieve much higher frame rates. This allows time-resolved 3D CT scans of an object under investigation. An improved image quality can also be achieved owing to reduced scattered radiation from the object and the detector. Another benefit of line and multi-line detectors is that very wide detectors can be assembled easily, while flat panel detectors are usually limited to an imaging field of approximately 40 × 40 cm2 at maximum. The big disadvantage of line detectors is the limited number of object slices that can be scanned simultaneously, which leads to long scan times for large objects. Volume scans with a multi-line detector are much faster, with almost similar image quality. Owing to these promising properties, the application of multi-line detectors outside medical CT would also be very interesting for NDT. However, medical CT multi-line detectors are optimized for scanning human bodies; many non-medical applications require higher spatial resolutions and/or higher X-ray energies. For those non-medical applications we are developing a fast multi-line X-ray detector. In the scope of this work, we present the current state of development of the novel detector, which includes several outstanding properties such as an adjustable curved design for variable focus-detector distances, preserving nearly uniform perpendicular irradiation over the entire detector width. The basis of the detector is a specifically designed, radiation-hard CMOS imaging sensor with a pixel pitch of 200 μm. Each pixel has an automatic in-pixel gain adjustment, which allows for both very high sensitivity and a wide dynamic range. The final detector is planned to have 256 lines of pixels.
Through a modular assembly of the detector, the width can be chosen in multiples of 512 pixels. With a frame rate of up to 300 frames/s (full resolution) or 1200 frames/s (analog binning to 400 μm pixel pitch), time-resolved 3D CT applications become possible. Two versions of the detector are in development: one with a high-resolution scintillator and one with a thick, structured and very efficient scintillator (pitch 400 μm). This way the detector can even work with X-ray energies up to 450 kVp.
Photon counting phosphorescence lifetime imaging with TimepixCam
Hirvonen, Liisa M.; Fisher-Levine, Merlin; Suhling, Klaus; ...
2017-01-12
TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window, and read out by a Timepix ASIC. The 256 × 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting (TCSPC) imaging. We have characterised the photon detection capabilities of this detector system, and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.
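Per pixel, phosphorescence lifetime mapping of this kind reduces to fitting a decay model to the histogram of photon arrival times. A minimal sketch for an ideal mono-exponential decay, using a log-linear fit (real TCSPC analysis would use maximum-likelihood fitting and account for the instrument response; the sampling numbers are illustrative):

```python
import numpy as np

def fit_lifetime(t, counts):
    """Lifetime from a photon-arrival histogram.

    For a mono-exponential decay, regressing log(counts) on time gives
    the decay rate; tau is its negative reciprocal. Empty bins are
    excluded before taking the logarithm.
    """
    t = np.asarray(t, float)
    c = np.asarray(counts, float)
    keep = c > 0
    slope, _ = np.polyfit(t[keep], np.log(c[keep]), 1)
    return -1.0 / slope

# Ideal 1 us decay sampled every 15 ns (the TimepixCam time bin).
t = np.arange(0.0, 5e-6, 15e-9)
counts = 1000.0 * np.exp(-t / 1e-6)
tau = fit_lifetime(t, counts)
```

With real photon-counting data the bin contents are Poisson-distributed, which is why likelihood-based fitting is preferred at low counts.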
Interactive classification and content-based retrieval of tissue images
NASA Astrophysics Data System (ADS)
Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof
2002-11-01
We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues at the pixel, region and image levels. Pixel-level features are generated using unsupervised clustering of color and texture values. Region-level features include shape information and statistics of pixel-level feature values. Image-level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using the spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.
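The final Bayesian classification step can be pictured as scoring each tissue class by its prior times the fuzzy memberships of the observed spatial relationships under that class's model. A toy naive-Bayes-style sketch (the class names, scores and independence assumption are ours, not the paper's learned models):

```python
import numpy as np

def classify(relationship_scores, class_priors):
    """Pick the class maximising prior x product of fuzzy memberships.

    `relationship_scores[c]` holds the membership values of the observed
    region relationships under class c's model; treating them as
    independent gives a naive Bayes decision rule.
    """
    best, best_p = None, -1.0
    for cls, prior in class_priors.items():
        p = prior * float(np.prod(relationship_scores[cls]))
        if p > best_p:
            best, best_p = cls, p
    return best

# Hypothetical memberships for two observed relationships.
scores = {"epithelium": [0.9, 0.8], "stroma": [0.4, 0.5]}
priors = {"epithelium": 0.5, "stroma": 0.5}
label = classify(scores, priors)
```

Relevance feedback would update the memberships or priors between queries, shifting subsequent decisions.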
Visualization of Electrical Field of Electrode Using Voltage-Controlled Fluorescence Release
Jia, Wenyan; Wu, Jiamin; Gao, Di; Wang, Hao; Sun, Mingui
2016-01-01
In this study we propose an approach to directly visualize electrical current distribution at the electrode-electrolyte interface of a biopotential electrode. High-speed fluorescent microscopic images are acquired when an electric potential is applied across the interface to trigger the release of fluorescent material from the surface of the electrode. These images are analyzed computationally to obtain the distribution of the electric field from the fluorescent intensity of each pixel. Our approach allows direct observation of microscopic electrical current distribution around the electrode. Experiments are conducted to validate the feasibility of the fluorescent imaging method. PMID:27253615
Resolution enhancement of pump-probe microscope with an inverse-annular filter
NASA Astrophysics Data System (ADS)
Kobayashi, Takayoshi; Kawasumi, Koshi; Miyazaki, Jun; Nakata, Kazuaki
2018-04-01
Optical pump-probe microscopy can provide images by detecting changes in probe light intensity induced by stimulated emission, photoinduced absorbance change, or photothermally induced refractive-index change, in either transmission or reflection mode. Photothermal microscopy, one type of optical pump-probe microscopy, has intrinsic super-resolution capability due to the bilinear dependence of the signal intensity on pump and probe. We introduce new techniques for further resolution enhancement and fast imaging in photothermal microscopy. First, we introduce a new pupil filter, an inverse-annular pupil filter, in a pump-probe photothermal microscope, which provides resolution enhancement in three dimensions. The improvement in the lateral and axial resolutions is demonstrated in imaging experiments using 20-nm gold nanoparticles. The improvements in X (perpendicular to the common pump and probe polarization direction), Y (parallel to the polarization direction), and Z (the axial direction) are 15 ± 6, 8 ± 8, and 21 ± 2%, respectively, relative to the resolution without a pupil filter. The resolution enhancement is even better than a vector-field calculation, which predicts corresponding enhancements of 11, 8, and 6%; we discuss possible explanations for this unexpected result. We also demonstrate photothermal imaging of thick biological samples (cells from rabbit intestine and kidney) stained with hematoxylin and eosin dye with the inverse-annular filter. Second, a fast, high-sensitivity photothermal microscope is developed by implementing a spatially segmented balanced detection scheme in a laser scanning microscope using a galvano mirror. We confirm a 4.9-fold improvement in signal-to-noise ratio for the spatially segmented balanced detection compared with conventional detection. The system demonstrates simultaneous bi-modal photothermal and confocal fluorescence imaging of transgenic mouse brain tissue with a pixel dwell time of 20 µs.
The fluorescence image visualizes neurons expressing yellow fluorescent proteins, while the photothermal signal detects endogenous chromophores in the mouse brain, allowing 3D visualization of the distribution of various features such as blood cells and fine structures most probably due to lipids. This imaging modality was constructed using compact and cost-effective laser diodes, and will thus be widely useful in the life and medical sciences. Third, we have further improved the resolution of high-sensitivity laser scanning photothermal microscopy by applying non-linear detection. With a second-order non-linear scheme, the method achieves super resolution with 61 and 42% enhancement over the diffraction-limited values at the probe and pump wavelengths, respectively, together with a high frame rate in a laser scanning microscope. The maximum resolution, determined from the photothermal signal of gold nanoparticles, is 160 nm in the second-order non-linear detection mode and 270 nm in the linear detection mode. The pixel dwell time and frame time for a 300 × 300 pixel image are 50 µs and 4.5 s, respectively, much shorter than the corresponding 1 ms and 100 s of the piezo-driven stage system.
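The resolution gains above follow from the effective point spread function being a product of beam profiles: the bilinear signal goes as the pump PSF times the probe PSF, and second-order non-linear detection squares the pump contribution, narrowing the product further. A toy 1-D check with Gaussian focal spots (the widths are arbitrary illustrative numbers):

```python
import numpy as np

def fwhm(profile, dx):
    """Full width at half maximum of a sampled 1-D profile."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    return (above[-1] - above[0]) * dx

x = np.linspace(-1.0, 1.0, 4001)
dx = x[1] - x[0]
pump = np.exp(-x**2 / (2 * 0.17**2))    # pump focal spot (arb. width)
probe = np.exp(-x**2 / (2 * 0.20**2))   # probe focal spot (arb. width)
linear = pump * probe                   # bilinear photothermal signal
nonlinear = pump**2 * probe             # second-order non-linear signal
```

For Gaussians the product widths combine as 1/sigma_eff^2 = sum of 1/sigma_i^2, so each extra power of the pump profile tightens the effective PSF, which is the mechanism behind the 160 nm versus 270 nm figures quoted above.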
Multidirectional Image Sensing for Microscopy Based on a Rotatable Robot.
Shen, Yajing; Wan, Wenfeng; Zhang, Lijun; Yong, Li; Lu, Haojian; Ding, Weili
2015-12-15
Image sensing at a small scale is essential in many fields, including microsample observation, defect inspection, material characterization and so on. However, multi-directional imaging of micro objects is still very challenging due to the limited field of view (FOV) of microscopes. This paper reports a novel approach for multi-directional image sensing in microscopes based on a rotatable robot. First, a robot with endless rotation ability is designed and integrated with the microscope. Then, the micro object is aligned to the rotation axis of the robot automatically, based on the proposed forward-backward alignment strategy. After that, multi-directional images of the sample can be obtained by rotating the robot within one revolution under the microscope. To demonstrate the versatility of this approach, we view various types of micro samples from multiple directions in both optical microscopy and scanning electron microscopy, and panoramic images of the samples are processed as well. The proposed method paves a new way for microscopy image sensing, and we believe it could have significant impact in many fields, especially for sample detection, manipulation and characterization at a small scale.
Investigation of skin structures based on infrared wave parameter indirect microscopic imaging
NASA Astrophysics Data System (ADS)
Zhao, Jun; Liu, Xuefeng; Xiong, Jichuan; Zhou, Lijuan
2017-02-01
Detailed imaging and analysis of skin structures are becoming increasingly important in modern healthcare and clinical diagnosis. Nanometer-resolution imaging techniques such as SEM and AFM can damage the sample and cannot measure the whole skin structure from the surface through the epidermis and dermis to the subcutaneous layer. Conventional optical microscopy has the highest imaging efficiency, flexibility in on-site applications and the lowest manufacturing and usage cost, but its image resolution is too low for biomedical analysis. Infrared parameter indirect microscopic imaging (PIMI) uses an infrared laser as the light source because of its high transmission in skin. The polarization of the optical wave passing through the skin sample is modulated while the variation of the optical field is observed at the imaging plane. The intensity variation curve of each pixel is fitted to extract near-field polarization parameters, which form the indirect images. During this through-skin light modulation and image-retrieval process, the curve fitting removes the blurring scatter from neighboring pixels and keeps only the field variations related to local skin structures. Using infrared PIMI, we can break the diffraction limit and bring wide-field optical image resolution to sub-200 nm, while taking advantage of the high transmission of infrared waves in skin structures.
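The per-pixel curve fitting described above amounts to fitting each pixel's intensity-versus-polarization-angle curve to a sinusoidal model and keeping the fitted parameters as the indirect image channels. A minimal linearised sketch, assuming the common I(θ) = a + b·cos(2θ − 2φ) form (the exact PIMI model and parameter set may differ):

```python
import numpy as np

def fit_modulation(thetas, intensities):
    """Per-pixel fit of I(theta) = a + b*cos(2*theta - 2*phi).

    Linearised with the basis [1, cos 2θ, sin 2θ]; returns the mean
    level a, modulation depth b, and orientation phi, which would be
    mapped into separate indirect-image channels.
    """
    th = np.asarray(thetas, float)
    A = np.column_stack([np.ones_like(th), np.cos(2 * th), np.sin(2 * th)])
    a, c, s = np.linalg.lstsq(A, np.asarray(intensities, float),
                              rcond=None)[0]
    return a, float(np.hypot(c, s)), 0.5 * float(np.arctan2(s, c))

# One pixel's modulation curve: mean 2.0, depth 0.5, orientation 0.3 rad.
th = np.linspace(0.0, np.pi, 18, endpoint=False)
I = 2.0 + 0.5 * np.cos(2 * th - 2 * 0.3)
a, b, phi = fit_modulation(th, I)
```

Because the fit retains only components that follow the imposed modulation, uncorrelated scatter from neighboring pixels averages out, which is the noise-rejection mechanism the abstract describes.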
Optimal resolution in Fresnel incoherent correlation holographic fluorescence microscopy
Brooker, Gary; Siegel, Nisan; Wang, Victor; Rosen, Joseph
2011-01-01
Fresnel Incoherent Correlation Holography (FINCH) enables holograms and 3D images to be created from incoherent light with just a camera and spatial light modulator (SLM). We previously described its application to microscopic incoherent fluorescence wherein one complex hologram contains all the 3D information in the microscope field, obviating the need for scanning or serial sectioning. We now report experiments which have led to the optimal optical, electro-optic, and computational conditions necessary to produce holograms which yield high quality 3D images from fluorescent microscopic specimens. An important improvement from our previous FINCH configurations capitalizes on the polarization sensitivity of the SLM so that the same SLM pixels which create the spherical wave simulating the microscope tube lens, also pass the plane waves from the infinity corrected microscope objective, so that interference between the two wave types at the camera creates a hologram. This advance dramatically improves the resolution of the FINCH system. Results from imaging a fluorescent USAF pattern and a pollen grain slide reveal resolution which approaches the Rayleigh limit by this simple method for 3D fluorescent microscopic imaging. PMID:21445140
Opportunity Microscopic Imager Results from the Western Rim of Endeavour Crater
NASA Technical Reports Server (NTRS)
Herkenhoff, K. E.; Arvidson, R. E.; Mittlefehldt, D. W.; Sullivan, R. J.
2016-01-01
The Athena science payload on the Mars Exploration Rovers (MER Spirit and Opportunity) includes the Microscopic Imager (MI), a fixed focus close-up camera mounted on the instrument arm. The MI acquires images at a scale of 31 micrometers/pixel over a broad spectral range (400 to 700 nm) using only natural illumination of target surfaces. Radio signals from Spirit have not been received since March 2010, so attempts to communicate with that rover ceased in mid-2011. The Opportunity MI optics were contaminated by a global dust storm in 2007. That contamination continues to reduce the contrast of MI images, and is being monitored by occasionally imaging the sky.
Hattab, Georges; Schlüter, Jan-Philip; Becker, Anke; Nattkemper, Tim W.
2017-01-01
In order to understand gene function in bacterial life cycles, time lapse bioimaging is applied in combination with different marker protocols in so-called microfluidics chambers (i.e., a multi-well plate). In one experiment, a series of T images is recorded for one visual field, with a pixel resolution of 60 nm/px. Any (semi-)automatic analysis of the data is hampered by strong image noise, low contrast and, last but not least, considerable irregular shifts during the acquisition. Image registration corrects such shifts, enabling the next steps of the analysis (e.g., feature extraction or tracking). Image alignment faces two obstacles in this microscopic context: (a) highly dynamic structural changes in the sample (i.e., colony growth) and (b) an individual data set-specific sample environment which makes the application of landmark-based alignments almost impossible. We present a computational image registration solution, which we refer to as ViCAR: (Vi)sual (C)ues based (A)daptive (R)egistration, for such microfluidics experiments, consisting of (1) the detection of particular polygons (outlined and segmented ones, referred to as visual cues), (2) the adaptive retrieval of three coordinates throughout different sets of frames, and finally (3) an image registration based on the relation of these points, correcting both rotation and translation. We tested ViCAR with different data sets and found that it provides an effective spatial alignment, thereby paving the way to extract temporal features pertinent to each resulting bacterial colony. By using ViCAR, we achieved an image registration with 99.9% image closeness, based on an average RMSD of 4 × 10^-2 pixels, and superior results compared to a state-of-the-art algorithm. PMID:28620411
a Spiral-Based Downscaling Method for Generating 30 M Time Series Image Data
NASA Astrophysics Data System (ADS)
Liu, B.; Chen, J.; Xing, H.; Wu, H.; Zhang, J.
2017-09-01
The spatial detail and updating frequency of land cover data are important factors influencing land surface dynamic monitoring applications at high spatial resolution. However, the fragmented patches and seasonal variability of some land cover types (e.g., small crop fields, wetland) make the generation of land cover data labor-intensive and difficult. Utilizing high spatial resolution multi-temporal image data is a possible solution. Unfortunately, the spatial and temporal resolution of available remote sensing data such as the Landsat or MODIS datasets can hardly satisfy the minimum mapping unit and the updating frequency of current land cover mapping at the same time. Generating a high resolution time series may be a compromise to cover this shortage in the land cover updating process. One popular way is to downscale multi-temporal MODIS data with other high spatial resolution auxiliary data such as Landsat. However, the usual manner of downscaling a pixel based on a window may lead to an underdetermined problem in heterogeneous areas, resulting in uncertainty for some high spatial resolution pixels; therefore, the downscaled multi-temporal data can hardly reach the spatial resolution of Landsat data. A spiral-based method was introduced to downscale low spatial, high temporal resolution image data to high spatial, high temporal resolution image data. By searching for similar pixels in the adjacent region along a spiral, the pixel set was built up pixel by pixel. Adopting this pixel set largely prevents the underdetermined problem when solving the linear system. With the help of ordinary least squares, the method inverted the endmember values of the linear system. The high spatial resolution image was reconstructed band by band on the basis of a high spatial resolution class map and the endmember values.
Then, the high spatial resolution time series was formed from these high spatial resolution images image by image. A simulated experiment and a remote sensing image downscaling experiment were conducted. In the simulated experiment, the 30 m class map dataset GlobeLand30 was adopted to investigate how well the downscaling procedure avoids the underdetermined problem, and a comparison between the spiral and window approaches was conducted. Further, MODIS NDVI and Landsat image data were used to generate a 30 m NDVI time series in the remote sensing image downscaling experiment. The simulated experiment results showed that the proposed method downscales pixels robustly in heterogeneous regions and is superior to traditional window-based methods. The high resolution time series generated may benefit the mapping and updating of land cover data.
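The two core steps of the method, the spiral search that assembles a pixel set and the OLS inversion of the endmember values, can be sketched as follows. This is illustrative only: the similarity test used to admit pixels to the set is omitted, and the function names are our own.

```python
import numpy as np

def spiral_offsets(n):
    """First n (dy, dx) offsets visited in an outward square spiral from (0, 0)."""
    out, y, x = [(0, 0)], 0, 0
    dy, dx, step = 0, 1, 1
    while len(out) < n:
        for _ in range(2):                     # two legs per ring before the step grows
            for _ in range(step):
                y, x = y + dy, x + dx
                out.append((y, x))
                if len(out) == n:
                    return out
            dy, dx = dx, -dy                   # rotate 90 degrees
        step += 1
    return out

def invert_endmembers(fractions, coarse_values):
    """OLS inversion of the linear mixing system F @ e = y for endmember values e.

    fractions     : (P, C) class fractions of each of the P coarse pixels in the set
    coarse_values : (P,) observed coarse-pixel values
    """
    e, *_ = np.linalg.lstsq(fractions, coarse_values, rcond=None)
    return e
```

For each coarse pixel, the class fractions of the admitted spiral neighbours form the rows of `fractions`; the recovered endmember vector `e` then assigns one value per class to the fine-resolution pixels.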
Multi-class segmentation of neuronal electron microscopy images using deep learning
NASA Astrophysics Data System (ADS)
Khobragade, Nivedita; Agarwal, Chirag
2018-03-01
Study of connectivity of neural circuits is an essential step towards a better understanding of functioning of the nervous system. With the recent improvement in imaging techniques, high-resolution and high-volume images are being generated requiring automated segmentation techniques. We present a pixel-wise classification method based on Bayesian SegNet architecture. We carried out multi-class segmentation on serial section Transmission Electron Microscopy (ssTEM) images of Drosophila third instar larva ventral nerve cord, labeling the four classes of neuron membranes, neuron intracellular space, mitochondria and glia / extracellular space. Bayesian SegNet was trained using 256 ssTEM images of 256 x 256 pixels and tested on 64 different ssTEM images of the same size, from the same serial stack. Due to high class imbalance, we used a class-balanced version of Bayesian SegNet by re-weighting each class based on their relative frequency. We achieved an overall accuracy of 93% and a mean class accuracy of 88% for pixel-wise segmentation using this encoder-decoder approach. On evaluating the segmentation results using similarity metrics like SSIM and Dice Coefficient, we obtained scores of 0.994 and 0.886 respectively. Additionally, we used the network trained using the 256 ssTEM images of Drosophila third instar larva for multi-class labeling of ISBI 2012 challenge ssTEM dataset.
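The abstract states only that each class is re-weighted by its relative frequency; median-frequency balancing is one common concrete choice, sketched below as an assumption rather than the authors' exact scheme:

```python
import numpy as np

def class_balance_weights(labels, num_classes):
    """Median-frequency class weights: rarer classes get proportionally larger weights.

    labels : integer label map (any shape) with values in [0, num_classes)
    Returns a (num_classes,) weight vector with weight_c = median_freq / freq_c.
    """
    counts = np.bincount(labels.ravel(), minlength=num_classes).astype(float)
    freq = counts / counts.sum()
    freq[freq == 0] = np.nan                   # ignore absent classes in the median
    w = np.nanmedian(freq) / freq
    return np.nan_to_num(w)                    # absent classes get weight 0
```

The resulting vector can be passed as per-class loss weights so that a rare class such as mitochondria contributes as much to the training loss as the dominant membrane and extracellular classes.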
Region-based multifocus image fusion for the precise acquisition of Pap smear images.
Tello-Mijares, Santiago; Bescós, Jesús
2018-05-01
A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolaou (Pap smear) source images is presented. These images, each captured at a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image that highly preserves the original pixel information while keeping fusion artifacts negligibly visible. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged into a single combined image; finally, this image is processed with an adaptive artifact removal process. The combination of a region-oriented approach, instead of block-based approaches, and a minimal modification of the value of focused pixels in the original images achieves a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
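The region-oriented fusion step can be sketched minimally, assuming the segmentation label map is already available (the paper obtains it by mean-shift segmentation of the best-focused image) and using Laplacian energy as a stand-in focus measure:

```python
import numpy as np

def laplacian_energy(img):
    """Focus measure: energy of a simple 4-neighbour Laplacian (higher = sharper)."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap ** 2

def fuse_by_region(stack, regions):
    """For each region, copy pixels from the stack image whose focus measure is highest.

    stack   : (N, H, W) images of the same scene at different lens positions
    regions : (H, W) integer segmentation labels
    """
    energies = np.array([laplacian_energy(img) for img in stack])   # (N, H, W)
    fused = np.empty(stack.shape[1:], dtype=stack.dtype)
    for r in np.unique(regions):
        mask = regions == r
        best = np.argmax(energies[:, mask].mean(axis=1))            # sharpest image
        fused[mask] = stack[best][mask]
    return fused
```

Because whole regions are copied from a single source image, focused pixels keep their original values, which is the property the method exploits to avoid visible fusion artifacts.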
Mattioli Della Rocca, Francescopaolo
2018-01-01
This paper examines methods to best exploit the High Dynamic Range (HDR) of the single photon avalanche diode (SPAD) in a high fill-factor HDR photon counting pixel that is scalable to megapixel arrays. The proposed method combines multi-exposure HDR with temporal oversampling in-pixel. We present a silicon demonstration IC with 96 × 40 array of 8.25 µm pitch 66% fill-factor SPAD-based pixels achieving >100 dB dynamic range with 3 back-to-back exposures (short, mid, long). Each pixel sums 15 bit-planes or binary field images internally to constitute one frame providing 3.75× data compression, hence the 1k frames per second (FPS) output off-chip represents 45,000 individual field images per second on chip. Two future projections of this work are described: scaling SPAD-based image sensors to HDR 1 MPixel formats and shrinking the pixel pitch to 1–3 µm. PMID:29641479
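The multi-exposure HDR principle described above can be sketched in a few lines. This is a generic software merge, not the chip's on-sensor arithmetic: per pixel, the counts from the short, mid and long exposures are reduced to one photon rate by keeping the longest exposure that did not saturate (the `full_well` counter limit is a hypothetical parameter):

```python
import numpy as np

def fuse_exposures(counts, exposure_times, full_well):
    """Merge photon counts from back-to-back exposures into one HDR rate estimate.

    counts         : (E, H, W) photon counts, ordered short -> long
    exposure_times : (E,) exposure durations, same order
    full_well      : saturation count of the in-pixel counter
    """
    rates = counts / np.asarray(exposure_times, dtype=float)[:, None, None]
    valid = counts < full_well
    # Index of the longest unsaturated exposure per pixel (fall back to shortest).
    idx = np.where(valid.any(axis=0),
                   (valid.shape[0] - 1) - np.argmax(valid[::-1], axis=0),
                   0)
    return np.take_along_axis(rates, idx[None], axis=0)[0]
```

Long exposures supply low-noise rates for dim pixels, while bright pixels fall back to the short exposure, which is what extends the dynamic range beyond a single exposure's counter depth.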
Multi-Pixel Simultaneous Classification of PolSAR Image Using Convolutional Neural Networks
Xu, Xin; Gui, Rong; Pu, Fangling
2018-01-01
Convolutional neural networks (CNN) have achieved great success in the optical image processing field. Because of the excellent performance of CNN, more and more methods based on CNN are applied to polarimetric synthetic aperture radar (PolSAR) image classification. Most CNN-based PolSAR image classification methods can only classify one pixel each time. Because all the pixels of a PolSAR image are classified independently, the inherent interrelation of different land covers is ignored. We use a fixed-feature-size CNN (FFS-CNN) to classify all pixels in a patch simultaneously. The proposed method has several advantages. First, FFS-CNN can classify all the pixels in a small patch simultaneously. When classifying a whole PolSAR image, it is faster than common CNNs. Second, FFS-CNN is trained to learn the interrelation of different land covers in a patch, so it can use the interrelation of land covers to improve the classification results. The experiments of FFS-CNN are evaluated on a Chinese Gaofen-3 PolSAR image and other two real PolSAR images. Experiment results show that FFS-CNN is comparable with the state-of-the-art PolSAR image classification methods. PMID:29510499
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
A multi-modal stereo microscope based on a spatial light modulator.
Lee, M P; Gibson, G M; Bowman, R; Bernet, S; Ritsch-Marte, M; Phillips, D B; Padgett, M J
2013-07-15
Spatial Light Modulators (SLMs) can emulate the classic microscopy techniques, including differential interference contrast (DIC) and (spiral) phase contrast. Their programmability brings the benefit of flexibility and the option to multiplex images, for single-shot quantitative imaging or for simultaneous multi-plane imaging (depth-of-field multiplexing). We report the development of a microscope sharing many of the previously demonstrated capabilities, within a holographic implementation of a stereo microscope. Furthermore, we use the SLM to combine stereo microscopy with a refocusing filter and with a darkfield filter. The instrument is built around a custom inverted microscope and equipped with an SLM which gives various imaging modes laterally displaced on the same camera chip. In addition, a wide-angle camera allows visualisation of a larger region of the sample.
The Athena Microscopic Imager Investigation
NASA Technical Reports Server (NTRS)
Herkenhoff, K. E.; Squyres, S. W.; Bell, J. F., III; Maki, J. N.; Arneson, H. M.; Brown, D. I.; Collins, S. A.; Dingizian, A.; Elliot, S. T.; Goetz, W.
2003-01-01
The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI) [1]. The MI is a fixed-focus camera mounted on the end of an extendable instrument arm, the Instrument Deployment Device (IDD; see Figure 1). The MI was designed to acquire images at a spatial resolution of 30 microns/pixel over a broad spectral range (400 - 700 nm; see Table 1). Technically, the Microscopic Imager is not a microscope: it has a fixed magnification of 0.4 and is intended to produce images that simulate a geologist's view through a common hand lens. In photographers' parlance, the system makes use of a macro lens. The MI uses the same electronics design as the other MER cameras [2, 3] but has optics that yield a field of view of 31 × 31 mm across a 1024 × 1024 pixel CCD image (Figure 2). The MI acquires images using only solar or skylight illumination of the target surface. A contact sensor is used to place the MI slightly closer to the target surface than its best focus distance (about 66 mm), allowing concave surfaces to be imaged in good focus. Because the MI has a relatively small depth of field (3 mm), a single MI image of a rough surface will contain both focused and unfocused areas. Coarse focusing will be achieved by moving the IDD away from a rock target after the contact sensor is activated. Multiple images taken at various distances will be acquired to ensure good focus on all parts of rough surfaces. By combining a set of images acquired in this way, a completely focused image can be assembled. Stereoscopic observations can be obtained by moving the MI laterally relative to its boresight. Estimates of the position and orientation of the MI for each acquired image will be stored in the rover computer and returned to Earth with the image data. The MI optics will be protected from the Martian environment by a retractable dust cover.
The dust cover includes a Kapton window that is tinted orange to restrict the spectral bandpass to 500-700 nm, allowing color information to be obtained by taking images with the dust cover open and closed. The MI will image the same materials measured by other Athena instruments (including surfaces prepared by the Rock Abrasion Tool), as well as rock and soil targets of opportunity. Subsets of the full image array can be selected and/or pixels can be binned to reduce data volume. Image compression will be used to maximize the information contained in the data returned to Earth. The resulting MI data will place other MER instrument data in context and aid in petrologic and geologic interpretations of rocks and soils on Mars.
Radar velocity determination using direction of arrival measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin W.; Bickel, Douglas L.; Naething, Richard M.
The various technologies presented herein relate to utilizing direction of arrival (DOA) data to determine various flight parameters for an aircraft. A plurality of radar images (e.g., SAR images) can be analyzed to identify a plurality of pixels in the radar images relating to one or more ground targets. In an embodiment, the plurality of pixels can be selected based upon the pixels exceeding a SNR threshold. The DOA data in conjunction with a measurable Doppler frequency for each pixel can be obtained. Multi-aperture technology enables derivation of an independent measure of DOA to each pixel based on interferometric analysis. This independent measure of DOA enables decoupling of the aircraft velocity from the DOA in a range-Doppler map, thereby enabling determination of a radar velocity. The determined aircraft velocity can be utilized to update an onboard INS, and to keep it aligned, without the need for additional velocity-measuring instrumentation.
High dynamic range bio-molecular ion microscopy with the Timepix detector.
Jungmann, Julia H; MacAleese, Luke; Visser, Jan; Vrakking, Marc J J; Heeren, Ron M A
2011-10-15
Highly parallel, active pixel detectors enable novel detection capabilities for large biomolecules in time-of-flight (TOF) based mass spectrometry imaging (MSI). In this work, a 512 × 512 pixel, bare Timepix assembly combined with chevron microchannel plates (MCP) captures time-resolved images of several m/z species in a single measurement. Mass-resolved ion images from Timepix measurements of peptide and protein standards demonstrate the capability to return both mass-spectral and localization information of biologically relevant analytes from matrix-assisted laser desorption ionization (MALDI) on a commercial ion microscope. The use of an MCP-Timepix assembly delivers an increase in dynamic range of several orders of magnitude. The Timepix returns well-defined mass spectra even at subsaturation MCP gains, which prolongs the MCP lifetime and allows the gain to be optimized for image quality. The Timepix peak resolution is limited only by the resolution of the in-pixel measurement clock. Oligomers of the protein ubiquitin were measured up to 78 kDa. © 2011 American Chemical Society
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2016-09-01
In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor has become increasingly popular in surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, a wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational methods optimization algorithm to map the optical flow fields computed from different wavelength images. This results in the alignment of the flow fields, which in turn produces correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variations, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against a ground truth.
Hi-fidelity multi-scale local processing for visually optimized far-infrared Herschel images
NASA Astrophysics Data System (ADS)
Li Causi, G.; Schisano, E.; Liu, S. J.; Molinari, S.; Di Giorgio, A.
2016-07-01
In the context of the "Hi-Gal" multi-band full-plane mapping program for the Galactic Plane, as imaged by the Herschel far-infrared satellite, we have developed a semi-automatic tool which produces high definition, high quality color maps optimized for visual perception of extended features, like bubbles and filaments, against the high background variations. We project the map tiles of three selected bands onto a 3-channel panorama, which spans the central 130 degrees of galactic longitude times 2.8 degrees of galactic latitude, at a pixel scale of 3.2", in Cartesian galactic coordinates. We then process this image piecewise, applying a custom multi-scale local stretching algorithm, enforced by a local multi-scale color balance. Finally, we apply an edge-preserving contrast enhancement to perform artifact-free detail sharpening. Thanks to this tool, we have produced a stunning giga-pixel color image of the far-infrared Galactic Plane that we made publicly available with the recent release of the Hi-Gal mosaics and compact source catalog.
Multi-Sensor Registration of Earth Remotely Sensed Imagery
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)
2001-01-01
Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4m), Landsat-7/ETM+ (30m), MODIS (500m), and SeaWIFS (1000m).
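Of the three matching criteria listed, mutual information is a common choice for multi-sensor pairs because it does not assume a linear intensity relation between sensors. A histogram-based sketch (the bin count is an arbitrary choice here, not taken from the paper):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images from their joint gray-level histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

A registration loop would shift or warp one image over the other and keep the transform that maximizes this score.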
a New Object-Based Framework to Detect Shadows in High-Resolution Satellite Imagery Over Urban Areas
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.
2015-12-01
In this paper a new object-based framework to detect shadow areas in high resolution satellite images is proposed. To produce a shadow map at the pixel level, state-of-the-art supervised machine learning algorithms are employed. Automatic ground truth generation, based on Otsu thresholding of shadow and non-shadow indices, is used to train the classifiers. This is followed by segmenting the image scene to create image objects. To detect shadow objects, a majority vote over the pixel-based shadow detection results is applied. A GeoEye-1 multi-spectral image over an urban area in Qom city of Iran is used in the experiments. Results show the superiority of the proposed method over traditional pixel-based approaches, both visually and quantitatively.
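Two steps of this pipeline, Otsu thresholding for automatic ground truth and the object-level majority vote, are compact enough to sketch (a minimal illustration, not the authors' implementation):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's threshold: maximize the between-class variance of a two-class split."""
    hist, edges = np.histogram(values.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                       # class-0 probability up to each bin
    w1 = 1.0 - w0
    m = np.cumsum(p * centers)              # cumulative mean
    mt = m[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mt * w0 - m) ** 2 / (w0 * w1)
    return centers[np.nanargmax(between)]

def majority_vote(pixel_labels, segments):
    """Object-level label: per segment, the most frequent pixel-level label."""
    out = np.empty_like(pixel_labels)
    for s in np.unique(segments):
        mask = segments == s
        out[mask] = np.bincount(pixel_labels[mask]).argmax()
    return out
```

Thresholding a shadow index with `otsu_threshold` yields training labels with no manual annotation, and `majority_vote` turns the noisy pixel-level classification into clean per-object decisions.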
Multi-scale image segmentation method with visual saliency constraints and its application
NASA Astrophysics Data System (ADS)
Chen, Yan; Yu, Jie; Sun, Kaimin
2018-03-01
Object-based image analysis has many advantages over pixel-based methods, so it is one of the current research hotspots. Obtaining image objects by multi-scale image segmentation is an essential prerequisite for object-based image analysis. The currently popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and yields accurate object boundaries. However, the macro statistical characteristics of image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important; some specific targets or target groups with particular features deserve more attention than the others. To avoid over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, owing to the visual saliency model, the balance between local and macroscopic characteristics can be well controlled during the segmentation of different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas.
Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods and gives priority control to the salient objects of interest. The method has been applied to image quality evaluation, scattered residential area extraction, sparse forest extraction and other applications to verify its validity. All applications showed good results.
Kim, Daehyeok; Song, Minkyu; Choe, Byeongseong; Kim, Soo Youn
2017-06-25
In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed for the 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) of the CIS, which supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution images enable the CIS to reduce total power consumption while the scene holds steady without events. A prototype sensor of 176 × 144 pixels has been fabricated in a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T-active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (at full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital) at a frame rate of 14 frames/s.
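The resolution scaling the sensor performs in hardware can be mimicked in software by block-averaging; the sketch below only illustrates the 1/2 ... 1/64 mode arithmetic, not the chip's actual readout path:

```python
import numpy as np

def downscale_mode(frame, mode):
    """Average-bin a frame by 2**mode per axis (mode 0 = full, 1 = 1/2, ..., 6 = 1/64)."""
    f = 2 ** mode
    h, w = frame.shape[0] // f * f, frame.shape[1] // f * f
    # Group pixels into f x f blocks and replace each block by its mean
    return frame[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))
```

Each step down in mode quarters the pixel count to read out, which is why the scaled modes cut power when the scene is static.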
A 100 Mfps image sensor for biological applications
NASA Astrophysics Data System (ADS)
Etoh, T. Goji; Shimonomura, Kazuhiro; Nguyen, Anh Quang; Takehara, Kosei; Kamakura, Yoshinari; Goetschalckx, Paul; Haspeslagh, Luc; De Moor, Piet; Dao, Vu Truong Son; Nguyen, Hoang Dung; Hayashi, Naoki; Mitsui, Yo; Inumaru, Hideo
2018-02-01
Two ultrahigh-speed CCD image sensors with different characteristics were fabricated for application in advanced scientific measurement apparatuses. The sensors are BSI MCG (backside-illuminated multi-collection-gate) image sensors with multiple collection gates around the center of the front side of each pixel, placed like the petals of a flower. One has five collection gates and one drain gate at the center, and can capture five consecutive frames at 100 Mfps with a pixel count of about 600 kpixels (512 x 576 x 2 pixels). In-pixel signal accumulation is possible for repetitive image capture of reproducible events. The target application is FLIM. The other is equipped with four collection gates, each connected to an in-situ CCD memory with 305 elements, which enables capture of 1,220 (4 x 305) consecutive images at 50 Mfps. The CCD memory is folded and looped with the first element connected to the last element, which also enables in-pixel signal accumulation. The sensor is a small test sensor with 32 x 32 pixels. The target applications are imaging TOF MS, pulsed-neutron tomography and dynamic PSP. The paper also briefly explains a theoretical expression for the temporal resolution of silicon image sensors derived by the authors in 2017. It is shown that an image sensor designed based on this theoretical analysis achieves imaging of consecutive frames at a frame interval of 50 ps.
NASA Astrophysics Data System (ADS)
Gwinner, K.; Jaumann, R.; Hauber, E.; Hoffmann, H.; Heipke, C.; Oberst, J.; Neukum, G.; Ansan, V.; Bostelmann, J.; Dumke, A.; Elgner, S.; Erkeling, G.; Fueten, F.; Hiesinger, H.; Hoekzema, N. M.; Kersten, E.; Loizeau, D.; Matz, K.-D.; McGuire, P. C.; Mertens, V.; Michael, G.; Pasewaldt, A.; Pinet, P.; Preusker, F.; Reiss, D.; Roatsch, T.; Schmidt, R.; Scholten, F.; Spiegel, M.; Stesky, R.; Tirsch, D.; van Gasselt, S.; Walter, S.; Wählisch, M.; Willner, K.
2016-07-01
The High Resolution Stereo Camera (HRSC) of ESA's Mars Express is designed to map and investigate the topography of Mars. The camera, in particular its Super Resolution Channel (SRC), also obtains images of Phobos and Deimos on a regular basis. As HRSC is a push broom scanning instrument with nine CCD line detectors mounted in parallel, its unique feature is the ability to obtain along-track stereo images and four colors during a single orbital pass. The sub-pixel accuracy of 3D points derived from stereo analysis allows producing DTMs with grid size of up to 50 m and height accuracy on the order of one image ground pixel and better, as well as corresponding orthoimages. Such data products have been produced systematically for approximately 40% of the surface of Mars so far, while global shape models and a near-global orthoimage mosaic could be produced for Phobos. HRSC is also unique because it bridges between laser altimetry and topography data derived from other stereo imaging instruments, and provides geodetic reference data and geological context to a variety of non-stereo datasets. This paper, in addition to an overview of the status and evolution of the experiment, provides a review of relevant methods applied for 3D reconstruction and mapping, and respective achievements. We will also review the methodology of specific approaches to science analysis based on joint analysis of DTM and orthoimage information, or benefitting from high accuracy of co-registration between multiple datasets, such as studies using multi-temporal or multi-angular observations, from the fields of geomorphology, structural geology, compositional mapping, and atmospheric science. Related exemplary results from analysis of HRSC data will be discussed. After 10 years of operation, HRSC covered about 70% of the surface by panchromatic images at 10-20 m/pixel, and about 97% at better than 100 m/pixel. 
As the areas with contiguous coverage by stereo data are increasingly abundant, we also present original data related to the analysis of image blocks and address methodology aspects of newly established procedures for the generation of multi-orbit DTMs and image mosaics. The current results suggest that multi-orbit DTMs with grid spacing of 50 m can be feasible for large parts of the surface, as well as brightness-adjusted image mosaics with co-registration accuracy of adjacent strips on the order of one pixel, and at the highest image resolution available. These characteristics are demonstrated by regional multi-orbit data products covering the MC-11 (East) quadrangle of Mars, representing the first prototype of a new HRSC data product level.
Second-harmonic generation microscopy of tooth
NASA Astrophysics Data System (ADS)
Kao, Fu-Jen; Wang, Yung-Shun; Huang, Mao-Kuo; Huang, Sheng-Lung; Cheng, Ping C.
2000-07-01
In this study, we have developed a high-performance microscopic system to perform second-harmonic (SH) imaging of a tooth. The high sensitivity of the system allows an acquisition time of 300 seconds per frame at a resolution of 512x512 pixels. The surface SH signal generated from the tooth is carefully verified through micro-spectroscopy, polarization rotation, and wavelength tuning, ensuring the authenticity of the signal. The enamel that encapsulates the dentine is known to possess highly ordered structures. The anisotropy of these structures is revealed in the microscopic SH images of the tooth sample.
A Region-Based Multi-Scale Approach for Object-Based Image Analysis
NASA Astrophysics Data System (ADS)
Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H.
2016-06-01
Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of individual pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral but also spatial and textural features. Although several parameters (scale, shape, compactness and band weights) must be set by the analyst, the scale parameter stands out as the most important one in the segmentation process. Estimating the optimal scale parameter, which depends on image resolution, image object size and the characteristics of the study area, is crucial for increasing classification accuracy. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions; for this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest-neighbour classifier was applied in all segmentation experiments, and equal numbers of pixels were randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of the classified images showed that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA, with the difference reaching 10% in terms of overall accuracy.
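The LV-RoC scale-selection idea can be sketched as follows: the rate of change of local variance across increasing scale values is computed, and its local maxima flag candidate scales. The LV values below are hypothetical, and `lv_roc` is an illustrative reimplementation of the principle, not the ESP-2 tool.

```python
import numpy as np

def lv_roc(local_variance):
    """Rate of change (percent) of local variance across increasing scale
    parameters; peaks in the RoC curve flag candidate segmentation scales."""
    lv = np.asarray(local_variance, float)
    roc = 100.0 * (lv[1:] - lv[:-1]) / lv[:-1]
    # Candidate scales: interior local maxima of the RoC curve.
    peaks = [i + 1 for i in range(1, len(roc) - 1)
             if roc[i] > roc[i - 1] and roc[i] > roc[i + 1]]
    return roc, peaks

# Hypothetical mean local variance per scale (e.g., scales 10, 20, ..., 80).
lv = [4.0, 5.0, 5.2, 6.5, 6.6, 6.7, 7.5, 7.6]
roc, peaks = lv_roc(lv)   # peaks index into the scale list
```

The analyst would then inspect the flagged scales (here the 4th and 7th entries of the scale list) as fine/moderate/coarse candidates.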
Terahertz imaging with compressive sensing
NASA Astrophysics Data System (ADS)
Chan, Wai Lam
Most existing terahertz imaging systems are limited by slow image acquisition due to mechanical raster scanning. Other systems using focal plane detector arrays can acquire images in real time, but are either too costly or limited by low sensitivity in the terahertz frequency range. To design faster and more cost-effective terahertz imaging systems, the first part of this thesis proposes two new terahertz imaging schemes based on compressive sensing (CS). Both schemes can acquire amplitude and phase-contrast images efficiently with a single-pixel detector, thanks to powerful CS algorithms that enable the reconstruction of N-by-N pixel images with far fewer than N² measurements. The first approach, CS Fourier imaging, successfully reconstructs a 64x64 image of an object with 1.4 mm pixel size using a randomly chosen subset of the 4096 pixels that define the image in the Fourier plane. Only about 12% of the pixels are required for reassembling the image of a selected object, equivalent to a 2/3 reduction in acquisition time. The second approach is single-pixel CS imaging, which uses a series of random masks for acquisition. Besides speeding up acquisition through a reduced number of measurements, the single-pixel system can further cut acquisition time by electrical or optical spatial modulation of the random patterns. To switch between random patterns at high speed in the single-pixel imaging system, the second part of this thesis implements a multi-pixel electrical spatial modulator for terahertz beams using active terahertz metamaterials. The first generation of this device consists of a 4x4 pixel array, where each pixel is an array of sub-wavelength-sized split-ring resonator elements fabricated on a semiconductor substrate and is independently controlled by an external voltage. The spatial modulator has a uniform modulation depth of around 40 percent across all pixels, and negligible crosstalk, at the resonant frequency.
The second-generation spatial terahertz modulator, also based on metamaterials but with a higher resolution (32x32), is under development. An FPGA-based circuit is designed to control the large number of modulator pixels. Once fully implemented, this second-generation device will enable fast terahertz imaging with both pulsed and continuous-wave terahertz sources.
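The single-pixel CS measurement model can be illustrated with a tiny sketch: each mask applied to the scene yields one detector reading, and a sparse scene is recovered from the readings. The mask matrix, scene, and the orthogonal-matching-pursuit solver below are illustrative stand-ins for the thesis's masks and reconstruction algorithms.

```python
import numpy as np

# Each column is a distinct 4-bit binary code (values 1..8), so every scene
# pixel leaves a unique fingerprint across the four single-pixel readings.
M = np.array([[(j + 1) >> b & 1 for j in range(8)] for b in range(4)], float)

x = np.zeros(8)      # flattened scene: one bright pixel (1-sparse)
x[4] = 0.7
y = M @ x            # four detector readings, fewer than the 8 pixels

def omp(M, y, k):
    """Orthogonal matching pursuit: greedy k-sparse recovery."""
    residual, support = y.copy(), []
    norms = np.linalg.norm(M, axis=0)
    for _ in range(k):
        corr = np.abs(M.T @ residual) / norms   # normalized correlations
        corr[support] = 0.0
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(M[:, support], y, rcond=None)
        residual = y - M[:, support] @ coef
    x_hat = np.zeros(M.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(M, y, k=1)
```

Here 4 readings suffice to recover the 8-pixel scene exactly because the scene is sparse, which is the premise that lets the thesis's systems beat raster scanning.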
A Spectralon BRF Data Base for MISR Calibration Application
NASA Technical Reports Server (NTRS)
Bruegge, C.; Chrien, N.; Haner, D.
1999-01-01
The Multi-angle Imaging SpectroRadiometer (MISR) is an Earth observing sensor which will provide global retrievals of aerosols, clouds, and land surface parameters. Instrument specifications require high accuracy absolute calibration, as well as accurate camera-to-camera, band-to-band and pixel-to-pixel relative response determinations.
Guidelines for Microplate Selection in High Content Imaging.
Trask, Oscar J
2018-01-01
Since the inception of commercialized automated high content screening (HCS) imaging devices in the mid to late 1990s, the adoption of media vessels typically used to house and contain biological specimens for interrogation has transitioned from microscope slides and petri dishes into multi-well microtiter plates called microplates. The early 96- and 384-well microplates commonly used in other high-throughput screening (HTS) technology applications were often not designed for optical imaging. Since then, modifications and the use of next-generation materials with improved optical clarity have enhanced the quality of captured images, reduced autofocusing failures, and empowered the use of higher power magnification objectives to resolve fine detailed measurements at the subcellular pixel level. The plethora of microplates and their applications requires practitioners of high content imaging (HCI) to be especially diligent in the selection and adoption of the best plates for running longitudinal studies or larger screening campaigns. While the highest priority in experimental design is the selection of the biological model, the choice of microplate can alter the biological response and ultimately may change the experimental outcome. This chapter will provide readers with background, troubleshooting guidelines, and considerations for choosing an appropriate microplate.
NASA Astrophysics Data System (ADS)
Preusker, Frank; Stark, Alexander; Oberst, Jürgen; Matz, Klaus-Dieter; Gwinner, Klaus; Roatsch, Thomas; Watters, Thomas R.
2017-08-01
We selected approximately 10,500 narrow-angle camera (NAC) and wide-angle camera (WAC) images of Mercury acquired from orbit by MESSENGER's Mercury Dual Imaging System (MDIS) with an average resolution of 150 m/pixel to compute a digital terrain model (DTM) for the H6 (Kuiper) quadrangle, which extends from 22.5°S to 22.5°N and from 288.0°E to 360.0°E. From the images, we identified about 21,100 stereo image combinations consisting of at least three images each. We applied sparse multi-image matching to derive approximately 250,000 tie-points representing 50,000 ground points. We used the tie-points to carry out a photogrammetric block adjustment, which improves the image pointing and the accuracy of the ground point positions in three dimensions from about 850 m to approximately 55 m. We then applied high-density (pixel-by-pixel) multi-image matching to derive about 45 billion tie-points. Benefitting from improved image pointing data achieved through photogrammetric block adjustment, we computed about 6.3 billion surface points. By interpolation, we generated a DTM with a lateral spacing of 221.7 m/pixel (192 pixels per degree) and a vertical accuracy of about 30 m. The comparison of the DTM with Mercury Laser Altimeter (MLA) profiles obtained over four years of MESSENGER orbital operations reveals that the DTM is geometrically very rigid. It may be used as a reference to identify MLA outliers (e.g., when MLA operated at its ranging limit) or to map offsets of laser altimeter tracks, presumably caused by residual spacecraft orbit and attitude errors. After the relevant outlier removals and corrections, MLA profiles show excellent agreement with topographic profiles from H6, with a root mean square height difference of only 88 m.
Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering
Mars, Kamel; Lioe, De Xing; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro; Hashimoto, Mamoru
2017-01-01
Raman imaging eliminates the need for staining procedures, providing label-free imaging to study biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speed and hyperspectral imaging. However, there has been a problem of lack of detectors suitable for MHz modulation rate parallel detection, detecting multiple small SRS signals while eliminating extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering that is capable of obtaining the difference of Stokes-on and Stokes-off signal at modulation frequency of 20 MHz in the pixel before reading out. The generated small SRS signal is extracted and amplified in a pixel using a high-speed and large area lateral electric field charge modulator (LEFM) employing two-step ion implantation and an in-pixel pair of low-pass filter, a sample and hold circuit and a switched capacitor integrator using a fully differential amplifier. A prototype chip is fabricated using 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples are successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system. PMID:29120358
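The in-pixel lock-in principle (taking the difference of Stokes-on and Stokes-off samples at the modulation frequency, rejecting the large direct-laser offset) can be modelled in software. The sample rate, offset, and SRS amplitude below are illustrative; the chip performs this demodulation in analog circuitry inside each pixel.

```python
import numpy as np

fs, fmod = 400e6, 20e6            # model sample rate and 20 MHz modulation
t = np.arange(4000) / fs
stokes_on = (np.floor(2 * fmod * t) % 2 == 0)   # Stokes on/off square wave

offset, srs = 1000.0, 0.5         # huge direct-laser offset, tiny SRS loss
# Detector signal: when the Stokes beam is on, stimulated Raman loss
# removes a small amount from the large constant offset.
signal = offset - srs * stokes_on

# Lock-in demodulation: average the on and off phases separately and
# subtract; the common offset cancels, leaving only the SRS signal.
demod = signal[~stokes_on].mean() - signal[stokes_on].mean()
```

This is why the difference can be extracted before readout: the 20 MHz on/off averaging suppresses the offset that would otherwise saturate the ADC.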
Contrast improvement of terahertz images of thin histopathologic sections
Formanek, Florian; Brun, Marc-Aurèle; Yasuda, Akio
2011-01-01
We present terahertz images of 10 μm thick histopathologic sections obtained in reflection geometry with a time-domain spectrometer, and demonstrate improved contrast for sections measured in paraffin with water. Automated segmentation is applied to the complex refractive index data to generate clustered terahertz images distinguishing cancer from healthy tissues. The degree of classification of pixels is then evaluated using registered visible microscope images. Principal component analysis and propagation simulations are employed to investigate the origin and the gain of image contrast. PMID:21326635
Neural Network for Image-to-Image Control of Optical Tweezers
NASA Technical Reports Server (NTRS)
Decker, Arthur J.; Anderson, Robert C.; Weiland, Kenneth E.; Wrbanek, Susan Y.
2004-01-01
A method is discussed for using neural networks to control optical tweezers. Neural-net outputs are combined with scaling and tiling to generate 480 by 480-pixel control patterns for a spatial light modulator (SLM). The SLM can be combined in various ways with a microscope to create movable tweezers traps with controllable profiles. The neural nets are intended to respond to scattered light from carbon and silicon carbide nanotube sensors. The nanotube sensors are to be held by the traps for manipulation and calibration. Scaling and tiling allow the 100 by 100-pixel maximum resolution of the neural-net software to be applied in stages to exploit the full 480 by 480-pixel resolution of the SLM. One of these stages is intended to create sensitive null detectors for detecting variations in the scattered light from the nanotube sensors.
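The scaling step that maps a coarse neural-net output onto the full SLM grid can be illustrated with a nearest-neighbour upscale. This is a simplified stand-in: a hypothetical 96x96 pattern mapped 1:5 onto a 480x480 grid, whereas the paper applies scaling and tiling in stages with up to 100x100 net resolution.

```python
import numpy as np

def scale_to_slm(pattern, slm_size=480):
    """Nearest-neighbour upscale of a coarse control pattern onto the SLM
    grid (illustrative stand-in for the staged scaling/tiling above)."""
    factor = slm_size // pattern.shape[0]
    return np.repeat(np.repeat(pattern, factor, axis=0), factor, axis=1)

# Hypothetical coarse net output: a 16x16 bright patch defining a trap.
net_out = np.zeros((96, 96))
net_out[40:56, 40:56] = 1.0
slm = scale_to_slm(net_out)   # each coarse cell becomes a 5x5 SLM block
```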
Radiological and histopathological evaluation of experimentally-induced periapical lesion in rats
TEIXEIRA, Renata Cordeiro; RUBIRA, Cassia Maria Fischer; ASSIS, Gerson Francisco; LAURIS, José Roberto Pereira; CESTARI, Tania Mary; RUBIRA-BULLEN, Izabel Regina Fischer
2011-01-01
Objective: This study evaluated experimentally-induced periapical bone loss sites using digital radiographic and histopathologic parameters. Material and Methods: Twenty-seven Wistar rats were submitted to coronal opening of their mandibular right first molars. They were radiographed at 2, 15 and 30 days after the operative procedure using two digital radiographic storage phosphor plates (Digora®). The images were analyzed by creating a region of interest at the periapical region of each tooth (ImageJ) and registering the corresponding pixel values. After sacrifice, the specimens were submitted to microscopic analysis to confirm the pulpal and periapical status of each tooth. Results: There was a statistically significant difference in pixel values between the control and test sides in all experimental periods (two-way ANOVA; p<0.05). Conclusions: The microscopic analysis confirmed that periapical disease developed during the experimental periods, evolving from pulpal necrosis to periapical bone resorption. PMID:21922123
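The region-of-interest pixel-value measurement described in the Methods can be sketched as follows. This is a minimal analogue of the ImageJ workflow; the image, ROI coordinates, and grey values are invented for illustration.

```python
import numpy as np

def roi_mean_pixel_value(image, top, left, height, width):
    """Mean grey value inside a rectangular region of interest,
    analogous to the ImageJ measurement over the periapical region."""
    return image[top:top + height, left:left + width].mean()

# Toy radiograph: bone at grey ~120, a darker lesion patch at ~60.
img = np.full((10, 10), 120.0)
img[4:7, 4:7] = 60.0
lesion = roi_mean_pixel_value(img, 4, 4, 3, 3)     # over the lesion
control = roi_mean_pixel_value(img, 0, 0, 3, 3)    # contralateral control
```

Bone resorption lowers radiographic density, so the lesion ROI registers lower pixel values than the control side, which is the quantity compared across the 2-, 15- and 30-day groups.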
Jungmann, Julia H; Heeren, Ron M A
2013-01-15
Instrumental developments for imaging and individual particle detection for biomolecular mass spectrometry (imaging) and fundamental atomic and molecular physics studies are reviewed. Ion-counting detectors, array detection systems and high mass detectors for mass spectrometry (imaging) are treated. State-of-the-art detection systems for multi-dimensional ion, electron and photon detection are highlighted. Their application and performance in three different imaging modes--integrated, selected and spectral image detection--are described. Electro-optical and microchannel-plate-based systems are contrasted. The analytical capabilities of solid-state pixel detectors--both charge coupled device (CCD) and complementary metal oxide semiconductor (CMOS) chips--are introduced. The Medipix/Timepix detector family is described as an example of a CMOS hybrid active pixel sensor. Alternative imaging methods for particle detection and their potential for future applications are investigated. Copyright © 2012 John Wiley & Sons, Ltd.
Imaging and identification of waterborne parasites using a chip-scale microscope.
Lee, Seung Ah; Erath, Jessey; Zheng, Guoan; Ou, Xiaoze; Willems, Phil; Eichinger, Daniel; Rodriguez, Ana; Yang, Changhuei
2014-01-01
We demonstrate a compact portable imaging system for the detection of waterborne parasites in resource-limited settings. The previously demonstrated sub-pixel sweeping microscopy (SPSM) technique is a lens-less imaging scheme that can achieve high-resolution (<1 µm) bright-field imaging over a large field-of-view (5.7 mm×4.3 mm). A chip-scale microscope system, based on the SPSM technique, can be used for automated and high-throughput imaging of protozoan parasite cysts for the effective diagnosis of waterborne enteric parasite infection. We successfully imaged and identified three major types of enteric parasite cysts, Giardia, Cryptosporidium, and Entamoeba, which can be found in fecal samples from infected patients. We believe that this compact imaging system can serve well as a diagnostic device in challenging environments, such as rural settings or emergency outbreaks.
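The pixel super-resolution principle behind SPSM can be illustrated in its idealized form: low-resolution frames captured at known sub-pixel offsets are interleaved back onto a fine grid. Real SPSM reconstruction must also account for the pixel aperture and noise; this sketch assumes perfect, noise-free sub-pixel sampling.

```python
import numpy as np

factor = 2                               # super-resolution factor
scene = np.arange(64.0).reshape(8, 8)    # "true" high-resolution scene

# Simulated acquisition: each low-res frame samples the scene at a
# different known sub-pixel (dy, dx) offset as the source is swept.
frames = {(dy, dx): scene[dy::factor, dx::factor]
          for dy in range(factor) for dx in range(factor)}

# Reconstruction: interleave the frames back onto the fine grid.
recon = np.zeros_like(scene)
for (dy, dx), f in frames.items():
    recon[dy::factor, dx::factor] = f
```

Four 4x4 frames jointly determine the 8x8 scene, which is how a lens-less sensor with large physical pixels can reach sub-micron resolution.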
NASA Astrophysics Data System (ADS)
Enguita, Jose M.; Álvarez, Ignacio; González, Rafael C.; Cancelas, Jose A.
2018-01-01
The problem of restoration of a high-resolution image from several degraded versions of the same scene (deconvolution) has been receiving attention in the last years in fields such as optics and computer vision. Deconvolution methods are usually based on sets of images taken with small (sub-pixel) displacements or slightly different focus. Techniques based on sets of images obtained with different point-spread-functions (PSFs) engineered by an optical system are less popular and mostly restricted to microscopic systems, where a spot of light is projected onto the sample under investigation, which is then scanned point-by-point. In this paper, we use the effect of conical diffraction to shape the PSFs in a full-field macroscopic imaging system. We describe a series of simulations and real experiments that help to evaluate the possibilities of the system, showing the enhancement in image contrast even at frequencies that are strongly filtered by the lens transfer function or when sampling near the Nyquist frequency. Although results are preliminary and there is room to optimize the prototype, the idea shows promise to overcome the limitations of the image sensor technology in many fields, such as forensics, medical, satellite, or scientific imaging.
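Restoration from several images of the same scene blurred by different known PSFs can be sketched as a joint Wiener-style inversion in the Fourier domain: frequencies suppressed by one PSF are recovered from another. This is a generic multi-PSF deconvolution sketch, not the authors' conical-diffraction method; the PSFs and regularization constant are assumptions.

```python
import numpy as np

def multi_psf_restore(observations, psfs, eps=1e-3):
    """Joint Wiener-style restoration from several blurred versions of one
    scene, each with a different known PSF (circular convolution model;
    `eps` regularizes frequencies no PSF transmits)."""
    num, den = 0.0, eps
    for y, h in zip(observations, psfs):
        H = np.fft.fft2(h, s=y.shape)
        num = num + np.conj(H) * np.fft.fft2(y)
        den = den + np.abs(H) ** 2
    return np.real(np.fft.ifft2(num / den))

rng = np.random.default_rng(1)
x = rng.random((16, 16))
# Two engineered PSFs: horizontal and vertical 2-tap blurs; each kills a
# different set of frequencies, so together they cover almost all of them.
psfs = [np.array([[0.5, 0.5]]), np.array([[0.5], [0.5]])]
obs = [np.real(np.fft.ifft2(np.fft.fft2(h, s=x.shape) * np.fft.fft2(x)))
       for h in psfs]
x_hat = multi_psf_restore(obs, psfs, eps=1e-6)
```

Either observation alone loses a whole line of frequencies at Nyquist; combining the two leaves only the single frequency both PSFs null, which is the contrast-recovery effect the paper exploits.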
Multi-pass transmission electron microscopy
Juffmann, Thomas; Koppell, Stewart A.; Klopfer, Brannon B.; ...
2017-05-10
Feynman once asked physicists to build better electron microscopes to be able to watch biology at work. While electron microscopes can now provide atomic resolution, electron beam induced specimen damage precludes high resolution imaging of sensitive materials, such as single proteins or polymers. Here, we use simulations to show that an electron microscope based on a multi-pass measurement protocol enables imaging of single proteins, without averaging structures over multiple images. While we demonstrate the method for particular imaging targets, the approach is broadly applicable and is expected to improve resolution and sensitivity for a range of electron microscopy imaging modalities, including, for example, scanning and spectroscopic techniques. The approach implements a quantum mechanically optimal strategy which under idealized conditions can be considered interaction-free.
NASA Astrophysics Data System (ADS)
Huang, Xin; Chen, Huijun; Gong, Jianya
2018-01-01
Spaceborne multi-angle images with a high-resolution are capable of simultaneously providing spatial details and three-dimensional (3D) information to support detailed and accurate classification of complex urban scenes. In recent years, satellite-derived digital surface models (DSMs) have been increasingly utilized to provide height information to complement spectral properties for urban classification. However, in such a way, the multi-angle information is not effectively exploited, which is mainly due to the errors and difficulties of the multi-view image matching and the inaccuracy of the generated DSM over complex and dense urban scenes. Therefore, it is still a challenging task to effectively exploit the available angular information from high-resolution multi-angle images. In this paper, we investigate the potential for classifying urban scenes based on local angular properties characterized from high-resolution ZY-3 multi-view images. Specifically, three categories of angular difference features (ADFs) are proposed to describe the angular information at three levels (i.e., pixel, feature, and label levels): (1) ADF-pixel: the angular information is directly extrapolated by pixel comparison between the multi-angle images; (2) ADF-feature: the angular differences are described in the feature domains by comparing the differences between the multi-angle spatial features (e.g., morphological attribute profiles (APs)). (3) ADF-label: label-level angular features are proposed based on a group of urban primitives (e.g., buildings and shadows), in order to describe the specific angular information related to the types of primitive classes. In addition, we utilize spatial-contextual information to refine the multi-level ADF features using superpixel segmentation, for the purpose of alleviating the effects of salt-and-pepper noise and representing the main angular characteristics within a local area. 
The experiments on ZY-3 multi-angle images confirm that the proposed ADF features can effectively improve the accuracy of urban scene classification, with a significant increase in overall accuracy (3.8-11.7%) compared to using the spectral bands alone. Furthermore, the results indicated the superiority of the proposed ADFs in distinguishing between the spectrally similar and complex man-made classes, including roads and various types of buildings (e.g., high buildings, urban villages, and residential apartments).
Lai, S; Wang, J; Jahng, G H
2001-01-01
A new pulse sequence, dubbed FAIR exempting separate T(1) measurement (FAIREST), in which a slice-selective saturation recovery acquisition is added to the standard FAIR (flow-sensitive alternating inversion recovery) scheme, was developed for quantitative perfusion imaging and multi-contrast fMRI. The technique allows clean separation and thus simultaneous assessment of BOLD and perfusion effects, while quantitative cerebral blood flow (CBF) and tissue T(1) values are monitored online. Online CBF maps were obtained using the FAIREST technique, and the measured CBF values were consistent with off-line CBF maps obtained using the FAIR technique in combination with a separate sequence for T(1) measurement. Finger-tapping activation studies were carried out to demonstrate the applicability of FAIREST in a typical fMRI setting for multi-contrast fMRI. The relative CBF and BOLD changes induced by finger tapping were 75.1 +/- 18.3% and 1.8 +/- 0.4%, respectively, and the relative change in oxygen consumption rate was 2.5 +/- 7.7%. Pixel-by-pixel correlation of the T(1) maps with the activation images shows that the mean T(1) value of the CBF activation pixels is close to the T(1) of gray matter, while the mean T(1) value of the BOLD activation pixels is close to the T(1) range of blood and cerebrospinal fluid. Copyright 2001 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Yi, Shengzhen; Zhang, Zhe; Huang, Qiushi; Zhang, Zhong; Wang, Zhanshan; Wei, Lai; Liu, Dongxiao; Cao, Leifeng; Gu, Yuqiu
2018-03-01
Multi-channel Kirkpatrick-Baez (KB) microscopes, which have better resolution and collection efficiency than pinhole cameras, have been widely used in laser inertial confinement fusion to diagnose time evolution of the target implosion. In this study, a tandem multi-channel KB microscope was developed to have sixteen imaging channels with the precise control of spatial resolution and image intervals. This precise control was created using a coarse assembly of mirror pairs with high-accuracy optical prisms, followed by precise adjustment in real-time x-ray imaging experiments. The multilayers coated on the KB mirrors were designed to have substantially the same reflectivity to obtain a uniform brightness of different images for laser-plasma temperature analysis. The study provides a practicable method to achieve the optimum performance of the microscope for future high-resolution applications in inertial confinement fusion experiments.
Fast Image Subtraction Using Multi-cores and GPUs
NASA Astrophysics Data System (ADS)
Hartung, Steven; Shukla, H.
2013-01-01
Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphical processing unit (GPU) technology in a hybrid conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd order spatially-varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of the heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix based application can operate on a single computer, or on an MPI configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.
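The kernel-fitting step at the heart of OIS can be sketched with a spatially constant kernel in the Dirac-delta basis: each kernel pixel is a free coefficient multiplying a shifted copy of the reference image, solved by linear least squares. The full method also fits 2nd-order spatial variation of the kernel, which is omitted in this sketch.

```python
import numpy as np

def ois_kernel(ref, tgt, half=1):
    """Fit a (2*half+1)^2 convolution kernel (Dirac-delta basis, spatially
    constant here) mapping the reference onto the target, by least squares
    over interior pixels. Shifts use circular wrap for simplicity."""
    size = 2 * half + 1
    h, w = ref.shape
    cols = []
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            shifted = np.roll(ref, (dy, dx), axis=(0, 1))
            cols.append(shifted[half:h - half, half:w - half].ravel())
    A = np.stack(cols, axis=1)
    b = tgt[half:h - half, half:w - half].ravel()
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k.reshape(size, size)

rng = np.random.default_rng(2)
ref = rng.random((12, 12))
# Target = reference blurred by a known 3x3 kernel (circular shifts),
# mimicking the seeing change between two epochs.
true_k = np.array([[0, .1, 0], [.1, .6, .1], [0, .1, 0]])
tgt = sum(true_k[dy + 1, dx + 1] * np.roll(ref, (dy, dx), axis=(0, 1))
          for dy in (-1, 0, 1) for dx in (-1, 0, 1))
k = ois_kernel(ref, tgt)
# The difference image tgt - conv(ref, k) then isolates transients.
```

This per-pixel least-squares solve is exactly the kind of work that scales with image size and maps well onto the GPU pipeline the abstract describes.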
First Results of the Athena Microscopic Imager Investigation
NASA Technical Reports Server (NTRS)
Herkenhoff, K.; Squyres, S.; Archinal, B.; Arvidson, R.; Bass, D.; Barrett, J.; Becker, K.; Becker, T.; Bell, J., III; Burr, D.
2004-01-01
The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on an extendable arm, the Instrument Deployment Device (IDD). The MI acquires images at a spatial resolution of 30 microns/pixel over a broad spectral range (400 - 700 nm). The MI uses the same electronics design as the other MER cameras but its optics yield a field of view of 31 x 31 mm across a 1024 x 1024 pixel CCD image. The MI acquires images using only solar or skylight illumination of the target surface. A contact sensor is used to place the MI slightly closer to the target surface than its best focus distance (about 69 mm), allowing concave surfaces to be imaged in good focus. Coarse focusing (approx. 2 mm precision) is achieved by moving the IDD away from a rock target after contact is sensed. The MI optics are protected from the Martian environment by a retractable dust cover. This cover includes a Kapton window that is tinted orange to restrict the spectral bandpass to 500 - 700 nm, allowing crude color information to be obtained by acquiring images with the cover open and closed. The MI science objectives, instrument design and calibration, operation, and data processing were described by Herkenhoff et al. Initial results of the MI experiment on both MER rovers ('Spirit' and 'Opportunity') are described below.
Athena Microscopic Imager investigation
NASA Astrophysics Data System (ADS)
Herkenhoff, K. E.; Squyres, S. W.; Bell, J. F.; Maki, J. N.; Arneson, H. M.; Bertelsen, P.; Brown, D. I.; Collins, S. A.; Dingizian, A.; Elliott, S. T.; Goetz, W.; Hagerott, E. C.; Hayes, A. G.; Johnson, M. J.; Kirk, R. L.; McLennan, S.; Morris, R. V.; Scherr, L. M.; Schwochert, M. A.; Shiraishi, L. R.; Smith, G. H.; Soderblom, L. A.; Sohl-Dickstein, J. N.; Wadsworth, M. V.
2003-11-01
The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on the end of an extendable instrument arm, the Instrument Deployment Device (IDD). The MI was designed to acquire images at a spatial resolution of 30 microns/pixel over a broad spectral range (400-700 nm). The MI uses the same electronics design as the other MER cameras but has optics that yield a field of view of 31 × 31 mm across a 1024 × 1024 pixel CCD image. The MI acquires images using only solar or skylight illumination of the target surface. A contact sensor is used to place the MI slightly closer to the target surface than its best focus distance (about 66 mm), allowing concave surfaces to be imaged in good focus. Coarse focusing (~2 mm precision) is achieved by moving the IDD away from a rock target after the contact sensor has been activated. The MI optics are protected from the Martian environment by a retractable dust cover. The dust cover includes a Kapton window that is tinted orange to restrict the spectral bandpass to 500-700 nm, allowing color information to be obtained by taking images with the dust cover open and closed. MI data will be used to place other MER instrument data in context and to aid in petrologic and geologic interpretations of rocks and soils on Mars.
A Novel Multi-Aperture Based Sun Sensor Based on a Fast Multi-Point MEANSHIFT (FMMS) Algorithm
You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei
2011-01-01
With the current widespread interest in the development and application of micro/nanosatellites, a small, high-accuracy satellite attitude determination system is needed, because the star trackers widely used on large satellites are too large and heavy for installation on micro/nanosatellites. A Sun sensor combined with a magnetometer has proven to be a better alternative, but conventional sun sensors have low accuracy and cannot meet the requirements of micro/nanosatellite attitude determination systems, so the development of a small, highly reliable, high-accuracy sun sensor is significant. This paper presents a multi-aperture sun sensor composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS detector placed at a fixed distance below the mask. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When sunlight illuminates the sensor, a sun spot array image is formed on the APS detector, and the sun angles are then derived by analyzing the aperture image locations on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image reaches 0.01 pixels without increased weight or power consumption, even when missing apertures and bad pixels appear on the detector due to device aging and operation in a harsh space environment, whereas the pointing accuracy of a single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770
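The abstract does not spell out the FMMS algorithm itself; the following is a minimal single-spot mean-shift centroid sketch of the underlying idea, iteratively moving a window to the intensity-weighted mean of the pixels it covers. All names and parameters (`win`, `n_iter`, `tol`) are assumptions, the spot is assumed to lie away from the image border, and the real FMMS processes many apertures simultaneously.

```python
import numpy as np

def mean_shift_centroid(img, start, win=7, n_iter=20, tol=1e-6):
    """Refine a spot centroid by mean shift: recenter a win x win window
    on the intensity-weighted mean until the shift falls below tol.
    Returns sub-pixel (row, col) coordinates."""
    y, x = float(start[0]), float(start[1])
    r = win // 2
    for _ in range(n_iter):
        yy, xx = np.mgrid[int(round(y)) - r:int(round(y)) + r + 1,
                          int(round(x)) - r:int(round(x)) + r + 1]
        w = img[yy, xx]
        s = w.sum()
        if s == 0:
            break
        ny, nx = (w * yy).sum() / s, (w * xx).sum() / s
        done = abs(ny - y) < tol and abs(nx - x) < tol
        y, x = ny, nx
        if done:
            break
    return y, x
```

Running one such refinement per aperture and averaging over the 36 spots is what lets a multi-aperture design tolerate missing apertures and bad pixels.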
Ultra-high spatial resolution multi-energy CT using photon counting detector technology
NASA Astrophysics Data System (ADS)
Leng, S.; Gutjahr, R.; Ferrero, A.; Kappler, S.; Henning, A.; Halaweish, A.; Zhou, W.; Montoya, J.; McCollough, C.
2017-03-01
Two ultra-high-resolution imaging modes, each with two energy thresholds and referred to as sharp and UHR, were implemented on a research, whole-body photon-counting-detector (PCD) CT scanner. The UHR mode has a pixel size of 0.25 mm at iso-center for both energy thresholds, with a collimation of 32 × 0.25 mm. The sharp mode has a 0.25 mm pixel for the low-energy threshold and 0.5 mm for the high-energy threshold, with a collimation of 48 × 0.25 mm. Kidney stones with mixed mineral composition and lung nodules with different shapes were scanned using both modes, and with the standard imaging mode, referred to as macro mode (0.5 mm pixel and 32 × 0.5 mm collimation). Evaluation and comparison of the three modes focused on the ability to accurately delineate anatomic structures using the high-spatial-resolution capability and the ability to quantify stone composition using the multi-energy capability. The low-energy threshold images of the sharp and UHR modes showed better shape and texture information due to the higher spatial resolution achieved, although noise was also higher. No noticeable benefit was shown in multi-energy analysis using UHR compared to standard resolution (macro mode) when standard doses were used, due to excessive noise in the higher-resolution images. However, UHR scans at higher dose showed improvement in multi-energy analysis over macro mode at regular dose. To fully take advantage of the higher spatial resolution in multi-energy analysis, either increased radiation dose or application of noise reduction techniques is needed.
NASA Astrophysics Data System (ADS)
Li, Jing; Xie, Weixin; Pei, Jihong
2018-03-01
Sea-land segmentation is one of the key technologies for sea target detection in remote sensing images. Existing algorithms suffer from low accuracy, low universality and poor automation. This paper puts forward a sea-land segmentation algorithm based on multi-feature fusion for large-field remote sensing images that also removes islands. Firstly, the coastline data are extracted and all land areas are labeled using the geographic information in the large-field remote sensing image. Secondly, three features (local entropy, local texture and local gradient mean) are extracted in the sea-land border area and combined into a 3D feature vector. A multi-Gaussian model is then adopted to describe the 3D feature vectors of the sea background near the coastline. Based on this multi-Gaussian sea background model, the sea and land pixels near the coastline are classified more precisely. Finally, the coarse and fine segmentation results are fused to obtain an accurate sea-land segmentation. Subjective visual comparison of the experimental results shows that the proposed method has high segmentation accuracy, wide applicability and strong anti-disturbance ability.
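The multi-Gaussian background model can be illustrated with a single-component sketch: fit a Gaussian to 3D feature vectors sampled from known sea pixels, then classify new pixels by Mahalanobis distance. The threshold value and function names are illustrative assumptions, not the paper's implementation, which uses multiple Gaussian components.

```python
import numpy as np

def fit_gaussian(features):
    """Fit one multivariate Gaussian to 3D feature vectors
    (local entropy, local texture, local gradient mean) sampled
    from pixels known to be sea."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    return mu, cov

def is_sea(x, mu, cov, thresh=3.0):
    """Label a pixel as sea if its feature vector lies within `thresh`
    Mahalanobis units of the sea background model (assumed cutoff)."""
    d2 = (x - mu) @ np.linalg.inv(cov) @ (x - mu)
    return bool(np.sqrt(d2) < thresh)
```

A full multi-Gaussian model would fit several such components and take the minimum distance over components before thresholding.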
Ogawa, Shinpei; Kimata, Masafumi
2017-01-01
Wavelength- or polarization-selective thermal infrared (IR) detectors are promising for various novel applications such as fire detection, gas analysis, multi-color imaging, multi-channel detectors, recognition of artificial objects in a natural environment, and facial recognition. However, these functions require additional filters or polarizers, which leads to high cost and technical difficulties related to integration of many different pixels in an array format. Plasmonic metamaterial absorbers (PMAs) can impart wavelength or polarization selectivity to conventional thermal IR detectors simply by controlling the surface geometry of the absorbers to produce surface plasmon resonances at designed wavelengths or polarizations. This enables integration of many different pixels in an array format without any filters or polarizers. We review our recent advances in wavelength- and polarization-selective thermal IR sensors using PMAs for multi-color or polarimetric imaging. The absorption mechanism defined by the surface structures is discussed for three types of PMAs—periodic crystals, metal-insulator-metal and mushroom-type PMAs—to demonstrate appropriate applications. Our wavelength- or polarization-selective uncooled IR sensors using various PMAs and multi-color image sensors are then described. Finally, high-performance mushroom-type PMAs are investigated. These advanced functional thermal IR detectors with wavelength or polarization selectivity will provide great benefits for a wide range of applications. PMID:28772855
Ogawa, Shinpei; Kimata, Masafumi
2017-05-04
Wavelength- or polarization-selective thermal infrared (IR) detectors are promising for various novel applications such as fire detection, gas analysis, multi-color imaging, multi-channel detectors, recognition of artificial objects in a natural environment, and facial recognition. However, these functions require additional filters or polarizers, which leads to high cost and technical difficulties related to integration of many different pixels in an array format. Plasmonic metamaterial absorbers (PMAs) can impart wavelength or polarization selectivity to conventional thermal IR detectors simply by controlling the surface geometry of the absorbers to produce surface plasmon resonances at designed wavelengths or polarizations. This enables integration of many different pixels in an array format without any filters or polarizers. We review our recent advances in wavelength- and polarization-selective thermal IR sensors using PMAs for multi-color or polarimetric imaging. The absorption mechanism defined by the surface structures is discussed for three types of PMAs-periodic crystals, metal-insulator-metal and mushroom-type PMAs-to demonstrate appropriate applications. Our wavelength- or polarization-selective uncooled IR sensors using various PMAs and multi-color image sensors are then described. Finally, high-performance mushroom-type PMAs are investigated. These advanced functional thermal IR detectors with wavelength or polarization selectivity will provide great benefits for a wide range of applications.
Nagoshi, Masayasu; Aoyama, Tomohiro; Sato, Kaoru
2013-01-01
Secondary electron microscope (SEM) images were obtained for practical materials using low primary electron energies and an in-lens annular detector while varying the negative bias voltage supplied to a grid placed in front of the detector. The kinetic-energy distribution of the detected electrons was evaluated from the gradient of the bias-energy dependence of image brightness. The distribution divides into two main parts at about 500 V, with high brightness in the low-energy region and low brightness in the high-energy region, and differs among surface regions of different composition and topography. The combination of the negative grid bias and pixel-by-pixel image subtraction provides band-pass filtered images and extracts material and topographic information from the specimen surfaces. Copyright © 2012 Elsevier B.V. All rights reserved.
Star sub-pixel centroid calculation based on multi-step minimum energy difference method
NASA Astrophysics Data System (ADS)
Wang, Duo; Han, YanLi; Sun, Tengfei
2013-09-01
The centroid of a star plays a vital role in celestial navigation. Star images acquired during daytime suffer from a strong sky background and therefore a low SNR; the star targets are nearly submerged in the background, which makes centroid localization difficult. Traditional methods, such as the moment method and weighted centroid calculation, are simple but have large errors, especially at low SNR; the Gaussian fitting method has high positioning accuracy but is computationally complex. Based on an analysis of the energy distribution in star images, a localization method for star target centroids based on a multi-step minimum energy difference is proposed. The method uses linear superposition to narrow the centroid area and, within that narrowed area, applies a fixed number of interpolations to subdivide the pixels. It then exploits the symmetry of the stellar energy distribution to locate the centroid: assuming the current pixel is the centroid, it computes the difference between the sums of energy in symmetric directions (here, transverse and longitudinal) over an equal step length on either side (chosen according to conditions; this paper uses 9), and takes the position where the minimum difference appears as the centroid along that direction, repeating for the other directions. Validation on simulated star images and comparison with several traditional methods show that the positioning accuracy of the method reaches 0.001 pixel and that it performs well at low SNR. The method was also applied to a star map acquired at a fixed observation site during daytime in the near-infrared band; comparison of the results with the known positions of the star shows that the multi-step minimum energy difference method achieves a better effect.
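A one-axis sketch of the minimum-energy-difference idea follows, using the paper's step length of 9 as the default. Integer-pixel positions only; the interpolation step that yields sub-pixel (0.001 pixel) accuracy is omitted, and the function name is illustrative.

```python
import numpy as np

def energy_diff_centroid_1d(profile, step=9):
    """Find the symmetry point of a 1-D intensity profile: for each
    candidate position, sum `step` samples on either side and keep the
    position where the two sums are most nearly equal.  A symmetric
    stellar energy distribution minimizes this difference at its centre."""
    n = len(profile)
    best, best_diff = None, np.inf
    for i in range(step, n - step):
        left = profile[i - step:i].sum()
        right = profile[i + 1:i + step + 1].sum()
        d = abs(left - right)
        if d < best_diff:
            best, best_diff = i, d
    return best
```

Running this along both the transverse and longitudinal directions gives the 2-D centroid described above.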
Automated Geo/Co-Registration of Multi-Temporal Very-High-Resolution Imagery.
Han, Youkyung; Oh, Jaehong
2018-05-17
For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images should be conducted. However, applying well-known matching methods, such as scale-invariant feature transform and speeded up robust features for VHR multi-temporal images, has limitations. First, they cannot be used for matching an optical image to heterogeneous non-optical data for georegistration. Second, they produce a local misalignment induced by differences in acquisition conditions, such as acquisition platform stability, the sensor's off-nadir angle, and relief displacement of the considered scene. Therefore, this study addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. During the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. During the second step, a local analysis of registration noise pixels extracted between the multi-temporal images that have been mapped to the map coordinates is conducted to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that enables minimization of the local misalignment existing among the images. Experiments conducted on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework, showing that the georegistration performance resulted in an approximately pixel-level accuracy for most of the scenes, and the co-registration performance further improved the results among all combinations of the georegistered Kompsat-3 image pairs by increasing the calculated cross-correlation values.
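As a simplified stand-in for the non-rigid transformation built from the corresponding points, the sketch below estimates an affine transform from CPs by least squares. The actual framework uses a non-rigid model precisely because a global affine cannot remove local misalignment; function names are illustrative.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform A mapping src -> dst,
    solving dst ~ A @ [x, y, 1] over all corresponding points."""
    n = src.shape[0]
    G = np.hstack([src, np.ones((n, 1))])
    A, *_ = np.linalg.lstsq(G, dst, rcond=None)
    return A.T  # shape (2, 3)

def apply_affine(A, pts):
    """Apply a 2x3 affine transform to an (n, 2) point array."""
    return pts @ A[:, :2].T + A[:, 2]
```

A non-rigid model (e.g. a piecewise or thin-plate warp) generalizes this by letting the transform vary spatially across the scene.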
NASA Astrophysics Data System (ADS)
Kuroda, R.; Sugawa, S.
2017-02-01
Ultra-high speed (UHS) CMOS image sensors with on-chip analog memories placed on the periphery of the pixel array for the visualization of UHS phenomena are overviewed in this paper. The developed UHS CMOS image sensors consist of 400H×256V pixels and 128 memories/pixel, and a readout speed of 1 Tpixel/sec is obtained, enabling 10 Mfps full-resolution video capture of 128 consecutive frames and 20 Mfps half-resolution video capture of 256 consecutive frames. The first development model was employed in a high-speed video camera and put into practical use in 2012. Through the development of dedicated process technologies, photosensitivity improvement and power consumption reduction were achieved simultaneously, and the improved version has been used since 2015 in a commercialized high-speed video camera that offers 10 Mfps with ISO 16,000 photosensitivity. Due to the improved photosensitivity, clear images can be captured and analyzed even under low-light conditions, such as under a microscope, as well as in the capture of UHS light emission phenomena.
NASA Astrophysics Data System (ADS)
Hashimoto, M.; Nakajima, T.; Morimoto, S.; Takenaka, H.
2014-12-01
We have developed a new satellite remote sensing algorithm to retrieve aerosol optical characteristics using multi-wavelength and multi-pixel information from satellite imagers (MWP method). In this algorithm, the inversion combines the maximum a posteriori (MAP) method (Rodgers, 2000) with the Phillips-Twomey method (Phillips, 1962; Twomey, 1963) as a smoothing constraint on the state vector. Furthermore, with advances in computing, the method has been combined with direct radiative transfer calculation solved numerically at each iteration step of the non-linear inverse problem, without using a look-up table (LUT), under several constraints. The retrieved parameters are aerosol optical properties, such as the aerosol optical thickness (AOT) of fine and coarse mode particles, the volume soot fraction in fine mode particles, and the ground surface albedo at each observed wavelength. We simultaneously retrieve all the parameters that characterize the pixels in each of the horizontal sub-domains constituting the target area, and then successively apply the retrieval to all sub-domains. We conducted numerical retrieval tests of aerosol properties and ground surface albedo for GOSAT/CAI imager data to test the algorithm over land. The experiments showed that the AOTs of fine and coarse modes, the soot fraction and the ground surface albedo are retrieved within the expected accuracy. We discuss the accuracy of the algorithm for various land surface types. We then applied the algorithm to GOSAT/CAI imager data and compared retrieved and surface-observed AOTs at the CAI pixel closest to an AERONET (Aerosol Robotic Network) or SKYNET site in each region. Comparison at several urban sites indicated that AOTs retrieved by our method agree with surface-observed AOTs within ±0.066. Our future work is to extend the algorithm to the analysis of ADEOS-II/GLI and GCOM-C/SGLI data.
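A linearized sketch of such an inversion is given below, combining a Rodgers-style prior term with a Phillips-Twomey smoothness penalty. The measurement noise covariance is taken as the identity for brevity, and all names are illustrative; the actual algorithm iterates a step like this with a full radiative-transfer forward model recomputed at each iteration rather than a fixed linear operator K.

```python
import numpy as np

def map_step(K, y, x_a, Sa_inv, gamma, L):
    """One linear MAP estimate:
        x = argmin ||K x - y||^2 + (x - x_a)^T Sa_inv (x - x_a)
                   + gamma * ||L x||^2
    where x_a is the a priori state, Sa_inv its inverse covariance,
    and L a smoothing operator (Phillips-Twomey constraint)."""
    A = K.T @ K + Sa_inv + gamma * (L.T @ L)
    b = K.T @ y + Sa_inv @ x_a
    return np.linalg.solve(A, b)
```

With gamma = 0 and a vanishing prior this reduces to ordinary least squares; the smoothing term is what stabilizes the simultaneous multi-pixel retrieval.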
Real-Time Nanoscopy by Using Blinking Enhanced Quantum Dots
Watanabe, Tomonobu M.; Fukui, Shingo; Jin, Takashi; Fujii, Fumihiko; Yanagida, Toshio
2010-01-01
Superresolution optical microscopy (nanoscopy) is of current interest in many biological fields. Superresolution optical fluctuation imaging, which utilizes higher-order cumulants of fluorescence temporal fluctuations, is an excellent method for nanoscopy, as it requires neither complicated optics nor special illumination. However, it needs an impractical number of images for real-time observation. Here, we achieved real-time nanoscopy by modifying superresolution optical fluctuation imaging and enhancing the fluctuation of quantum dots. The quantum dots we developed blink more strongly than commercially available ones. The fluctuation of the blinking improved the resolution when using a variance calculation for each pixel instead of a cumulant calculation. This enabled us to obtain microscopic images with 90-nm and 80-ms spatial-temporal resolution using a conventional fluorescence microscope without any additional optics or devices. PMID:20923631
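The variance-for-cumulant substitution described above reduces, per pixel, to the temporal variance of the frame stack, as in this minimal sketch:

```python
import numpy as np

def variance_image(frames):
    """Second-order fluctuation image: per-pixel temporal variance over
    a (n_frames, H, W) stack.  Pixels dominated by a blinking emitter
    fluctuate strongly and appear bright; static background does not."""
    return frames.var(axis=0)
```

Because variance is the second-order cumulant, this is the lowest-order member of the SOFI family, and stronger blinking (as in the enhanced quantum dots) directly increases its contrast.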
Amini, Kasra; Boll, Rebecca; Lauer, Alexandra; Burt, Michael; Lee, Jason W L; Christensen, Lauge; Brauβe, Felix; Mullins, Terence; Savelyev, Evgeny; Ablikim, Utuq; Berrah, Nora; Bomme, Cédric; Düsterer, Stefan; Erk, Benjamin; Höppner, Hauke; Johnsson, Per; Kierspel, Thomas; Krecinic, Faruk; Küpper, Jochen; Müller, Maria; Müller, Erland; Redlin, Harald; Rouzée, Arnaud; Schirmel, Nora; Thøgersen, Jan; Techert, Simone; Toleikis, Sven; Treusch, Rolf; Trippel, Sebastian; Ulmer, Anatoli; Wiese, Joss; Vallance, Claire; Rudenko, Artem; Stapelfeldt, Henrik; Brouard, Mark; Rolles, Daniel
2017-07-07
Laser-induced adiabatic alignment and mixed-field orientation of 2,6-difluoroiodobenzene (C6H3F2I) molecules are probed by Coulomb explosion imaging following either near-infrared strong-field ionization or extreme-ultraviolet multi-photon inner-shell ionization using free-electron laser pulses. The resulting photoelectrons and fragment ions are captured by a double-sided velocity map imaging spectrometer and projected onto two position-sensitive detectors. The ion side of the spectrometer is equipped with a pixel imaging mass spectrometry camera, a time-stamping pixelated detector that can record the hit positions and arrival times of up to four ions per pixel per acquisition cycle. Thus, the time-of-flight trace and ion momentum distributions for all fragments can be recorded simultaneously. We show that we can obtain a high degree of one- and three-dimensional alignment and mixed-field orientation and compare the Coulomb explosion process induced at both wavelengths.
Super-resolution for imagery from integrated microgrid polarimeters.
Hardie, Russell C; LeMaster, Daniel A; Ratliff, Bradley M
2011-07-04
Imagery from microgrid polarimeters is obtained by using a mosaic of pixel-wise micropolarizers on a focal plane array (FPA). Each distinct polarization image is obtained by subsampling the full FPA image. Thus, the effective pixel pitch for each polarization channel is increased and the sampling frequency is decreased. As a result, aliasing artifacts from such undersampling can corrupt the true polarization content of the scene. Here we present the first multi-channel multi-frame super-resolution (SR) algorithms designed specifically for the problem of image restoration in microgrid polarization imagers. These SR algorithms can be used to address aliasing and other degradations, without sacrificing field of view or compromising optical resolution with an anti-aliasing filter. The new SR methods are designed to exploit correlation between the polarimetric channels. One of the new SR algorithms uses a form of regularized least squares and has an iterative solution. The other is based on the faster adaptive Wiener filter SR method. We demonstrate that the new multi-channel SR algorithms are capable of providing significant enhancement of polarimetric imagery and that they outperform their independent channel counterparts.
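The subsampling step described above can be sketched as follows, assuming a common 2x2 mosaic layout (0/45/90/135 degrees; the actual layout depends on the FPA). Each channel ends up with half the sampling frequency of the full array, which is the source of the aliasing the SR algorithms address.

```python
import numpy as np

def split_microgrid(fpa):
    """Split a focal-plane-array frame with a 2x2 micropolarizer mosaic
    into four half-resolution polarization channels by subsampling.
    Assumed layout per 2x2 cell:  0  45
                                  90 135  (degrees)."""
    return {0:   fpa[0::2, 0::2],
            45:  fpa[0::2, 1::2],
            90:  fpa[1::2, 0::2],
            135: fpa[1::2, 1::2]}
```

Multi-frame SR then fuses several such undersampled channel sequences, exploiting sub-pixel motion between frames and correlation between channels to recover the lost sampling density.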
Ghost detection and removal based on super-pixel grouping in exposure fusion
NASA Astrophysics Data System (ADS)
Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun
2014-09-01
A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts are introduced into the final HDR image if the scene is not static. In this paper, a super-pixel grouping based method is proposed to detect ghosts in the image sequences. We introduce the zero mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The calculation of ZNCC is implemented at the super-pixel level, and super-pixels that have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images that can be shown directly on conventional display devices, with details preserved and ghost effects reduced. Experimental results show that the proposed method generates high quality images that have fewer ghost artifacts and provide better visual quality than previous approaches.
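ZNCC over a super-pixel's member pixels can be computed as below. Because it is invariant to affine intensity changes (gain and offset), it stays near 1 for static content seen at different exposures and drops where motion changes the structure, which is what makes it a usable ghost detector without knowing the camera response or exposure settings.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two same-sized
    pixel groups (e.g., one super-pixel in an exposure and in the
    reference).  Returns a value in [-1, 1]; near 1 means the content
    matches up to an affine intensity change."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

Super-pixels whose ZNCC against the reference falls below a threshold are then down-weighted in the fusion weight maps.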
Improved Scanners for Microscopic Hyperspectral Imaging
NASA Technical Reports Server (NTRS)
Mao, Chengye
2009-01-01
Improved scanners to be incorporated into hyperspectral microscope-based imaging systems have been invented. Heretofore, in microscopic imaging, including spectral imaging, it has been customary to either move the specimen relative to the optical assembly that includes the microscope or else move the entire assembly relative to the specimen. It becomes extremely difficult to control such scanning when submicron translation increments are required, because the high magnification of the microscope enlarges all movements in the specimen image on the focal plane. To overcome this difficulty, in a system based on this invention, no attempt would be made to move either the specimen or the optical assembly. Instead, an objective lens would be moved within the assembly so as to cause translation of the image at the focal plane: the effect would be equivalent to scanning in the focal plane. The upper part of the figure depicts a generic proposed microscope-based hyperspectral imaging system incorporating the invention. The optical assembly of this system would include an objective lens (normally, a microscope objective lens) and a charge-coupled-device (CCD) camera. The objective lens would be mounted on a servomotor-driven translation stage, which would be capable of moving the lens in precisely controlled increments, relative to the camera, parallel to the focal-plane scan axis. The output of the CCD camera would be digitized and fed to a frame grabber in a computer. The computer would store the frame-grabber output for subsequent viewing and/or processing of images. The computer would contain a position-control interface board, through which it would control the servomotor. There are several versions of the invention. An essential feature common to all versions is that the stationary optical subassembly containing the camera would also contain a spatial window, at the focal plane of the objective lens, that would pass only a selected portion of the image. 
In one version, the window would be a slit, the CCD would contain a one-dimensional array of pixels, and the objective lens would be moved along an axis perpendicular to the slit to spatially scan the image of the specimen in pushbroom fashion. The image built up by scanning in this case would be an ordinary (non-spectral) image. In another version, the optics of which are depicted in the lower part of the figure, the spatial window would be a slit, the CCD would contain a two-dimensional array of pixels, the slit image would be refocused onto the CCD by a relay-lens pair consisting of a collimating and a focusing lens, and a prism-grating-prism optical spectrometer would be placed between the collimating and focusing lenses. Consequently, the image on the CCD would be spatially resolved along the slit axis and spectrally resolved along the axis perpendicular to the slit. As in the first-mentioned version, the objective lens would be moved along an axis perpendicular to the slit to spatially scan the image of the specimen in pushbroom fashion.
NASA Astrophysics Data System (ADS)
Rabidas, Rinku; Midya, Abhishek; Chakraborty, Jayasree; Sadhu, Anup; Arif, Wasim
2018-02-01
In this paper, Curvelet-based local attributes, the Curvelet-Local configuration pattern (C-LCP), are introduced for the characterization of mammographic masses as benign or malignant. Among different anomalies such as microcalcification, bilateral asymmetry, architectural distortion, and masses, mass lesions are targeted because their variation in shape, size, and margin makes diagnosis a challenging task. Being efficient for classification, the multi-resolution property of the Curvelet transform is exploited and local information is extracted from the coefficients of each subband using the Local configuration pattern (LCP). The microscopic measures concatenated with the local textural information provide more discriminating capability than either alone. The measures embody the magnitude information along with the pixel-wise relationships among neighboring pixels. The performance analysis is conducted with 200 mammograms from the DDSM database containing 100 mass cases each of benign and malignant. The optimal set of features is acquired via a stepwise logistic regression method and classification is carried out with Fisher linear discriminant analysis. The best area under the receiver operating characteristic curve and accuracy of 0.95 and 87.55% are achieved with the proposed method, which is further compared with some state-of-the-art competing methods.
NASA Astrophysics Data System (ADS)
Jiang, Feng; Gu, Qing; Hao, Huizhen; Li, Na; Wang, Bingqian; Hu, Xiumian
2018-06-01
Automatic grain segmentation of sandstone partitions mineral grains into separate regions in a thin section, which is the first step for computer-aided mineral identification and sandstone classification. Sandstone microscopic images contain a large number of mixed mineral grains, and the differences among adjacent grains, i.e., quartz, feldspar and lithic grains, are usually ambiguous, which makes grain segmentation difficult. In this paper, we take advantage of multi-angle cross-polarized microscopic images and propose a method for grain segmentation with high accuracy. The method consists of two stages. In the first stage, we enhance the SLIC (Simple Linear Iterative Clustering) algorithm, named MSLIC, to make use of multi-angle images and segment them into boundary-adherent superpixels. In the second stage, we propose a region merging technique that combines coarse and fine merging algorithms. The coarse merging merges adjacent superpixels with less evident boundaries, and the fine merging merges ambiguous superpixels using spatially enhanced fuzzy clustering. Experiments are designed on 9 sets of multi-angle cross-polarized images taken from the three major types of sandstones. The results demonstrate both the effectiveness and the potential of the proposed method compared with available segmentation methods.
Two-Photon Imaging with Diffractive Optical Elements
Watson, Brendon O.; Nikolenko, Volodymyr; Yuste, Rafael
2009-01-01
Two-photon imaging has become a useful tool for optical monitoring of neural circuits, but it requires high laser power and serial scanning of each pixel in a sample. This results in slow imaging rates, limiting the measurements of fast signals such as neuronal activity. To improve the speed and signal-to-noise ratio of two-photon imaging, we introduce a simple modification of a two-photon microscope, using a diffractive optical element (DOE) which splits the laser beam into several beamlets that can simultaneously scan the sample. We demonstrate the advantages of DOE scanning by enhancing the speed and sensitivity of two-photon calcium imaging of action potentials in neurons from neocortical brain slices. DOE scanning can easily improve the detection of time-varying signals in two-photon and other non-linear microscopic techniques. PMID:19636390
Graphene metamaterial spatial light modulator for infrared single pixel imaging.
Fan, Kebin; Suen, Jonathan Y; Padilla, Willie J
2017-10-16
High-resolution hyperspectral imaging has long been a goal for multi-dimensional data fusion sensing applications - of interest for autonomous vehicles and environmental monitoring. In the long wave infrared regime this quest has been impeded by size, weight, power, and cost issues, especially as focal-plane array detector sizes increase. Here we propose and experimentally demonstrate a new approach based on a metamaterial graphene spatial light modulator (GSLM) for infrared single pixel imaging. A frequency-division multiplexing (FDM) imaging technique is designed and implemented that relies entirely on the electronic reconfigurability of the GSLM. We compare our approach to the more common raster-scan method and directly show that FDM image frame rates can be 64 times faster with no degradation of image quality. Our device and related imaging architecture are not restricted to the infrared regime and may be scaled to other bands of the electromagnetic spectrum. The study presented here opens a new approach for fast and efficient single pixel imaging utilizing graphene metamaterials with novel acquisition strategies.
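The FDM idea is easy to see numerically: if each modulator pixel is toggled at its own frequency, a single detector records one multiplexed time trace and an FFT recovers every pixel simultaneously. The sketch below (plain NumPy; the scene size, sample rate and tone frequencies are arbitrary choices, not the paper's hardware parameters) shows the principle:

```python
import numpy as np

# Frequency-division multiplexed single-pixel imaging, sketched numerically:
# each pixel's transmission is modulated at its own frequency, one detector
# records the summed signal, and an FFT demultiplexes every pixel at once.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, 16)            # 4x4 scene, flattened
fs, T = 1000.0, 1.0                          # sample rate (Hz), record length (s)
t = np.arange(0, T, 1 / fs)
freqs = 50.0 + 10.0 * np.arange(scene.size)  # one modulation tone per pixel

# detector signal: sum over pixels of (scene value) x (its modulation tone)
detector = sum(s * np.cos(2 * np.pi * f * t) for s, f in zip(scene, freqs))

# demultiplex: FFT amplitude at each pixel's frequency gives its brightness
spectrum = np.fft.rfft(detector) / len(t) * 2.0
bins = (freqs * T).astype(int)               # integer bins since T = 1 s
recovered = np.abs(spectrum[bins])
print(np.allclose(recovered, scene, atol=1e-6))  # True
```

Because every tone is an integer number of cycles per record, the tones are orthogonal and all pixels are read out from a single detector trace, which is what removes the raster-scan time penalty.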
Electron microscopy of whole cells in liquid with nanometer resolution
de Jonge, N.; Peckys, D. B.; Kremers, G. J.; Piston, D. W.
2009-01-01
Single gold-tagged epidermal growth factor (EGF) molecules bound to cellular EGF receptors of fixed fibroblast cells were imaged in liquid with a scanning transmission electron microscope (STEM). The cells were placed in buffer solution in a microfluidic device with electron transparent windows inside the vacuum of the electron microscope. A spatial resolution of 4 nm and a pixel dwell time of 20 μs were obtained. The liquid layer was sufficiently thick to contain the cells with a thickness of 7 ± 1 μm. The experimental findings are consistent with a theoretical calculation. Liquid STEM is a unique approach for imaging single molecules in whole cells with significantly improved resolution and imaging speed over existing methods. PMID:19164524
Angiogram, fundus, and oxygen saturation optic nerve head image fusion
NASA Astrophysics Data System (ADS)
Cao, Hua; Khoobehi, Bahram
2009-02-01
A novel multi-modality optic nerve head image fusion approach has been designed and applied to three ophthalmologic modalities: angiogram, fundus, and oxygen saturation retinal optic nerve head images. It achieves an excellent result, visualizing fundus or oxygen saturation images with a complete angiogram overlay. This study makes two contributions in terms of novelty, efficiency, and accuracy. The first is an automated control point detection algorithm for multi-sensor images. The new method employs retinal vasculature and bifurcation features, identifying an initial good guess of the control points using the Adaptive Exploratory Algorithm. The second is a heuristic optimization fusion algorithm. To maximize the objective function (Mutual-Pixel-Count), the iterative algorithm adjusts the initial guess of the control points at the sub-pixel level. A refined parameter set is obtained at the end of each loop, and an optimal fused image is generated at the end of the iteration. This is the first time the Mutual-Pixel-Count concept has been introduced into the biomedical image fusion field. By locking the images in place, the fused image allows ophthalmologists to match the same eye over time, get a sense of disease progress, and pinpoint surgical tools. The new algorithm can easily be extended to 3D eye, brain, or body image registration and fusion in humans or animals.
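The Mutual-Pixel-Count objective itself is simple: it counts the pixels where two binary (e.g. vessel-segmented) images overlap, and registration seeks the transform that maximizes it. A toy NumPy sketch, restricted to whole-pixel shifts rather than the paper's sub-pixel refinement (the masks and search range are illustrative assumptions):

```python
import numpy as np

def mutual_pixel_count(mask_a, mask_b):
    """Objective: number of pixels where both binary vessel masks overlap."""
    return int(np.logical_and(mask_a, mask_b).sum())

def best_shift(mask_a, mask_b, search=3):
    """Exhaustively refine a shift of mask_b to maximize the MPC.
    (The paper refines control points at the sub-pixel level; this toy
    version searches integer shifts only.)"""
    best, best_dxy = -1, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(mask_b, dy, axis=0), dx, axis=1)
            mpc = mutual_pixel_count(mask_a, shifted)
            if mpc > best:
                best, best_dxy = mpc, (dy, dx)
    return best_dxy, best

# toy vessel masks: the same diagonal "vessel", the second displaced by (1, 2)
a = np.zeros((32, 32), bool)
rr = np.arange(5, 25)
a[rr, rr] = True
b = np.roll(np.roll(a, -1, axis=0), -2, axis=1)
shift, mpc = best_shift(a, b)
print(shift)  # (1, 2) undoes the displacement
```

Maximizing MPC in this way is what "locks the images in one place" before the overlay is rendered.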
Classification of breast cancer cytological specimen using convolutional neural network
NASA Astrophysics Data System (ADS)
Żejmo, Michał; Kowal, Marek; Korbicz, Józef; Monczak, Roman
2017-01-01
The paper presents a deep learning approach for automatic classification of breast tumors based on fine needle cytology. The main aim of the system is to distinguish benign from malignant cases based on microscopic images. The experiment was carried out on cytological samples derived from 50 patients (25 benign cases + 25 malignant cases) diagnosed at the Regional Hospital in Zielona Góra. To classify the microscopic images, we used convolutional neural networks (CNN) of two types: GoogLeNet and AlexNet. Because images of cytological specimens are very large (on average 200000 × 100000 pixels), they were divided into smaller patches of size 256 × 256 pixels. Breast cancer classification is usually based on morphometric features of nuclei; therefore, training and validation patches were selected using a Support Vector Machine (SVM) so that a suitable amount of cell material was depicted. The neural classifiers were tuned using a GPU-accelerated implementation of the gradient descent algorithm. Training error was defined as the cross-entropy classification loss. Classification accuracy was defined as the percentage of successfully classified validation patches out of the total number of validation patches. The best accuracy, 83%, was obtained by the GoogLeNet model. We observed that more of the misclassified patches belonged to malignant cases.
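The patching step can be illustrated directly. The paper selects informative patches with an SVM, so the darkness-fraction criterion below is only a hypothetical stand-in for that selection; the threshold values are assumptions:

```python
import numpy as np

def select_patches(image, patch=256, stain_thresh=128, min_frac=0.05):
    """Tile a large grayscale slide into patch x patch tiles and keep those
    with enough presumed cell material (dark, stained pixels). The paper
    selects patches with an SVM; this darkness fraction is a simple
    stand-in for that criterion."""
    h, w = image.shape
    kept = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tile = image[y:y + patch, x:x + patch]
            frac = np.mean(tile < stain_thresh)   # fraction of "stained" pixels
            if frac >= min_frac:
                kept.append((y, x))
    return kept

# toy slide: bright background with one dark nucleus-like blob
slide = np.full((512, 512), 220, np.uint8)
slide[300:380, 50:150] = 60                       # material in the lower-left tile
print(select_patches(slide))  # [(256, 0)]
```

Only the kept tiles would be fed to the CNN, which keeps the training set focused on cell material rather than empty background.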
Super-Resolution of Multi-Pixel and Sub-Pixel Images for the SDI
1993-06-08
…where the phase of the transmitted signal is not needed. The Wigner-Ville distribution (WVD) of a real signal s(t), associated with the complex analytic signal z(t), is a time-frequency distribution defined as W(t, f) = ∫ z(t + τ/2) z*(t − τ/2) exp(−i2πfτ) dτ (45). Note that the WVD is the double Fourier … [Cited: B. Boashash, O. P. Kenny and H. J. Whitehouse, "Radar imaging using the Wigner-Ville distribution", in Real-Time Signal Processing, J. P. Letellier, ed.]
SCAPS, a two-dimensional ion detector for mass spectrometer
NASA Astrophysics Data System (ADS)
Yurimoto, Hisayoshi
2014-05-01
Faraday cups (FC) and electron multipliers (EM) are among the most popular ion detectors for mass spectrometers. The FC is used for high-count-rate ion measurements, while the EM can detect single ions. However, the FC has difficulty detecting intensities below about a kilo-cps, and the EM loses ion counts above Mega-cps. Thus, the FC and EM are used to complement each other, but both are zero-dimensional detectors. The micro channel plate (MCP), on the other hand, is a popular ion signal amplifier with two-dimensional capability, but an additional detection system must be attached to detect the amplified signals, and two-dimensional readouts for MCP signals have not achieved the level of FC and EM systems. A stacked CMOS active pixel sensor (SCAPS) has been developed to detect two-dimensional ion variations over a spatial area using semiconductor technology [1-8]. The SCAPS is an integrating multi-detector, different from the EM and FC, composed of more than 500×500 pixels (micro-detectors) for imaging a cm-scale area with pixels less than 20 µm square. The SCAPS can detect from a single ion up to 100 kilo-counts of ions per pixel, and can thus accumulate up to several giga-counts of ions over all pixels, i.e. over the total imaging area. The SCAPS has been applied to the stigmatic ion optics of a secondary ion mass spectrometer as the detector of an isotope microscope [9]. The isotope microscope is capable of quantitative isotope imaging of a hundred-micrometer area on a sample with sub-micrometer resolution and permil precision, and of two-dimensional mass spectra on the cm-scale mass dispersion plane of a sector magnet with ten-micrometer resolution. This performance has been applied to two-dimensional isotope spatial distributions, mainly of hydrogen, carbon, nitrogen and oxygen, in natural (extra-terrestrial and terrestrial) samples and samples simulating natural processes [e.g. 10-17]. References: [1] Matsumoto, K., et al. (1993) IEEE Trans. Electron Dev.
40, 82-85. [2] Takayanagi et al. (1999) Proc. 1999 IEEE workshop on Charge-Coupled Devices and Advanced Image Sensors, 159-162. [3] Kunihiro et al. (2001) Nucl. Instrum. Methods Phys. Res. Sec. A 470, 512-519. [4] Nagashima et al. (2001) Surface Interface Anal. 31, 131-137. [5] Takayanagi et al. (2003) IEEE Trans. Electron Dev. 50, 70- 76. [6] Sakamoto and Yurimoto (2006) Surface Interface Anal. 38, 1760-1762. [7] Yamamoto et al. (2010) Surface Interface Anal. 42, 1603-1605. [8] Sakamoto et al. (2012) Jpn. J. Appl. Phys. 51, 076701. [9] Yurimoto et al. (2003) Appl. Surf. Sci. 203-204, 793-797. [10] Nagashima et al. (2004) Nature 428, 921-924. [11] Kunihiro et al. (2005) Geochim. Cosmochim. Acta 69, 763-773. [12] Nakamura et al. (2005) Geology 33, 829-832. [13] Sakamoto et al. (2007) Science 317, 231-233. [14] Greenwood et al. (2008) Geophys. Res. Lett., 35, L05203. [15] Greenwood et al. (2011) Nature Geoscience 4, 79-82. [16] Park et al. (2012) Meteorit. Planet. Sci. 47, 2070-2083. [17] Hashiguchi et al. (2013) Geochim. Cosmochim. Acta. 122, 306-323.
Guo, Bing-bing; Zheng, Xiao-lin; Lu, Zhen-gang; Wang, Xing; Yin, Zheng-qin; Hou, Wen-sheng; Meng, Ming
2015-01-01
Visual cortical prostheses have the potential to restore partial vision. Still limited by the low-resolution visual percepts provided by visual cortical prostheses, implant wearers can currently only “see” pixelized images, and how to obtain the specific brain responses to different pixelized images in the primary visual cortex (the implant area) is still unknown. We conducted a functional magnetic resonance imaging experiment on normal human participants to investigate the brain activation patterns in response to 18 different pixelized images. There were 100 voxels in the brain activation pattern that were selected from the primary visual cortex, and voxel size was 4 mm × 4 mm × 4 mm. Multi-voxel pattern analysis was used to test if these 18 different brain activation patterns were specific. We chose a Linear Support Vector Machine (LSVM) as the classifier in this study. The results showed that the classification accuracies of different brain activation patterns were significantly above chance level, which suggests that the classifier can successfully distinguish the brain activation patterns. Our results suggest that the specific brain activation patterns to different pixelized images can be obtained in the primary visual cortex using a 4 mm × 4 mm × 4 mm voxel size and a 100-voxel pattern. PMID:26692860
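The decoding logic of multi-voxel pattern analysis can be sketched on synthetic data: build noisy 100-voxel patterns for two stimulus classes and test whether a classifier beats the chance level under leave-one-out cross-validation. The study used a linear SVM; a nearest-centroid classifier is substituted here to keep the sketch dependency-free, and the pattern sizes and noise level are assumptions:

```python
import numpy as np

# Multi-voxel pattern analysis sketch: decode which of two stimulus classes
# produced a 100-voxel activation pattern, with leave-one-out testing.
# The study used a linear SVM; a nearest-centroid classifier stands in here.
rng = np.random.default_rng(1)
n_per_class, n_vox = 20, 100
signal = rng.normal(0, 1, (2, n_vox))             # class-specific mean patterns
X = np.vstack([signal[c] + rng.normal(0, 2.0, (n_per_class, n_vox))
               for c in (0, 1)])                  # noisy single-trial patterns
y = np.repeat([0, 1], n_per_class)

correct = 0
for i in range(len(y)):                           # leave-one-out cross-validation
    train = np.arange(len(y)) != i
    centroids = np.array([X[train & (y == c)].mean(axis=0) for c in (0, 1)])
    pred = int(np.argmin(np.linalg.norm(centroids - X[i], axis=1)))
    correct += pred == y[i]
accuracy = correct / len(y)
print(accuracy > 0.5)  # decodable well above the 50% chance level
```

Significantly above-chance cross-validated accuracy is exactly the evidence the study uses to conclude that the activation patterns for different pixelized images are distinguishable.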
NASA Astrophysics Data System (ADS)
Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang
2018-04-01
In high resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. However, truncation may occur in the projection data, resulting in artifacts in the reconstructed images. In this study, we propose a low resolution image constrained reconstruction algorithm (LRICR) for high resolution interior tomography in microscopic CT. In general, multi-resolution acquisition based methods can solve the data truncation problem if projection data acquired at low resolution are used to fill in the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisitions (scans) are carried out at low and high resolution, respectively. Using the image reconstructed from the sparse low resolution projection data as the prior, a high resolution microscopic image is reconstructed from the truncated high resolution projection data. Two synthesized digital phantoms, a raw bamboo culm, and a specimen of mouse femur were utilized to evaluate and verify the performance of the proposed algorithm. Compared with the conventional TV minimization based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm significantly reduces the artifacts caused by data truncation, providing a practical solution for high quality and reliable interior tomography in microscopic CT applications.
Prinyakupt, Jaroonrut; Pluempitiwiriyawej, Charnchai
2015-06-30
Blood smear microscopic images are routinely investigated by haematologists to diagnose most blood diseases. However, the task is quite tedious and time consuming. Automatic detection and classification of white blood cells within such images can accelerate the process tremendously. In this paper we propose a system to locate white blood cells within microscopic blood smear images, segment them into nucleus and cytoplasm regions, extract suitable features and, finally, classify them into five types: basophil, eosinophil, neutrophil, lymphocyte and monocyte. Two sets of blood smear images were used in this study's experiments. Dataset 1, collected from Rangsit University, consisted of normal peripheral blood slides under a light microscope with 100× magnification; 555 images with 601 white blood cells were captured by a Nikon DS-Fi2 high-definition color camera and saved in JPG format at a size of 960 × 1,280 pixels and a resolution of 15 pixels per 1 μm. In dataset 2, 477 cropped white blood cell images were downloaded from CellaVision.com; they are in JPG format at a size of 360 × 363 pixels, with a resolution estimated at 10 pixels per 1 μm. The proposed system comprises a pre-processing step, nucleus segmentation, cell segmentation, feature extraction, feature selection and classification. The segmentation algorithm mainly exploits the morphological properties of white blood cells and the calibrated size of a real cell relative to the image resolution, combining thresholding, morphological operations and ellipse curve fitting. Subsequently, several features were extracted from the segmented nucleus and cytoplasm regions. Prominent features were then chosen by a greedy search algorithm called sequential forward selection. Finally, with the set of selected prominent features, both linear and naïve Bayes classifiers were applied for performance comparison. The system was tested on normal peripheral blood smear slide images from the two datasets.
Two sets of comparisons were performed: segmentation and classification. The automatically segmented results were compared to those obtained manually by a haematologist. The proposed method was found to be consistent and coherent on both datasets, with Dice similarities of 98.9 and 91.6% for the average segmented nucleus and cell regions, respectively. Furthermore, the overall correct classification rate is about 98 and 94% for the linear and naïve Bayes models, respectively. The proposed system, based on normal white blood cell morphology and its characteristics, was applied to two different datasets. The calibrated segmentation process is fast, robust, efficient and coherent on both datasets, while the classification of normal white blood cells into five types shows high sensitivity with both the linear and naïve Bayes models, with slightly better results for the linear classifier.
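The Dice similarity used to score the automatic segmentation against the haematologist's manual result is defined, for binary masks A and B, as 2|A∩B|/(|A|+|B|). A minimal sketch (the toy masks are illustrative, not data from the paper):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 means perfect agreement."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# toy nucleus masks: automatic result slightly larger than the manual one
auto = np.zeros((20, 20), bool); auto[5:15, 5:15] = True      # 100 px
manual = np.zeros((20, 20), bool); manual[6:15, 6:15] = True  # 81 px, inside
print(round(dice(auto, manual), 3))  # 2*81/(100+81) = 0.895
```

Averaging this score over all segmented nuclei and cells yields the 98.9% and 91.6% figures the abstract reports.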
NASA Astrophysics Data System (ADS)
Yao, Wei; van Aardt, Jan; Messinger, David
2017-05-01
The Hyperspectral Infrared Imager (HyspIRI) mission aims to provide global imaging spectroscopy data, to the benefit of ecosystem studies in particular. The onboard spectrometer will collect radiance spectra in the visible to short wave infrared (VSWIR) region (400-2500 nm). The mission calls for fine spectral resolution (10 nm band width) and as such will enable scientists to perform material characterization, species classification, and even sub-pixel mapping. However, the global coverage requirement results in a relatively low spatial resolution (GSD 30 m), which restricts applications to objects of similar scales. We therefore have focused on the assessment of sub-pixel vegetation structure from spectroscopy data in past studies. In this study, we investigate the reconstruction of higher spatial resolution imaging spectroscopy data via fusion of multi-temporal data sets, to address the drawbacks implicit in low spatial resolution imagery. The projected temporal resolution of the HyspIRI VSWIR instrument is 15 days, which implies access to as many as six data sets for an area over the course of a growing season. Previous studies have shown that select vegetation structural parameters, e.g., leaf area index (LAI) and gross ecosystem production (GEP), are relatively constant in summer and winter for temperate forests; we therefore consider the data sets collected in summer to come from a similar, stable forest structure. The first step, prior to fusion, involves registration of the multi-temporal data. A data fusion algorithm can then be applied to the pre-processed data sets. The approach hinges on an algorithm that has been widely applied to fuse RGB images.
Ideally, if we have four images of a scene which all meet the following requirements - i) they are captured with the same camera configuration; ii) the pixel size of each image is x; and iii) at least r² images are aligned on a grid of x/r - then a high-resolution image, with a pixel size of x/r, can be reconstructed from the multi-temporal set. The algorithm was applied to data from NASA's classic Airborne Visible and Infrared Imaging Spectrometer (AVIRIS-C; GSD 18 m), collected between 2013-2015 (summer and fall) over our study area (NEON's Southwest Pacific Domain; Fresno, CA), to generate higher spatial resolution imagery (GSD 9 m). The reconstructed data set was validated via comparison to NEON's imaging spectrometer (NIS) data (GSD 1 m). The results showed that the algorithm worked well with the AVIRIS-C data and could be applied to the HyspIRI data.
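With r = 2 and exactly half-pixel offsets, the reconstruction reduces to interleaving the four low-resolution frames onto the fine grid. The sketch below shows this idealized case; real data additionally requires registration and blur handling, which the idealization omits:

```python
import numpy as np

def interleave_sr(frames, r=2):
    """Reconstruct a high-res image from r*r low-res frames whose sampling
    grids are offset by 1/r pixel: each frame fills its own sub-grid."""
    h, w = frames[0].shape
    hi = np.zeros((h * r, w * r), frames[0].dtype)
    for k, frame in enumerate(frames):
        dy, dx = divmod(k, r)              # this frame's sub-grid offset
        hi[dy::r, dx::r] = frame
    return hi

# simulate: sample a known high-res scene at the four half-pixel offsets
rng = np.random.default_rng(2)
scene = rng.uniform(size=(8, 8))
frames = [scene[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
ok = np.array_equal(interleave_sr(frames), scene)
print(ok)  # True
```

In the ideal case the fine grid is recovered exactly; with real multi-temporal acquisitions the offsets must be estimated during registration, and residual blur motivates a subsequent deconvolution step.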
ASTER First Views of Red Sea, Ethiopia - Thermal-Infrared TIR Image monochrome
2000-03-11
ASTER succeeded in acquiring this image at night, which is something the Visible/Near Infrared (VNIR) and Shortwave Infrared (SWIR) sensors cannot do. The scene covers the Red Sea coastline to an inland area of Ethiopia. White pixels represent areas with higher-temperature material on the surface, while dark pixels indicate lower temperatures. This image shows ASTER's ability as a highly sensitive, temperature-discerning instrument and the first spaceborne multi-band TIR sensor in history. Image size: approximately 60 km x 60 km; ground resolution: approximately 90 m x 90 m. http://photojournal.jpl.nasa.gov/catalog/PIA02452
A Combined Laser-Communication and Imager for Microspacecraft (ACLAIM)
NASA Technical Reports Server (NTRS)
Hemmati, H.; Lesh, J.
1998-01-01
ACLAIM is a multi-function instrument consisting of a laser communication terminal and an imaging camera that share a common telescope. A single APS (Active Pixel Sensor)-based focal-plane array is used to perform both the acquisition and tracking functions (for laser communication) and the science imaging function.
Compact multi-band fluorescent microscope with an electrically tunable lens for autofocusing
Wang, Zhaojun; Lei, Ming; Yao, Baoli; Cai, Yanan; Liang, Yansheng; Yang, Yanlong; Yang, Xibin; Li, Hui; Xiong, Daxi
2015-01-01
Autofocusing is a routine technique for redressing the focus drift that occurs during time-lapse microscopic image acquisition. To date, most automatic microscopes rely on a distance-detection scheme for autofocusing, which may suffer from low contrast of the reflected signal due to the refractive index mismatch at the water/glass interface. To achieve high autofocusing speed with minimal motion artifacts, we developed a compact multi-band fluorescent microscope with an electrically tunable lens (ETL) for autofocusing. A modified search algorithm based on equidistant scanning and curve fitting is proposed, which no longer requires a single-peak focus curve and thus efficiently restrains the impact of external disturbances. This technique enables an autofocusing time down to 170 ms with a reproducibility of over 97%. The imaging head of the microscope has dimensions of 12 cm × 12 cm × 6 cm. This portable instrument can easily fit inside standard incubators for real-time imaging of living specimens. PMID:26601001
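The equidistant-scanning-plus-curve-fitting idea can be sketched as follows: sample a focus metric at evenly spaced ETL settings, then fit a parabola around the best sample to localize the peak between samples. This toy omits the paper's robustness refinements for non-single-peak curves, and the Gaussian focus metric and scan range are assumptions:

```python
import numpy as np

def autofocus(metric, positions):
    """Equidistant-scan autofocus: evaluate a focus metric at evenly spaced
    lens settings, fit a parabola through the three samples around the
    best one, and return the vertex as the in-focus setting. (Core idea
    only; the paper's algorithm adds robustness for focus curves that are
    not single-peaked.)"""
    vals = np.array([metric(p) for p in positions])
    i = int(np.clip(np.argmax(vals), 1, len(positions) - 2))
    a, b, _ = np.polyfit(positions[i - 1:i + 2], vals[i - 1:i + 2], 2)
    return -b / (2 * a)                    # vertex of the fitted parabola

# toy focus metric peaked at an ETL setting of 2.35 (arbitrary units)
true_focus = 2.35
metric = lambda p: np.exp(-(p - true_focus) ** 2)
positions = np.linspace(0.0, 5.0, 11)      # coarse equidistant scan, step 0.5
best = autofocus(metric, positions)
print(round(best, 2))
```

Even with the coarse 0.5-unit scan, the parabola vertex lands close to the true focus of 2.35, which is why the fit-based search needs far fewer samples (and hence less time) than a fine exhaustive scan.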
Meng, Xin; Huang, Huachuan; Yan, Keding; Tian, Xiaolin; Yu, Wei; Cui, Haoyang; Kong, Yan; Xue, Liang; Liu, Cheng; Wang, Shouyu
2016-12-20
In order to realize high contrast imaging with portable devices for potential mobile healthcare, we demonstrate a hand-held smartphone based quantitative phase microscope using the transport of intensity equation method. With a cost-effective illumination source and a compact microscope system, multi-focal images of samples can be captured by the smartphone's camera via manual focusing. Phase retrieval is performed by a self-developed Android application, which calculates the sample phase from multi-plane intensities by solving the Poisson equation. We tested the portable microscope on a random phase plate with known phases and, to further demonstrate its performance, successfully imaged a red blood cell smear, a Pap smear, and monocot root and broad bean epidermis sections. Considering its advantages as an accurate, high-contrast, cost-effective and field-portable device, the smartphone based hand-held quantitative phase microscope is a promising tool for future adoption in remote healthcare and medical diagnosis.
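For a near-uniform intensity I0, the transport of intensity equation reduces to a Poisson equation, k ∂I/∂z = −I0 ∇²φ, which can be inverted with FFTs. Below is a minimal sketch of that solver; periodic boundaries and uniform intensity are simplifying assumptions, and a real implementation must handle the general ∇·(I∇φ) form and regularize against noise:

```python
import numpy as np

def spectral_laplacian(f, dx):
    """Laplacian of a periodic square field via FFT."""
    w = 2 * np.pi * np.fft.fftfreq(f.shape[0], d=dx)
    k2 = w[None, :] ** 2 + w[:, None] ** 2
    return np.fft.ifft2(np.fft.fft2(f) * -k2).real

def tie_phase(dIdz, I0, k, dx):
    """TIE phase retrieval for near-uniform intensity I0: the equation
    reduces to k dI/dz = -I0 * laplacian(phi), a Poisson equation solved
    here with an FFT (minimal sketch only)."""
    w = 2 * np.pi * np.fft.fftfreq(dIdz.shape[0], d=dx)
    k2 = w[None, :] ** 2 + w[:, None] ** 2
    k2[0, 0] = 1.0                          # avoid divide-by-zero at DC
    rhs_hat = np.fft.fft2(-k * dIdz / I0)   # Fourier transform of lap(phi)
    phi_hat = rhs_hat / -k2
    phi_hat[0, 0] = 0.0                     # phase is defined up to a constant
    return np.fft.ifft2(phi_hat).real

# round trip: build a periodic phase, form dI/dz from the TIE, recover it
n, dx, I0 = 64, 1.0, 1.0
k = 2 * np.pi / 0.5                          # wavenumber (arbitrary units)
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x)
phi = np.sin(2 * np.pi * X / n) * np.cos(2 * np.pi * Y / n)
dIdz = -I0 * spectral_laplacian(phi, dx) / k
ok = np.allclose(tie_phase(dIdz, I0, k, dx), phi, atol=1e-8)
print(ok)  # True
```

In practice ∂I/∂z is approximated by finite differences of the manually captured multi-focal intensity images, which is what the Android application solves on-device.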
Saliency-Guided Change Detection of Remotely Sensed Images Using Random Forest
NASA Astrophysics Data System (ADS)
Feng, W.; Sui, H.; Chen, X.
2018-04-01
Studies based on object-based image analysis (OBIA), representing a paradigm shift in change detection (CD), have achieved remarkable progress in the last decade, with the aim of developing more intelligent interpretation and analysis methods. The prediction performance and stability of random forest (RF), a relatively new machine learning algorithm, are better than those of many single predictors and ensemble forecasting methods. In this paper, we present a novel CD approach for high-resolution remote sensing images which incorporates visual saliency and RF. First, highly homogeneous and compact image super-pixels are generated using super-pixel segmentation, and the optimal segmentation result is obtained through image superimposition and principal component analysis (PCA). Second, saliency detection is used to guide the search for interest regions in the initial difference image, obtained via the improved robust change vector analysis (RCVA) algorithm. The salient regions within the difference image that correspond to the binarized saliency map are extracted, and these regions are subjected to fuzzy c-means (FCM) clustering to obtain a pixel-level pre-classification result, which serves as a prerequisite for the superpixel-based analysis. Third, on the basis of the optimal segmentation and the pixel-level pre-classification, the change possibility of each super-pixel is calculated, and the changed and unchanged super-pixels that serve as training samples are automatically selected. The spectral and Gabor features of each super-pixel are extracted. Finally, superpixel-based CD is implemented by applying RF to these samples. Experimental results on Ziyuan 3 (ZY3) multi-spectral images show that the proposed method outperforms the compared methods in CD accuracy, confirming the feasibility and effectiveness of the proposed approach.
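The difference image at the heart of the pipeline comes from change vector analysis. In its basic form, CVA is just the per-pixel Euclidean norm of the spectral difference between the two dates; the paper's RCVA adds neighborhood robustness against misregistration, which this plain sketch omits (the threshold here is also chosen by hand, whereas the paper uses saliency and FCM clustering):

```python
import numpy as np

def change_vector_magnitude(img_t1, img_t2):
    """Basic change vector analysis: per-pixel Euclidean norm of the
    spectral difference between two co-registered multi-band images.
    (The paper uses the improved, neighborhood-robust RCVA.)"""
    diff = img_t2.astype(float) - img_t1.astype(float)
    return np.sqrt((diff ** 2).sum(axis=-1))

rng = np.random.default_rng(3)
t1 = rng.uniform(0, 1, (16, 16, 4))           # 4-band image, date 1
t2 = t1 + rng.normal(0, 0.01, t1.shape)       # date 2: mostly unchanged
t2[4:8, 4:8] += 0.5                           # a real change in one block
mag = change_vector_magnitude(t1, t2)
changed = mag > 0.5                           # manual threshold for the sketch
print(int(changed.sum()))  # the 4x4 changed block -> 16 pixels
```

The magnitude image plays the role of the "initial difference image" in which saliency detection then searches for interest regions.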
A scalable multi-DLP pico-projector system for virtual reality
NASA Astrophysics Data System (ADS)
Teubl, F.; Kurashima, C.; Cabral, M.; Fels, S.; Lopes, R.; Zuffo, M.
2014-03-01
Virtual Reality (VR) environments can offer immersion, interaction and realistic images to users. A VR system is usually expensive and requires special equipment in a complex setup. One approach is to use Commodity-Off-The-Shelf (COTS) desktop multi-projectors, calibrated manually or with a camera, to reduce the cost of VR systems without a significant decrease in visual experience. Additionally, for non-planar screen shapes, special optics such as lenses and mirrors are required, further increasing costs. We propose a low-cost, scalable, flexible and mobile solution for building complex VR systems that project images onto a variety of arbitrary surfaces, such as planar, cylindrical and spherical surfaces. This approach combines three key aspects: 1) clusters of DLP pico-projectors to provide homogeneous and continuous pixel density upon arbitrary surfaces without additional optics; 2) LED lighting technology for energy efficiency and light control; 3) a small physical footprint for flexibility. The proposed system is therefore scalable in terms of pixel density, energy and physical space. To achieve these goals, we developed a multi-projector software library called FastFusion that calibrates all projectors into a uniform image presented to viewers. FastFusion uses a camera to automatically calibrate the geometric and photometric correction of images projected from ad-hoc positioned projectors; the only requirement is a few pixels of overlap amongst them. We present results with eight pico-projectors, each with 7 lumens (LED) and a DLP 0.17 HVGA chipset.
McLeod, Euan; Luo, Wei; Mudanyali, Onur; Greenbaum, Alon; Ozcan, Aydogan
2013-01-01
The development of lensfree on-chip microscopy in the past decade has opened up various new possibilities for biomedical imaging across ultra-large fields of view using compact, portable, and cost-effective devices. However, until recently, its ability to resolve fine features and detect ultra-small particles has not rivalled the capabilities of the more expensive and bulky laboratory-grade optical microscopes. In this Frontier Review, we highlight the developments over the last two years that have enabled computational lensfree holographic on-chip microscopy to compete with and, in some cases, surpass conventional bright-field microscopy in its ability to image nano-scale objects across large fields of view, yielding giga-pixel phase and amplitude images. Lensfree microscopy has now achieved a numerical aperture as high as 0.92, with a spatial resolution as small as 225 nm across a large field of view e.g., >20 mm2. Furthermore, the combination of lensfree microscopy with self-assembled nanolenses, forming nano-catenoid minimal surfaces around individual nanoparticles has boosted the image contrast to levels high enough to permit bright-field imaging of individual particles smaller than 100 nm. These capabilities support a number of new applications, including, for example, the detection and sizing of individual virus particles using field-portable computational on-chip microscopes. PMID:23592185
III-V infrared research at the Jet Propulsion Laboratory
NASA Astrophysics Data System (ADS)
Gunapala, S. D.; Ting, D. Z.; Hill, C. J.; Soibel, A.; Liu, John; Liu, J. K.; Mumolo, J. M.; Keo, S. A.; Nguyen, J.; Bandara, S. V.; Tidrow, M. Z.
2009-08-01
The Jet Propulsion Laboratory is actively developing III-V based infrared detectors and focal plane arrays (FPAs) for NASA, DoD, and commercial applications. Currently, we are working on multi-band Quantum Well Infrared Photodetector (QWIP), superlattice detector, and Quantum Dot Infrared Photodetector (QDIP) technologies suitable for large area imaging arrays with high pixel-to-pixel uniformity and high pixel operability. In this paper we report the first demonstration of a megapixel, simultaneously readable and pixel-co-registered dual-band QWIP FPA. In addition, we present the latest advances in QDIP and superlattice infrared detectors at the Jet Propulsion Laboratory.
Athena microscopic Imager investigation
Herkenhoff, K. E.; Squyres, S. W.; Bell, J.F.; Maki, J.N.; Arneson, H.M.; Bertelsen, P.; Brown, D.I.; Collins, S.A.; Dingizian, A.; Elliott, S.T.; Goetz, W.; Hagerott, E.C.; Hayes, A.G.; Johnson, M.J.; Kirk, R.L.; McLennan, S.; Morris, R.V.; Scherr, L.M.; Schwochert, M.A.; Shiraishi, L.R.; Smith, G.H.; Soderblom, L.A.; Sohl-Dickstein, J. N.; Wadsworth, M.V.
2003-01-01
The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on the end of an extendable instrument arm, the Instrument Deployment Device (IDD). The MI was designed to acquire images at a spatial resolution of 30 microns/pixel over a broad spectral range (400-700 nm). The MI uses the same electronics design as the other MER cameras but has optics that yield a field of view of 31 × 31 mm across a 1024 × 1024 pixel CCD image. The MI acquires images using only solar or skylight illumination of the target surface. A contact sensor is used to place the MI slightly closer to the target surface than its best focus distance (about 66 mm), allowing concave surfaces to be imaged in good focus. Coarse focusing (±2 mm precision) is achieved by moving the IDD away from a rock target after the contact sensor has been activated. The MI optics are protected from the Martian environment by a retractable dust cover. The dust cover includes a Kapton window that is tinted orange to restrict the spectral bandpass to 500-700 nm, allowing color information to be obtained by taking images with the dust cover open and closed. MI data will be used to place other MER instrument data in context and to aid in petrologic and geologic interpretations of rocks and soils on Mars. Copyright 2003 by the American Geophysical Union.
Leveraging unsupervised training sets for multi-scale compartmentalization in renal pathology
NASA Astrophysics Data System (ADS)
Lutnick, Brendon; Tomaszewski, John E.; Sarder, Pinaki
2017-03-01
Clinical pathology relies on manual compartmentalization and quantification of biological structures, which is time consuming and often error-prone. Application of computer vision segmentation algorithms to histopathological image analysis, in contrast, can offer fast, reproducible, and accurate quantitative analysis to aid pathologists. Algorithms tunable to different biologically relevant structures can allow accurate, precise, and reproducible estimates of disease states. In this direction, we have developed a fast, unsupervised computational method for simultaneously separating all biologically relevant structures from histopathological images at multiple scales. Segmentation is achieved by solving an energy optimization problem. Representing the image as a graph, nodes (pixels) are grouped by minimizing a Potts model Hamiltonian, adopted from theoretical physics, where it models interacting electron spins. Pixel relationships (modeled as edges) are used to update the energy of the partitioned graph. By iteratively improving the clustering, the optimal number of segments is revealed. To reduce computational time, the graph is simplified using a Cantor pairing function to intelligently reduce the number of included nodes. The classified nodes are then used to train a multiclass support vector machine, which applies the segmentation over the full image. Accurate segmentations of images with as many as 10^6 pixels can be completed in only 5 s, making multi-scale visualization attainable. To establish clinical potential, we employed our method on renal biopsies to quantitatively visualize, for the first time, scale-variant compartments of heterogeneous intra- and extraglomerular structures simultaneously. The utility of our method extends to fields such as oncology, genomics, and non-biological problems.
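The energy being minimized combines a data term with a Potts smoothness penalty on unlike neighbors. A toy two-class version on a pixel grid, solved by iterated conditional modes, illustrates the idea; the paper minimizes its Potts Hamiltonian on a reduced graph and finds the number of segments automatically, so the fixed two classes and the β value here are assumptions:

```python
import numpy as np

def potts_segment(img, beta=2.0, iters=5):
    """Two-class Potts-style segmentation by iterated conditional modes:
    each pixel takes the label minimizing (intensity - class mean)^2 plus
    beta times the number of 4-neighbors holding the other label. (A toy,
    pixel-grid version of minimizing a Potts Hamiltonian.)"""
    labels = (img > np.median(img)).astype(int)   # initial split by intensity
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(iters):
        means = [img[labels == c].mean() for c in (0, 1)]
        padded = np.pad(labels, 1, mode='edge')
        costs = []
        for c in (0, 1):
            disagree = sum(
                (padded[1 + dy:1 + dy + img.shape[0],
                        1 + dx:1 + dx + img.shape[1]] != c)
                for dy, dx in offsets)             # unlike 4-neighbors
            costs.append((img - means[c]) ** 2 + beta * disagree)
        labels = np.argmin(np.stack(costs), axis=0)
    return labels

# noisy two-region image: left half dark, right half bright
rng = np.random.default_rng(4)
img = np.hstack([np.full((16, 8), 0.2), np.full((16, 8), 0.8)])
img += rng.normal(0, 0.15, img.shape)
seg = potts_segment(img)
truth = np.hstack([np.zeros((16, 8), int), np.ones((16, 8), int)])
acc = (seg == truth).mean()
print(acc > 0.9)
```

The smoothness term is what lets the labeling survive pixel-level noise: an isolated mislabeled pixel pays β for each of its four disagreeing neighbors and flips back on the next sweep.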
AOTF hyperspectral microscopic imaging for foodborne pathogenic bacteria detection
NASA Astrophysics Data System (ADS)
Park, Bosoon; Lee, Sangdae; Yoon, Seung-Chul; Sundaram, Jaya; Windham, William R.; Hinton, Arthur, Jr.; Lawrence, Kurt C.
2011-06-01
Hyperspectral microscope imaging (HMI), which provides both spatial and spectral information, can be effective for foodborne pathogen detection. The AOTF-based HMI method can be used to characterize the spectral properties of biofilms formed by Salmonella enteritidis as well as Escherichia coli. The intensity of the spectral imagery and the pattern of the spectral distribution varied with the system parameters (integration time and gain) of the HMI system. The preliminary results supported determination of optimum HMI system parameter values and indicated that the integration time must be no more than 250 ms for quality image acquisition from biofilm formed by S. enteritidis. Among the contiguous spectral imagery between 450 and 800 nm, the intensities of the spectral images at 498, 522, 550 and 594 nm were distinctive for biofilm, whereas the intensity of the spectral image at 546 nm was distinctive for E. coli. For more accurate comparison of intensities from spectral images, a calibration protocol using neutral density filters and multiple exposures needs to be developed to standardize image acquisition. For the identification or classification of unknown food pathogen samples, ground truth region-of-interest pixels need to be selected as "spectrally pure fingerprints" for the Salmonella and E. coli species.
Electrically stimulated contractions of Vorticella convallaria
NASA Astrophysics Data System (ADS)
Kantha, Deependra; van Winkle, David
2009-03-01
The contraction of Vorticella convallaria was triggered by applying a voltage pulse in its host culturing medium. The 50 V, 1 ms wide pulse was applied across platinum wires separated by 0.7 cm on a microscope slide. The contractions were recorded as cines (image sequences) by a Phantom V5 camera (Vision Research) on a bright-field microscope with a 20X objective, with an image size of 256 x 128 pixels at 7352 pictures per second. The start time of the cines was synchronized with the start of the electrical pulse. We recorded five contractions from each of 12 organisms. The cines were analyzed to obtain the initiation time, defined as the difference in time between the leading edge of the electrical pulse and the first frame showing zooid movement. From multiple contractions of the same organism, we found the initiation time to be reproducible. In comparing different organisms, we found an average initiation time of 1.73 ms with a standard deviation of 0.63 ms. This research is supported by the state of Florida (MARTECH) and Research Corporation.
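The initiation-time measurement reduces to simple arithmetic on frame indices at the camera's 7352 pictures-per-second rate. A minimal sketch (the frame indices below are hypothetical, not the paper's data):

```python
import statistics

FPS = 7352  # frames per second of the Phantom V5 recordings

def initiation_time_ms(first_movement_frame):
    """Time from the pulse leading edge (frame 0, synchronized with the
    cine start) to the first frame showing zooid movement, in ms."""
    return first_movement_frame / FPS * 1000.0

# hypothetical first-movement frame indices for several organisms
frames = [10, 13, 12, 18, 9]
times = [initiation_time_ms(f) for f in frames]
mean_ms = statistics.mean(times)
sd_ms = statistics.stdev(times)
```

At this frame rate one frame corresponds to about 0.136 ms, which sets the timing resolution of the measurement.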
Applications of holographic on-chip microscopy (Conference Presentation)
NASA Astrophysics Data System (ADS)
Ozcan, Aydogan
2017-02-01
My research focuses on the use of computation/algorithms to create new optical microscopy, sensing, and diagnostic techniques, significantly improving existing tools for probing micro- and nano-objects while also simplifying the designs of these analysis tools. In this presentation, I will introduce a set of computational microscopes which use lens-free on-chip imaging to replace traditional lenses with holographic reconstruction algorithms. Basically, 3D images of specimens are reconstructed from their "shadows," providing considerably improved field-of-view (FOV) and depth-of-field, thus enabling large sample volumes to be rapidly imaged, even at nanoscale. These new computational microscopes routinely generate >1-2 billion pixels (giga-pixels), where even single viruses can be detected with a FOV that is >100-fold wider than that of other techniques. At the heart of this leapfrog performance lie self-assembled liquid nano-lenses that are computationally imaged on a chip. The field-of-view of these computational microscopes is equal to the active area of the sensor-array, easily reaching, for example, >20 mm^2 or >10 cm^2 by employing state-of-the-art CMOS or CCD imaging chips, respectively. In addition to this remarkable increase in throughput, another major benefit of this technology is that it lends itself to field-portable and cost-effective designs which easily integrate with smartphones to conduct giga-pixel tele-pathology and microscopy even in resource-poor and remote settings where traditional techniques are difficult to implement and sustain, thus opening the door to various telemedicine applications in global health. Through the development of similar computational imagers, I will also report the discovery of new 3D swimming patterns observed in human and animal sperm.
One of these newly discovered and extremely rare motions takes the form of "chiral ribbons," where the planar swings of the sperm head occur on an osculating plane, creating in some cases a helical ribbon and in others a twisted ribbon. Shedding light onto the statistics and biophysics of various micro-swimmers' 3D motion, these results provide an important example of how biomedical imaging significantly benefits from emerging computational algorithms/theories, revolutionizing existing tools for observing various micro- and nano-scale phenomena in innovative, high-throughput, and yet cost-effective ways.
BigView Image Viewing on Tiled Displays
NASA Technical Reports Server (NTRS)
Sandstrom, Timothy
2007-01-01
BigView allows for interactive panning and zooming of images of arbitrary size on desktop PCs running Linux. Additionally, it can work in a multi-screen environment where multiple PCs cooperate to view a single, large image. Using this software, one can explore, on relatively modest machines, images such as the Mars Orbiter Camera mosaic [92,160 x 33,280 pixels]. The images must first be converted into paged format, where the image is stored in 256 x 256 pixel pages to allow rapid movement of pixels into texture memory. The format contains an image pyramid: a set of scaled versions of the original image. Each scaled image is 1/2 the size of the previous, starting with the original down to the smallest, which fits into a single 256 x 256 page.
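The pyramid construction described above determines how many levels are needed: each level halves the previous one until the image fits a single page. A small sketch of that calculation (function names are ours, not BigView's API):

```python
def pyramid_levels(width, height, page=256):
    """Number of levels in a halve-by-halve image pyramid, from the original
    image down to the first level that fits in a single page x page tile."""
    levels = 1
    w, h = width, height
    while w > page or h > page:
        # each level is half the size of the previous one (rounding up)
        w, h = (w + 1) // 2, (h + 1) // 2
        levels += 1
    return levels

def pages_needed(width, height, page=256):
    """Pages required to tile one level of the paged image."""
    return ((width + page - 1) // page) * ((height + page - 1) // page)
```

For the 92,160 x 33,280 pixel mosaic this gives a 10-level pyramid, whose full-resolution level alone spans 46,800 pages.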
Salient object detection based on multi-scale contrast.
Wang, Hai; Dai, Lei; Cai, Yingfeng; Sun, Xiaoqiang; Chen, Long
2018-05-01
Due to the development of deep learning networks, salient object detection based on deep-learning feature extraction has made a great breakthrough compared to traditional methods. At present, salient object detection mainly relies on very deep convolutional networks to extract features. In deep learning networks, however, a dramatic increase in network depth may instead cause more training errors. In this paper, we use a residual network to increase network depth while mitigating the errors caused by the depth increase. Inspired by image simplification, we use color and texture features to obtain simplified images at multiple scales by means of region assimilation on the basis of super-pixels, in order to reduce the complexity of images and to improve the accuracy of salient target detection. We refine features at the pixel level by a multi-scale feature-correction method to avoid the feature errors introduced when the image is simplified at the region level. The final fully connected layer not only integrates multi-scale, multi-level features but also works as the classifier of salient targets. The experimental results show that the proposed model achieves better results than other salient object detection models based on original deep learning networks. Copyright © 2018 Elsevier Ltd. All rights reserved.
Sim, K S; Teh, V; Tey, Y C; Kho, T K
2016-11-01
This paper introduces a new technique to improve scanning electron microscope (SEM) image quality, which we name sub-blocking multiple peak histogram equalization (SUB-B-MPHE) with a convolution operator. Results show that this modified MPHE performs better than the original MPHE. In addition, the sub-blocking method's convolution operator helps remove the blocking effect from SEM images: by properly distributing suitable pixel values over the whole image, the convolution effectively removes the blocking artifacts. Overall, SUB-B-MPHE with convolution outperforms the other methods. SCANNING 38:492-501, 2016. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Miecznik, Grzegorz; Shafer, Jeff; Baugh, William M.; Bader, Brett; Karspeck, Milan; Pacifici, Fabio
2017-05-01
WorldView-3 (WV-3) is a DigitalGlobe commercial, high resolution, push-broom imaging satellite with three instruments: visible and near-infrared VNIR consisting of panchromatic (0.3m nadir GSD) plus multi-spectral (1.2m), short-wave infrared SWIR (3.7m), and multi-spectral CAVIS (30m). Nine VNIR bands, which are on one instrument, are nearly perfectly registered to each other, whereas eight SWIR bands, belonging to the second instrument, are misaligned with respect to VNIR and to each other. Geometric calibration and ortho-rectification results in a VNIR/SWIR alignment which is accurate to approximately 0.75 SWIR pixel at 3.7m GSD, whereas inter-SWIR, band to band registration is 0.3 SWIR pixel. Numerous high resolution, spectral applications, such as object classification and material identification, require more accurate registration, which can be achieved by utilizing image processing algorithms, for example Mutual Information (MI). Although MI-based co-registration algorithms are highly accurate, implementation details for automated processing can be challenging. One particular challenge is how to compute bin widths of intensity histograms, which are fundamental building blocks of MI. We solve this problem by making the bin widths proportional to instrument shot noise. Next, we show how to take advantage of multiple VNIR bands, and improve registration sensitivity to image alignment. To meet this goal, we employ Canonical Correlation Analysis, which maximizes VNIR/SWIR correlation through an optimal linear combination of VNIR bands. Finally we explore how to register images corresponding to different spatial resolutions. We show that MI computed at a low-resolution grid is more sensitive to alignment parameters than MI computed at a high-resolution grid. The proposed modifications allow us to improve VNIR/SWIR registration to better than ¼ of a SWIR pixel, as long as terrain elevation is properly accounted for, and clouds and water are masked out.
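The MI building block, with a caller-supplied intensity bin width (which the text ties to instrument shot noise), might be sketched like this. This is a generic implementation; the choice of `bin_width` and the bin-edge construction are our assumptions:

```python
import numpy as np

def mutual_information(a, b, bin_width):
    """Mutual information (in bits) between two images, with a caller-chosen
    intensity bin width, e.g. proportional to an estimate of shot noise."""
    bins_a = np.arange(a.min(), a.max() + 2 * bin_width, bin_width)
    bins_b = np.arange(b.min(), b.max() + 2 * bin_width, bin_width)
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=[bins_a, bins_b])
    pxy = joint / joint.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)            # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

In a registration loop, one band (or a canonical combination of VNIR bands) is held fixed while the SWIR band is resampled under trial offsets, and the offset maximizing MI is kept.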
NASA Astrophysics Data System (ADS)
Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa K.; Miura, Masahiro; Yasuno, Yoshiaki
2017-02-01
Local statistics are widely utilized for quantification and image processing in OCT. For example, the local mean is used to reduce speckle, and local variation of the polarization state (degree of polarization uniformity, DOPU) is used to visualize melanin. Conventionally, these statistics are calculated in a rectangular kernel whose size is uniform over the image. However, the fixed size and shape of the kernel result in a tradeoff between image sharpness and statistical accuracy. A superpixel is a cluster of pixels generated by grouping image pixels based on spatial proximity and similarity of signal values. Superpixels have varying sizes and flexible shapes that preserve the tissue structure. Here we demonstrate a new superpixel method tailored for multifunctional Jones matrix OCT (JM-OCT). This method forms superpixels by clustering image pixels in a six-dimensional (6-D) feature space (two spatial dimensions and four dimensions of optical features): all image pixels are clustered based on their spatial proximity and optical-feature similarity. The optical features are scattering, OCT-A, birefringence, and DOPU. The method is applied to retinal OCT. The generated superpixels preserve tissue structures such as retinal layers, sclera, vessels, and retinal pigment epithelium. Hence, a superpixel can be utilized as a local-statistics kernel that is more suitable than a uniform rectangular kernel. The superpixelized image can also be used for further image processing and analysis; since it reduces the number of pixels to be analyzed, it reduces the computational cost of such processing.
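The clustering step can be illustrated with plain k-means over joint space-plus-feature vectors. This is a simplified stand-in for the paper's 6-D superpixel method (the deterministic `init_idx` seeding is our assumption, and real superpixel algorithms add locality constraints):

```python
import numpy as np

def feature_superpixels(features, init_idx, iters=10):
    """Cluster pixels in a joint (space + optical-feature) vector space with
    plain k-means.  `features` is (n_pixels, n_dims); `init_idx` lists the
    pixels used as initial cluster centres.  Returns one label per pixel."""
    centers = features[np.array(init_idx)].astype(float).copy()
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # squared distance from every pixel to every centre
        d = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centers)):
            members = features[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels
```

With an intensity contrast that dominates the spatial coordinates, the clusters follow the tissue boundary rather than a fixed rectangular grid.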
A design of optical modulation system with pixel-level modulation accuracy
NASA Astrophysics Data System (ADS)
Zheng, Shiwei; Qu, Xinghua; Feng, Wei; Liang, Baoqiu
2018-01-01
Vision measurement has been widely used in the fields of dimensional measurement and surface metrology. However, traditional vision measurement methods have limits such as low dynamic range and poor reconfigurability. Optical modulation before image formation offers high dynamic range, high accuracy, and more flexibility, and the modulation accuracy is the key parameter determining the accuracy and effectiveness of an optical modulation system. In this paper, an optical modulation system with pixel-level accuracy is designed and built based on multi-point reflective imaging theory and a digital micromirror device (DMD). The system consists of the DMD, a CCD camera, and a lens. First, we achieved accurate pixel-to-pixel correspondence between the DMD mirrors and the CCD pixels using moire fringes and image-processing-based sampling and interpolation. Then we built three coordinate systems and calculated the mathematical relationship between the coordinates of the digital micromirrors and the CCD pixels using a checkerboard pattern. A verification experiment shows that the correspondence error is less than 0.5 pixel, demonstrating that the modulation accuracy of the system meets the requirements. Furthermore, the high-reflectance edge of a metal circular piece was detected using the system, which proves the effectiveness of the optical modulation system.
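The DMD-to-CCD coordinate relationship recovered from a checkerboard can be modeled, in the simplest case, as a 2-D affine map fitted by least squares. A sketch under that assumption (the paper's actual three-coordinate-system model may be richer):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map dst ~ A @ src + t from matched point pairs,
    e.g. checkerboard corners seen in DMD and CCD coordinates.
    src, dst: (n, 2) arrays; returns the 2x3 matrix [A | t]."""
    n = len(src)
    M = np.hstack([src, np.ones((n, 1))])             # (n, 3) design matrix
    params, *_ = np.linalg.lstsq(M, dst, rcond=None)  # (3, 2)
    return params.T                                   # (2, 3)

def apply_affine(T, pts):
    """Map (n, 2) points through the fitted [A | t] matrix."""
    return pts @ T[:, :2].T + T[:, 2]
```

With sub-pixel corner localization, the residual of this fit is what a sub-0.5-pixel correspondence requirement is judged against.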
Image Description with Local Patterns: An Application to Face Recognition
NASA Astrophysics Data System (ADS)
Zhou, Wei; Ahrary, Alireza; Kamata, Sei-Ichiro
In this paper, we propose a novel approach for representing the local features of a digital image using 1D Local Patterns by Multi-Scans (1DLPMS). We also consider extensions and simplifications of the proposed approach for facial image analysis. The proposed approach consists of three steps. In the first step, the gray values of pixels in the image are represented as a vector giving the local neighborhood intensity distributions of the pixels. Multi-scans are then applied to capture different spatial information in the image, with the advantage of less computation than traditional methods such as Local Binary Patterns (LBP). The second step encodes the local features based on different encoding rules using 1D local patterns. This transformation is expected to be less sensitive to illumination variations while preserving the appearance of images embedded in the original gray scale. In the final step, Grouped 1D Local Patterns by Multi-Scans (G1DLPMS) is applied to make the proposed approach computationally simpler and easier to extend. We further formulate a boosted algorithm to extract the most discriminant local features. The evaluation results demonstrate that the proposed approach outperforms conventional approaches in terms of accuracy in face recognition, gender estimation, and facial expression applications.
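For comparison, the classic LBP encoding that 1DLPMS is positioned against thresholds each pixel's 8 neighbours into one bit apiece. A minimal sketch (the `>=` convention and clockwise bit order are common choices, not necessarily the paper's):

```python
def lbp_codes(img):
    """Classic 8-neighbour Local Binary Pattern codes: each neighbour
    contributes one bit, set when its grey value is >= the centre pixel's.
    Border pixels are left as code 0."""
    h, w = len(img), len(img[0])
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise ring
    codes = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if img[r + dr][c + dc] >= img[r][c]:
                    code |= 1 << bit
            codes[r][c] = code
    return codes
```

The code depends only on the sign of local differences, which is what makes such patterns comparatively insensitive to illumination changes.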
Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Orthaus, Sandra; Achilefu, Samuel; Nussinov, Zohar
2014-01-01
Inspired by a multi-resolution community detection (MCD) based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells, in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network, defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Using the proposed method, the mean-square error (MSE) in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The MCD method appeared to perform better than a popular spectral clustering based method for FLIM image segmentation: at high resolution, the spectral method introduced noisy segments in its output and was unable to achieve a consistent decrease in MSE with increasing resolution. PMID:24251410
NASA Astrophysics Data System (ADS)
Sakano, Toshikazu; Furukawa, Isao; Okumura, Akira; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu; Suzuki, Junji; Matsuya, Shoji; Ishihara, Teruo
2001-08-01
The widespread adoption of digital technology in the medical field has led to demand for a high-quality, high-speed, and user-friendly digital image presentation system for daily medical conferences. To fulfill this demand, we developed a presentation system for radiological and pathological images. It is composed of a super-high-definition (SHD) imaging system, a radiological image database (R-DB), a pathological image database (P-DB), and the network interconnecting these three. The R-DB consists of a 270 GB RAID, a database server workstation, and a film digitizer. The P-DB includes an optical microscope, a four-million-pixel digital camera, a 90 GB RAID, and a database server workstation. A 100 Mbps Ethernet LAN interconnects all the subsystems. Web-based system operation software was developed for easy operation. We installed the whole system in NTT East Kanto Hospital to evaluate it in the weekly case conferences. The SHD system could display digital full-color images of 2048 x 2048 pixels on a 28-inch CRT monitor. The doctors evaluated the image quality and size and found them applicable to actual medical diagnosis. They also appreciated the short image-switching time, which contributed to smooth presentation. Thus, we confirmed that the system's characteristics met the requirements.
Automated Detection of Synapses in Serial Section Transmission Electron Microscopy Image Stacks
Kreshuk, Anna; Koethe, Ullrich; Pax, Elizabeth; Bock, Davi D.; Hamprecht, Fred A.
2014-01-01
We describe a method for fully automated detection of chemical synapses in serial electron microscopy images with highly anisotropic axial and lateral resolution, such as images taken on transmission electron microscopes. Our pipeline starts from classification of the pixels based on 3D pixel features, which is followed by segmentation with an Ising model MRF and another classification step, based on object-level features. Classifiers are learned on sparse user labels; a fully annotated data subvolume is not required for training. The algorithm was validated on a set of 238 synapses in 20 serial 7197×7351 pixel images (4.5×4.5×45 nm resolution) of mouse visual cortex, manually labeled by three independent human annotators and additionally re-verified by an expert neuroscientist. The error rate of the algorithm (12% false negative, 7% false positive detections) is better than state-of-the-art, even though, unlike the state-of-the-art method, our algorithm does not require a prior segmentation of the image volume into cells. The software is based on the ilastik learning and segmentation toolkit and the vigra image processing library and is freely available on our website, along with the test data and gold standard annotations (http://www.ilastik.org/synapse-detection/sstem). PMID:24516550
NASA Technical Reports Server (NTRS)
Brown, Alison M.
2005-01-01
Solar System Visualization products enable scientists to compare models and measurements in new ways that enhance the scientific discovery process, enhance the information content and understanding of the science results for both science colleagues and the public, and create visually appealing and intellectually stimulating visualization products. Missions supported include MER, MRO, and Cassini. Image products produced include pan and zoom animations of large mosaics to reveal the details of surface features and topography, animations into registered multi-resolution mosaics to provide context for microscopic images, 3D anaglyphs from left and right stereo pairs, and screen captures from video footage. Specific products include a three-part context animation of the Cassini Enceladus encounter highlighting images from 350 to 4 meters per pixel resolution; Mars Reconnaissance Orbiter screen captures illustrating various instruments during assembly and testing at the Payload Hazardous Servicing Facility at Kennedy Space Center; and an animation of Mars Exploration Rover Opportunity's 'Rub al Khali' panorama, where the rover was stuck in deep fine sand for more than a month. This task creates new visualization products that enable new science results and enhance the public's understanding of the Solar System and NASA's missions of exploration.
A bio-image sensor for simultaneous detection of multi-neurotransmitters.
Lee, You-Na; Okumura, Koichi; Horio, Tomoko; Iwata, Tatsuya; Takahashi, Kazuhiro; Hattori, Toshiaki; Sawada, Kazuaki
2018-03-01
We report here a new bio-image sensor for simultaneous detection of the spatial and temporal distribution of multiple neurotransmitters. It consists of multiple enzyme-immobilized membranes on a 128 x 128 pixel array with read-out circuitry. Apyrase and acetylcholinesterase (AChE), as selective elements, are used to recognize adenosine 5'-triphosphate (ATP) and acetylcholine (ACh), respectively. To enhance the spatial resolution, hydrogen ion (H+) diffusion barrier layers are deposited on top of the bio-image sensor, and their prevention capability is demonstrated. The results are used to design the spacing among enzyme-immobilized pixels and the null H+ sensor to minimize undesired signal overlap caused by H+ diffusion. Using this bio-image sensor, we can obtain H+ diffusion-independent imaging of concentration gradients of ATP and ACh in real time. The sensing characteristics, such as sensitivity and limit of detection, are determined experimentally. The proposed bio-image sensor opens the possibility of customizable monitoring of the activities of various neurochemicals by using different kinds of proton-consuming or proton-generating enzymes. Copyright © 2017 Elsevier B.V. All rights reserved.
Semi-automatic brain tumor segmentation by constrained MRFs using structural trajectories.
Zhao, Liang; Wu, Wei; Corso, Jason J
2013-01-01
Quantifying volume and growth of a brain tumor is a primary prognostic measure and hence has received much attention in the medical imaging community. Most methods have sought a fully automatic segmentation, but the variability in shape and appearance of brain tumors has limited their success and further adoption in the clinic. In reaction, we present a semi-automatic brain tumor segmentation framework for multi-channel magnetic resonance (MR) images. This framework does not require prior model construction and only requires manual labels on one automatically selected slice. All other slices are labeled by an iterative multi-label Markov random field optimization with hard constraints. Structural trajectories (the medical image analog of optical flow) and 3D image over-segmentation are used to capture pixel correspondences between consecutive slices for pixel labeling. We show robustness and effectiveness through an evaluation on the 2012 MICCAI BRATS Challenge Dataset; our results indicate superior performance to baselines and demonstrate the utility of the constrained MRF formulation.
Integrated fluorescence analysis system
Buican, Tudor N.; Yoshida, Thomas M.
1992-01-01
An integrated fluorescence analysis system enables a component part of a sample to be virtually sorted within a sample volume after a spectrum of the component part has been identified from a fluorescence spectrum of the entire sample in a flow cytometer. Birefringent optics enables the entire spectrum to be resolved into a set of numbers representing the intensity of spectral components of the spectrum. One or more spectral components are selected to program a scanning laser microscope, preferably a confocal microscope, whereby the spectrum from individual pixels or voxels in the sample can be compared. Individual pixels or voxels containing the selected spectral components are identified and an image may be formed to show the morphology of the sample with respect to only those components having the selected spectral components. There is no need for any physical sorting of the sample components to obtain the morphological information.
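Resolving the measured spectrum into per-component intensities, as the birefringent-optics system does in hardware, can be emulated numerically as linear unmixing by least squares. A sketch with synthetic Gaussian basis spectra (our assumption, purely for illustration):

```python
import numpy as np

def unmix(spectrum, basis):
    """Resolve a measured fluorescence spectrum into intensities of known
    spectral components by linear least squares.  `basis` is
    (n_wavelengths, n_components); returns one coefficient per component."""
    coeffs, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
    return coeffs
```

Per-pixel (or per-voxel) coefficients obtained this way can then be thresholded to keep only the locations containing the selected spectral components, mimicking the virtual sort.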
Multi-Contrast Imaging and Digital Refocusing on a Mobile Microscope with a Domed LED Array.
Phillips, Zachary F; D'Ambrosio, Michael V; Tian, Lei; Rulison, Jared J; Patel, Hurshal S; Sadras, Nitin; Gande, Aditya V; Switz, Neil A; Fletcher, Daniel A; Waller, Laura
2015-01-01
We demonstrate the design and application of an add-on device for improving the diagnostic and research capabilities of CellScope, a low-cost, smartphone-based point-of-care microscope. We replace the single LED illumination of the original CellScope with a programmable domed LED array. By leveraging recent advances in computational illumination, this new device enables simultaneous multi-contrast imaging with brightfield, darkfield, and phase imaging modes. Further, we scan through illumination angles to capture lightfield datasets, which can be used to recover 3D intensity and phase images without any hardware changes. This digital refocusing procedure can be used for either 3D imaging or software-only focus correction, reducing the need for precise mechanical focusing during field experiments. All acquisition and processing is performed on the mobile phone and controlled through a smartphone application, making the computational microscope compact and portable. Using multiple samples and different objective magnifications, we demonstrate that the performance of our device is comparable to that of a commercial microscope. This unique device platform extends the field imaging capabilities of CellScope, opening up new clinical and research possibilities. PMID:25969980
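The digital refocusing described in the abstract above rests on the lightfield shift-and-add principle: images captured under different illumination angles are shifted in proportion to the desired focal depth and averaged. A toy sketch with integer pixel shifts (the linear shift model and `angles` convention are our assumptions, not the CellScope implementation):

```python
import numpy as np

def refocus(images, angles, z):
    """Light-field style digital refocusing by shift-and-add: each image taken
    under illumination angle (ax, ay) is shifted by -a*z pixels in x and y,
    then the stack is averaged.  Features at defocus z realign and sharpen;
    features at other depths blur out."""
    acc = np.zeros_like(images[0], dtype=float)
    for img, (ax, ay) in zip(images, angles):
        acc += np.roll(np.roll(img, -int(round(ax * z)), axis=1),
                       -int(round(ay * z)), axis=0)
    return acc / len(images)
```

Sweeping `z` and keeping the sharpest result gives software-only focus correction without moving any optics.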
Guan, Zeyi; Lee, Juhyun; Jiang, Hao; Dong, Siyan; Jen, Nelson; Hsiai, Tzung; Ho, Chih-Ming; Fei, Peng
2015-01-01
We developed a compact plane illumination plugin (PIP) device which enabled plane illumination and light sheet fluorescence imaging on a conventional inverted microscope. The PIP device allowed the integration of the microscope with a tunable laser sheet profile, fast image acquisition, and 3-D scanning. The device is both compact, measuring approximately 15 by 5 by 5 cm, and cost-effective, since we employed consumer electronics and an inexpensive device molding method. We demonstrated that PIP provided significant contrast and resolution enhancement to conventional microscopy through imaging different multi-cellular fluorescent structures, including 3-D branched cells in vitro and live zebrafish embryos. Imaging with the PIP integrated greatly reduced out-of-focus contamination and generated sharper contrast in acquired 2-D plane images when compared with the stand-alone inverted microscope. As a result, the dynamic fluid domain of the beating zebrafish heart was clearly segmented and functional monitoring of the heart was achieved. Furthermore, the enhanced axial resolution established by the thin plane illumination of PIP enabled 3-D reconstruction of the branched cellular structures, improving the functionality of wide-field microscopy. PMID:26819828
NASA Astrophysics Data System (ADS)
Shen, Wei; Zhao, Kai; Jiang, Yuan; Wang, Yan; Bai, Xiang; Yuille, Alan
2017-11-01
Object skeletons are useful for object representation and object detection. They are complementary to the object contour, and provide extra information, such as how object scale (thickness) varies among object parts. But object skeleton extraction from natural images is very challenging, because it requires the extractor to capture both local and non-local image context in order to determine the scale of each skeleton pixel. In this paper, we present a novel fully convolutional network with multiple scale-associated side outputs to address this problem. By observing the relationship between the receptive field sizes of the different layers in the network and the skeleton scales they can capture, we introduce two scale-associated side outputs to each stage of the network. The network is trained by multi-task learning, where one task is skeleton localization, classifying whether a pixel is a skeleton pixel or not, and the other is skeleton scale prediction, regressing the scale of each skeleton pixel. Supervision is imposed at different stages by guiding the scale-associated side outputs toward the ground-truth skeletons at the appropriate scales. The responses of the multiple scale-associated side outputs are then fused in a scale-specific way to detect skeleton pixels using multiple scales effectively. Our method achieves promising results on two skeleton extraction datasets, and significantly outperforms other competitors. Additionally, the usefulness of the obtained skeletons and scales (thickness) is verified on two object detection applications: foreground object segmentation and object proposal detection.
Development of multi-pixel x-ray source using oxide-coated cathodes.
Kandlakunta, Praneeth; Pham, Richard; Khan, Rao; Zhang, Tiezhi
2017-07-07
Multi-pixel x-ray sources facilitate new designs of imaging modalities that may offer faster imaging speed, improved image quality, and more compact geometry. We are developing a high-brightness multi-pixel thermionic emission x-ray (MPTEX) source based on oxide-coated cathodes. Oxide cathodes have high emission efficiency and thereby produce high emission current density at lower temperature than traditional tungsten filaments. Indirectly heated micro-rectangular oxide cathodes were developed using carbonates, which were converted to semiconducting oxides of barium, strontium, and calcium after activation. Each cathode produces a focal spot on an elongated fixed anode. X-ray beam ON/OFF control is performed by source-switching electronics, which supply bias voltage to the cathode emitters. In this paper, we report the initial performance of the oxide-coated cathodes and the MPTEX source.
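The emission-efficiency argument can be made concrete with the ideal Richardson-Dushman law, J = A0·T²·exp(-φ/kT): a low-work-function oxide cathode emits a higher current density at roughly 1050 K than tungsten does at roughly 2600 K. The operating temperatures and work functions below are textbook-order assumptions chosen for illustration, not measured MPTEX values:

```python
import math

def richardson_current_density(T_kelvin, work_fn_eV, A0=120.0):
    """Ideal Richardson-Dushman thermionic emission current density in A/cm^2.
    A0 ~ 120 A cm^-2 K^-2 is the Richardson constant; k is in eV/K."""
    k = 8.617e-5  # Boltzmann constant, eV/K
    return A0 * T_kelvin ** 2 * math.exp(-work_fn_eV / (k * T_kelvin))

# Illustrative (assumed) operating points: a (Ba,Sr,Ca)-oxide cathode with a
# low work function runs far cooler than tungsten yet emits more densely.
j_oxide = richardson_current_density(1050.0, 1.2)     # oxide cathode sketch
j_tungsten = richardson_current_density(2600.0, 4.5)  # tungsten filament sketch
```

The exponential dependence on φ/kT is what lets the oxide cathode dominate despite its much lower temperature.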
Cerebral vessels segmentation for light-sheet microscopy image using convolutional neural networks
NASA Astrophysics Data System (ADS)
Hu, Chaoen; Hui, Hui; Wang, Shuo; Dong, Di; Liu, Xia; Yang, Xin; Tian, Jie
2017-03-01
Cerebral vessel segmentation is an important step in image analysis for brain function and brain disease studies. To extract all cerebrovascular patterns, including arteries and capillaries, filter-based methods are often used to segment vessels. However, the design of accurate and robust vessel segmentation algorithms remains challenging, due to the variety and complexity of images, especially in cerebral blood vessel segmentation. In this work, we address the problem of automatic and robust segmentation of cerebral micro-vessel structures in cerebrovascular images of mouse brain acquired with a light-sheet microscope. To segment micro-vessels in large-scale image data, we propose a convolutional neural network (CNN) architecture trained on 1.58 million manually labelled pixels. Three convolutional layers and one fully connected layer are used in the CNN model. We extract 32×32-pixel patches from the acquired brain vessel images as the training set fed into the CNN for classification. The network is trained to output the probability that the centre pixel of an input patch belongs to a vessel structure. To build the CNN architecture, a series of mouse brain vascular images acquired with a commercial light-sheet fluorescence microscopy (LSFM) system was used to train the model. The experimental results demonstrate that our approach is a promising method for effectively segmenting micro-vessel structures in cerebrovascular images with dense vessels, non-uniform grey levels, and large-scale contrast variations.
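The training-set construction described above can be sketched in a few lines of numpy: fixed-size windows are cut from the image, and each window's training target is the manual label of its centre pixel (vessel = 1, background = 0). The stride and function names are our own illustrative choices, not the paper's sampling scheme:

```python
import numpy as np

def extract_patches(image, labels, patch=32, stride=32):
    """Cut patch x patch windows from a vessel image; each window's target
    is the manually annotated label of its centre pixel."""
    xs, ys, half = [], [], patch // 2
    H, W = image.shape
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            xs.append(image[r:r + patch, c:c + patch])
            ys.append(labels[r + half, c + half])
    return np.stack(xs), np.array(ys)
```

A CNN trained on such pairs then predicts, for any new patch, the probability that its centre pixel lies on a vessel.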
NASA Astrophysics Data System (ADS)
Champion, N.
2012-08-01
Unlike aerial images, satellite images are often affected by the presence of clouds. Identifying and removing these clouds is one of the primary steps in processing satellite images, as they may corrupt subsequent procedures such as atmospheric corrections, DSM production, or land cover classification. The main goal of this paper is to present the cloud detection approach developed at the French mapping agency. Our approach relies on the availability of multi-temporal satellite images (i.e. time series that generally contain between 5 and 10 images) and on a region-growing procedure. Seeds (corresponding to clouds) are first extracted through a pixel-to-pixel comparison between the images of the time series (the presence of a cloud is here assumed to be related to a high variation of reflectance between two images). Clouds are then delineated finely using a dedicated region-growing algorithm. The method, originally designed for panchromatic SPOT5-HRS images, is tested in this paper using time series with 9 multi-temporal satellite images. Our preliminary experiments show the good performance of our method. In the near future, the method will be applied to Pléiades images acquired during the in-flight commissioning phase of the satellite (launched at the end of 2011). In that context, a particular goal of this paper is to show to what extent and in what way our method can be adapted to this kind of imagery.
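The two-stage idea (multi-temporal seed extraction, then region growing) can be sketched compactly. The thresholds, the per-pixel minimum composite used as the cloud-free reference, and the 4-connected growth rule below are illustrative stand-ins for the paper's actual criteria:

```python
import numpy as np
from collections import deque

def detect_clouds(images, seed_jump=0.4, grow_tol=0.15):
    """Seeds: pixels whose reflectance rises sharply above a per-pixel
    cloud-free estimate (the temporal minimum), since a cloud is assumed
    brighter than the ground it hides. Each seed is then grown over
    4-connected neighbours of similar brightness to delineate the cloud."""
    stack = np.stack(images)          # (t, h, w) time series
    ref = stack.min(axis=0)           # per-pixel cloud-free estimate
    masks = []
    for img in images:
        seeds = (img - ref) > seed_jump
        mask = seeds.copy()
        q = deque(zip(*np.nonzero(seeds)))
        while q:
            r, c = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                        and not mask[rr, cc]
                        and abs(img[rr, cc] - img[r, c]) < grow_tol):
                    mask[rr, cc] = True   # grown pixels join the queue,
                    q.append((rr, cc))    # so the cloud spreads outward
        masks.append(mask)
    return masks
```

Pixels too dim to be seeds can still be absorbed into a cloud mask if they sit next to a seed and differ from it by less than the growth tolerance.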
Image Segmentation for Connectomics Using Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tasdizen, Tolga; Seyedhosseini, Mojtaba; Liu, TIng
Reconstruction of neural circuits at the microscopic scale of individual neurons and synapses, also known as connectomics, is an important challenge for neuroscience. While an important motivation of connectomics is providing anatomical ground truth for neural circuit models, the ability to decipher neural wiring maps at the individual cell level is also important in studies of many neurodegenerative diseases. Reconstruction of a neural circuit at the individual neuron level requires the use of electron microscopy images due to their extremely high resolution. Computational challenges include pixel-by-pixel annotation of these images into classes such as cell membrane, mitochondria and synaptic vesicles, and the segmentation of individual neurons. State-of-the-art image analysis solutions are still far from the accuracy and robustness of human vision and biologists are still limited to studying small neural circuits using mostly manual analysis. In this chapter, we describe our image analysis pipeline that makes use of novel supervised machine learning techniques to tackle this problem.
Spectral X-Ray Diffraction using a 6 Megapixel Photon Counting Array Detector.
Muir, Ryan D; Pogranichniy, Nicholas R; Muir, J Lewis; Sullivan, Shane Z; Battaile, Kevin P; Mulichak, Anne M; Toth, Scott J; Keefe, Lisa J; Simpson, Garth J
2015-03-12
Pixel-array detectors allow single-photon counting to be performed on a massively parallel scale, with several million counting circuits and detectors in the array. Because the number of photoelectrons produced at the detector surface depends on the photon energy, these detectors offer the possibility of spectral imaging. In this work, a statistical model of the instrument response is used to calibrate the detector on a per-pixel basis. In turn, the calibrated sensor was used to separate dual-energy diffraction measurements into two monochromatic images. Target applications include multi-wavelength diffraction to aid in protein structure determination and x-ray diffraction imaging.
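Dual-energy separation can be illustrated as a per-pixel 2×2 linear unmixing: once calibration yields the probability that a photon of each energy trips a high discriminator threshold, the total and above-threshold counts determine both monochromatic images. The two-counter model and the probabilities below are our own simplified illustration, not the paper's calibrated instrument response:

```python
import numpy as np

def separate_dual_energy(total_counts, high_counts, p_hi_e1, p_hi_e2):
    """Per-pixel unmixing of a two-energy beam from two counters:
        total = n1 + n2
        high  = p1*n1 + p2*n2
    with p1, p2 the calibrated probabilities that an E1 / E2 photon
    exceeds the high threshold. Solving the 2x2 system gives n1, n2."""
    n2 = (high_counts - p_hi_e1 * total_counts) / (p_hi_e2 - p_hi_e1)
    n1 = total_counts - n2
    return n1, n2
```

Applied pixel-wise over the whole array, this turns one dual-energy exposure into two monochromatic diffraction images.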
Multispectral image fusion for target detection
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-09-01
Various methods of multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and Principal Component Analysis (PCA), and against its two source bands, visible and infrared. The task studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
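Of the baseline methods, PCA fusion is easy to state precisely: each pixel's (visible, infrared) pair is projected onto the first principal component of the joint pixel distribution. A minimal numpy sketch of that baseline (the sign convention and names are ours; MSSF itself is a feature-level method and is not reproduced here):

```python
import numpy as np

def pca_fuse(band_a, band_b):
    """Pixel-level PCA fusion: project the two co-registered bands onto the
    dominant eigenvector of their joint covariance matrix."""
    x = np.stack([band_a.ravel(), band_b.ravel()])   # (2, n) pixel pairs
    x = x - x.mean(axis=1, keepdims=True)
    cov = np.cov(x)
    vals, vecs = np.linalg.eigh(cov)
    w = vecs[:, np.argmax(vals)]                     # first principal component
    if w.sum() < 0:                                  # fix the sign convention
        w = -w
    fused = w @ x
    return fused.reshape(band_a.shape)
```

When the two bands are identical, the fused image is just a rescaled, mean-centred copy of either band, which is a quick sanity check on the projection.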
High-performance imaging of stem cells using single-photon emissions
NASA Astrophysics Data System (ADS)
Wagenaar, Douglas J.; Moats, Rex A.; Hartsough, Neal E.; Meier, Dirk; Hugg, James W.; Yang, Tang; Gazit, Dan; Pelled, Gadi; Patt, Bradley E.
2011-10-01
Radiolabeled cells have been imaged for decades in the field of autoradiography. Recent advances in detector and microelectronics technologies have enabled the new field of "digital autoradiography", which remains limited to ex vivo specimens of thin tissue slices. The 3D field-of-view (FOV) of single-cell imaging can be extended to millimeters if the low-energy (10-30 keV) photon emissions of radionuclides are used for single-photon nuclear imaging. This new microscope uses a coded aperture foil made of highly attenuating elements such as gold or platinum to form the image, acting as a kind of "lens". The detectors used for single-photon emission microscopy are typically silicon detectors with a pixel pitch less than 60 μm. The goal of this work is to image radiolabeled mesenchymal stem cells in vivo in an animal model of tendon repair processes. Single-photon nuclear imaging is an attractive modality for translational medicine since the labeled cells can be imaged simultaneously with the reparative processes by using the dual-isotope imaging technique. The details of our microscope's two-layer gold aperture and the operation of the energy-dispersive, pixellated silicon detector are presented, along with the first demonstration of energy discrimination with a 57Co source. Cell labeling techniques have been augmented by genetic engineering with the sodium-iodide symporter, a type of reporter gene imaging method that enables in vivo uptake of free 99mTc or an iodine isotope days or weeks after the insertion of the genetically modified stem cells into the animal model. This microscopy work in animal research may expand to the imaging of reporter-enabled stem cells simultaneously with the expected biological repair process in human clinical trials of stem cell therapies.
Integrated Lens Antennas for Multi-Pixel Receivers
NASA Technical Reports Server (NTRS)
Lee, Choonsup; Chattopadhyay, Goutam
2011-01-01
Future astrophysics and planetary experiments are expected to require large focal plane arrays with thousands of detectors. Feedhorns have excellent performance, but their mass, size, fabrication challenges, and expense become prohibitive for very large focal plane arrays. Most planar antenna designs produce broad beam patterns, and therefore require additional elements for efficient coupling to the telescope optics, such as substrate lenses or micromachined horns. An antenna array with integrated silicon microlenses that can be fabricated photolithographically effectively addresses these issues. This approach eliminates manual assembly of arrays of lenses and reduces assembly errors and tolerances. Moreover, an antenna array without metallic horns significantly reduces the mass of any planetary instrument. The design has a monolithic array of lens-coupled, leaky-wave antennas operating at millimeter- and submillimeter-wave frequencies. Electromagnetic simulations show that the electromagnetic fields in such lens-coupled antennas are mostly confined within an angular sector of approximately 12-15°. This means that one needs to design only a small-angle sector lens, which is much easier to fabricate using standard lithographic techniques than a full hyper-hemispherical lens. Moreover, this small-angle sector lens can be easily integrated with the antennas in an array for multi-pixel imager and receiver implementation. The leaky antenna is designed using double-slot irises and is fed with the TE10 waveguide mode. The lens implementation starts with a silicon substrate. Photoresist with appropriate thickness (optimized for the lens size) is spun on the substrate and then reflowed to obtain the desired lens structure. An antenna array integrated with individual lenses for higher directivity and an excellent beam profile will go a long way towards realizing multi-pixel arrays and imagers.
This technology will enable a new generation of compact, low-mass, and highly efficient antenna arrays for use in multi-pixel receivers and imagers for future planetary and astronomical instruments. These antenna arrays can also be used in radars and imagers for contraband detection at stand-off distances. This will be enabling technology for future balloon-borne, smaller explorer class mission (SMEX), and other missions, and for a wide range of proposed planetary sounders and radars for planetary bodies.
Direct imaging detectors for electron microscopy
NASA Astrophysics Data System (ADS)
Faruqi, A. R.; McMullan, G.
2018-01-01
Electronic detectors used for imaging in electron microscopy are reviewed in this paper. Much of the detector technology is based on developments in microelectronics, which have allowed the design of direct detectors with fine pixels and fast readout that are sufficiently radiation hard for practical use. Detectors included in this review are hybrid pixel detectors, monolithic active pixel sensors based on CMOS technology, and pnCCDs, which share one important feature: they are all direct imaging detectors, relying on directly converting energy in a semiconductor. Traditional methods of recording images in the electron microscope, such as film and CCDs, are mentioned briefly, along with a more detailed description of direct electronic detectors. Many applications benefit from the use of direct electron detectors, and a few examples are mentioned in the text. In recent years one of the most dramatic advances in structural biology has been the deployment of the new backthinned CMOS direct detectors to attain near-atomic resolution molecular structures with electron cryo-microscopy (cryo-EM). The development of direct detectors, along with a number of other parallel advances, has allowed a very significant amount of new information to be recorded in the images that was not previously possible, and this forms the main emphasis of the review.
Imaging natural materials with a quasi-microscope. [Spectrophotometry of granular materials]
NASA Technical Reports Server (NTRS)
Bragg, S.; Arvidson, R.
1977-01-01
A Viking lander camera with auxiliary optics mounted inside the dust post was evaluated to determine its capability for imaging the inorganic properties of granular materials. During mission operations, prepared samples would be delivered to a plate positioned within the camera's field of view and depth of focus. The auxiliary optics would then allow soil samples to be imaged with an 11 μm pixel size in the broad-band (high-resolution, black-and-white) mode, and a 33 μm pixel size in the multispectral mode. The equipment would be used to characterize: (1) the size distribution of grains produced by igneous (intrusive and extrusive) processes or by shock metamorphism; (2) the size distribution resulting from crushing, chemical alteration, or hydraulic or aerodynamic sorting; (3) the shape, degree of grain roundness, and surface texture induced by mechanical and chemical alteration; and (4) the mineralogy and chemistry of grains.
Curiosity's Mars Hand Lens Imager (MAHLI) Investigation
Edgett, Kenneth S.; Yingst, R. Aileen; Ravine, Michael A.; Caplinger, Michael A.; Maki, Justin N.; Ghaemi, F. Tony; Schaffner, Jacob A.; Bell, James F.; Edwards, Laurence J.; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sullivan, Robert J.; Sumner, Dawn Y.; Thomas, Peter C.; Jensen, Elsa H.; Simmonds, John J.; Sengstacken, Aaron J.; Wilson, Reg G.; Goetz, Walter
2012-01-01
The Mars Science Laboratory (MSL) Mars Hand Lens Imager (MAHLI) investigation will use a 2-megapixel color camera with a focusable macro lens aboard the rover, Curiosity, to investigate the stratigraphy and grain-scale texture, structure, mineralogy, and morphology of geologic materials in northwestern Gale crater. Of particular interest is the stratigraphic record of a ~5 km thick layered rock sequence exposed on the slopes of Aeolis Mons (also known as Mount Sharp). The instrument consists of three parts: a camera head mounted on the turret at the end of a robotic arm, an electronics and data storage assembly located inside the rover body, and a calibration target mounted on the robotic arm shoulder azimuth actuator housing. MAHLI can acquire in-focus images at working distances from ~2.1 cm to infinity. At the minimum working distance, image pixel scale is ~14 μm per pixel and very coarse silt grains can be resolved. At the working distance of the Mars Exploration Rover Microscopic Imager cameras aboard Spirit and Opportunity, MAHLI's resolution is comparable at ~30 μm per pixel. Onboard capabilities include autofocus, auto-exposure, sub-framing, video imaging, Bayer pattern color interpolation, lossy and lossless compression, focus merging of up to 8 focus stack images, white light and longwave ultraviolet (365 nm) illumination of nearby subjects, and 8 gigabytes of non-volatile memory data storage.
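Focus merging of a stack can be sketched with a generic per-pixel sharpness rule: at each pixel, keep the frame whose local gradient is strongest, i.e. the frame in which that pixel is best focused. This is a standard focus-stacking illustration, not MAHLI's actual onboard flight algorithm:

```python
import numpy as np

def focus_merge(stack):
    """Merge a focus stack: per pixel, select the frame with the strongest
    local gradient magnitude (a simple focus measure)."""
    stack = np.asarray(stack, dtype=float)            # (n_frames, h, w)
    # Absolute first differences along each axis approximate the gradient.
    gy = np.abs(np.diff(stack, axis=1, prepend=stack[:, :1, :]))
    gx = np.abs(np.diff(stack, axis=2, prepend=stack[:, :, :1]))
    sharpness = gx + gy
    best = sharpness.argmax(axis=0)                   # (h, w) winning frame index
    h, w = best.shape
    merged = stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
    return merged, best
```

A flat (defocused) frame loses everywhere its gradient is zero, so textured regions are drawn from the sharper frame of the stack.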
Design of small confocal endo-microscopic probe working under multiwavelength environment
NASA Astrophysics Data System (ADS)
Kim, Young-Duk; Ahn, MyoungKi; Gweon, Dae-Gab
2010-02-01
Optical imaging systems are now widely used for medical purposes. Because they offer high resolution and a variety of imaging modes, specific diseases can be diagnosed at an early stage. These methods are used to obtain high-resolution images of the human body and can verify whether cells are infected by a virus. The confocal microscope is one of the best-known imaging systems used for in-vivo imaging. Because most diseases are accompanied by cellular-level changes, doctors can make an early diagnosis by observing cellular images of human organs. Current research is focused on the development of endo-microscopes, which have a great advantage in accessibility to the human body. In this research, we designed a small probe that is connected to a confocal microscope through an optical fiber bundle and works as an endo-microscope. The probe is designed mainly to correct chromatic aberration so that various laser sources can be used for both fluorescence-type and reflection-type confocal imaging. By using two laser sources at the same time, we demonstrated a multi-modality confocal endo-microscope.
Multi-pixel high-resolution three-dimensional imaging radar
NASA Technical Reports Server (NTRS)
Cooper, Ken B. (Inventor); Dengler, Robert J. (Inventor); Siegel, Peter H. (Inventor); Chattopadhyay, Goutam (Inventor); Ward, John S. (Inventor); Juan, Nuria Llombart (Inventor); Bryllert, Tomas E. (Inventor); Mehdi, Imran (Inventor); Tarsala, Jan A. (Inventor)
2012-01-01
A three-dimensional imaging radar operating at high frequency (e.g., a 670 GHz radar using low phase-noise synthesizers and a fast chirper to generate a frequency-modulated continuous-wave (FMCW) waveform) is disclosed that operates with a multiplexed beam to obtain range information simultaneously on multiple pixels of a target. A source transmit beam may be divided by a hybrid coupler into multiple transmit beams multiplexed together and directed to be reflected off a target, returning as a single receive beam that is demultiplexed and processed to reveal range information for the separate pixels of the target associated with each transmit beam simultaneously. The multiple transmit beams may be shaped with appropriate optics to be temporally and spatially differentiated before being directed to the target. Temporal differentiation corresponds to different intermediate frequencies, which separate the range information of the multiple pixels. Collinear transmit beams with differentiated polarizations may also be implemented.
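The underlying range measurement follows the standard FMCW relation R = c·f_b·T / (2B), and with multiplexed beams each pixel's return sits at its own intermediate-frequency offset, which is subtracted before applying the relation. A hedged sketch with assumed chirp parameters (not the instrument's actual 670 GHz settings):

```python
def fmcw_range(f_beat_hz, chirp_bandwidth_hz, chirp_time_s, c=3.0e8):
    """Standard FMCW relation: the beat frequency maps linearly to range,
    R = c * f_b * T / (2 * B)."""
    return c * f_beat_hz * chirp_time_s / (2.0 * chirp_bandwidth_hz)

def multiplexed_ranges(measured_hz, if_offsets_hz, B, T):
    """Each multiplexed transmit beam is tagged with its own intermediate-
    frequency offset; subtracting it recovers that pixel's true beat line."""
    return [fmcw_range(f - off, B, T) for f, off in zip(measured_hz, if_offsets_hz)]
```

With an assumed 30 GHz chirp bandwidth over 1 ms, each metre of range corresponds to a 200 kHz beat, so two pixels offset by 1 MHz in intermediate frequency remain cleanly separable in one spectrum.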
NASA Astrophysics Data System (ADS)
Farda, N. M.; Danoedoro, P.; Hartono; Harjoko, A.
2016-11-01
Remote sensing image data are now abundant, and this large volume of data creates a "knowledge gap" in the extraction of selected information, especially on coastal wetlands. Coastal wetlands provide ecosystem services essential to people and the environment. The aim of this research is to extract coastal wetland information from satellite data using pixel-based and object-based image mining approaches. Landsat MSS, Landsat 5 TM, Landsat 7 ETM+, and Landsat 8 OLI images covering the Segara Anakan lagoon were selected to represent data from various multi-temporal images. The inputs for image mining are visible and near-infrared bands, PCA bands, inverse PCA bands, mean-shift segmentation bands, bare soil index, vegetation index, wetness index, elevation from SRTM and ASTER GDEM, and GLCM (Haralick) or variability texture. Three methods were applied to extract coastal wetlands using image mining: pixel-based Decision Tree C4.5, pixel-based Back-Propagation Neural Network, and object-based Mean Shift segmentation with Decision Tree C4.5. The results show that remote sensing image mining can be used to map the coastal wetland ecosystem. Decision Tree C4.5 produced the map with the highest accuracy (0.75 overall kappa). The availability of remote sensing image mining for mapping coastal wetlands is very important for providing a better understanding of the spatiotemporal dynamics of coastal wetlands.
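Two of the listed inputs are band-ratio indices, and the pixel-based classifier is a decision tree over such features. A toy sketch of a vegetation index feeding a one-split tree in the spirit of C4.5 (the threshold and class names are illustrative, not the trained model):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, one of the per-pixel
    features fed to the classifiers: (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red)

def classify_wetland(nir, red, ndvi_thresh=0.2):
    """A toy one-split decision rule over the NDVI feature:
    low NDVI -> water, otherwise vegetated wetland cover."""
    return np.where(ndvi(nir, red) < ndvi_thresh, 'water', 'vegetation')
```

A trained C4.5 tree stacks many such threshold splits over the full feature set (indices, texture, elevation) rather than this single hand-picked one.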
NASA Astrophysics Data System (ADS)
Heremans, Stien; Suykens, Johan A. K.; Van Orshoven, Jos
2016-02-01
To be physically interpretable, sub-pixel land cover fractions or abundances should fulfil two constraints: the Abundance Non-negativity Constraint (ANC) and the Abundance Sum-to-one Constraint (ASC). This paper focuses on the effect of imposing these constraints on the MultiLayer Perceptron (MLP) for multi-class sub-pixel land cover classification of a time series of low-resolution MODIS images covering the northern part of Belgium. Two constraining modes were compared: (i) an in-training approach that uses 'softmax' as the transfer function in the MLP's output layer and (ii) a post-training approach that linearly rescales the outputs of the unconstrained MLP. Our results demonstrate that the pixel-level prediction accuracy is markedly increased by the explicit enforcement, both in-training and post-training, of the ANC and the ASC. For aggregations of pixels (municipalities), the constrained perceptrons perform at least as well as their unconstrained counterparts. Although the difference in performance between the in-training and post-training approaches is small, we recommend the former for integrating the fractional abundance constraints into MLPs meant for sub-pixel land cover estimation, regardless of the targeted level of spatial aggregation.
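The two constraining modes can be written out directly: softmax satisfies the ANC and ASC by construction, while the post-training route clips the unconstrained outputs at zero and rescales each pixel's abundances to sum to one. A minimal numpy sketch (the clipping step in the post-training variant is our own assumption about how negative raw outputs are handled):

```python
import numpy as np

def softmax(z):
    """In-training constraint: softmax outputs are non-negative (ANC)
    and sum to one (ASC) by construction."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))   # shift for stability
    return e / e.sum(axis=-1, keepdims=True)

def rescale(raw):
    """Post-training constraint: clip the unconstrained MLP outputs at
    zero, then linearly rescale each pixel's abundances to sum to one."""
    a = np.clip(raw, 0.0, None)
    return a / a.sum(axis=-1, keepdims=True)
```

Both functions map a per-pixel vector of class scores onto the simplex of physically interpretable abundances; only the training signal differs.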
Geometric registration of remotely sensed data with SAMIR
NASA Astrophysics Data System (ADS)
Gianinetto, Marco; Barazzetti, Luigi; Dini, Luigi; Fusiello, Andrea; Toldo, Roberto
2015-06-01
The commercial market offers several software packages for the registration of remotely sensed data through standard one-to-one image matching. Although very rapid and simple, this strategy does not take into consideration all the interconnections among the images of a multi-temporal data set. This paper presents a new scientific software package, called Satellite Automatic Multi-Image Registration (SAMIR), able to extend the traditional registration approach towards multi-image global processing. Tests carried out with high-resolution optical (IKONOS) and high-resolution radar (COSMO-SkyMed) data showed that SAMIR can improve the registration phase with a more rigorous and robust workflow, without initial approximations, user interaction, or limitations on spatial/spectral data size. The validation highlighted sub-pixel accuracy in image co-registration for the considered imaging technologies, including optical and radar imagery.
Differential polarization laser scanning microscopy: biological applications
NASA Astrophysics Data System (ADS)
Steinbach, G.; Besson, F.; Pomozi, I.; Garab, G.
2005-09-01
With the aid of a differential polarization (DP) apparatus, developed in our laboratory and attached to our laser scanning confocal microscope, we can measure the magnitude and spatial distribution of 8 different DP quantities: linear and circular dichroism (LD&CD), linear and circular anisotropy of the emission (R and CPL, confocal), fluorescence detected dichroisms (FDLD&FDCD, confocal), linear birefringence (LB), and the degree of polarization of fluorescence emission (P, confocal). The attachment uses high frequency modulation and subsequent demodulation, via lock-in amplifier, of the detected intensity values, and records and displays pixel-by-pixel the measured DP quantity. These microscopic DP data carry important physical information on the molecular architecture of anisotropically organized samples. Microscopic DP measurements are thought to be of particular importance in biology. In most biological samples anisotropy is difficult to determine with conventional, macroscopic DP measurements and microscopic variations are of special significance. In this paper, we describe the method of LB imaging. Using magnetically oriented isolated chloroplasts trapped in polyacrylamide gel, we demonstrate that LB can be determined with high sensitivity and good spatial resolution. Granal thylakoid membranes in edge-aligned orientation exhibited strong LB, with large variations in its sign and magnitude. In face-aligned position LB was considerably weaker, and tended to vanish when averaged for the whole image. The strong local variations are attributed to the inherent heterogeneity of the membranes, i.e. to their internal differentiation into multilamellar, stacked membranes (grana), and single thylakoids (stroma membranes). Further details and applications of our DP-LSM will be published elsewhere.
Medical image registration based on normalized multidimensional mutual information
NASA Astrophysics Data System (ADS)
Li, Qi; Ji, Hongbing; Tong, Ming
2009-10-01
Registration of medical images is an essential research topic in medical image processing and applications, and especially a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in medical images. Integrating ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as similarity criterion to register multimodality images. Finally the immune algorithm is used to search registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
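The similarity criterion can be illustrated in its plain two-image form: mutual information computed from the joint intensity histogram, normalized by the joint entropy, peaks when the floating and reference images are best aligned. This sketch omits the paper's ordinal-feature dimensions and immune-algorithm search; it is the generic 2-D version only:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(A,B) = (H(A) + H(B)) / H(A,B), estimated from the joint
    intensity histogram of the two images; higher means better aligned."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]                       # 0·log 0 := 0
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

For identical images the joint histogram collapses onto the diagonal and NMI reaches its maximum of 2; for statistically independent images it falls to 1, which is why maximizing it drives the transformation search.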
Retinal oxygen saturation evaluation by multi-spectral fundus imaging
NASA Astrophysics Data System (ADS)
Khoobehi, Bahram; Ning, Jinfeng; Puissegur, Elise; Bordeaux, Kimberly; Balasubramanian, Madhusudhanan; Beach, James
2007-03-01
Purpose: To develop a multi-spectral method to measure oxygen saturation of the retina in the human eye. Methods: Five Cynomolgus monkeys with normal eyes were anesthetized with intramuscular ketamine/xylazine and intravenous pentobarbital. Multi-spectral fundus imaging was performed in five monkeys with a commercial fundus camera equipped with a liquid crystal tuned filter in the illumination light path and a 16-bit digital camera. Recording parameters were controlled with software written specifically for the application. Seven images at successively longer oxygen-sensing wavelengths were recorded within 4 seconds. Individual images for each wavelength were captured in less than 100 msec of flash illumination. Slightly misaligned images of separate wavelengths due to slight eye motion were registered and corrected by translational and rotational image registration prior to analysis. Numerical values of relative oxygen saturation of retinal arteries and veins and the underlying tissue in between the artery/vein pairs were evaluated by an algorithm previously described, but which is now corrected for blood volume from averaged pixels (n > 1000). Color saturation maps were constructed by applying the algorithm at each image pixel using a Matlab script. Results: Both the numerical values of relative oxygen saturation and the saturation maps correspond to the physiological condition, that is, in a normal retina, the artery is more saturated than the tissue and the tissue is more saturated than the vein. With the multi-spectral fundus camera and proper registration of the multi-wavelength images, we were able to determine oxygen saturation in the primate retinal structures on a tolerable time scale which is applicable to human subjects. Conclusions: Seven wavelength multi-spectral imagery can be used to measure oxygen saturation in retinal artery, vein, and tissue (microcirculation). This technique is safe and can be used to monitor oxygen uptake in humans. 
Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion
NASA Astrophysics Data System (ADS)
Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Tade, Funmilayo; Schuster, David M.; Nieh, Peter; Master, Viraj; Fei, Baowei
2017-02-01
Automatic segmentation of the prostate on CT images has many applications in prostate cancer diagnosis and therapy. However, prostate CT image segmentation is challenging because of the low contrast of soft tissue on CT images. In this paper, we propose an automatic segmentation method combining deep learning with multi-atlas refinement. First, instead of segmenting the whole image, we extract a region of interest (ROI) to exclude irrelevant regions. Then, we use convolutional neural networks (CNN) to learn deep features for distinguishing prostate pixels from non-prostate pixels and obtain preliminary segmentation results. The CNN automatically learns deep features adapted to the data, unlike handcrafted features. Finally, we select several similar atlases to refine the initial segmentation results. The proposed method has been evaluated on a dataset of 92 prostate CT images. Experimental results show that our method achieved a Dice similarity coefficient of 86.80% as compared to the manual segmentation. The deep learning based method can provide a useful tool for automatic segmentation of the prostate on CT images and thus can have a variety of clinical applications.
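The evaluation metric quoted above is standard and worth stating exactly: the Dice similarity coefficient is twice the overlap of the automatic and manual masks divided by their total size, 2|A∩B| / (|A| + |B|). A minimal implementation:

```python
import numpy as np

def dice(seg, truth):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 = perfect overlap, 0.0 = disjoint."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    denom = seg.sum() + truth.sum()
    return 2.0 * np.logical_and(seg, truth).sum() / denom if denom else 1.0
```

The reported 86.80% therefore means the CNN-plus-atlas mask and the manual contour share about 87% of their combined volume on average.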
Improvement of Speckle Contrast Image Processing by an Efficient Algorithm.
Steimers, A; Farnung, W; Kohl-Bareis, M
2016-01-01
We demonstrate an efficient algorithm for the temporally and spatially based calculation of speckle contrast for imaging blood flow by laser speckle contrast analysis (LASCA). It reduces the numerical complexity of the necessary calculations, facilitates multi-core and many-core implementations of the speckle analysis, and decouples the temporal or spatial resolution from the SNR. The new algorithm was evaluated for both spatially and temporally based analysis of speckle patterns with different image sizes and numbers of recruited pixels, as sequential, multi-core, and many-core code.
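The quantity being computed is the speckle contrast K = σ/μ over a sliding window, and one classic way to make it cheap (shown here as an illustration; the paper's own algorithm is not reproduced) is to build integral images of I and I², so every window sum costs O(1) instead of O(win²):

```python
import numpy as np

def speckle_contrast(img, win=7):
    """Spatial speckle contrast K = std/mean over a win x win window,
    computed from integral images (cumulative sums) of I and I^2 so the
    per-window cost is constant. Returns the valid-region contrast map."""
    img = img.astype(float)
    S = np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))        # integral of I
    S2 = np.pad((img ** 2).cumsum(0).cumsum(1), ((1, 0), (1, 0)))  # of I^2

    def winsum(T):
        # Window sum via the four-corner rule of a summed-area table.
        return T[win:, win:] - T[:-win, win:] - T[win:, :-win] + T[:-win, :-win]

    n = win * win
    mean = winsum(S) / n
    var = winsum(S2) / n - mean ** 2
    return np.sqrt(np.clip(var, 0, None)) / mean
```

A perfectly uniform image gives K = 0 everywhere; flowing blood blurs the speckle and likewise pushes K towards zero, which is what LASCA maps.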
Benchmark of Machine Learning Methods for Classification of a SENTINEL-2 Image
NASA Astrophysics Data System (ADS)
Pirotti, F.; Sunar, F.; Piragnolo, M.
2016-06-01
Thanks mainly to ESA and USGS, a large volume of free Earth-observation images is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging task, since the land cover of a specific class may present large spatial and spectral variability, and objects may appear at different scales and orientations. In this study, we report the results of benchmarking nine machine learning algorithms, tested for accuracy and speed in training and classification of land-cover classes on a Sentinel-2 dataset. The following machine learning methods (MLM) were tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi-layer perceptron, multi-layer perceptron ensemble, ctree, boosting, and logistic regression. The validation is carried out using a control dataset consisting of an independent classification into 11 land-cover classes of an area of about 60 km2, obtained by manual visual interpretation of high resolution images (20 cm ground sampling distance) by experts. In this study, five of the eleven classes are used, since the others have too few samples (pixels) for the testing and validation subsets. The classes used are the following: (i) urban, (ii) sowable areas, (iii) water, (iv) tree plantations, and (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset with k-fold cross-validation (kfold), and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted by each model and the control values over three sets of data: the training dataset (train), the whole control dataset (full), and k-fold cross-validation (kfold) with ten folds.
Results from validation of predictions over the whole dataset (full) show the random forests method with the highest values: a kappa index ranging from 0.55 to 0.42 with the most and least numbers of training pixels, respectively. The two neural networks (multi-layer perceptron and its ensemble) and the support vector machines method, with the default radial basis function kernel, follow closely with comparable performance.
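For reference, the kappa index used above measures agreement beyond chance; a minimal computation from a confusion matrix (toy numbers, not the study's):

```python
def cohen_kappa(confusion):
    """Cohen's kappa: (observed - chance) / (1 - chance) agreement,
    computed from a square confusion matrix (rows: reference, cols: predicted)."""
    n = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    # chance agreement from row and column marginals
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / (n * n)
    return (observed - expected) / (1.0 - expected)

# toy 2-class matrix: strong but imperfect agreement -> kappa = 0.7
kappa = cohen_kappa([[40, 10], [5, 45]])
```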
NASA Astrophysics Data System (ADS)
Angulo-Rodríguez, Leticia M.; Laurence, Audrey; Jermyn, Michael; Sheehy, Guillaume; Sibai, Mira; Petrecca, Kevin; Roberts, David W.; Paulsen, Keith D.; Wilson, Brian C.; Leblond, Frédéric
2016-03-01
Cancer tissue often remains after brain tumor resection due to the inability to detect the full extent of cancer during surgery, particularly near tumor boundaries. Commercial systems are available for intra-operative real-time aminolevulinic acid (ALA)-induced protoporphyrin IX (PpIX) fluorescence imaging. These are standard white-light neurosurgical microscopes adapted with optical components for fluorescence excitation and detection. However, these instruments lack sensitivity and specificity, which limits the ability to detect low levels of PpIX and distinguish it from tissue auto-fluorescence. Current systems also cannot provide repeatable and unbiased quantitative fluorophore concentration values because of the unknown and highly variable light attenuation by tissue. We present a highly sensitive spectroscopic fluorescence imaging system that is seamlessly integrated onto a neurosurgical microscope. Hardware and software were developed to achieve through-microscope spatially modulated illumination for 3D profilometry and to use this information to extract tissue optical properties to correct for the effects of tissue light attenuation. This gives pixel-by-pixel quantified fluorescence values and improves detection of low PpIX concentrations. This is achieved using a high-sensitivity Electron Multiplying Charge Coupled Device (EMCCD) with a Liquid Crystal Tunable Filter (LCTF), whereby spectral bands are acquired sequentially; a snapshot camera system with simultaneous acquisition of all bands is used for profilometry and optical property recovery. Sensitivity and specificity to PpIX are demonstrated using brain tissue phantoms and intraoperative human data acquired in an ongoing clinical study using PpIX fluorescence to guide glioma resection.
Christodoulidis, Argyrios; Hurtut, Thomas; Tahar, Houssem Ben; Cheriet, Farida
2016-09-01
Segmenting the retinal vessels from fundus images is a prerequisite for many CAD systems for the automatic detection of diabetic retinopathy lesions. So far, research efforts have concentrated mainly on the accurate localization of the large to medium diameter vessels. However, failure to detect the smallest vessels at the segmentation step can lead to false positive lesion detection counts in a subsequent lesion analysis stage. In this study, a new hybrid method for the segmentation of the smallest vessels is proposed. Line detection and perceptual organization techniques are combined in a multi-scale scheme. Small vessels are reconstructed from the perceptual-based approach via tracking and pixel painting. The segmentation was validated in a high resolution fundus image database including healthy and diabetic subjects using pixel-based as well as perceptual-based measures. The proposed method achieves 85.06% sensitivity rate, while the original multi-scale line detection method achieves 81.06% sensitivity rate for the corresponding images (p<0.05). The improvement in the sensitivity rate for the database is 6.47% when only the smallest vessels are considered (p<0.05). For the perceptual-based measure, the proposed method improves the detection of the vasculature by 7.8% against the original multi-scale line detection method (p<0.05). Copyright © 2016 Elsevier Ltd. All rights reserved.
Integrated Photonic Neural Probes for Patterned Brain Stimulation
2017-08-14
two-photon imaging Task 3.2: In vivo demonstration of remote optical stimulation using photonic probes and multi-site electrical recording...have patterned nine e-pixels. We can individually address each e-pixel by tuning the color of the input light to the AWG. Figure (8) shows two...Report: Integrated Photonic Neural Probes for Patterned Brain Stimulation The views, opinions and/or findings contained in this report are those of the
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perrine, Kenneth A.; Hopkins, Derek F.; Lamarche, Brian L.
2005-09-01
Biologists and computer engineers at Pacific Northwest National Laboratory have specified, designed, and implemented a hardware/software system for performing real-time, multispectral image processing on a confocal microscope. This solution is intended to extend the capabilities of the microscope, enabling scientists to conduct advanced experiments on cell signaling and other kinds of protein interactions. FRET (fluorescence resonance energy transfer) techniques are used to locate and monitor protein activity. In FRET, it is critical that spectral images be precisely aligned with each other despite disturbances in the physical imaging path caused by imperfections in lenses and cameras, and expansion and contraction of materials due to temperature changes. The central importance of this work is therefore automatic image registration. This runs in a framework that guarantees real-time performance (processing pairs of 1024x1024, 8-bit images at 15 frames per second) and enables the addition of other types of advanced image processing algorithms such as image feature characterization. The supporting system architecture consists of a Visual Basic front-end containing a series of on-screen interfaces for controlling various aspects of the microscope and a script engine for automation. One of the controls is an ActiveX component written in C++ for handling the control and transfer of images. This component interfaces with a pair of LVDS image capture boards and a PCI board containing a 6-million gate Xilinx Virtex-II FPGA. Several types of image processing are performed on the FPGA in a pipelined fashion, including the image registration. The FPGA offloads work that would otherwise need to be performed by the main CPU and has a guaranteed real-time throughput. Image registration is performed in the FPGA by applying a cubic warp on one image to precisely align it with the other image.
Before each experiment, an automated calibration procedure is run in order to set up the cubic warp. During image acquisitions, the cubic warp is evaluated by way of forward differencing. Unwanted pixelation artifacts are minimized by bilinear sampling. The resulting system is state-of-the-art for biological imaging. Precisely registered images enable the reliable use of FRET techniques. In addition, real-time image processing performance allows computed images to be fed back and displayed to scientists immediately, and the pipelined nature of the FPGA allows additional image processing algorithms to be incorporated into the system without slowing throughput.
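Forward differencing, as used here to evaluate the cubic warp, computes a polynomial at uniform steps using additions only; a one-dimensional sketch of the idea (the FPGA implementation itself is not described in this detail):

```python
def cubic_forward_difference(a, b, c, d, h, n):
    """Evaluate p(x) = a*x^3 + b*x^2 + c*x + d at x = 0, h, 2h, ...:
    after the setup below, each new sample costs just three additions."""
    p = d
    d1 = a * h**3 + b * h**2 + c * h   # first forward difference at x = 0
    d2 = 6 * a * h**3 + 2 * b * h**2   # second forward difference at x = 0
    d3 = 6 * a * h**3                  # third difference (constant for a cubic)
    out = []
    for _ in range(n):
        out.append(p)
        p += d1
        d1 += d2
        d2 += d3
    return out

vals = cubic_forward_difference(2, -1, 3, 5, 0.5, 6)
```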
Electron imaging with an EBSD detector.
Wright, Stuart I; Nowell, Matthew M; de Kloe, René; Camus, Patrick; Rampton, Travis
2015-01-01
Electron Backscatter Diffraction (EBSD) has proven to be a useful tool for characterizing the crystallographic orientation aspects of microstructures at length scales ranging from tens of nanometers to millimeters in the scanning electron microscope (SEM). With the advent of high-speed digital cameras for EBSD use, it has become practical to use the EBSD detector as an imaging device similar to a backscatter (or forward-scatter) detector. Using the EBSD detector in this manner enables images exhibiting topographic, atomic density and orientation contrast to be obtained at rates similar to slow scanning in the conventional SEM manner. The high-speed acquisition is achieved through extreme binning of the camera, enough to result in a 5 × 5 pixel pattern. At such high binning, the captured patterns are not suitable for indexing. However, no indexing is required to use the detector as an imaging device. Rather, a 5 × 5 array of images is formed by essentially using each pixel in the 5 × 5 pixel pattern as an individual scattered-electron detector. The images can be formed at traditional EBSD scanning rates by recording the image data during a scan, or through post-processing of patterns recorded at each point in the scan. Such images lend themselves to correlative analysis of the image data with the usual orientation data provided by EBSD and with chemical data obtained simultaneously via X-ray energy dispersive spectroscopy (XEDS). Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Aerosol and Surface Parameter Retrievals for a Multi-Angle, Multiband Spectrometer
NASA Technical Reports Server (NTRS)
Broderick, Daniel
2012-01-01
This software retrieves the surface and atmosphere parameters of multi-angle, multiband spectra. The synthetic spectra are generated by applying the modified Rahman-Pinty-Verstraete Bidirectional Reflectance Distribution Function (BRDF) model and a single-scattering dominated atmosphere model to surface reflectance data from the Multi-angle Imaging SpectroRadiometer (MISR). The aerosol physical model uses a single-scattering approximation with Rayleigh-scattering molecules and Henyey-Greenstein aerosols. The surface and atmosphere parameters of the models are retrieved using the Levenberg-Marquardt algorithm. The software can retrieve the surface and atmosphere parameters at two different scales: the surface parameters are retrieved pixel by pixel, while the atmosphere parameters are retrieved for a group of pixels to which the same atmosphere model parameters are applied. This two-scale approach allows one to select the natural scale of the atmosphere properties relative to surface properties. The software also takes advantage of an intelligent initial condition given by the solution of the neighboring pixels.
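The Henyey-Greenstein phase function mentioned above has a simple closed form; a sketch of it (the asymmetry parameter g in the sanity check is arbitrary, not a value from this software):

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein scattering phase function, normalized over the sphere:
    p(mu) = (1 - g^2) / (4*pi * (1 + g^2 - 2*g*mu)^(3/2))."""
    return (1.0 - g * g) / (4.0 * math.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

# sanity check: the phase function integrates to 1 over the full sphere
g = 0.7
n = 20000
integral = 2 * math.pi * sum(
    henyey_greenstein(-1 + (i + 0.5) * 2 / n, g) * (2 / n) for i in range(n)
)
```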
Multi-scale image segmentation and numerical modeling in carbonate rocks
NASA Astrophysics Data System (ADS)
Alves, G. C.; Vanorio, T.
2016-12-01
Numerical methods based on computational simulations can be an important tool for estimating the physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield results that conflict with physical laboratory measurements. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach, performing segmentation of the rock images and numerical modeling across several scales to account for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. Then, the samples were imaged by Scanning Electron Microscope (SEM) as well as optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by a larger grain/micrite ratio, results show that SEM-scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be better suited for numerical simulations.
Scanning Miniature Microscopes without Lenses
NASA Technical Reports Server (NTRS)
Wang, Yu
2009-01-01
The figure schematically depicts some alternative designs of proposed compact, lightweight optoelectronic microscopes that would contain no lenses and would generate magnified video images of specimens. Microscopes of this type were described previously in Miniature Microscope Without Lenses (NPO - 20218), NASA Tech Briefs, Vol. 22, No. 8 (August 1998), page 43 and Reflective Variants of Miniature Microscope Without Lenses (NPO 20610), NASA Tech Briefs, Vol. 26, No. 9 (September 1999), page 6a. To recapitulate: In the design and construction of a microscope of this type, the focusing optics of a conventional microscope are replaced by a combination of a microchannel filter and a charge-coupled-device (CCD) image detector. Elimination of focusing optics reduces the size and weight of the instrument and eliminates the need for the time-consuming focusing operation. The microscopes described in the cited prior articles contained two-dimensional CCDs registered with two-dimensional arrays of microchannels and, as such, were designed to produce full two-dimensional images, without need for scanning. The microscopes of the present proposal would contain one-dimensional (line image) CCDs registered with linear arrays of microchannels. In the operation of such a microscope, one would scan a specimen along a line perpendicular to the array axis (in other words, one would scan in pushbroom fashion). One could then synthesize a full two-dimensional image of the specimen from the line-image data acquired at one-pixel increments of position along the scan. In one of the proposed microscopes, a beam of unpolarized light for illuminating the specimen would enter from the side. This light would be reflected down onto the specimen by a nonpolarizing beam splitter attached to the microchannels at their lower ends. A portion of the light incident on the specimen would be reflected upward, through the beam splitter and along the microchannels, to form an image on the CCD. 
If the nonpolarizing beam splitter were replaced by a polarizing one, then the specimen would be illuminated by s-polarized light. Upon reflection from the specimen, some of the s-polarized light would become p-polarized. Only the p-polarized light would contribute to the image on the CCD; in other words, the image would contain information on the polarization rotating characteristic of the specimen.
NASA Astrophysics Data System (ADS)
Sun, Yi; You, Sixian; Tu, Haohua; Spillman, Darold R.; Marjanovic, Marina; Chaney, Eric J.; Liu, George Z.; Ray, Partha S.; Higham, Anna; Boppart, Stephen A.
2017-02-01
Label-free multi-photon imaging has been a powerful tool for studying tissue microstructures and biochemical distributions, particularly for investigating tumors and their microenvironments. However, it remains challenging for traditional bench-top multi-photon microscope systems to conduct ex vivo tumor tissue imaging in the operating room due to their bulky setups and laser sources. In this study, we designed, built, and clinically demonstrated a portable multi-modal nonlinear label-free microscope system combining four modalities: two- and three-photon fluorescence for studying the distributions of FAD and NADH, respectively, and second and third harmonic generation for collagen fiber structures and the distribution of micro-vesicles found in tumors and the microenvironment. Optical realignment and switching between modalities were motorized for more rapid and efficient imaging, and a light-tight enclosure reduced ambient light noise to only 5% within the brightly lit operating room. Using up to 20 mW of laser power after a 20x objective, this system can acquire multi-modal sets of images over 600 μm × 600 μm in 60 seconds using galvo-mirror scanning. This portable microscope system was demonstrated in the operating room for imaging fresh, resected, unstained breast tissue specimens, and for assessing tumor margins and the tumor microenvironment. This real-time label-free nonlinear imaging system has the potential to uniquely characterize breast cancer margins and the microenvironment of tumors to intraoperatively identify structural, functional, and molecular changes that could indicate the aggressiveness of the tumor.
NASA Astrophysics Data System (ADS)
Berkels, Benjamin; Wirth, Benedikt
2017-09-01
Nowadays, modern electron microscopes deliver images at atomic scale. The precise atomic structure encodes information about material properties. Thus, an important ingredient in the image analysis is to locate the centers of the atoms shown in micrographs as precisely as possible. Here, we consider scanning transmission electron microscopy (STEM), which acquires data in a rastering pattern, pixel by pixel. Due to this rastering combined with the magnification to atomic scale, movements of the specimen even at the nanometer scale lead to random image distortions that make precise atom localization difficult. Given a series of STEM images, we derive a Bayesian method that jointly estimates the distortion in each image and reconstructs the underlying atomic grid of the material by fitting the atom bumps with suitable bump functions. The resulting highly non-convex minimization problems are solved numerically with a trust region approach. Existence of minimizers and the model behavior for faster and faster rastering are investigated using variational techniques. The performance of the method is finally evaluated on both synthetic and real experimental data.
Shi, Peng; Zhong, Jing; Hong, Jinsheng; Huang, Rongfang; Wang, Kaijun; Chen, Yunbin
2016-08-26
Nasopharyngeal carcinoma (NPC) is a malignant neoplasm with high incidence in China and south-east Asia. The Ki-67 protein is closely associated with cell proliferation and degree of malignancy. Cells with higher Ki-67 expression are generally sensitive to chemotherapy and radiotherapy, so its assessment is beneficial to NPC treatment. It is still challenging to automatically analyze immunohistochemical Ki-67-stained nasopharyngeal carcinoma images due to the uneven color distributions across different cell types. To solve this problem, an automated image processing pipeline based on clustering of local correlation features is proposed in this paper. Unlike traditional morphology-based methods, our algorithm segments cells by classifying image pixels on the basis of local pixel correlations in specifically selected color spaces, then characterizes cells with a set of grading criteria for reference in pathological analysis. Experimental results showed high accuracy and robustness in nucleus segmentation despite variance in the image data. The quantitative indicators obtained in this study provide reliable evidence for the analysis of Ki-67-stained nasopharyngeal carcinoma microscopic images, which would be helpful in related histopathological research.
Hu, D; Sarder, P; Ronhovde, P; Orthaus, S; Achilefu, S; Nussinov, Z
2014-01-01
Inspired by a multiresolution community detection based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Furthermore, using the proposed method, the mean-square error in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The multiresolution community detection method appeared to perform better than a popular spectral clustering-based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in mean-square error with increasing resolution. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
Reduced signal crosstalk multi neurotransmitter image sensor by microhole array structure
NASA Astrophysics Data System (ADS)
Ogaeri, Yuta; Lee, You-Na; Mitsudome, Masato; Iwata, Tatsuya; Takahashi, Kazuhiro; Sawada, Kazuaki
2018-06-01
A microhole array structure combined with an enzyme immobilization method using magnetic beads can enhance the target discernment capability of a multi-neurotransmitter image sensor. Here we report the fabrication and evaluation of the H+-diffusion-preventing capability of a sensor with this array structure. The structure, made of SU-8 photoresist, has holes 24.5 × 31.6 µm2 in size. Sensors were prepared with array structures of three different heights: 0, 15, and 60 µm. With the 60 µm high structure, a 48% reduction in output voltage is measured at a H+-sensitive null pixel located 75 µm from the acetylcholinesterase (AChE)-immobilized pixel, which is the starting point of H+ diffusion. The suppressed H+ migration is shown in a two-dimensional (2D) image in real time. The sensor parameters, such as the height of the array structure and the measuring time, are optimized experimentally. The sensor is expected to effectively distinguish various neurotransmitters in biological samples.
NASA Astrophysics Data System (ADS)
Mandelis, Andreas; Zhang, Yu; Melnikov, Alexander
2012-09-01
A solar cell lock-in carrierographic image generation theory based on the concept of non-equilibrium radiation chemical potential was developed. An optoelectronic diode expression was derived linking the emitted radiative recombination photon flux (current density), the solar conversion efficiency, and the external load resistance via the closed- and/or open-circuit photovoltage. The expression was shown to have a structure similar to the conventional electrical photovoltaic I-V equation, thereby allowing the carrierographic image to be used in a quantitative statistical pixel brightness distribution analysis, the outcome being the non-contacting measurement of mean values of these important parameters averaged over the entire illuminated solar cell surface. This is the optoelectronic equivalent of the electrical (contacting) measurement method using an external resistor circuit and the outputs of the solar cell electrode grid, the latter acting as an averaging distribution network over the surface. The statistical theory was confirmed using multi-crystalline Si solar cells.
Focal Plane Detectors for the Advanced Gamma-Ray Imaging System (AGIS)
NASA Astrophysics Data System (ADS)
Wagner, Robert G.; AGIS Photodetector Group; Byrum, K.; Drake, G.; Falcone, A.; Funk, S.; Horan, D.; Mukherjee, R.; Tajima, H.; Williams, D.
2008-03-01
The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next-generation observatory in ground-based very high energy gamma-ray astronomy. It is being designed to achieve a significant improvement in sensitivity compared to current Imaging Air Cherenkov Telescope (IACT) arrays. One of the main requirements for AGIS to fulfill this goal will be to achieve higher angular resolution than current IACTs. Simulations show that a substantial improvement in angular resolution may be achieved if the pixel size is reduced to less than 0.05 deg, i.e. two to three times smaller than the pixel size of current IACT cameras. With finer pixelation and the plan to deploy on the order of 100 telescopes in the AGIS array, the channel count will exceed 1,000,000 imaging pixels. High uniformity and a long mean time to failure will be important aspects of a successful photodetector technology choice. Here we present alternatives being considered for AGIS, including both silicon photomultipliers (SiPMs) and multi-anode photomultipliers (MAPMTs). Results from laboratory testing of MAPMTs and SiPMs are presented, along with results from the first incorporation of these devices in cameras on test-bed Cherenkov telescopes.
Azimuthal phase retardation microscope for visualizing actin filaments of biological cells
NASA Astrophysics Data System (ADS)
Shin, In Hee; Shin, Sang-Mo
2011-09-01
We developed an azimuthal phase retardation microscope, based on a new theory, to visualize the distribution of actin filaments in biological cells without labeling them with exogenous dyes, fluorescent labels, or stains. The microscope visualizes actin filament distributions by measuring the intensity variation of each pixel of a charge-coupled device camera while rotating a single linear polarizer. The azimuthal phase retardation δ between two fixed principal axes was obtained by calculating, from the acquired intensity data, the rotation angles of the polarizer at the intensity minima. We acquired azimuthal phase retardation distributions of the human breast cancer cell line MDA-MB-231 with our microscope and compared them with fluorescence images of actin filaments taken with a commercial fluorescence microscope. We also observed the movement of human umbilical cord blood derived mesenchymal stem cells by measuring azimuthal phase retardation distributions.
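Extracting the polarizer angle at the intensity minimum, as described, can be done per pixel by locating the discrete minimum and refining it with a three-point parabolic fit; a sketch under the assumption of uniformly spaced polarizer angles (not the authors' published procedure):

```python
import math

def min_intensity_angle(intensities, angles_deg):
    """Polarizer angle (deg) at the intensity minimum for one pixel,
    refined by parabolic interpolation around the discrete minimum."""
    m = len(intensities)
    i = min(range(m), key=lambda k: intensities[k])
    a = intensities[(i - 1) % m]   # neighbors wrap: the signal is periodic
    b = intensities[i]
    c = intensities[(i + 1) % m]
    step = angles_deg[1] - angles_deg[0]
    denom = a - 2 * b + c
    offset = 0.5 * (a - c) / denom * step if denom else 0.0
    return angles_deg[i] + offset

# synthetic pixel: I(theta) = sin^2(theta - 30 deg), sampled every 5 degrees
angles = list(range(0, 180, 5))
profile = [math.sin(math.radians(t - 30)) ** 2 for t in angles]
axis = min_intensity_angle(profile, angles)
```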
A single pixel camera video ophthalmoscope
NASA Astrophysics Data System (ADS)
Lochocki, B.; Gambin, A.; Manzanera, S.; Irles, E.; Tajahuerce, E.; Lancis, J.; Artal, P.
2017-02-01
There are several ophthalmic devices to image the retina, from fundus cameras capable of imaging the whole fundus to scanning ophthalmoscopes with photoreceptor resolution. Unfortunately, these devices are sensitive to a variety of ocular conditions, such as defocus and media opacities, which usually degrade the quality of the image. Here, we demonstrate a novel approach to imaging the retina in real time using a single pixel camera, which has the potential to circumvent those optical restrictions. The imaging procedure is as follows: a set of spatially coded patterns is projected rapidly onto the retina using a digital micromirror device, while the intensity of the inner product of each pattern with the retinal image is measured with a photomultiplier module. Subsequently, an image of the retina is reconstructed computationally. The obtained image resolution is up to 128 x 128 px, at real-time video framerates of up to 11 fps. Experimental results obtained in an artificial eye confirm the tolerance against defocus compared to a conventional multi-pixel array based system. Furthermore, the use of multiplexed illumination offers an SNR improvement, allowing lower illumination of the eye and hence an increase in patient comfort. In addition, the proposed system could enable imaging in wavelength ranges where cameras are not available.
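The reconstruction step can be illustrated with ±1 Hadamard patterns, for which recovery reduces to a single matrix transpose; this is an idealized sketch (a real DMD projects shifted binary patterns, and the paper does not specify its pattern set):

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix of order n (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# toy 4x4 "retina" flattened to 16 pixels
rng = np.random.default_rng(1)
scene = rng.random(16)

H = hadamard(16)
measurements = H @ scene                      # one bucket-detector reading per pattern
reconstruction = (H.T @ measurements) / 16.0  # H is orthogonal: H^T H = 16 I
```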
NASA Astrophysics Data System (ADS)
Huynh, Toan; Daddysman, Matthew K.; Bao, Ying; Selewa, Alan; Kuznetsov, Andrey; Philipson, Louis H.; Scherer, Norbert F.
2017-05-01
Imaging specific regions of interest (ROIs) of nanomaterials or biological samples with different imaging modalities (e.g., light and electron microscopy) or at subsequent time points (e.g., before and after off-microscope procedures) requires relocating the ROIs. Unfortunately, relocation is typically difficult and very time consuming to achieve. Previously developed techniques involve the fabrication of arrays of features, the procedures for which are complex, and the added features can interfere with imaging the ROIs. We report the Fast and Accurate Relocation of Microscopic Experimental Regions (FARMER) method, which only requires determining the coordinates of 3 (or more) conspicuous reference points (REFs) and employs an algorithm based on geometric operators to relocate ROIs in subsequent imaging sessions. The 3 REFs can be quickly added to various regions of a sample using simple tools (e.g., permanent markers or conductive pens) and do not interfere with the ROIs. The coordinates of the REFs and the ROIs are obtained in the first imaging session (on a particular microscope platform) using an accurate and precise encoded motorized stage. In subsequent imaging sessions, the FARMER algorithm finds the new coordinates of the ROIs (on the same or different platforms), using the coordinates of the manually located REFs and the previously recorded coordinates. FARMER is convenient, fast (3-15 min/session, at least 10-fold faster than manual searches), accurate (4.4 μm average error on a microscope with a 100x objective), and precise (almost all errors are <8 μm), even with deliberate rotating and tilting of the sample well beyond normal repositioning accuracy. We demonstrate this versatility by imaging and re-imaging a diverse set of samples and imaging methods: live mammalian cells at different time points; fixed bacterial cells on two microscopes with different imaging modalities; and nanostructures on optical and electron microscopes. 
FARMER can be readily adapted to any imaging system with an encoded motorized stage and can facilitate multi-session and multi-platform imaging experiments in biology, materials science, photonics, and nanoscience.
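While the FARMER algorithm itself is described only as based on geometric operators, the three-reference-point idea can be sketched as fitting a 2D affine map between imaging sessions; this is an illustration under that assumption, not the published algorithm:

```python
import numpy as np

def affine_from_refs(old_refs, new_refs):
    """Least-squares 2D affine map (rotation/scale/shear plus translation)
    sending the reference points of session 1 onto those of session 2."""
    old = np.asarray(old_refs, float)
    new = np.asarray(new_refs, float)
    A = np.hstack([old, np.ones((len(old), 1))])   # rows of [x, y, 1]
    params, *_ = np.linalg.lstsq(A, new, rcond=None)
    return params                                  # 3x2 parameter matrix

def relocate(points, params):
    pts = np.asarray(points, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ params

# session 1: three marker (REF) coordinates and one ROI
refs1 = [(0, 0), (10, 0), (0, 10)]
roi1 = [(4, 6)]
# session 2: sample rotated 90 degrees and translated, i.e. (x, y) -> (-y + 5, x - 3)
refs2 = [(5, -3), (5, 7), (-5, -3)]
roi2 = relocate(roi1, affine_from_refs(refs1, refs2))
```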
Color sensitivity of the multi-exposure HDR imaging process
NASA Astrophysics Data System (ADS)
Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.
2013-04-01
Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene with varying exposures. Practically, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. During this conversion, white balance settings and image stitching are applied, both of which influence the color balance of the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.
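For context, the standard weighted multi-exposure merge that such radiance maps rely on can be sketched as follows (Debevec-Malik style hat weighting, with a linear camera response assumed for the toy example):

```python
import numpy as np

def merge_hdr(ldr_stack, exposure_times):
    """Weighted multi-exposure merge: each LDR pixel votes for radiance
    E = z / t (linear response assumed), weighted by a hat function that
    down-weights under- and over-exposed pixel values."""
    num = np.zeros_like(ldr_stack[0], dtype=np.float64)
    den = np.zeros_like(num)
    for z, t in zip(ldr_stack, exposure_times):
        z = z.astype(np.float64)
        w = 1.0 - np.abs(2.0 * z - 1.0)   # hat weight on pixel values in [0, 1]
        num += w * z / t
        den += w
    return num / np.maximum(den, 1e-12)

# two exposures of the same scene, pixel values normalized to [0, 1]
short = np.array([[0.1, 0.2]])   # exposure time 0.5 s
long_ = np.array([[0.2, 0.4]])   # exposure time 1.0 s
radiance = merge_hdr([long_, short], [1.0, 0.5])
```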
A new multi-spectral feature level image fusion method for human interpretation
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-03-01
Various methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods: averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general and specific fusion methods in particular would be superior to using the original image sources can be further addressed.
NASA Technical Reports Server (NTRS)
Malak, H.; Mahtani, H.; Herman, P.; Vecer, J.; Lu, X.; Chang, T. Y.; Richmond, Robert C.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
A high-performance hyperspectral imaging module with high throughput of light suitable for low-intensity fluorescence microscopic imaging and subsequent analysis, including single-pixel-defined emission spectroscopy, was tested on Sf21 insect cells expressing green fluorescence associated with recombinant green fluorescent protein linked or not with the membrane protein acyl-CoA:cholesterol acyltransferase. The imager utilized the phenomenon of optical activity as a new technique providing information over a spectral range of 220-1400 nm, and was inserted between the microscope and an 8-bit CCD video-rate camera. The resulting fluorescence image did not introduce observable image aberrations. The images provided parallel acquisition of well resolved concurrent spatial and spectral information such that fluorescence associated with green fluorescent protein alone was demonstrated to be diffuse within the Sf21 insect cell, and that green fluorescence associated with the membrane protein was shown to be specifically concentrated within regions of the cell cytoplasm. Emission spectra analyzed from different regions of the fluorescence image showed blue shift specific for the regions of concentration associated with the membrane protein.
NASA Astrophysics Data System (ADS)
Yang, Huijin; Pan, Bin; Wu, Wenfu; Tai, Jianhao
2018-07-01
Rice is one of the most important cereals in the world. With changes in agricultural land use, it is urgently necessary to update information about rice planting areas. This study aims to map rice planting areas with a field-based approach through the integration of multi-temporal Sentinel-1A and Landsat-8 OLI data in Wuhua County of South China, a region of many basins and mountains. Using multi-temporal SAR and optical images, this paper proposes a methodology for the identification of rice-planting areas. The methodology mainly consists of SSM applied to time-series SAR images for the calculation of a similarity measure, an image segmentation process applied to the pan-sharpened optical image to search for homogeneous objects, and the integration of SAR and optical data to eliminate speckles. The study compares the per-pixel approach with the per-field approach; the results show that the highest accuracy, obtained with the field-based approach (91.38%), is 1.18% higher than that of the pixel-based approach for VH polarization, a gain attributable to the elimination of speckle noise, as a comparison of the rice maps produced by the two approaches shows. Therefore, the integration of Sentinel-1A and Landsat-8 OLI images with a field-based approach has great potential for mapping rice or other crops' areas.
Mapping tropical rainforest canopies using multi-temporal spaceborne imaging spectroscopy
NASA Astrophysics Data System (ADS)
Somers, Ben; Asner, Gregory P.
2013-10-01
The use of imaging spectroscopy for floristic mapping of forests is complicated by the spectral similarity among coexisting species. Here we evaluated an alternative spectral unmixing strategy combining a time series of EO-1 Hyperion images and an automated feature selection strategy in MESMA. Instead of using the same spectral subset to unmix each image pixel, our modified approach allowed the spectral subsets to vary on a per pixel basis such that each pixel is evaluated using a spectral subset tuned towards maximal separability of its specific endmember class combination or species mixture. The potential of the new approach for floristic mapping of tree species in Hawaiian rainforests was quantitatively demonstrated using both simulated and actual hyperspectral image time-series. With a Cohen's Kappa coefficient of 0.65, our approach provided a more accurate tree species map compared to MESMA (Kappa = 0.54). In addition, through its selection of spectral subsets, our approach was about 90% faster than MESMA. The flexible or adaptive use of band sets in spectral unmixing as such provides an interesting avenue to address spectral similarities in complex vegetation canopies.
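A highly simplified sketch of letting the model vary per pixel: each pixel is unmixed against every two-endmember combination and the best-fitting subset is kept. This is illustrative only; MESMA and the authors' automated feature-selection strategy are considerably more elaborate, and the function names here are assumptions:

```python
import numpy as np
from itertools import combinations

def adaptive_unmix(pixel, endmembers):
    """Try every two-endmember subset and keep the one whose least-squares
    fit has the smallest residual: a per-pixel model selection."""
    best = None
    for idx in combinations(range(len(endmembers)), 2):
        E = endmembers[list(idx)].T              # bands x 2 design matrix
        a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
        r = np.linalg.norm(pixel - E @ a)
        if best is None or r < best[0]:
            best = (r, idx, a)
    return best[1], best[2]

# Toy library of three endmember spectra over four bands
endmembers = np.array([[1.0, 0.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0, 0.0],
                       [0.0, 0.0, 1.0, 1.0]])
pixel = 0.3 * endmembers[0] + 0.7 * endmembers[2]
idx, abund = adaptive_unmix(pixel, endmembers)
```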
Multi-focus image fusion based on window empirical mode decomposition
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao
2017-09-01
In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD) whose decomposition process uses an added-window principle, effectively resolving the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the concept of the Sum-modified-Laplacian was used and a scheme based on visual feature contrast was adopted; when choosing the residue coefficients, a pixel value based on local visibility was selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with those of three other fusion methods. The experimental results show that the proposed approach is effective and fuses multi-focus images better than some traditional methods.
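The Sum-modified-Laplacian focus measure mentioned above, together with a choose-max fusion rule built on it, can be sketched as follows. As a simplification it is applied here directly to the source images rather than to BIMF components, and the wrap-around boundary handling is an assumption:

```python
import numpy as np

def modified_laplacian(img, step=1):
    """|2I(x,y) - I(x-s,y) - I(x+s,y)| + |2I(x,y) - I(x,y-s) - I(x,y+s)|,
    a standard focus measure for multi-focus fusion rules."""
    I = np.asarray(img, float)
    return (np.abs(2 * I - np.roll(I, step, 0) - np.roll(I, -step, 0))
            + np.abs(2 * I - np.roll(I, step, 1) - np.roll(I, -step, 1)))

def fuse_by_focus(img_a, img_b):
    """Pixel-wise, keep the source whose local focus measure is larger."""
    ma, mb = modified_laplacian(img_a), modified_laplacian(img_b)
    return np.where(ma >= mb, img_a, img_b)
```

A sharp (high-frequency) region scores a large modified Laplacian and is therefore selected over its defocused counterpart.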
DMD-based LED-illumination super-resolution and optical sectioning microscopy.
Dan, Dan; Lei, Ming; Yao, Baoli; Wang, Wen; Winterhalder, Martin; Zumbusch, Andreas; Qi, Yujiao; Xia, Liang; Yan, Shaohui; Yang, Yanlong; Gao, Peng; Ye, Tong; Zhao, Wei
2013-01-01
Super-resolution three-dimensional (3D) optical microscopy has incomparable advantages over other high-resolution microscopic technologies, such as electron microscopy and atomic force microscopy, in the study of biological molecules, pathways and events in live cells and tissues. We present a novel approach of structured illumination microscopy (SIM) by using a digital micromirror device (DMD) for fringe projection and a low-coherence LED light for illumination. A lateral resolution of 90 nm and an optical sectioning depth of 120 μm were achieved. The maximum acquisition speed for 3D imaging in the optical sectioning mode was 1.6×10^7 pixels/second, which was mainly limited by the sensitivity and speed of the CCD camera. In contrast to other SIM techniques, the DMD-based LED-illumination SIM is cost-effective, easily switchable between multiple wavelengths, and free of speckle noise. The 2D super-resolution and 3D optical sectioning modalities can be easily switched and applied to either fluorescent or non-fluorescent specimens.
NASA Astrophysics Data System (ADS)
Lerotic, Mirna
Soft x-ray spectromicroscopy provides spectral data on the chemical speciation of light elements at sub-100 nanometer spatial resolution. The high resolution imaging places a strong demand on the microscope stability and on the reproducibility of the scanned image field, and the volume of data necessitates improved data analysis methods. This dissertation concerns two developments in extending the capability of soft x-ray transmission microscopes to carry out studies of chemical speciation at high spatial resolution. One development involves an improvement in x-ray microscope instrumentation: a new Stony Brook scanning transmission x-ray microscope which incorporates laser interferometer feedback in scanning stage positions. The interferometer is used to control the position between the sample and focusing optics, and thus improve the stability of the system. A second development concerns new analysis methods for the study of chemical speciation of complex specimens, such as those in biological and environmental science studies. When all chemical species in a specimen are known and separately characterized, existing approaches can be used to measure the concentration of each component at each pixel. In other cases (such as often occur in biology or environmental science), where the specimen may be too complicated or may contain unknown spectral signatures, other approaches must be used. We describe here an approach that uses principal component analysis (similar to factor analysis) to orthogonalize and noise-filter spectromicroscopy data. We then use cluster analysis (a form of unsupervised pattern matching) to classify pixels according to spectral similarity, to extract representative, cluster-averaged spectra with good signal-to-noise ratio, and to obtain gradations of concentration of these representative spectra at each pixel.
The method is illustrated with a simulated data set of organic compounds, and with a mixture of lutetium in hematite used to understand colloidal transport properties of radionuclides. We also describe an extension of that work employing an angle distance measure; this measure provides better classification based on spectral signatures alone in specimens with significant thickness variations. The extended method is illustrated using simulated data, and also applied to examine sporulation in the bacterium Clostridium sp.
NASA Astrophysics Data System (ADS)
Kainerstorfer, Jana M.; Amyot, Franck; Demos, Stavros G.; Hassan, Moinuddin; Chernomordik, Victor; Hitzenberger, Christoph K.; Gandjbakhche, Amir H.; Riley, Jason D.
2009-07-01
Quantitative assessment of skin chromophores in a non-invasive fashion is often desirable. In particular, pixel-wise assessment of blood volume and blood oxygenation is beneficial for improved diagnostics. We utilized a multi-spectral imaging system for acquiring diffuse reflectance images of healthy volunteers' lower forearms. Ischemia and reactive hyperemia were induced by occluding the upper arm with a pressure cuff for 5 min at 180 mmHg. Multi-spectral images were taken every 30 s before, during and after occlusion. Image reconstruction for blood volume and blood oxygenation was performed using a two-layered skin model. As the images were taken in a non-contact way, strong artifacts related to the shape (curvature) of the arms were observed, making reconstruction of optical/physiological parameters highly inaccurate. We developed a curvature correction method, which is based on extracting the curvature directly from the acquired intensity images and does not require any additional measurements of the imaged object. The effectiveness of the algorithm was demonstrated on reconstruction results of blood volume and blood oxygenation for in vivo data during occlusion of the arm. Pixel-wise assessment of blood volume and blood oxygenation was made possible over the entire image area, and comparison of occlusion effects between veins and surrounding skin was performed. Induced ischemia during occlusion and reactive hyperemia afterwards were observed and quantitatively assessed. Furthermore, the influence of epidermal thickness on reconstruction results was evaluated, and the need for exact knowledge of this parameter for fully quantitative assessment was pointed out.
Bi-cubic interpolation for shift-free pan-sharpening
NASA Astrophysics Data System (ADS)
Aiazzi, Bruno; Baronti, Stefano; Selva, Massimo; Alparone, Luciano
2013-12-01
Most pan-sharpening techniques require re-sampling of the multi-spectral (MS) image to match the size of the panchromatic (Pan) image before the geometric details of Pan are injected into the MS image. This operation is usually performed in a separable fashion by means of symmetric digital low-pass filtering kernels of odd length that utilize piecewise local polynomials, typically implementing linear or cubic interpolation functions. Conversely, constant (i.e. nearest-neighbour) and quadratic kernels, implementing zero- and two-degree polynomials, respectively, introduce shifts in the magnified images that are sub-pixel in the case of interpolation by an even factor, as is most common. However, in standard satellite systems, the point spread functions (PSF) of the MS and Pan instruments are centered in the middle of each pixel. Hence, commercial MS and Pan data products, whose scale ratio is an even number, are relatively shifted by an odd number of half pixels. Filters of even length may be exploited to compensate the half-pixel shifts between the MS and Pan sampling grids. In this paper, it is shown that separable polynomial interpolations of odd degree are feasible with linear-phase kernels of even length. The major benefit is that bi-cubic interpolation, which is known to represent the best trade-off between performance and computational complexity, can be applied to commercial MS + Pan datasets without the need to perform a further half-pixel registration after interpolation to align the expanded MS with the Pan image.
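The point about even-length linear-phase kernels can be illustrated in 1-D: a symmetric 4-tap cubic kernel evaluated at half-integer offsets interpolates exactly at the grid midpoints, i.e. it realizes a half-pixel shift with no additional misregistration. The choice of the Keys cubic kernel (a = -0.5) is an assumption for illustration:

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    """Keys cubic interpolation kernel."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interp_half_pixel(signal):
    """Sample a 1-D signal at k + 1/2 using its 4 nearest samples: an
    even-length (4-tap), linear-phase kernel, hence no extra grid shift."""
    taps = np.array([cubic_kernel(d) for d in (-1.5, -0.5, 0.5, 1.5)])
    taps /= taps.sum()                      # already sums to 1 for Keys cubic
    s = np.asarray(signal, float)
    return np.array([taps @ s[k - 1:k + 3] for k in range(1, len(s) - 2)])
```

On a linear ramp the output lands exactly on the midpoints, confirming the kernel is shift-free apart from the intended half pixel.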
Imaging with a small number of photons
Morris, Peter A.; Aspden, Reuben S.; Bell, Jessica E. C.; Boyd, Robert W.; Padgett, Miles J.
2015-01-01
Low-light-level imaging techniques have application in many diverse fields, ranging from biological sciences to security. A high-quality digital camera based on a multi-megapixel array will typically record an image by collecting of order 10^5 photons per pixel, but by how much could this photon flux be reduced? In this work we demonstrate a single-photon imaging system based on a time-gated intensified camera from which the image of an object can be inferred from very few detected photons. We show that a ghost-imaging configuration, where the image is obtained from photons that have never interacted with the object, is a useful approach for obtaining images with high signal-to-noise ratios. The use of heralded single photons ensures that the background counts can be virtually eliminated from the recorded images. By applying principles of image compression and associated image reconstruction, we obtain high-quality images of objects from raw data formed from an average of fewer than one detected photon per image pixel. PMID:25557090
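The photon-sparse regime can be emulated with a toy simulation: accumulating Poisson-distributed counts at fewer than one detected photon per pixel per frame still recovers the object after modest averaging. This sketch has no heralding or ghost-imaging optics; it only illustrates the counting statistics, and all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth transmissive object (1 = transparent, 0 = opaque)
obj = np.zeros((32, 32))
obj[8:24, 8:24] = 1.0

mean_photons = 0.5        # fewer than one detected photon per pixel per frame
frames = 200

acc = np.zeros_like(obj)
for _ in range(frames):
    acc += rng.poisson(mean_photons * obj)   # per-frame photon counts

# Normalized transmission estimate: converges to the object as frames grow
est = acc / (frames * mean_photons)
```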
NASA Technical Reports Server (NTRS)
2004-01-01
This mosaic, featuring the rock target dubbed 'Bylot,' was acquired by NASA's Mars Exploration Rover Opportunity on sol 194 (Aug. 9, 2004). It consists of four images taken by the rover's microscopic imager. The spherules shown here are less round than the 'blueberries' seen previously in 'Endurance Crater,' perhaps because the minerals coating them are more resistant to erosion. Dark sand is partially covering the rock. The target was in complete shadow when the images were acquired, except for a small area at the upper right, where direct sunlight caused the camera to saturate and excess charge to 'bloom' downward into adjacent pixels.
Optical properties of particles collected by COSIMA around 67P/Churyumov Gerasimenko
NASA Astrophysics Data System (ADS)
Langevin, Yves; Hilchenbach, Martin; Vincendon, Mathieu; Merouane, Sihane; Hornung, Klaus; Cosima Team
2017-04-01
The COSIMA TOF-SIMS spectrometer aboard Rosetta has collected nearly 40,000 particles in orbit around 67P/Churyumov-Gerasimenko from August 2014 to September 2016. These particles have been identified using the COSISCOPE optical microscope, which imaged the 10 mm x 10 mm targets before and after exposure to the cometary environment with a resolution of 14 µm / pixel [1
X ray imaging microscope for cancer research
NASA Technical Reports Server (NTRS)
Hoover, Richard B.; Shealy, David L.; Brinkley, B. R.; Baker, Phillip C.; Barbee, Troy W., Jr.; Walker, Arthur B. C., Jr.
1991-01-01
The NASA technology employed during the Stanford MSFC LLNL Rocket X Ray Spectroheliograph flight established that doubly reflecting, normal incidence multilayer optics can be designed, fabricated, and used for high resolution x ray imaging of the Sun. Technology developed as part of the MSFC X Ray Microscope program showed that high quality, high resolution multilayer x ray imaging microscopes are feasible. Using technology developed at Stanford University and at the DOE Lawrence Livermore National Laboratory (LLNL), Troy W. Barbee, Jr. has fabricated multilayer coatings with near theoretical reflectivities and perfect bandpass matching for a new rocket borne solar observatory, the Multi-Spectral Solar Telescope Array (MSSTA). Advanced Flow Polishing has provided multilayer mirror substrates with sub-angstrom (rms) smoothness for the astronomical x ray telescopes and x ray microscopes. The combination of these important technological advancements has paved the way for the development of a Water Window Imaging X Ray Microscope for cancer research.
Quantitative Imaging with a Mobile Phone Microscope
Skandarajah, Arunan; Reber, Clay D.; Switz, Neil A.; Fletcher, Daniel A.
2014-01-01
Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone–based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with cameras of greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that the automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications. PMID:24824072
Scanning transmission x-ray microscope for materials science spectromicroscopy at the ALS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warwick, T.; Seal, S.; Shin, H.
1997-04-01
The brightness of the Advanced Light Source will be exploited by several new instruments for materials science spectromicroscopy over the next year or so. The first of these to become operational is a scanning transmission x-ray microscope with which near edge x-ray absorption spectra (NEXAFS) can be measured on spatial features of sub-micron size. Here the authors describe the instrument as it is presently implemented, its capabilities, some studies made to date and the developments to come. The Scanning Transmission X-ray Microscope makes use of a zone plate lens to produce a small x-ray spot with which to perform absorption spectroscopy through thin samples. The x-ray beam from ALS undulator beamline 7.0 emerges into the microscope vessel through a silicon nitride vacuum window 160 nm thick and 300 µm square. The vessel is filled with helium at atmospheric pressure. The zone plate lens is illuminated 1 mm downstream from the vacuum window and forms an image in first order of a pinhole which is 3 m upstream in the beamline. An order sorting aperture passes the first-order converging light and blocks the unfocused zero order. The sample is at the focus a few mm downstream of the zone plate and mounted on a scanning piezo stage which rasters in x and y so that an image is formed, pixel by pixel, by an intensity detector behind the sample. Absorption spectra are measured point-by-point as the photon energy is scanned by rotating the diffraction grating in the monochromator and changing the undulator gap.
NASA Astrophysics Data System (ADS)
Descloux, A.; Grußmayer, K. S.; Bostan, E.; Lukes, T.; Bouwens, A.; Sharipov, A.; Geissbuehler, S.; Mahul-Mellier, A.-L.; Lashuel, H. A.; Leutenegger, M.; Lasser, T.
2018-03-01
Super-resolution fluorescence microscopy provides unprecedented insight into cellular and subcellular structures. However, going `beyond the diffraction barrier' comes at a price, since most far-field super-resolution imaging techniques trade temporal for spatial super-resolution. We propose the combination of a novel label-free white light quantitative phase imaging with fluorescence to provide high-speed imaging and spatial super-resolution. The non-iterative phase retrieval relies on the acquisition of single images at each z-location and thus enables straightforward 3D phase imaging using a classical microscope. We realized multi-plane imaging using a customized prism for the simultaneous acquisition of eight planes. This allowed us to not only image live cells in 3D at up to 200 Hz, but also to integrate fluorescence super-resolution optical fluctuation imaging within the same optical instrument. The 4D microscope platform unifies the sensitivity and high temporal resolution of phase imaging with the specificity and high spatial resolution of fluorescence microscopy.
NASA Astrophysics Data System (ADS)
Hirano, Ryoichi; Iida, Susumu; Amano, Tsuyoshi; Watanabe, Hidehiro; Hatakeyama, Masahiro; Murakami, Takeshi; Yoshikawa, Shoji; Suematsu, Kenichi; Terao, Kenji
2015-07-01
High-sensitivity EUV mask pattern defect detection is one of the major issues to be resolved before device fabrication with EUV lithography can be realized. We have already designed a novel Projection Electron Microscope (PEM) optics that has been integrated into a new inspection system named EBEYE-V30 ("Model EBEYE" is an EBARA model code), which appears quite promising for 16 nm hp generation EUVL patterned mask inspection (PI). Defect inspection sensitivity was evaluated by capturing an electron image generated at the mask by focusing onto an image sensor. Improving the performance of the novel PEM optics is a matter not only of higher image sensor resolution but also of better image processing to enhance the defect signal. In this paper, we describe the experimental results of EUV patterned mask inspection using the above-mentioned system. The performance of the system is measured in terms of defect detectability for the 11 nm hp generation EUV mask. To achieve adequate inspection throughput for 11 nm hp generation defect detection, a data processing rate greater than 1.5 giga-pixels per second (GPPS) would be required, realizing less than eight hours of inspection time including the step-and-scan motion associated with the process. The aims of the development program are to attain a higher throughput and enhance the defect detection sensitivity by using an adequate pixel size with sophisticated image processing at a higher processing rate.
Contrast computation methods for interferometric measurement of sensor modulation transfer function
NASA Astrophysics Data System (ADS)
Battula, Tharun; Georgiev, Todor; Gille, Jennifer; Goma, Sergio
2018-01-01
Accurate measurement of image-sensor frequency response over a wide range of spatial frequencies is very important for analyzing pixel array characteristics, such as modulation transfer function (MTF), crosstalk, and active pixel shape. Such analysis is especially significant in computational photography for the purposes of deconvolution, multi-image superresolution, and improved light-field capture. We use a lensless interferometric setup that produces high-quality fringes for measuring MTF over a wide range of frequencies (here, 37 to 434 line pairs per mm). We discuss the theoretical framework, involving Michelson and Fourier contrast measurement of the MTF, addressing phase alignment problems using a moiré pattern. We solidify the definition of Fourier contrast mathematically and compare it to Michelson contrast. Our interferometric measurement method shows high detail in the MTF, especially at high frequencies (above Nyquist frequency). We are able to estimate active pixel size and pixel pitch from measurements. We compare both simulation and experimental MTF results to a lens-free slanted-edge implementation using commercial software.
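The two contrast definitions compared above can be sketched for an ideal sinusoidal fringe, where Michelson contrast (from the signal extrema) and a Fourier contrast (fundamental-to-DC magnitude ratio) both equal the modulation depth. The normalization below, with a factor 2 accounting for the split between the +f and -f bins, is an assumption and may differ from the authors' exact definition:

```python
import numpy as np

def michelson_contrast(signal):
    """(max - min) / (max + min) of the fringe pattern."""
    s = np.asarray(signal, float)
    return (s.max() - s.min()) / (s.max() + s.min())

def fourier_contrast(signal):
    """Ratio of the fundamental's Fourier magnitude to the DC term,
    doubled to account for the energy split between +f and -f."""
    S = np.fft.rfft(np.asarray(signal, float))
    mags = np.abs(S)
    k = 1 + np.argmax(mags[1:])      # bin of the fundamental fringe frequency
    return 2 * mags[k] / mags[0]

x = np.arange(256)
fringe = 1.0 + 0.4 * np.cos(2 * np.pi * 8 * x / 256)   # modulation depth 0.4
m = michelson_contrast(fringe)
f = fourier_contrast(fringe)
```

For a pure sinusoid the two agree; the Fourier form is more robust when harmonics or noise corrupt the extrema.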
Multi-compartment microscopic diffusion imaging
Kaden, Enrico; Kelm, Nathaniel D.; Carson, Robert P.; Does, Mark D.; Alexander, Daniel C.
2017-01-01
This paper introduces a multi-compartment model for microscopic diffusion anisotropy imaging. The aim is to estimate microscopic features specific to the intra- and extra-neurite compartments in nervous tissue unconfounded by the effects of fibre crossings and orientation dispersion, which are ubiquitous in the brain. The proposed MRI method is based on the Spherical Mean Technique (SMT), which factors out the neurite orientation distribution and thus provides direct estimates of the microscopic tissue structure. This technique can be immediately used in the clinic for the assessment of various neurological conditions, as it requires only a widely available off-the-shelf sequence with two b-shells and high-angular gradient resolution achievable within clinically feasible scan times. To demonstrate the developed method, we use high-quality diffusion data acquired with a bespoke scanner system from the Human Connectome Project. This study establishes the normative values of the new biomarkers for a large cohort of healthy young adults, which may then support clinical diagnostics in patients. Moreover, we show that the microscopic diffusion indices offer direct sensitivity to pathological tissue alterations, exemplified in a preclinical animal model of Tuberous Sclerosis Complex (TSC), a genetic multi-organ disorder which impacts brain microstructure and hence may lead to neurological manifestations such as autism, epilepsy and developmental delay. PMID:27282476
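The core SMT idea, that the spherical mean of the per-shell diffusion signal is invariant to fibre orientation (and hence to orientation dispersion), can be checked numerically with a toy "stick" compartment. The direction set, b-value and diffusivity are assumptions for illustration, not the paper's acquisition protocol:

```python
import numpy as np

def fibonacci_sphere(n):
    """Roughly uniform unit vectors, standing in for gradient directions."""
    i = np.arange(n) + 0.5
    phi = np.arccos(1.0 - 2.0 * i / n)            # polar angle
    theta = np.pi * (1.0 + 5.0**0.5) * i          # golden-angle azimuth
    return np.c_[np.sin(phi) * np.cos(theta),
                 np.sin(phi) * np.sin(theta),
                 np.cos(phi)]

def stick_signal(b, diffusivity, fibre, gdirs):
    """Diffusion signal of a single 'stick' compartment along `fibre`."""
    return np.exp(-b * diffusivity * (gdirs @ fibre) ** 2)

g = fibonacci_sphere(256)
f1 = np.array([0.0, 0.0, 1.0])                     # fibre along z
f2 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)      # same tissue, rotated
m1 = stick_signal(3.0, 2.0, f1, g).mean()          # spherical mean, shell 1
m2 = stick_signal(3.0, 2.0, f2, g).mean()
```

The two means agree to quadrature accuracy, which is what lets SMT factor out the orientation distribution.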
NASA Astrophysics Data System (ADS)
Xie, Bing; Duan, Zhemin; Chen, Yu
2017-11-01
Scene-matching navigation can assist UAVs in autonomous navigation and other missions. However, multi-frame aerial images acquired by a UAV in a complex flight environment are easily affected by jitter, noise and exposure, which leads to image blur, deformation and other issues, and lowers the detection rate of the regional target of interest. To address this problem, we propose a graded sub-pixel motion estimation algorithm combining time-domain characteristics with frequency-domain phase correlation. Experimental results demonstrate the validity and accuracy of the proposed algorithm.
Zhao, Yong-guang; Ma, Ling-ling; Li, Chuan-rong; Zhu, Xiao-hua; Tang, Ling-li
2015-07-01
Due to the limited number of spectral bands of a multi-spectral sensor, it is difficult to reconstruct a surface reflectance spectrum from the finite spectral information acquired by a multi-spectral instrument. Here, taking full account of pixel heterogeneity in remote sensing images, a method is proposed to simulate hyperspectral data from multispectral data based on a canopy radiation transfer model. This method first assumes that mixed pixels contain two types of land cover, i.e., vegetation and soil. The sensitive parameters of the Soil-Leaf-Canopy (SLC) model and a soil ratio factor were retrieved from multi-spectral data based on Look-Up Table (LUT) technology. Then, combined with the soil ratio factor, all the parameters were input into the SLC model to simulate the surface reflectance spectrum from 400 to 2 400 nm. Taking a Landsat Enhanced Thematic Mapper Plus (ETM+) image as the reference image, the surface reflectance spectrum was simulated. The simulated reflectance spectra revealed different feature information for different surface types. To test the performance of this method, the simulated reflectance spectrum was convolved with the Landsat ETM+ spectral response curves and Moderate Resolution Imaging Spectrometer (MODIS) spectral response curves to obtain simulated Landsat ETM+ and MODIS images. Finally, the simulated Landsat ETM+ and MODIS images were compared with the observed Landsat ETM+ and MODIS images. The results generally showed high correlation coefficients (Landsat: 0.90-0.99, MODIS: 0.74-0.85) between most simulated bands and observed bands and indicated that the simulated reflectance spectra were reliable.
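The two-component mixing and band-convolution steps can be sketched as follows. The linear vegetation/soil mixture and the discrete SRF-weighted resampling are simplifications of the SLC-model workflow described above, and the function names are assumptions:

```python
import numpy as np

def mixed_pixel_reflectance(veg_spectrum, soil_spectrum, soil_ratio):
    """Linear two-component mixture: the soil ratio factor weights the
    soil spectrum and (1 - ratio) the vegetation canopy spectrum."""
    v = np.asarray(veg_spectrum, float)
    s = np.asarray(soil_spectrum, float)
    return (1.0 - soil_ratio) * v + soil_ratio * s

def band_convolve(spectrum, srf):
    """Resample a simulated spectrum to one sensor band as a discrete
    spectral-response-function (SRF) weighted average."""
    srf = np.asarray(srf, float)
    return (srf * np.asarray(spectrum, float)).sum() / srf.sum()

# Toy flat spectra over five wavelengths
veg = np.full(5, 0.5)
soil = np.full(5, 0.1)
mixed = mixed_pixel_reflectance(veg, soil, soil_ratio=0.25)
band = band_convolve(mixed, srf=np.array([0.0, 1.0, 2.0, 1.0, 0.0]))
```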
Toward real-time quantum imaging with a single pixel camera
Lawrie, B. J.; Pooser, R. C.
2013-03-19
In this paper, we present a workbench for the study of real-time quantum imaging by measuring the frame-by-frame quantum noise reduction of multi-spatial-mode twin beams generated by four wave mixing in Rb vapor. Exploiting the multiple spatial modes of this squeezed light source, we utilize spatial light modulators to selectively pass macropixels of quantum correlated modes from each of the twin beams to a high quantum efficiency balanced detector. Finally, in low-light-level imaging applications, the ability to measure the quantum correlations between individual spatial modes and macropixels of spatial modes with a single pixel camera will facilitate compressive quantum imaging with sensitivity below the photon shot noise limit.
Real time thermal imaging for analysis and control of crystal growth by the Czochralski technique
NASA Technical Reports Server (NTRS)
Wargo, M. J.; Witt, A. F.
1992-01-01
A real time thermal imaging system with temperature resolution better than +/- 0.5 C and spatial resolution of better than 0.5 mm has been developed. It has been applied to the analysis of melt surface thermal field distributions in both Czochralski and liquid encapsulated Czochralski growth configurations. The sensor can provide single/multiple point thermal information; a multi-pixel averaging algorithm has been developed which permits localized, low noise sensing and display of optical intensity variations at any location in the hot zone as a function of time. Temperature distributions are measured by extraction of data along a user selectable linear pixel array and are simultaneously displayed, as a graphic overlay, on the thermal image.
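The user-selectable linear pixel array with multi-pixel averaging described above can be sketched as sampling intensities along a line segment while averaging a small neighbourhood at each sample to suppress noise. The sampling density and window size below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def line_profile(image, p0, p1, n=100, win=1):
    """Sample intensity along the segment p0 -> p1 ((row, col) endpoints),
    averaging a (2*win+1)x(2*win+1) neighbourhood at each sample point
    for low-noise sensing, as in the multi-pixel averaging scheme."""
    ys = np.linspace(p0[0], p1[0], n).round().astype(int)
    xs = np.linspace(p0[1], p1[1], n).round().astype(int)
    h, w = image.shape
    out = np.empty(n)
    for i, (y, x) in enumerate(zip(ys, xs)):
        y0, y1 = max(y - win, 0), min(y + win + 1, h)
        x0, x1 = max(x - win, 0), min(x + win + 1, w)
        out[i] = image[y0:y1, x0:x1].mean()
    return out

img = np.tile(np.arange(10.0), (10, 1))   # intensity ramps left to right
prof = line_profile(img, (5, 0), (5, 9), n=10)
```

In the instrument, `out` would be mapped through a radiometric calibration to temperature before being overlaid on the thermal image.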
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juffmann, Thomas; Koppell, Stewart A.; Klopfer, Brannon B.
Feynman once asked physicists to build better electron microscopes to be able to watch biology at work. While electron microscopes can now provide atomic resolution, electron beam induced specimen damage precludes high resolution imaging of sensitive materials, such as single proteins or polymers. Here, we use simulations to show that an electron microscope based on a multi-pass measurement protocol enables imaging of single proteins, without averaging structures over multiple images. While we demonstrate the method for particular imaging targets, the approach is broadly applicable and is expected to improve resolution and sensitivity for a range of electron microscopy imaging modalities, including, for example, scanning and spectroscopic techniques. The approach implements a quantum mechanically optimal strategy which under idealized conditions can be considered interaction-free.
Wavefront correction in two-photon microscopy with a multi-actuator adaptive lens.
Bueno, Juan M; Skorsetz, Martin; Bonora, Stefano; Artal, Pablo
2018-05-28
A multi-actuator adaptive lens (AL) was incorporated into a multi-photon (MP) microscope to improve the quality of images of thick samples. Through a hill-climbing procedure the AL corrected for the specimen-induced aberrations enhancing MP images. The final images hardly differed when two different metrics were used, although the sets of Zernike coefficients were not identical. The optimized MP images acquired with the AL were also compared with those obtained with a liquid-crystal-on-silicon spatial light modulator. Results have shown that both devices lead to similar images, which corroborates the usefulness of this AL for MP imaging.
A coarse-to-fine approach for medical hyperspectral image classification with sparse representation
NASA Astrophysics Data System (ADS)
Chang, Lan; Zhang, Mengmeng; Li, Wei
2017-10-01
A coarse-to-fine approach with sparse representation is proposed for medical hyperspectral image classification in this work. A segmentation technique with different scales is employed to exploit the edges of the input image, where coarse super-pixel patches provide global classification information while fine ones further provide detail information. Unlike a common RGB image, a hyperspectral image has multiple bands, which allows the cluster centers to be adjusted with higher precision. After segmentation, each super-pixel is classified by the recently developed sparse representation-based classification (SRC), which assigns a label to the testing samples in one local patch by means of a sparse linear combination of all the training samples. Furthermore, segmentation with multiple scales is employed because a single scale is not suitable for the complicated distribution of medical hyperspectral imagery. Finally, classification results for different super-pixel sizes are fused by a fusion strategy, offering at least two benefits: (1) the final result is clearly superior to that of segmentation with a single scale, and (2) the fusion process significantly simplifies the choice of scales. Experimental results using real medical hyperspectral images demonstrate that the proposed method outperforms the state-of-the-art SRC.
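The core SRC step above can be illustrated with a minimal sketch: a test spectrum is reconstructed from each class's training samples, and the class with the smallest reconstruction residual wins. A proper SRC solver enforces sparsity (e.g. via L1 minimization); plain per-class least squares is used here only to keep the sketch short, and the toy spectra are invented.

```python
import numpy as np

def src_classify(x, train, labels):
    """Simplified SRC: reconstruct x from each class's training samples
    (columns of `train`) by least squares and return the class with the
    minimal residual. (The paper's SRC uses a sparse solver instead.)"""
    best_cls, best_res = None, np.inf
    for cls in np.unique(labels):
        A = train[:, labels == cls]
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        res = np.linalg.norm(x - A @ coef)
        if res < best_res:
            best_cls, best_res = cls, res
    return best_cls

# Toy 4-band spectra: class 0 peaks in early bands, class 1 in late bands
train = np.array([[1.0, 0.9, 0.1, 0.0],
                  [0.8, 1.0, 0.0, 0.1],
                  [0.1, 0.0, 1.0, 0.9],
                  [0.0, 0.1, 0.9, 1.0]]).T   # columns are training samples
labels = np.array([0, 0, 1, 1])
x = np.array([0.9, 0.95, 0.05, 0.05])        # resembles class 0
pred = src_classify(x, train, labels)
```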
Portable microscopy platform for the clinical and environmental monitoring
NASA Astrophysics Data System (ADS)
Wang, Weiming; Yu, Yan; Huang, Hui; Ou, Jinping
2016-04-01
Light microscopy can not only address various diagnostic needs, such as detecting aquatic parasites and bacteria such as E. coli in water, but also provide a method for the screening of red tide. Traditional smartphone-based microscopes, created by adding lenses, cannot balance the tradeoff between field-of-view (FOV) and resolution. In this paper, we demonstrate a non-contact, lightweight and cost-effective microscope platform that can image highly dense samples with a spatial resolution of ~0.8 μm over a FOV of >1 mm2. After capturing the raw images, we applied a pixel super-resolution algorithm to improve the image resolution and overcome hardware interference. The system would be a good point-of-care diagnostic solution in resource-limited settings. We validated the performance of the system by imaging resolution test targets, squamous cell carcinoma (SqCC) samples, and the green algae relevant to detecting squamous carcinoma and red tide.
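One common form of pixel super-resolution, consistent with the description above though not necessarily the authors' exact algorithm, is shift-and-add: several low-resolution frames with known sub-pixel offsets are interleaved onto a finer grid. A minimal sketch with an idealized noise-free scene:

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Place each low-res frame onto a fine grid at its known sub-pixel
    offset and average overlapping contributions (basic shift-and-add
    pixel super-resolution)."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        oy, ox = int(round(dy * factor)), int(round(dx * factor))
        acc[oy::factor, ox::factor][:h, :w] += frame
        cnt[oy::factor, ox::factor][:h, :w] += 1
    cnt[cnt == 0] = 1
    return acc / cnt

hi = np.arange(16.0).reshape(4, 4)            # "true" fine-grid scene
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
frames = [hi[int(2 * dy)::2, int(2 * dx)::2] for dy, dx in shifts]
sr = shift_and_add(frames, shifts, 2)         # recovers the fine grid exactly
```

With four half-pixel-shifted frames and factor 2, the fine grid is fully sampled; real data additionally require shift estimation and deconvolution.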
Light-sheet microscopy by confocal line scanning of dual-Bessel beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Pengfei; Phipps, Mary Elizabeth; Goodwin, Peter Marvin
Here, we have developed a light-sheet microscope that uses confocal scanning of dual-Bessel beams for illumination. A digital micromirror device (DMD) is placed in the intermediate image plane of the objective used to collect fluorescence and is programmed with two lines of pixels in the “on” state such that the DMD functions as a spatial filter to reject the out-of-focus background generated by the side-lobes of the Bessel beams. The optical sectioning and out-of-focus background rejection capabilities of this microscope were demonstrated by imaging of fluorescently stained actin in human A431 cells. The dual-Bessel beam system enables twice as many photons to be detected per imaging scan, which is useful for low light applications (e.g., single-molecule localization) or imaging at high speed with a superior signal to noise. While demonstrated for two Bessel beams, this approach is scalable to a larger number of beams.
NASA Astrophysics Data System (ADS)
Wang, Fu-Bin; Tu, Paul; Wu, Chen; Chen, Lei; Feng, Ding
2018-01-01
In femtosecond laser processing, the field of view of each image frame of the microscale structure is extremely small. In order to obtain the morphology of the whole microstructure, a multi-image mosaic with partially overlapped regions is required. In the present work, the SIFT algorithm for mosaicking images was analyzed theoretically, and stitching of multiple images of a microgroove structure processed by femtosecond laser into an image of the whole groove structure was realized experimentally. The object of our research was a silicon wafer with a microgroove structure ablated by femtosecond laser. First, we obtained microgrooves with a width of 380 μm at different depths. Second, based on the gray images of the microgroove, multi-image mosaics of the slot width and slot depth were realized. To improve the image contrast between the target and the background, and taking the slot depth image as an example, a multi-image mosaic was then realized using pseudo-color enhancement. Third, in order to measure the structural size of the microgroove from the image, a streak of known width ablated by femtosecond laser at 20 mW was used as a calibration sample. Through edge detection, corner extraction, and image correction of the streak images, we calculated the pixel width of the streak image, found the measurement ratio constant Kw in the width direction, and thereby obtained the proportional relationship between a pixel and a micrometer. Finally, circular spot marks ablated by femtosecond laser at 2 mW and 15 mW were used as test images, confirming that the value of Kw was correct; the measurement ratio constant Kh in the height direction was obtained, and image-based measurement of a 380 × 117 μm microgroove was realized using the measurement ratio constants Kw and Kh.
The research and experimental results show that the image mosaic, image calibration, and geometric image parameter measurements for the microstructural image ablated by femtosecond laser were realized effectively.
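The calibration logic above reduces to a simple ratio: a feature of known physical size yields micrometres-per-pixel, which then converts any pixel measurement in that direction. The pixel counts below are hypothetical, since the paper does not list them.

```python
def measurement_ratio(known_um, measured_px):
    """Kw (or Kh): micrometres represented by one image pixel, obtained
    from a calibration feature of known physical size."""
    return known_um / measured_px

# Hypothetical calibration numbers for illustration only
Kw = measurement_ratio(380.0, 950)   # width: a 380 um feature spans 950 px
Kh = measurement_ratio(117.0, 390)   # height: a 117 um feature spans 390 px

width_um = 950 * Kw                  # converting pixels back to micrometres
depth_um = 390 * Kh
```

Separate constants for the width and height directions absorb any anisotropy in the imaging geometry, which is why the paper calibrates Kw and Kh independently.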
Differential high-speed digital micromirror device based fluorescence speckle confocal microscopy.
Jiang, Shihong; Walker, John
2010-01-20
We report a differential fluorescence speckle confocal microscope that acquires an image in a fraction of a second by exploiting the very high frame rate of modern digital micromirror devices (DMDs). The DMD projects a sequence of predefined binary speckle patterns to the sample and modulates the intensity of the returning fluorescent light simultaneously. The fluorescent light reflecting from the DMD's "on" and "off" pixels is modulated by correlated speckle and anticorrelated speckle, respectively, to form two images on two CCD cameras in parallel. The sum of the two images recovers a widefield image, but their difference gives a near-confocal image in real time. Experimental results for both low and high numerical apertures are shown.
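The sum/difference arithmetic above can be shown with a schematic toy model: in-focus light is modulated by a binary speckle pattern and split between the "on" and "off" images, while out-of-focus background contributes equally to both. The scene, pattern, and background split below are invented for illustration.

```python
import numpy as np

def widefield_and_confocal(img_on, img_off):
    """Sum of the correlated ('on') and anticorrelated ('off') speckle
    images recovers a widefield image; their difference rejects the
    unmodulated background, giving a near-confocal image."""
    return img_on + img_off, img_on - img_off

rng = np.random.default_rng(0)
infocus = rng.random((8, 8))                    # speckle-modulated signal
background = 0.3 * np.ones((8, 8))              # unmodulated out-of-focus light
pattern = (rng.random((8, 8)) > 0.5).astype(float)

img_on = pattern * infocus + 0.5 * background
img_off = (1 - pattern) * infocus + 0.5 * background
wf, cf = widefield_and_confocal(img_on, img_off)
```

In this model the background cancels exactly in `cf`, which is the mechanism behind the real-time near-confocal image; in practice many speckle realizations are averaged.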
NASA Astrophysics Data System (ADS)
Leroi, Vaitua; Bibring, Jean-Pierre; Berthe, Michel
2009-07-01
MicrOmega is an ultra-miniaturized spectral microscope for in situ analysis of samples. It is composed of two microscopes: one with a spatial sampling of 4 μm or less, working in four colors in the visible range (MicrOmega/VIS), and a NIR hyperspectral microscope working in the spectral range 0.9-4 μm with a spatial sampling of 20 μm per pixel (MicrOmega/IR, described in this paper). MicrOmega/IR illuminates and images samples a few mm in size and acquires the NIR spectrum of each resolved pixel in up to 320 contiguous spectral channels. The goal of this instrument is to analyze in situ the composition of collected samples at almost their grain-size scale, in a non-destructive way. With the chosen spectral range and resolution, a wide variety of constituents can be identified: minerals such as pyroxene and olivine, ferric oxides, hydrated phyllosilicates, sulfates and carbonates, as well as ices and organics. The composition of the various phases within a given sample is a critical record of its formation and evolution. Coupled with the mapping information, it provides unique clues for describing the history of the parent body (planet, satellite or small body). In particular, the capability to identify hydrated grains and to characterize their adjacent phases has huge potential in the search for possible bio-relics.
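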
Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted
2012-12-01
We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to perform lifetime measurements using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and a better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.
Optical Scatter Imaging with a digital micromirror device.
Zheng, Jing-Yi; Pasternack, Robert M; Boustany, Nada N
2009-10-26
We previously developed Optical Scatter Imaging (OSI) as a method that combines light scattering spectroscopy with microscopic imaging to probe local particle size in situ. Using a variable-diameter iris as a Fourier spatial filter, the technique consisted of collecting images that encoded the intensity ratio of wide-to-narrow angle scatter at each pixel in the full field of view. In this paper, we replace the variable-diameter Fourier filter with a digital micromirror device (DMD) to extend our assessment of morphology to the characterization of particle shape and orientation. We describe our setup in detail and demonstrate how to eliminate aberrations associated with the placement of the DMD in a conjugate Fourier plane of our microscopic imaging system. Using bacteria and polystyrene spheres, we show how this system can be used to assess particle aspect ratio even when imaged at low resolution. We also show the feasibility of detecting alterations in organelle aspect ratio in situ within living cells. This improved OSI system could be further developed to automate morphological quantification and sorting of non-spherical particles in situ.
Influence of imaging resolution on color fidelity in digital archiving.
Zhang, Pengchang; Toque, Jay Arre; Ide-Ektessabi, Ari
2015-11-01
Color fidelity is of paramount importance in digital archiving. In this paper, the relationship between color fidelity and imaging resolution was explored by calculating the color difference of an IT8.7/2 color chart with a CIELAB color difference formula for scanning and simulation images. Microscopic spatial sampling was used in selecting the image pixels for the calculations to highlight the loss of color information. A ratio, called the relative imaging definition (RID), was defined to express the correlation between image resolution and color fidelity. The results show that in order for color differences to remain unrecognizable, the imaging resolution should be at least 10 times higher than the physical dimension of the smallest feature in the object being studied.
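The abstract above refers to "a CIELAB color difference formula"; the simplest instance is the CIE76 Euclidean distance, sketched here with invented L*a*b* triplets (the paper may well use a more elaborate formula such as CIEDE2000):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two
    CIELAB (L*, a*, b*) triplets."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical patch colours from a scan vs. a lower-resolution simulation
dE = delta_e_ab((50.0, 10.0, -5.0), (52.0, 12.0, -5.0))
```

A ΔE around 1 is commonly cited as the threshold of a just-noticeable difference, which is the kind of criterion the RID ratio in the paper is built on.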
NASA Astrophysics Data System (ADS)
Hayashi, Shinichi; Takimoto, Shinichi; Hashimoto, Takeshi
2007-02-01
Coherent anti-Stokes Raman scattering (CARS) microscopy, which can produce images of specific molecules without staining, has attracted the attention of researchers, as it matches the need for molecular imaging and pathway analysis of live cells. In particular, there have been an increasing number of CARS experimental results regarding lipids in live cells, which cannot be fluorescently tagged while keeping the cells alive. One of the important applications of lipid research is for the metabolic syndrome. Since the metabolic syndrome is said to be related to the lipids in lipocytes, blood, arterial vessels, and so on, the CARS technique is expected to find application in this field. However, CARS microscopy requires a pair of picosecond laser pulses, which overlap both temporally and spatially. This makes the optical adjustments of a CARS microscope challenging. The authors developed a CARS unit that includes optics for easy and stable adjustment of the overlap of these laser pulses. Adding the CARS unit to a laser scanning microscope provides CARS images of a high signal-to-noise ratio, with an acquisition rate as high as 2 microseconds per pixel. Thus, images of fast-moving lipid droplets in Hela cells were obtained.
Doi, Ryoichi; Arif, Chusnul
2014-01-01
Red-green-blue (RGB) channels of RGB digital photographs were loaded with luminosity-adjusted R, G, and completely white grayscale images, respectively (RGwhtB method), or R, G, and R + G (RGB yellow) grayscale images, respectively (RGrgbyB method), to adjust the brightness of the entire area of multi-temporally acquired color digital photographs of a rice canopy. From the RGwhtB or RGrgbyB pseudocolor image, cyan, magenta, CMYK yellow, black, L*, a*, and b* grayscale images were prepared. Using these grayscale images and R, G, and RGB yellow grayscale images, the luminosity-adjusted pixels of the canopy photographs were statistically clustered. With the RGrgbyB and the RGwhtB methods, seven and five major color clusters were given, respectively. The RGrgbyB method showed clear differences among three rice growth stages, and the vegetative stage was further divided into two substages. The RGwhtB method could not clearly discriminate between the second vegetative and midseason stages. The relative advantages of the RGrgbyB method were attributed to the R, G, B, magenta, yellow, L*, and a* grayscale images that contained richer information to show the colorimetrical differences among objects than those of the RGwhtB method. The comparison of rice canopy colors at different time points was enabled by the pseudocolor imaging method. PMID:25302325
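The RGrgbyB composition described above loads the R and G grayscale planes into the R and G channels and their sum (RGB yellow) into the B channel. A minimal sketch, with the saturation clip at 255 being our assumption for handling overflow:

```python
import numpy as np

def rgrgby_pseudocolor(r, g):
    """Compose an RGrgbyB pseudocolour image: R and G grayscale planes in
    the R and G channels, R + G (RGB yellow) in the B channel."""
    b = np.clip(r.astype(np.uint16) + g.astype(np.uint16), 0, 255).astype(np.uint8)
    return np.dstack([r, g, b])

r = np.full((2, 2), 100, dtype=np.uint8)
g = np.full((2, 2), 200, dtype=np.uint8)
img = rgrgby_pseudocolor(r, g)   # B channel saturates at 255 here
```

From such a pseudocolour image, the cyan, magenta, yellow, black, L*, a*, and b* planes used for clustering can be derived with standard colour-space conversions.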
Very-large-area CCD image sensors: concept and cost-effective research
NASA Astrophysics Data System (ADS)
Bogaart, E. W.; Peters, I. M.; Kleimann, A. C.; Manoury, E. J. P.; Klaassens, W.; de Laat, W. T. F. M.; Draijer, C.; Frost, R.; Bosiers, J. T.
2009-01-01
A new-generation full-frame 36×48 mm2 48Mp CCD image sensor with vertical anti-blooming for professional digital still camera applications has been developed by means of the so-called building block concept. The 48Mp devices are formed by stitching 1k×1k building blocks with 6.0 µm pixel pitch in a 6×8 (h×v) format. This concept allows us to design four large-area (48Mp) and sixty-two basic (1Mp) devices per 6" wafer. The basic image sensor is kept relatively small in order to obtain data from many devices: evaluation of basic parameters such as the image pixel and on-chip amplifier provides statistical data from a limited number of wafers. The large-area devices, in turn, are evaluated for aspects typical of large-sensor operation and performance, such as the charge transport efficiency. Combined with the use of multi-layer reticles, this makes the sensor development cost-effective for prototyping. Optimisation of the sensor design and technology has resulted in a pixel charge capacity of 58 ke- and significantly reduced readout noise (12 electrons at 25 MHz pixel rate, after CDS); hence, a dynamic range of 73 dB is obtained. Microlens and stack optimisation resulted in an excellent angular response that meets wide-angle photography demands.
Wu, L C; D'Amelio, F; Fox, R A; Polyakov, I; Daunton, N G
1997-06-06
The present report describes a desktop computer-based method for the quantitative assessment of the area occupied by immunoreactive terminals in close apposition to nerve cells in relation to the perimeter of the cell soma. This method is based on Fast Fourier Transform (FFT) routines incorporated in NIH-Image public domain software. Pyramidal cells of layer V of the somatosensory cortex outlined by GABA immunolabeled terminals were chosen for our analysis. A Leitz Diaplan light microscope was employed for the visualization of the sections. A Sierra Scientific Model 4030 CCD camera was used to capture the images into a Macintosh Centris 650 computer. After preprocessing, filtering was performed on the power spectrum in the frequency domain produced by the FFT operation. An inverse FFT with filter procedure was employed to restore the images to the spatial domain. Pasting of the original image to the transformed one using a Boolean logic operation called 'AND'ing produced an image with the terminals enhanced. This procedure allowed the creation of a binary image using a well-defined threshold of 128. Thus, the terminal area appears in black against a white background. This methodology provides an objective means of measurement of area by counting the total number of pixels occupied by immunoreactive terminals in light microscopic sections in which the difficulties of labeling intensity, size, shape and numerical density of terminals are avoided.
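The pipeline above (FFT, frequency-domain filter, inverse FFT, 'AND'ing with the original, threshold at 128) can be sketched as follows. The blob image and the low-pass mask are invented; the bitwise AND mirrors NIH Image's behaviour on 8-bit images, which is our reading of the paper's "AND"ing step.

```python
import numpy as np

def fft_enhance_and_binarize(image, keep, threshold=128):
    """Filter an 8-bit image in the frequency domain, bitwise-AND the
    filtered result with the original to enhance the terminals, then
    binarize at the fixed threshold of 128."""
    F = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * keep)))
    filtered = np.clip(filtered, 0, 255).astype(np.uint8)
    combined = np.bitwise_and(image, filtered)
    return np.where(combined >= threshold, 255, 0).astype(np.uint8)

img = np.zeros((16, 16), dtype=np.uint8)
img[6:10, 6:10] = 200                          # bright terminal-like blob
ky, kx = np.indices(img.shape) - 8             # frequencies after fftshift
keep = (np.hypot(ky, kx) <= 7).astype(float)   # mild low-pass mask
binary = fft_enhance_and_binarize(img, keep)
area_px = int((binary == 255).sum())           # terminal area in pixels
```

Counting the thresholded pixels gives the area measure the report describes, independent of labeling intensity or terminal shape.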
A multi-scale segmentation approach to filling gaps in Landsat ETM+ SLC-off images
Maxwell, S.K.; Schmidt, Gail L.; Storey, James C.
2007-01-01
On 31 May 2003, the Landsat Enhanced Thematic Mapper Plus (ETM+) Scan Line Corrector (SLC) failed, causing the scanning pattern to exhibit wedge-shaped scan-to-scan gaps. We developed a method that uses coincident spectral data to fill the image gaps. This method uses a multi-scale segment model, derived from a previous Landsat SLC-on image (an image acquired prior to the SLC failure), to guide the spectral interpolation across the gaps in SLC-off images (images acquired after the SLC failure). This paper describes the process used to generate the segment model, provides details of the gap-fill algorithm used in deriving the segment-based gap-fill product, and presents the results of the gap-fill process applied to grassland, cropland, and forest landscapes. Our results indicate this product will be useful for a wide variety of applications, including regional-scale studies, general land cover mapping (e.g. forest, urban, and grass), crop-specific mapping and monitoring, and visual assessments. Applications that need to be cautious when using pixels in the gap areas include any that require per-pixel accuracy, such as urban characterization or impervious surface mapping; applications that use texture to characterize landscape features; and applications that require accurate measurements of small or narrow landscape features such as roads, farmsteads, and riparian areas.
Tumor segmentation of multi-echo MR T2-weighted images with morphological operators
NASA Astrophysics Data System (ADS)
Torres, W.; Martín-Landrove, M.; Paluszny, M.; Figueroa, G.; Padilla, G.
2009-02-01
In the present work an automatic brain tumor segmentation procedure based on mathematical morphology is proposed. The approach considers sequences of eight multi-echo MR T2-weighted images. The relaxation time T2 characterizes the relaxation of water protons in the brain tissue: white matter, gray matter, cerebrospinal fluid (CSF) or pathological tissue. Image data are initially regularized by the application of a log-convex filter in order to adjust their geometrical properties to those of noiseless data, which exhibit monotonically decreasing convex behavior. The regularized data are then analyzed by means of an 8-dimensional morphological eccentricity filter. In a first stage, the filter was used for the spatial homogenization of the tissues in the image, replacing each pixel by the most representative pixel within its structuring element, i.e. the one which exhibits the minimum total distance to all members in the structuring element. On the filtered images, the relaxation time T2 is estimated by means of a least-squares regression algorithm and the histogram of T2 is determined. The T2 histogram was partitioned using the watershed morphological operator; relaxation time classes were established and used for tissue classification and segmentation of the image. The method was validated on 15 sets of MRI data with excellent results.
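The per-pixel least-squares T2 estimate can be sketched from the standard mono-exponential decay model S(t) = S0·exp(-t/T2): taking logs makes the fit linear in the echo time. The echo spacing and tissue T2 below are synthetic illustration values.

```python
import numpy as np

def estimate_t2(echo_times, signals):
    """Per-pixel T2 from a multi-echo decay S(t) = S0 * exp(-t / T2),
    via linear regression on log(S) against echo time."""
    slope, _ = np.polyfit(echo_times, np.log(signals), 1)
    return -1.0 / slope

te = np.arange(1, 9) * 20.0             # eight echoes, 20 ms apart (synthetic)
sig = 1000.0 * np.exp(-te / 80.0)       # synthetic tissue with T2 = 80 ms
t2 = estimate_t2(te, sig)
```

Applying this to every pixel of the eight filtered echo images yields the T2 map whose histogram is then partitioned by the watershed operator.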
Illumination Invariant Change Detection (iicd): from Earth to Mars
NASA Astrophysics Data System (ADS)
Wan, X.; Liu, J.; Qin, M.; Li, S. Y.
2018-04-01
Multi-temporal Earth Observation and Mars orbital imagery data with frequent repeat coverage provide great capability for planetary surface change detection. When comparing two images taken at different times of day or in different seasons for change detection, the variation of topographic shades and shadows caused by the change of sunlight angle can be so significant that it overwhelms the real object and environmental changes, making automatic detection unreliable. An effective change detection algorithm therefore has to be robust to illumination variation. This paper presents our research on developing and testing an Illumination Invariant Change Detection (IICD) method based on the robustness of phase correlation (PC) to the variation of solar illumination for image matching. The IICD is based on two key functions: i) initial change detection based on a saliency map derived from pixel-wise dense PC matching and ii) change quantization, which combines change type identification, motion estimation and precise appearance change identification. Experiments using multi-temporal Landsat 7 ETM+ satellite images, RapidEye satellite images and Mars HiRISE images demonstrate that our frequency-based image matching method can reach sub-pixel accuracy, and thus the proposed IICD method can effectively detect and precisely segment large-scale changes such as landslides as well as small object changes such as a Mars rover, under daily and seasonal sunlight changes.
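The phase correlation at the heart of the method locates a translation from the peak of the inverse FFT of the normalized cross-power spectrum; because only the spectral phase is kept, intensity changes (such as illumination) largely cancel. A minimal integer-pixel sketch (sub-pixel refinement, as used in the paper, would interpolate around the peak):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer-pixel translation of b relative to a from the
    peak of the inverse FFT of the normalized cross-power spectrum."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cps = Fb * np.conj(Fa)
    cps /= np.abs(cps) + 1e-12          # keep only the phase
    corr = np.real(np.fft.ifft2(cps))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape                      # unwrap shifts past the half-size
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(1)
a = rng.random((32, 32))
b = np.roll(a, shift=(3, -5), axis=(0, 1))   # b is a shifted by (3, -5)
shift = phase_correlation(a, b)
```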
Open-top selective plane illumination microscope for conventionally mounted specimens.
McGorty, Ryan; Liu, Harrison; Kamiyama, Daichi; Dong, Zhiqiang; Guo, Su; Huang, Bo
2015-06-15
We have developed a new open-top selective plane illumination microscope (SPIM) compatible with microfluidic devices, multi-well plates, and other sample formats used in conventional inverted microscopy. Its key element is a water prism that compensates for the aberrations introduced when imaging at 45 degrees through a coverglass. We have demonstrated its unique high-content imaging capability by recording Drosophila embryo development in environmentally-controlled microfluidic channels and imaging zebrafish embryos in 96-well plates. We have also shown the imaging of C. elegans and moving Drosophila larvae on coverslips.
Intelligent identification of remnant ridge edges in region west of Yongxing Island, South China Sea
NASA Astrophysics Data System (ADS)
Wang, Weiwei; Guo, Jing; Cai, Guanqiang; Wang, Dawei
2018-02-01
Edge detection enables identification of geomorphologic unit boundaries and thus assists with geomorphological mapping. In this paper, an intelligent edge identification method is proposed in which image processing techniques are applied to multi-beam bathymetry data. To accomplish this, a color image is generated from the bathymetry, and a weighted method is used to convert the color image to a gray image. As the quality of the image has a significant influence on edge detection, different filter methods are applied to the gray image for de-noising. The peak signal-to-noise ratio and mean square error are calculated to evaluate which filter method is most appropriate for depth image filtering, and the edge is subsequently detected using an image binarization method. Traditional image binarization methods cannot manage the complicated uneven seafloor, and therefore a binarization method is proposed that is based on the differences between image pixel values; the appropriate threshold for image binarization is estimated according to the probability distribution of pixel value differences between two adjacent pixels in the horizontal and vertical directions, respectively. Finally, an eight-neighborhood frame is adopted to thin the binary image, connect intermittent edges, and implement contour extraction. Experimental results show that the method described here can recognize the main boundaries of geomorphologic units. In addition, the proposed automatic edge identification method avoids subjective judgment, and reduces time and labor costs.
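The difference-based binarization above can be sketched as follows: compute adjacent-pixel differences in the horizontal and vertical directions, pick a threshold from their empirical distribution, and mark pixels whose differences exceed it. Using a fixed percentile is our simplified reading of the paper's probability-based threshold choice.

```python
import numpy as np

def difference_edge_binarize(gray, pct=90):
    """Mark a pixel as edge when its difference to the adjacent pixel
    (horizontal or vertical) exceeds a threshold taken from the empirical
    distribution of adjacent-pixel differences (a fixed percentile here)."""
    dh = np.abs(np.diff(gray, axis=1))      # horizontal neighbour differences
    dv = np.abs(np.diff(gray, axis=0))      # vertical neighbour differences
    thr = np.percentile(np.concatenate([dh.ravel(), dv.ravel()]), pct)
    edges = np.zeros(gray.shape, dtype=bool)
    edges[:, :-1] |= dh > thr
    edges[:-1, :] |= dv > thr
    return edges

gray = np.zeros((10, 10))
gray[:, 5:] = 100.0                         # a single vertical step "scarp"
edges = difference_edge_binarize(gray)
```

On a real depth image, the binary result would then be thinned with the eight-neighborhood frame before contour extraction.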
Improving multiphoton STED nanoscopy with separation of photons by LIfetime Tuning (SPLIT)
NASA Astrophysics Data System (ADS)
Coto Hernández, Iván; Lanzano, Luca; Castello, Marco; Jowett, Nate; Tortarolo, Giorgio; Diaspro, Alberto; Vicidomini, Giuseppe
2018-02-01
Stimulated emission depletion (STED) microscopy is a powerful bio-imaging technique since it provides molecular spatial resolution whilst preserving the most important assets of fluorescence microscopy. When combined with two-photon excitation (2PE) microscopy (2PE-STED), the sub-diffraction imaging ability of STED microscopy can also be achieved on thick biological samples. The most straightforward implementation of 2PE-STED microscopy is obtained by introducing a STED beam operating in continuous wave (CW) into a conventional Ti:Sapphire based 2PE microscope (2PE-CW-STED). In this implementation, an effective resolution enhancement is mainly obtained by implementing a time-gated detection scheme, which however can drastically reduce the signal-to-noise/background ratio of the final image. Herein, we combine the lifetime tuning (SPLIT) approach with 2PE-CW-STED to overcome this limitation. The SPLIT approach is employed to discard fluorescence photons lacking super-resolution information by means of a pixel-by-pixel phasor approach. Combining the SPLIT approach with image deconvolution further optimizes the signal-to-noise/background ratio.
A Simple Encryption Algorithm for Quantum Color Image
NASA Astrophysics Data System (ADS)
Li, Panchi; Zhao, Ya
2017-06-01
In this paper, a simple encryption scheme for quantum color images is proposed. First, a color image is transformed into a quantum superposition state by employing NEQR (novel enhanced quantum representation), where the R, G, B values of every pixel in a 24-bit RGB true color image are represented by 24 single-qubit basis states, with 8 qubits per value. Then, each of these 24 qubits is transformed from a basis state into a balanced superposition state by employing controlled rotation gates. At this point, the gray-scale values of R, G, and B of every pixel are in a balanced superposition of 2^24 multi-qubit basis states. After measurement, the whole image is uniform white noise, which does not provide any information. Decryption is the reverse process of encryption. Experimental results on a classical computer show that the proposed encryption scheme has good security.
Multi-pass encoding of hyperspectral imagery with spectral quality control
NASA Astrophysics Data System (ADS)
Wasson, Steven; Walker, William
2015-05-01
Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loeve transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
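The spectral angle used above as the quality-assessment function can be sketched in a few lines of NumPy. This is an illustrative implementation of the standard spectral-angle measure, not the authors' encoder; note that the angle is invariant to a uniform brightness scaling of a spectrum, which is why it captures spectral (shape) distortion rather than intensity distortion.

```python
import numpy as np

def spectral_angle(a, b):
    # Angle (radians) between two pixel spectra; 0 means identical spectral shape
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

original = np.array([0.2, 0.5, 0.9, 0.4])      # toy 4-band pixel spectrum
scaled   = 3.0 * original                      # same shape, different brightness
noisy    = original + np.array([0.1, -0.1, 0.05, 0.2])

assert np.isclose(spectral_angle(original, scaled), 0.0)   # scale-invariant
assert spectral_angle(original, noisy) > 0.0               # shape change detected
```

A multi-pass encoder can evaluate this measure between original and reconstructed spectra after each pass and tighten the rate allocation until the angle stays below the user-specified threshold.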
Calibrations for a MCAO Imaging System
NASA Astrophysics Data System (ADS)
Hibon, Pascale; B. Neichel; V. Garrel; R. Carrasco
2017-09-01
GeMS, the Gemini Multi-conjugate adaptive optics System installed at the Gemini South telescope (Cerro Pachon, Chile), has been delivering science data since the beginning of 2013. GeMS uses the Multi-Conjugate Adaptive Optics (MCAO) technique, which dramatically increases the corrected field of view (FOV) compared to classical Single Conjugate Adaptive Optics (SCAO) systems. It is the first sodium-based multi-Laser Guide Star (LGS) adaptive optics system. It has been designed to feed two science instruments: GSAOI, a 4k×4k NIR imager covering 85"×85" with a 0.02" pixel scale, and Flamingos-2, a NIR multi-object spectrograph. We present here an overview of the calibrations necessary for reducing and analysing the science datasets obtained with GeMS+GSAOI.
Improved optical flow motion estimation for digital image stabilization
NASA Astrophysics Data System (ADS)
Lai, Lijun; Xu, Zhiyong; Zhang, Xuyao
2015-11-01
Optical flow is the instantaneous motion vector at each pixel of an image frame at a given time instant. The gradient-based approach to optical flow computation fails when the inter-frame motion is too large. To alleviate this problem, we incorporate the algorithm into a pyramidal multi-resolution coarse-to-fine search strategy: a pyramid of multi-resolution images is built; the inter-frame affine parameters are obtained iteratively from the highest (coarsest) level down to the lowest (finest) level; and subsequent frames are compensated back to the first frame to obtain the stabilized sequence. Experimental results demonstrate that the proposed method performs well in global motion estimation.
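The coarse-to-fine idea can be sketched with a toy NumPy example. This is not the authors' gradient-based flow estimator: for brevity, the per-level motion model is simplified to a global integer shift found by brute-force search, but the pyramid logic (estimate at the coarsest level, double the estimate, refine at the next level) is the same.

```python
import numpy as np

def downsample(img):
    # 2x2 block average to build the next pyramid level
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])

def local_shift(a, b, guess, radius=2):
    # Brute-force integer shift of b relative to a, searched near `guess`
    best, best_err = guess, np.inf
    for dy in range(guess[0] - radius, guess[0] + radius + 1):
        for dx in range(guess[1] - radius, guess[1] + radius + 1):
            shifted = np.roll(np.roll(b, -dy, axis=0), -dx, axis=1)
            err = np.mean((a - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def coarse_to_fine_shift(a, b, levels=3):
    # Build the pyramid, then refine the shift from coarsest to finest level
    pyr = [(a, b)]
    for _ in range(levels - 1):
        pyr.append((downsample(pyr[-1][0]), downsample(pyr[-1][1])))
    shift = (0, 0)
    for la, lb in reversed(pyr):               # coarsest level first
        shift = (2 * shift[0], 2 * shift[1])   # scale estimate up one level
        shift = local_shift(la, lb, shift)
    return shift

# Synthetic pair: a smooth frame and a copy shifted by (6, -4) pixels
y, x = np.mgrid[0:64, 0:64]
a = np.sin(0.2 * x + 0.35 * y) + np.cos(0.15 * y)
b = np.roll(np.roll(a, 6, axis=0), -4, axis=1)
assert coarse_to_fine_shift(a, b) == (6, -4)
```

The small per-level search radius is what makes the scheme cheap: a shift far larger than the radius is still recovered, because most of it is found at the coarse levels where it appears small.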
Video auto stitching in multicamera surveillance system
NASA Astrophysics Data System (ADS)
He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang
2012-01-01
This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which only a few selected master cameras need to be calibrated. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
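The overlapping-pixel step can be sketched in NumPy. This is an illustration of how a 3x3 homography identifies which source pixels land inside the destination frame, not the paper's full SURF-plus-blending pipeline; the example homography is a pure translation chosen so the expected overlap is obvious.

```python
import numpy as np

def warp_points(H, pts):
    # Apply a 3x3 homography to Nx2 pixel coordinates (homogeneous division)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def overlap_mask(H, shape_src, shape_dst):
    # Mark source pixels whose warped positions fall inside the destination frame
    ys, xs = np.mgrid[0:shape_src[0], 0:shape_src[1]]
    pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    m = warp_points(H, pts)
    inside = (m[:, 0] >= 0) & (m[:, 0] < shape_dst[1]) & \
             (m[:, 1] >= 0) & (m[:, 1] < shape_dst[0])
    return inside.reshape(shape_src)

# Example homography: pure translation 60 pixels to the right
H = np.array([[1.0, 0.0, 60.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
mask = overlap_mask(H, (80, 100), (80, 100))
# Columns 0..39 map to 60..99 (inside); columns 40+ map off the right edge
assert mask[:, :40].all() and not mask[:, 40:].any()
```

In the full system the homography would be estimated from the SURF correspondences (e.g. by RANSAC), and the overlap mask defines the seam region where blending is applied.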
Going Deeper With Contextual CNN for Hyperspectral Image Classification.
Lee, Hyungtae; Kwon, Heesung
2017-10-01
In this paper, we describe a novel deep convolutional neural network (CNN) that is deeper and wider than other existing deep networks for hyperspectral image classification. Unlike current state-of-the-art approaches in CNN-based hyperspectral image classification, the proposed network, called contextual deep CNN, can optimally explore local contextual interactions by jointly exploiting local spatio-spectral relationships of neighboring individual pixel vectors. The joint exploitation of the spatio-spectral information is achieved by a multi-scale convolutional filter bank used as an initial component of the proposed CNN pipeline. The initial spatial and spectral feature maps obtained from the multi-scale filter bank are then combined together to form a joint spatio-spectral feature map. The joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through a fully convolutional network that eventually predicts the corresponding label of each pixel vector. The proposed approach is tested on three benchmark data sets: the Indian Pines data set, the Salinas data set, and the University of Pavia data set. Performance comparison shows enhanced classification performance of the proposed approach over the current state-of-the-art on the three data sets.
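The multi-scale filter bank can be illustrated with a small NumPy sketch. The kernels below are toy averaging filters standing in for the learned convolutional filters of the paper, and the example is 2-D single-channel rather than spatio-spectral; what it shows is the structural idea that filters of several spatial sizes are applied to the same input and their outputs stacked into one joint feature map.

```python
import numpy as np

def conv2d_same(img, k):
    # 2D convolution with zero padding so the output keeps the input size
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
    return out

def multiscale_bank(img, sizes=(1, 3, 5)):
    # One (toy, averaging) kernel per scale; outputs stacked as feature channels
    maps = [conv2d_same(img, np.full((s, s), 1.0 / (s * s))) for s in sizes]
    return np.stack(maps, axis=0)          # shape: (scales, H, W)

img = np.arange(36, dtype=float).reshape(6, 6)
features = multiscale_bank(img)
assert features.shape == (3, 6, 6)
assert np.allclose(features[0], img)       # the 1x1 kernel passes input through
```

In the paper's network the stacked maps feed a fully convolutional pipeline that predicts a label per pixel vector; here the stacking step alone is demonstrated.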
A Multispectral Micro-Imager for Lunar Field Geology
NASA Technical Reports Server (NTRS)
Nunez, Jorge; Farmer, Jack; Sellar, Glenn; Allen, Carlton
2009-01-01
Field geologists routinely assign rocks to one of three basic petrogenetic categories (igneous, sedimentary or metamorphic) based on microtextural and mineralogical information acquired with a simple magnifying lens. Indeed, such observations often comprise the core of interpretations of geological processes and history. The Multispectral Microscopic Imager (MMI) uses multi-wavelength light-emitting diodes (LEDs) and a substrate-removed InGaAs focal-plane array to create multispectral, microscale reflectance images of geological samples (FOV 32 x 40 mm). Each pixel (62.5 microns) of an image comprises 21 spectral bands that extend from 470 to 1750 nm, enabling the discrimination of a wide variety of rock-forming minerals, especially Fe-bearing phases. MMI images provide crucial context for in situ robotic analyses using other onboard analytical instruments (e.g. XRD), or for the selection of return samples for analysis in terrestrial labs. To further assess the value of the MMI as a tool for lunar exploration, we used a field-portable, tripod-mounted version of the MMI to image a variety of Apollo samples housed at the Lunar Experiment Laboratory, NASA's Johnson Space Center. MMI images faithfully resolved the microtextural features of the samples, while the application of ENVI-based spectral end-member mapping methods revealed the distribution of Fe-bearing mineral phases (olivine, pyroxene and magnetite), along with plagioclase feldspars, within the samples. The samples included a broad range of lithologies and grain sizes. Our MMI-based petrogenetic interpretations compared favorably with the thin-section-based descriptions published in the Lunar Sample Compendium, demonstrating the value of MMI images for astronaut- and rover-mediated lunar exploration.
Implementation of total focusing method for phased array ultrasonic imaging on FPGA
NASA Astrophysics Data System (ADS)
Guo, JianQiang; Li, Xi; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke
2015-02-01
This paper describes a multi-FPGA imaging system dedicated to real-time imaging using the Total Focusing Method (TFM) and Full Matrix Capture (FMC). The system was entirely described in the Verilog HDL language and implemented on an Altera Stratix IV GX FPGA development board. The algorithm proceeds as follows: establish an image coordinate system and divide it into a grid; for each focal point, calculate the complete acoustic path between each transmitting and receiving array element and transform it into an index value; use that index to fetch sound pressure values from ROM and superimpose them to obtain the pixel value of one focal point; and repeat for all focal points to form the final image. The imaging results show that this algorithm produces defect images with a high SNR, and the FPGA's parallel processing capability provides high-speed performance, so the system delivers a complete, well-performing imaging solution.
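The delay-and-sum arithmetic that the FPGA parallelizes can be sketched in NumPy. This is a floating-point software illustration of TFM focusing on synthetic FMC data, not the Verilog implementation; the array geometry, sound speed, and sampling rate below are assumed values chosen for the example.

```python
import numpy as np

c = 1500.0                                 # speed of sound, m/s (assumed)
fs = 25e6                                  # sampling rate, Hz (assumed)
elements = np.linspace(-8e-3, 8e-3, 16)    # 16-element linear array on the surface

def tfm_pixel(fmc, x, z):
    # Sum the FMC A-scans at the round-trip delay for image point (x, z)
    value = 0.0
    for tx, xt in enumerate(elements):
        d_tx = np.hypot(x - xt, z)
        for rx, xr in enumerate(elements):
            d_rx = np.hypot(x - xr, z)
            idx = int(round((d_tx + d_rx) / c * fs))   # acoustic path -> sample index
            if idx < fmc.shape[2]:
                value += fmc[tx, rx, idx]
    return value

# Synthetic FMC data: one point scatterer at (0, 20 mm), unit-amplitude echoes
n, nt = len(elements), 2048
fmc = np.zeros((n, n, nt))
for tx, xt in enumerate(elements):
    for rx, xr in enumerate(elements):
        t = (np.hypot(0 - xt, 20e-3) + np.hypot(0 - xr, 20e-3)) / c
        fmc[tx, rx, int(round(t * fs))] = 1.0

# The focused sum is largest at the true scatterer position
assert tfm_pixel(fmc, 0.0, 20e-3) > tfm_pixel(fmc, 5e-3, 20e-3)
```

On the FPGA the same path-length-to-index computation is precomputed or pipelined so that all focal points are evaluated in parallel.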
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favazza, C; Yu, L; Leng, S
2015-06-15
Purpose: To investigate using multiple CT image slices from a single acquisition as independent training images for a channelized Hotelling observer (CHO) model, in order to reduce the number of repeated scans required for CHO-based CT image quality assessment. Methods: We applied a previously validated CHO model to detect low-contrast disk objects formed from cross-sectional images of three epoxy-resin-based rods (diameters: 3, 5, and 9 mm; length: ~5 cm). The rods were submerged in a 35 × 25 cm² iodine-doped, water-filled phantom, yielding −15 HU object contrast. The phantom was scanned 100 times with and without the rods present. Scan and reconstruction parameters included: 5 mm slice thickness at 0.5 mm intervals, 120 kV, 480 Quality Reference mAs, and a 128-slice scanner. The CHO's detectability index was evaluated as a function of factors related to incorporating multi-slice image data: object misalignment along the z-axis, inter-slice pixel correlation, and number of unique slice locations. In each case, the CHO training set was fixed at 100 images. Results: Artificially shifting the object's center position by as much as 3 pixels in any direction relative to the Gabor channel filters had an insignificant impact on object detectability. An inter-slice pixel correlation of >~0.2 yielded a positive bias in the model's performance. Incorporating multi-slice image data yielded a slight negative bias in detectability with increasing number of slices, likely due to physical variations in the objects. However, inclusion of image data from up to 5 slice locations yielded detectability indices within measurement error of the single-slice value. Conclusion: For the investigated model and task, incorporating image data from 5 different slice locations at intervals of at least 5 mm into the CHO model yielded detectability indices within measurement error of the single-slice value. Consequently, this methodology would result in a 5-fold reduction in the number of image acquisitions.
This project was supported by National Institutes of Health grants R01 EB017095 and U01 EB017185 from the National Institute of Biomedical Imaging and Bioengineering.
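The detectability index at the heart of the CHO methodology can be sketched with a toy NumPy example. The channel matrix below is random noise standing in for Gabor channels, the images are 1-D vectors, and the signal is a synthetic "disk" profile; all of these are illustrative assumptions, not the authors' validated model. What the sketch shows is the standard CHO computation: project training images onto channels, estimate the channel-space covariance and mean difference, and form the Hotelling detectability index.

```python
import numpy as np

rng = np.random.default_rng(0)
npix, nchan, ntrain = 64, 6, 100

# Toy channel matrix standing in for Gabor channels (columns = channels)
U = rng.standard_normal((npix, nchan))

signal = np.zeros(npix)
signal[28:36] = 0.5                      # low-contrast "disk" profile (assumed)

# Training images: noise-only and signal-plus-noise
absent  = rng.standard_normal((ntrain, npix))
present = rng.standard_normal((ntrain, npix)) + signal

va, vp = absent @ U, present @ U         # channel outputs of each training image
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))  # pooled channel-space covariance
dv = vp.mean(axis=0) - va.mean(axis=0)   # mean channel-output difference

d2 = dv @ np.linalg.solve(S, dv)         # squared detectability index
d_prime = np.sqrt(d2)
assert d_prime > 0.0
```

The multi-slice question studied in the abstract amounts to asking whether the rows of `absent`/`present` may come from different slice locations of one acquisition rather than from repeated scans.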
Multimodality hard-x-ray imaging of a chromosome with nanoscale spatial resolution
Yan, Hanfei; Nazaretski, Evgeny; Lauer, Kenneth R.; ...
2016-02-05
Here, we developed a scanning hard x-ray microscope using a new class of x-ray nano-focusing optic called a multilayer Laue lens and imaged a chromosome with nanoscale spatial resolution. The combination of the hard x-ray's superior penetration power, high sensitivity to elemental composition, high spatial resolution and quantitative analysis creates a unique tool with capabilities that other microscopy techniques cannot provide. Using this microscope, we simultaneously obtained absorption-, phase-, and fluorescence-contrast images of Pt-stained human chromosome samples. The high spatial resolution of the microscope and its multi-modality imaging capabilities enabled us to observe the internal ultra-structures of a thick chromosome without sectioning it.
Edge detection for optical synthetic aperture based on deep neural network
NASA Astrophysics Data System (ADS)
Tan, Wenjie; Hui, Mei; Liu, Ming; Kong, Lingqin; Dong, Liquan; Zhao, Yuejin
2017-09-01
Synthetic aperture optics systems can meet the demands that next-generation space telescopes be lighter, larger and foldable. However, the boundaries of segmented-aperture systems are much more complex than those of a monolithic aperture. More edge regions mean more imaged edge pixels, which are often mixed and discretized. To achieve high-resolution imaging, it is necessary to identify the gaps between the sub-apertures and the edges of the projected fringes. In this work, we introduce a deep neural network algorithm for edge detection in optical synthetic aperture imaging. According to the detection needs, we constructed image sets from experiments and simulations. Based on MatConvNet, a MATLAB toolbox, we ran the neural network, trained it on the training image set and tested its performance on the validation set. Training was stopped when the test error on the validation set stopped declining. Given an input image, the neighborhood around each pixel is fed into the trained multi-hidden-layer network, which scans the image pixel by pixel and judges whether the center of each input block lies on a fringe edge. We experimented with various pre-processing and post-processing techniques to reveal their influence on edge detection performance. Compared with traditional algorithms and their improvements, our method makes its decision on a much larger neighborhood and is therefore more global and comprehensive. Experiments on more than 2,000 images are also presented to show that our method outperforms classical algorithms in edge detection on optical images.
Study on pixel matching method of the multi-angle observation from airborne AMPR measurements
NASA Astrophysics Data System (ADS)
Hou, Weizhen; Qie, Lili; Li, Zhengqiang; Sun, Xiaobing; Hong, Jin; Chen, Xingfeng; Xu, Hua; Sun, Bin; Wang, Han
2015-10-01
In the along-track scanning mode, the same place along the ground track can be detected by the Advanced Multi-angular Polarized Radiometer (AMPR) at several different scanning angles from -55 to 55 degrees, which provides a possible means of obtaining multi-angular detections for nearby pixels. However, owing to the ground sample spacing and the spatial footprint of each detection, the differing footprint sizes cannot guarantee the spatial matching of partly overlapping pixels, which becomes a bottleneck for the effective use of AMPR's multi-angular information to study aerosol and surface polarized properties. Based on our definition and calculation of the pixel coincidence rate for multi-angular detection, an effective pixel matching method for multi-angle observations is presented to solve the spatial matching problem for airborne AMPR. Each AMPR pixel is assumed to be an ellipse whose major and minor axes depend on the flight attitude and the scanning angle. By defining a coordinate system and origin, latitude and longitude are transformed into Euclidean distances, and the coincidence rate of two nearby ellipses can be calculated. By traversing all ground pixels, those with a high coincidence rate can be selected and merged; with further quality control of the observation data, a ground-pixel dataset with multi-angular detections can thus be obtained and analyzed, supporting the multi-angular and polarized retrieval algorithm research in the next study.
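A coincidence rate between two elliptical footprints can be estimated with a short Monte Carlo sketch. This is an illustration under simplifying assumptions (axis-aligned ellipses, coincidence defined as the fraction of ellipse 1's footprint also covered by ellipse 2), not the paper's exact definition, which accounts for footprint orientation from the flight attitude and scan angle.

```python
import numpy as np

def coincidence_rate(c1, ab1, c2, ab2, n=200_000, seed=0):
    # Fraction of ellipse 1's footprint also covered by ellipse 2,
    # estimated by rejection sampling in ellipse 1's bounding box.
    # c = (x, y) centre; ab = (semi-major, semi-minor), axis-aligned.
    rng = np.random.default_rng(seed)
    x = rng.uniform(c1[0] - ab1[0], c1[0] + ab1[0], n)
    y = rng.uniform(c1[1] - ab1[1], c1[1] + ab1[1], n)
    in1 = ((x - c1[0]) / ab1[0]) ** 2 + ((y - c1[1]) / ab1[1]) ** 2 <= 1.0
    in2 = ((x - c2[0]) / ab2[0]) ** 2 + ((y - c2[1]) / ab2[1]) ** 2 <= 1.0
    return (in1 & in2).sum() / in1.sum()

# Identical footprints overlap completely; distant footprints not at all
assert coincidence_rate((0, 0), (2, 1), (0, 0), (2, 1)) == 1.0
assert coincidence_rate((0, 0), (2, 1), (10, 0), (2, 1)) == 0.0
```

Pixels whose coincidence rate exceeds a chosen threshold would then be merged into one multi-angular ground pixel.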
Yang, Hao; MacLaren, Ian; Jones, Lewys; ...
2017-04-01
Recent development in fast pixelated detector technology has allowed a two-dimensional diffraction pattern to be recorded at every probe position of a two-dimensional raster scan in a scanning transmission electron microscope (STEM), forming an information-rich four-dimensional (4D) dataset. Electron ptychography has been shown to enable efficient coherent phase imaging of weakly scattering objects from a 4D dataset recorded using a focused electron probe, which is optimised for simultaneous incoherent Z-contrast imaging and spectroscopy in STEM. Thus coherent phase-contrast and incoherent Z-contrast imaging modes can be efficiently combined to provide good sensitivity to both light and heavy elements at atomic resolution. Here, we explore the application of electron ptychography for atomic-resolution imaging of strongly scattering crystalline specimens, and present experiments on imaging crystalline specimens, including samples containing defects, under dynamical channelling conditions using an aberration-corrected microscope. A ptychographic reconstruction method called Wigner distribution deconvolution (WDD) was implemented. Our experimental and simulation results suggest that ptychography provides a readily interpretable phase image and great sensitivity for imaging light elements at atomic resolution in relatively thin crystalline materials.
Quantitative Immunofluorescence Analysis of Nucleolus-Associated Chromatin.
Dillinger, Stefan; Németh, Attila
2016-01-01
The nuclear distribution of eu- and heterochromatin is nonrandom, heterogeneous, and dynamic, which is mirrored by specific spatiotemporal arrangements of histone posttranslational modifications (PTMs). Here we describe a semiautomated method for the analysis of histone PTM localization patterns within the mammalian nucleus using confocal laser scanning microscope images of fixed, immunofluorescence stained cells as data source. The ImageJ-based process includes the segmentation of the nucleus, furthermore measurements of total fluorescence intensities, the heterogeneity of the staining, and the frequency of the brightest pixels in the region of interest (ROI). In the presented image analysis pipeline, the perinucleolar chromatin is selected as primary ROI, and the nuclear periphery as secondary ROI.
Multi-anode microchannel arrays. [for use in ground-based and spaceborne telescopes
NASA Technical Reports Server (NTRS)
Timothy, J. G.; Mount, G. H.; Bybee, R. L.
1979-01-01
The Multi-Anode Microchannel Arrays (MAMA's) are a family of photoelectric, photon-counting array detectors being developed for use in instruments on both ground-based and space-borne telescopes. These detectors combine high sensitivity and photometric stability with a high-resolution imaging capability. MAMA detectors can be operated in a windowless configuration at extreme-ultraviolet and soft X-ray wavelengths or in a sealed configuration at ultraviolet and visible wavelengths. Prototype MAMA detectors with up to 512 x 512 pixels are now being tested in the laboratory and telescope operation of a simple (10 x 10)-pixel visible-light detector has been initiated. The construction and modes-of-operation of the MAMA detectors are briefly described and performance data are presented.
Argus: a 16-pixel millimeter-wave spectrometer for the Green Bank Telescope
NASA Astrophysics Data System (ADS)
Sieth, Matthew; Devaraj, Kiruthika; Voll, Patricia; Church, Sarah; Gawande, Rohit; Cleary, Kieran; Readhead, Anthony C. S.; Kangaslahti, Pekka; Samoska, Lorene; Gaier, Todd; Goldsmith, Paul F.; Harris, Andrew I.; Gundersen, Joshua O.; Frayer, David; White, Steve; Egan, Dennis; Reeves, Rodrigo
2014-07-01
We report on the development of Argus, a 16-pixel spectrometer that will enable fast astronomical imaging over the 85-116 GHz band. Each pixel includes a compact heterodyne receiver module, which integrates two InP MMIC low-noise amplifiers, a coupled-line bandpass filter and a sub-harmonic Schottky diode mixer. The receiver signals are routed to and from the multi-chip MMIC modules with multilayer high-frequency printed circuit boards, which include LO splitters and IF amplifiers. Microstrip lines on flexible circuitry are used to transport signals between temperature stages. The spectrometer frontend is designed to be scalable, so that the array design can be reconfigured for future instruments with hundreds of pixels. Argus is scheduled to be commissioned at the Robert C. Byrd Green Bank Telescope in late 2014. Preliminary data for the first Argus pixels are presented.
Cameras for digital microscopy.
Spring, Kenneth R
2013-01-01
This chapter reviews the fundamental characteristics of charge-coupled devices (CCDs) and related detectors, outlines the relevant parameters for their use in microscopy, and considers promising recent developments in detector technology. Electronic imaging with a CCD involves three stages: interaction of a photon with the photosensitive surface, storage of the liberated charge, and readout or measurement of the stored charge. The most demanding applications in fluorescence microscopy may require as much as four orders of magnitude greater sensitivity. The image in the present-day light microscope is usually acquired with a CCD camera. The CCD is composed of a large matrix of photosensitive elements (often referred to as "pixels", shorthand for picture elements) that simultaneously capture an image over the entire detector surface. The light-intensity information for each pixel is stored as electronic charge and is converted to an analog voltage by a readout amplifier. This analog voltage is subsequently converted to a numerical value by a digitizer situated on the CCD chip, or very close to it. In complementary metal oxide semiconductor (CMOS) sensors, by contrast, several (three to six) amplifiers are required for each pixel, and to date, uniform images with a homogeneous background have been a problem because of the inherent difficulty of balancing the gain in all of the amplifiers. CMOS sensors also exhibit relatively high noise associated with the requisite high-speed switching. Both of these deficiencies are being addressed, and sensor performance is nearing that required for scientific imaging. Copyright © 1998 Elsevier Inc. All rights reserved.
Yoneyama, Takeshi; Watanabe, Tetsuyo; Kagawa, Hiroyuki; Hayashi, Yutaka; Nakada, Mitsutoshi
2017-03-01
In photodynamic diagnosis using 5-aminolevulinic acid (5-ALA), discrimination between tumor and normal tissue is very important for precise resection. However, it is difficult to distinguish between infiltrating tumor and normal regions in the boundary area. In this study, fluorescence intensity and bright-spot analyses using a confocal microscope are proposed for the precise discrimination between infiltrating tumor and normal regions. From 5-ALA-labelled resected brain tumor tissue, the red fluorescent and marginal regions were sliced for observation under a confocal microscope. Hematoxylin and eosin (H&E) staining was performed on serial slices of the same tissue. Guided by pathological inspection of the H&E slides, the tumor, infiltrating and normal regions on the confocal microscopy images were investigated. From the fluorescence intensity of the image pixels, a histogram of the number of pixels at each intensity was obtained. The sizes and total numbers of fluorescent bright spots were compared between the marginal and normal regions. The fluorescence intensity distribution and average intensity in the tumor differed from those in the normal region, and the probability of differing from the dark background enhanced the distinction between tumor and normal regions. The bright-spot size and number in the infiltrating tumor also differed from those in the normal region. Fluorescence intensity analysis is thus useful for identifying a tumor region, and bright-spot analysis is useful for distinguishing between infiltrating tumor and normal regions. These methods will be important for the precise resection or photodynamic therapy of brain tumors. Copyright © 2016 Elsevier B.V. All rights reserved.
Modeling of Pixelated Detector in SPECT Pinhole Reconstruction.
Feng, Bing; Zeng, Gengsheng L
2014-04-10
A challenge for the pixelated detector is that the detector response of a gamma-ray photon varies with the incident angle and the incident location within a crystal. The normalization map obtained by measuring the flood of a point-source at a large distance can lead to artifacts in reconstructed images. In this work, we investigated a method of generating normalization maps by ray-tracing through the pixelated detector based on the imaging geometry and the photo-peak energy for the specific isotope. The normalization is defined for each pinhole as the normalized detector response for a point-source placed at the focal point of the pinhole. Ray-tracing is used to generate the ideal flood image for a point-source. Each crystal pitch area on the back of the detector is divided into 60 × 60 sub-pixels. Lines are obtained by connecting a point-source to the centers of the sub-pixels inside each crystal pitch area. For each line, ray-tracing starts from the entrance point at the detector face and ends at the center of a sub-pixel on the back of the detector. Only the attenuation by NaI(Tl) crystals along each ray is assumed to contribute directly to the flood image. The attenuation by the silica (SiO2) reflector is also included in the ray-tracing. To calculate the normalization for a pinhole, we need to calculate the ideal flood for a point-source at 360 mm distance (where the point-source was placed for the regular flood measurement) and the ideal flood image for the point-source at the pinhole focal point, together with the flood measurement at 360 mm distance. The normalizations are incorporated in the iterative OSEM reconstruction as a component of the projection matrix. Applications to single-pinhole and multi-pinhole imaging showed that this method greatly reduced the reconstruction artifacts.
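The per-ray attenuation bookkeeping can be sketched with Beer-Lambert arithmetic in NumPy. The attenuation coefficients below are assumed placeholder values, not measured NaI(Tl)/SiO2 coefficients at any particular photo-peak energy; the sketch only illustrates the idea that photons surviving the reflector and then absorbed in the crystal contribute to the ideal flood image.

```python
import numpy as np

MU_NAI  = 0.35   # assumed linear attenuation coefficient of NaI(Tl), 1/mm
MU_SIO2 = 0.05   # assumed coefficient of the SiO2 reflector, 1/mm

def flood_contribution(reflector_mm, crystal_mm):
    # Beer-Lambert along one ray: photons that survive the reflector path
    # and are then absorbed in the crystal contribute to the ideal flood
    survive = np.exp(-MU_SIO2 * reflector_mm)
    absorb  = 1.0 - np.exp(-MU_NAI * crystal_mm)
    return survive * absorb

# A longer crystal path absorbs more, so it contributes more to the flood
assert flood_contribution(0.1, 6.0) > flood_contribution(0.1, 2.0)
```

Summing such contributions over the 60 × 60 sub-pixel rays of each crystal pitch area yields the ideal flood image used to build the normalization map.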
Computational scalability of large size image dissemination
NASA Astrophysics Data System (ADS)
Kooper, Rob; Bajcsy, Peter
2011-01-01
We have investigated the computational scalability of the image pyramid building needed for dissemination of very large image data. Sources of large images include high-resolution microscopes and telescopes, remote sensing and airborne imaging, and high-resolution scanners. The term 'large' is understood from a user perspective: larger than the display, or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150 MB, or about 5000x8000 pixels, with around 200,000 scans in total) and the UIUC library project scanning historical maps from the 17th and 18th centuries (fewer but larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard-drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks were obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
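The size of the pyramid-building job can be estimated with a short sketch of a Deep-Zoom-style pyramid (the tiling scheme Seadragon consumes): the image is halved level by level until it fits one tile, and each level is cut into fixed-size tiles. The 256-pixel tile size is an assumption for illustration.

```python
import math

def pyramid_levels(width, height, tile=256):
    # Deep-Zoom style pyramid: halve the image until it fits in one tile,
    # recording (width, height, tile count) for every level
    levels = []
    w, h = width, height
    while True:
        cols, rows = math.ceil(w / tile), math.ceil(h / tile)
        levels.append((w, h, cols * rows))
        if w <= tile and h <= tile:
            break
        w, h = max(1, math.ceil(w / 2)), max(1, math.ceil(h / 2))
    return levels

# The benchmark map from the abstract: 17591 x 15014 pixels
levels = pyramid_levels(17591, 15014)
total_tiles = sum(t for _, _, t in levels)
# The base level alone holds ceil(17591/256) * ceil(15014/256) = 69 * 59 tiles
assert levels[0][2] == 69 * 59
```

Because each level has roughly a quarter of the tiles of the one below it, the total tile count (and hence the I/O load driving the RAID benchmarks) is only about 4/3 of the base-level count.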
Bastidas, Camila Y; von Plessing, Carlos; Troncoso, José; Del P Castillo, Rosario
2018-04-15
Fourier transform infrared imaging and multivariate analysis were used to identify, at the microscopic level, the presence of florfenicol (FF), a heavily used antibiotic in the salmon industry that is supplied to fish in feed pellets for the treatment of salmonid rickettsial septicemia (SRS). The FF distribution was evaluated using Principal Component Analysis (PCA) and Augmented Multivariate Curve Resolution with Alternating Least Squares (augmented MCR-ALS) on spectra obtained from images with pixel sizes of 6.25 μm × 6.25 μm and 1.56 μm × 1.56 μm, in different zones of the feed pellets. Since the concentration of the drug was only 3.44 mg FF/g pellet, this is the first report demonstrating the power of spectroscopic techniques combined with multivariate analysis, especially augmented MCR-ALS, to describe the FF distribution in both the surface and inner parts of feed pellets at low concentration, in a complex matrix and at the microscopic level. The results make it possible to monitor the incorporation of the drug into the feed pellets. Copyright © 2018 Elsevier B.V. All rights reserved.
Anti-aliasing techniques in photon-counting depth imaging using GHz clock rates
NASA Astrophysics Data System (ADS)
Krichel, Nils J.; McCarthy, Aongus; Collins, Robert J.; Buller, Gerald S.
2010-04-01
Single-photon detection technologies in conjunction with low laser illumination powers allow for the eye-safe acquisition of time-of-flight range information on non-cooperative target surfaces. We previously presented a photon-counting depth imaging system designed for the rapid acquisition of three-dimensional target models by steering a single scanning pixel across the field angle of interest. To minimise the per-pixel dwelling times required to obtain sufficient photon statistics for accurate distance resolution, periodic illumination at multi-MHz repetition rates was applied. Modern time-correlated single-photon counting (TCSPC) hardware allowed for depth measurements with sub-mm precision. Resolving the absolute target range with a fast periodic signal is only possible at sufficiently short distances: if the round-trip time towards an object is extended beyond the timespan between two trigger pulses, the return signal cannot be assigned to an unambiguous range value. Whereas constructing a precise depth image based on relative results may still be possible, problems emerge for large or unknown pixel-by-pixel separations or in applications with a wide range of possible scene distances. We introduce a technique to avoid range ambiguity effects in time-of-flight depth imaging systems at high average pulse rates. A long pseudo-random bitstream is used to trigger the illuminating laser. A cyclic, fast-Fourier supported analysis algorithm is used to search for the pattern within return photon events. We demonstrate this approach at base clock rates of up to 2 GHz with varying pattern lengths, allowing for unambiguous distances of several kilometers. Scans at long stand-off distances and of scenes with large pixel-to-pixel range differences are presented. Numerical simulations are performed to investigate the relative merits of the technique.
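The FFT-supported pattern search can be illustrated with a small NumPy simulation. This is a sketch under simplifying assumptions (an ideal histogram of returns with Gaussian noise, a short 4096-bin pattern rather than a GHz-clocked bitstream): the circular cross-correlation between the transmitted pseudo-random pattern and the return signal peaks at the true delay, which is what removes the range ambiguity of a purely periodic trigger.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
pattern = rng.integers(0, 2, n).astype(float)   # pseudo-random trigger bitstream

true_delay = 1234                                # round-trip delay in clock bins
returns = np.roll(pattern, true_delay)           # ideal delayed target return
returns += rng.normal(0.0, 0.5, n)               # detector/background noise

# Circular cross-correlation via FFT; its peak recovers the absolute delay
corr = np.fft.ifft(np.fft.fft(returns) * np.conj(np.fft.fft(pattern))).real
assert int(np.argmax(corr)) == true_delay
```

Because the pattern repeats only after the full bitstream length, the unambiguous range grows with the pattern length rather than being fixed by the pulse repetition period.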
Preparation of Murine Submandibular Salivary Gland for Upright Intravital Microscopy.
Ficht, Xenia; Thelen, Flavian; Stolp, Bettina; Stein, Jens V
2018-05-07
The submandibular salivary gland (SMG) is one of the three major salivary glands, and is of interest for many different fields of biological research, including cell biology, oncology, dentistry, and immunology. The SMG is an exocrine gland comprised of secretory epithelial cells, myofibroblasts, endothelial cells, nerves, and extracellular matrix. Dynamic cellular processes in the rat and mouse SMG have previously been imaged, mostly using inverted multi-photon microscope systems. Here, we describe a straightforward protocol for the surgical preparation and stabilization of the murine SMG in anesthetized mice for in vivo imaging with upright multi-photon microscope systems. We present representative intravital image sets of endogenous and adoptively transferred fluorescent cells, including the labeling of blood vessels or salivary ducts and second harmonic generation to visualize fibrillar collagen. In sum, our protocol allows for surgical preparation of mouse salivary glands in upright microscopy systems, which are commonly used for intravital imaging in the field of immunology.
COMPACT NON-CONTACT TOTAL EMISSION DETECTION FOR IN-VIVO MULTI-PHOTON EXCITATION MICROSCOPY
Glancy, Brian; Karamzadeh, Nader S.; Gandjbakhche, Amir H.; Redford, Glen; Kilborn, Karl; Knutson, Jay R.; Balaban, Robert S.
2014-01-01
We describe a compact, non-contact design for a Total Emission Detection (c-TED) system for intravital multi-photon imaging. To conform to a standard upright two-photon microscope design, this system uses a parabolic mirror surrounding a standard microscope objective, in concert with an optical path that does not interfere with normal microscope operation. The non-contact design of this device allows for maximal light collection without disrupting the physiology of the specimen being examined. Tests were conducted on exposed tissues in live animals to examine the emission collection enhancement of the c-TED device compared to heavily optimized objective-based emission collection. The best light collection enhancement was seen from murine fat (5×-2× gains as a function of depth), while murine skeletal muscle and rat kidney showed gains of over two-fold and just under two-fold near the surface, respectively. Gains decreased with imaging depth (particularly in the kidney). Zebrafish imaging on a reflective substrate showed close to a two-fold gain throughout the entire volume of an intact embryo (approximately 150 μm deep). Direct measurement of bleaching rates confirmed that the lower laser powers enabled by greater light collection efficiency yielded reduced photobleaching in vivo. The potential benefits of increased light collection in terms of imaging speed and reduced photo-damage, as well as the applicability of this device to other multi-photon imaging methods, are discussed. PMID:24251437
True Color Image Analysis For Determination Of Bone Growth In Fluorochromic Biopsies
NASA Astrophysics Data System (ADS)
Madachy, Raymond J.; Chotivichit, Lee; Huang, H. K.; Johnson, Eric E.
1989-05-01
A true color imaging technique has been developed for analysis of microscopic fluorochromic bone biopsy images to quantify new bone growth. The technique searches for specified colors in a medical image to quantify areas of interest. Based on a user-supplied training set, a multispectral classification of pixel values is performed and used to segment the image. Good results were obtained when compared to manual tracings of new bone growth performed by an orthopedic surgeon: at a 95% confidence level, the hypothesis that there is no difference between the two methods could not be rejected. Work is in progress to test bone biopsies with different colored stains and to further optimize the analysis process using three-dimensional spectral ordering techniques.
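One simple reading of the training-set-driven "multispectral classification of pixel values" above is a nearest-centroid rule in RGB space, sketched below. The class names and colour values are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

# Hypothetical training pixels for each class (RGB triples supplied by the user)
train = {
    "new_bone": np.array([[200, 180, 60], [210, 170, 70]], float),   # assumed fluorochrome colour
    "background": np.array([[90, 90, 100], [80, 95, 110]], float),
}
# One centroid per class, learned from the training set
centroids = {k: v.mean(axis=0) for k, v in train.items()}

def classify(pixel):
    """Assign a pixel to the class with the nearest RGB centroid."""
    return min(centroids, key=lambda k: np.linalg.norm(pixel - centroids[k]))

label = classify(np.array([205, 175, 65], float))
```

Segmenting the image then amounts to applying `classify` per pixel and summing the area of pixels labelled as new bone.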
A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising.
Khan, Khan Bahadar; Khaliq, Amir A; Jalil, Abdul; Shahid, Muhammad
2018-01-01
The exploration of retinal vessel structure is enormously important on account of numerous diseases, including stroke, Diabetic Retinopathy (DR) and coronary heart disease, which can damage the retinal vessel structure. The retinal vascular network is very hard to extract due to its spreading and diminishing geometry and contrast variation within an image. The proposed technique consists of parallel processes for denoising and extraction of blood vessels in retinal images. In the preprocessing section, adaptive histogram equalization enhances the dissimilarity between the vessels and the background, and morphological top-hat filters are employed to eliminate the macula, optic disc, etc. To remove local noise, the difference image is computed from the top-hat filtered image and the high-boost filtered image. The Frangi filter is applied at multiple scales to enhance vessels of diverse widths. Segmentation is performed by applying improved Otsu thresholding to the high-boost filtered image and to Frangi's enhanced image separately. In the postprocessing steps, a Vessel Location Map (VLM) is extracted by using raster-to-vector transformation, and postprocessing is employed in a novel way to reject misclassified vessel pixels. The final segmented image is obtained by a pixel-by-pixel AND operation between the VLM and the Frangi output image. The method has been rigorously evaluated on the STARE, DRIVE and HRF datasets.
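The final fusion step described above, a pixel-by-pixel AND between the Vessel Location Map and the Frangi-based segmentation, is trivially expressed with boolean arrays. The two masks below are synthetic stand-ins, not outputs of the actual pipeline.

```python
import numpy as np

# Synthetic binary masks standing in for the VLM and the Frangi segmentation
vlm = np.zeros((8, 8), bool)
vlm[2:6, 2:6] = True            # pixels the Vessel Location Map flags as vessel
frangi_seg = np.zeros((8, 8), bool)
frangi_seg[4:8, 0:8] = True     # pixels the Frangi-thresholded image flags as vessel

# Keep only pixels flagged by BOTH maps; disagreements are rejected as misclassified
vessels = vlm & frangi_seg
```

Requiring agreement between two independently derived maps is what suppresses the isolated misclassified pixels mentioned in the abstract.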
Hierarchical rendering of trees from precomputed multi-layer z-buffers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Max, N.
1996-02-01
Chen and Williams show how precomputed z-buffer images from different fixed viewing positions can be reprojected to produce an image for a new viewpoint. Here images are precomputed for twigs and branches at various levels in the hierarchical structure of a tree, and adaptively combined, depending on the position of the new viewpoint. The precomputed images contain multiple z levels to avoid missing pixels in the reconstruction, subpixel masks for anti-aliasing, and colors and normals for shading after reprojection.
Tone mapping infrared images using conditional filtering-based multi-scale retinex
NASA Astrophysics Data System (ADS)
Luo, Haibo; Xu, Lingyun; Hui, Bin; Chang, Zheng
2015-10-01
Tone mapping compresses the dynamic range of image data so that it fits within the range of the reproduction media and human vision. The original infrared images captured with infrared focal plane arrays (IFPA) are high-dynamic-range images, so tone mapping is an important component of infrared imaging systems and has become an active topic in recent years. In this paper, we present a tone mapping framework using multi-scale retinex. Firstly, a Conditional Gaussian Filter (CGF) was designed to suppress the "halo" effect. Secondly, the original infrared image is decomposed into a set of images that represent the mean of the image at different spatial resolutions by applying the CGF at different scales; a set of images representing the multi-scale details of the original image is then produced by dividing the original image pointwise by each decomposed image. Thirdly, the final detail image is reconstructed as a weighted sum of the multi-scale detail images. Finally, histogram scaling and clipping are adopted to remove outliers and scale the detail image; 0.1% of the pixels are clipped at both extremities of the histogram. Experimental results show that the proposed algorithm efficiently increases local contrast while preventing the "halo" effect and provides good visual rendition.
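The decompose/divide/sum/clip pipeline above can be sketched as follows. This is a minimal illustration that substitutes a plain Gaussian filter for the paper's Conditional Gaussian Filter; the scales, equal weights and clipping percentiles (0.1% per tail, as stated) are otherwise assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_tone_map(img, scales=(2, 8, 32), weights=(1/3, 1/3, 1/3)):
    """Multi-scale retinex sketch: detail = image / blurred image, summed over scales."""
    img = img.astype(float) + 1.0                # offset avoids division by zero
    detail = np.zeros_like(img)
    for s, w in zip(scales, weights):
        blurred = gaussian_filter(img, sigma=s)  # mean image at this spatial scale
        detail += w * (img / blurred)            # pointwise division yields the detail layer
    # Histogram clipping: discard 0.1% of pixels at each extremity, then rescale
    lo, hi = np.percentile(detail, [0.1, 99.9])
    detail = np.clip(detail, lo, hi)
    return (detail - lo) / (hi - lo + 1e-12)     # output scaled to [0, 1]

out = retinex_tone_map(np.random.rand(64, 64) * 4000.0)  # mock high-dynamic-range IR frame
```

Dividing by the blurred image rather than subtracting it is what makes the detail layers insensitive to the absolute radiometric level of the IR scene.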
Ishida, Haruki; Kagawa, Keiichiro; Komuro, Takashi; Zhang, Bo; Seo, Min-Woong; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji
2018-01-01
A probabilistic method to remove random telegraph signal (RTS) noise and to increase the signal level is proposed, and was verified by simulation based on measured real sensor noise. Although semi-photon-counting-level (SPCL) ultra-low-noise complementary metal-oxide-semiconductor (CMOS) image sensors (CISs) with high-conversion-gain pixels have emerged, they still suffer from large RTS noise, which is inherent to CISs. The proposed method utilizes a multi-aperture (MA) camera composed of multiple sets of an SPCL CIS and a moderately fast, compact imaging lens to emulate a very fast single lens. Due to the redundancy of the MA camera, the RTS noise is removed by maximum likelihood estimation, where the noise characteristics are modeled by a probability density distribution. In the proposed method, the photon shot noise is also relatively reduced because of the averaging effect, since the pixel values of all the apertures are considered. An extremely low-light condition, in which the maximum number of electrons per aperture was only 2 e−, was simulated. PSNRs of a test image for simple averaging, selective averaging (our previous method), and the proposed method were 11.92 dB, 11.61 dB, and 13.14 dB, respectively. Selective averaging, which can remove RTS noise, performed worse than simple averaging because it ignores the pixels with RTS noise, so photon shot noise was less improved. The simulation results showed that the proposed method provided the best noise reduction performance. PMID:29587424
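The idea of exploiting multi-aperture redundancy through a likelihood model can be illustrated with a toy per-pixel estimate: each aperture observes the same signal with Gaussian read noise, and some apertures carry an RTS offset modelled as a two-component mixture. All noise levels, the RTS amplitude and its probability are illustrative assumptions, not the paper's measured sensor statistics.

```python
import numpy as np

rng = np.random.default_rng(1)
n_apertures, signal = 16, 2.0                 # ~2 e- per aperture, as in the low-light test
read_noise, rts_amp, p_rts = 0.3, 3.0, 0.25   # assumed noise model

obs = signal + rng.normal(0.0, read_noise, n_apertures)
rts_mask = rng.random(n_apertures) < p_rts
obs[rts_mask] += rts_amp                      # apertures contaminated by an RTS offset

def neg_log_likelihood(s):
    # Mixture density: with probability p_rts the aperture carries the RTS offset
    clean = np.exp(-0.5 * ((obs - s) / read_noise) ** 2)
    rts = np.exp(-0.5 * ((obs - s - rts_amp) / read_noise) ** 2)
    return -np.sum(np.log((1 - p_rts) * clean + p_rts * rts + 1e-300))

# Maximum likelihood estimate by grid search over candidate signal levels
grid = np.linspace(0.0, 5.0, 501)
ml_estimate = grid[np.argmin([neg_log_likelihood(s) for s in grid])]
```

Unlike simple averaging, which is biased upward by the contaminated apertures, the mixture likelihood uses every aperture, so shot noise is still averaged down while the RTS offset is absorbed by the second mixture component.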
Surface profile measurement by using the integrated Linnik WLSI and confocal microscope system
NASA Astrophysics Data System (ADS)
Wang, Wei-Chung; Shen, Ming-Hsing; Hwang, Chi-Hung; Yu, Yun-Ting; Wang, Tzu-Fong
2017-06-01
The white-light scanning interferometer (WLSI) and the confocal microscope (CM) are the two major optical inspection systems for measuring the three-dimensional (3D) surface profile (SP) of micro specimens. In practical applications, WLSI is more suitable for measuring smooth, low-slope surfaces, while CM is more suitable for measuring unevenly reflective and low-reflectivity surfaces. The two instruments also differ in the kinds of surface profiles they are typically used to measure: WLSI is generally used in the semiconductor industry, while CM is more popular in the printed circuit board industry. In this paper, a self-assembled multi-function optical system was integrated to perform Linnik white-light scanning interferometry (Linnik WLSI) and CM. A connecting part composed of tubes, lenses and an interferometer was used to join the finite and infinite conjugate optical paths for Linnik WLSI and CM in the self-assembled optical system. By exploiting the flexibility of the tubes and lenses, switching between the two optical measurements can be achieved easily. Furthermore, based on the shape-from-focus method with an energy-of-Laplacian filter, the CM was developed to enhance the in-focus information of each pixel so that it can provide an all-in-focus image for performing 3D SP measurement and analysis simultaneously. For Linnik WLSI, an eleven-step phase-shifting algorithm was used to analyze the vertical scanning signals and determine the 3D SP.
NASA Astrophysics Data System (ADS)
Vercauteren, Tom; Doussoux, François; Cazaux, Matthieu; Schmid, Guillaume; Linard, Nicolas; Durin, Marie-Amélie; Gharbi, Hédi; Lacombe, François
2013-03-01
Since its inception in the field of in vivo imaging, endomicroscopy through optical fiber bundles, or probe-based Confocal Laser Endomicroscopy (pCLE), has extensively proven the benefit of in situ and real-time examination of living tissues at the microscopic scale. By continuously increasing image quality, reducing invasiveness and improving system ergonomics, Mauna Kea Technologies has turned pCLE not only into an irreplaceable research instrument for small animal imaging, but also into an accurate clinical decision making tool with applications as diverse as gastrointestinal endoscopy, pulmonology and urology. The current implementation of pCLE relies on a single fluorescence spectral band making different sources of in vivo information challenging to distinguish. Extending the pCLE approach to multi-color endomicroscopy therefore appears as a natural plan. Coupling simultaneous multi-laser excitation with minimally invasive, microscopic resolution, thin and flexible optics, allows the fusion of complementary and valuable biological information, thus paving the way to a combination of morphological and functional imaging. This paper will detail the architecture of a new system, Cellvizio Dual Band, capable of video rate in vivo and in situ multi-spectral fluorescence imaging with a microscopic resolution. In its standard configuration, the system simultaneously operates at 488 and 660 nm, where it automatically performs the necessary spectral, photometric and geometric calibrations to provide unambiguously co-registered images in real-time. The main hardware and software features, including calibration procedures and sub-micron registration algorithms, will be presented as well as a panorama of its current applications, illustrated with recent results in the field of pre-clinical imaging.
Forest Biomass Mapping from Stereo Imagery and Radar Data
NASA Astrophysics Data System (ADS)
Sun, G.; Ni, W.; Zhang, Z.
2013-12-01
Both InSAR and lidar data provide critical information on forest vertical structure, which is essential for regional mapping of biomass. However, the regional application of these data is limited by availability and acquisition costs. Some researchers have demonstrated the potential of stereo imagery for the estimation of forest height. Most of these studies were conducted on aerial images or spaceborne images with very high resolutions (~0.5 m). Spaceborne stereo imagers with global coverage, such as ALOS/PRISM, have coarser spatial resolutions (2-3 m) to achieve a wider swath. The features of stereo images are directly affected by resolution, and the approaches used by most researchers need to be adjusted for stereo imagery with lower resolutions. This study concentrated on analyzing the features of point clouds synthesized from multi-view stereo imagery over forested areas. Small-footprint lidar and lidar waveform data were used as references. The triplets of ALOS/PRISM data form three pairs (forward/nadir, backward/nadir and forward/backward) of stereo images. Each pair of stereo images can be used to generate points (pixels) with 3D coordinates. By carefully co-registering the points from the three pairs of stereo images, a point cloud was generated. The height of each point above the ground surface was then calculated using the DEM from the USGS National Elevation Dataset as the ground surface elevation. The height data were gridded into pixels of different sizes and the histograms of the points within a pixel were analyzed. The average height of the points within a pixel was used as the height of the pixel to generate a canopy height map. The results showed that the synergy of point clouds from different views was necessary, as it increased the point density so that the point cloud could detect the vertical structure of sparse and unclosed forests.
The top layer of multi-layered forest could be captured, but dense forest prevented the stereo imagery from seeing through. The canopy height map exhibited spatial patterns of roads, forest edges and patches. Linear regression showed that the canopy height map had a good correlation with RH50 of the LVIS data at 30 m pixel size, with a gain of 1.04, a bias of 4.3 m and an R2 of 0.74 (Fig. 1). The canopy height map from PRISM and dual-pol PALSAR data were used together to map biomass in our study area near Howland, Maine, and the results were evaluated against a biomass map generated independently from LVIS waveform data. The results showed that adding the CHM from PRISM significantly improved biomass accuracy and raised the biomass saturation level of L-band SAR data in forest biomass mapping.
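The gridding step described above, averaging point heights above ground within each map pixel, can be sketched as follows. The 30 m pixel size follows the abstract; the coordinates and heights are synthetic.

```python
import numpy as np

def grid_canopy_height(x, y, h, pixel=30.0):
    """Average point heights-above-ground within each pixel to form a canopy height map."""
    ix = (x // pixel).astype(int)                # column index of each point
    iy = (y // pixel).astype(int)                # row index of each point
    nx, ny = ix.max() + 1, iy.max() + 1
    total = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    np.add.at(total, (iy, ix), h)                # accumulate heights per pixel
    np.add.at(count, (iy, ix), 1)                # count points per pixel
    with np.errstate(invalid="ignore"):
        return total / count                     # mean height; NaN where no points fell

rng = np.random.default_rng(2)
x = rng.uniform(0, 300, 1000)                    # synthetic 300 m x 300 m tile
y = rng.uniform(0, 300, 1000)
h = rng.uniform(0, 25, 1000)                     # synthetic heights above ground, metres
chm = grid_canopy_height(x, y, h)
```

Per-pixel histograms of the same binned points (rather than just the mean) are what the study used to examine vertical structure within each cell.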
Probabilistic multi-resolution human classification
NASA Astrophysics Data System (ADS)
Tu, Jun; Ran, H.
2006-02-01
Recently there has been interest in using infrared cameras for human detection because of their sharply decreasing prices. The training data used in our work for developing the probabilistic template consists of images known to contain humans in different poses and orientations but having the same height. Multi-resolution templates, based on contours and edges, are constructed so that the model does not learn the intensity variations among the background pixels or among the foreground pixels. Each template at every level is then translated so that the centroid of the non-zero pixels matches the geometrical center of the image. After this normalization step, for each pixel of the template, the probability of it belonging to a pedestrian is calculated based on how frequently it appears as 1 in the training data. We also use gait periodicity to verify the pedestrian, applying a Bayesian treatment to the whole blob. The test videos contained considerable variation in scenes, sizes of people, amount of occlusion and background clutter. Preliminary experiments show the robustness of the approach.
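The template-building procedure above, centroid normalization followed by per-pixel frequency counting, can be sketched directly. The training images here are random binary stand-ins for real contour/edge maps; `np.roll` is used for the centring shift, which wraps at the borders (an acceptable simplification for a sketch).

```python
import numpy as np

def centre_on_centroid(img):
    """Translate a binary image so the centroid of its non-zero pixels sits at the centre."""
    ys, xs = np.nonzero(img)
    dy = img.shape[0] // 2 - int(round(ys.mean()))
    dx = img.shape[1] // 2 - int(round(xs.mean()))
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def probabilistic_template(binary_images):
    """Per-pixel probability = frequency with which the pixel is 1 across the training set."""
    centred = np.stack([centre_on_centroid(im) for im in binary_images])
    return centred.mean(axis=0)

# 20 synthetic 40x20 binary edge images standing in for real training silhouettes
train = (np.random.rand(20, 40, 20) > 0.7).astype(np.uint8)
template = probabilistic_template(train)
```

In the multi-resolution scheme, one such template is built per pyramid level; candidate detections are then scored against the template probabilities rather than against a single hard silhouette.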
A multi-directional backlight for a wide-angle, glasses-free three-dimensional display.
Fattal, David; Peng, Zhen; Tran, Tho; Vo, Sonny; Fiorentino, Marco; Brug, Jim; Beausoleil, Raymond G
2013-03-21
Multiview three-dimensional (3D) displays can project the correct perspectives of a 3D image in many spatial directions simultaneously. They provide a 3D stereoscopic experience to many viewers at the same time with full motion parallax and do not require special glasses or eye tracking. None of the leading multiview 3D solutions is particularly well suited to mobile devices (watches, mobile phones or tablets), which require the combination of a thin, portable form factor, a high spatial resolution and a wide full-parallax view zone (for short viewing distance from potentially steep angles). Here we introduce a multi-directional diffractive backlight technology that permits the rendering of high-resolution, full-parallax 3D images in a very wide view zone (up to 180 degrees in principle) at an observation distance of up to a metre. The key to our design is a guided-wave illumination technique based on light-emitting diodes that produces wide-angle multiview images in colour from a thin planar transparent lightguide. Pixels associated with different views or colours are spatially multiplexed and can be independently addressed and modulated at video rate using an external shutter plane. To illustrate the capabilities of this technology, we use simple ink masks or a high-resolution commercial liquid-crystal display unit to demonstrate passive and active (30 frames per second) modulation of a 64-view backlight, producing 3D images with a spatial resolution of 88 pixels per inch and full-motion parallax in an unprecedented view zone of 90 degrees. We also present several transparent hand-held prototypes showing animated sequences of up to six different 200-view images at a resolution of 127 pixels per inch.
Toward one Giga frames per second--evolution of in situ storage image sensors.
Etoh, Takeharu G; Son, Dao V T; Yamada, Tetsuo; Charbon, Edoardo
2013-04-08
The ISIS is an ultra-fast image sensor with in-pixel storage. The evolution of the ISIS, past and near-future, is reviewed and forecast. Because the storage area must be covered with a light shield, the conventional frontside-illuminated ISIS has a limited fill factor. To achieve higher sensitivity, a BSI ISIS was developed. To avoid direct intrusion of light and migration of signal electrons to the storage area on the frontside, a cross-sectional sensor structure with thick pnpn layers was developed, named the "Tetratified structure". By folding and looping the in-pixel storage CCDs, an image signal accumulation sensor, the ISAS, is proposed. The ISAS has a new function, in-pixel signal accumulation, in addition to ultra-high-speed imaging. To achieve a much higher frame rate, a multi-collection-gate (MCG) BSI image sensor architecture is proposed, in which the photoreceptive area forms a honeycomb-like shape. The performance of a hexagonal CCD-type MCG BSI sensor is examined by simulations. The highest frame rate is theoretically more than 1 Gfps. For the near future, a stacked hybrid CCD/CMOS MCG image sensor seems most promising; the associated problems are discussed. A fine TSV process is the key technology to realize this structure.
Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid
2016-06-13
Proposed and experimentally demonstrated is the CAOS-CMOS camera design, which combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light-staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high-dynamic-range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable-gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
An acquisition system for CMOS imagers with a genuine 10 Gbit/s bandwidth
NASA Astrophysics Data System (ADS)
Guérin, C.; Mahroug, J.; Tromeur, W.; Houles, J.; Calabria, P.; Barbier, R.
2012-12-01
This paper presents a high-data-throughput acquisition system for pixel detector readout such as CMOS imagers. This CMOS acquisition board offers a genuine 10 Gbit/s bandwidth to the workstation and can provide on-line, continuous high-frame-rate imaging capability. On-line processing can be implemented either on the Data Acquisition Board or on the multi-core workstation, depending on the complexity of the algorithms. The different parts composing the acquisition board were designed to be used first with a single-photon detector called LUSIPHER (800×800 pixels), developed in our laboratory for scientific applications ranging from nano-photonics to adaptive optics. The architecture of the acquisition board is presented and the performance achieved by the produced boards is described. Future developments (hardware and software) concerning the on-line implementation of algorithms dedicated to single-photon imaging are also discussed.
Compressed single pixel imaging in the spatial frequency domain
Torabzadeh, Mohammad; Park, Il-Yong; Bartels, Randy A.; Durkin, Anthony J.; Tromberg, Bruce J.
2017-01-01
Abstract. We have developed compressed sensing single pixel spatial frequency domain imaging (cs-SFDI) to characterize tissue optical properties over a wide field of view (35 mm×35 mm) using multiple near-infrared (NIR) wavelengths simultaneously. Our approach takes advantage of the relatively sparse spatial content required for mapping tissue optical properties at length scales comparable to the transport scattering length in tissue (ltr∼1 mm) and the high bandwidth available for spectral encoding using a single-element detector. cs-SFDI recovered absorption (μa) and reduced scattering (μs′) coefficients of a tissue phantom at three NIR wavelengths (660, 850, and 940 nm) within 7.6% and 4.3% of absolute values determined using camera-based SFDI, respectively. These results suggest that cs-SFDI can be developed as a multi- and hyperspectral imaging modality for quantitative, dynamic imaging of tissue optical and physiological properties. PMID:28300272
ASTER's First Views of Red Sea, Ethiopia - Thermal-Infrared (TIR) Image (monochrome)
NASA Technical Reports Server (NTRS)
2000-01-01
ASTER succeeded in acquiring this image at night, which is something Visible/Near-Infrared (VNIR) and Shortwave Infrared (SWIR) sensors cannot do. The scene covers the Red Sea coastline to an inland area of Ethiopia. White pixels represent areas with higher-temperature material on the surface, while dark pixels indicate lower temperatures. This image shows ASTER's ability as a highly sensitive, temperature-discerning instrument and the first spaceborne TIR multi-band sensor in history.
The image covers approximately 60 km x 60 km at a ground resolution of approximately 90 m x 90 m. The ASTER instrument was built in Japan for the Ministry of International Trade and Industry. A joint United States/Japan Science Team is responsible for instrument design, calibration, and data validation. ASTER is flying on the Terra satellite, which is managed by NASA's Goddard Space Flight Center, Greenbelt, MD.
Goyal, Anish; Myers, Travis; Wang, Christine A; Kelly, Michael; Tyrrell, Brian; Gokden, B; Sanchez, Antonio; Turner, George; Capasso, Federico
2014-06-16
We demonstrate active hyperspectral imaging using a quantum-cascade laser (QCL) array as the illumination source and a digital-pixel focal-plane-array (DFPA) camera as the receiver. The multi-wavelength QCL array used in this work comprises 15 individually addressable QCLs in which the beams from all lasers are spatially overlapped using wavelength beam combining (WBC). The DFPA camera was configured to integrate the laser light reflected from the sample and to perform on-chip subtraction of the passive thermal background. A 27-frame hyperspectral image was acquired of a liquid contaminant on a diffuse gold surface at a range of 5 meters. The measured spectral reflectance closely matches the calculated reflectance. Furthermore, the high-speed capabilities of the system were demonstrated by capturing differential reflectance images of sand and KClO3 particles that were moving at speeds of up to 10 m/s.
Malkusch, Wolf
2005-01-01
The enzyme-linked immunospot (ELISPOT) assay was originally developed for the detection of individual antibody-secreting B-cells. Since then, the method has been improved, and ELISPOT is used to determine the production of tumor necrosis factor (TNF)-alpha, interferon (IFN)-gamma, or various interleukins (IL-4, IL-5). ELISPOT measurements are performed in 96-well plates with nitrocellulose membranes, either visually or by means of image analysis. Image analysis offers various procedures to overcome variable background intensity problems and separate true from false spots. ELISPOT readers offer a complete solution for precise and automatic evaluation of ELISPOT assays. The number, size, and intensity of each single spot can be determined, printed, or saved for further statistical evaluation. Cytokine spots are always round, but because of floating edges with the background they have a non-smooth borderline. Resolution is a key feature for precise ELISPOT detection. In standard applications, shape and edge steepness are essential parameters, in addition to size and color, for accurate spot recognition. These parameters require a minimum spot diameter of 6 pixels. Collecting one single image per well with a standard color camera of 750 x 560 pixels results in a resolution much too low to capture all of the spots in a specimen. IFN-gamma spots may be only 25 microns in diameter, and TNF-alpha spots just 15 microns. A 750 x 560 pixel image of a 6-mm well has a pixel size of 12 microns, resulting in only 1 or 2 pixels per spot. Using precise microscope optics in combination with a high-resolution (1300 x 1030 pixel) integrating digital color camera, and at least 2 x 2 images per well, results in a pixel size of 2.5 microns and, as a minimum, a 6-pixel diameter per spot. New approaches try to detect two cytokines per cell at the same time (i.e., IFN-gamma and IL-5).
Standard staining procedures produce brownish spots (horseradish peroxidase) and blue spots (alkaline phosphatase). Problems may occur with color overlaps from cells producing both cytokines, resulting in violet spots. The latest experiments therefore try to use fluorescent labels as markers. Fluorescein isothiocyanate results in green spots and Rhodamine in red spots; cells producing both cytokines appear yellow. These colors can be separated much more easily than violet, red, and blue, especially at high resolution.
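The resolution arithmetic in the abstract can be checked in a few lines. The assumption below is that the 6-mm well spans roughly 500 sensor pixels in the low-resolution case, and that a 2x2 tiling halves the field each high-resolution image must cover (about 1200 pixels across a 3-mm half-well), which reproduces the quoted 12-micron and 2.5-micron pixel sizes.

```python
well_um = 6000.0                       # 6-mm well diameter in microns

# Single low-resolution image: well spans ~500 of the 750 x 560 pixels (assumed)
low_res_pixel = well_um / 500          # -> 12 um per pixel, as stated
spot_px_low = 25 / low_res_pixel       # 25-um IFN-gamma spot: only ~2 pixels

# 2x2 tiling with a 1300 x 1030 camera: each image covers a 3-mm half-well
high_res_pixel = (well_um / 2) / 1200  # -> 2.5 um per pixel, as stated
spot_px_high = 15 / high_res_pixel     # 15-um TNF-alpha spot: ~6 pixels
```

At 2 pixels per spot the 6-pixel minimum diameter for reliable recognition cannot be met, while the tiled high-resolution setup just reaches it even for the smallest TNF-alpha spots.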
NASA Astrophysics Data System (ADS)
Thilker, David A.; Vinsen, K.; Galaxy Properties Key Project, PS1
2014-01-01
To measure resolved galactic physical properties unbiased by the mask of recent star formation and dust features, we are conducting a citizen-scientist-enabled nearby galaxy survey based on the unprecedented optical (g,r,i,z,y) imaging from Pan-STARRS1 (PS1). The PS1 Optical Galaxy Survey (POGS) covers 3π steradians (75% of the sky), about twice the footprint of SDSS. Whenever possible we also incorporate ancillary multi-wavelength image data from the ultraviolet (GALEX) and infrared (WISE, Spitzer) spectral regimes. For each cataloged nearby galaxy with a reliable redshift estimate of z < 0.05 - 0.1 (dependent on donated CPU power), publicly distributed computing is being harnessed to enable pixel-by-pixel spectral energy distribution (SED) fitting, which in turn provides maps of key physical parameters such as the local stellar mass surface density, crude star formation history, and dust attenuation. With the pixel SED fitting output we will then constrain parametric models of galaxy structure in a more meaningful way than ordinarily achieved. In particular, we will fit multi-component (e.g. bulge, bar, disk) galaxy models directly to the distribution of stellar mass rather than to surface brightness in a single band, which is often locally biased. We will also compute non-parametric measures of morphology, such as concentration and asymmetry, using the POGS stellar mass and SFR surface density images. We anticipate studying how galactic substructures evolve by comparing our results with simulations and against more distant imaging surveys, some of which will also be processed in the POGS pipeline. The reliance of our survey on citizen-scientist volunteers provides a world-wide opportunity for education. We developed an interactive interface which highlights the science being produced by each volunteer's own CPU cycles.
The POGS project has already proven popular amongst the public, attracting about 5000 volunteers with nearly 12,000 participating computers, and is growing rapidly.
An algorithm for pavement crack detection based on multiscale space
NASA Astrophysics Data System (ADS)
Liu, Xiang-long; Li, Qing-quan
2006-10-01
Conventional human-visual and manual field pavement crack detection methods are costly, time-consuming, dangerous, labor-intensive and subjective. They suffer from various drawbacks, such as high variability in measurement results, inability to provide meaningful quantitative information, frequent inconsistencies in crack details over space and across evaluations, and long measurement cycles. With the development of public transportation and the growth of material flow systems, conventional methods can no longer meet current demands; automatic pavement-condition data gathering and analysis systems have therefore become the focus of attention in the field. Developments in computer technology, digital image acquisition, image processing and multi-sensor technology have made such systems possible, but the complexity of the image processing makes data processing and analysis the bottleneck of the whole system. Accordingly, a robust and highly efficient parallel pavement crack detection algorithm based on multi-scale space is proposed in this paper. The proposed method is based on two facts: (1) the crack pixels in pavement images are darker than their surroundings and continuous; (2) the threshold values of gray-level pavement images are strongly related to the mean and standard deviation of the pixel grey intensities. The multi-scale space method is used to improve the data processing speed and minimize the effects of image noise. Experimental results demonstrate that the advantages are remarkable: (1) the algorithm can correctly discover tiny cracks, even in very noisy pavement images; (2) its efficiency and accuracy are superior; (3) its application-dependent nature can simplify the design of the entire system.
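Fact (2) above implies a simple candidate rule: threshold the grey levels at the image mean minus some multiple of the standard deviation, keeping the dark pixels. The sketch below illustrates that rule on a synthetic pavement patch; the factor k is an assumption, not the paper's calibrated value, and the real algorithm additionally exploits continuity and multi-scale analysis.

```python
import numpy as np

def crack_candidates(gray, k=1.5):
    """Mark pixels darker than (mean - k * std) as crack candidates."""
    mu, sigma = gray.mean(), gray.std()
    return gray < (mu - k * sigma)

rng = np.random.default_rng(3)
pavement = rng.normal(150.0, 10.0, (64, 64))   # synthetic grey-level pavement texture
pavement[32, 10:54] = 90.0                     # synthetic dark, continuous crack line
mask = crack_candidates(pavement)
```

The continuity constraint from fact (1) would then prune the scattered false positives that a pure intensity threshold leaves behind.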
Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera
NASA Astrophysics Data System (ADS)
Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.
2007-09-01
We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter and measures modulation transfer functions (MTF) and noise power spectra (NPS); it is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 μm square pixels and a pixel matrix of 2268 x 1512 x 3. The camera uses a sensor that has co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of both 12:1 and 17.6:1. When used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values. This capability permits the camera to simultaneously determine chromaticity at different locations on the LCD display. After color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found with the CS-200; only the color coordinates of the display's white point were in error. For calculating the MTF, a vertical or horizontal line is displayed on the monitor; the captured image is color-matrix preprocessed, Fourier transformed, then post-processed. For the NPS, a uniform image is displayed on the monitor and the image is likewise pre-processed, transformed and processed. Our measurements show that the horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulation at the Nyquist frequency appears lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display.
Attempts were also made to separate the total noise into spatial and temporal components by subtracting pairs of images taken at exactly the same exposure. Temporal noise appears to be significantly lower than spatial noise.
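The line-based MTF measurement described above (display a line, transform the captured profile) can be illustrated in miniature. The Gaussian line-spread function below is a stand-in for a real captured profile, and the color-matrix pre-processing and post-processing steps are omitted.

```python
import numpy as np

def mtf_from_line(profile):
    """Estimate an MTF from a 1-D line-spread profile: take the magnitude
    of its Fourier transform and normalize to 1 at zero frequency."""
    lsf = profile - profile.min()          # crude background removal
    otf = np.abs(np.fft.rfft(lsf))
    return otf / otf[0]

# Synthetic Gaussian line-spread function: its MTF decreases with frequency.
x = np.arange(-32, 32)
lsf = np.exp(-x**2 / (2 * 2.0**2))
mtf = mtf_from_line(lsf)
print(round(mtf[0], 3), mtf[1] < mtf[0])  # → 1.0 True
```

A poorer (wider) line-spread function would produce an MTF with a steeper negative slope, which is the comparison the abstract draws between the horizontal and vertical directions.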
Buildings Change Detection Based on Shape Matching for Multi-Resolution Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Abdessetar, M.; Zhong, Y.
2017-09-01
Building change detection can quantify temporal effects on urban areas for urban evolution studies or damage assessment in disaster cases. In this context, change analysis may involve using available satellite images of different resolutions for quick response. In this paper, to avoid the resampling artefacts and salt-and-pepper effects of traditional methods, building change detection based on shape matching is proposed for multi-resolution remote sensing images. Since an object's shape can be extracted from remote sensing imagery and the shapes of corresponding objects in multi-scale images are similar, shape analysis is practical for detecting building changes in multi-scale imagery. The proposed methodology can therefore handle different pixel sizes when identifying new and demolished buildings in urban areas using the geometric properties of objects of interest. After rectifying the desired multi-date, multi-resolution images by image-to-image registration with an optimal RMS value, object-based image classification is performed to extract building shapes from the images. Next, centroid-coincident matching is conducted on the extracted building shapes, based on the Euclidean distance between shape centroids (from shape T0 to shape T1 and vice versa), in order to define corresponding building objects. New and demolished buildings are then identified where the obtained distances are greater than the RMS value (no match at the same location).
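The centroid-coincident matching step might be sketched as below. The centroids and the RMS threshold are made-up illustrations; the paper matches in both directions (T0→T1 for demolished, T1→T0 for new), of which one direction is shown.

```python
import numpy as np

def unmatched(c0, c1, rms):
    """Pair each T0 building centroid with its nearest T1 centroid by
    Euclidean distance; a T0 building with no T1 centroid within `rms`
    has no match at that location (demolished). Run with the roles of
    c0 and c1 swapped to find new buildings."""
    missing = []
    for i, p in enumerate(c0):
        d = np.linalg.norm(np.asarray(c1) - p, axis=1)
        if d.min() > rms:
            missing.append(i)
    return missing

c0 = [(0.0, 0.0), (10.0, 10.0)]   # building centroids at time T0
c1 = [(0.2, 0.1)]                  # only the first building survives at T1
print(unmatched(np.array(c0), c1, rms=1.0))  # → [1]
```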
Fabrication of silver tips for scanning tunneling microscope induced luminescence.
Zhang, C; Gao, B; Chen, L G; Meng, Q S; Yang, H; Zhang, R; Tao, X; Gao, H Y; Liao, Y; Dong, Z C
2011-08-01
We describe a reliable fabrication procedure of silver tips for scanning tunneling microscope (STM) induced luminescence experiments. The tip was first etched electrochemically to yield a sharp cone shape using selected electrolyte solutions and then sputter cleaned in ultrahigh vacuum to remove surface oxidation. The tip status, in particular the tip induced plasmon mode and its emission intensity, can be further tuned through field emission and voltage pulse. The quality of silver tips thus fabricated not only offers atomically resolved STM imaging, but more importantly, also allows us to perform challenging "color" photon mapping with emission spectra taken at each pixel simultaneously during the STM scan under relatively small tunnel currents and relatively short exposure time.
MREG V1.1 : a multi-scale image registration algorithm for SAR applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eichel, Paul H.
2013-08-01
MREG V1.1 is the sixth generation SAR image registration algorithm developed by the Signal Processing & Technology Department for Synthetic Aperture Radar applications. Like its predecessor algorithm REGI, it employs a powerful iterative multi-scale paradigm to achieve the competing goals of sub-pixel registration accuracy and the ability to handle large initial offsets. Since it is not model based, it allows for high fidelity tracking of spatially varying terrain-induced misregistration. Since it does not rely on image domain phase, it is equally adept at coherent and noncoherent image registration. This document provides a brief history of the registration processors developed by Dept. 5962 leading up to MREG V1.1, a full description of the signal processing steps involved in the algorithm, and a user's manual with application specific recommendations for CCD, TwoColor MultiView, and SAR stereoscopy.
VizieR Online Data Catalog: SDSS-DR8 galaxies classified by WND-CHARM (Kuminski+, 2016)
NASA Astrophysics Data System (ADS)
Kuminski, E.; Shamir, L.
2016-06-01
The image analysis method used to classify the images is WND-CHARM (wndchrm; Shamir et al. 2008, BMC Source Code for Biology and Medicine, 3: 13; 2010PLSCB...6E0974S; 2013ascl.soft12002S), which first computes 2885 numerical descriptors from each SDSS image, such as textures, edges, shapes, the statistical distribution of the pixel intensities, the polynomial decomposition of the image, and fractal features. These features are extracted from the raw pixels, as well as from image transforms and multi-order image transforms. See section 2 for further explanations. In a similar way to the image catalog, we also compiled a catalog of all objects with spectra in DR8. For each object, that catalog contains the spec ObjID, the R.A., the decl., the z, the z error, the certainty of classification as elliptical, the certainty of classification as spiral, and the certainty of classification as a star. See section 3.1 for further explanations. (2 data files).
Statistical analysis of low-voltage EDS spectrum images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, I.M.
1998-03-01
The benefits of using low (≤5 kV) operating voltages for energy-dispersive X-ray spectrometry (EDS) of bulk specimens have been explored only during the last few years. This paper couples low-voltage EDS with two other emerging areas of characterization: spectrum imaging and multivariate statistical analysis (MSA), applied here to a computer chip manufactured by a major semiconductor company. Data acquisition was performed with a Philips XL30-FEG SEM operated at 4 kV and equipped with an Oxford super-ATW detector and XP3 pulse processor. The specimen was normal to the electron beam and the take-off angle for acquisition was 35°. The microscope was operated with a 150 μm diameter final aperture at spot size 3, which yielded an X-ray count rate of ~2,000 s⁻¹. EDS spectrum images were acquired as Adobe Photoshop files with the 4pi plug-in module. (The spectrum images could also be stored as NIH Image files, but the raw data are automatically rescaled as maximum-contrast (0-255) 8-bit TIFF images -- even at 16-bit resolution -- which poses an inconvenience for quantitative analysis.) The 4pi plug-in module is designed for EDS X-ray mapping and allows simultaneous acquisition of maps from 48 elements plus an SEM image. The spectrum image was acquired by re-defining the energy intervals of the 48 elements to form a series of contiguous 20 eV windows from 1.25 kV to 2.19 kV. A spectrum image of 450 x 344 pixels was acquired from the specimen with a sampling density of 50 nm/pixel and a dwell time of 0.25 live seconds per pixel, for a total acquisition time of ~14 h. The binary data files were imported into Mathematica for analysis with software developed by the author at Oak Ridge National Laboratory. A 400 x 300 pixel section of the original image was analyzed. MSA required ~185 Mbytes of memory and ~18 h of CPU time on a 300 MHz Power Macintosh 9600.
Multi scales based sparse matrix spectral clustering image segmentation
NASA Astrophysics Data System (ADS)
Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin
2018-04-01
In image segmentation, spectral clustering algorithms must adopt an appropriate scaling parameter to calculate the similarity matrix between pixels, which can have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm increase greatly. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method and extract image features at different scales; we then use this feature information to construct a sparse similarity matrix, which improves operational efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm achieves better accuracy and robustness.
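The memory argument above can be made concrete: restricting the Gaussian similarity to each point's k nearest neighbours keeps the matrix sparse. This is a generic sketch of that construction, not the paper's specific multi-scale features; the parameters k and sigma are illustrative.

```python
import numpy as np
from scipy.sparse import lil_matrix

def sparse_similarity(X, k=3, sigma=1.0):
    """Gaussian similarity restricted to each point's k nearest
    neighbours, stored sparsely so memory stays O(n*k) rather than
    the O(n^2) of a dense similarity matrix."""
    n = len(X)
    W = lil_matrix((n, n))
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        for j in np.argsort(d)[1:k + 1]:       # skip self (distance 0)
            W[i, j] = np.exp(-d[j] ** 2 / (2 * sigma ** 2))
    W = W.tocsr()
    return W.maximum(W.T)                      # symmetrize

X = np.random.default_rng(0).normal(size=(50, 2))
W = sparse_similarity(X)
print(W.shape, W.nnz < 50 * 50)  # → (50, 50) True
```

The resulting sparse, symmetric matrix can then be fed to any spectral clustering routine that accepts a precomputed affinity.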
NASA Astrophysics Data System (ADS)
Kato, T.; Kataoka, J.; Nakamori, T.; Kishimoto, A.; Yamamoto, S.; Sato, K.; Ishikawa, Y.; Yamamura, K.; Kawabata, N.; Ikeda, H.; Kamada, K.
2013-05-01
We report the development of a high spatial resolution tweezers-type coincidence gamma-ray camera for medical imaging. The camera consists of large-area monolithic Multi-Pixel Photon Counters (MPPCs) and submillimeter pixelized scintillator matrices. The MPPC array has 4 × 4 channels with a three-side buttable, very compact package. For a typical operational gain of 7.5 × 10^5 at +20 °C, gain fluctuation over the entire MPPC device is only ±5.6%, and dark count rates (as measured at the 1 p.e. level) amount to ≤400 kcps per channel. We selected Ce-doped (Lu,Y)2(SiO4)O (Ce:LYSO) and a brand-new scintillator, Ce-doped Gd3Al2Ga3O12 (Ce:GAGG), for their high light yield and density. To improve the spatial resolution, these scintillators were fabricated into 15 × 15 matrices of 0.5 × 0.5 mm2 pixels. The Ce:LYSO and Ce:GAGG scintillator matrices were assembled into phosphor sandwich (phoswich) detectors, then coupled to the MPPC array along with a 1 mm thick acrylic light guide and summing operational amplifiers that compile the signals into four position-encoded analog outputs used for signal readout. Spatial resolution of 1.1 mm was achieved with the coincidence imaging system using a 22Na point source. These results suggest that such gamma-ray imagers offer excellent potential for applications in high spatial resolution medical imaging.
Binary CMOS image sensor with a gate/body-tied MOSFET-type photodetector for high-speed operation
NASA Astrophysics Data System (ADS)
Choi, Byoung-Soo; Jo, Sung-Hyun; Bae, Myunghan; Kim, Sang-Hwan; Shin, Jang-Kyoo
2016-05-01
In this paper, a binary complementary metal oxide semiconductor (CMOS) image sensor with a gate/body-tied (GBT) metal oxide semiconductor field effect transistor (MOSFET)-type photodetector is presented. The sensitivity of the GBT MOSFET-type photodetector, which was fabricated using the standard 0.35-μm CMOS process, is higher than that of a p-n junction photodiode because the output signal of the photodetector is amplified by the MOSFET. A binary image sensor becomes more efficient when using this photodetector: lower power consumption and higher operating speed are possible compared to conventional image sensors using multi-bit analog-to-digital converters (ADCs). The frame rate of the proposed image sensor is over 2000 frames per second, higher than those of conventional CMOS image sensors. The output signal of an active pixel sensor is applied to a comparator and compared with a reference level; this level determines the 1-bit output data of the binary process. To obtain a video signal, the 1-bit output data is stored in memory and read out by horizontal scanning. The proposed chip is composed of a GBT pixel array (144 × 100), a binary-process circuit, a vertical scanner, a horizontal scanner, and a readout circuit. The operation mode can be selected between binary mode and multi-bit mode.
High-definition Fourier transform infrared spectroscopic imaging of prostate tissue
NASA Astrophysics Data System (ADS)
Wrobel, Tomasz P.; Kwak, Jin Tae; Kadjacsy-Balla, Andre; Bhargava, Rohit
2016-03-01
Histopathology forms the gold standard for cancer diagnosis and therapy, and generally relies on manual examination of microscopic structural morphology within tissue. Fourier-Transform Infrared (FT-IR) imaging is an emerging vibrational spectroscopic imaging technique, especially in a High-Definition (HD) format, that provides the spatial specificity of microscopy at magnifications used in diagnostic surgical pathology. While it has been shown for standard imaging that IR absorption by tissue creates a strong signal where the spectrum at each pixel is a quantitative "fingerprint" of the molecular composition of the sample, here we show that this fingerprint also enables direct digital pathology without the need for stains or dyes for HD imaging. An assessment of the potential of HD imaging to improve diagnostic pathology accuracy is presented.
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Bryan, Thomas C. (Inventor); Book, Michael L. (Inventor)
2004-01-01
A method and system for processing an image including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data is selected from the image pixel data and linear spot segments are identified from the threshold pixel data selected. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of first and last pixels of a linear segment present in the captured image with respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear segment is saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value multiplied by that pixel's x-location).
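The per-segment bookkeeping the abstract describes (keep only first/last position, plus optionally the sum and x-weighted sum of pixel values) can be sketched for a single image row; the sample row and threshold are illustrative, not from the patent.

```python
import numpy as np

def segment_stats(row, threshold):
    """Scan one image row for runs of above-threshold pixels and keep,
    per run, only (first_x, last_x, sum, x-weighted sum) rather than the
    full pixel data, as in the described scheme."""
    segs, start = [], None
    for x, v in enumerate(row):
        if v > threshold and start is None:
            start = x
        elif v <= threshold and start is not None:
            run = row[start:x]
            segs.append((start, x - 1, int(run.sum()),
                         int((run * np.arange(start, x)).sum())))
            start = None
    if start is not None:                      # run reaches end of row
        run = row[start:]
        segs.append((start, len(row) - 1, int(run.sum()),
                     int((run * np.arange(start, len(row))).sum())))
    return segs

row = np.array([0, 0, 5, 7, 6, 0, 0, 9, 0])
print(segment_stats(row, threshold=1))  # → [(2, 4, 18, 55), (7, 7, 9, 63)]
```

Dividing the weighted sum by the sum recovers the segment's intensity centroid, which is what makes these compact statistics sufficient for tracking.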
NASA Astrophysics Data System (ADS)
Khlopenkov, Konstantin; Duda, David; Thieman, Mandana; Minnis, Patrick; Su, Wenying; Bedka, Kristopher
2017-10-01
The Deep Space Climate Observatory (DSCOVR) enables analysis of the daytime Earth radiation budget via the onboard Earth Polychromatic Imaging Camera (EPIC) and National Institute of Standards and Technology Advanced Radiometer (NISTAR). Radiance observations and cloud property retrievals from low earth orbit and geostationary satellite imagers have to be co-located with EPIC pixels to provide scene identification in order to select anisotropic directional models needed to calculate shortwave and longwave fluxes. A new algorithm is proposed for optimal merging of selected radiances and cloud properties derived from multiple satellite imagers to obtain seamless global hourly composites at 5-km resolution. An aggregated rating is employed to incorporate several factors and to select the best observation at the time nearest to the EPIC measurement. Spatial accuracy is improved using inverse mapping with gradient search during reprojection and bicubic interpolation for pixel resampling. The composite data are subsequently remapped into EPIC-view domain by convolving composite pixels with the EPIC point spread function defined with a half-pixel accuracy. PSF-weighted average radiances and cloud properties are computed separately for each cloud phase. The algorithm has demonstrated contiguous global coverage for any requested time of day with a temporal lag of under 2 hours in over 95% of the globe.
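The PSF-weighted averaging step, computing each EPIC-view value as a point-spread-function-weighted mean of the composite pixels inside the footprint, reduces to a weighted average; the footprint size and weights below are hypothetical stand-ins for real EPIC PSF values.

```python
import numpy as np

def psf_weighted_average(values, weights):
    """PSF-weighted mean of composite-pixel values falling inside one
    EPIC footprint; the weights would come from the EPIC point spread
    function evaluated at each composite pixel's offset."""
    w = np.asarray(weights, float)
    return (np.asarray(values, float) * w).sum() / w.sum()

# Hypothetical 3-pixel footprint: the centre pixel dominates the PSF.
print(psf_weighted_average([10.0, 20.0, 30.0], [0.25, 0.5, 0.25]))  # → 20.0
```

In the full algorithm this average is computed separately for each cloud phase, so one footprint yields per-phase radiances and cloud properties.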
NASA Technical Reports Server (NTRS)
Khlopenkov, Konstantin; Duda, David; Thieman, Mandana; Minnis, Patrick; Su, Wenying; Bedka, Kristopher
2017-01-01
The Deep Space Climate Observatory (DSCOVR) enables analysis of the daytime Earth radiation budget via the onboard Earth Polychromatic Imaging Camera (EPIC) and National Institute of Standards and Technology Advanced Radiometer (NISTAR). Radiance observations and cloud property retrievals from low earth orbit and geostationary satellite imagers have to be co-located with EPIC pixels to provide scene identification in order to select anisotropic directional models needed to calculate shortwave and longwave fluxes. A new algorithm is proposed for optimal merging of selected radiances and cloud properties derived from multiple satellite imagers to obtain seamless global hourly composites at 5-kilometer resolution. An aggregated rating is employed to incorporate several factors and to select the best observation at the time nearest to the EPIC measurement. Spatial accuracy is improved using inverse mapping with gradient search during reprojection and bicubic interpolation for pixel resampling. The composite data are subsequently remapped into EPIC-view domain by convolving composite pixels with the EPIC point spread function (PSF) defined with a half-pixel accuracy. PSF-weighted average radiances and cloud properties are computed separately for each cloud phase. The algorithm has demonstrated contiguous global coverage for any requested time of day with a temporal lag of under 2 hours in over 95 percent of the globe.
Srinivasan, Vivek J.; Mandeville, Emiri T.; Can, Anil; Blasi, Francesco; Climov, Mihail; Daneshmand, Ali; Lee, Jeong Hyun; Yu, Esther; Radhakrishnan, Harsha; Lo, Eng H.; Sakadžić, Sava; Eikermann-Haerter, Katharina; Ayata, Cenk
2013-01-01
Progress in experimental stroke and translational medicine could be accelerated by high-resolution in vivo imaging of disease progression in the mouse cortex. Here, we introduce optical microscopic methods that monitor brain injury progression using intrinsic optical scattering properties of cortical tissue. A multi-parametric Optical Coherence Tomography (OCT) platform for longitudinal imaging of ischemic stroke in mice, through thinned-skull, reinforced cranial window surgical preparations, is described. In the acute stages, the spatiotemporal interplay between hemodynamics and cell viability, a key determinant of pathogenesis, was imaged. In acute stroke, microscopic biomarkers for eventual infarction, including capillary non-perfusion, cerebral blood flow deficiency, altered cellular scattering, and impaired autoregulation of cerebral blood flow, were quantified and correlated with histology. Additionally, longitudinal microscopy revealed remodeling and flow recovery after one week of chronic stroke. Intrinsic scattering properties serve as reporters of acute cellular and vascular injury and recovery in experimental stroke. Multi-parametric OCT represents a robust in vivo imaging platform to comprehensively investigate these properties. PMID:23940761
Huard, Edouard; Derelle, Sophie; Jaeck, Julien; Nghiem, Jean; Haïdar, Riad; Primot, Jérôme
2018-03-05
A challenging point in the prediction of the image quality of infrared imaging systems is the evaluation of the detector modulation transfer function (MTF). In this paper, we present a linear method to get a 2D continuous MTF from sparse spectral data. Within the method, an object with a predictable sparse spatial spectrum is imaged by the focal plane array. The sparse data is then treated to return the 2D continuous MTF with the hypothesis that all the pixels have an identical spatial response. The linearity of the treatment is a key point to estimate directly the error bars of the resulting detector MTF. The test bench will be presented along with measurement tests on a 25 μm pitch InGaAs detector.
Measurement of RBC agglutination with microscopic cell image analysis in a microchannel chip.
Cho, Chi Hyun; Kim, Ju Yeon; Nyeck, Agnes E; Lim, Chae Seung; Hur, Dae Sung; Chung, Chanil; Chang, Jun Keun; An, Seong Soo A; Shin, Sehyun
2014-01-01
Since Landsteiner's discovery of ABO blood groups, RBC agglutination has been one of the most important immunohematologic techniques for ABO and RhD blood grouping. The conventional RBC agglutination grading system for RhD blood typing relies on macroscopic reading, followed by the assignment of a grade ranging from (-) to (4+) to the degree of red blood cell clumping. With the new scoring method introduced in this report, however, microscopically captured cell images of agglutinated RBCs, placed in a microchannel chip, are used for analysis. The pixel counts of the cell images first allow differentiation of agglutinated from non-agglutinated red blood cells; the ratio of agglutinated RBCs to total RBC counts (CRAT) over 90 captured images is then calculated. During the trial, the agglutinated group's CRAT (3.77 to 0.003) was significantly higher than that of the normal control (0). On this basis, the microchannel method was found more suitable for discriminating agglutinated RBCs from non-agglutinated RhD-negative cells, and thus more reliable for grading RBC agglutination, than the conventional method.
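One way the pixel-count discrimination could work is to label connected cell regions and treat large regions as agglutinated clumps; this is a speculative sketch, since the abstract does not give the actual criterion, and the clump-area cutoff and binary image are invented for illustration.

```python
import numpy as np
from scipy import ndimage

def crat(mask, clump_area=5):
    """Label connected cell regions in a binary cell image; regions whose
    pixel number exceeds `clump_area` (a hypothetical cutoff) are treated
    as agglutinated clumps. Returns the ratio of pixels in agglutinated
    regions to all cell pixels."""
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return sizes[sizes > clump_area].sum() / sizes.sum()

mask = np.zeros((10, 10), bool)
mask[1:4, 1:4] = True    # one 9-pixel clump (agglutinated)
mask[7, 7] = True        # one single free cell
print(crat(mask))  # → 0.9
```

In the reported method this per-image ratio would be pooled over the 90 captured images to give the final CRAT score.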
A mini-microscope for in situ monitoring of cells.
Kim, Sang Bok; Koo, Kyo-in; Bae, Hojae; Dokmeci, Mehmet R; Hamilton, Geraldine A; Bahinski, Anthony; Kim, Sun Min; Ingber, Donald E; Khademhosseini, Ali
2012-10-21
A mini-microscope was developed for in situ monitoring of cells by modifying off-the-shelf components of a commercial webcam. The mini-microscope consists of a CMOS imaging module, a small plastic lens and a white LED illumination source. The CMOS imaging module was connected to a laptop computer through a USB port for image acquisition and analysis. Due to its compact size, 8 × 10 × 9 cm, the present microscope is portable and can easily fit inside a conventional incubator, and enables real-time monitoring of cellular behaviour. Moreover, the mini-microscope can be used for imaging cells in conventional cell culture flasks, such as Petri dishes and multi-well plates. To demonstrate the operation of the mini-microscope, we monitored the cellular migration of mouse 3T3 fibroblasts in a scratch assay in medium containing three different concentrations of fetal bovine serum (5, 10, and 20%) and demonstrated differential responses depending on serum levels. In addition, we seeded embryonic stem cells inside poly(ethylene glycol) microwells and monitored the formation of stem cell aggregates in real time using the mini-microscope. Furthermore, we also combined a lab-on-a-chip microfluidic device for microdroplet generation and analysis with the mini-microscope and observed the formation of droplets under different flow conditions. Given its cost effectiveness, robust imaging and portability, the presented platform may be useful for a range of applications for real-time cellular imaging using lab-on-a-chip devices at low cost.
A mini-microscope for in situ monitoring of cells†‡
Kim, Sang Bok; Koo, Kyo-in; Bae, Hojae; Dokmeci, Mehmet R.; Hamilton, Geraldine A.; Bahinski, Anthony; Kim, Sun Min; Ingber, Donald E.
2013-01-01
A mini-microscope was developed for in situ monitoring of cells by modifying off-the-shelf components of a commercial webcam. The mini-microscope consists of a CMOS imaging module, a small plastic lens and a white LED illumination source. The CMOS imaging module was connected to a laptop computer through a USB port for image acquisition and analysis. Due to its compact size, 8 × 10 × 9 cm, the present microscope is portable and can easily fit inside a conventional incubator, and enables real-time monitoring of cellular behaviour. Moreover, the mini-microscope can be used for imaging cells in conventional cell culture flasks, such as Petri dishes and multi-well plates. To demonstrate the operation of the mini-microscope, we monitored the cellular migration of mouse 3T3 fibroblasts in a scratch assay in medium containing three different concentrations of fetal bovine serum (5, 10, and 20%) and demonstrated differential responses depending on serum levels. In addition, we seeded embryonic stem cells inside poly(ethylene glycol) microwells and monitored the formation of stem cell aggregates in real time using the mini-microscope. Furthermore, we also combined a lab-on-a-chip microfluidic device for microdroplet generation and analysis with the mini-microscope and observed the formation of droplets under different flow conditions. Given its cost effectiveness, robust imaging and portability, the presented platform may be useful for a range of applications for real-time cellular imaging using lab-on-a-chip devices at low cost. PMID:22911426
Object Manifold Alignment for Multi-Temporal High Resolution Remote Sensing Images Classification
NASA Astrophysics Data System (ADS)
Gao, G.; Zhang, M.; Gu, Y.
2017-05-01
Multi-temporal remote sensing image classification is very useful for monitoring land cover changes. Traditional approaches in this field mainly face limited labelled samples and spectral drift of image information. As spatial resolution improves, a "pepper and salt" effect appears and classification results suffer when pixelwise classification algorithms, which ignore the spatial relationships among pixels, are applied to high-resolution satellite images. To classify multi-temporal high resolution images under limited labelled samples, spectral drift and the "pepper and salt" problem, an object-based manifold alignment method is proposed. First, the multi-temporal multispectral images are segmented into superpixels by simple linear iterative clustering (SLIC). Second, features obtained from the superpixels are formed into vectors. Third, a majority-voting manifold alignment method aimed at the high resolution problem is proposed and maps the vector data into an alignment space. Finally, all data in the alignment space are classified using the KNN method. Multi-temporal images from different areas and from the same area are both considered in this paper. In the experiments, 2 groups of multi-temporal HR images collected by the China GF1 and GF2 satellites are used for performance evaluation. Experimental results indicate that the proposed method not only significantly outperforms traditional domain adaptation methods in classification accuracy, but also effectively overcomes the "pepper and salt" problem.
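The final classification step, KNN in the aligned feature space, can be sketched with a plain nearest-neighbour vote; the per-superpixel feature vectors and class labels below are invented illustrations, and the manifold alignment itself is assumed already done.

```python
import numpy as np

def knn_classify(train_X, train_y, query, k=3):
    """Plain k-nearest-neighbour majority vote in the (already aligned)
    feature space, standing in for the paper's final KNN step."""
    d = np.linalg.norm(train_X - query, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]

# Hypothetical per-superpixel features (e.g. mean spectral values).
train_X = np.array([[0.1, 0.2], [0.0, 0.1], [0.9, 0.8], [1.0, 0.9]])
train_y = np.array([0, 0, 1, 1])        # 0 = vegetation, 1 = building
print(knn_classify(train_X, train_y, np.array([0.95, 0.85])))  # → 1
```

Because each sample here is a superpixel rather than a pixel, a single vote labels a whole coherent region, which is what suppresses the "pepper and salt" effect.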
Isakozawa, Shigeto; Fuse, Taishi; Amano, Junpei; Baba, Norio
2018-04-01
As alternatives to the diffractogram-based method in high-resolution transmission electron microscopy, a spot auto-focusing (AF) method and a spot auto-stigmation (AS) method are presented with a unique high-definition auto-correlation function (HD-ACF). The HD-ACF clearly resolves the ACF central peak region in small amorphous-thin-film images, reflecting the phase contrast transfer function. At a 300-k magnification for a 120-kV transmission electron microscope, the smallest areas used are 64 × 64 pixels (~3 nm2) for the AF and 256 × 256 pixels for the AS. A useful advantage of these methods is that the AF function has an allowable accuracy even for a low s/n (~1.0) image. A reference database on the defocus dependency of the HD-ACF by the pre-acquisition of through-focus amorphous-thin-film images must be prepared to use these methods. This can be very beneficial because the specimens are not limited to approximations of weak phase objects but can be extended to objects outside such approximations.
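The HD-ACF itself includes refinements specific to the paper, but the plain two-dimensional autocorrelation it builds on can be computed from the image power spectrum (the Wiener-Khinchin relation); the random test image below is only a stand-in for an amorphous-thin-film image.

```python
import numpy as np

def acf2d(img):
    """2-D autocorrelation via the Wiener-Khinchin relation: inverse FFT
    of the image power spectrum, shifted so zero lag is central and
    peak-normalized. Defocus and astigmatism alter the shape of the
    central peak region, which is what the AF/AS methods examine."""
    f = np.fft.fft2(img - img.mean())
    acf = np.real(np.fft.ifft2(np.abs(f) ** 2))
    return np.fft.fftshift(acf) / acf.flat[0]

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))          # stand-in for a 64 × 64 AF patch
acf = acf2d(img)
print(acf.shape, round(acf[32, 32], 3))  # → (64, 64) 1.0
```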
ERIC Educational Resources Information Center
Vitz, Ed
2010-01-01
A handheld digital microscope (HDM) interfaced to a computer with a presentation projector is used to project an out-of-focus yellow patch on the screen, then the patch is brought into focus to show that, paradoxically, there are red and green but no yellow pixels. Chromaticity diagrams are used to discuss this observation and spectroscopic…
Image alignment for tomography reconstruction from synchrotron X-ray microscopic images.
Cheng, Chang-Chieh; Chien, Chia-Chi; Chen, Hsiang-Hsin; Hwu, Yeukuang; Ching, Yu-Tai
2014-01-01
A synchrotron X-ray microscope is a powerful imaging apparatus for taking high-resolution and high-contrast X-ray images of nanoscale objects. A sufficient number of X-ray projection images from different angles is required for constructing 3D volume images of an object. Because a synchrotron light source is immobile, a rotational object holder is required for tomography. At a resolution of 10 nm per pixel, the vibration of the holder caused by rotating the object cannot be disregarded if tomographic images are to be reconstructed accurately. This paper presents a computer method to compensate for the vibration of the rotational holder by aligning neighboring X-ray images. This alignment process involves two steps. The first step is to match the "projected feature points" in the sequence of images. The matched projected feature points in the x-θ plane should form a set of sine-shaped loci. The second step is to fit the loci to a set of sine waves to compute the parameters required for alignment. The experimental results show that the proposed method outperforms two previously proposed methods, Xradia and SPIDER. The developed software system can be downloaded from the URL, http://www.cs.nctu.edu.tw/~chengchc/SCTA or http://goo.gl/s4AMx.
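The second alignment step, fitting the matched feature loci in the x-θ plane to sine waves, is a linear least-squares problem once the sinusoid is written as a·sin θ + b·cos θ + c; the synthetic locus below is illustrative, not data from the paper.

```python
import numpy as np

def fit_locus(theta, x):
    """Fit x(θ) = a·sin θ + b·cos θ + c by linear least squares; (a, b)
    encode the feature's position relative to the rotation axis and c
    the axis offset needed to align the projections."""
    A = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coeffs

theta = np.linspace(0, 2 * np.pi, 50)
x = 3.0 * np.sin(theta) + 1.0 * np.cos(theta) + 0.5   # ideal sine locus
a, b, c = fit_locus(theta, x)
print(round(a, 3), round(b, 3), round(c, 3))  # → 3.0 1.0 0.5
```

The residual between each measured locus point and the fitted sine wave then gives the per-projection shift needed to compensate the holder vibration.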
Robotics and dynamic image analysis for studies of gene expression in plant tissues.
Hernandez-Garcia, Carlos M; Chiera, Joseph M; Finer, John J
2010-05-05
Gene expression in plant tissues is typically studied by destructive extraction of compounds from plant tissues for in vitro analyses. The methods presented here utilize the green fluorescent protein (gfp) gene for continual monitoring of gene expression in the same pieces of tissue over time. The gfp gene was placed under regulatory control of different promoters and introduced into lima bean cotyledonary tissues via particle bombardment. Cotyledons were then placed on a robotic image collection system, which consisted of a fluorescence dissecting microscope with a digital camera and a 2-dimensional robotics platform custom-designed to allow secure attachment of culture dishes. Images were collected from cotyledonary tissues every hour for 100 hours to generate expression profiles for each promoter. Each collected series of 100 images was first subjected to manual image alignment using ImageReady to make certain that GFP-expressing foci were consistently retained within selected fields of analysis. Specific regions of the series, measuring 300 x 400 pixels, were then selected for further analysis to provide GFP intensity measurements using ImageJ software. Batch images were separated into the red, green and blue channels and GFP-expressing areas were identified using the threshold feature of ImageJ. After subtracting the background fluorescence (subtraction of gray values of non-expressing pixels from every pixel) in the respective red and green channels, GFP intensity was calculated by multiplying the mean grayscale value per pixel by the total number of GFP-expressing pixels in each channel, and then adding those values for both the red and green channels. GFP intensity values were collected for all 100 time points to yield expression profiles. Variations in GFP expression profiles resulted from differences in factors such as promoter strength, presence of a silencing suppressor, or nature of the promoter.
In addition to quantification of GFP intensity, the image series were also used to generate time-lapse animations using ImageReady. Time-lapse animations revealed that the clear majority of cells displayed a relatively rapid increase in GFP expression, followed by a slow decline. Some cells occasionally displayed a sudden loss of fluorescence, which may be associated with rapid cell death. Apparent transport of GFP across the membrane and cell wall to adjacent cells was also observed. Time-lapse animations provided additional information that could not otherwise be obtained using GFP Intensity profiles or single time point image collections.
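The intensity calculation described above (threshold to find expressing pixels, per-channel background subtraction, then mean grey value times the number of expressing pixels, summed over the red and green channels) can be sketched in Python with NumPy. The threshold value and the image layout are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def gfp_intensity(rgb, threshold=50):
    """GFP Intensity as described in the abstract: per-channel background
    subtraction, then mean grey value x number of expressing pixels,
    summed over the red and green channels. `threshold` is a
    hypothetical cutoff for identifying GFP-expressing pixels."""
    total = 0.0
    for ch in (0, 1):  # red and green channels of an RGB image
        plane = rgb[..., ch].astype(float)
        mask = plane > threshold                 # GFP-expressing pixels
        if not mask.any():
            continue
        # background = mean grey value of non-expressing pixels
        background = plane[~mask].mean() if (~mask).any() else 0.0
        corrected = plane[mask] - background
        total += corrected.mean() * mask.sum()
    return total
```

For a synthetic cotyledon image with a single bright focus, the value is simply (mean background-corrected intensity) x (focus area), summed over the two channels.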
Spectral imaging as a potential tool for optical sentinel lymph node biopsies
NASA Astrophysics Data System (ADS)
O'Sullivan, Jack D.; Hoy, Paul R.; Rutt, Harvey N.
2011-07-01
Sentinel Lymph Node Biopsy (SLNB) is an increasingly standard procedure to help oncologists accurately stage cancers. It is performed as an alternative to full axillary lymph node dissection in breast cancer patients, reducing the risk of long-term health problems associated with lymph node removal. Intraoperative analysis is currently performed using touch-print cytology, which can introduce significant delay into the procedure. Spectral imaging forms a multi-plane image in which reflected intensities from a number of spectral bands are recorded at each pixel in the spatial plane. We investigate the possibility of using spectral imaging to assess sentinel lymph nodes of breast cancer patients, with a view to eventually developing an optical technique that could significantly reduce the time required to perform this procedure. We investigate previously reported spectra of normal and metastatic tissue in the visible and near-infrared region, using them as the basis of dummy spectral images. We analyse these images using the spectral angle map (SAM), a tool routinely used in other fields where spectral imaging is prevalent. We simulate random noise in these images in order to determine whether the SAM can discriminate between normal and metastatic pixels as the quality of the images deteriorates. We show that even in cases where noise levels are up to 20% of the maximum signal, the spectral angle map can distinguish healthy pixels from metastatic ones. We believe that this makes spectral imaging a good candidate for further study in the development of an optical SLNB.
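The spectral angle map at the heart of this analysis is a standard formula: each pixel spectrum is compared with a reference spectrum by the angle between them as vectors, which makes the measure insensitive to overall brightness. A minimal sketch (the reference spectra and labels are hypothetical, not the tissue spectra used in the paper):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a
    reference spectrum; a smaller angle means a closer match."""
    p = np.asarray(pixel, float)
    r = np.asarray(reference, float)
    cos = np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify(image, refs):
    """Assign each pixel the label (index) of the reference spectrum
    with the smallest spectral angle. `image` is (rows, cols, bands)."""
    h, w, _ = image.shape
    out = np.empty((h, w), int)
    for i in range(h):
        for j in range(w):
            angles = [spectral_angle(image[i, j], r) for r in refs]
            out[i, j] = int(np.argmin(angles))
    return out
```

Because the angle ignores magnitude, a pixel twice as bright as a reference but with the same band ratios still maps to that reference, which is why SAM degrades gracefully under noise.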
Zhao, Ming; Li, Yu; Peng, Leilei
2014-05-05
We present a novel excitation-emission multiplexed fluorescence lifetime microscopy (FLIM) method that surpasses current FLIM techniques in multiplexing capability. The method employs Fourier multiplexing to simultaneously acquire confocal fluorescence lifetime images of multiple excitation wavelength and emission color combinations at 44,000 pixels/sec. The system is built with low-cost CW laser sources and standard PMTs with versatile spectral configuration, which can be implemented as an add-on to commercial confocal microscopes. The Fourier lifetime confocal method allows fast multiplexed FLIM imaging, which makes it possible to monitor multiple biological processes in live cells. The low cost and compatibility with commercial systems could also make multiplexed FLIM more accessible to biological research community.
Development of fast parallel multi-technique scanning X-ray imaging at Synchrotron Soleil
NASA Astrophysics Data System (ADS)
Medjoubi, K.; Leclercq, N.; Langlois, F.; Buteau, A.; Lé, S.; Poirier, S.; Mercère, P.; Kewish, C. M.; Somogyi, A.
2013-10-01
A fast multimodal scanning X-ray imaging scheme is prototyped at Soleil Synchrotron. It permits the simultaneous acquisition of complementary information on the sample structure, composition and chemistry by measuring transmission, differential phase contrast, small-angle scattering, and X-ray fluorescence by dedicated detectors with ms dwell time per pixel. The results of the proof of principle experiments are presented in this paper.
NASA Astrophysics Data System (ADS)
Farmer, J. D.; Nunez, J. I.; Sellar, R. G.; Gardner, P. B.; Manatt, K. S.; Dingizian, A.; Dudik, M. J.; McDonnell, G.; Le, T.; Thomas, J. A.; Chu, K.
2011-12-01
The Multispectral Microscopic Imager (MMI) is a prototype instrument presently under development for future astrobiological missions to Mars. The MMI is designed to be an arm-mounted rover instrument for use in characterizing the microtexture and mineralogy of materials along geological traverses [1,2,3]. Such geological information is regarded as essential for interpreting petrogenesis and geological history, and when acquired in near real-time, can support hypothesis-driven exploration and optimize science return. Correlated microtexture and mineralogy also provide essential data for selecting samples for analysis with onboard lab instruments, and for prioritizing samples for potential Earth return. The MMI design employs multispectral light-emitting diodes (LEDs) and an uncooled focal plane array to achieve the low mass (<1 kg), low cost, and high reliability (no moving parts) required for an arm-mounted instrument on a planetary rover [2,3]. The MMI acquires multispectral, reflectance images at 62 μm/pixel, in which each image pixel is comprised of a 21-band VNIR spectrum (0.46 to 1.73 μm). This capability enables the MMI to discriminate and resolve the spatial distribution of minerals and textures at the microscale [2,3]. By extending the spectral range into the infrared, and increasing the number of spectral bands, the MMI exceeds the capabilities of current microimagers, including the MER Microscopic Imager (MI) [4], the Phoenix mission Robotic Arm Camera (RAC) [5] and the Mars Science Laboratory's Mars Hand Lens Imager (MAHLI) [6]. 
In this report we will review the capabilities of the MMI by highlighting recent lab and field applications, including: 1) glove box deployments in the Astromaterials lab at Johnson Space Center to analyze Apollo lunar samples; 2) GeoLab glove box deployments during the 2011 Desert RATS field trials in northern AZ to characterize analog materials collected by astronauts during simulated EVAs; 3) field deployments on Mauna Kea Volcano, Hawaii, during NASA's 2010 ISRU field trials, to analyze materials at the primary feedstock mining site; 4) lab characterization of geological samples from a complex, volcanic-hydrothermal terrain in the Cady Mts., SE Mojave Desert, California. We will show how field and laboratory applications have helped drive the development and refinement of MMI capabilities, while identifying synergies with other potential payload instruments (e.g. X-ray Diffraction) for solving real geological problems.
Yothers, Mitchell P; Browder, Aaron E; Bumm, Lloyd A
2017-01-01
We have developed a real-space method to correct distortion due to thermal drift and piezoelectric actuator nonlinearities on scanning tunneling microscope images using Matlab. The method uses the known structures typically present in high-resolution atomic and molecularly resolved images as an internal standard. Each image feature (atom or molecule) is first identified in the image. The locations of each feature's nearest neighbors are used to measure the local distortion at that location. The local distortion map across the image is simultaneously fit to our distortion model, which includes thermal drift in addition to piezoelectric actuator hysteresis and creep. The image coordinates of the features and image pixels are corrected using an inverse transform from the distortion model. We call this technique the thermal-drift, hysteresis, and creep transform. Performing the correction in real space allows defects, domain boundaries, and step edges to be excluded with a spatial mask. Additional real-space image analyses are now possible with these corrected images. Using graphite(0001) as a model system, we show lattice fitting to the corrected image, averaged unit cell images, and symmetry-averaged unit cell images. Statistical analysis of the distribution of the image features around their best-fit lattice sites measures the aggregate noise in the image, which can be expressed as feature confidence ellipsoids.
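One step in such an analysis, fitting a 2-D lattice to the corrected feature positions by least squares, can be sketched as follows. Assigning integer lattice indices by rounding against guessed lattice vectors is a simplification for illustration; it is not the authors' full thermal-drift, hysteresis, and creep model.

```python
import numpy as np

def fit_lattice(points, a_guess, b_guess):
    """Fit lattice vectors a, b and an origin to feature positions by
    least squares. Each feature's integer indices (n, m) are obtained
    by rounding against the initial guess, so the guess must be close
    enough that all offsets stay below half a lattice spacing."""
    P = np.asarray(points, float)                 # (N, 2) feature positions
    M0 = np.column_stack([a_guess, b_guess])      # 2x2 guessed basis
    idx = np.rint(np.linalg.solve(M0, P.T)).T     # integer indices (n, m)
    A = np.column_stack([idx, np.ones(len(P))])   # design matrix [n m 1]
    coef, *_ = np.linalg.lstsq(A, P, rcond=None)  # rows: a, b, origin
    a, b, origin = coef[0], coef[1], coef[2]
    residuals = P - A @ coef                      # feature scatter about lattice
    return a, b, origin, residuals
```

The residuals are exactly the per-feature displacements whose distribution the abstract summarizes as confidence ellipsoids.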
GENIE: a hybrid genetic algorithm for feature classification in multispectral images
NASA Astrophysics Data System (ADS)
Perkins, Simon J.; Theiler, James P.; Brumby, Steven P.; Harvey, Neal R.; Porter, Reid B.; Szymanski, John J.; Bloch, Jeffrey J.
2000-10-01
We consider the problem of pixel-by-pixel classification of a multispectral image using supervised learning. Conventional supervised classification techniques such as maximum likelihood classification, and less conventional ones such as neural networks, typically base such classifications solely on the spectral components of each pixel. It is easy to see why: the color of a pixel provides a nice, bounded, fixed-dimensional space in which these classifiers work well. It is often the case, however, that spectral information alone is not sufficient to correctly classify a pixel. Maybe spatial neighborhood information is required as well. Or maybe the raw spectral components do not themselves make for easy classification, but some arithmetic combination of them would. In either of these cases we have the problem of selecting suitable spatial, spectral or spatio-spectral features that allow the classifier to do its job well. The number of all possible such features is extremely large. How can we select a suitable subset? We have developed GENIE, a hybrid learning system that combines a genetic algorithm, which searches a space of image processing operations for a set that can produce suitable feature planes, with a more conventional classifier that uses those feature planes to output a final classification. In this paper we show that the use of a hybrid GA provides significant advantages over using either a GA alone or more conventional classification methods alone. We present results using high-resolution IKONOS data, looking for regions of burned forest and for roads.
Early Validation of Sentinel-2 L2A Processor and Products
NASA Astrophysics Data System (ADS)
Pflug, Bringfried; Main-Knorn, Magdalena; Bieniarz, Jakub; Debaecker, Vincent; Louis, Jerome
2016-08-01
Sentinel-2 is a constellation of two polar orbiting satellite units, each one equipped with an optical imaging sensor, the MSI (Multi-Spectral Instrument). Sentinel-2A was launched on June 23, 2015 and Sentinel-2B will follow in 2017. The Level-2A (L2A) processor Sen2Cor implemented for Sentinel-2 data provides a scene classification image, aerosol optical thickness (AOT) and water vapour (WV) maps, and the Bottom-Of-Atmosphere (BOA) corrected reflectance product. First validation results of Sen2Cor scene classification showed an overall accuracy of 81%. AOT at 550 nm is estimated by Sen2Cor with an uncertainty of 0.035 for cloudless images and locations with dense dark vegetation (DDV) pixels present in the image. Aerosol estimation fails if the image contains no DDV pixels. The mean difference between Sen2Cor WV and ground truth is 0.29 cm. An uncertainty of up to 0.04 was found for the BOA-reflectance product.
Geiger-mode avalanche photodiode focal plane arrays for three-dimensional imaging LADAR
NASA Astrophysics Data System (ADS)
Itzler, Mark A.; Entwistle, Mark; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir; Zalud, Peter F.; Senko, Tom; Tower, John; Ferraro, Joseph
2010-09-01
We report on the development of focal plane arrays (FPAs) employing two-dimensional arrays of InGaAsP-based Geiger-mode avalanche photodiodes (GmAPDs). These FPAs incorporate InP/InGaAs(P) Geiger-mode avalanche photodiodes (GmAPDs) to create pixels that detect single photons at shortwave infrared wavelengths with high efficiency and low dark count rates. GmAPD arrays are hybridized to CMOS read-out integrated circuits (ROICs) that enable independent laser radar (LADAR) time-of-flight measurements for each pixel, providing three-dimensional image data at frame rates approaching 200 kHz. Microlens arrays are used to maintain a high fill factor of greater than 70%. We present full-array performance maps for two different types of sensors optimized for operation at 1.06 μm and 1.55 μm, respectively. For the 1.06 μm FPAs, overall photon detection efficiency of >40% is achieved at <20 kHz dark count rates with modest cooling to ~250 K using integrated thermoelectric coolers. We also describe the first evaluation of these FPAs when multi-photon pulses are incident on single pixels. The effective detection efficiency for multi-photon pulses shows excellent agreement with predictions based on Poisson statistics. We also characterize the crosstalk as a function of pulse mean photon number. Relative to the intrinsic crosstalk contribution from hot carrier luminescence that occurs during avalanche current flows resulting from single incident photons, we find a modest rise in crosstalk for multi-photon incident pulses that can be accurately explained by direct optical scattering.
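The Poisson-statistics prediction mentioned for multi-photon pulses is compact: for a pulse with mean photon number μ and single-photon detection efficiency η, a Geiger-mode pixel fires unless zero photons are detected, so the firing probability is P = 1 − exp(−ημ). A sketch of this standard relation (the specific numbers below are illustrative, not the paper's measurements):

```python
import math

def detection_probability(mu, eta):
    """Probability that a Geiger-mode pixel fires for a pulse with
    Poisson mean photon number `mu` and single-photon detection
    efficiency `eta`: detected photons are Poisson with mean eta*mu,
    and the pixel fires unless that count is zero."""
    return 1.0 - math.exp(-eta * mu)
```

This is why the effective multi-photon detection efficiency saturates toward 1 as μ grows, even for modest single-photon η.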
A Data Field Method for Urban Remotely Sensed Imagery Classification Considering Spatial Correlation
NASA Astrophysics Data System (ADS)
Zhang, Y.; Qin, K.; Zeng, C.; Zhang, E. B.; Yue, M. X.; Tong, X.
2016-06-01
Spatial correlation between pixels is important information for remotely sensed imagery classification. Data field methods and spatial autocorrelation statistics have been utilized to describe and model the spatial information of local pixels. The original data field method can represent the spatial interactions of neighbourhood pixels effectively. However, its focus on measuring the grey-level change between the central pixel and the neighbourhood pixels results in exaggerating the contribution of the central pixel to the whole local window. Besides, Geary's C has also been proven to characterise and quantify well the spatial correlation between each pixel and its neighbourhood pixels. But the extracted object is badly delineated, with a distracting salt-and-pepper effect of isolated misclassified pixels. To correct this defect, we introduce the data field method for filtering and noise limitation. Moreover, the original data field method is enhanced by considering each pixel in the window as the central pixel to compute statistical characteristics between it and its neighbourhood pixels. The last step employs a support vector machine (SVM) for the classification of multiple features (e.g. the spectral feature and the spatial correlation feature). In order to validate the effectiveness of the developed method, experiments are conducted on different remotely sensed images containing multiple complex object classes. The results show that the developed method outperforms the traditional method in terms of classification accuracy.
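Geary's C, the autocorrelation statistic discussed above, has a standard definition: C = (N−1) Σᵢⱼ wᵢⱼ(xᵢ−xⱼ)² / (2W Σᵢ(xᵢ−x̄)²), with values near 1 for no spatial autocorrelation, below 1 for positive autocorrelation, and above 1 for negative. A minimal sketch for a grey-level image; the 4-neighbour (rook) contiguity weighting is an assumption, not necessarily the paper's neighbourhood definition.

```python
import numpy as np

def gearys_c(img):
    """Geary's C for a 2-D grey-level image with 4-neighbour (rook)
    contiguity and binary weights. C < 1: positive autocorrelation
    (smooth image); C > 1: negative autocorrelation (checkerboard)."""
    x = np.asarray(img, float)
    n = x.size
    denom = ((x - x.mean()) ** 2).sum()
    num, W = 0.0, 0
    # horizontal and vertical neighbour pair differences; each pair
    # appears twice in the double sum (i->j and j->i), hence the 2x
    for d in (x[:, 1:] - x[:, :-1], x[1:, :] - x[:-1, :]):
        num += 2 * (d ** 2).sum()
        W += 2 * d.size
    return (n - 1) * num / (2 * W * denom)
```

On a checkerboard every neighbour pair differs maximally, pushing C above 1, which is exactly the salt-and-pepper signature the paper's filtering step is meant to suppress.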
NASA Astrophysics Data System (ADS)
Yu, Zhongzhi; Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Liu, Xu
2018-06-01
Parallel detection, which makes use of the additional information in a pinhole-plane image taken at every excitation scan position, can be an efficient method to enhance the resolution of a confocal laser scanning microscope. In this paper, we discuss images obtained under different conditions and with different image restoration methods using parallel detection, to quantitatively compare imaging quality. The conditions include different noise levels and different detector array settings. The image restoration methods include linear deconvolution and pixel reassignment with Richardson-Lucy deconvolution and with maximum-likelihood estimation deconvolution. The results show that linear deconvolution offers high efficiency and the best performance under all conditions, and is therefore expected to be of use in future routine biomedical research.
Autofocus and fusion using nonlinear correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cabazos-Marín, Alma Rocío; Álvarez-Borrego, Josué, E-mail: josue@cicese.mx; Coronel-Beltrán, Ángel
2014-10-06
In this work a new algorithm is proposed for autofocus and fusion of images captured by a microscope's CCD. The proposed autofocus algorithm implements spiral scanning of each image in the stack f(x, y)_w to define the vector V_w. The spectrum FV_w of each vector is calculated by the fast Fourier transform. The best in-focus image is determined by a focus measure obtained from the nonlinear correlation of FV_1, from the reference image, with each of the other FV_w vectors in the stack. In addition, fusion is performed with a subset of selected images f(x, y)_SBF, namely the images with the best focus measure. Fusion creates a new improved image f(x, y)_F by selecting the pixels of highest intensity.
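The spiral-scan step can be sketched directly; the focus measure below uses high-frequency spectral energy as a hedged stand-in for the paper's nonlinear correlation of FV_1 with each FV_w, since sharper images carry more energy at high spatial frequencies.

```python
import numpy as np

def spiral_vector(img):
    """Unwind a 2-D image into a 1-D vector by clockwise spiral
    scanning from the outermost ring inward, as in f(x, y)_w -> V_w."""
    a = np.asarray(img).tolist()
    out = []
    while a:
        out.extend(a.pop(0))                   # top row, left to right
        if a and a[0]:
            for row in a:                      # right column, downward
                out.append(row.pop())
            if a:
                out.extend(a.pop()[::-1])      # bottom row, right to left
            for row in a[::-1]:                # left column, upward
                if row:
                    out.append(row.pop(0))
    return np.array(out, float)

def focus_measure(img):
    """Proxy focus metric: high-frequency energy of the spiral vector's
    Fourier spectrum. The quarter-spectrum cutoff is a hypothetical
    choice, not the paper's nonlinear correlation."""
    V = spiral_vector(img)
    F = np.abs(np.fft.rfft(V - V.mean()))
    cut = len(F) // 4
    return float((F[cut:] ** 2).sum())
```

A blurred or defocused frame loses its high-frequency content, so its measure drops relative to the in-focus frame.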
SegAN: Adversarial Network with Multi-scale L1 Loss for Medical Image Segmentation.
Xue, Yuan; Xu, Tao; Zhang, Han; Long, L Rodney; Huang, Xiaolei
2018-05-03
Inspired by classic Generative Adversarial Networks (GANs), we propose a novel end-to-end adversarial neural network, called SegAN, for the task of medical image segmentation. Since image segmentation requires dense, pixel-level labeling, the single scalar real/fake output of a classic GAN's discriminator may be ineffective in producing stable and sufficient gradient feedback to the networks. Instead, we use a fully convolutional neural network as the segmentor to generate segmentation label maps, and propose a novel adversarial critic network with a multi-scale L1 loss function to force the critic and segmentor to learn both global and local features that capture long- and short-range spatial relationships between pixels. In our SegAN framework, the segmentor and critic networks are trained in an alternating fashion in a min-max game: the critic is trained by maximizing a multi-scale loss function, while the segmentor is trained with only gradients passed along by the critic, with the aim of minimizing the multi-scale loss function. We show that such a SegAN framework is more effective and stable for the segmentation task, and it leads to better performance than the state-of-the-art U-net segmentation method. We tested our SegAN method using datasets from the MICCAI BRATS brain tumor segmentation challenge. Extensive experimental results demonstrate the effectiveness of the proposed SegAN with multi-scale loss: on BRATS 2013, SegAN gives performance comparable to the state of the art for whole-tumor and tumor-core segmentation while achieving better precision and sensitivity for Gd-enhanced tumor-core segmentation; on BRATS 2015, SegAN achieves better performance than the state of the art in both Dice score and precision.
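The multi-scale L1 idea, comparing predictions and ground truth at several spatial scales rather than only at full resolution, can be illustrated in NumPy. Here average pooling stands in for the critic's learned feature hierarchy, a deliberate simplification of the paper's critic network.

```python
import numpy as np

def downsample(x):
    """2x average pooling, a stand-in for one critic layer's
    coarser-scale feature map."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def multiscale_l1(pred_masked, gt_masked, levels=3):
    """Mean absolute error accumulated over several spatial scales,
    in the spirit of SegAN's multi-scale L1 critic loss. Coarse scales
    penalize long-range (global) disagreement, fine scales short-range
    (local) disagreement."""
    p = np.asarray(pred_masked, float)
    g = np.asarray(gt_masked, float)
    loss = 0.0
    for _ in range(levels):
        loss += np.abs(p - g).mean()
        p, g = downsample(p), downsample(g)
    return loss / levels
```

A segmentor that gets the overall region right but the boundary wrong is penalized mostly at the fine scales; one that misplaces the whole region is penalized at every scale.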
Nagy, Peter; Szabó, Ágnes; Váradi, Tímea; Kovács, Tamás; Batta, Gyula; Szöllősi, János
2016-04-01
Fluorescence or Förster resonance energy transfer (FRET) remains one of the most widely used methods for assessing protein clustering and conformation. Although it is a method with solid physical foundations, many applications of FRET fall short of providing quantitative results due to inappropriate calibration and controls. This shortcoming is especially relevant for microscopy, where currently available tools have limited or no capability to display parameter distributions or to perform gating. Since users of multiparameter flow cytometry routinely apply such tools, their absence in applications developed for microscopic FRET analysis is a significant limitation. Therefore, we developed a graphical-user-interface-controlled Matlab application for the evaluation of ratiometric, intensity-based microscopic FRET measurements. The program can calculate all the necessary overspill and spectroscopic correction factors and the FRET efficiency, and it displays the results on histograms and dot plots. Gating on plots and mask images can be used to limit the calculation to certain parts of the image. It is an important feature of the program that the calculated parameters can be determined by regression methods, maximum likelihood estimation (MLE) and from summed intensities, in addition to pixel-by-pixel evaluation. The confidence interval of calculated parameters can be estimated using parameter simulations if the approximate average number of detected photons is known. The program is not only user-friendly, but provides rich output, gives the user freedom to choose from different calculation modes, and gives insight into the reliability and distribution of the calculated parameters. © 2016 International Society for Advancement of Cytometry.
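A per-pixel ratiometric FRET calculation of the kind such a program performs can be sketched as follows. The correction-factor convention (alpha, beta, gamma) is one common three-channel formulation, not necessarily the exact one used by the authors' application, which computes its own overspill and spectroscopic factors.

```python
def fret_efficiency(I_DD, I_DA, I_AA, alpha, beta, gamma):
    """Ratiometric, intensity-based FRET from three channels:
    I_DD donor excitation / donor emission, I_DA donor excitation /
    acceptor emission (raw sensitized emission), I_AA acceptor
    excitation / acceptor emission. alpha: donor bleed-through into
    the acceptor channel; beta: direct acceptor excitation; gamma:
    instrument/spectroscopic factor converting corrected sensitized
    emission to an efficiency. One common convention (assumed here)."""
    Fc = I_DA - alpha * I_DD - beta * I_AA   # corrected sensitized emission
    return Fc / (Fc + gamma * I_DD)
```

Applied pixel-by-pixel (with NumPy arrays as inputs), this yields the per-pixel efficiency maps that the program then gates and histograms.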
NASA Astrophysics Data System (ADS)
Nishidate, Izumi; Ooe, Shintaro; Todoroki, Shinsuke; Asamizu, Erika
2013-05-01
To evaluate the functional pigments in tomato fruits nondestructively, we propose a method based on multispectral diffuse reflectance images estimated from a digital RGB image by Wiener estimation. Each pixel of the multispectral image is converted to an absorbance spectrum and then analyzed by multiple regression analysis to visualize the contents of chlorophyll a, lycopene and β-carotene. The result confirms the feasibility of the method for in situ imaging of chlorophyll a, β-carotene and lycopene in tomato fruits.
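Wiener estimation of spectra from RGB values reduces to a linear estimator built from training correlation matrices: W = K_sr K_rr⁻¹, where K_sr is the spectra-RGB cross-correlation and K_rr the RGB autocorrelation. A minimal sketch; the training data here are synthetic stand-ins, not the paper's calibration set.

```python
import numpy as np

def wiener_matrix(S, R):
    """Wiener estimation matrix mapping camera RGB values to spectra.
    S: (n_samples, n_bands) training spectra; R: (n_samples, 3) the
    corresponding RGB responses. W minimises E||s - W r||^2 over the
    training ensemble."""
    n = len(S)
    Ksr = S.T @ R / n                 # cross-correlation, spectra vs RGB
    Krr = R.T @ R / n                 # RGB autocorrelation
    return Ksr @ np.linalg.inv(Krr)

def estimate_spectrum(W, rgb):
    """Per-pixel multispectral estimate from one RGB triplet."""
    return W @ np.asarray(rgb, float)
```

When the true spectra really are a linear function of the RGB responses, the estimator recovers that mapping exactly; in practice the training set determines how well it generalizes.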
NASA Astrophysics Data System (ADS)
Reynolds, Jeffery S.; Thompson, Alan B.; Troy, Tamara L.; Mayer, Ralf H.; Waters, David J.; Sevick-Muraca, Eva M.
1999-07-01
In this paper we demonstrate the ability to detect the frequency-domain fluorescent signal from the contrast agent indocyanine green within the mammary chain of dogs with spontaneous mammary tumors. We use a gain-modulated image intensifier to rapidly capture multi-pixel images of the fluorescent modulation amplitude, modulation phase, and average intensity signals. Excitation is provided by a 100 MHz amplitude-modulated, 780 nm laser diode. Time series images of the uptake and clearance of the contrast agent in the diseased tissue are also presented.
NASA Astrophysics Data System (ADS)
Hashimoto, Ryoji; Matsumura, Tomoya; Nozato, Yoshihiro; Watanabe, Kenji; Onoye, Takao
A multi-agent object attention system is proposed, based on a biologically inspired attractor-selection model. Object attention is facilitated by using a video sequence and a depth map obtained through a compound-eye image sensor, TOMBO. Robustness of the multi-agent system to environmental changes is enhanced by utilizing the biological model of adaptive response by attractor selection. To implement the proposed system, an efficient VLSI architecture is employed, reducing the enormous computational costs and memory accesses required for depth-map processing and the multi-agent attractor selection process. According to the FPGA implementation result of the proposed object attention system, which occupies 7,063 slices, 640×512-pixel input images can be processed in real time with three agents at a rate of 9 fps at 48 MHz operation.
Lensless transport-of-intensity phase microscopy and tomography with a color LED matrix
NASA Astrophysics Data System (ADS)
Zuo, Chao; Sun, Jiasong; Zhang, Jialin; Hu, Yan; Chen, Qian
2015-07-01
We demonstrate lensless quantitative phase microscopy and diffraction tomography based on a compact on-chip platform, using only a CMOS image sensor and a programmable color LED array. Based on multi-wavelength transport-of-intensity phase retrieval and multi-angle illumination diffraction tomography, this platform offers high-quality, depth-resolved images with a lateral resolution of ~3.7 μm and an axial resolution of ~5 μm, over a large imaging FOV of 24 mm2. The resolution and FOV can be further improved straightforwardly by using a larger image sensor with smaller pixels. This compact, low-cost, robust, portable platform with decent imaging performance may offer a cost-effective tool for telemedicine needs, or for reducing health care costs for point-of-care diagnostics in resource-limited environments.
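For reference, the transport-of-intensity approach recovers quantitative phase by relating the axial intensity derivative to the transverse phase gradient. A standard statement of the transport-of-intensity equation (not reproduced from the paper) is:

```latex
\nabla_{\perp} \cdot \left( I(\mathbf{r}_{\perp})\, \nabla_{\perp} \phi(\mathbf{r}_{\perp}) \right)
  = -k \, \frac{\partial I(\mathbf{r}_{\perp})}{\partial z},
\qquad k = \frac{2\pi}{\lambda}
```

In the multi-wavelength variant used here, intensity images captured at different LED wavelengths stand in for a mechanical through-focus stack when estimating the axial derivative, which is what allows a phase measurement with no moving parts.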
NASA Astrophysics Data System (ADS)
Ocampo Giraldo, L.; Bolotnikov, A. E.; Camarda, G. S.; De Geronimo, G.; Fried, J.; Gul, R.; Hodges, D.; Hossain, A.; Ünlü, K.; Vernon, E.; Yang, G.; James, R. B.
2018-03-01
We evaluated the sub-pixel position resolution achievable in large-volume CdZnTe pixelated detectors with conventional pixel patterns and for several different pixel sizes: 2.8 mm, 1.72 mm, 1.4 mm and 0.8 mm. Achieving position resolution below the physical dimensions of pixels (sub-pixel resolution) is a practical path for making high-granularity position-sensitive detectors, <100 μm, using a limited number of pixels dictated by the mechanical constraints and multi-channel readout electronics. High position sensitivity is important for improving the imaging capability of CZT gamma cameras. It also allows for making more accurate corrections of response non-uniformities caused by crystal defects, thus enabling use of standard-grade (unselected) and less expensive CZT crystals for producing large-volume position-sensitive CZT detectors feasible for many practical applications. We analyzed the digitized charge signals from nine representative pixels and the cathode, generated using a pulsed-laser light beam (650 nm) focused down to 10 μm to scan over a selected 3 × 3 pixel area. We applied our digital pulse processing technique to the time-correlated signals captured from adjacent pixels to achieve and evaluate the capability for sub-pixel position resolution. As an example, we also demonstrated an application of 3D corrections to improve the energy resolution and positional information of the events for the tested detectors.
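A simple illustration of sub-pixel position estimation from adjacent-pixel signals is a charge-weighted centroid over the 3 × 3 neighbourhood; this is a hedged stand-in for the authors' digital pulse processing of time-correlated transient signals, which exploits more information than amplitudes alone.

```python
import numpy as np

def subpixel_centroid(q, pitch):
    """Estimate the interaction position within a 3x3 pixel
    neighbourhood from the signal amplitude `q[row, col]` collected on
    each pixel, via a charge-weighted centroid. Positions are relative
    to the centre pixel; rows are taken to increase with +y and
    columns with +x (an assumed geometry)."""
    q = np.asarray(q, float)
    offsets = (np.arange(3) - 1) * pitch   # pixel centres: -pitch, 0, +pitch
    total = q.sum()
    x = (q.sum(axis=0) * offsets).sum() / total   # weight by column sums
    y = (q.sum(axis=1) * offsets).sum() / total   # weight by row sums
    return x, y
```

Signal shared equally between the centre pixel and its right neighbour, for example, places the event halfway toward that neighbour.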
Method and system for non-linear motion estimation
NASA Technical Reports Server (NTRS)
Lu, Ligang (Inventor)
2011-01-01
A method and system for extrapolating and interpolating a visual signal, including: determining a first motion vector from a first pixel position in a first image to a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining, using a non-linear model, a third motion vector from one of (a) the first pixel position in the first image and the second pixel position in the second image, and (b) the second pixel position in the second image and the third pixel position in the third image; and determining a position of a fourth pixel in a fourth image based upon the third motion vector.
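A concrete instance of a non-linear motion model for the extrapolation step is a constant-acceleration (quadratic) model built from the two measured motion vectors; the patent covers the general method, so this specific formula is an illustration, not the claimed implementation.

```python
def extrapolate_pixel(p1, p2, p3):
    """Extrapolate a fourth pixel position from three tracked positions
    using a constant-acceleration model: with v1 = p2 - p1 and
    v2 = p3 - p2, the next displacement is v2 + (v2 - v1), so the
    extrapolation is exact for any quadratic trajectory."""
    v1 = (p2[0] - p1[0], p2[1] - p1[1])        # first motion vector
    v2 = (p3[0] - p2[0], p3[1] - p2[1])        # second motion vector
    accel = (v2[0] - v1[0], v2[1] - v1[1])     # change between the two
    return (p3[0] + v2[0] + accel[0], p3[1] + v2[1] + accel[1])
```

For a pixel moving along x = t² (positions 0, 1, 4 at t = 0, 1, 2), the model predicts x = 9 at t = 3, matching the true quadratic path where a linear (constant-velocity) model would predict 7.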
Demonstration of 1024x1024 pixel dual-band QWIP focal plane array
NASA Astrophysics Data System (ADS)
Gunapala, S. D.; Bandara, S. V.; Liu, J. K.; Mumolo, J. M.; Ting, D. Z.; Hill, C. J.; Nguyen, J.; Rafol, S. B.
2010-04-01
QWIPs are well known for their stability, high pixel-to-pixel uniformity and high pixel operability, which are quintessential parameters for large-area imaging arrays. In this paper we report the first demonstration of the megapixel, simultaneously readable and pixel-co-registered dual-band QWIP focal plane array (FPA). The dual-band QWIP device was developed by stacking two multi-quantum-well stacks tuned to absorb two different infrared wavelengths. The full width at half maximum (FWHM) of the mid-wave infrared (MWIR) band extends from 4.4 to 5.1 μm and the FWHM of the long-wave infrared (LWIR) band extends from 7.8 to 8.8 μm. Dual-band QWIP detector arrays were hybridized with direct-injection, 30 μm pixel pitch, megapixel dual-band simultaneously readable CMOS read-out integrated circuits using the indium bump hybridization technique. The initial dual-band megapixel QWIP FPAs were cooled to a 68 K operating temperature. The preliminary data taken from the first megapixel QWIP FPA have shown system NEΔT of 27 and 40 mK for the MWIR and LWIR bands, respectively.
Multi-anode microchannel arrays - New detectors for imaging and spectroscopy in space
NASA Technical Reports Server (NTRS)
Timothy, J. G.; Bybee, R. L.
1983-01-01
Consideration is given to the construction and operation of multi-anode microchannel array detector systems having formats as large as 256 x 1024 pixels. Such arrays are being developed for imaging and spectroscopy at soft X-ray, ultraviolet and visible wavelengths from balloons, sounding rockets and space probes. Both discrete-anode and coincidence-anode arrays are described. Two types of photocathode structures are evaluated: an opaque photocathode deposited directly on the curved-channel MCP and an activated cathode deposited on a proximity-focused mesh. Future work will include sensitivity optimization in the different wavelength regions and the development of detector tubes with semitransparent proximity-focused photocathodes.
eSIP: A Novel Solution-Based Sectioned Image Property Approach for Microscope Calibration
Butzlaff, Malte; Weigel, Arwed; Ponimaskin, Evgeni; Zeug, Andre
2015-01-01
Fluorescence confocal microscopy represents one of the central tools in modern science, and a growing amount of research relies on the development of novel microscopic methods. During the last decade numerous microscopic approaches were developed for the investigation of various scientific questions, and the former qualitative imaging methods have been replaced by advanced quantitative methods to gain ever more information from a given sample. However, modern microscope systems, complex as they are, require very precise and appropriate calibration routines, in particular when quantitative measurements are to be compared over longer time scales or between different setups. Multispectral beads of sub-resolution size are often used to describe the point spread function and thus the optical properties of the microscope. More recently, a fluorescent layer was utilized to describe the axial profile for each pixel, which allows a spatially resolved characterization. However, fabrication of a thin fluorescent layer with a matching refractive index has not yet been solved technically. We therefore propose a novel calibration concept for sectioned image property (SIP) measurements which is based on a fluorescent solution and makes the calibration concept available to a broader range of users. Compared to the previous approach, additional information can be obtained with this extended SIP chart approach, including penetration depth, detected number of photons, and illumination profile shape. Furthermore, because the complete profile is fitted, our method is less susceptible to noise. Overall, the extended SIP approach represents a simple and highly reproducible method, allowing setup-independent calibration and alignment procedures, which is mandatory for advanced quantitative microscopy. PMID:26244982
A novel pixellated solid-state photon detector for enhancing the Everhart-Thornley detector.
Chuah, Joon Huang; Holburn, David
2013-06-01
This article presents a pixellated solid-state photon detector designed specifically to improve certain aspects of the existing Everhart-Thornley detector. The photon detector was constructed and fabricated in an Austriamicrosystems 0.35 µm complementary metal-oxide-semiconductor process technology. This integrated circuit consists of an array of high-responsivity photodiodes coupled to corresponding low-noise transimpedance amplifiers, a selector-combiner circuit and a variable-gain postamplifier. Simulated and experimental results show that the photon detector can achieve a maximum transimpedance gain of 170 dBΩ and minimum bandwidth of 3.6 MHz. It is able to detect signals with optical power as low as 10 nW and produces a minimum signal-to-noise ratio (SNR) of 24 dB regardless of gain configuration. The detector has been proven to be able to effectively select and combine signals from different pixels. The key advantages of this detector are smaller dimensions, higher cost effectiveness, lower voltage and power requirements and better integration. The photon detector supports pixel-selection configurability which may improve overall SNR and also potentially generate images for different analyses. This work has contributed to the future research of system-level integration of a pixellated solid-state detector for secondary electron detection in the scanning electron microscope. Copyright © 2013 Wiley Periodicals, Inc.
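As a quick sanity check on the reported figures, a transimpedance gain quoted in dBΩ converts to ohms via the 20·log10 convention. The helper below is purely illustrative and not part of the detector design:

```python
def dbohm_to_ohms(gain_db):
    """Transimpedance gain in ohms from its dBΩ value (20*log10 convention)."""
    return 10 ** (gain_db / 20.0)

# The detector's maximum gain of 170 dBΩ corresponds to roughly 3.16e8 Ω,
# i.e. a transimpedance of about 316 MΩ.
print(dbohm_to_ohms(170))
```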
Wang, Weibo; Wang, Chao; Liu, Jian; Tan, Jiubin
2016-01-01
We present an approach to initial configuration design based on an obscuration constraint and an on-axis Taylor series expansion, to realize a long-working-distance microscope (numerical aperture (NA) = 0.13, working distance (WD) = 525 mm) with a low-obscuration aspherical Schwarzschild objective for wide-spectrum imaging (λ = 400–900 nm). Experiments testing a resolution target, inspecting a United States Air Force (USAF) resolution chart, and using a line charge-coupled device (CCD) (pixel size of 14 μm × 56 μm) with light sources of different wavelengths (λ = 480 nm, 550 nm, 660 nm, 850 nm) were carried out to verify the validity of the proposed method. PMID:27834874
Nonlinear vibrational microscopy
Holtom, Gary R.; Xie, Xiaoliang Sunney; Zumbusch, Andreas
2000-01-01
The present invention is a method and apparatus for microscopic vibrational imaging using coherent anti-Stokes Raman scattering (CARS) or sum frequency generation (SFG). Microscopic imaging with vibrational spectroscopic contrast is achieved by generating signals in a nonlinear optical process and spatially resolved detection of the signals. The spatial resolution is attained by minimizing the spot size of the optical interrogation beams on the sample. Minimizing the spot size relies upon (a) directing at least two substantially co-axial laser beams (interrogation beams) through a microscope objective, providing a focal spot on the sample; (b) collecting a signal beam together with a residual beam from the at least two co-axial laser beams after passing through the sample; (c) removing the residual beam; and (d) detecting the signal beam, thereby creating said pixel. The method has significantly higher spatial resolution than IR microscopy and higher sensitivity than spontaneous Raman microscopy at much lower average excitation powers. CARS and SFG microscopy do not rely on the presence of fluorophores, but retain the resolution and three-dimensional sectioning capability of confocal and two-photon fluorescence microscopy. Complementary to these techniques, CARS and SFG microscopy provide a contrast mechanism based on vibrational spectroscopy. This vibrational contrast mechanism, combined with unprecedented sensitivity at a tolerable laser power level, provides a new approach for microscopic investigations of chemical and biological samples.
Development of an imaging method for quantifying a large digital PCR droplet
NASA Astrophysics Data System (ADS)
Huang, Jen-Yu; Lee, Shu-Sheng; Hsu, Yu-Hsiang
2017-02-01
Portable devices have been recognized as the future link between end-users and lab-on-a-chip devices. They offer user-friendly interfaces and provide apps that interface with headphones, cameras, communication channels, etc. In particular, the cameras installed in smartphones and tablets already offer high imaging resolution with a large number of pixels. This unique feature has prompted research into integrating optical fixtures with smartphones to provide microscopic imaging capabilities. In this paper, we report our study on developing a portable diagnostic tool based on the imaging system of a smartphone and a digital PCR biochip. A computational algorithm is developed to process optical images of a digital PCR biochip taken with a smartphone in a black box. Each reaction droplet is recorded in pixels and analyzed in the sRGB (red, green, and blue) color space. A multistep filtering algorithm and an auto-threshold algorithm are adopted to minimize background noise contributed by CCD cameras and to rule out false-positive droplets, respectively. Finally, a size-filtering method is applied to count the positive droplets and quantify the target's concentration. Statistical analysis is then performed for diagnostic purposes. This process can be integrated into an app, providing a user-friendly interface that requires no professional training.
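The threshold-then-size-filter pipeline described above can be sketched as follows. This is a minimal stand-in, not the paper's implementation: the fixed threshold, 4-connectivity flood-fill labelling, and size bounds are illustrative assumptions replacing the paper's multistep filtering and auto-threshold steps.

```python
def count_positive_droplets(img, thresh, min_px, max_px):
    """Count positive dPCR droplets in one colour channel.

    img: 2-D list of channel intensities. Pipeline sketch: threshold the
    channel, find connected blobs (4-connectivity flood fill), then
    size-filter to reject camera-noise specks and oversized merged
    regions. Threshold and size bounds are illustrative assumptions.
    """
    h, w = len(img), len(img[0])
    mask = [[v > thresh for v in row] for row in img]
    seen = [[False] * w for _ in range(h)]
    count = 0
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                stack, size = [(r, c)], 0
                seen[r][c] = True
                while stack:                       # flood-fill one blob
                    i, j = stack.pop()
                    size += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and mask[ni][nj] and not seen[ni][nj]:
                            seen[ni][nj] = True
                            stack.append((ni, nj))
                if min_px <= size <= max_px:       # size filter
                    count += 1
    return count
```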
Supervised pixel classification for segmenting geographic atrophy in fundus autofluorescence images
NASA Astrophysics Data System (ADS)
Hu, Zhihong; Medioni, Gerard G.; Hernandez, Matthias; Sadda, SriniVas R.
2014-03-01
Age-related macular degeneration (AMD) is the leading cause of blindness in people over the age of 65. Geographic atrophy (GA) is a manifestation of advanced or late-stage AMD, which may result in severe vision loss and blindness. Techniques to rapidly and precisely detect and quantify GA lesions would be of considerable value in advancing the understanding of the pathogenesis of GA and the management of GA progression. The purpose of this study is to develop an automated supervised pixel classification approach for segmenting GA, including uni-focal and multi-focal patches, in fundus autofluorescence (FAF) images. The image features include region-wise intensity measures (mean and variance), gray-level co-occurrence matrix measures (angular second moment, entropy, and inverse difference moment), and Gaussian filter banks. A k-nearest-neighbor (k-NN) pixel classifier is applied to obtain a GA probability map representing the likelihood that each image pixel belongs to GA. A voting binary iterative hole-filling filter is then applied to fill in small holes. Sixteen randomly chosen FAF images were obtained from sixteen subjects with GA. The algorithm-defined GA regions were compared with manual delineations performed by certified graders. Two-fold cross-validation was applied to evaluate the classification performance. The mean Dice similarity coefficients (DSC) between the algorithm- and manually-defined GA regions were 0.84 +/- 0.06 for one test and 0.83 +/- 0.07 for the other, and the area correlations between them were 0.99 (p < 0.05) and 0.94 (p < 0.05), respectively.
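The Dice similarity coefficient used for evaluation is straightforward to compute; a minimal sketch for flat binary masks:

```python
def dice_coefficient(seg, ref):
    """Dice similarity coefficient between two binary masks (flat lists).

    DSC = 2|A ∩ B| / (|A| + |B|); returns 1.0 when both masks are empty.
    """
    inter = sum(1 for s, r in zip(seg, ref) if s and r)
    denom = sum(map(bool, seg)) + sum(map(bool, ref))
    return 2.0 * inter / denom if denom else 1.0

print(dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0]))  # -> 0.5
```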
Regional shape-based feature space for segmenting biomedical images using neural networks
NASA Astrophysics Data System (ADS)
Sundaramoorthy, Gopal; Hoford, John D.; Hoffman, Eric A.
1993-07-01
In biomedical images, structures of interest, particularly soft-tissue structures such as the heart, airways, and bronchial and arterial trees, often have grey-scale and textural characteristics similar to other structures in the image, making it difficult to segment them using only grey-scale and texture information. However, these objects can be visually recognized by their unique shapes and sizes. In this paper we discuss what we believe to be a novel, simple scheme for extracting features based on regional shapes. To test the effectiveness of these features for image segmentation (classification), we use an artificial neural network and a statistical cluster analysis technique. The proposed shape-based feature extraction algorithm computes regional shape vectors (RSVs) for all pixels that meet a certain threshold criterion. The distance from each such pixel to a boundary is computed in 8 directions (or in 26 directions for a 3-D image). Together, these 8 (or 26) values represent the pixel's (or voxel's) RSV. All RSVs from an image are used to train a multi-layered perceptron neural network, which uses these features to 'learn' a suitable classification strategy. To clearly distinguish the desired object from other objects within an image, several examples from inside and outside the desired object are used for training. Several examples, both synthetic and actual biomedical images, are presented to illustrate the strengths and weaknesses of our algorithm. Future extensions to this algorithm are also discussed.
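The 8-direction regional shape vector can be sketched as below. The exact boundary convention (counting the steps that remain inside the thresholded object) is an assumption on our part; the abstract does not spell it out.

```python
DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def regional_shape_vector(mask, r, c):
    """Distances from pixel (r, c) to the object boundary in 8 directions.

    mask: 2-D list of booleans, True inside the thresholded object.
    Returns the 8-element RSV used as a per-pixel shape feature.
    """
    h, w = len(mask), len(mask[0])
    rsv = []
    for dr, dc in DIRECTIONS:
        d, rr, cc = 0, r + dr, c + dc
        # walk outward until leaving the object or the image
        while 0 <= rr < h and 0 <= cc < w and mask[rr][cc]:
            d += 1
            rr += dr
            cc += dc
        rsv.append(d)
    return rsv
```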
NASA Astrophysics Data System (ADS)
Ray, Aniruddha; Ho, Ha; Daloglu, Mustafa; Torres, Avee; McLeod, Euan; Ozcan, Aydogan
2017-03-01
Herpes is one of the most widespread sexually transmitted viral diseases. Timely detection of Herpes Simplex Virus (HSV) can help prevent its rampant spread. Current detection techniques, such as viral culture, immunoassays or polymerase chain reaction (PCR), are time-intensive and require expert handling. Here we present a field-portable, easy-to-use, and cost-effective biosensor for the detection of HSV based on holographic imaging. The virus is first captured from a target solution onto specifically developed substrates, prepared by coating glass coverslips with HSV-specific antibodies, and imaged using a lensfree holographic microscope. Several light-emitting diodes (LEDs), coupled to multi-mode optical fibers, are used to illuminate the sample containing the viruses. A micro-controller activates the LEDs one at a time, and in-line holograms are recorded using a CMOS imager placed immediately above the substrate. These sub-pixel shifted holograms are used to generate a super-resolved hologram, which is reconstructed to obtain the phase and amplitude images of the viruses. The signal of the viruses is enhanced using self-assembled PEG-based nanolenses formed around the viral particles. Based on the phase information of the reconstructed images we can estimate the size of the viral particles, with an accuracy of ±11 nm, as well as quantify the viral load. The limit of detection of this system is estimated to be <500 viral copies per 100 μL sample volume, imaged over a 30 mm^2 field-of-view. This holographic microscopy based biosensor is label-free, cost-effective and field-portable, providing results in 2 hours, including sample preparation and imaging time.
Assessing Mesoscale Volcanic Aviation Hazards using ASTER
NASA Astrophysics Data System (ADS)
Pieri, D.; Gubbels, T.; Hufford, G.; Olsson, P.; Realmuto, V.
2006-12-01
The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) onboard the NASA Terra spacecraft is a joint project of the Japanese Ministry for Economy, Trade, and Industry (METI) and NASA. ASTER has acquired over one million multi-spectral 60 km by 60 km images of the earth over the last six years. It consists of three sub-instruments: (a) a four-channel VNIR (0.52-0.86 um) imager with a spatial resolution of 15 m/pixel, including three nadir-viewing bands (1N, 2N, 3N) and one repeated rear-viewing band (3B) for stereo-photogrammetric terrain reconstruction (8-12 m vertical resolution); (b) a SWIR (1.6-2.43 um) imager with six bands at 30 m/pixel; and (c) a TIR (8.125-11.65 um) instrument with five bands at 90 m/pixel. Returned data are processed in Japan at the Earth Remote Sensing Data Analysis Center (ERSDAC) and at the Land Processes Distributed Active Archive Center (LP DAAC), located at the USGS Center for Earth Resource Observation and Science (EROS) in Sioux Falls, South Dakota. Within the ASTER Project, the JPL Volcano Data Acquisition and Analyses System (VDAAS) houses over 60,000 ASTER images of 1542 volcanoes worldwide and will be accessible for downloads by the general public and for on-line image analyses by researchers in early 2007. VDAAS multi-spectral thermal infrared (TIR) de-correlation stretch products are optimized for volcanic ash detection and have a spatial resolution of 90 m/pixel. Digital elevation models (DEMs) stereo-photogrammetrically derived from ASTER Band 3B/3N data are also available within VDAAS at 15 and 30 m/pixel horizontal resolution. Thus, ASTER visible, IR, and DEM data at 15-100 m/pixel resolution within VDAAS can be combined to provide useful boundary conditions on local volcanic eruption plume location, composition, and altitude, as well as on the topography of the underlying terrain.
During and after eruptions, low-altitude winds and ash transport can be affected by topography, and other orographic thermal and water vapor transport effects from the micro (<1km) to mesoscale (1-100km). Such phenomena are thus well-observed by ASTER and pose transient and severe hazards to aircraft operating in and out of airports near volcanoes (e.g., Anchorage, AK, USA; Catania, Italy; Kagoshima City, Japan). ASTER image data and derived products provide boundary conditions for 3D mesoscale atmospheric transport and chemistry models (e.g., RAMS) for retrospective and prospective studies of volcanic aerosol transport at low altitudes in takeoff and landing corridors near active volcanoes. Putative ASTER direct downlinks in the future could provide real-time mitigation of such hazards. Some examples of mesoscale analyses for threatened airspace near US and non-US airports will be shown. This work was, in part, carried out at the Jet Propulsion Laboratory of the California Institute of Technology under contract to the NASA Earth Science Research Program and as part of ASTER Science Team activities.
Color lensless digital holographic microscopy with micrometer resolution.
Garcia-Sucerquia, Jorge
2012-05-15
Color digital lensless holographic microscopy with micrometer resolution is presented. Multiwavelength illumination of a biological sample and a posteriori color composition of the amplitude images individually reconstructed are used to obtain full-color representation of the microscopic specimen. To match the sizes of the reconstructed holograms for each wavelength, a reconstruction algorithm that allows for choosing the pixel size at the reconstruction plane independently of the wavelength and the reconstruction distance is used. The method is illustrated with experimental results.
Haney, C R; Fan, X; Markiewicz, E; Mustafi, D; Karczmar, G S; Stadler, W M
2013-02-01
Sorafenib is a multi-kinase inhibitor that blocks cell proliferation and angiogenesis. It is currently approved for advanced hepatocellular and renal cell carcinomas in humans, where its major mechanism of action is thought to be inhibition of vascular endothelial growth factor and platelet-derived growth factor receptors. The purpose of this study was to determine whether pixel-by-pixel analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is better able than region-of-interest analysis to capture the heterogeneous response to Sorafenib in a murine model of colorectal tumor xenografts. MRI was performed on a 9.4 T pre-clinical scanner on the initial treatment day. Either vehicle or drug was then gavaged daily (3 days) up to the final imaging session. Four days later, the mice were imaged again. The two-compartment model and reference tissue method of DCE-MRI were used to analyze the data. The results demonstrated that the contrast agent distribution rate constant (K(trans)) was significantly reduced (p < 0.005) at day 4 of Sorafenib treatment. In addition, the K(trans) of nearby muscle was also reduced after Sorafenib treatment. The pixel-by-pixel analysis was better able than region-of-interest analysis to capture the heterogeneity of the tumor and the decrease in K(trans) four days after treatment. For both methods, the volume of the extravascular extracellular space did not change significantly after treatment. These results confirm that parameters such as K(trans) could provide a non-invasive biomarker for assessing the response to anti-angiogenic therapies such as Sorafenib, but that the heterogeneity of response across a tumor requires a more detailed analysis than has typically been undertaken.
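The two-compartment (standard Tofts) model underlying the K(trans) estimates can be written as a convolution of the plasma input with an exponential efflux kernel. A discrete sketch, assuming uniformly spaced time points; the paper's actual per-pixel fitting and reference tissue method are not reproduced here.

```python
import math

def tofts_tissue_conc(times, cp, ktrans, ve):
    """Tissue contrast-agent concentration under the standard Tofts model:

        Ct(t) = Ktrans * integral_0^t Cp(u) * exp(-kep * (t - u)) du,

    with efflux rate kep = Ktrans / ve. Left-Riemann discretization;
    times must be uniformly spaced, cp is the plasma concentration curve.
    """
    dt = times[1] - times[0]
    kep = ktrans / ve
    ct = []
    for n, t in enumerate(times):
        # discrete convolution of Cp with the exponential kernel
        integral = sum(cp[k] * math.exp(-kep * (t - times[k]))
                       for k in range(n + 1))
        ct.append(ktrans * integral * dt)
    return ct
```

For a constant input Cp, the model reduces analytically to Ct(t) = ve·Cp·(1 − exp(−kep·t)), which the discretization approaches as dt shrinks.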
Scattered light in a DMD based multi-object spectrometer
NASA Astrophysics Data System (ADS)
Fourspring, Kenneth D.; Ninkov, Zoran; Kerekes, John P.
2010-07-01
The DMD (Digital Micromirror Device) has an important future in both ground- and space-based multi-object spectrometers. A series of laboratory measurements have been performed to determine the scattered-light properties of a DMD. The DMD under test had a 17 μm pitch and a 1 μm gap between adjacent mirrors. Prior characterization of this device has focused on its use in DLP (TI Digital Light Processing) projector applications, in which a whole pixel is illuminated by a uniform collimated source. The purpose of these measurements is to determine the limiting signal-to-noise ratio when utilizing the DMD as a slit mask in a spectrometer. The DMD pixel was determined to scatter more around the pixel edge and central via, indicating the importance of matching the telescope point spread function to the DMD. Also, the generation of DMD tested here was determined to have significant mirror curvature. A maximum contrast ratio was determined at several wavelengths. Further measurements are underway on a newer-generation DMD device, which has a smaller mirror pitch and likely different scatter characteristics. A previously constructed instrument, RITMOS (RIT Multi-Object Spectrometer), will be used to validate these scatter models and signal-to-noise ratio predictions through imaging a star field.
MicrOmega: a VIS/NIR hyperspectral microscope for in situ analysis in space
NASA Astrophysics Data System (ADS)
Leroi, V.; Bibring, J. P.; Berthé, M.
2008-07-01
MicrOmega is an ultra-miniaturized spectral microscope for in situ analysis of samples. It is composed of two microscopes: one with a spatial sampling of 5 μm working in four colours in the visible range, and one NIR hyperspectral microscope covering the spectral range 0.9-4 μm with a spatial sampling of 20 μm per pixel (described in this paper). MicrOmega/NIR illuminates and images samples a few mm in size and acquires the NIR spectrum of each resolved pixel in up to 600 contiguous spectral channels. The goal of this instrument is to analyse in situ the composition of collected samples at almost their grain-size scale, in a non-destructive way. It should be among the first set of instruments that will analyse the sample and enable other complementary analyses to be performed on it. With the spectral range and resolution chosen, a wide variety of constituents can be identified: minerals such as pyroxene and olivine, ferric oxides, hydrated phyllosilicates, sulfates and carbonates; ices; and organics. The composition of the various phases within a given sample is a critical record of its formation and evolution. Coupled with the mapping information, it provides unique clues for describing the history of the parent body. In particular, the capability to identify hydrated grains and to characterize their adjacent phases has huge potential in the search for potential bio-relics. We present the major instrumental principles and specifications of MicrOmega/NIR, and its expected performance, in particular for the ESA/ExoMars mission.
Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu
2018-03-02
Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
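At its core, per-pixel recovery in multi-spectral photometric stereo amounts to inverting a 3×3 system that tangles illumination, albedo and camera response, as the abstract notes. A sketch for a single pixel, assuming that combined matrix L is known (the paper instead bootstraps the estimate with a CNN); the example values are illustrative.

```python
def normal_from_rgb(pixel, L):
    """Recover a unit surface normal from one RGB pixel.

    Assumes a known 3x3 matrix L that lumps light direction, albedo and
    camera response per channel, so that L @ n = I (the pixel's RGB
    intensities). Solves the system by Cramer's rule.
    """
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(L)
    n = []
    for col in range(3):
        m = [row[:] for row in L]
        for row in range(3):
            m[row][col] = pixel[row]   # replace one column with I (Cramer)
        n.append(det3(m) / d)
    norm = sum(x * x for x in n) ** 0.5 or 1.0
    return tuple(x / norm for x in n)   # unit-length surface normal
```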
NASA Astrophysics Data System (ADS)
Khan, Faisal; Enzmann, Frieder; Kersten, Michael
2016-03-01
Image processing of X-ray-computed polychromatic cone-beam micro-tomography (μXCT) data of geological samples mainly involves artefact reduction and phase segmentation. For the former, the main beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. Matlab code for this approach is provided in the Appendix. The final BH-corrected image is extracted from the residual data or from the difference between the surface elevation values and the original grey-scale values. For the segmentation, we propose a novel least-squares support vector machine (LS-SVM, an algorithm for pixel-based multi-phase classification) approach. A receiver operating characteristic (ROC) analysis performed on BH-corrected and uncorrected samples showed that BH correction is in fact an important prerequisite for accurate multi-phase classification. The combination of the two approaches was then used to successfully classify three multi-phase rock core samples of varying complexity.
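The best-fit quadratic surface step can be sketched in plain Python via the normal equations. The paper provides Matlab code in its Appendix; this re-implementation, and the exact surface model z = a + bx + cy + dx² + exy + fy², are assumptions on our part.

```python
def remove_beam_hardening(img):
    """Fit a best-fit quadratic surface to a reconstructed slice and
    return the residual as the BH-corrected image.

    img: 2-D list of grey values. Fits z = a + bx + cy + dx^2 + exy + fy^2
    by least squares (normal equations + Gaussian elimination) and
    subtracts the fitted surface.
    """
    h, w = len(img), len(img[0])
    rows, rhs = [], []
    for y in range(h):
        for x in range(w):
            rows.append([1.0, x, y, x * x, x * y, y * y])
            rhs.append(img[y][x])
    # normal equations: (A^T A) c = A^T b
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(6)] for i in range(6)]
    atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(6)]
    # Gaussian elimination with partial pivoting
    for i in range(6):
        p = max(range(i, 6), key=lambda k: abs(ata[k][i]))
        ata[i], ata[p] = ata[p], ata[i]
        atb[i], atb[p] = atb[p], atb[i]
        for k in range(i + 1, 6):
            f = ata[k][i] / ata[i][i]
            for j in range(i, 6):
                ata[k][j] -= f * ata[i][j]
            atb[k] -= f * atb[i]
    coef = [0.0] * 6
    for i in range(5, -1, -1):
        coef[i] = (atb[i] - sum(ata[i][j] * coef[j]
                                for j in range(i + 1, 6))) / ata[i][i]
    a, b, c, d, e, f = coef
    # residual = original minus fitted surface
    return [[img[y][x] - (a + b * x + c * y + d * x * x + e * x * y + f * y * y)
             for x in range(w)] for y in range(h)]
```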
Shamwell, E Jared; Nothwang, William D; Perlis, Donald
2018-05-04
Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76-357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1-20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.
NASA Astrophysics Data System (ADS)
van Roosmalen, Jarno; Beekman, Freek J.; Goorden, Marlies C.
2018-01-01
Imaging of 99mTc-labelled tracers is gaining popularity for detecting breast tumours. Recently, we proposed a novel design for molecular breast tomosynthesis (MBT) based on two sliding focusing multi-pinhole collimators that scan a modestly compressed breast. Simulation studies indicate that MBT has the potential to improve the tumour-to-background contrast-to-noise ratio significantly over state-of-the-art planar molecular breast imaging. The aim of the present paper is to optimize the collimator-detector geometry of MBT. Using analytical models, we first optimized sensitivity at different fixed system resolutions (ranging from 5 to 12 mm) by tuning the pinhole diameters and the distance between breast and detector for a whole series of automatically generated multi-pinhole designs. We evaluated both MBT with a conventional continuous crystal detector with 3.2 mm intrinsic resolution and with a pixelated detector with 1.6 mm pixels. Subsequently, full system simulations of a breast phantom containing several lesions were performed for the optimized geometry at each system resolution for both types of detector. From these simulations, we found that tumour-to-background contrast-to-noise ratio was highest for systems in the 7 mm-10 mm system resolution range over which it hardly varied. No significant differences between the two detector types were found.
NASA Astrophysics Data System (ADS)
Hashimoto, M.; Takenaka, H.; Higurashi, A.; Nakajima, T.
2017-12-01
Aerosol in the atmosphere is an important constituent in determining the earth's radiation budget, so accurate aerosol retrieval from satellites is useful. We have developed a satellite remote sensing algorithm to retrieve aerosol optical properties using multi-wavelength and multi-pixel information from satellite imagers (MWPM). The method simultaneously derives aerosol optical properties, such as aerosol optical thickness (AOT), single scattering albedo (SSA) and aerosol size information, by using multiple wavelengths (multi-wavelength) and the spatial differences of surface reflectance (multi-pixel). The method is useful for aerosol retrieval over spatially heterogeneous surfaces such as urban regions. In this algorithm, the inversion method combines an optimal method with a smoothing constraint on the state vector. Furthermore, the method has been combined with direct radiative transfer model (RTM) calculations, solved numerically at each iteration step of the non-linear inverse problem without using a look-up table (LUT), under several constraints. However, this takes too much computation time. To accelerate the calculation, we replaced the RTM with an accelerated RTM solver learned by a neural-network-based method, EXAM (Takenaka et al., 2011), using the Rstar code. The calculation time was thereby shortened to about one thousandth. We applied MWPM combined with EXAM to GOSAT/TANSO-CAI (Cloud and Aerosol Imager). CAI is a supplementary sensor to TANSO-FTS, dedicated to measuring cloud and aerosol properties. CAI has four bands at 380, 674, 870 and 1600 nm, and observes at 500 m resolution for bands 1, 2 and 3, and at 1.5 km for band 4. Retrieved parameters are aerosol optical properties, such as the aerosol optical thickness (AOT) of fine- and coarse-mode particles at a wavelength of 500 nm, the volume soot fraction in fine-mode particles, and the ground surface albedo at each observed wavelength, obtained by combining a minimum reflectance method with Fukuda et al. (2013).
We will show the results and discuss the accuracy of the algorithm for various surface types. Our future work is to extend the algorithm for analysis of GOSAT-2/TANSO-CAI-2 and GCOM/C-SGLI data.
Planarity constrained multi-view depth map reconstruction for urban scenes
NASA Astrophysics Data System (ADS)
Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie
2018-05-01
Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, challenges arise when this technique is applied to urban scenes, where man-made regular shapes are prominent. To address this, this paper proposes a planarity constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes are first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. (Vertical) aerial images, oblique (aerial) images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that PMVD outperforms popular multi-view depth map reconstruction, with twice the accuracy on the aerial datasets, and achieves an outcome comparable to the state-of-the-art for ground images. As expected, PMVD is able to preserve the planarity of piecewise-flat structures in urban scenes and restore the edges in depth-discontinuous areas.
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has improved drastically in response to the demand for high-quality digital images; digital still cameras, for example, offer several megapixels. Although video cameras have higher frame rates, their resolution is lower than that of still cameras. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. Common multi-CCD cameras, such as 3CCD color cameras, use identical CCDs to capture different spectral information. Our approach instead uses sensors of different spatio-temporal resolution in a single camera cabinet to capture high-resolution and high-frame-rate information separately. We built a prototype camera that captures high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos, and we propose a calibration method for the camera. As one application, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos to show the camera's utility.
Hard x-ray phase contrast microscopy - techniques and applications
NASA Astrophysics Data System (ADS)
Holzner, Christian
In 1918, Einstein provided the first description of the nature of the refractive index for X-rays, showing that phase contrast effects are significant. A century later, most x-ray microscopy and nearly all medical imaging remains based on absorption contrast, even though phase contrast offers orders of magnitude improvements in contrast and reduced radiation exposure at multi-keV x-ray energies. The work presented is concerned with developing practical and quantitative methods of phase contrast for x-ray microscopy. A theoretical framework for imaging in phase contrast is put forward; this is used to obtain quantitative images in a scanning microscope using a segmented detector, and to correct for artifacts in a commercial phase contrast x-ray nano-tomography system. The principle of reciprocity between scanning and full-field microscopes is then used to arrive at a novel solution: Zernike contrast in a scanning microscope. These approaches are compared on a theoretical and experimental basis in direct connection with applications using multi-keV x-ray microscopes at the Advanced Photon Source at Argonne National Laboratory. Phase contrast provides the best means to image mass and ultrastructure of light elements that mainly constitute biological matter, while stimulated x-ray fluorescence provides high sensitivity for studies of the distribution of heavier trace elements, such as metals. These approaches are combined in a complementary way to yield quantitative maps of elemental concentration from 2D images, with elements placed in their ultrastructural context. The combination of x-ray fluorescence and phase contrast poses an ideal match for routine, high resolution tomographic imaging of biological samples in the future. The presented techniques and demonstration experiments will help pave the way for this development.
Rzeczycki, Phillip; Yoon, Gi Sang; Keswani, Rahul K.; Sud, Sudha; Stringer, Kathleen A.; Rosania, Gus R.
2017-01-01
Following prolonged administration, certain orally bioavailable but poorly soluble small molecule drugs are prone to precipitate out and form crystal-like drug inclusions (CLDIs) within the cells of living organisms. In this research, we present a quantitative multi-parameter imaging platform for measuring the fluorescence and polarization diattenuation signals of cells harboring intracellular CLDIs. To validate the imaging system, the FDA-approved drug clofazimine (CFZ) was used as a model compound. Our results demonstrated that a quantitative multi-parameter microscopy image analysis platform can be used to study drug sequestering macrophages, and to detect the formation of ordered molecular aggregates formed by poorly soluble small molecule drugs in animals. PMID:28270989
Study on polarized optical flow algorithm for imaging bionic polarization navigation micro sensor
NASA Astrophysics Data System (ADS)
Guan, Le; Liu, Sheng; Li, Shi-qi; Lin, Wei; Zhai, Li-yuan; Chu, Jin-kui
2018-05-01
At present, both point-source and imaging polarization navigation devices can output only angle information, meaning that the velocity of the carrier cannot be extracted directly from the polarization field pattern. Optical flow is an image-based method for calculating the velocity of pixel movement in an image. However, for ordinary optical flow, differences in pixel values, and thus the calculation accuracy, are reduced in weak light. Polarization imaging can improve both the detection accuracy and the recognition probability of a target because it acquires extra multi-dimensional polarization information from the target's radiation or reflection. In this paper, combining the polarization imaging technique with the traditional optical flow algorithm, a polarized optical flow algorithm is proposed; we verify that it adapts well to weak light and can broaden the application range of polarization navigation sensors. This research lays the foundation for day-and-night, all-weather polarization navigation applications in the future.
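As a rough illustration of the optical-flow half of this idea, a minimal single-patch Lucas-Kanade estimate is sketched below; in the paper's setting it would be fed polarization-derived images (e.g. degree of linear polarization) rather than raw intensity. The function and its inputs are illustrative assumptions, not the proposed algorithm:

```python
import numpy as np

# Minimal Lucas-Kanade flow for one patch: solve I_x*u + I_y*v = -I_t in a
# least-squares sense over all patch pixels. `img1`/`img2` are consecutive
# frames (here they could be polarization images); names are illustrative.
def lucas_kanade_patch(img1, img2):
    Ix = np.gradient(img1, axis=1)          # horizontal spatial gradient
    Iy = np.gradient(img1, axis=0)          # vertical spatial gradient
    It = img2 - img1                        # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow                             # (u, v) displacement in pixels
```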
NASA Astrophysics Data System (ADS)
Luukanen, A.; Grönberg, L.; Helistö, P.; Penttilä, J. S.; Seppä, H.; Sipola, H.; Dietlein, C. R.; Grossman, E. N.
2006-05-01
The temperature resolving power (NETD) of millimeter wave imagers based on InP HEMT MMIC radiometers is typically about 1 K (30 ms), but MMIC technology is limited to operating frequencies below ~150 GHz. In this paper we report the first results from a pixel developed for an eight-pixel sub-array of superconducting antenna-coupled microbolometers, a first step towards a real-time imaging system, with frequency coverage of 0.2-3.6 THz. These detectors have demonstrated video-rate NETDs in the millikelvin range, close to the fundamental photon noise limit, when operated at a bath temperature of ~4 K. The detectors will be operated within a turn-key cryogen-free pulse tube refrigerator, which allows for continuous operation without the need for liquid cryogens. The outstanding frequency agility of bolometric detectors allows for multi-frequency imaging, which greatly enhances the discrimination of e.g. explosives against innocuous items concealed underneath clothing.
Fuzzy entropy thresholding and multi-scale morphological approach for microscopic image enhancement
NASA Astrophysics Data System (ADS)
Zhou, Jiancan; Li, Yuexiang; Shen, Linlin
2017-07-01
Microscopic images provide much useful information for modern diagnosis and biological research. However, due to unstable lighting conditions during image capture, two main problems, high-level noise and low contrast, occur in the resulting cell images. In this paper, a simple but efficient enhancement framework is proposed to address these problems. The framework removes image noise using a hybrid method based on the wavelet transform and fuzzy entropy, and enhances image contrast with an adaptive morphological approach. Experiments on a real cell dataset were conducted to assess the performance of the proposed framework. The experimental results demonstrate that our enhancement framework increases cell tracking accuracy to an average of 74.49%, outperforming the benchmark algorithm (46.18%).
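A common morphological route to the kind of contrast enhancement described is sketched below with fixed-size white and black top-hat transforms; the paper's adaptive multi-scale variant is not reproduced, and the structuring-element size is an illustrative assumption:

```python
import numpy as np
from scipy import ndimage

# Top-hat contrast enhancement: add bright details smaller than the
# structuring element (white top-hat) and subtract small dark details
# (black top-hat). A generic sketch, not the paper's adaptive method.
def tophat_enhance(img, size=5):
    opened = ndimage.grey_opening(img, size=size)
    closed = ndimage.grey_closing(img, size=size)
    white = img - opened      # small bright structures
    black = closed - img      # small dark structures
    return img + white - black
```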
Hadwiger, M; Beyer, J; Jeong, Won-Ki; Pfister, H
2012-12-01
This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
NASA Technical Reports Server (NTRS)
Mazzoni, Dominic; Wagstaff, Kiri; Bornstein, Benjamin; Tang, Nghia; Roden, Joseph
2006-01-01
PixelLearn is an integrated user-interface computer program for classifying pixels in scientific images. Heretofore, training a machine-learning algorithm to classify pixels in images has been tedious and difficult. PixelLearn provides a graphical user interface that makes the process faster and more intuitive, leading to more interactive exploration of image data sets. PixelLearn also provides image-enhancement controls to make it easier to see subtle details in images. PixelLearn opens images or sets of images in a variety of common scientific file formats and enables the user to interact with several supervised or unsupervised machine-learning pixel-classifying algorithms while continuing to browse through the images. The machine-learning algorithms in PixelLearn use advanced clustering and classification methods that enable accuracy much higher than that achievable with most other software previously available for this purpose. PixelLearn is written in portable C++ and runs natively on computers running Linux, Windows, or Mac OS X.
The Athena Microscopic Imager on the Mars Exploration Rovers
NASA Astrophysics Data System (ADS)
Herkenhoff, K. E.; Squyres, S. W.; Bell, J. F.; Maki, J. N.; Schwochert, M. A.
2002-12-01
The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on the end of the Instrument Deployment Device (IDD). The MI was designed to acquire images at a spatial resolution of 30 microns/pixel over a broad spectral range (400-700 nm). Technically speaking, the ''microscopic'' imager is not a microscope: it has a fixed magnification of 0.4, and is intended to produce images that simulate a geologist's view when using a common hand lens. The MI uses the same electronics design as the other MER cameras, but has optics that yield a field of view of 31 x 31 mm. The MI will acquire images using only solar or skylight illumination of the target surface. A contact sensor will be used to place the MI slightly closer to the target surface than its best focus distance (about 66 mm), allowing concave surfaces to be imaged in good focus. Because the MI has a relatively small depth of field (+/- 3 mm), a single MI image of a rough surface will contain both focused and unfocused areas. Coarse (~2 mm precision) focusing will be achieved by moving the IDD away from a target after the contact sensor is activated. Multiple images taken at various distances will be acquired to ensure good focus on all parts of rough surfaces. By combining a set of images acquired in this way, a completely focused image will be assembled. The MI optics will be protected from the martian environment by a dust cover. The dust cover includes a polycarbonate window that is tinted yellow to restrict the spectral bandpass to 500-700 nm and allow color information to be obtained by taking images with the dust cover open and closed. The MI will be used to image the same materials measured by other Athena instruments, as well as targets of opportunity (before rover traverses). The resulting images will be used to place other instrumental data in context and to aid in the petrologic interpretation of rocks and soils on Mars.
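The focus-merging step described at the end can be illustrated with a generic focus-stacking sketch: per pixel, keep the value from the frame with the highest local sharpness. This is a standard technique shown for illustration under assumed names, not the MER processing pipeline:

```python
import numpy as np
from scipy import ndimage

# Generic focus stacking: measure per-pixel sharpness as the locally
# averaged squared Laplacian, then take each pixel from the sharpest frame.
def focus_stack(images, size=5):
    stack = np.stack(images)
    sharp = np.stack([
        ndimage.uniform_filter(ndimage.laplace(im.astype(float)) ** 2, size)
        for im in images
    ])
    best = np.argmax(sharp, axis=0)               # sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```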
Removal of anti-Stokes emission background in STED microscopy by FPGA-based synchronous detection
NASA Astrophysics Data System (ADS)
Castello, M.; Tortarolo, G.; Coto Hernández, I.; Deguchi, T.; Diaspro, A.; Vicidomini, G.
2017-05-01
In stimulated emission depletion (STED) microscopy, the role of the STED beam is to de-excite, via stimulated emission, the fluorophores that have been previously excited by the excitation beam. This condition, together with specific beam intensity distributions, allows obtaining true sub-diffraction spatial resolution images. However, if the STED beam has a non-negligible probability to excite the fluorophores, a strong fluorescent background signal (anti-Stokes emission) reduces the effective resolution. For STED scanning microscopy, different synchronous detection methods have been proposed to remove this anti-Stokes emission background and recover the resolution. However, every method works only for a specific STED microscopy implementation. Here we present a user-friendly synchronous detection method compatible with any STED scanning microscope. It exploits a data acquisition (DAQ) card based on a field-programmable gate array (FPGA), which is progressively used in STED microscopy. In essence, the FPGA-based DAQ card synchronizes the fluorescent signal registration, the beam deflection, and the excitation beam interruption, providing a fully automatic pixel-by-pixel synchronous detection method. We validate the proposed method in both continuous wave and pulsed STED microscope systems.
Zhao, Ming; Li, Yu; Peng, Leilei
2014-01-01
We present a novel excitation-emission multiplexed fluorescence lifetime microscopy (FLIM) method that surpasses current FLIM techniques in multiplexing capability. The method employs Fourier multiplexing to simultaneously acquire confocal fluorescence lifetime images of multiple excitation wavelength and emission color combinations at 44,000 pixels/sec. The system is built with low-cost CW laser sources and standard PMTs with a versatile spectral configuration, and can be implemented as an add-on to commercial confocal microscopes. The Fourier lifetime confocal method allows fast multiplexed FLIM imaging, which makes it possible to monitor multiple biological processes in live cells. The low cost and compatibility with commercial systems could also make multiplexed FLIM more accessible to the biological research community. PMID:24921725
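For background, frequency-domain FLIM recovers a single-exponential lifetime from the phase delay of the modulated fluorescence via tau = tan(phi) / (2*pi*f). A minimal sketch of that phase estimate is shown below; it is illustrative background, not the paper's multiplexed Fourier scheme:

```python
import numpy as np

# Phase-based lifetime estimate for a signal modulated at f_mod, assuming a
# single-exponential decay: project onto the modulation frequency, read the
# phase, convert to lifetime. Uniform sampling over whole periods assumed.
def phase_lifetime(signal, t, f_mod):
    w = 2 * np.pi * f_mod
    g = np.dot(signal, np.cos(w * t))   # in-phase component
    s = np.dot(signal, np.sin(w * t))   # quadrature component
    phi = np.arctan2(s, g)              # phase delay of the fluorescence
    return np.tan(phi) / w              # tau = tan(phi) / (2*pi*f)
```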
Image-based red cell counting for wild animals blood.
Mauricio, Claudio R M; Schneider, Fabio K; Dos Santos, Leonilda Correia
2010-01-01
An image-based red blood cell (RBC) automatic counting system is presented for wild animal blood analysis. Images with 2048×1536-pixel resolution, acquired on an optical microscope using Neubauer chambers, are used to evaluate RBC counting for three animal species (Leopardus pardalis, Cebus apella and Nasua nasua); the error of the proposed method is similar to that obtained with the inter-observer visual counting method, i.e., around 10%. Smaller errors (e.g., 3%) can be obtained in regions with fewer grid artifacts. These promising results allow the proposed method to be used either as a fully automatic counting tool in wild animal blood analysis laboratories or as a first counting stage in a semi-automatic counting tool.
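The core of such a counter can be sketched as threshold-and-label with a minimum-area filter; a real system additionally handles the Neubauer grid and touching cells, which this toy version does not. Names and the area threshold are illustrative:

```python
import numpy as np
from scipy import ndimage

# Toy cell counter: binarize, label connected components, and discard
# blobs smaller than `min_area` pixels (likely artifacts, not cells).
def count_cells(img, threshold, min_area=4):
    mask = img > threshold
    labels, n = ndimage.label(mask)
    areas = np.bincount(labels.ravel())[1:]   # pixel count per labelled blob
    return int(np.sum(areas >= min_area))
```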
Detection of Multi-Layer and Vertically-Extended Clouds Using A-Train Sensors
NASA Technical Reports Server (NTRS)
Joiner, J.; Vasilkov, A. P.; Bhartia, P. K.; Wind, G.; Platnick, S.; Menzel, W. P.
2010-01-01
The detection of multiple cloud layers using satellite observations is important for retrieval algorithms as well as climate applications. In this paper, we describe a relatively simple algorithm to detect multiple cloud layers and distinguish them from vertically-extended clouds. The algorithm can be applied to coincident passive sensors that derive both cloud-top pressure from thermal infrared observations and an estimate of solar photon pathlength from UV, visible, or near-IR measurements. Here, we use data from the A-train afternoon constellation of satellites: cloud-top pressure, cloud optical thickness, the multi-layer flag from the Aqua MODerate-resolution Imaging Spectroradiometer (MODIS), and the optical centroid cloud pressure from the Aura Ozone Monitoring Instrument (OMI). For the first time, we use data from the CloudSat radar to evaluate the results of a multi-layer cloud detection scheme. The cloud classification algorithms applied with different passive sensor configurations compare well with each other as well as with data from CloudSat. We compute monthly mean fractions of pixels containing multi-layer and vertically-extended clouds for January and July 2007 at the OMI spatial resolution (12 km × 24 km at nadir) and at the 5 km × 5 km MODIS resolution used for infrared cloud retrievals. There are seasonal variations in the spatial distribution of the different cloud types. The fraction of cloudy pixels containing distinct multi-layer cloud is a strong function of the pixel size. Globally averaged, these fractions are approximately 20% and 10% for OMI and MODIS, respectively. These fractions may be significantly higher or lower depending upon location. There is a much smaller resolution dependence for the fractions of pixels containing vertically-extended clouds (approximately 20% for OMI and slightly less for MODIS globally), suggesting larger spatial scales for these clouds.
We also find higher fractions of vertically-extended clouds over land as compared with ocean, particularly in the tropics and summer hemisphere.
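The underlying idea, that a large gap between the thermal-IR cloud-top pressure and the photon-path-derived optical centroid pressure signals multi-layer or vertically-extended cloud, can be sketched per pixel as below. The 200 hPa threshold and function names are illustrative assumptions, not the paper's tuned scheme:

```python
# Illustrative per-pixel classifier: an optical centroid pressure much
# deeper in the atmosphere (higher pressure) than the IR cloud top implies
# long photon paths below the top layer, i.e. multiple or deep cloud.
def classify_column(cloud_top_hpa, optical_centroid_hpa, threshold_hpa=200.0):
    if optical_centroid_hpa - cloud_top_hpa > threshold_hpa:
        return "multi-layer or vertically extended"
    return "single-layer"
```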
High-Definition Infrared Spectroscopic Imaging
Reddy, Rohith K.; Walsh, Michael J.; Schulmerich, Matthew V.; Carney, P. Scott; Bhargava, Rohit
2013-01-01
The quality of images from an infrared (IR) microscope has traditionally been limited by considerations of throughput and signal-to-noise ratio (SNR). An understanding of the achievable quality as a function of instrument parameters, from first principles, is needed for improved instrument design. Here, we first present a model for light propagation through an IR spectroscopic imaging system based on scalar wave theory. The model analytically describes the propagation of light along the entire beam path from the source to the detector. The effect of the various optical elements and the sample in the microscope is understood in terms of the accessible spatial frequencies by using a Fourier optics approach, and simulations are conducted to gain insights into spectroscopic image formation. The optimal pixel size at the sample plane is calculated and shown to be much smaller than that in current mid-IR microscopy systems. A commercial imaging system is modified, and experimental data are presented to demonstrate the validity of the developed model. Building on this validated theoretical foundation, an optimal sampling configuration is set up. Acquired data were of high spatial quality but, as expected, of poorer SNR. Signal processing approaches were implemented to improve the spectral SNR. The resulting data demonstrate the ability to perform high-definition IR imaging in the laboratory using minimally-modified commercial instruments. PMID:23317676
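The claim about optimal pixel size follows from Nyquist sampling of the diffraction limit: the finest spatial period the optics pass is roughly lambda / (2·NA), so sample-plane pixels should be no larger than about lambda / (4·NA). A one-line helper capturing that simplified criterion (not the paper's full scalar-wave calculation):

```python
# Nyquist pixel size at the sample plane for wavelength `wavelength_um`
# (micrometres) and numerical aperture `na`: lambda / (4 * NA).
# A simplified diffraction-limit criterion, for illustration only.
def nyquist_pixel_um(wavelength_um, na):
    return wavelength_um / (4.0 * na)
```

For example, at a mid-IR wavelength of 10 µm and NA 0.5 this gives a 5 µm pixel, smaller than the pixel sizes of many conventional mid-IR imaging systems.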
NASA Astrophysics Data System (ADS)
Uygur, Merve; Karaman, Muhittin; Kumral, Mustafa
2016-04-01
Çürüksu (Denizli) Graben hosts various geothermal fields such as Kızıldere, Yenice, Gerali, Karahayıt, and Tekkehamam. Neotectonic activity caused by extensional tectonism, together with deep circulation in sub-volcanic intrusions, provides the heat sources of the hydrothermal solutions, whose temperatures range between 53 and 260 degrees Celsius. Phyllic, argillic, silicic, and carbonatization alterations and various hydrothermal minerals have been identified in research studies of these areas. Surface hydrothermal alteration minerals are one set of potential indicators of geothermal resources. Developing exploration tools to define the surface indicators of geothermal fields can assist in the recognition of geothermal resources. Thermal and hyperspectral imaging and analysis can be used to define these surface indicators. This study tests the hypothesis that hyperspectral image analysis based on EO-1 Hyperion images can be used for the delineation and definition of surface hydrothermal alteration in geothermal fields. Hyperspectral image analyses were applied to images covering geothermal fields whose alteration characteristics are known. To reduce data dimensionality and identify spectral endmembers, Kruse's multi-step process was applied to atmospherically and geometrically corrected hyperspectral images. Minimum Noise Fraction (MNF) transformation was used to reduce the spectral dimensions and isolate noise in the images. Extreme pixels were identified from high-order MNF bands using the Pixel Purity Index. n-Dimensional Visualization was utilized for unique pixel identification. Spectral similarities between pixel spectral signatures and known endmember spectra (USGS Spectral Library) were compared with Spectral Angle Mapper classification.
EO-1 Hyperion hyperspectral images and hyperspectral analysis are sensitive to hydrothermal alteration minerals, whose diagnostic spectral signatures span the visible and shortwave infrared ranges observed in geothermal fields. The hyperspectral analysis results indicated that kaolinite, smectite, illite, montmorillonite, and sepiolite minerals were distributed over a wide area covering the hot spring outlet. Rectorite, lizardite, richterite, dumortierite, nontronite, erionite, and clinoptilolite were observed occasionally.
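The Spectral Angle Mapper step can be sketched compactly: classify each pixel spectrum by its smallest angle to a set of reference endmember spectra. This is a generic SAM, not the exact toolchain used in the study:

```python
import numpy as np

# Spectral Angle Mapper: angle between each pixel spectrum and each
# endmember spectrum; assign the endmember with the smallest angle.
# `pixels` is (n_pixels, n_bands); `endmembers` is (n_classes, n_bands).
def sam_classify(pixels, endmembers):
    P = pixels / np.linalg.norm(pixels, axis=-1, keepdims=True)
    E = endmembers / np.linalg.norm(endmembers, axis=-1, keepdims=True)
    cos = np.clip(P @ E.T, -1.0, 1.0)
    angles = np.arccos(cos)                  # spectral angles in radians
    return np.argmin(angles, axis=-1), np.min(angles, axis=-1)
```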
Sub-pixel image classification for forest types in East Texas
NASA Astrophysics Data System (ADS)
Westbrook, Joey
Sub-pixel classification is the extraction of information about the proportion of individual materials of interest within a pixel. Landcover classification at the sub-pixel scale provides more discrimination than traditional per-pixel multispectral classifiers for pixels where the material of interest is mixed with other materials: it un-mixes pixels to show the proportion of each material of interest. The materials of interest for this study are pine, hardwood, mixed forest and non-forest. The goal of this project was to perform a sub-pixel classification, which allows a pixel to have multiple labels, and compare the result to a traditional supervised classification, which allows a pixel to have only one label. The satellite image used was a Landsat 5 Thematic Mapper (TM) scene of the Stephen F. Austin Experimental Forest in Nacogdoches County, Texas, and the four cover type classes were pine, hardwood, mixed forest and non-forest. Once classified, a multi-layer raster dataset was created comprising four raster layers, where each layer showed the percentage of that cover type within the pixel area. Percentage cover type maps were then produced, and the accuracy of each was assessed using a fuzzy error matrix for the sub-pixel classifications; the results were compared to the supervised classification, for which a traditional error matrix was used. The sub-pixel classification using the aerial photo for both training and reference data had the highest overall accuracy (65%) of the three sub-pixel classifications. This is understandable because the analyst can visually observe the cover types actually on the ground for training and reference data, whereas using the FIA (Forest Inventory and Analysis) plot data, the analyst must assume that an entire pixel contains the exact percentage of a cover type found in a plot.
An increase in accuracy was found after reclassifying each sub-pixel classification from nine classes with 10 percent intervals to five classes with 20 percent intervals. When compared to the supervised classification, which had a satisfactory overall accuracy of 90%, none of the sub-pixel classifications achieved the same level. However, since traditional per-pixel classifiers assign only one label to each pixel while sub-pixel classifications assign multiple labels, the traditional 85% accuracy threshold of acceptance for pixel-based classifications should not apply to sub-pixel classifications. More research is needed to define the level of accuracy that is deemed acceptable for sub-pixel classifications.
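Sub-pixel proportions of the kind discussed are often estimated by linear spectral unmixing: model each pixel spectrum as a non-negative mixture of endmember spectra and normalize the abundances to sum to one. A sketch under that assumption, not the commercial classifier used in the study:

```python
import numpy as np
from scipy.optimize import nnls

# Linear spectral unmixing with a non-negativity constraint.
# `endmembers` is (n_endmembers, n_bands); `pixel` is (n_bands,).
# Returns abundance fractions normalized to sum to one.
def unmix(pixel, endmembers):
    abundances, _ = nnls(endmembers.T, pixel)   # columns are endmembers
    s = abundances.sum()
    return abundances / s if s > 0 else abundances
```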
Pulsed holographic system for imaging through spatially extended scattering media
NASA Astrophysics Data System (ADS)
Kanaev, A. V.; Judd, K. P.; Lebow, P.; Watnik, A. T.; Novak, K. M.; Lindle, J. R.
2017-10-01
Imaging through scattering media is a highly sought capability for military, industrial, and medical applications. Unfortunately, nearly all recent progress has been achieved in microscopic light propagation and/or light propagation through thin or weak scatterers, which is mostly pertinent to the medical research field. Sensing at long ranges through extended scattering media, for example turbid water or dense fog, still represents a significant challenge, and the best results are demonstrated using conventional approaches of time- or range-gating. The imaging range of such systems is constrained by their ability to distinguish the few ballistic photons that reach the detector from the background, scattered, and ambient photons, as well as from detector noise. Holography can potentially enhance time-gating by providing extra signal filtering based on the coherence properties of the ballistic photons, as well as by coherent addition of multiple frames. In a holographic imaging scheme, ballistic photons of the imaging pulse are reflected from a target and interfered with the reference pulse at the detector, creating a hologram. Related approaches were demonstrated previously in one-way imaging through thin biological samples and other microscopic-scale scatterers. In this work, we investigate the performance of holographic imaging systems under conditions of extreme scattering (less than one signal photon per pixel), demonstrate the advantages of coherent addition of images recovered from holograms, and discuss the dependence of image quality on the ratio of signal and reference beam power.
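The benefit of coherent frame addition mentioned above comes from averaging complex fields before taking the intensity, so that uncorrelated noise averages down while the signal phase is preserved. A toy numpy illustration (not the experimental processing chain):

```python
import numpy as np

# Coherent averaging: mean of the complex fields first, intensity second.
# Averaging intensities instead would retain the noise power as a bias.
def coherent_average(fields):
    return np.abs(np.mean(fields, axis=0)) ** 2
```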
3D Cryo-Imaging: A Very High-Resolution View of the Whole Mouse
Roy, Debashish; Steyer, Grant J.; Gargesha, Madhusudhana; Stone, Meredith E.; Wilson, David L.
2009-01-01
We developed the Case Cryo-imaging system that provides information-rich, very high-resolution, color brightfield, and molecular fluorescence images of a whole mouse using a section-and-image block-face imaging technology. The system consists of a mouse-sized, motorized cryo-microtome with special features for imaging, a modified brightfield/fluorescence microscope, and a robotic xyz imaging system positioner, all fully automated by a control system. Using the robotic system, we acquired microscopic tiled images at a pixel size of 15.6 µm over the block face of a whole mouse sectioned at 40 µm, with a total data volume of 55 GB. Viewing 2D images at multiple resolutions, we identified small structures such as cardiac vessels, muscle layers, villi of the small intestine, the optic nerve, and layers of the eye. Cryo-imaging was also suitable for imaging embryo mutants in 3D. A mouse, in which enhanced green fluorescent protein was expressed under the gamma actin promoter in smooth muscle cells, gave clear 3D views of smooth muscle in the urogenital and gastrointestinal tracts. With cryo-imaging, we could obtain 3D vasculature down to 10 µm, over very large regions of mouse brain. Software is fully automated with fully programmable imaging/sectioning protocols, email notifications, and automatic volume visualization. With a unique combination of field-of-view, depth of field, contrast, and resolution, the Case Cryo-imaging system fills the gap between whole animal in vivo imaging and histology. PMID:19248166
Lhuaire, Martin; Martinez, Agathe; Kaplan, Hervé; Nuzillard, Jean-Marc; Renard, Yohann; Tonnelet, Romain; Braun, Marc; Avisse, Claude; Labrousse, Marc
2014-12-01
Technological advances in the field of biological imaging now allow multi-modal studies of human embryo anatomy. The aim of this study was to assess the feasibility of high magnetic field μMRI for the study of small human embryos (less than 21 mm crown-rump length) as a new tool for descriptive human embryology, and to determine the sequence characteristics that yield higher spatial resolution and a higher signal-to-noise ratio. A morphological μMRI study of four human embryos belonging to the historical collection of the Department of Anatomy in the Faculty of Medicine of Reims was undertaken. These embryos had crown-rump lengths of 3 mm (Carnegie Stage, CS 10), 12 mm (CS 16), 17 mm (CS 18) and 21 mm (CS 20). Images were acquired using a vertical nuclear magnetic resonance spectrometer, a Bruker Avance III, 500 MHz, 11.7 T, equipped for imaging. All images were acquired using 2D (transverse, sagittal and coronal) and 3D sequences, either T1-weighted or T2-weighted. Spatial resolution between 24 and 70 μm/pixel allowed clear visualization of all anatomical structures of the embryos. μMRI studies of human embryos have already been reported in the literature, and a few atlases exist for educational purposes. However, to our knowledge, descriptive or morphological studies of human developmental anatomy based on the data collected in these few μMRI studies are rare. This noninvasive morphological imaging method, coupled with other techniques already reported, seems to offer new perspectives for descriptive studies of human embryology.
Shape-Constrained Segmentation Approach for Arctic Multiyear Sea Ice Floe Analysis
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Brucker, Ludovic; Ivanoff, Alvaro; Tilton, James C.
2013-01-01
The melting of sea ice is correlated to increases in sea surface temperature and associated climatic changes. Therefore, it is important to investigate how rapidly sea ice floes melt. For this purpose, a new TempoSeg method for multitemporal segmentation of multiyear ice floes is proposed. The microwave radiometer is used to track the position of an ice floe. Then, a time series of MODIS images is created with the ice floe in the image center. The TempoSeg method segments these images into two regions: Floe and Background. First, morphological feature extraction is applied. Then, the central image pixel is marked as Floe, and shape-constrained best merge region growing is performed. The resulting two-region map is post-filtered by applying morphological operators. We have successfully tested our method on a set of MODIS images and estimated the area of a sea ice floe as a function of time.
Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors.
Dutton, Neale A W; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K
2016-07-20
SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed.
Design of a concise Féry-prism hyperspectral imaging system based on multi-configuration
NASA Astrophysics Data System (ADS)
Dong, Wei; Nie, Yun-feng; Zhou, Jin-song
2013-08-01
In order to meet the needs of spaceborne and airborne hyperspectral imaging systems for light weight, simplicity and high spatial resolution, a novel design of a Féry-prism hyperspectral imaging system based on the Zemax multi-configuration method is presented. The structure is arranged by analyzing optical monochromatic aberrations theoretically, and the resulting optical layout is concise. The design is based on an Offner relay configuration in which the secondary mirror is replaced by a Féry prism with curved surfaces and a reflective front face. By reflection, the light beam passes through the Féry prism twice, which improves spectral resolution and enhances image quality at the same time. The result shows that the system achieves light weight and simplicity compared to other hyperspectral imaging systems. Composed of merely two spherical mirrors and one achromatized Féry prism performing both dispersion and imaging functions, the structure is concise and compact. The average spectral resolution is 6.2 nm; the MTFs for the 0.45-1.00 μm spectral range are greater than 0.75, and the RMS values are less than 2.4 μm. The maximal smile is less than 10% of a pixel, while the keystone is less than 2.8% of a pixel, and the image quality approximates the diffraction limit. The design result shows that a hyperspectral imaging system with one modified Féry prism substituting for the secondary mirror of an Offner relay configuration is feasible from the perspective of both theory and practice, and possesses the merits of a simple structure, convenient optical alignment, good image quality, high spatial and spectral resolution, and adjustable dispersion nonlinearity. The system satisfies the requirements of airborne and spaceborne hyperspectral imaging systems.
Depth-of-interaction estimates in pixelated scintillator sensors using Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Sharma, Diksha; Sze, Christina; Bhandari, Harish; Nagarkar, Vivek; Badano, Aldo
2017-01-01
Image quality in thick scintillator detectors can be improved by minimizing parallax errors through depth-of-interaction (DOI) estimation. A novel sensor for low-energy single photon imaging having a thick, transparent, crystalline pixelated micro-columnar CsI:Tl scintillator structure has been described, with possible future application in small-animal single photon emission computed tomography (SPECT) imaging when using thicker structures under development. In order to understand the fundamental limits of this new structure, we introduce cartesianDETECT2, an open-source optical transport package that uses Monte Carlo methods to obtain estimates of DOI for improving the spatial resolution of nuclear imaging applications. Optical photon paths are calculated as a function of varying simulation parameters such as columnar surface roughness and bulk and top-surface absorption. We use scanning electron microscope images to estimate appropriate surface roughness coefficients. Simulation results are analyzed to model and establish patterns between DOI and photon scattering. The effect of varying starting locations of optical photons on the spatial response is studied. Bulk and top-surface absorption fractions were varied to investigate their effect on spatial response as a function of DOI. We investigated the accuracy of our DOI estimation model for a particular screen with various training and testing sets; for all cases the percent error between the estimated and actual DOI over the majority of the detector thickness was within ±5%, with a maximum error of up to ±10% at deeper DOIs. In addition, we found that cartesianDETECT2 is computationally five times more efficient than MANTIS. Findings indicate that DOI estimates can be extracted from a double-Gaussian model of the detector response. We observed that our model predicts DOI in pixelated scintillator detectors reasonably well.
A real-time multi-scale 2D Gaussian filter based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin
2014-11-01
The multi-scale 2-D Gaussian filter has been widely used in feature extraction (e.g. SIFT, edge detection), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, its computational complexity remains an issue for real-time image processing systems. To address this problem, we propose a framework for a multi-scale 2-D Gaussian filter based on an FPGA. First, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Second, to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Third, a dedicated first-in-first-out memory, named the CAFIFO (Column Addressing FIFO), was designed to avoid the error propagation induced by glitches on the clock. Finally, a shared memory framework was designed to reduce memory costs. As a demonstration, we realized a three-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute multi-scale 2-D Gaussian filtering within one pixel clock period and is therefore suitable for real-time image processing. Moreover, the main principle can be extended to other convolution-based operators, such as the Gabor filter, the Sobel operator and so on.
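The multiplier-saving step described in the abstract, separating the 2-D convolution into two 1-D convolutions, relies on the separability of the Gaussian kernel. A minimal NumPy sketch of that decomposition (an illustration of the mathematical idea only, not the FPGA implementation):

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius=None):
    # Sampled Gaussian, normalized to unit sum.
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_filter_separable(img, sigma):
    # A 2-D Gaussian is separable: convolve every row, then every
    # column, turning one KxK kernel into two length-K passes.
    k = gaussian_kernel_1d(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, rows)
```

Because the two 1-D passes commute, a hardware pipeline can stream rows through one pass and feed the column pass from line buffers, which is the usual motivation for this factorization.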
Multimodal Spectral Imaging of Cells Using a Transmission Diffraction Grating on a Light Microscope
Isailovic, Dragan; Xu, Yang; Copus, Tyler; Saraswat, Suraj; Nauli, Surya M.
2011-01-01
A multimodal methodology for spectral imaging of cells is presented. The spectral imaging setup uses a transmission diffraction grating on a light microscope to concurrently record spectral images of cells and cellular organelles by fluorescence, darkfield, brightfield, and differential interference contrast (DIC) spectral microscopy. Initially, the setup was applied for fluorescence spectral imaging of yeast and mammalian cells labeled with multiple fluorophores. Fluorescence signals originating from fluorescently labeled biomolecules in cells were collected through triple or single filter cubes, separated by the grating, and imaged using a charge-coupled device (CCD) camera. Cellular components such as nuclei, cytoskeleton, and mitochondria were spatially separated by the fluorescence spectra of the fluorophores present in them, providing detailed multi-colored spectral images of cells. Additionally, the grating-based spectral microscope enabled measurement of scattering and absorption spectra of unlabeled cells and stained tissue sections using darkfield and brightfield or DIC spectral microscopy, respectively. The presented spectral imaging methodology provides a readily affordable approach for multimodal spectral characterization of biological cells and other specimens. PMID:21639978
NASA Astrophysics Data System (ADS)
Jungmann-Smith, J. H.; Bergamaschi, A.; Cartier, S.; Dinapoli, R.; Greiffenberg, D.; Johnson, I.; Maliakal, D.; Mezza, D.; Mozzanica, A.; Ruder, Ch; Schaedler, L.; Schmitt, B.; Shi, X.; Tinti, G.
2014-12-01
JUNGFRAU (adJUstiNg Gain detector FoR the Aramis User station) is a two-dimensional pixel detector for photon science applications at free electron lasers and synchrotron light sources. It is developed for the SwissFEL currently under construction at the Paul Scherrer Institute, Switzerland. Characteristics of this application-specific integrating circuit readout chip include single photon sensitivity and low noise over a dynamic range of over four orders of magnitude of photon input signal. These characteristics are achieved by a three-fold gain-switching preamplifier in each pixel, which automatically adjusts its gain to the amount of charge deposited on the pixel. The final JUNGFRAU chip comprises 256 × 256 pixels of 75 × 75 μm2 each. Arrays of 2 × 4 chips are bump-bonded to monolithic detector modules of about 4 × 8 cm2. Multi-module systems up to 16 Mpixels are planned for the end stations at SwissFEL. A readout rate in excess of 2 kHz is anticipated, which serves the readout requirements of SwissFEL and enables high count rate synchrotron experiments with a linear count rate capability of > 20 MHz/pixel. Promising characterization results from a 3.6 × 3.6 mm2 prototype (JUNGFRAU 0.2) with fluorescence X-ray, infrared laser and synchrotron irradiation are shown. The results include an electronic noise as low as 100 electrons root-mean-square, which enables single photon detection down to X-ray energies of about 2 keV. Noise below the Poisson fluctuation of the photon number and a linearity error of the pixel response of about 1% are demonstrated. First imaging experiments successfully show automatic gain switching. The edge spread function of the imaging system proves to be comparable in quality to single photon counting hybrid pixel detectors.
Object-Oriented Image Clustering Method Using UAS Photogrammetric Imagery
NASA Astrophysics Data System (ADS)
Lin, Y.; Larson, A.; Schultz-Fellenz, E. S.; Sussman, A. J.; Swanson, E.; Coppersmith, R.
2016-12-01
Unmanned Aerial Systems (UAS) have been used widely as an imaging modality to obtain remotely sensed multi-band surface imagery, and are growing in popularity due to their efficiency, ease of use, and affordability. Los Alamos National Laboratory (LANL) has employed UAS for geologic site characterization and change detection studies at a variety of field sites. The deployed UAS was equipped with a standard visible-band camera to collect imagery datasets. Based on the imagery collected, we use deep sparse algorithmic processing to detect and discriminate subtle topographic features created or impacted by subsurface activities. In this work, we develop an object-oriented remote sensing imagery clustering method for land cover classification. To improve clustering and segmentation accuracy, instead of using conventional pixel-based clustering methods, we integrate spatial information from neighboring regions to create super-pixels, avoiding salt-and-pepper noise and subsequent over-segmentation. To further improve the robustness of our clustering method, we also incorporate a custom digital elevation model (DEM) dataset, generated using a structure-from-motion (SfM) algorithm, together with the red, green, and blue (RGB) band data for clustering. In particular, we first employ agglomerative clustering to create an initial segmentation map, in which every object is treated as a single (new) pixel. Based on the new pixels obtained, we generate new features to implement another level of clustering. We apply our clustering method to the RGB+DEM datasets collected at the field site. Through binary clustering and multi-object clustering tests, we verify that our method can accurately separate vegetation from non-vegetation regions and can also differentiate object features on the surface.
Panning artifacts in digital pathology images
NASA Astrophysics Data System (ADS)
Avanaki, Ali R. N.; Lanciault, Christian; Espig, Kathryn S.; Xthona, Albert; Kimpe, Tom R. L.
2017-03-01
In making a pathologic diagnosis, a pathologist uses cognitive processes: perception, attention, memory, and search (Pena and Andrade-Filho, 2009). Typically, this involves focus while panning from one region of a slide to another, using either a microscope in a traditional workflow or a software program and display in a digital pathology workflow (DICOM Standard Committee, 2010). We theorize that during the panning operation, the pathologist receives information important to diagnosis efficiency and/or correctness. As compared to an optical microscope, panning in a digital pathology image involves some visual artifacts due to the following: (i) the frame rate is finite; (ii) time-varying visual signals are reconstructed using an imperfect zero-order hold. Specifically, after a pixel's digital drive is changed, it takes time for the pixel to emit the expected amount of light. Previous work suggests that 49% of navigation is conducted in low-power/overview mode with digital pathology (Molin et al., 2015), but the influence of display factors has not been measured. We conducted a reader study to establish a relationship between display frame rate, panel response time, and threshold panning speed (above which the artifacts become noticeable). Our results suggest that visual tasks involving tissue structure are more affected by the simulated panning artifacts than those involving only color (e.g., staining intensity estimation), and that artifact visibility as a function of normalized panning speed exhibits a surprising peak behavior, which may change for a diagnostic task. This is work in progress, and our final findings should be considered in designing future digital pathology systems.
Evaluating the capacity of GF-4 satellite data for estimating fractional vegetation cover
NASA Astrophysics Data System (ADS)
Zhang, C.; Qin, Q.; Ren, H.; Zhang, T.; Sun, Y.
2016-12-01
Fractional vegetation cover (FVC) is a crucial parameter for many agricultural, environmental, meteorological and ecological applications, and is of great importance for studies of ecosystem structure and function. The Chinese GaoFen-4 (GF-4) geostationary satellite, designed for environmental and ecological observation, was launched on December 29, 2015, and entered official use by the Chinese Government on June 13, 2016. Multi-spectral images with a spatial resolution of 50 m and high temporal resolution can be acquired by the sensor aboard the GF-4 satellite in its 36,000 km-altitude orbit. To take full advantage of the outstanding performance of the GF-4 satellite, this study evaluated the capacity of GF-4 satellite data for monitoring FVC. To the best of our knowledge, this is the first study to estimate FVC from GF-4 satellite images. First, we developed a procedure for preprocessing GF-4 satellite data, including radiometric calibration and atmospheric correction, to acquire surface reflectance. Then a single image and multi-temporal images were used for extracting the endmembers of vegetation and soil, respectively. After that, the dimidiate pixel model and a square model based on vegetation indices were used for estimating FVC. Finally, the estimation results were comparatively analyzed against FVC estimated by other existing sensors. The experimental results showed that satisfactory FVC estimation accuracy could be achieved from GF-4 satellite images using the dimidiate pixel model and the square model based on vegetation indices. Moreover, the multi-temporal images increased the probability of finding pure vegetation and soil endmembers; thus, the high temporal resolution of GF-4 satellite images improved the accuracy of FVC estimation. This study demonstrated the capacity of GF-4 satellite data for monitoring FVC.
The conclusions reached by this study are significant for improving the accuracy and spatial-temporal resolution of existing FVC products, which provides a basis for the studies on ecosystem structure and function using remote sensing data acquired by GF-4 satellite.
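The dimidiate pixel model used above has a standard closed form: each pixel's NDVI is treated as a linear mix of a pure-soil and a pure-vegetation endmember. A minimal sketch (the endmember NDVI values passed in are assumed inputs, e.g. extracted from the imagery as in the study):

```python
import numpy as np

def ndvi(red, nir):
    # Normalized difference vegetation index from red/NIR reflectance.
    return (nir - red) / (nir + red)

def fvc_dimidiate(ndvi_img, ndvi_soil, ndvi_veg):
    # Dimidiate pixel model: FVC = (NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil),
    # clipped to the physically meaningful [0, 1] range.
    fvc = (ndvi_img - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)
```

The accuracy of the result hinges on the purity of the two endmembers, which is why multi-temporal imagery (raising the chance of observing pure soil and pure vegetation pixels) improves the estimate.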
Multispectral Live-Cell Imaging.
Cohen, Sarah; Valm, Alex M; Lippincott-Schwartz, Jennifer
2018-06-01
Fluorescent proteins and vital dyes are invaluable tools for studying dynamic processes within living cells. However, the ability to distinguish more than a few different fluorescent reporters in a single sample is limited by the spectral overlap of available fluorophores. Here, we present a protocol for imaging live cells labeled with six fluorophores simultaneously. A confocal microscope with a spectral detector is used to acquire images, and linear unmixing algorithms are applied to identify the fluorophores present in each pixel of the image. We describe the application of this method to visualize the dynamics of six different organelles, and to quantify the contacts between organelles. However, this method can be used to image any molecule amenable to tagging with a fluorescent probe. Thus, multispectral live-cell imaging is a powerful tool for systems-level analysis of cellular organization and dynamics. © 2018 by John Wiley & Sons, Inc.
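Linear unmixing of the kind applied here treats each pixel's measured spectrum as a linear combination of known reference emission spectra and solves for the abundances. A minimal least-squares sketch (the reference matrix and spectrum are hypothetical; practical implementations typically add a non-negativity constraint on the abundances):

```python
import numpy as np

def unmix_pixel(spectrum, references):
    # Solve spectrum ~ references @ abundances in the least-squares sense.
    # `references` has shape (n_channels, n_fluorophores); each column is
    # the emission spectrum of one fluorophore sampled on the detector bins.
    abundances, *_ = np.linalg.lstsq(references, spectrum, rcond=None)
    return abundances
```

Applied per pixel across the spectral image, the recovered abundance vectors give one channel per fluorophore, which is what allows six spectrally overlapping reporters to be separated.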
Software electron counting for low-dose scanning transmission electron microscopy.
Mittelberger, Andreas; Kramberger, Christian; Meyer, Jannik C
2018-05-01
The performance of the detector is of key importance for low-dose imaging in transmission electron microscopy, and counting every single electron can be considered as the ultimate goal. In scanning transmission electron microscopy, low-dose imaging can be realized by very fast scanning, however, this also introduces artifacts and a loss of resolution in the scan direction. We have developed a software approach to correct for artifacts introduced by fast scans, making use of a scintillator and photomultiplier response that extends over several pixels. The parameters for this correction can be directly extracted from the raw image. Finally, the images can be converted into electron counts. This approach enables low-dose imaging in the scanning transmission electron microscope via high scan speeds while retaining the image quality of artifact-free slower scans. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Gelderblom, Erik C.; Vos, Hendrik J.; Mastik, Frits; Faez, Telli; Luan, Ying; Kokhuis, Tom J. A.; van der Steen, Antonius F. W.; Lohse, Detlef; de Jong, Nico; Versluis, Michel
2012-10-01
The Brandaris 128 ultra-high-speed imaging facility has been updated over the last 10 years through modifications made to the camera's hardware and software. At its introduction the camera was able to record 6 sequences of 128 images (500 × 292 pixels) at a maximum frame rate of 25 Mfps. The segmented mode of the camera was revised to allow for subdivision of the 128 image sensors into arbitrary segments (1-128) with an inter-segment time of 17 μs. Furthermore, a region of interest can be selected to increase the number of recordings within a single run of the camera from 6 up to 125. By extending the imaging system with a laser-induced fluorescence setup, time-resolved ultra-high-speed fluorescence imaging of microscopic objects has been enabled. Minor updates to the system are also reported here.
NASA Technical Reports Server (NTRS)
Allen, Carlton; Sellar, Glenn; Nunez, Jorge; Mosie, Andrea; Schwarz, Carol; Parker, Terry; Winterhalter, Daniel; Farmer, Jack
2009-01-01
Astronauts on long-duration lunar missions will need the capability to high-grade their samples to select the highest value samples for transport to Earth and to leave others on the Moon. We are supporting studies to define the necessary and sufficient measurements and techniques for high-grading samples at a lunar outpost. A glovebox, dedicated to testing instruments and techniques for high-grading samples, is in operation at the JSC Lunar Experiment Laboratory. A reference suite of lunar rocks and soils, spanning the full compositional range found in the Apollo collection, is available for testing in this laboratory. Thin sections of these samples are available for direct comparison. The Lunar Sample Compendium, on-line at http://www-curator.jsc.nasa.gov/lunar/compendium.cfm, summarizes previous analyses of these samples. The laboratory, sample suite, and Compendium are available to the lunar research and exploration community. In the first test of possible instruments for lunar sample high-grading, we imaged 18 lunar rocks and four soils from the reference suite using the Multispectral Microscopic Imager (MMI) developed by Arizona State University and JPL (see Farmer et al. abstract). The MMI is a fixed-focus digital imaging system with a resolution of 62.5 microns/pixel, a field size of 40 x 32 mm, and a depth-of-field of approximately 5 mm. Samples are illuminated sequentially by 21 light emitting diodes in discrete wavelengths spanning the visible to shortwave infrared. Measurements of reflectance standards and background allow calibration to absolute reflectance. ENVI-based software is used to produce spectra for specific minerals as well as multi-spectral images of rock textures.
Hinrichs, Ruth; Frank, Paulo Ricardo Ost; Vasconcellos, M A Z
2017-03-01
Modifications of cotton and polyester textiles due to shots fired at short range were analyzed with a variable pressure scanning electron microscope (VP-SEM). Different mechanisms of fiber rupture as a function of fiber type and shooting distance were detected, namely fusing, melting, scorching, and mechanical breakage. To estimate the firing distance, the approximately exponential decay of GSR coverage as a function of radial distance from the entrance hole was determined from image analysis, instead of relying on chemical analysis with EDX, which is problematic in the VP-SEM. A set of backscattered electron images, with sufficient magnification to discriminate micrometer-wide GSR particles, was acquired at different radial distances from the entrance hole. The atomic number contrast between the GSR particles and the organic fibers allowed us to devise a robust procedure to segment the micrographs into binary images, in which the white pixel count was attributed to GSR coverage. The white pixel count followed an exponential decay with radial distance, and the reciprocal of the decay constant, obtained from least-squares fitting of the coverage data, showed a linear dependence on the shooting distance. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
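The fitting step described above, recovering a decay constant whose reciprocal tracks shooting distance, can be sketched as a log-linear least-squares fit of coverage versus radius (a minimal illustration, assuming strictly positive coverage values):

```python
import numpy as np

def fit_decay_length(radii, coverage):
    # Fit coverage = A * exp(-r / L) by linear least squares on
    # log(coverage); returns L, the reciprocal of the decay constant,
    # which the study found to depend linearly on shooting distance.
    slope, intercept = np.polyfit(radii, np.log(coverage), 1)
    return -1.0 / slope
```

A calibration set of shots at known distances would then relate the fitted L values to distance by a second, ordinary linear regression.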
Evaluation of a hybrid pixel detector for electron microscopy.
Faruqi, A R; Cattermole, D M; Henderson, R; Mikulec, B; Raeburn, C
2003-04-01
We describe the application of a silicon hybrid pixel detector, containing 64 × 64 pixels of 170 μm × 170 μm each, in electron microscopy. The device offers improved resolution compared to CCDs, along with faster and noiseless readout. Evaluation of the detector, carried out on a 120 kV electron microscope, demonstrates the potential of the device.
Implementation of a watershed algorithm on FPGAs
NASA Astrophysics Data System (ADS)
Zahirazami, Shahram; Akil, Mohamed
1998-10-01
In this article we present an implementation of a watershed algorithm on a multi-FPGA architecture. This implementation is based on a hierarchical FIFO: a separate FIFO for each gray level. The gray-scale value of a pixel is taken as the altitude of the point; in this way we view the image as a relief. We proceed by a flooding step, as if we immersed the relief in a lake. The water begins to rise, and when the water of two different catchment basins reaches each other, we construct a separator, or 'watershed'. This approach is data dependent, hence the processing time differs between images. The H-FIFO is used to guarantee the nature of immersion, which requires two types of priority: all the points at an altitude n are processed before any point at altitude n + 1, and within an altitude water propagates with a constant velocity in all directions from its source. The operator needs two images as input: an original image (or its gradient) and the marker image. A classic way to construct the marker image is to build an image of minimal regions. Each minimal region has its unique label; this label is the color of the water and is used to detect whether two different waters touch each other. The algorithm first fills the hierarchical FIFO with the uncolored neighbors of all the marked regions. Next it fetches the first pixel from the first non-empty FIFO and processes it: this pixel takes the color of its neighbor, and all its neighbors that are not already in the H-FIFO are put into their corresponding FIFOs. The process is over when the H-FIFO is empty. The result is a segmented and labeled image.
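The flooding order described above, every pixel at gray level n before any pixel at level n + 1, FIFO within a level, can be sketched in software; here both priorities are emulated with one heap keyed on (gray level, insertion order), a minimal illustration rather than the FPGA design:

```python
import heapq
import numpy as np

def watershed_hqueue(gray, markers):
    # Hierarchical-queue flooding: markers > 0 are the labeled minimal
    # regions; unlabeled pixels (0) are flooded in gray-level order,
    # FIFO within a level, each taking the color of its flooding neighbor.
    labels = markers.copy()
    h, w = gray.shape
    heap, counter = [], 0
    # Seed the queue with the uncolored neighbors of every marked pixel.
    for y in range(h):
        for x in range(w):
            if labels[y, x] > 0:
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                        heapq.heappush(heap, (int(gray[ny, nx]), counter, ny, nx, labels[y, x]))
                        counter += 1
    while heap:
        _, _, y, x, color = heapq.heappop(heap)
        if labels[y, x] != 0:
            continue  # already flooded from another basin
        labels[y, x] = color
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                heapq.heappush(heap, (int(gray[ny, nx]), counter, ny, nx, color))
                counter += 1
    return labels
```

The hardware version replaces the heap with one physical FIFO per gray level, which gives the same ordering without the log-time heap operations.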
Building large area CZT imaging detectors for a wide-field hard X-ray telescope—ProtoEXIST1
NASA Astrophysics Data System (ADS)
Hong, J.; Allen, B.; Grindlay, J.; Chammas, N.; Barthelemy, S.; Baker, R.; Gehrels, N.; Nelson, K. E.; Labov, S.; Collins, J.; Cook, W. R.; McLean, R.; Harrison, F.
2009-07-01
We have constructed a moderately large area (32 cm²), fine pixel (2.5 mm pixel, 5 mm thick) CZT imaging detector which constitutes the first section of a detector module (256 cm²) developed for a balloon-borne wide-field hard X-ray telescope, ProtoEXIST1. ProtoEXIST1 is a prototype for the High Energy Telescope (HET) in the Energetic X-ray Imaging Survey Telescope (EXIST), a next-generation space-borne multi-wavelength telescope. We have constructed a large (nearly gapless) detector plane through a modularization scheme by tiling a large number of 2 cm × 2 cm CZT crystals. Our innovative packaging method is ideal for many applications such as coded-aperture imaging, where a large, continuous detector plane is desirable for optimal performance. Currently we achieve an energy resolution of 3.2 keV (FWHM) at 59.6 keV on average, which is exceptional considering the moderate pixel size and the number of detectors in simultaneous operation. We expect to complete two modules (512 cm²) within the next few months as more CZT becomes available. We plan to test the performance of these detectors in a near-space environment in a series of high-altitude balloon flights, the first of which is scheduled for Fall 2009. These detector modules are the first in a series of progressively more sophisticated detector units and packaging schemes planned for ProtoEXIST2 & 3, which will demonstrate the technology required for the advanced CZT imaging detectors (0.6 mm pixel, 4.5 m² area) required in EXIST/HET.
Dynamic light scattering microscopy
NASA Astrophysics Data System (ADS)
Dzakpasu, Rhonda
An optical microscope technique, dynamic light scattering microscopy (DLSM), that images dynamically scattered light fluctuation decay rates is introduced. Using physical optics we show theoretically that, within the optical resolution of the microscope, relative motions between scattering centers are sufficient to produce significant phase variations resulting in interference intensity fluctuations in the image plane. The time scale for these intensity fluctuations is predicted. The spatial coherence distance defining the average distance between constructive and destructive interference in the image plane is calculated and compared with the pixel size. We experimentally tested DLSM on polystyrene latex nanospheres and living macrophage cells. In order to record these rapid fluctuations on a slow progressive-scan CCD camera, we used a thin laser line of illumination on the sample such that only a single column of pixels in the CCD camera is illuminated. This allowed the use of the rate of the column-by-column readout transfer process as the acquisition rate of the camera. This manipulation increased the data acquisition rate by at least an order of magnitude in comparison to conventional CCD camera rates defined in frames/s. Analysis of the observed fluctuations provides information regarding the rates of motion of the scattering centers. These rates, acquired from each position on the sample, are used to create a spatial map of the fluctuation decay rates. Our experiments show that with this technique we are able to achieve a good signal-to-noise ratio and can monitor fast intensity fluctuations, on the order of milliseconds. DLSM appears to provide dynamic information about fast motions within cells at a sub-optical-resolution scale and provides a new kind of spatial contrast.
Hybrid label-free multiphoton and optoacoustic microscopy (MPOM)
NASA Astrophysics Data System (ADS)
Soliman, Dominik; Tserevelakis, George J.; Omar, Murad; Ntziachristos, Vasilis
2015-07-01
Many biological applications require a simultaneous observation of different anatomical features. However, unless potentially harmful staining of the specimens is employed, individual microscopy techniques generally do not provide multi-contrast capabilities. We present a hybrid microscope integrating optoacoustic microscopy and multiphoton microscopy, including second-harmonic generation, into a single device. This combined multiphoton and optoacoustic microscope (MPOM) offers visualization of a broad range of structures by employing different contrast mechanisms and at the same time enables purely label-free imaging of biological systems. We investigate the relative performance of the two microscopy modalities and demonstrate their multi-contrast abilities through the label-free imaging of a zebrafish larva ex vivo, simultaneously visualizing muscles and pigments. This hybrid microscopy application bears great potential for developmental biology studies, enabling more comprehensive information to be obtained from biological specimens without the necessity of staining.
Vectorized image segmentation via trixel agglomeration
Prasad, Lakshman [Los Alamos, NM; Skourikhine, Alexei N [Los Alamos, NM
2006-10-24
A computer-implemented method transforms an image comprised of pixels into a vectorized image specified by a plurality of polygons that can subsequently be used to aid image processing and understanding. The pixelated image is processed to extract edge pixels that separate different colors, and a constrained Delaunay triangulation of the edge pixels forms a plurality of triangles having edges that cover the pixelated image. A color for each one of the plurality of triangles is determined from the color pixels within each triangle. A filter is formed with a set of grouping rules related to features of the pixelated image and applied to the plurality of triangle edges to merge adjacent triangles consistent with the filter into polygons having a plurality of vertices. The pixelated image may then be re-formed into an array of the polygons, which can be represented collectively and efficiently as a standard vector image.
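The merging step, grouping adjacent triangles that pass the filter into polygons, is agglomeration over a region-adjacency graph, which a union-find structure expresses compactly. A minimal sketch with scalar triangle colors and a simple color-difference rule standing in for the patent's grouping rules (the adjacency list and threshold are hypothetical inputs):

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        # Path-halving find.
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def agglomerate(colors, shared_edges, max_diff):
    # Merge adjacent triangles whose colors are closer than max_diff;
    # each surviving root identifies one polygon of the vector image.
    ds = DisjointSet(len(colors))
    for a, b in shared_edges:
        if abs(colors[a] - colors[b]) <= max_diff:
            ds.union(a, b)
    return [ds.find(i) for i in range(len(colors))]
```

In the full method the merge predicate would consult the whole rule set (edge length, local geometry, etc.), but the agglomeration skeleton is the same.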
Mechanical vibration compensation method for 3D+t multi-particle tracking in microscopic volumes.
Pimentel, A; Corkidi, G
2009-01-01
The acquisition and analysis of data in microscopic systems with spatiotemporal evolution is a very relevant topic. In this work, we describe a method to optimize an experimental setup for acquiring and processing spatiotemporal (3D+t) data in microscopic systems. The method is applied to a previously developed system for three-dimensional multi-tracking and analysis of free-swimming sperm trajectories. The experimental setup uses a piezoelectric device that oscillates a large focal-distance objective, mounted on an inverted microscope, along its optical axis to acquire stacks of images at a high frame rate over a depth on the order of 250 microns. A problem arises when the piezoelectric device oscillates: a vibration is transmitted to the whole microscope, inducing undesirable 3D vibrations in the entire setup. For this reason, as a first step, the biological preparation was isolated from the body of the microscope to avoid modifying the free-swimming pattern of the microorganisms through the transmission of these vibrations. Nevertheless, as the image capturing device is mechanically attached to the "vibrating" microscope, the resulting acquired data are contaminated with an undesirable 3D movement that biases the original trajectories of these fast-moving cells. The proposed optimization method determines the functional form of these 3D oscillations in order to neutralize them in the original acquired data set. Given the spatial scale of the system, the added correction significantly increases the data accuracy. The optimized system may be very useful in a wide variety of 3D+t applications using moving optical devices.
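The core of the correction, fitting the functional form of the oscillation and subtracting it from the measured trajectory, can be illustrated with a simple sketch. All numbers are synthetic, and a single sinusoid plus linear drift stands in for the real 3D vibration model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated z-trajectory: slow drift of a swimming cell plus the
# piezo-induced oscillation picked up by the camera (hypothetical values).
t = np.linspace(0, 1, 500)                     # seconds
true_path = 5.0 * t                            # microns, cell motion
vibration = 2.0 * np.sin(2 * np.pi * 40 * t + 0.3)
observed = true_path + vibration

# Fit the functional form of the oscillation (frequency roughly known
# from the piezo drive) and subtract it from the measured trajectory.
def model(t, a, f, phi, c0, c1):
    return a * np.sin(2 * np.pi * f * t + phi) + c0 + c1 * t

p0 = [1.0, 40.0, 0.0, 0.0, 0.0]
popt, _ = curve_fit(model, t, observed, p0=p0)
a, f, phi, c0, c1 = popt
corrected = observed - a * np.sin(2 * np.pi * f * t + phi)

residual = np.abs(corrected - true_path).max()
print(f"max residual after correction: {residual:.3f} um")
```

With a noiseless synthetic signal the fitted sinusoid matches the injected vibration almost exactly, so the corrected track recovers the drift alone.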
VizieR Online Data Catalog: Cheshire Cat galaxies: redshifts and magnitudes (Irwin+, 2015)
NASA Astrophysics Data System (ADS)
Irwin, J. A.; Dupke, R.; Carrasco, E. R.; Maksym, W. P.; Johnson, L.; White, R. E., III
2017-09-01
The optical observations (imaging and spectroscopy) were performed with the Gemini Multi-Object Spectrograph (hereafter GMOS; Hook et al. 2004PASP..116..425H) at the Gemini North Telescope in Hawaii, in queue mode, as part of the program GN-2011A-Q-25. The direct images were recorded through the r' and i' filters during the night of 2011 January 4, in dark time, with seeing median values of 0.8" and 0.9" for the r' and i' filters, respectively. The night was not photometric. Three 300 s exposures (binned by two in both axes, with pixel scale of 0.146") were observed in each filter. Offsets between exposures were used to take into account the gaps between the CCDs (37 un-binned pixels) and for cosmic ray removal. (1 data file).
Forbes, Ruaridh; Makhija, Varun; Veyrinas, Kévin; Stolow, Albert; Lee, Jason W L; Burt, Michael; Brouard, Mark; Vallance, Claire; Wilkinson, Iain; Lausten, Rune; Hockett, Paul
2017-07-07
The Pixel-Imaging Mass Spectrometry (PImMS) camera allows for 3D charged particle imaging measurements, in which the particle time-of-flight is recorded along with (x, y) position. Coupling the PImMS camera to an ultrafast pump-probe velocity-map imaging spectroscopy apparatus therefore provides a route to time-resolved multi-mass ion imaging, with both high count rates and large dynamic range, thus allowing for rapid measurements of complex photofragmentation dynamics. Furthermore, the use of vacuum ultraviolet wavelengths for the probe pulse allows for an enhanced observation window for the study of excited state molecular dynamics in small polyatomic molecules having relatively high ionization potentials. Herein, preliminary time-resolved multi-mass imaging results from C2F3I photolysis are presented. The experiments utilized femtosecond VUV and UV (160.8 nm and 267 nm) pump and probe laser pulses in order to demonstrate and explore this new time-resolved experimental ion imaging configuration. The data indicate the depth and power of this measurement modality, with a range of photofragments readily observed, and many indications of complex underlying wavepacket dynamics on the excited state(s) prepared.
Selective document image data compression technique
Fu, C.Y.; Petrich, L.I.
1998-05-19
A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image by converting all pixels darker than a threshold color value to black. All pixels lighter than the threshold color value are converted to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
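The two-pass thresholding and combination step can be sketched in a few lines. This is a toy stand-in with made-up pixel values and thresholds; the real method derives the second image from the filled-edge array rather than reusing the raw scan.

```python
import numpy as np

# Toy grayscale scan: background form at value 200, user ink at value 40;
# thresholds chosen for illustration only.
scan = np.full((8, 8), 200, dtype=np.uint8)
scan[2:6, 2:6] = 40                        # "handwritten" user strokes

T1 = 128                                   # threshold for the scanned image
binary1 = np.where(scan < T1, 0, 255)      # darker than T1 -> black

# A second binarization would be applied to the filled-edge array;
# here the scan stands in for that filled-edge image.
T2 = 100
binary2 = np.where(scan < T2, 0, 255)

# Combine: a pixel is black if either pass marked it black.
combined = np.minimum(binary1, binary2)
print(int((combined == 0).sum()))          # black pixel count -> 16
```

The 4x4 ink patch survives both passes, so the combined two-color image contains exactly those 16 black pixels.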
Image Processing for Binarization Enhancement via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A. (Inventor)
2009-01-01
A technique for enhancing a gray-scale image to improve conversions of the image to binary employs fuzzy reasoning. In the technique, each pixel in the image is analyzed by comparing its gray-scale value, which is indicative of its relative brightness, to the values of the pixels immediately surrounding it. The degree to which each pixel differs in value from its surrounding pixels is employed as the variable in a fuzzy reasoning-based analysis that determines an appropriate amount by which the pixel's value should be adjusted to reduce vagueness and ambiguity in the image and improve retention of information during binarization of the enhanced gray-scale image.
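A minimal sketch of the idea follows, assuming a 3x3 neighbourhood and a simple triangular membership function in place of the actual fuzzy rule base; the image and all constants are invented.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Toy gray-scale patch with a faint stroke on a bright background.
img = np.full((16, 16), 180.0)
img[8, 2:14] = 120.0

# Difference between each pixel and the mean of its 3x3 neighbourhood
# drives the adjustment (a simple stand-in for the fuzzy rule base).
neigh_mean = uniform_filter(img, size=3)
diff = img - neigh_mean

# Triangular membership: pixels far from their surroundings are pushed
# further from the neighbourhood mean, sharpening the stroke before
# a fixed-threshold binarization.
membership = np.clip(np.abs(diff) / 60.0, 0.0, 1.0)
enhanced = np.clip(img + membership * diff, 0.0, 255.0)

binary = enhanced < 150.0
print(int(binary.sum()))   # pixels classified as ink
```

Stroke pixels sit well below their neighbourhood mean, so the adjustment darkens them further and the stroke survives binarization cleanly, while uniform background regions are left untouched.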
Landsat image registration for agricultural applications
NASA Technical Reports Server (NTRS)
Wolfe, R. H., Jr.; Juday, R. D.; Wacker, A. G.; Kaneko, T.
1982-01-01
An image registration system has been developed at the NASA Johnson Space Center (JSC) to spatially align multi-temporal Landsat acquisitions for use in agriculture and forestry research. Working in conjunction with the Master Data Processor (MDP) at the Goddard Space Flight Center, it functionally replaces the long-standing LACIE Registration Processor as JSC's data supplier. The system represents an expansion of the techniques developed for the MDP and LACIE Registration Processor, and it utilizes the experience gained in an IBM/JSC effort evaluating the performance of the latter. These techniques are discussed in detail. Several tests were developed to evaluate the registration performance of the system. The results indicate that 1/15-pixel accuracy (about 4m for Landsat MSS) is achievable in ideal circumstances, sub-pixel accuracy (often to 0.2 pixel or better) was attained on a representative set of U.S. acquisitions, and a success rate commensurate with the LACIE Registration Processor was realized. The system has been employed in a production mode on U.S. and foreign data, and a performance similar to the earlier tests has been noted.
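Sub-pixel shift estimation of the kind these registration accuracies imply can be illustrated with textbook phase correlation plus a parabolic peak fit. This is a generic sketch on synthetic data, not the JSC system's algorithm.

```python
import numpy as np

# Phase correlation with a parabolic sub-pixel peak fit -- a generic
# stand-in for the correlation techniques used in image registration.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
shift = 3                                  # integer shift along axis 0
moved = np.roll(ref, shift, axis=0)

# Cross-power spectrum -> correlation surface.
F = np.fft.fft2(ref) * np.conj(np.fft.fft2(moved))
corr = np.fft.ifft2(F / np.abs(F)).real

# Integer peak, then a 1-D parabola fit for sub-pixel refinement.
py, px = np.unravel_index(np.argmax(corr), corr.shape)
c0, c1, c2 = corr[py - 1, px], corr[py, px], corr[(py + 1) % 64, px]
sub = (c0 - c2) / (2 * (c0 - 2 * c1 + c2))
est = ((py + sub) + 32) % 64 - 32          # wrap to a signed shift
print(f"estimated shift: {est:.3f}")
```

For a pure circular shift the correlation surface is an exact delta, so the estimate lands on the true shift; with real resampled imagery the parabola fit is what delivers the fractional-pixel part.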
Okamoto, Takumi; Koide, Tetsushi; Sugi, Koki; Shimizu, Tatsuya; Anh-Tuan Hoang; Tamaki, Toru; Raytchev, Bisser; Kaneda, Kazufumi; Kominami, Yoko; Yoshida, Shigeto; Mieno, Hiroshi; Tanaka, Shinji
2015-08-01
With the increase in colorectal cancer patients in recent years, the need for quantitative evaluation of colorectal cancer has increased, and computer-aided diagnosis (CAD) systems that support doctors' diagnoses are essential. In this paper, a hardware design of the type identification module in a CAD system for colorectal endoscopic images with narrow band imaging (NBI) magnification is proposed for real-time processing of full high definition images (1920 × 1080 pixels). A pyramid-style image segmentation with SVMs for multi-size scan windows, which can be implemented on an FPGA with a small circuit area and achieve high accuracy, is proposed for actual complex colorectal endoscopic images.
Klemm, Matthias; Schweitzer, Dietrich; Peters, Sven; Sauer, Lydia; Hammer, Martin; Haueisen, Jens
2015-01-01
Fluorescence lifetime imaging ophthalmoscopy (FLIO) is a new technique for measuring the in vivo autofluorescence intensity decays generated by endogenous fluorophores in the ocular fundus. Here, we present a software package called FLIM eXplorer (FLIMX) for analyzing FLIO data. Specifically, we introduce a new adaptive binning approach as an optimal tradeoff between the spatial resolution and the number of photons required per pixel. We also expand existing decay models (multi-exponential, stretched exponential, spectral global analysis, incomplete decay) to account for the layered structure of the eye and present a method to correct for the influence of the crystalline lens fluorescence on the retina fluorescence. Subsequently, the Holm-Bonferroni method is applied to FLIO measurements to allow for group comparisons between patients and controls on the basis of fluorescence lifetime parameters. The performance of the new approaches was evaluated in five experiments. Specifically, we evaluated static and adaptive binning in a diabetes mellitus patient, compared the different decay models in a healthy volunteer, and performed a group comparison between diabetes patients and controls. An overview of the visualization capabilities and a comparison of static and adaptive binning are shown for a patient with macular hole. FLIMX's applicability to fluorescence lifetime imaging microscopy is shown in the ganglion cell layer of a porcine retina sample, obtained by a laser scanning microscope using two-photon excitation.
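The multi-exponential decay fitting such software performs per pixel can be sketched with SciPy; the amplitudes and lifetimes here are made up, and the real models additionally handle the eye's layered structure, stretched exponentials and incomplete decays.

```python
import numpy as np
from scipy.optimize import curve_fit

# Bi-exponential decay model (amplitudes a_i, lifetimes tau_i in ns;
# the parameter values below are illustrative, not FLIO data).
def decay(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0, 12, 256)                       # ns
true = decay(t, 0.7, 0.5, 0.3, 3.0)
rng = np.random.default_rng(1)
noisy = true + rng.normal(0, 0.005, t.size)

popt, _ = curve_fit(decay, t, noisy, p0=[1, 1, 0.1, 4], maxfev=5000)
a1, tau1, a2, tau2 = popt

# Amplitude-weighted mean lifetime, a common summary parameter.
tau_mean = (a1 * tau1 + a2 * tau2) / (a1 + a2)
print(f"tau1={tau1:.2f} ns  tau2={tau2:.2f} ns  tau_m={tau_mean:.2f} ns")
```

Adaptive binning enters exactly here: pooling photons from neighbouring pixels raises the signal-to-noise of `noisy`, at the cost of spatial resolution, until fits like this become stable.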
Coherent beam control through inhomogeneous media in multi-photon microscopy
NASA Astrophysics Data System (ADS)
Paudel, Hari Prasad
Multi-photon fluorescence microscopy has become a primary tool for high-resolution deep tissue imaging because of its sensitivity to ballistic excitation photons in comparison to scattered excitation photons. The imaging depth of multi-photon microscopes in tissue imaging is limited primarily by background fluorescence that is generated by scattered light due to the random fluctuations in refractive index inside the media, and by reduced intensity in the ballistic focal volume due to aberrations within the tissue and at its interface. We built two multi-photon adaptive optics (AO) correction systems, one for combating scattering and aberration problems, and another for compensating interface aberrations. For scattering correction a MEMS segmented deformable mirror (SDM) was inserted at a plane conjugate to the objective back-pupil plane. The SDM can pre-compensate for light scattering by coherent combination of the scattered light to make an apparent focus even at depths where negligible ballistic light remains (i.e. the ballistic limit). This problem was approached by investigating the spatial and temporal focusing characteristics of a broad-band light source through strongly scattering media. A new model was developed for coherent focus enhancement through or inside strongly scattering media based on the initial speckle contrast. A layer of fluorescent beads under a mouse skull was imaged using an iterative coherent beam control method in the prototype two-photon microscope to demonstrate the technique. We also adapted an AO correction system to an existing three-photon microscope in a collaborator's lab at Cornell University. In the second AO correction approach a continuous deformable mirror (CDM) is placed at a plane conjugate to the plane of an interface aberration. We demonstrated that this "Conjugate AO" technique yields a large field-of-view (FOV) advantage in comparison to Pupil AO.
Further, we showed that the extended FOV in conjugate AO is maintained over a relatively large axial misalignment of the conjugate planes of the CDM and the aberrating interface. This dissertation advances the field of microscopy by providing new models and techniques for imaging deeply within strongly scattering tissue, and by describing new adaptive optics approaches to extending imaging FOV due to sample aberrations.
Autonomous quantum to classical transitions and the generalized imaging theorem
NASA Astrophysics Data System (ADS)
Briggs, John S.; Feagin, James M.
2016-03-01
The mechanism of the transition of a dynamical system from quantum to classical mechanics is of continuing interest. Practically it is of importance for the interpretation of multi-particle coincidence measurements performed at macroscopic distances from a microscopic reaction zone. Here we prove the generalized imaging theorem which shows that the spatial wave function of any multi-particle quantum system, propagating over distances and times large on an atomic scale but still microscopic, and subject to deterministic external fields and particle interactions, becomes proportional to the initial momentum wave function where the position and momentum coordinates define a classical trajectory. Currently, the quantum to classical transition is considered to occur via decoherence caused by stochastic interaction with an environment. The imaging theorem arises from unitary Schrödinger propagation and so is valid without any environmental interaction. It implies that a simultaneous measurement of both position and momentum will define a unique classical trajectory, whereas a less complete measurement of say position alone can lead to quantum interference effects.
Parallel detection experiment of fluorescence confocal microscopy using DMD.
Wang, Qingqing; Zheng, Jihong; Wang, Kangni; Gui, Kun; Guo, Hanming; Zhuang, Songlin
2016-05-01
Parallel detection of fluorescence confocal microscopy (PDFCM) based on a Digital Micromirror Device (DMD) is reported in this paper in order to realize simultaneous multi-channel imaging and improve detection speed. A DMD is added to the PDFCM system, replacing the single traditional pinhole of the confocal system and dividing the laser source into multiple excitation beams. The DMD-based PDFCM imaging system is experimentally set up. The multi-channel image of the fluorescence signal of a potato cell sample is detected by parallel lateral scanning in order to verify the feasibility of introducing the DMD into a fluorescence confocal microscope. In addition, for the purpose of characterizing the microscope, the depth response curve is also acquired. The experimental results show that in contrast to conventional microscopy, the DMD-based PDFCM system has higher axial resolution and faster detection speed, which may bring potential benefits in biological and medical analysis. SCANNING 38:234-239, 2016. © 2015 Wiley Periodicals, Inc.
Autonomous quantum to classical transitions and the generalized imaging theorem
Briggs, John S.; Feagin, James M.
2016-03-16
The mechanism of the transition of a dynamical system from quantum to classical mechanics is of continuing interest. Practically it is of importance for the interpretation of multi-particle coincidence measurements performed at macroscopic distances from a microscopic reaction zone. We prove the generalized imaging theorem which shows that the spatial wave function of any multi-particle quantum system, propagating over distances and times large on an atomic scale but still microscopic, and subject to deterministic external fields and particle interactions, becomes proportional to the initial momentum wave function where the position and momentum coordinates define a classical trajectory. Currently, the quantum to classical transition is considered to occur via decoherence caused by stochastic interaction with an environment. The imaging theorem arises from unitary Schrödinger propagation and so is valid without any environmental interaction. It implies that a simultaneous measurement of both position and momentum will define a unique classical trajectory, whereas a less complete measurement of say position alone can lead to quantum interference effects.
Núñez, Jorge I; Farmer, Jack D; Sellar, R Glenn; Swayze, Gregg A; Blaney, Diana L
2014-02-01
Future astrobiological missions to Mars are likely to emphasize the use of rovers with in situ petrologic capabilities for selecting the best samples at a site for in situ analysis with onboard lab instruments or for caching for potential return to Earth. Such observations are central to an understanding of the potential for past habitable conditions at a site and for identifying samples most likely to harbor fossil biosignatures. The Multispectral Microscopic Imager (MMI) provides multispectral reflectance images of geological samples at the microscale, where each image pixel is composed of a visible/shortwave infrared spectrum ranging from 0.46 to 1.73 μm. This spectral range enables the discrimination of a wide variety of rock-forming minerals, especially Fe-bearing phases, and the detection of hydrated minerals. The MMI advances beyond the capabilities of current microimagers on Mars by extending the spectral range into the infrared and increasing the number of spectral bands. The design employs multispectral light-emitting diodes and an uncooled indium gallium arsenide focal plane array to achieve a very low mass and high reliability. To better understand and demonstrate the capabilities of the MMI for future surface missions to Mars, we analyzed samples from Mars-relevant analog environments with the MMI. Results indicate that the MMI images faithfully resolve the fine-scale microtextural features of samples and provide important information to help constrain mineral composition. The use of spectral endmember mapping reveals the distribution of Fe-bearing minerals (including silicates and oxides) with high fidelity, along with the presence of hydrated minerals. MMI-based petrogenetic interpretations compare favorably with laboratory-based analyses, revealing the value of the MMI for future in situ rover-mediated astrobiological exploration of Mars. Key words: Mars; microscopic imager; multispectral imaging; spectroscopy; habitability; arm instrument.
Fundamental performance differences between CMOS and CCD imagers, part IV
NASA Astrophysics Data System (ADS)
Janesick, James; Pinter, Jeff; Potter, Robert; Elliott, Tom; Andrews, James; Tower, John; Grygon, Mark; Keller, Dave
2010-07-01
This paper is a continuation of past papers written on fundamental performance differences of scientific CMOS and CCD imagers. New characterization results presented below include: 1) a new 1536 × 1536 × 8 μm 5TPPD pixel CMOS imager; 2) buried channel MOSFETs for random telegraph noise (RTN) and threshold reduction; 3) sub-electron noise pixels; 4) a 'MIM pixel' for pixel sensitivity (V/e-) control; 5) a '5TPPD RING pixel' for large-pixel, high-speed charge transfer applications; 6) pixel-to-pixel blooming control; 7) buried channel photo gate pixels and CMOSCCDs; 8) substrate bias for deep-depletion CMOS imagers; 9) CMOS dark spikes and dark current issues; and 10) high-energy radiation damage test data. Discussion is also given of a 1024 × 1024 × 16 μm 5TPPD pixel imager currently in fabrication and of new stitched CMOS imagers in the design phase, including 4k × 4k × 10 μm and 10k × 10k × 10 μm imager formats.
Multi-energy x-ray detector calibration for Te and impurity density (nZ) measurements of MCF plasmas
NASA Astrophysics Data System (ADS)
Maddox, J.; Pablant, N.; Efthimion, P.; Delgado-Aparicio, L.; Hill, K. W.; Bitter, M.; Reinke, M. L.; Rissi, M.; Donath, T.; Luethi, B.; Stratton, B.
2016-11-01
Soft x-ray detection with the new "multi-energy" PILATUS3 detector systems holds promise as a magnetically confined fusion (MCF) plasma diagnostic for ITER and beyond. The measured x-ray brightness can be used to determine impurity concentrations, electron temperatures, ne^2 Zeff products, and to probe the electron energy distribution. However, in order to be effective, these detectors, which are in fact large arrays of detectors with photon energy gating capabilities, must be precisely calibrated for each pixel. The energy dependence of the detector response of the multi-energy PILATUS3 system with 100 K pixels has been measured at the Dectris Laboratory. X-rays emitted from a tube under high voltage bombard various elements such that they emit x-ray lines from Zr-Lα to Ag-Kα between 1.8 and 22.16 keV. Each pixel on the PILATUS3 can be set to a minimum energy threshold in the range from 1.6 to 25 keV. This feature allows a single detector to be sensitive to a variety of x-ray energies, so that it is possible to sample the energy distribution of the x-ray continuum and line emission. PILATUS3 can be configured for 1D or 2D imaging of MCF plasmas with typical spatial, energy, and temporal resolutions of 1 cm, 0.6 keV, and 5 ms, respectively.
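The energy-gating scheme lends itself to a simple sketch: pixels set to different minimum thresholds count everything above their threshold, so differencing counts from adjacent thresholds recovers the flux in each energy bin. All numbers below are illustrative, not calibration data.

```python
import numpy as np

# Minimum-energy thresholds assigned to groups of pixels (keV).
thresholds = np.array([2.0, 4.0, 6.0, 8.0])
# A toy set of incident photon energies (keV).
photon_energies = np.array([1.8, 3.1, 3.5, 5.2, 7.4, 9.9, 12.0])

# Each threshold setting registers every photon above it.
counts = np.array([(photon_energies > T).sum() for T in thresholds])

# Bin k = counts(T_k) - counts(T_{k+1}); the last bin is everything
# above the highest threshold.
bins = np.append(-np.diff(counts), counts[-1])
print(counts.tolist(), bins.tolist())   # [6, 4, 3, 2] [2, 1, 1, 2]
```

The differenced bins reproduce the photon histogram between thresholds (two photons in 2-4 keV, one each in 4-6 and 6-8 keV, two above 8 keV), which is how a single thresholding detector samples the continuum and line emission.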
Lensfree super-resolution holographic microscopy using wetting films on a chip
NASA Astrophysics Data System (ADS)
Mudanyali, Onur; Bishara, Waheb; Ozcan, Aydogan
2011-08-01
We investigate the use of wetting films to significantly improve the imaging performance of lensfree pixel super-resolution on-chip microscopy, achieving < 1 μm spatial resolution over a large imaging area of ~24 mm2. Formation of an ultra-thin wetting film over the specimen effectively creates a micro-lens effect over each object, which significantly improves the signal-to-noise-ratio and therefore the resolution of our lensfree images. We validate the performance of this approach through lensfree on-chip imaging of various objects having fine morphological features (with dimensions of e.g., ≤0.5 μm) such as Escherichia coli (E. coli), human sperm, Giardia lamblia trophozoites, polystyrene micro beads as well as red blood cells. These results are especially important for the development of highly sensitive field-portable microscopic analysis tools for resource limited settings.
NASA Astrophysics Data System (ADS)
Modiri, M.; Salehabadi, A.; Mohebbi, M.; Hashemi, A. M.; Masumi, M.
2015-12-01
The use of UAVs in photogrammetry, obtaining image coverage to achieve the main objectives of photogrammetric mapping, has seen a boom in recent years. Images of the REGGIOLO region in the province of Reggio Emilia, Italy, taken by a UAV carrying a non-metric Canon Ixus camera at an average flight height of 139.42 m, were used to classify urban features. Using the SURE software and the covering images of the study area, a dense point cloud, a DSM and an orthophoto with a spatial resolution of 10 cm were produced. A DTM of the area was derived using an adaptive TIN filtering algorithm. An nDSM of the area was prepared as the difference between the DSM and DTM and added as a separate feature to the image stack. For feature extraction, the co-occurrence matrix features mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment and correlation were computed for each of the RGB bands of the orthophoto. The classes used for the urban classification problem were buildings, trees and tall vegetation, grass and short vegetation, paved roads, and impervious surfaces; the impervious-surfaces class includes features such as pavement, cement, cars and roofs. Pixel-based classification with selection of optimal features was performed with a GA-SVM on a per-pixel basis. To achieve classification results with higher accuracy, spectral, textural and shape information of the orthophoto was then combined in an object-based step using the multi-scale segmentation method. The results of the proposed classification of urban features suggest the suitability of this method for classifying urban scenes from UAV imagery. The overall accuracy and kappa coefficient of the method proposed in this study were 93.47% and 91.84%, respectively.
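The co-occurrence texture features listed in this abstract can be illustrated for a single band with a toy grey-level co-occurrence matrix (GLCM) computed over horizontal neighbour pairs; only two of the eight features are shown, and the patch values are invented.

```python
import numpy as np

# Toy 4-level patch standing in for one quantized orthophoto band.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
levels = 4

# Co-occurrence of horizontal neighbour pairs (offset dx = 1).
glcm = np.zeros((levels, levels))
for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
    glcm[a, b] += 1
glcm /= glcm.sum()                       # normalize to probabilities

i, j = np.indices(glcm.shape)
contrast = (glcm * (i - j) ** 2).sum()
homogeneity = (glcm / (1.0 + (i - j) ** 2)).sum()
print(f"contrast={contrast:.3f} homogeneity={homogeneity:.3f}")
```

In a real pipeline the GLCM is recomputed in a sliding window over each RGB band, and the resulting feature maps join the spectral bands and nDSM in the classification stack.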
Díaz, Gloria; González, Fabio A; Romero, Eduardo
2009-04-01
Visual quantification of parasitemia in thin blood films is a very tedious, subjective and time-consuming task. This study presents an original method for quantification and classification of erythrocytes in stained thin blood films infected with Plasmodium falciparum. The proposed approach is composed of three main phases: a preprocessing step, which corrects luminance differences; a segmentation step, which uses the normalized RGB color space to classify pixels as either erythrocyte or background, followed by an Inclusion-Tree representation that structures the pixel information into objects, from which erythrocytes are found; and finally a two-step classification process that identifies infected erythrocytes and differentiates the infection stage using a trained bank of classifiers. Additionally, user intervention is allowed when the approach cannot make a proper decision. Four hundred fifty malaria images were used for training and evaluating the method. Automatic identification of infected erythrocytes showed a specificity of 99.7% and a sensitivity of 94%. The infection stage was determined with an average sensitivity of 78.8% and average specificity of 91.2%.
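The normalized RGB segmentation step exploits the fact that chromaticity is largely insensitive to brightness, so stained cells separate from the pale background on colour alone. A minimal sketch with invented pixel values and an invented red-fraction threshold:

```python
import numpy as np

# Toy RGB pixels: one pale background pixel and the same stained-cell
# colour at two brightness levels.
pixels = np.array([[200, 180, 170],     # pale background
                   [150, 60, 70],       # stained cell, bright
                   [75, 30, 35]])       # stained cell, shaded (half as bright)
rgb = pixels / pixels.sum(axis=1, keepdims=True)   # normalized RGB

# Simple rule in chromaticity space: erythrocyte if the red fraction
# dominates (the 0.45 cutoff is illustrative, not from the paper).
is_cell = rgb[:, 0] > 0.45
print(is_cell.tolist())   # [False, True, True]
```

Note that the bright and shaded cell pixels map to the same chromaticity, which is exactly why normalization precedes the pixel classification.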
Ultrafast photon counting applied to resonant scanning STED microscopy.
Wu, Xundong; Toro, Ligia; Stefani, Enrico; Wu, Yong
2015-01-01
To take full advantage of fast resonant scanning in super-resolution stimulated emission depletion (STED) microscopy, we have developed an ultrafast photon counting system based on a multi-gigasample-per-second analogue-to-digital conversion chip that delivers an unprecedented 450 MHz pixel clock (2.2 ns pixel dwell time in each scan). The system achieves a large field of view (∼50 × 50 μm) with fast scanning that reduces photobleaching, and advances the time-gated continuous wave STED technology to the usage of resonant scanning with hardware-based time-gating. The assembled system provides superb signal-to-noise ratio and highly linear quantification of light that result in superior image quality. Also, the system design allows great flexibility in processing photon signals to further improve the dynamic range. In conclusion, we have constructed a frontier photon counting image acquisition system with ultrafast readout rate, excellent counting linearity, and with the capacity of realizing resonant-scanning continuous wave STED microscopy with online time-gated detection. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
Penrose high-dynamic-range imaging
NASA Astrophysics Data System (ADS)
Li, Jia; Bai, Chenyan; Lin, Zhouchen; Yu, Jian
2016-05-01
High-dynamic-range (HDR) imaging is becoming increasingly popular and widespread. The most common multishot HDR approach, based on multiple low-dynamic-range images captured with different exposures, has difficulties in handling camera and object movements. The spatially varying exposures (SVE) technology provides a solution to overcome this limitation by obtaining multiple exposures of the scene in only one shot but suffers from a loss in spatial resolution of the captured image. While aperiodic assignment of exposures has been shown to be advantageous during reconstruction in alleviating resolution loss, almost all the existing imaging sensors use the square pixel layout, which is a periodic tiling of square pixels. We propose the Penrose pixel layout, using pixels in aperiodic rhombus Penrose tiling, for HDR imaging. With the SVE technology, Penrose pixel layout has both exposure and pixel aperiodicities. To investigate its performance, we have to reconstruct HDR images in square pixel layout from Penrose raw images with SVE. Since the two pixel layouts are different, the traditional HDR reconstruction methods are not applicable. We develop a reconstruction method for Penrose pixel layout using a Gaussian mixture model for regularization. Both quantitative and qualitative results show the superiority of Penrose pixel layout over square pixel layout.
NASA Astrophysics Data System (ADS)
Beltrame, Francesco; Diaspro, Alberto; Fato, Marco; Martin, I.; Ramoino, Paola; Sobel, Irwin E.
1995-03-01
Confocal microscopy systems can be linked to 3D data oriented devices for the interactive navigation of the operator through a 3D object space. Such environments are sometimes called `virtual reality' or `augmented reality' systems. We consider optical confocal laser scanning microscopy images, in fluorescence with various excitations and emissions, and versus time. The aim of our study has been the quantitative spatial analysis of confocal data using the false-color composition technique. Starting from three 2D confocal fluorescent images at the same slice location in a given biological specimen, a new single-image representation of all three parameters has been generated by the false-color technique on an HP 9000/735 workstation connected to the confocal microscope. The color composite resulting from the mapping of the three parameters is displayed using a resolution of 24 bits per pixel. The operator may independently vary the mix of each of the three components in the false-color composite via three (R, G, B) mixing sliders. Furthermore, by using the pixel data in the three fluorescent component images, a 3D space containing the density distribution of these three parameters has been constructed. The histogram has been displayed in stereo: it can be used for clustering purposes by the operator, through an original thresholding algorithm.
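The false-color composition amounts to weighting three single-channel slices and stacking them as R, G and B. A minimal sketch, with random data standing in for the fluorescence slices and fixed weights standing in for the three mixing sliders:

```python
import numpy as np

# Three 2D single-channel "confocal slices" at the same location
# (random data used here purely for illustration).
rng = np.random.default_rng(2)
ch = [rng.random((32, 32)) for _ in range(3)]
weights = [1.0, 0.6, 0.8]                  # R, G, B slider positions

# Weight each channel and stack along the last axis to form RGB.
composite = np.stack([w * c for w, c in zip(weights, ch)], axis=-1)

# Quantize to 8 bits per channel, i.e. a 24-bit-per-pixel composite.
composite8 = (np.clip(composite, 0, 1) * 255).astype(np.uint8)
print(composite8.shape, composite8.dtype)
```

Re-running the stacking with new weights is cheap, which is what makes interactive slider-driven remixing of the three fluorescence parameters practical.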
Multilevel Space-Time Aggregation for Bright Field Cell Microscopy Segmentation and Tracking
Inglis, Tiffany; De Sterck, Hans; Sanders, Geoffrey; Djambazian, Haig; Sladek, Robert; Sundararajan, Saravanan; Hudson, Thomas J.
2010-01-01
A multilevel aggregation method is applied to the problem of segmenting live cell bright field microscope images. The method employed is a variant of the so-called “Segmentation by Weighted Aggregation” technique, which itself is based on Algebraic Multigrid methods. The variant of the method used is described in detail, and it is explained how it is tailored to the application at hand. In particular, a new scale-invariant “saliency measure” is proposed for deciding when aggregates of pixels constitute salient segments that should not be grouped further. It is shown how segmentation based on multilevel intensity similarity alone does not lead to satisfactory results for bright field cells. However, the addition of multilevel intensity variance (as a measure of texture) to the feature vector of each aggregate leads to correct cell segmentation. Preliminary results are presented for applying the multilevel aggregation algorithm in space time to temporal sequences of microscope images, with the goal of obtaining space-time segments (“object tunnels”) that track individual cells. The advantages and drawbacks of the space-time aggregation approach for segmentation and tracking of live cells in sequences of bright field microscope images are presented, along with a discussion on how this approach may be used in the future work as a building block in a complete and robust segmentation and tracking system. PMID:20467468
Optical design considerations when imaging the fundus with an adaptive optics correction
NASA Astrophysics Data System (ADS)
Wang, Weiwei; Campbell, Melanie C. W.; Kisilak, Marsha L.; Boyd, Shelley R.
2008-06-01
Adaptive Optics (AO) technology has been used in confocal scanning laser ophthalmoscopes (CSLO) which are analogous to confocal scanning laser microscopes (CSLM) with advantages of real-time imaging, increased image contrast, a resistance to image degradation by scattered light, and improved optical sectioning. With AO, the instrument-eye system can have low enough aberrations for the optical quality to be limited primarily by diffraction. Diffraction-limited, high resolution imaging would be beneficial in the understanding and early detection of eye diseases such as diabetic retinopathy. However, to maintain diffraction-limited imaging, sufficient pixel sampling over the field of view is required, resulting in the need for increased data acquisition rates for larger fields. Imaging over smaller fields may be a disadvantage with clinical subjects because of fixation instability and the need to examine larger areas of the retina. Reduction in field size also reduces the amount of light sampled per pixel, increasing photon noise. For these reasons, we considered an instrument design with a larger field of view. When choosing scanners to be used in an AOCSLO, the ideal frame rate should be above the flicker fusion rate for the human observer and would also allow user control of targets projected onto the retina. In our AOCSLO design, we have studied the tradeoffs between field size, frame rate and factors affecting resolution. We will outline optical approaches to overcome some of these tradeoffs and still allow detection of the earliest changes in the fundus in diabetic retinopathy.
Fast Confocal Raman Imaging Using a 2-D Multifocal Array for Parallel Hyperspectral Detection.
Kong, Lingbo; Navas-Moreno, Maria; Chan, James W
2016-01-19
We present the development of a novel confocal hyperspectral Raman microscope capable of imaging at speeds up to 100 times faster than conventional point-scan Raman microscopy under high noise conditions. The microscope utilizes scanning galvomirrors to generate a two-dimensional (2-D) multifocal array at the sample plane, generating Raman signals simultaneously at each focus of the array pattern. The signals are combined into a single beam and delivered through a confocal pinhole before being focused through the slit of a spectrometer. To separate the signals from each row of the array, a synchronized scan mirror placed in front of the spectrometer slit positions the Raman signals onto different pixel rows of the detector. We devised an approach to deconvolve the superimposed signals and retrieve the individual spectra at each focal position within a given row. The galvomirrors were programmed to scan different focal arrays following Hadamard encoding patterns. A key feature of the Hadamard detection is the reconstruction of individual spectra with improved signal-to-noise ratio. Using polystyrene beads as test samples, we demonstrated not only that our system images faster than a conventional point-scan method but that it is especially advantageous under noisy conditions, such as when the CCD detector operates at fast read-out rates and high temperatures. This is the first demonstration of multifocal confocal Raman imaging in which parallel spectral detection is implemented along both axes of the CCD detector chip. We envision this novel 2-D multifocal spectral detection technique can be used to develop faster imaging spontaneous Raman microscopes with lower cost detectors.
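The Hadamard detection scheme described above can be illustrated with a small linear-algebra sketch: each encoded measurement is the sum of the spectra from the foci switched on by one row of an S-matrix, and the individual spectra are recovered by inverting that matrix. The 3×3 S-matrix and the per-focus spectra below are illustrative values, not data from the paper.

```python
import numpy as np

# S-matrix (order 3): each row selects which foci are "on" for one measurement.
S = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)

# Hypothetical per-focus Raman spectra (3 foci x 4 spectral channels).
spectra = np.array([[5.0, 1.0, 0.0, 2.0],
                    [0.0, 3.0, 1.0, 0.0],
                    [2.0, 0.0, 4.0, 1.0]])

# Each encoded measurement is the superposition of the spectra at the "on" foci.
measurements = S @ spectra

# Decoding recovers the individual spectra by inverting the encoding matrix.
decoded = np.linalg.solve(S, measurements)
```

In practice the multiplexing gain (the Fellgett advantage) appears when detector read-out noise dominates: each focus contributes to roughly half of the measurements, so read-out noise is averaged down in the decoded spectra.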
Varying ultrasound power level to distinguish surgical instruments and tissue.
Ren, Hongliang; Anuraj, Banani; Dupont, Pierre E
2018-03-01
We investigate a new framework of surgical instrument detection based on power-varying ultrasound images with simple and efficient pixel-wise intensity processing. Without using complicated feature extraction methods, we identified the instrument with an estimated optimal power level and by comparing pixel values of varying transducer power level images. The proposed framework exploits the physics of ultrasound imaging system by varying the transducer power level to effectively distinguish metallic surgical instruments from tissue. This power-varying image-guidance is motivated from our observations that ultrasound imaging at different power levels exhibit different contrast enhancement capabilities between tissue and instruments in ultrasound-guided robotic beating-heart surgery. Using lower transducer power levels (ranging from 40 to 75% of the rated lowest ultrasound power levels of the two tested ultrasound scanners) can effectively suppress the strong imaging artifacts from metallic instruments and thus, can be utilized together with the images from normal transducer power levels to enhance the separability between instrument and tissue, improving intraoperative instrument tracking accuracy from the acquired noisy ultrasound volumetric images. We performed experiments in phantoms and ex vivo hearts in water tank environments. The proposed multi-level power-varying ultrasound imaging approach can identify robotic instruments of high acoustic impedance from low-signal-to-noise-ratio ultrasound images by power adjustments.
NASA Astrophysics Data System (ADS)
Preusker, Frank; Scholten, Frank; Matz, Klaus-Dieter; Roatsch, Thomas; Willner, Konrad; Hviid, Stubbe; Knollenberg, Jörg; Kührt, Ekkehard; Sierks, Holger
2015-04-01
The European Space Agency's Rosetta spacecraft is equipped with the OSIRIS imaging system, which consists of a wide-angle and a narrow-angle camera (WAC and NAC). After the approach phase, Rosetta was inserted into a descent trajectory towards comet 67P/Churyumov-Gerasimenko (C-G) in early August 2014. Until early September, OSIRIS acquired several hundred NAC images of C-G's surface at different scales (from ~5 m/pixel during approach to ~0.9 m/pixel during descent). In that one-month observation period, the surface was imaged several times within different mapping sequences. With the comet's rotation period of ~12.4 h and the low spacecraft velocity (< 1 m/s), the entire NAC dataset provides multiple NAC stereo coverage, adequate for stereo-photogrammetric (SPG) analysis towards the derivation of 3D surface models. We constrained the OSIRIS NAC images with our stereo requirements (15° < stereo angles < 45°, incidence angles < 85°, emission angles < 45°, differences in illumination < 10°, scale better than 5 m/pixel) and extracted about 220 NAC images that provide at least triple stereo coverage of the entire illuminated surface in about 250 independent multi-stereo image combinations. For each image combination we determined tie points by multi-image matching in order to set up a 3D control network and a dense surface point cloud for the precise reconstruction of C-G's shape. The control point network defines the input for a stereo-photogrammetric least-squares adjustment. Based on the statistical analysis of the adjustments, we first refined C-G's rotational state (pole orientation and rotational period) and its behavior over time. Based upon this description of the orientation of C-G's body-fixed reference frame, we derived corrections for the nominal navigation data (pointing and position) within a final stereo-photogrammetric block adjustment, in which the mean 3D point accuracy of more than 100 million surface points has been improved from ~10 m to the sub-meter range. We finally applied point filtering and interpolation techniques to these surface 3D points and show the resulting SPG-based 3D surface model with a lateral sampling rate of about 2 m.
Segmentation of white rat sperm image
NASA Astrophysics Data System (ADS)
Bai, Weiguo; Liu, Jianguo; Chen, Guoyuan
2011-11-01
The segmentation of sperm images exerts a profound influence on the analysis of sperm morphology, which plays a significant role in research on animal infertility and reproduction. To overcome the microscope image's properties of low contrast and heavy noise pollution, and to obtain better segmentation results, this paper presents a multi-scale gradient operator combined with a multi-structuring element for micro-spermatozoa images of the white rat: the multi-scale gradient operator smooths the noise in the image, while the multi-structuring element retains more details of the sperm shapes. We then use the Otsu method to segment the modified gradient image, whose processed gray levels are strong in the sperm regions and weak in the background, converting it into a binary sperm image. As the resulting binary image contains impurities whose shapes are not similar to sperm, we use a form factor to filter out objects whose form factor value is larger than a selected critical value and retain those whose is not, yielding the final binary image of the segmented sperm. Experiments show this method's clear advantage in the segmentation of micro-spermatozoa images.
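The two quantitative steps of the pipeline above, Otsu thresholding of the gradient image and form-factor filtering of the binary objects, can be sketched as follows. The 256-bin histogram granularity and the circle-normalized form factor 4πA/P² are common conventions assumed here, not details taken from the paper.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level maximizing between-class variance (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    cum = np.cumsum(hist)                          # cumulative pixel counts
    cum_mean = np.cumsum(hist * np.arange(256))    # cumulative intensity sums
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum[t - 1], total - cum[t - 1]    # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t - 1] / w0                 # background mean
        mu1 = (cum_mean[-1] - cum_mean[t - 1]) / w1  # foreground mean
        var = w0 * w1 * (mu0 - mu1) ** 2           # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def form_factor(area, perimeter):
    """4*pi*A / P**2 -- equals 1.0 for a circle, smaller for elongated shapes."""
    return 4.0 * np.pi * area / perimeter ** 2
```

An object with a form factor far from that of a typical sperm shape would then be discarded, keeping only sperm-like components in the final binary image.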
Image Edge Extraction via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominquez, Jesus A. (Inventor); Klinko, Steve (Inventor)
2008-01-01
A computer-based technique for detecting edges in gray-level digital images employs fuzzy reasoning to analyze whether each pixel in an image is likely to lie on an edge. The image is analyzed on a pixel-by-pixel basis by examining the gradient levels of pixels in a square window surrounding the pixel being analyzed. The edge path passing through the pixel with the greatest intensity gradient is used as input to a fuzzy membership function, which employs fuzzy singletons and inference rules to assign a new gray-level value to the pixel that is related to the pixel's degree of edginess.
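A minimal sketch of the idea: take the strongest straight edge path through a 3×3 window and feed its gradient to a triangular membership function that maps it to a new gray level. The specific paths and the membership shape here are illustrative stand-ins for the patented singleton/inference-rule machinery, not the invention itself.

```python
import numpy as np

def edginess(window, g_max=255.0):
    """Fuzzy 'edginess' of the centre pixel of a 3x3 gray-level window.

    The strongest straight path through the centre (horizontal, vertical,
    or either diagonal) supplies the gradient fed to a simple triangular
    membership function on [0, g_max].
    """
    w = np.asarray(window, dtype=float)
    paths = [
        abs(w[1, 0] - w[1, 2]),  # horizontal path through the centre
        abs(w[0, 1] - w[2, 1]),  # vertical path
        abs(w[0, 0] - w[2, 2]),  # main diagonal
        abs(w[0, 2] - w[2, 0]),  # anti-diagonal
    ]
    grad = max(paths)
    membership = min(grad / g_max, 1.0)  # degree of membership in "edge"
    return 255.0 * membership            # new gray level ~ edginess degree
```

A uniform window yields 0 (no edge), while a full-contrast step yields the maximum gray level; intermediate gradients map smoothly in between, which is the practical benefit of fuzzy reasoning over a hard threshold.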
NASA Astrophysics Data System (ADS)
Pani, R.; Pellegrini, R.; Betti, M.; De Vincentis, G.; Cinti, M. N.; Bennati, P.; Vittorini, F.; Casali, V.; Mattioli, M.; Orsolini Cencelli, V.; Navarria, F.; Bollini, D.; Moschini, G.; Iurlaro, G.; Montani, L.; de Notaristefani, F.
2007-02-01
The principal limiting factor in the clinical acceptance of scintimammography is certainly its low sensitivity for cancers smaller than 1 cm, mainly due to the lack of equipment specifically designed for breast imaging. The National Institute of Nuclear Physics (INFN) has been developing a new scintillation camera based on a Lanthanum tri-Bromide Cerium-doped crystal (LaBr3:Ce), which demonstrates superior imaging performance with respect to the dedicated scintillation γ-camera previously developed. The proposed detector consists of a continuous LaBr3:Ce scintillator crystal coupled to a Hamamatsu H8500 Flat Panel PMT. A one-centimeter-thick crystal was chosen to increase detection efficiency. In this paper, we compare and evaluate the lanthanum γ-camera and a Multi-PSPMT camera based on discrete NaI(Tl) pixels, previously developed under the "IMI" Italian project for technological transfer of INFN. A phantom study was carried out to test both cameras before introducing them into clinical trials. High-resolution scans produced by the LaBr3:Ce camera showed higher tumor contrast and more detailed imaging of the uptake area than the pixellated NaI(Tl) dedicated camera. Furthermore, with the lanthanum camera, the signal-to-noise ratio (SNR) was increased for a lesion as small as 5 mm, with a consequent strong improvement in detectability.
Context-Aided Tracking with Adaptive Hyperspectral Imagery
2011-06-01
The system incorporates two light paths, imaging and spectroscopy; each pixel is steered towards a light path independently via the digital micromirror device (DMD). With the advent of digital micromirror device (DMD) arrays (DMA), the Rochester Institute of Technology Multi-Object Spectrometer (RITMOS) [36
Wide field-of-view, multi-region two-photon imaging of neuronal activity in the mammalian brain
Stirman, Jeffrey N.; Smith, Ikuko T.; Kudenov, Michael W.; Smith, Spencer L.
2016-01-01
Two-photon calcium imaging provides an optical readout of neuronal activity in populations of neurons with subcellular resolution. However, conventional two-photon imaging systems are limited in their field of view to ~1 mm2, precluding the visualization of multiple cortical areas simultaneously. Here, we demonstrate a two-photon microscope with an expanded field of view (>9.5 mm2) for rapidly reconfigurable simultaneous scanning of widely separated populations of neurons. We custom designed and assembled an optimized scan engine, objective, and two independently positionable, temporally multiplexed excitation pathways. We used this new microscope to measure activity correlations between two cortical visual areas in mice during visual processing. PMID:27347754
Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors
Dutton, Neale A. W.; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K.
2016-01-01
SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed. PMID:27447643
Large-scale time-lapse microscopy of Oct4 expression in human embryonic stem cell colonies.
Bhadriraju, Kiran; Halter, Michael; Amelot, Julien; Bajcsy, Peter; Chalfoun, Joe; Vandecreme, Antoine; Mallon, Barbara S; Park, Kye-Yoon; Sista, Subhash; Elliott, John T; Plant, Anne L
2016-07-01
Identification and quantification of the characteristics of stem cell preparations is critical for understanding stem cell biology and for the development and manufacturing of stem cell based therapies. We have developed image analysis and visualization software that allows effective use of time-lapse microscopy to provide spatial and dynamic information from large numbers of human embryonic stem cell colonies. To achieve statistically relevant sampling, we examined >680 colonies from 3 different preparations of cells over 5 days each, generating a total experimental dataset of 0.9 terabyte (TB). The 0.5 Giga-pixel images at each time point were represented by multi-resolution pyramids and visualized using the Deep Zoom JavaScript library, extended to support viewing Giga-pixel images over time and extracting data on individual colonies. We present a methodology that enables quantification of variations in nominally-identical preparations and between colonies, correlation of colony characteristics with Oct4 expression, and identification of rare events. Copyright © 2016. Published by Elsevier B.V.
Investigation of Parallax Issues for Multi-Lens Multispectral Camera Band Co-Registration
NASA Astrophysics Data System (ADS)
Jhan, J. P.; Rau, J. Y.; Haala, N.; Cramer, M.
2017-08-01
Multi-lens multispectral cameras (MSCs), such as the Micasense Rededge and Parrot Sequoia, record multispectral information through separate lenses. Their light weight and small size make them well suited for mounting on an Unmanned Aerial System (UAS) to collect high-spatial-resolution images for vegetation investigation. However, the multi-sensor geometry of the multi-lens structure induces significant band misregistration effects in the original images, so band co-registration is necessary in order to obtain accurate spectral information. A robust and adaptive band-to-band image transform (RABBIT) is proposed to perform band co-registration of multi-lens MSCs. The first step is to obtain the camera rig information from camera system calibration and to utilize the calibrated results for image transformation and lens distortion correction. Since calibration uncertainty leads to different amounts of systematic error, the last step optimizes the results in order to acquire better co-registration accuracy. Because parallax can cause significant band misregistration when images are acquired closer to the targets, four datasets acquired with the Rededge and Sequoia, comprising aerial and close-range imagery, were used to evaluate the performance of RABBIT. The results for the aerial images show that RABBIT achieves sub-pixel accuracy, suitable for the band co-registration purposes of any multi-lens MSC. The close-range results show the same level of performance, whether band co-registration targets a specific object for 3D modelling or the target is equidistant from the camera.
Circuit for high resolution decoding of multi-anode microchannel array detectors
NASA Technical Reports Server (NTRS)
Kasle, David B. (Inventor)
1995-01-01
A circuit for high resolution decoding of multi-anode microchannel array detectors consisting of input registers accepting transient inputs from the anode array; anode encoding logic circuits connected to the input registers; midpoint pipeline registers connected to the anode encoding logic circuits; and pixel decoding logic circuits connected to the midpoint pipeline registers is described. A high resolution algorithm circuit operates in parallel with the pixel decoding logic circuit and computes a high resolution least significant bit to enhance the multianode microchannel array detector's spatial resolution by halving the pixel size and doubling the number of pixels in each axis of the anode array. A multiplexer is connected to the pixel decoding logic circuit and allows a user selectable pixel address output according to the actual multi-anode microchannel array detector anode array size. An output register concatenates the high resolution least significant bit onto the standard ten bit pixel address location to provide an eleven bit pixel address, and also stores the full eleven bit pixel address. A timing and control state machine is connected to the input registers, the anode encoding logic circuits, and the output register for managing the overall operation of the circuit.
Simultaneous dual-color fluorescence microscope: a characterization study.
Li, Zheng; Chen, Xiaodong; Ren, Liqiang; Song, Jie; Li, Yuhua; Zheng, Bin; Liu, Hong
2013-01-01
High spatial resolution and geometric accuracy are crucial for the chromosomal analysis performed in clinical cytogenetic applications. High-resolution, rapid, simultaneous acquisition of multiple fluorescent wavelengths can be achieved by concurrent imaging with multiple detectors; however, this class of microscope system functions differently from traditional fluorescence microscopes. The aim of this work is to develop a practical characterization framework to assess and optimize the performance of a high-resolution, dual-color fluorescence microscope designed for clinical chromosomal analysis. The dual-band microscopic imaging system utilizes a dichroic mirror, two sets of specially selected optical filters, and two detectors to simultaneously acquire two fluorescent wavelengths. The system's geometric distortion, linearity, modulation transfer function, and dual-detector alignment were characterized. Experimental results show that the geometric distortion at the lens periphery is less than 1%. Both fluorescent channels show linear signal responses, but there is a discrepancy between the two due to the detectors' non-uniform response ratios at different wavelengths. In terms of spatial resolution, the two contrast transfer function curves agree well across spatial frequency. The alignment measurement allows the cameras' alignment to be assessed quantitatively; a result image after adjustment demonstrates the reduced discrepancy achieved using the alignment measurement method. In this paper, we present a system characterization study, and its methods, for a specially designed imaging system for clinical cytogenetic applications. The presented characterization methods are not unique to this dual-color imaging system but are also applicable to the evaluation and optimization of other, similar multi-color microscopic imaging systems, improving their clinical utility for future cytogenetic applications.
NASA Astrophysics Data System (ADS)
Khakimov, R. I.; Henson, B. M.; Shin, D. K.; Hodgman, S. S.; Dall, R. G.; Baldwin, K. G. H.; Truscott, A. G.
2016-12-01
Ghost imaging is a counter-intuitive phenomenon—first realized in quantum optics—that enables the image of a two-dimensional object (mask) to be reconstructed using the spatio-temporal properties of a beam of particles with which it never interacts. Typically, two beams of correlated photons are used: one passes through the mask to a single-pixel (bucket) detector while the spatial profile of the other is measured by a high-resolution (multi-pixel) detector. The second beam never interacts with the mask. Neither detector can reconstruct the mask independently, but temporal cross-correlation between the two beams can be used to recover a ‘ghost’ image. Here we report the realization of ghost imaging using massive particles instead of photons. In our experiment, the two beams are formed by correlated pairs of ultracold, metastable helium atoms, which originate from s-wave scattering of two colliding Bose-Einstein condensates. We use higher-order Kapitza-Dirac scattering to generate a large number of correlated atom pairs, enabling the creation of a clear ghost image with submillimetre resolution. Future extensions of our technique could lead to the realization of ghost interference, and enable tests of Einstein-Podolsky-Rosen entanglement and Bell’s inequalities with atoms.
The plant virus microscope image registration method based on mismatches removing.
Wei, Lifang; Zhou, Shucheng; Dong, Heng; Mao, Qianzhuo; Lin, Jiaxiang; Chen, Riqing
2016-01-01
Electron microscopy is one of the major means of observing viruses. The view in virus microscope images is limited by specimen preparation and by the size of the camera's field of view. To address this, the virus sample is sectioned into multiple slices for information fusion, and image registration techniques are applied to obtain large-field, whole-section images. Image registration techniques have been developed over the past decades to increase the camera's effective field of view. Nevertheless, these approaches typically work in batch mode and rely on motorized microscopes; alternatively, the methods are conceived only to provide visually pleasing registration for image sequences with high overlap ratios. This work presents a method for registering virus microscope images with detailed visual information at subpixel accuracy, even when the overlap ratio of the image sequence is 10% or less. The proposed method focuses on the correspondence set and the inter-image transformation. A mismatch removal strategy based on spatial consistency and the components of each keypoint is proposed to enrich the correspondence set, and the translation model parameters as well as tonal inhomogeneities are corrected by hierarchical estimation and model selection. In the experiments performed, we tested different registration approaches and virus images, confirming that the translation model is not always stationary, despite the fact that the images of the sample come from the same sequence. The mismatch removal strategy makes subpixel-accurate registration of virus microscope images easier, and the optional parameters chosen through the hierarchical estimation and model selection strategies make the proposed method precise and reliable for image sequences with low overlap ratios. Copyright © 2015 Elsevier Ltd. All rights reserved.
New DTM Extraction Approach from Airborne Images Derived Dsm
NASA Astrophysics Data System (ADS)
Mousa, Y. A.; Helmholz, P.; Belton, D.
2017-05-01
In this work, a new filtering approach is proposed for fully automatic Digital Terrain Model (DTM) extraction from Digital Surface Models (DSMs) derived from very high resolution airborne images. Our approach represents an enhancement of the existing DTM extraction algorithm Multi-directional and Slope Dependent (MSD) by proposing parameters that are more reliable for the selection of ground pixels and the pixelwise classification. To achieve this, four main steps are implemented: Firstly, 8 well-distributed scanlines are used to search for minima as ground points within a pre-defined filtering window size. These selected ground points are stored with their positions on a 2D surface to create a network of ground points. Then, an initial DTM is created using an interpolation method to fill the gaps in the 2D surface. Afterwards, a pixel-to-pixel comparison between the initial DTM and the original DSM is performed, utilising pixelwise classification of ground and non-ground pixels by applying a vertical height threshold. Finally, the pixels classified as non-ground are removed and the remaining holes are filled. The approach is evaluated using the Vaihingen benchmark dataset provided by the ISPRS working group III/4. The evaluation includes the comparison of our approach, denoted as the Network of Ground Points (NGPs) algorithm, with the DTM created based on MSD as well as a reference DTM generated from LiDAR data. The results show that our proposed approach outperforms the MSD approach.
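The four steps above can be sketched in one dimension on a single DSM scanline: local minima within each filtering window become ground points, an initial DTM is interpolated between them, and pixels rising above it by more than a vertical threshold are classified as non-ground. The non-overlapping window, linear interpolation, and threshold value are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def extract_dtm_row(dsm_row, window=5, height_thr=2.0):
    """1-D sketch of the NGPs idea on a single DSM scanline."""
    dsm_row = np.asarray(dsm_row, dtype=float)
    n = len(dsm_row)
    # Step 1: ground-point candidates -- the minimum of each filtering window.
    ground_x = np.array(
        [i + np.argmin(dsm_row[i:i + window]) for i in range(0, n, window)]
    )
    ground_z = dsm_row[ground_x]
    # Step 2: initial DTM -- linear interpolation through the ground points.
    dtm = np.interp(np.arange(n), ground_x, ground_z)
    # Step 3: pixelwise classification -- non-ground if far above the DTM.
    non_ground = dsm_row - dtm > height_thr
    # Step 4 (removal and hole filling) is omitted from this sketch.
    return dtm, non_ground
```

On flat terrain of height 100 with a raised structure, the structure's pixels exceed the interpolated DTM by more than the threshold and are flagged as non-ground, while the surrounding minima keep the DTM pinned to the true terrain.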
Yang, Haw; Welsher, Kevin
2016-11-15
A system and method for non-invasively tracking a particle in a sample is disclosed. The system includes a 2-photon or confocal laser scanning microscope (LSM) and a particle-holding device coupled to a stage with X-Y and Z position control. The system also includes a tracking module having a tracking excitation laser and X-Y and Z radiation-gathering components configured to detect deviations of the particle in the X-Y and Z directions. The system also includes a processor, coupled to the X-Y and Z radiation-gathering components, that generates control signals configured to drive the stage X-Y and Z position controls to track the movement of the particle. The system may also include a synchronization module configured to generate LSM pixels stamped with the stage position and a processing module configured to generate a 3D image showing the 3D trajectory of a particle using the LSM pixels stamped with the stage position.