Identification of suitable fundus images using automated quality assessment methods.
Şevik, Uğur; Köse, Cemal; Berber, Tolga; Erdöl, Hidayet
2014-04-01
Retinal image quality assessment (IQA) is a crucial process for automated retinal image analysis systems to obtain an accurate and successful diagnosis of retinal diseases. Consequently, the first step in a good retinal image analysis system is measuring the quality of the input image. We present an approach for finding medically suitable retinal images for retinal diagnosis. We used a three-class grading system that consists of good, bad, and outlier classes. We created a retinal image quality dataset with a total of 216 consecutive images called the Diabetic Retinopathy Image Database. We identified the suitable images within the good images for automatic retinal image analysis systems using a novel method. Subsequently, we evaluated our retinal image suitability approach using the Digital Retinal Images for Vessel Extraction and Standard Diabetic Retinopathy Database Calibration level 1 public datasets. The results were measured through the F1 metric, which is a harmonic mean of precision and recall metrics. The highest F1 scores of the IQA tests were 99.60%, 96.50%, and 85.00% for good, bad, and outlier classes, respectively. Additionally, the accuracy of our suitable image detection approach was 98.08%. Our approach can be integrated into any automatic retinal analysis system with sufficient performance scores.
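As a quick reference for the evaluation metric named above, here is a minimal sketch of the F1 computation from true/false positive and false negative counts; the counts used in the example are illustrative, not taken from the paper.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative example: 96 images correctly flagged, 2 false alarms, 2 missed.
print(f1_score(tp=96, fp=2, fn=2))  # ~= 0.98
```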
Chen, Yili; Fu, Jixiang; Chu, Dawei; Li, Rongmao; Xie, Yaoqin
2017-11-27
A retinal prosthesis is designed to help the blind obtain some sight. It consists of an external part and an internal part. The external part is made up of a camera, an image processor and an RF transmitter. The internal part is made up of an RF receiver, an implant chip and microelectrodes. Currently, the number of microelectrodes is in the hundreds, and the mechanism by which an electrode stimulates the optic nerve is not fully understood. A simple hypothesis is that the pixels in an image correspond to the electrodes. The images captured by the camera should therefore be processed by suitable strategies so that they map onto the stimulation delivered by the electrodes, which raises the question of how to extract the important information from the captured image. Here, we use a region-of-interest (ROI) extraction algorithm to retain the important information and remove the redundant information. This paper explains the principles and functions of the ROI approach. Because we are targeting a real-time system, a fast ROI extraction algorithm is needed. We therefore simplified the ROI algorithm and used it in the external image-processing digital signal processing (DSP) system of the retinal prosthesis. The results show that our image-processing strategies are suitable for a real-time retinal prosthesis: they eliminate redundant information and preserve the useful information in a low-resolution image.
IPL Processing of the Viking Orbiter Images of Mars
NASA Technical Reports Server (NTRS)
Ruiz, R. M.; Elliott, D. A.; Yagi, G. M.; Pomphrey, R. B.; Power, M. A.; Farrell, W., Jr.; Lorre, J. J.; Benton, W. D.; Dewar, R. E.; Cullen, L. E.
1977-01-01
The Viking orbiter cameras returned over 9000 images of Mars during the 6-month nominal mission. Digital image processing was required to produce products suitable for quantitative and qualitative scientific interpretation. Processing included the production of surface elevation data using computer stereophotogrammetric techniques, crater classification based on geomorphological characteristics, and the generation of color products using multiple black-and-white images recorded through spectral filters. The Image Processing Laboratory of the Jet Propulsion Laboratory was responsible for the design, development, and application of the software required to produce these 'second-order' products.
Review methods for image segmentation from computed tomography images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik
Image segmentation is a challenging process in terms of accuracy, automation and robustness, especially for medical images. Many segmentation methods can be applied to medical images, but not all of them are suitable. For medical purposes, the aims of image segmentation are to study anatomical structure, identify regions of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths and the problems incurred with each are defined and explained. It is necessary to know the suitable segmentation method in order to obtain accurate segmentation. This paper can serve as a guide for researchers in choosing a suitable method for segmenting images from CT scans.
White-Light Optical Information Processing and Holography.
1984-06-22
Keywords: image processing, image deblurring, source encoding, signal sampling, coherence measurement, noise performance, pseudocolor encoding. Topics covered include broad spectral band color image deblurring, noise performance, and pseudocolor encoding with three primary colors; the reported technique is noted as particularly suitable for linear smeared color image deblurring.
Performance of InGaAs short wave infrared avalanche photodetector for low flux imaging
NASA Astrophysics Data System (ADS)
Singh, Anand; Pal, Ravinder
2017-11-01
The opto-electronic performance of an InGaAs/i-InGaAs/InP short wavelength infrared focal plane array suitable for high resolution imaging under low flux conditions and for ranging is presented. More than 85% quantum efficiency is achieved in the optimized detector structure. The isotropic nature of the wet etching process poses a challenge in maintaining the required control in the small-pitch, high-density detector array. An etching process is developed that achieves a low dark current density of 1 nA/cm2 in a detector array with 25 µm pitch at 298 K. A noise equivalent photon performance of less than one is achievable, indicating single photon detection capability. The reported photodiode is suitable for low-photon-flux active as well as passive imaging, optical information processing and quantum computing applications.
Lobster eye X-ray optics: Data processing from two 1D modules
NASA Astrophysics Data System (ADS)
Nentvich, O.; Urban, M.; Stehlikova, V.; Sieger, L.; Hudec, R.
2017-07-01
X-ray imaging is usually done with Wolter I telescopes. These are suitable for imaging a small part of the sky, not for all-sky monitoring. Such monitoring can be done with Lobster eye optics, which can theoretically have a field of view of up to 360 deg. An all-sky monitoring system enables quick identification of a source and its direction. This paper describes the possibility of using two independent one-dimensional Lobster eye modules for this purpose instead of Wolter I optics, and the post-processing of their data into a 2D image. This arrangement allows scanning with less energy loss compared to Wolter I or two-dimensional Lobster eye optics. It is especially suitable for very weak sources.
Achromatical Optical Correlator
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin; Liu, Hua-Kuang
1989-01-01
Signal-to-noise ratio exceeds that of monochromatic correlator. Achromatical optical correlator uses multiple-pinhole diffraction of dispersed white light to form superposed multiple correlations of input and reference images in output plane. Set of matched spatial filters made by multiple-exposure holographic process, each exposure using suitably-scaled input image and suitable angle of reference beam. Recording-aperture mask translated to appropriate horizontal position for each exposure. Noncoherent illumination suitable for applications involving recognition of color and determination of scale. When fully developed, achromatical correlators will be useful for recognition of patterns; for example, in industrial inspection and search for selected features in aerial photographs.
Prototype Focal-Plane-Array Optoelectronic Image Processor
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi; Shaw, Timothy; Yu, Jeffrey
1995-01-01
Prototype very-large-scale integrated (VLSI) planar array of optoelectronic processing elements combines speed of optical input and output with flexibility of reconfiguration (programmability) of electronic processing medium. Basic concept of processor described in "Optical-Input, Optical-Output Morphological Processor" (NPO-18174). Performs binary operations on binary (black and white) images. Each processing element corresponds to one picture element of image and is located at that picture element. Includes input-plane photodetector in form of parasitic phototransistor that is part of processing circuit. Output of each processing circuit used to modulate one picture element in output-plane liquid-crystal display device. Intended to implement morphological processing algorithms that transform image into set of features suitable for high-level processing; e.g., recognition.
Nighttime images fusion based on Laplacian pyramid
NASA Astrophysics Data System (ADS)
Wu, Cong; Zhan, Jinhao; Jin, Jicheng
2018-02-01
This paper describes the average weighted fusion method, image pyramid fusion and the wavelet transform, and applies these methods to the fusion of multiple-exposure nighttime images. By calculating the information entropy and cross entropy of the fused images, we can evaluate the effect of the different fusion methods. Experiments showed that the Laplacian pyramid image fusion algorithm is suitable for nighttime image fusion: it can reduce halos while preserving image details.
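As an illustration of the evaluation criteria named above (information entropy and cross entropy of the fused image), here is a minimal numpy sketch for 8-bit grayscale images; the exact cross-entropy definition used in the paper may differ, and this is not the authors' code.

```python
import numpy as np

def entropy(img: np.ndarray) -> float:
    """Shannon entropy (bits/pixel) of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def cross_entropy(src: np.ndarray, fused: np.ndarray, eps: float = 1e-12) -> float:
    """Cross entropy H(p, q) = -sum p*log2(q) between the gray-level
    distributions of a source exposure and the fused image (8-bit)."""
    p = np.bincount(src.ravel(), minlength=256) / src.size
    q = np.bincount(fused.ravel(), minlength=256) / fused.size
    return float(-(p * np.log2(q + eps)).sum())
```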
Massively parallel information processing systems for space applications
NASA Technical Reports Server (NTRS)
Schaefer, D. H.
1979-01-01
NASA is developing massively parallel systems for ultra high speed processing of digital image data collected by satellite borne instrumentation. Such systems contain thousands of processing elements. Work is underway on the design and fabrication of the 'Massively Parallel Processor', a ground computer containing 16,384 processing elements arranged in a 128 x 128 array. This computer uses existing technology. Advanced work includes the development of semiconductor chips containing thousands of feedthrough paths. Massively parallel image analog to digital conversion technology is also being developed. The goal is to provide compact computers suitable for real-time onboard processing of images.
Dias, Philipe A; Dunkel, Thiemo; Fajado, Diego A S; Gallegos, Erika de León; Denecke, Martin; Wiedemann, Philipp; Schneider, Fabio K; Suhr, Hajo
2016-06-11
In the activated sludge process, problems of filamentous bulking and foaming can occur due to overgrowth of certain filamentous bacteria. Nowadays, these microorganisms are typically monitored by means of light microscopy, commonly combined with staining techniques. As drawbacks, these methods are susceptible to human errors, subjectivity and limited by the use of discontinuous microscopy. The in situ microscope appears as a suitable tool for continuous monitoring of filamentous bacteria, providing real-time examination, automated analysis and eliminating sampling, preparation and transport of samples. In this context, a proper image processing algorithm is proposed for automated recognition and measurement of filamentous objects. This work introduces a method for real-time evaluation of images without any staining, phase-contrast or dilution techniques, differently from studies present in the literature. Moreover, we introduce an algorithm which estimates the total extended filament length based on geodesic distance calculation. For a period of twelve months, samples from an industrial activated sludge plant were weekly collected and imaged without any prior conditioning, replicating real environment conditions. Trends of the filament growth rate, the most important parameter for decision making, are correctly identified. For reference images whose filaments were marked by specialists, the algorithm correctly recognized 72% of the filament pixels, with a false positive rate of at most 14%. An average execution time of 0.7 s per image was achieved. Experiments have shown that the designed algorithm provided a suitable quantification of filaments when compared with human perception and standard methods. The algorithm's average execution time proved its suitability for being optimally mapped into a computational architecture to provide real-time monitoring.
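A minimal sketch of one way to approximate total extended filament length from a binary filament mask via skeletonization; the paper's geodesic-distance formulation is more elaborate, and the pixel-size parameter here is an assumption for illustration only.

```python
import numpy as np
from skimage.morphology import skeletonize

def total_filament_length(mask: np.ndarray, pixel_size_um: float = 1.0) -> float:
    """Rough estimate of total extended filament length (micrometres):
    number of skeleton pixels of the binary mask times the pixel size."""
    skeleton = skeletonize(mask.astype(bool))
    return float(np.count_nonzero(skeleton)) * pixel_size_um
```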
NASA Astrophysics Data System (ADS)
Zhou, Xiaohu; Neubauer, Franz; Zhao, Dong; Xu, Shichao
2015-01-01
High-precision geometric correction is a difficult problem in airborne hyperspectral remote sensing image processing, and conventional correction methods based on selecting ground control points are not suitable for airborne hyperspectral images. An inertial measurement unit combined with a differential global positioning system (IMU/DGPS) is introduced to correct the synchronously scanned Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing images. Attitude parameters synchronized with OMIS II were first obtained from the IMU/DGPS. Second, coordinate conversion and flight attitude parameter calculations were conducted. Third, according to the imaging principle of OMIS II, a mathematical correction was applied and the corrected image pixels were resampled. Better image processing results were thereby achieved.
Towards an Optimal Interest Point Detector for Measurements in Ultrasound Images
NASA Astrophysics Data System (ADS)
Zukal, Martin; Beneš, Radek; Číka, Petr; Říha, Kamil
2013-12-01
This paper focuses on the comparison of different interest point detectors and their utilization for measurements in ultrasound (US) images. Certain medical examinations are based on speckle tracking which strongly relies on features that can be reliably tracked frame to frame. Only significant features (interest points) resistant to noise and brightness changes within US images are suitable for accurate long-lasting tracking. We compare three interest point detectors - Harris-Laplace, Difference of Gaussian (DoG) and Fast Hessian - and identify the most suitable one for use in US images on the basis of an objective criterion. Repeatability rate is assumed to be an objective quality measure for comparison. We have measured repeatability in images corrupted by different types of noise (speckle noise, Gaussian noise) and for changes in brightness. The Harris-Laplace detector outperformed its competitors and seems to be a sound option when choosing a suitable interest point detector for US images. However, it has to be noted that Fast Hessian and DoG detectors achieved better results in terms of processing speed.
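A minimal sketch of the repeatability rate used as the objective criterion above: the fraction of interest points detected in a reference image that reappear, within a tolerance, in a corrupted version of the same image. The function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def repeatability(ref_pts: np.ndarray, test_pts: np.ndarray, tol: float = 1.5) -> float:
    """ref_pts, test_pts: (N, 2) arrays of (x, y) interest point locations
    in the reference image and in the corrupted image (already mapped back
    to the reference frame). Returns the fraction of reference points that
    have a test point within `tol` pixels."""
    if len(ref_pts) == 0 or len(test_pts) == 0:
        return 0.0
    d = np.linalg.norm(ref_pts[:, None, :] - test_pts[None, :, :], axis=2)
    return float((d.min(axis=1) <= tol).mean())
```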
Measurement of action spectra of light-activated processes
NASA Astrophysics Data System (ADS)
Ross, Justin; Zvyagin, Andrei V.; Heckenberg, Norman R.; Upcroft, Jacqui; Upcroft, Peter; Rubinsztein-Dunlop, Halina H.
2006-01-01
We report on a new experimental technique suitable for measurement of light-activated processes, such as fluorophore transport. The usefulness of this technique is derived from its capacity to decouple the imaging and activation processes, allowing fluorescent imaging of fluorophore transport at a convenient activation wavelength. We demonstrate the efficiency of this new technique in determination of the action spectrum of the light mediated transport of rhodamine 123 into the parasitic protozoan Giardia duodenalis.
NASA Technical Reports Server (NTRS)
Murray, N. D.
1985-01-01
Current technology projections indicate a lack of availability of special purpose computing for Space Station applications. Potential functions for video image special purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Also, architectural approaches are being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented into an overall architectural approach that will provide image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams and outlines.
Quantitative subsurface analysis using frequency modulated thermal wave imaging
NASA Astrophysics Data System (ADS)
Subhani, S. K.; Suresh, B.; Ghali, V. S.
2018-01-01
Quantitative depth analysis of an anomaly with enhanced depth resolution is a challenging task in estimating the depth of subsurface anomalies using thermography. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and then analyzing the thermal response with a suitable post-processing approach to resolve subsurface details. Conventional Fourier transform based methods used for post-processing, however, unscramble the frequencies with limited frequency resolution and therefore yield a finite depth resolution. The spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which can further improve the depth resolution to axially explore the finest subsurface features. Quantitative depth analysis with this augmented depth resolution is proposed to provide a closer estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first solution for quantitative depth estimation in frequency modulated thermal wave imaging.
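For reference, spectral zooming with the chirp z-transform can be sketched with scipy.signal.czt (available in SciPy 1.8 and later); the band limits and point count below are placeholders, and this is not the authors' processing chain.

```python
import numpy as np
from scipy.signal import czt  # SciPy >= 1.8

def zoom_spectrum(x: np.ndarray, fs: float, f1: float, f2: float, m: int = 512) -> np.ndarray:
    """Evaluate the spectrum of x on m points between f1 and f2 (Hz),
    giving a frequency resolution of (f2 - f1) / m instead of fs / len(x)."""
    w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))  # ratio between successive points
    a = np.exp(2j * np.pi * f1 / fs)                # starting point on the unit circle
    return czt(x, m=m, w=w, a=a)
```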
Segmentation-based L-filtering of speckle noise in ultrasonic images
NASA Astrophysics Data System (ADS)
Kofidis, Eleftherios; Theodoridis, Sergios; Kotropoulos, Constantine L.; Pitas, Ioannis
1994-05-01
We introduce segmentation-based L-filters, that is, filtering processes combining segmentation and (nonadaptive) optimum L-filtering, and use them for the suppression of speckle noise in ultrasonic (US) images. With the aid of a suitable modification of the learning vector quantizer self-organizing neural network, the image is segmented into regions of approximately homogeneous first-order statistics. For each such region a minimum mean-squared error L-filter is designed on the basis of a multiplicative noise model by using the histogram of grey values as an estimate of the parent distribution of the noisy observations and a suitable estimate of the original signal in the corresponding region. Thus, we obtain a bank of L-filters that correspond to and operate on different image regions. Simulation results on a simulated US B-mode image of a tissue-mimicking phantom are presented which verify the superiority of the proposed method as compared to a number of conventional filtering strategies in terms of a suitably defined signal-to-noise ratio measure and detection-theoretic performance measures.
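A minimal sketch of an L-filter, i.e. a linear combination of the order statistics inside a sliding window; the per-region minimum mean-squared error coefficient design described above is omitted, and the window size and coefficients here are illustrative.

```python
import numpy as np

def l_filter(img: np.ndarray, coeffs: np.ndarray, win: int = 3) -> np.ndarray:
    """At each pixel, sort the win*win neighbourhood and take the inner
    product of the sorted values with `coeffs` (len(coeffs) == win*win).
    coeffs = 1/N everywhere gives the moving average; a single 1 in the
    middle position gives the median filter."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = np.sort(padded[i:i + win, j:j + win], axis=None)
            out[i, j] = window @ coeffs
    return out
```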
Dynamic feature analysis for Voyager at the Image Processing Laboratory
NASA Technical Reports Server (NTRS)
Yagi, G. M.; Lorre, J. J.; Jepsen, P. L.
1978-01-01
Voyager 1 and 2 were launched from Cape Kennedy to Jupiter, Saturn, and beyond on September 5, 1977 and August 20, 1977. The role of the Image Processing Laboratory is to provide the Voyager Imaging Team with the necessary support to identify atmospheric features (tiepoints) for Jupiter and Saturn data, and to analyze and display them in a suitable form. This support includes the software needed to acquire and store tiepoints, the hardware needed to interactively display images and tiepoints, and the general image processing environment necessary for decalibration and enhancement of the input images. The objective is an understanding of global circulation in the atmospheres of Jupiter and Saturn. Attention is given to the Voyager imaging subsystem, the Voyager imaging science objectives, hardware, software, display monitors, a dynamic feature study, decalibration, navigation, and data base.
Computer system for definition of the quantitative geometry of musculature from CT images.
Daniel, Matej; Iglic, Ales; Kralj-Iglic, Veronika; Konvicková, Svatava
2005-02-01
A computer system for the quantitative determination of musculoskeletal geometry from computed tomography (CT) images has been developed. The computer system processes series of CT images to obtain a three-dimensional (3D) model of bony structures in which the effective muscle fibres can be interactively defined. The presented computer system has a flexible modular structure and is also suitable for educational purposes.
Neural networks for data compression and invariant image recognition
NASA Technical Reports Server (NTRS)
Gardner, Sheldon
1989-01-01
An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF coded images. We describe a 1-D shape function method for coding of scale and rotationally invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.
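One common way to realize a 1-D shape function of the kind described above is to sample the boundary-to-centroid distance as a function of angle, normalize by its mean for scale tolerance, and take FFT magnitudes so that rotation (a circular shift of the waveform) leaves the descriptor unchanged. This is a generic sketch under those assumptions, not the paper's exact formulation.

```python
import numpy as np

def shape_function(mask: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Rotation/scale-tolerant descriptor from a binary object mask."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    ang = np.arctan2(ys - cy, xs - cx)               # angle of each object pixel
    rad = np.hypot(ys - cy, xs - cx)                 # distance to the centroid
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    r = np.zeros(n_bins)
    np.maximum.at(r, bins, rad)                      # outermost radius per angular bin
    r /= r.mean() + 1e-12                            # scale normalisation
    return np.abs(np.fft.rfft(r))                    # rotation -> circular shift -> invariant magnitudes
```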
Arrigoni, Simone; Turra, Giovanni; Signoroni, Alberto
2017-09-01
With the rapid diffusion of Full Laboratory Automation systems, Clinical Microbiology is currently experiencing a new digital revolution. The ability to capture and process large amounts of visual data from microbiological specimen processing enables the definition of completely new objectives. These include the direct identification of pathogens growing on culturing plates, with expected improvements in rapid definition of the right treatment for patients affected by bacterial infections. In this framework, the synergies between light spectroscopy and image analysis, offered by hyperspectral imaging, are of prominent interest. This leads us to assess the feasibility of a reliable and rapid discrimination of pathogens through the classification of their spectral signatures extracted from hyperspectral image acquisitions of bacteria colonies growing on blood agar plates. We designed and implemented the whole data acquisition and processing pipeline and performed a comprehensive comparison among 40 combinations of different data preprocessing and classification techniques. High discrimination performance has been achieved also thanks to improved colony segmentation and spectral signature extraction. Experimental results reveal the high accuracy and suitability of the proposed approach, driving the selection of most suitable and scalable classification pipelines and stimulating clinical validations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Visually enhanced CCTV digital surveillance utilizing Intranet and Internet.
Ozaki, Nobuyuki
2002-07-01
This paper describes a solution for integrated plant supervision utilizing closed circuit television (CCTV) digital surveillance. Three basic requirements are first addressed as the platform of the system, with discussion on the suitable video compression. The system configuration is described in blocks. The system provides surveillance functionality: real-time monitoring, and process analysis functionality: a troubleshooting tool. This paper describes the formulation of practical performance design for determining various encoder parameters. It also introduces image processing techniques for enhancing the original CCTV digital image to lessen the burden on operators. Some screenshots are listed for the surveillance functionality. For the process analysis, an image searching filter supported by image processing techniques is explained with screenshots. Multimedia surveillance, which is the merger with process data surveillance, or the SCADA system, is also explained.
Researching on the process of remote sensing video imagery
NASA Astrophysics Data System (ADS)
Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan
Remotely sensed imagery from low-altitude unmanned air vehicles has the advantages of higher resolution, easy acquisition, real-time access, etc. It has been widely used in mapping, target identification and other fields in recent years. However, because of the limitations of the acquisition conditions, the video images are unstable, the targets move fast and the shooting background is complex, which makes such video difficult to process. In other fields, especially computer vision, research on video images is more extensive, and it is very helpful for processing low-altitude remotely sensed imagery. On this basis, this paper analyzes and summarizes a large amount of video image processing work from different fields, including research purposes, data sources, and the pros and cons of the technology. Meanwhile, this paper explores the technical methods that are more suitable for low-altitude remote sensing video image processing.
Spaceborne SAR Imaging Algorithm for Coherence Optimized.
Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun
2016-01-01
This paper proposes a SAR imaging algorithm with maximum coherence, based on existing SAR imaging algorithms. The basic idea of SAR imaging algorithms is that the output signal can have maximum signal-to-noise ratio (SNR) by using the optimal imaging parameters. A traditional imaging algorithm can achieve the best focusing effect, but introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper instead applies consistent imaging parameters when focusing the SAR echoes. Although the SNR of the output signal is reduced slightly, the coherence is largely preserved, and an interferogram of high quality is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and applications.
Microscope-integrated optical coherence tomography for image-aided positioning of glaucoma surgery
NASA Astrophysics Data System (ADS)
Li, Xiqi; Wei, Ling; Dong, Xuechuan; Huang, Ping; Zhang, Chun; He, Yi; Shi, Guohua; Zhang, Yudong
2015-07-01
Most glaucoma surgeries involve creating new aqueous outflow pathways with the use of a small surgical instrument. This article reports a microscope-integrated, real-time, high-speed, swept-source optical coherence tomography (SS-OCT) system with a 1310-nm light source for glaucoma surgery. A special mechanism was designed to produce an adjustable system suitable for use in surgery. A two-graphics-processing-unit architecture was used to speed up data processing and real-time volumetric rendering. The position of the surgical instrument can be monitored and measured using the microscope and a grid-inserted image from the SS-OCT. Finally, simulated experiments were performed to assess the effectiveness of the integrated system. Experimental results show that this system is a suitable positioning tool for glaucoma surgery.
Assessing the impact of graphical quality on automatic text recognition in digital maps
NASA Astrophysics Data System (ADS)
Chiang, Yao-Yi; Leyk, Stefan; Honarvar Nazari, Narges; Moghaddam, Sima; Tan, Tian Xiang
2016-08-01
Converting geographic features (e.g., place names) in map images into a vector format is the first step for incorporating cartographic information into a geographic information system (GIS). With the advancement in computational power and algorithm design, map processing systems have been considerably improved over the last decade. However, the fundamental map processing techniques such as color image segmentation, (map) layer separation, and object recognition are sensitive to minor variations in graphical properties of the input image (e.g., scanning resolution). As a result, most map processing results would not meet user expectations if the user does not "properly" scan the map of interest, pre-process the map image (e.g., using compression or not), and train the processing system, accordingly. These issues could slow down the further advancement of map processing techniques as such unsuccessful attempts create a discouraged user community, and less sophisticated tools would be perceived as more viable solutions. Thus, it is important to understand what kinds of maps are suitable for automatic map processing and what types of results and process-related errors can be expected. In this paper, we shed light on these questions by using a typical map processing task, text recognition, to discuss a number of map instances that vary in suitability for automatic processing. We also present an extensive experiment on a diverse set of scanned historical maps to provide measures of baseline performance of a standard text recognition tool under varying map conditions (graphical quality) and text representations (that can vary even within the same map sheet). Our experimental results help the user understand what to expect when a fully or semi-automatic map processing system is used to process a scanned map with certain (varying) graphical properties and complexities in map content.
Multilayer mounting enables long-term imaging of zebrafish development in a light sheet microscope.
Kaufmann, Anna; Mickoleit, Michaela; Weber, Michael; Huisken, Jan
2012-09-01
Light sheet microscopy techniques, such as selective plane illumination microscopy (SPIM), are ideally suited for time-lapse imaging of developmental processes lasting several hours to a few days. The success of this promising technology has mainly been limited by the lack of suitable techniques for mounting fragile samples. Embedding zebrafish embryos in agarose, which is common in conventional confocal microscopy, has resulted in severe growth defects and unreliable results. In this study, we systematically quantified the viability and mobility of zebrafish embryos mounted under more suitable conditions. We found that tubes made of fluorinated ethylene propylene (FEP) filled with low concentrations of agarose or methylcellulose provided an optimal balance between sufficient confinement of the living embryo in a physiological environment over 3 days and optical clarity suitable for fluorescence imaging. We also compared the effect of different concentrations of Tricaine on the development of zebrafish and provide guidelines for its optimal use depending on the application. Our results will make light sheet microscopy techniques applicable to more fields of developmental biology, in particular the multiview long-term imaging of zebrafish embryos and other small organisms. Furthermore, the refinement of sample preparation for in toto and in vivo imaging will promote other emerging optical imaging techniques, such as optical projection tomography (OPT).
Recent advances in live cell imaging of hepatoma cells
2014-01-01
Live cell imaging enables the study of dynamic processes of living cells in real time by use of suitable reporter proteins and the staining of specific cellular structures and/or organelles. With the availability of advanced optical devices and improved cell culture protocols it has become a rapidly growing research methodology. The success of this technique relies mainly on the selection of suitable reporter proteins, construction of recombinant plasmids possessing cell type specific promoters as well as reliable methods of gene transfer. This review aims to provide an overview of the recent developments in the field of marker proteins (bioluminescence and fluorescent) and methodologies (fluorescent resonance energy transfer, fluorescent recovery after photobleaching and proximity ligation assay) employed as to achieve an improved imaging of biological processes in hepatoma cells. Moreover, different expression systems of marker proteins and the modes of gene transfer are discussed with emphasis on the study of lipid droplet formation in hepatocytes as an example. PMID:25005127
NASA Astrophysics Data System (ADS)
Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph
2016-09-01
CVIPtools is a software package for the exploration of computer vision and image processing developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - a) the CVIPtools Graphical User Interface, b) the CVIPtools C library and c) the CVIPtools MATLAB toolbox - which makes it accessible to a variety of different users. It offers students, faculty, researchers and any other user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis, and the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, the algorithm for the automatic creation of masks for veterinary thermographic images is presented.
Thread concept for automatic task parallelization in image analysis
NASA Astrophysics Data System (ADS)
Lueckenhaus, Maximilian; Eckstein, Wolfgang
1998-09-01
Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when changing the hardware. Therefore it is highly desirable to do the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context, but may follow different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs by taking into account the available hardware. The tests made with our system prototype show that the thread concept combined with the agent paradigm is suitable for speeding up image processing by an automatic parallelization of image analysis tasks.
Raspberry Pi-powered imaging for plant phenotyping.
Tovar, Jose C; Hoyer, J Steen; Lin, Andy; Tielking, Allison; Callen, Steven T; Elizabeth Castillo, S; Miller, Michael; Tessman, Monica; Fahlgren, Noah; Carrington, James C; Nusinow, Dmitri A; Gehan, Malia A
2018-03-01
Image-based phenomics is a powerful approach to capture and quantify plant diversity. However, commercial platforms that make consistent image acquisition easy are often cost-prohibitive. To make high-throughput phenotyping methods more accessible, low-cost microcomputers and cameras can be used to acquire plant image data. We used low-cost Raspberry Pi computers and cameras to manage and capture plant image data. Detailed here are three different applications of Raspberry Pi-controlled imaging platforms for seed and shoot imaging. Images obtained from each platform were suitable for extracting quantifiable plant traits (e.g., shape, area, height, color) en masse using open-source image processing software such as PlantCV. This protocol describes three low-cost platforms for image acquisition that are useful for quantifying plant diversity. When coupled with open-source image processing tools, these imaging platforms provide viable low-cost solutions for incorporating high-throughput phenomics into a wide range of research programs.
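As a flavor of the kind of low-cost acquisition described above, here is a minimal time-lapse capture sketch using the legacy picamera Python module on a Raspberry Pi; the resolution, interval and file paths are illustrative, and images captured this way could then be analyzed with tools such as PlantCV.

```python
import time
from picamera import PiCamera  # legacy Raspberry Pi camera module

camera = PiCamera()
camera.resolution = (1920, 1080)
camera.start_preview()
time.sleep(2)  # let exposure and white balance settle

# Capture one image every 10 minutes for 24 hours (illustrative schedule).
for i in range(144):
    camera.capture('/home/pi/images/plant_{:04d}.jpg'.format(i))
    time.sleep(600)
```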
High-performance image processing on the desktop
NASA Astrophysics Data System (ADS)
Jordan, Stephen D.
1996-04-01
The suitability of computers to the task of medical image visualization for the purposes of primary diagnosis and treatment planning depends on three factors: speed, image quality, and price. To be widely accepted the technology must increase the efficiency of the diagnostic and planning processes. This requires processing and displaying medical images of various modalities in real-time, with accuracy and clarity, on an affordable system. Our approach to meeting this challenge began with market research to understand customer image processing needs. These needs were translated into system-level requirements, which in turn were used to determine which image processing functions should be implemented in hardware. The result is a computer architecture for 2D image processing that is both high-speed and cost-effective. The architectural solution is based on the high-performance PA-RISC workstation with an HCRX graphics accelerator. The image processing enhancements are incorporated into the image visualization accelerator (IVX) which attaches to the HCRX graphics subsystem. The IVX includes a custom VLSI chip which has a programmable convolver, a window/level mapper, and an interpolator supporting nearest-neighbor, bi-linear, and bi-cubic modes. This combination of features can be used to enable simultaneous convolution, pan, zoom, rotate, and window/level control into 1 k by 1 k by 16-bit medical images at 40 frames/second.
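For reference, the window/level mapping performed by the custom VLSI chip described above can be expressed in a few lines of numpy; this is a software sketch of the standard operation, not the hardware design.

```python
import numpy as np

def window_level(img: np.ndarray, window: float, level: float) -> np.ndarray:
    """Map a 16-bit medical image to 8-bit display values: gray levels in
    [level - window/2, level + window/2] are stretched to [0, 255]."""
    lo = level - window / 2.0
    out = (img.astype(float) - lo) / window * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)
```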
Development of image analysis software for quantification of viable cells in microchips.
Georg, Maximilian; Fernández-Cabada, Tamara; Bourguignon, Natalia; Karp, Paola; Peñaherrera, Ana B; Helguera, Gustavo; Lerner, Betiana; Pérez, Maximiliano S; Mertelsmann, Roland
2018-01-01
Over the past few years, image analysis has emerged as a powerful tool for analyzing various cell biology parameters in an unprecedented and highly specific manner. The amount of data that is generated requires automated methods for the processing and analysis of all the resulting information. The software available so far is suitable for the processing of fluorescence and phase contrast images, but often does not provide good results for transmission light microscopy images, due to the intrinsic variation of the image acquisition technique itself (adjustment of brightness/contrast, for instance) and the variability in image acquisition introduced by operators and equipment. In this contribution, we present an image processing software package, Python-based image analysis for cell growth (PIACG), that is able to calculate, in a highly efficient way, the total area of the well occupied by cells with fusiform and rounded morphology in response to different concentrations of fetal bovine serum in microfluidic chips, from transmission light microscopy images.
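A minimal sketch of the kind of measurement described above (the fraction of the image area occupied by cells), using Otsu thresholding from scikit-image; PIACG's own pipeline for transmitted-light images is more involved, and the assumption that cells appear darker than the background is made here only for illustration.

```python
import numpy as np
from skimage import io, filters, morphology

def occupied_area_fraction(path: str) -> float:
    """Fraction of image pixels classified as cells in a grayscale image."""
    img = io.imread(path, as_gray=True)
    thresh = filters.threshold_otsu(img)
    cells = img < thresh                               # assumes cells are darker than background
    cells = morphology.remove_small_objects(cells, min_size=50)
    return float(cells.mean())
```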
A quality-refinement process for medical imaging applications.
Neuhaus, J; Maleike, D; Nolden, M; Kenngott, H-G; Meinzer, H-P; Wolf, I
2009-01-01
To introduce and evaluate a process for refinement of software quality that is suitable for research groups. In order to avoid constraining researchers too much, the quality improvement process has to be designed carefully. The scope of this paper is to present and evaluate a process to advance quality aspects of existing research prototypes in order to make them ready for initial clinical studies. The proposed process is tailored for research environments and is therefore more lightweight than traditional quality management processes. It focuses on quality criteria that are important at the given stage of the software life cycle, and the usage of tools that automate aspects of the process is emphasized. To evaluate the additional effort that comes along with the process, it was exemplarily applied to eight prototypical software modules for medical image processing. The introduced process has been applied to improve the quality of all prototypes so that they could be successfully used in clinical studies. The quality refinement required an average of 13 person-days of additional effort per project. Overall, 107 bugs were found and resolved by applying the process. Careful selection of quality criteria and the usage of automated process tools lead to a lightweight quality refinement process suitable for scientific research groups that can be applied to ensure a successful transfer of technical software prototypes into clinical research workflows.
Application of optical character recognition in thermal image processing
NASA Astrophysics Data System (ADS)
Chan, W. T.; Sim, K. S.; Tso, C. P.
2011-07-01
This paper presents the results of a study on the reliability of the thermal imager compared to other devices that are used in preventive maintenance. Several case studies are used to facilitate the comparisons. When any device is found to perform unsatisfactorily where there is a suspected fault, its short-fall is determined so that the other devices may compensate, if possible. This study discovered that the thermal imager is not suitable or efficient enough for systems that happen to have little contrast in temperature between its parts or small but important parts that have their heat signatures obscured by those from other parts. The thermal imager is also found to be useful for preliminary examinations of certain systems, after which other more economical devices are suitable substitutes for further examinations. The findings of this research will be useful to the design and planning of preventive maintenance routines for industrial benefits.
A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer
NASA Astrophysics Data System (ADS)
Luckman, Adrian J.; Allinson, Nigel M.
1989-03-01
A low cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system is comprised of real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen and program control is directed mostly by pop-up menus.
NASA Astrophysics Data System (ADS)
QingJie, Wei; WenBin, Wang
2017-06-01
In this paper, image retrieval using a deep convolutional neural network combined with regularization and the PRelu activation function is studied, improving image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which are very suitable for processing images. Using a deep convolutional neural network is better than directly extracting visual features for image retrieval. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PRelu activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
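A minimal PyTorch sketch of the two ingredients named above, a PReLU activation and an L1 weight penalty added to the loss; the network size, descriptor dimension and regularization strength are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SmallRetrievalCNN(nn.Module):
    def __init__(self, n_features: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.PReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.PReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, n_features)  # retrieval descriptor

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def l1_penalty(model: nn.Module, lam: float = 1e-5) -> torch.Tensor:
    """L1 regularization term to be added to the task loss."""
    return lam * sum(p.abs().sum() for p in model.parameters())

# total_loss = task_loss(descriptors, labels) + l1_penalty(model)
```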
NASA Technical Reports Server (NTRS)
Szepesi, Z.
1978-01-01
The fabrication process and transfer characteristics for solid state radiographic image transducers (radiographic amplifier screens) are described. These screens are for use in realtime nondestructive evaluation procedures that require large format radiographic images with contrast and resolution capabilities unavailable with conventional fluoroscopic screens. The screens are suitable for in-motion, on-line radiographic inspection by means of closed circuit television. Experimental effort was made to improve image quality and response to low energy (5 kV and up) X-rays.
Enhancing Ground Based Telescope Performance with Image Processing
2013-11-13
Excerpts: the work is driven by the need to detect small, faint objects with relatively short integration times, thereby avoiding streaking of the satellite image across multiple CCD pixels so that the objects are suitably modeled as point sources. Observations were made at the time right before the eclipse, and the orbital elements of the satellite were entered into the SST's tracking system.
Hielscher, Andreas H; Bartel, Sebastian
2004-02-01
Optical tomography (OT) is a fast developing novel imaging modality that uses near-infrared (NIR) light to obtain cross-sectional views of optical properties inside the human body. A major challenge remains the time-consuming, computational-intensive image reconstruction problem that converts NIR transmission measurements into cross-sectional images. To increase the speed of iterative image reconstruction schemes that are commonly applied for OT, we have developed and implemented several parallel algorithms on a cluster of workstations. Static process distribution as well as dynamic load balancing schemes suitable for heterogeneous clusters and varying machine performances are introduced and tested. The resulting algorithms are shown to accelerate the reconstruction process to various degrees, substantially reducing the computation times for clinically relevant problems.
The Role of Motion Concepts in Understanding Non-Motion Concepts
Khatin-Zadeh, Omid; Banaruee, Hassan; Khoshsima, Hooshang; Marmolejo-Ramos, Fernando
2017-01-01
This article discusses a specific type of metaphor in which an abstract non-motion domain is described in terms of a motion event. Abstract non-motion domains are inherently different from concrete motion domains. However, motion domains are used to describe abstract non-motion domains in many metaphors. Three main reasons are suggested for the suitability of motion events in such metaphorical descriptions. Firstly, motion events usually have high degrees of concreteness. Secondly, motion events are highly imageable. Thirdly, components of any motion event can be imagined almost simultaneously within a three-dimensional space. These three characteristics make motion events suitable domains for describing abstract non-motion domains, and facilitate the process of online comprehension throughout language processing. Extending the main point into the field of mathematics, this article discusses the process of transforming abstract mathematical problems into imageable geometric representations within the three-dimensional space. This strategy is widely used by mathematicians to solve highly abstract and complex problems. PMID:29240715
NASA Technical Reports Server (NTRS)
Forrest, R. B.; Eppes, T. A.; Ouellette, R. J.
1973-01-01
Studies were performed to evaluate various image positioning methods for possible use in the earth observatory satellite (EOS) program and other earth resource imaging satellite programs. The primary goal is the generation of geometrically corrected and registered images, positioned with respect to the earth's surface. The EOS sensors which were considered were the thematic mapper, the return beam vidicon camera, and the high resolution pointable imager. The image positioning methods evaluated consisted of various combinations of satellite data and ground control points. It was concluded that EOS attitude control system design must be considered as a part of the image positioning problem for EOS, along with image sensor design and ground image processing system design. Study results show that, with suitable efficiency for ground control point selection and matching activities during data processing, extensive reliance should be placed on use of ground control points for positioning the images obtained from EOS and similar programs.
NASA Astrophysics Data System (ADS)
Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun
2016-05-01
In this paper, an integral design that combines the optical system with image processing is introduced to obtain high resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in a failure of efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function of optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. Then the Wiener filter algorithm is adopted to process the simulated images, and the mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit of the optical imaging system are not the best, it can provide image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages in simplifying structure and reducing cost, as well as obtaining high resolution images, which gives it a promising perspective for industrial application.
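For reference, a frequency-domain Wiener deconvolution plus MSE evaluation can be sketched in a few lines of numpy; the PSF and noise-to-signal ratio are assumptions, and this is not the paper's exact processing chain.

```python
import numpy as np

def wiener_deconvolve(blurred: np.ndarray, psf: np.ndarray, nsr: float = 0.01) -> np.ndarray:
    """Classic Wiener filter: X = conj(H) / (|H|^2 + NSR) * Y in the frequency
    domain (the PSF is assumed centred at the origin, otherwise the result
    is circularly shifted)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) / (np.abs(H) ** 2 + nsr) * Y
    return np.real(np.fft.ifft2(X))

def mse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))
```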
Wavelet library for constrained devices
NASA Astrophysics Data System (ADS)
Ehlers, Johan Hendrik; Jassim, Sabah A.
2007-04-01
The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a certain fast-wavelet-transform (FWT) implementation and several wavelet filters more suitable for constrained devices. Such constraints are typically found on mobile (cell) phones or personal digital assistants (PDA). These constraints can be a combination of limited memory, slow floating-point operations (compared to integer operations, most often as a result of no hardware support) and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal through on-board capturing sensors. In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing/analysis tasks on mobile phones and PDAs. We demonstrate that HeatWave is suitable for real-time applications, with fine control and range to suit transform demands, and we present experimental results to substantiate these claims. Finally, since this library is intended for real use, we considered several well-known differences between common embedded operating system platforms, such as a lack of common routines or functions, stack limitations, etc. This makes HeatWave suitable for a range of applications and research projects.
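A minimal sketch of one level of an integer-to-integer Haar wavelet transform via lifting, the kind of filter suited to devices without hardware floating point; HeatWave's actual filters and API are not shown here.

```python
def haar_lifting_1d(x):
    """One level of an integer Haar transform via lifting.
    Uses only integer adds and shifts, so no floating-point unit is needed."""
    evens, odds = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(evens, odds)]            # predict step
    approx = [e + (d >> 1) for e, d in zip(evens, detail)]   # update step (floor(d/2))
    return approx, detail

approx, detail = haar_lifting_1d([10, 12, 14, 20, 22, 21, 19, 18])
```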
Parallel evolution of image processing tools for multispectral imagery
NASA Astrophysics Data System (ADS)
Harvey, Neal R.; Brumby, Steven P.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Szymanski, John J.; Bloch, Jeffrey J.
2000-11-01
We describe the implementation and performance of a parallel, hybrid evolutionary-algorithm-based system, which optimizes image processing tools for feature-finding tasks in multi-spectral imagery (MSI) data sets. Our system uses an integrated spatio-spectral approach and is capable of combining suitably-registered data from different sensors. We investigate the speed-up obtained by parallelization of the evolutionary process via multiple processors (a workstation cluster) and develop a model for prediction of run-times for different numbers of processors. We demonstrate our system on Landsat Thematic Mapper MSI data covering the recent Cerro Grande fire at Los Alamos, NM, USA.
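The paper's run-time prediction model is not reproduced here; as a generic illustration, a fixed-overhead model of the form T(p) = t_serial + t_parallel / p plus a per-processor communication term is often fitted to such measurements. The parameter values below are placeholders.

```python
def predicted_runtime(p: int, t_serial: float, t_parallel: float, t_comm: float = 0.0) -> float:
    """Simple run-time model for p processors: serial part, parallel part
    divided by p, and a communication cost growing with p."""
    return t_serial + t_parallel / p + t_comm * p

# Illustrative speed-up estimate for 16 processors.
speedup = predicted_runtime(1, 5.0, 600.0) / predicted_runtime(16, 5.0, 600.0, t_comm=0.5)
```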
An image-processing methodology for extracting bloodstain pattern features.
Arthur, Ravishka M; Humburg, Philomena J; Hoogenboom, Jerry; Baiker, Martin; Taylor, Michael C; de Bruin, Karla G
2017-08-01
There is a growing trend in forensic science to develop methods to make forensic pattern comparison tasks more objective. This has generally involved the application of suitable image-processing methods to provide numerical data for identification or comparison. This paper outlines a unique image-processing methodology that can be utilised by analysts to generate reliable pattern data that will assist them in forming objective conclusions about a pattern. A range of features were defined and extracted from a laboratory-generated impact spatter pattern. These features were based in part on bloodstain properties commonly used in the analysis of spatter bloodstain patterns. The values of these features were consistent with properties reported qualitatively for such patterns. The image-processing method developed shows considerable promise as a way to establish measurable discriminating pattern criteria that are lacking in current bloodstain pattern taxonomies. Copyright © 2017 Elsevier B.V. All rights reserved.
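One bloodstain property commonly extracted in such analyses (the paper's exact feature set is not listed here) is the ellipse fitted to each stain, from which the width-to-length ratio and the classical impact-angle estimate alpha = arcsin(width / length) follow. A minimal OpenCV sketch under the assumption that the stains have already been thresholded to a binary mask:

```python
import cv2
import numpy as np

def stain_features(binary_mask: np.ndarray):
    """Fit an ellipse to each stain in a uint8 binary mask (stains = 255) and
    return (centre, width, length, impact_angle_deg) per stain."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    feats = []
    for c in contours:
        if len(c) < 5:                       # cv2.fitEllipse needs at least 5 points
            continue
        (cx, cy), (d1, d2), _ = cv2.fitEllipse(c)
        width, length = sorted((d1, d2))
        alpha = np.degrees(np.arcsin(width / length))
        feats.append(((cx, cy), width, length, alpha))
    return feats
```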
Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors
López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena
2013-01-01
This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. Starting from actual data collected by the radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition are burdensome and time-consuming processes, so analysing their computational complexity is of great interest when determining the most suitable implementation platform. To this end, and since target identification must be completed in real time, the computational burden of both processes, the image generation and the comparison against a database, is analysed separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804
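The full ISAR chain (range profiling, motion compensation, image formation) is not reproduced here. As a minimal, hedged illustration of the two cost drivers the abstract discusses, the sketch below forms high-resolution range profiles from dechirped pulses with an FFT and scores an ISAR image against a template database by normalised correlation; all signal parameters and the toy data are assumptions.

```python
import numpy as np

def range_profiles(dechirped_pulses, n_fft=1024):
    """High-resolution range profiles: FFT of each dechirped pulse (one pulse per row)."""
    return np.abs(np.fft.fft(dechirped_pulses, n=n_fft, axis=1))

def normalised_correlation(image, template):
    """Zero-mean normalised correlation score between two ISAR images."""
    a = image - image.mean()
    b = template - template.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def recognise(image, database):
    """Return the database entry with the highest correlation score."""
    scores = {name: normalised_correlation(image, tpl) for name, tpl in database.items()}
    return max(scores, key=scores.get), scores

# Toy usage with random data standing in for measured and simulated images.
rng = np.random.default_rng(1)
pulses = rng.normal(size=(128, 512)) + 1j * rng.normal(size=(128, 512))
isar = range_profiles(pulses)[:, :128]
db = {"target_A": isar + 0.1 * rng.normal(size=isar.shape),
      "target_B": rng.normal(size=isar.shape)}
best_match, all_scores = recognise(isar, db)
```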
Matrix decomposition graphics processing unit solver for Poisson image editing
NASA Astrophysics Data System (ADS)
Lei, Zhao; Wei, Li
2012-10-01
In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining direct and iterative techniques) with a two-level architecture. This enables MDGS to generate solutions identical to those of common Poisson solvers and to achieve a high convergence rate in most cases. The approach is advantageous in terms of parallelizability, real-time performance, low memory consumption and breadth of application.
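MDGS itself is GPU-specific and not reproduced here. The sketch below only shows the underlying problem it accelerates: seamless cloning by solving the discrete Poisson equation inside the cloned region with a CPU sparse direct solver, using the source gradients as guidance and the target values as boundary conditions. The mask is assumed to lie strictly inside the image.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def poisson_clone(target, source, mask):
    """Seamless cloning on a grayscale image: solve the discrete Poisson equation
    inside `mask` with the source Laplacian as guidance and Dirichlet boundaries
    taken from the target."""
    h, w = target.shape
    idx = -np.ones((h, w), dtype=int)
    ys, xs = np.nonzero(mask)
    idx[ys, xs] = np.arange(len(ys))
    A = lil_matrix((len(ys), len(ys)))
    b = np.zeros(len(ys))
    for k, (y, x) in enumerate(zip(ys, xs)):
        A[k, k] = 4.0
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            b[k] += source[y, x] - source[ny, nx]      # guidance field (source Laplacian)
            if mask[ny, nx]:
                A[k, idx[ny, nx]] = -1.0
            else:
                b[k] += target[ny, nx]                 # boundary condition from the target
    result = target.astype(float).copy()
    result[ys, xs] = spsolve(A.tocsr(), b)
    return result

# Toy example: clone a bright blob from `src` into a dark `dst`.
dst = np.zeros((40, 40))
src = np.fromfunction(lambda y, x: np.exp(-((y - 20)**2 + (x - 20)**2) / 50.0), (40, 40))
mask = np.zeros((40, 40), bool)
mask[10:30, 10:30] = True
blended = poisson_clone(dst, src, mask)
```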
NASA Astrophysics Data System (ADS)
Middleton, Maarit; Närhi, Paavo; Sutinen, Raimo
In a humid northern boreal climate, the success rate of artificial regeneration to Scots pine (Pinus sylvestris L.) can be improved by including a soil water content (SWC) based assessment of site suitability in the reforestation planning process. This paper introduces an application of airborne visible-near-infrared imaging spectroscopic data to identify subregions of forest compartments suitable for the low-SWC-tolerant Scots pine. The spatial patterns of understorey plant species communities, recorded by the AISA (Airborne Imaging Spectrometer for Applications) sensor, were demonstrated to be dependent on the underlying SWC. According to the nonmetric multidimensional scaling and correlation results, twelve understorey species were found to be most abundant on sites with high SWC. The abundance of bare soil and rocks, and the abundance of more than ten other species, indicated low SWC. The spatial patterns of the understorey are attributed to the time-stability of the underlying SWC patterns. A supervised artificial neural network (radial basis functional link network, probabilistic neural network) approach was taken to classify the AISA imaging spectrometer data, with dielectric ground referencing (as a measure of volumetric SWC), into regimes suitable and unsuitable for Scots pine. The accuracy assessment with receiver operating characteristic curves yielded area-under-the-curve values of up to 74.1%, indicating moderate success of the NN modelling. The results signified the importance of the training set's quality, adequate quantity (>2.43 points/ha) and NN algorithm selection over fine-tuning of the NN training parameters. This methodology for analysing site suitability for Scots pine can be recommended, especially when artificial regeneration of former mixed-wood Norway spruce (Picea abies L. Karst) - downy birch (Betula pubescens Ehrh.) stands is being considered, so that areas artificially regenerated to Scots pine can be optimized for forestry purposes.
Medical Image Processing Server applied to Quality Control of Nuclear Medicine.
NASA Astrophysics Data System (ADS)
Vergara, C.; Graffigna, J. P.; Marino, E.; Omati, S.; Holleywell, P.
2016-04-01
This paper is framed within the area of medical image processing and presents the installation, configuration and implementation of a medical image processing server (MIPS) at the Fundación Escuela de Medicina Nuclear (FUESMEN) located in Mendoza, Argentina. It was developed in the Gabinete de Tecnologia Médica (GA.TE.ME), Facultad de Ingeniería, Universidad Nacional de San Juan. MIPS is a software package that, using the DICOM standard, can receive medical imaging studies from different modalities or viewing stations, execute algorithms on them, and return the results to other devices. To achieve these objectives, preliminary tests were conducted in the laboratory and the tools were then installed remotely in the clinical environment. Once the suitable algorithms had been defined, the appropriate protocols for setting up and using them in the different services were established. Finally, we focus on the implementation and training provided at FUESMEN for nuclear medicine quality control processes. Implementation results are presented in this work.
Demonstration of a single-wavelength spectral-imaging-based Thai jasmine rice identification
NASA Astrophysics Data System (ADS)
Suwansukho, Kajpanya; Sumriddetchkajorn, Sarun; Buranasiri, Prathan
2011-07-01
A single-wavelength spectral-imaging-based Thai jasmine rice breed identification is demonstrated. Our nondestructive identification approach relies on a combination of fluorescence imaging and simple image processing techniques. In particular, we apply simple image thresholding, blob filtering, and image subtraction to either a 545 nm or a 575 nm image in order to identify the desired Thai jasmine rice breed among others. Other key advantages include no waste product and a fast identification time. In our demonstration, UVC light is used as the excitation source, a liquid crystal tunable optical filter serves as the wavelength selector, and a digital camera with 640 × 480 active pixels captures the desired spectral image. Eight Thai rice breeds of similar size and shape are tested. Our experimental proof of concept shows that by suitably applying image thresholding, blob filtering, and image subtraction to the selected fluorescence image, the Thai jasmine rice breed can be identified with measured false acceptance rates of <22.9% and <25.7% for spectral images at the 545 and 575 nm wavelengths, respectively. The measured identification time is 25 ms, showing high potential for real-time applications.
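The exact thresholds and blob criteria used by the authors are not given in the abstract. The sketch below only illustrates the sequence of operations it names, thresholding a fluorescence image, removing small blobs, and subtracting one wavelength image from another, with generic parameter values that are assumptions rather than the published settings.

```python
import numpy as np
from scipy import ndimage

def threshold_and_filter(image, threshold, min_blob_area):
    """Binarise a fluorescence image and keep only blobs above a minimum area."""
    binary = image > threshold
    labels, n = ndimage.label(binary)
    areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    keep_labels = 1 + np.nonzero(areas >= min_blob_area)[0]
    return np.isin(labels, keep_labels)

def identify_breed(img_545, img_575, threshold=0.4, min_blob_area=30):
    """Toy decision rule: subtract the two spectral images, then count how many
    candidate grains survive thresholding and blob filtering."""
    diff = np.clip(img_545.astype(float) - img_575.astype(float), 0, None)
    mask = threshold_and_filter(diff, threshold, min_blob_area)
    n_candidates = ndimage.label(mask)[1]
    return mask, n_candidates

rng = np.random.default_rng(2)
a = rng.random((120, 120)) * 0.3
a[40:60, 40:60] += 0.6            # simulated fluorescing grain in the 545 nm image
b = rng.random((120, 120)) * 0.3  # 575 nm image without the fluorescing grain
mask, count = identify_breed(a, b)
```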
A special vegetation index for the weed detection in sensor based precision agriculture.
Langner, Hans-R; Böttger, Hartmut; Schmidt, Helmut
2006-06-01
Many technologies in precision agriculture (PA) require image analysis and image processing with weed and background differentiation. The detection of weeds on mulched cropland is one important image-processing task for sensor-based precision herbicide applications. The article introduces a special vegetation index, the Difference Index with Red Threshold (DIRT), for weed detection on mulched croplands. Experimental investigations in weed detection on mulched areas show that the DIRT performs better than the Normalized Difference Vegetation Index (NDVI). The results of the evaluation with four different decision criteria indicate that the new DIRT gives the highest reliability in weed/background differentiation on mulched areas. While using the same spectral bands (infrared and red) as the NDVI, the new DIRT is more suitable for weed detection than the other vegetation indices and requires only a small amount of additional computation. The new vegetation index DIRT was tested on mulched areas during automatic ratings with a special weed camera system. The test results compare the new DIRT with three other decision criteria: the difference between infrared and red intensity (Diff), the soil-adjusted quotient between infrared and red intensity (Quotient) and the NDVI. The decision criteria were compared using a worst-case decision quality parameter Q defined for mulched croplands. Although the new index DIRT needs further testing, it appears to be a good decision criterion for weed detection on mulched areas and should also be useful for other image processing applications in precision agriculture. The weed detection hardware and the PC program for the weed image processing were developed with funds from the German Federal Ministry of Education and Research (BMBF).
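The exact DIRT formula is not given in the abstract. The sketch below computes the standard NDVI and, as a clearly labelled assumption, a simple infrared-red difference gated by a red-intensity threshold, to convey the general idea of a "difference index with red threshold"; the true DIRT definition may differ.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

def dirt_like_index(nir, red, red_threshold=0.35):
    """Hypothetical stand-in for the paper's DIRT: the NIR-red difference,
    suppressed wherever the red intensity exceeds a threshold (e.g. bright
    mulch or soil). Not the published formula."""
    nir, red = nir.astype(float), red.astype(float)
    diff = nir - red
    diff[red > red_threshold] = 0.0
    return diff

def weed_mask(nir, red, index_threshold=0.15):
    """Binary weed/background decision from the gated index."""
    return dirt_like_index(nir, red) > index_threshold

rng = np.random.default_rng(3)
red = rng.random((100, 100)) * 0.5       # toy red-band reflectance
nir = red + rng.random((100, 100)) * 0.4 # toy near-infrared band
mask = weed_mask(nir, red)
```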
Online image classification under monotonic decision boundary constraint
NASA Astrophysics Data System (ADS)
Lu, Cheng; Allebach, Jan; Wagner, Jerry; Pitta, Brandi; Larson, David; Guo, Yandong
2015-01-01
Image classification is a prerequisite for copy quality enhancement in an all-in-one (AIO) device that comprises a printer and a scanner and can be used to scan, copy and print. Different processing pipelines are provided in an AIO printer, each designed specifically for one type of input image to achieve the optimal output image quality. A typical approach to this problem is to apply a Support Vector Machine (SVM) to classify the input image and feed it to its corresponding processing pipeline. Online SVM training can improve classification performance as input images accumulate. At the same time, we want to make a quick decision on the input image to speed up classification, which means the AIO device sometimes does not need to scan the entire image before reaching a final decision. These two constraints, online SVM and quick decision, raise questions regarding: 1) what features are suitable for classification; and 2) how the decision boundary should be controlled during online SVM training. This paper discusses the compatibility of online SVM training with quick-decision capability.
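The paper's feature set and boundary-control scheme are not specified in the abstract. The sketch below only illustrates the online-training pattern with scikit-learn's SGDClassifier (a linear SVM trained by stochastic gradient descent), updating the model incrementally as strips of a scanned page arrive; the `strip_features` function and class labels are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def strip_features(image_strip):
    """Toy features from a partially scanned strip: mean, std, edge energy.
    A real AIO device would use features tuned to text/photo/mixed pages."""
    g = image_strip.astype(float)
    edge_energy = np.abs(np.diff(g, axis=1)).mean()
    return np.array([g.mean(), g.std(), edge_energy])

classes = np.array([0, 1])          # e.g. 0 = text page, 1 = photo page
clf = SGDClassifier(loss="hinge")   # linear SVM objective, supports partial_fit

rng = np.random.default_rng(4)
for step in range(50):
    # Simulated strip: "photo" strips are brighter than "text" strips here.
    label = step % 2
    strip = rng.random((32, 256)) * (0.9 if label else 0.3)
    X = strip_features(strip).reshape(1, -1)
    clf.partial_fit(X, [label], classes=classes)   # online update, one strip at a time

quick_decision = clf.predict(strip_features(rng.random((32, 256)) * 0.3).reshape(1, -1))
```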
NASA Astrophysics Data System (ADS)
Hollmach, Julia; Schweizer, Julia; Steiner, Gerald; Knels, Lilla; Funk, Richard H. W.; Thalheim, Silko; Koch, Edmund
2011-07-01
Retinal diseases such as age-related macular degeneration have become an important cause of visual loss, owing to increasing life expectancy and lifestyle habits. Because no satisfactory treatment exists, early diagnosis and prevention are the only ways to stop the degeneration. The protein cytochrome c (cyt c) is a suitable marker for degeneration processes and apoptosis because it is part of the respiratory chain and is involved in the apoptotic pathway. Determining the local distribution and oxidative state of cyt c in living cells allows the characterization of cell degeneration processes. Since cyt c exhibits characteristic absorption bands between 400 and 650 nm, UV/VIS in situ spectroscopic imaging was used for its characterization in retinal ganglion cells. The large amount of data, consisting of spatial and spectral information, was processed by multivariate data analysis. The challenge consists in isolating the molecular information of cyt c. Baseline correction, principal component analysis (PCA) and cluster analysis (CA) were performed in order to identify cyt c within the spectral dataset. The combination of PCA and CA reveals cyt c and its oxidative state. The results demonstrate that UV/VIS spectroscopic imaging in conjunction with sophisticated multivariate methods is a suitable tool to characterize cyt c under in situ conditions.
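The sketch below illustrates the generic PCA-plus-cluster-analysis workflow the abstract describes, applied to a synthetic image cube; it is not the authors' processing chain, the spectra are invented stand-ins, and the baseline-correction step is reduced to a simple per-spectrum mean subtraction.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
h, w, n_bands = 40, 40, 120                    # synthetic cube: 40x40 pixels, 120 wavelengths
wavelengths = np.linspace(400, 650, n_bands)

# Two synthetic spectral classes standing in for spectrally distinct regions.
spec_a = np.exp(-((wavelengths - 550) / 15.0) ** 2)
spec_b = np.exp(-((wavelengths - 520) / 15.0) ** 2) + 0.5 * np.exp(-((wavelengths - 550) / 10.0) ** 2)
cube = np.where(np.arange(w)[None, :, None] < w // 2, spec_a, spec_b)
cube = cube + 0.05 * rng.normal(size=(h, w, n_bands))

X = cube.reshape(-1, n_bands)
X = X - X.mean(axis=1, keepdims=True)          # crude baseline correction

scores = PCA(n_components=5).fit_transform(X)  # spectral dimensionality reduction
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
label_map = labels.reshape(h, w)               # spatial map of spectral clusters
```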
Recent development of nanoparticles for molecular imaging
NASA Astrophysics Data System (ADS)
Kim, Jonghoon; Lee, Nohyun; Hyeon, Taeghwan
2017-10-01
Molecular imaging enables us to non-invasively visualize cellular functions and biological processes in living subjects, allowing accurate diagnosis of diseases at early stages. For successful molecular imaging, a suitable contrast agent with high sensitivity is required. To date, various nanoparticles have been developed as contrast agents for medical imaging modalities. In comparison with conventional probes, nanoparticles offer several advantages, including controllable physical properties, facile surface modification and long circulation time. In addition, they can be integrated in various combinations for multimodal imaging and therapy. In this opinion piece, we highlight recent advances and future perspectives of nanomaterials for molecular imaging. This article is part of the themed issue 'Challenges for chemistry in molecular imaging'.
A Decision-Based Modified Total Variation Diffusion Method for Impulse Noise Removal
Zhu, Qingxin; Song, Xiuli; Tao, Jinsong
2017-01-01
Impulse noise removal usually employs median filtering, switching median filtering, the total variation L1 method, and their variants. These approaches, however, often introduce excessive smoothing, can result in extensive blurring of visual features, and are thus suitable only for images with low-density noise. A new method to remove noise is proposed in this paper to overcome this limitation, which divides pixels into different categories based on different noise characteristics. If an image is corrupted by salt-and-pepper noise, the pixels are divided into corrupted and noise-free; if the image is corrupted by random-valued impulses, the pixels are divided into corrupted, noise-free, and possibly corrupted. Pixels falling into different categories are processed differently. If a pixel is corrupted, modified total variation diffusion is applied; if the pixel is possibly corrupted, weighted total variation diffusion is applied; otherwise, the pixel is left unchanged. Experimental results show that the proposed method is robust to different noise strengths and suitable for different images, with strong noise removal capability as shown by PSNR/SSIM results as well as the visual quality of restored images. PMID:28536602
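The modified and weighted total-variation diffusion steps of the paper are not reproduced here. The sketch below only illustrates the decision-based structure described above for the salt-and-pepper case: classify pixels as corrupted or noise-free, then restore only the corrupted ones, using a simple local median as a stand-in for the TV diffusion.

```python
import numpy as np
from scipy.ndimage import median_filter

def add_salt_pepper(image, density, rng):
    noisy = image.copy()
    m = rng.random(image.shape)
    noisy[m < density / 2] = 0.0          # pepper
    noisy[m > 1 - density / 2] = 1.0      # salt
    return noisy

def decision_based_restore(noisy, low=0.0, high=1.0):
    """Classify pixels as corrupted (extreme values) or noise-free, and replace
    only the corrupted ones; noise-free pixels are left unchanged."""
    corrupted = (noisy <= low) | (noisy >= high)
    medians = median_filter(noisy, size=3)
    restored = noisy.copy()
    restored[corrupted] = medians[corrupted]
    return restored, corrupted

rng = np.random.default_rng(6)
clean = np.tile(np.linspace(0.2, 0.8, 64), (64, 1))    # toy image with no extreme values
noisy = add_salt_pepper(clean, density=0.2, rng=rng)
restored, corrupted_mask = decision_based_restore(noisy)
```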
HPT: A High Spatial Resolution Multispectral Sensor for Microsatellite Remote Sensing
Takahashi, Yukihiro; Sakamoto, Yuji; Kuwahara, Toshinori
2018-01-01
Although nano/microsatellites have great potential as remote sensing platforms, the spatial and spectral resolutions of an optical payload instrument are limited. In this study, a high spatial resolution multispectral sensor, the High-Precision Telescope (HPT), was developed for the RISING-2 microsatellite. The HPT has four image sensors: three in the visible region of the spectrum used for the composition of true color images, and a fourth in the near-infrared region, which employs liquid crystal tunable filter (LCTF) technology for wavelength scanning. Band-to-band image registration methods have also been developed for the HPT and implemented in the image processing procedure. The processed images were compared with other satellite images, and proven to be useful in various remote sensing applications. Thus, LCTF technology can be considered an innovative tool that is suitable for future multi/hyperspectral remote sensing by nano/microsatellites. PMID:29463022
Fabrication process for a gradient index x-ray lens
Bionta, R.M.; Makowiecki, D.M.; Skulina, K.M.
1995-01-17
A process is disclosed for fabricating high efficiency x-ray lenses that operate in the 0.5-4.0 keV region, suitable for use in biological imaging, surface science, and x-ray lithography of integrated circuits. The gradient index x-ray optics fabrication process broadly involves co-sputtering multi-layers of film on a wire, followed by slicing and mounting on a block, and then ion beam thinning to a thickness determined by periodic testing for efficiency. The process enables the fabrication of transmissive gradient index x-ray optics for the 0.5-4.0 keV energy range. This process allows the fabrication of optical elements for the next generation of imaging and x-ray lithography instruments in the soft x-ray region. 13 figures.
Pc-Based Floating Point Imaging Workstation
NASA Astrophysics Data System (ADS)
Guzak, Chris J.; Pier, Richard M.; Chinn, Patty; Kim, Yongmin
1989-07-01
The medical, military, scientific and industrial communities have come to rely on imaging and computer graphics for solutions to many types of problems. Systems based on imaging technology are used to acquire and process images, and to analyze and extract data from images that would otherwise be of little use. Images can be transformed and enhanced to reveal detail and meaning that would go undetected without imaging techniques. The success of imaging has increased the demand for faster and less expensive imaging systems, and as these systems become available, more and more applications are discovered and more demands are made. This forces the designer to attack the problem of imaging from a different perspective: the computing demands of imaging algorithms must be balanced against the desire for affordability and flexibility. Systems must be flexible and easy to use, ready for current applications but at the same time anticipating new, unthought-of uses. Here at the University of Washington Image Processing Systems Lab (IPSL) we are focusing our attention on imaging and graphics systems that implement imaging algorithms for use in an interactive environment. We have developed a PC-based imaging workstation with the goal of providing powerful and flexible floating-point processing capabilities, along with graphics functions, in an affordable package suitable for diverse environments and many applications.
NASA Astrophysics Data System (ADS)
Miller, N. C.; Lizarralde, D.; McGuire, J.; Hole, J. A.
2006-12-01
We consider methodologies, including survey design and processing algorithms, best suited to imaging vertical reflectors in oceanic crust using marine seismic techniques. The ability to image the reflectivity structure of transform faults as a function of depth, for example, may provide new insights into what controls seismicity along these plate boundaries. Turning-wave migration has been used successfully to image vertical faults on land. With synthetic datasets we find that this approach has unique difficulties in the deep ocean. The fault-reflected crustal refraction phase (Pg-r) typically used in pre-stack migrations is difficult to isolate in marine seismic data. An "imagable" Pg-r is only observed in a time window between the first arrivals and the arrivals from the sediments and the thick, slow water layer at offsets beyond ~25 km. Ocean-bottom seismometers (OBSs), as opposed to a long surface streamer, must be used to acquire data suitable for crustal-scale vertical imaging. The critical distance for Moho reflections (PmP) in oceanic crust is also ~25 km, thus Pg-r and PmP-r are observed with very little separation, and the fault-reflected mantle refraction (Pn-r) arrives prior to Pg-r as the observation window opens with increased OBS-to-fault distance. This situation presents difficulties for "first-arrival" based Kirchhoff migration approaches and suggests that wave-equation approaches, which in theory can image all three phases simultaneously, may be more suitable for vertical imaging in oceanic crust. We will present a comparison of these approaches as applied to a synthetic dataset generated from realistic, stochastic velocity models. We will assess their suitability, the migration artifacts unique to the deep ocean, and the ideal instrument layout for such an experiment.
Grayscale image segmentation for real-time traffic sign recognition: the hardware point of view
NASA Astrophysics Data System (ADS)
Cao, Tam P.; Deng, Guang; Elton, Darrell
2009-02-01
In this paper, we study several grayscale-based image segmentation methods for real-time road sign recognition applications on an FPGA hardware platform. The performance of different image segmentation algorithms under different lighting conditions is initially compared using PC simulation. Based on these results and analysis, suitable algorithms are implemented and tested on a real-time FPGA speed sign detection system. Experimental results show that the system using segmented images uses significantly fewer hardware resources on the FPGA while maintaining comparable performance. The system is capable of processing 60 live video frames per second.
Three-dimensional real-time imaging of bi-phasic flow through porous media
NASA Astrophysics Data System (ADS)
Sharma, Prerna; Aswathi, P.; Sane, Anit; Ghosh, Shankar; Bhattacharya, S.
2011-11-01
We present a scanning laser-sheet video imaging technique to image bi-phasic flow in three-dimensional porous media in real time with pore-scale spatial resolution, i.e., 35 μm and 500 μm for the directions parallel and perpendicular to the flow, respectively. The technique is illustrated for the case of viscous fingering. Using suitable image processing protocols, both the morphology and the movement of the two-fluid interface were quantitatively estimated. Furthermore, a macroscopic parameter such as the displacement efficiency obtained from a microscopic (pore-scale) analysis demonstrates the versatility and usefulness of the method.
Bjorgan, Asgeir; Randeberg, Lise Lyngsnes
2015-01-01
Processing line-by-line and in real time can be convenient for some applications of line-scanning hyperspectral imaging technology. Some types of processing, like inverse modeling and spectral analysis, can be sensitive to noise. The MNF (minimum noise fraction) transform provides suitable denoising performance, but requires full image availability for the estimation of image and noise statistics. In this work, a modified algorithm is proposed. Incrementally updated statistics enable the algorithm to denoise the image line-by-line. The denoising performance has been compared to conventional MNF and found to be equal. With satisfactory denoising performance and a real-time implementation, the developed algorithm can denoise line-scanned hyperspectral images in real time. The elimination of waiting time before denoised data are available is an important step towards real-time visualization of processed hyperspectral data. The source code can be found at http://www.github.com/ntnu-bioopt/mnf. This includes an implementation of conventional MNF denoising. PMID:25654717
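The full line-by-line MNF implementation is available at the linked repository; the sketch below only illustrates the core idea of incrementally updating the image and noise statistics as each scan line arrives, using running sums for the covariance matrices. Estimating the noise from differences between consecutive lines is a common simplification assumed here, not necessarily the authors' estimator.

```python
import numpy as np

class IncrementalCovariance:
    """Running estimate of mean and covariance, updated one scan line at a time."""
    def __init__(self, n_bands):
        self.n = 0
        self.sum = np.zeros(n_bands)
        self.outer = np.zeros((n_bands, n_bands))

    def update(self, line):                 # line: (n_pixels, n_bands)
        self.n += line.shape[0]
        self.sum += line.sum(axis=0)
        self.outer += line.T @ line

    def covariance(self):
        mean = self.sum / self.n
        return self.outer / self.n - np.outer(mean, mean)

rng = np.random.default_rng(7)
n_bands, n_pixels = 20, 300
signal_stats = IncrementalCovariance(n_bands)
noise_stats = IncrementalCovariance(n_bands)

previous = None
for _ in range(50):                         # stream of incoming scan lines
    line = rng.normal(size=(n_pixels, n_bands)) + np.linspace(0, 1, n_bands)
    signal_stats.update(line)
    if previous is not None:
        noise_stats.update((line - previous) / np.sqrt(2.0))   # crude noise estimate
    previous = line

# MNF directions from the generalised eigenproblem of noise vs. signal covariance.
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(noise_stats.covariance(),
                                                 signal_stats.covariance()))
mnf_basis = eigvecs[:, np.argsort(eigvals.real)[::-1]].real
```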
Particle Morphology Analysis of Biomass Material Based on Improved Image Processing Method
Lu, Zhaolin
2017-01-01
Particle morphology, including size and shape, is an important factor that significantly influences the physical and chemical properties of biomass material. Based on image processing technology, a method was developed to process sample images, measure particle dimensions, and analyse the particle size and shape distributions of knife-milled wheat straw, which had been preclassified into five nominal size groups using a mechanical sieving approach. Considering the great variation of particle size from micrometers to millimeters, the powders larger than 250 μm were photographed by a flatbed scanner without zoom function, and the others were photographed using a scanning electron microscope (SEM) with high image resolution. Actual imaging tests confirmed the excellent effect of the backscattered electron (BSE) imaging mode of the SEM. Particle aggregation is an important factor that affects the recognition accuracy of the image processing method. In sample preparation, singulated arrangement and ultrasonic dispersion methods were used to separate powders into particles larger and smaller than the nominal size of 250 μm, respectively. In addition, an image segmentation algorithm based on particle geometrical information was proposed to recognise the finer clustered powders. Experimental results demonstrated that the improved image processing method is suitable for analysing the particle size and shape distributions of ground biomass materials and resolves the size inconsistencies of sieving analysis. PMID:28298925
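The sketch below shows only the generic measurement step this work builds on: labelling particles in a binarised image and extracting size and shape descriptors with scikit-image's regionprops. The authors' aggregation-aware segmentation is not reproduced, and the pixel size is an assumed calibration value.

```python
import numpy as np
from skimage.measure import label, regionprops

def particle_morphology(binary_image, pixel_size_um=1.0):
    """Return per-particle area, length, width and aspect ratio from a binary image."""
    labelled = label(binary_image)
    rows = []
    for region in regionprops(labelled):
        length = region.major_axis_length * pixel_size_um
        width = region.minor_axis_length * pixel_size_um
        rows.append({"area_um2": region.area * pixel_size_um ** 2,
                     "length_um": length,
                     "width_um": width,
                     "aspect_ratio": length / max(width, 1e-9)})
    return rows

# Toy binary image with two rectangular "particles".
img = np.zeros((100, 100), bool)
img[10:20, 10:60] = True        # elongated particle
img[60:80, 60:80] = True        # squarish particle
stats = particle_morphology(img, pixel_size_um=5.0)
```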
Crossed-beam velocity map imaging of collisional autoionization processes
NASA Astrophysics Data System (ADS)
Delmdahl, Ralph F.; Bakker, Bernard L. G.; Parker, David H.
2000-11-01
Applying the velocity map imaging technique, Penning ion formation as well as the generation of associative ions is observed in autoionizing collisions of metastable neon atoms (Ne* 2p⁵3s ³P₂,₀) with ground-state argon targets in a crossed molecular beam experiment. Metastable neon reactants are obtained by nozzle expansion through a dc discharge ring. The quality of the obtained results clearly demonstrates the suitability of this new, particularly straightforward experimental approach for angle- and kinetic-energy-resolved investigations of Penning processes in crossed-beam studies, which are known to provide the highest level of detail.
Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies.
Welikala, R A; Fraz, M M; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A
2016-04-01
Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population-based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle-aged adults, of whom 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement and at low cost.
Adaptive enhancement for nonuniform illumination images via nonlinear mapping
NASA Astrophysics Data System (ADS)
Wang, Yanfang; Huang, Qian; Hu, Jing
2017-09-01
Nonuniform illumination images suffer from degraded details because of underexposure, overexposure, or a combination of both. To improve the visual quality of color images, underexposed regions should be lightened, whereas overexposed areas need to be dimmed properly. However, discriminating between underexposure and overexposure is troublesome. Compared with traditional methods that produce a fixed demarcation value throughout an image, the proposed demarcation changes as the local luminance varies and is thus suitable for handling complicated illumination. Based on this locally adaptive demarcation, a nonlinear modification is applied to the image luminance. Further, with the modified luminance, we propose a nonlinear process to reconstruct a luminance-enhanced color image. For every pixel, this nonlinear process takes the luminance change and the original chromaticity into account, thus avoiding exaggerated colors in dark areas and depressed colors in very bright regions. Finally, to improve image contrast, a local and image-dependent exponential technique is designed and applied to the RGB channels of the obtained color image. Experimental results demonstrate that our method produces good contrast and vivid color for both nonuniform illumination images and images with normal illumination.
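The exact mapping and demarcation functions of the paper are not given in the abstract. The sketch below is a simplified stand-in under stated assumptions: a per-pixel demarcation derived from the local mean luminance drives a gamma-like correction that brightens under-exposed neighbourhoods and dims over-exposed ones, and colour is reconstructed by scaling RGB with the luminance ratio so chromaticity is approximately preserved.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_nonuniform(rgb, sigma=15.0, strength=0.6):
    """rgb: float image in [0, 1]. Returns a luminance-adjusted colour image."""
    lum = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    local_mean = gaussian_filter(lum, sigma)          # locally adaptive demarcation
    # Exponent < 1 (brighten) where the neighbourhood is dark, > 1 (dim) where bright.
    gamma = np.clip(1.0 + strength * (local_mean - 0.5) * 2.0, 0.4, 2.5)
    new_lum = np.power(np.clip(lum, 1e-6, 1.0), gamma)
    # Scale RGB by the luminance ratio to keep the original chromaticity.
    ratio = new_lum / np.clip(lum, 1e-6, None)
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)

rng = np.random.default_rng(8)
img = rng.random((64, 64, 3)) * np.linspace(0.1, 1.0, 64)[None, :, None]  # left side dark
enhanced = enhance_nonuniform(img)
```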
Kashiha, Mohammad Amin; Green, Angela R; Sales, Tatiana Glogerley; Bahr, Claudia; Berckmans, Daniel; Gates, Richard S
2014-10-01
Image processing systems have been widely used in monitoring livestock for many applications, including identification, tracking, behavior analysis, occupancy rates, and activity calculations. The primary goal of this work was to quantify image processing performance when monitoring laying hens by comparing length of stay in each compartment as detected by the image processing system with the actual occurrences registered by human observations. In this work, an image processing system was implemented and evaluated for use in an environmental animal preference chamber to detect hen navigation between 4 compartments of the chamber. One camera was installed above each compartment to produce top-view images of the whole compartment. An ellipse-fitting model was applied to captured images to detect whether the hen was present in a compartment. During a choice-test study, mean ± SD success detection rates of 95.9 ± 2.6% were achieved when considering total duration of compartment occupancy. These results suggest that the image processing system is currently suitable for determining the response measures for assessing environmental choices. Moreover, the image processing system offered a comprehensive analysis of occupancy while substantially reducing data processing time compared with the time-intensive alternative of manual video analysis. The above technique was used to monitor ammonia aversion in the chamber. As a preliminary pilot study, different levels of ammonia were applied to different compartments while hens were allowed to navigate between compartments. Using the automated monitoring tool to assess occupancy, a negative trend of compartment occupancy with ammonia level was revealed, though further examination is needed.
Image texture segmentation using a neural network
NASA Astrophysics Data System (ADS)
Sayeh, Mohammed R.; Athinarayanan, Ragu; Dhali, Pushpuak
1992-09-01
In this paper we use a neural network called the Lyapunov associative memory (LYAM) system to segment image texture into different categories or clusters. The LYAM system is constructed from a set of ordinary differential equations which are simulated on a digital computer. The clustering can be achieved by using a single tuning parameter in the simplest model. Pattern classes are represented by the stable equilibrium states of the system. Design of the system is based on synthesizing two local energy functions, namely, the learning and recall energy functions. Before the segmentation process is implemented, a Gauss-Markov random field (GMRF) model is applied to the raw image. This step suitably reduces the image data and prepares the texture information for the neural network process. We give a simple image example illustrating the capability of the technique. The GMRF-generated features are also used for clustering based on the Euclidean distance.
Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing
Zhang, Qianghui; Wu, Junjie; Li, Wenchao; Huang, Yulin; Yang, Jianyu; Yang, Haiguang
2016-01-01
Free of the constraints of orbital mechanics, weather conditions and minimum antenna area, a synthetic aperture radar (SAR) mounted on a near-space platform is more suitable for sustained large-scene imaging than its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), a novel wide-swath imaging mode that allows the SAR beam to scan along the azimuth, can reduce the echo acquisition time for a large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, two-step processing (TSP) is first adopted to eliminate the Doppler aliasing of the echo. Then, the data are focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications. PMID:27472341
NASA Astrophysics Data System (ADS)
Lenkiewicz, Przemyslaw; Pereira, Manuela; Freire, Mário M.; Fernandes, José
2013-12-01
In this article, we propose a novel image segmentation method called the whole mesh deformation (WMD) model, which aims at addressing the problems of modern medical imaging. Such problems have arisen from the combination of several factors: (1) significant growth in medical image volume sizes due to the increasing capabilities of medical acquisition devices; (2) the desire to increase the complexity of image processing algorithms in order to explore new functionality; and (3) a shift in processor development towards multiple processing units instead of growing bus speeds and the number of operations per second of a single processing unit. Our solution is based on the concept of deformable models and is characterized by a very effective and precise segmentation capability. The proposed WMD model uses a volumetric mesh instead of a contour or a surface to represent the segmented shapes of interest, which allows more information in the image to be exploited and results to be obtained in shorter times, independently of image content. The model also offers a good ability to handle topology changes and allows effective parallelization of the workflow, which makes it a very good choice for large datasets. We present a precise model description, followed by experiments on artificial images and real medical data.
Design of biometrics identification system on palm vein using infrared light
NASA Astrophysics Data System (ADS)
Syafiq, Muhammad; Nasution, Aulia M. T.
2016-11-01
Images obtained with LEDs at wavelengths of 740 nm and 810 nm showed that the contrast of the vein pattern is low and that the palm-print pattern is still present, which means that 740 nm and 810 nm are less suitable for detecting the blood vessels in the palm of the hand. At a wavelength of 940 nm, the vein pattern is clearly visible and the palm-print pattern has mostly disappeared. Pre-processing is then performed using smoothing (Gaussian and median filters) and contrast stretching. Image segmentation is done by extracting the region of interest (ROI) from which the features are obtained. Identification is performed on the image features using the mean square error (MSE) and local binary pattern (LBP) methods. A database of 5 different palm vein patterns is used to test the system in the identification process. All of the above processing is carried out on a Raspberry Pi device. The obtained MSE is 0.025, and the LBP feature score is less than 10⁻³ for an image to be considered a match.
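The sketch below illustrates the matching stage described above using scikit-image's local_binary_pattern: LBP histograms are computed for an enrolled ROI and a probe ROI and compared with a mean-square-error score. The preprocessing details, LBP parameters and decision threshold are assumptions, not the authors' exact settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter
from skimage.feature import local_binary_pattern

def preprocess(roi):
    """Smoothing and simple contrast stretching of a palm-vein ROI."""
    roi = median_filter(gaussian_filter(roi.astype(float), 1.0), size=3)
    lo, hi = roi.min(), roi.max()
    return (roi - lo) / (hi - lo + 1e-9)

def lbp_histogram(roi, points=8, radius=1):
    """Normalised histogram of uniform LBP codes (points + 2 bins)."""
    codes = local_binary_pattern((roi * 255).astype(np.uint8), points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def match_score(probe_roi, enrolled_hist):
    """Mean square error between LBP histograms (lower = better match)."""
    return float(np.mean((lbp_histogram(preprocess(probe_roi)) - enrolled_hist) ** 2))

rng = np.random.default_rng(9)
enrolled = rng.random((128, 128))
probe_same = enrolled + 0.02 * rng.normal(size=enrolled.shape)
enrolled_hist = lbp_histogram(preprocess(enrolled))
score = match_score(probe_same, enrolled_hist)      # small score expected for the same palm
```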
Retrieval of land cover information under thin fog in Landsat TM image
NASA Astrophysics Data System (ADS)
Wei, Yuchun
2008-04-01
Thin fog, which often appears in remote sensing images of subtropical climate regions, results in low image quality and poor image mapping. Therefore, it is necessary to develop image processing methods to retrieve land cover information under thin fog. In this paper, a Landsat TM image near Taihu Lake, which lies in the subtropical climate zone of China, was used as an example, and a workflow and method for retrieving land cover information under thin fog were built based on ENVI software and a single TM image. The basic workflow covers three parts: 1) isolating the thin fog area in the image according to the spectral differences between bands; 2) retrieving the visible-band information of different land cover types under thin fog from the near-infrared bands, according to the relationships between the near-infrared and visible bands of the different land cover types in the fog-free area; 3) image post-processing. The results showed that the method is simple and suitable, and can be used to improve the quality of TM image mapping more effectively.
Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai
2018-01-01
In this paper, a new color watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is more suitable for the human visual system. Then, a three-level discrete wavelet transform is applied to the luminance component Y, generating four different frequency sub-bands. After that, singular value decomposition is performed on these sub-bands. In the watermark embedding process, the discrete wavelet transform is applied to the watermark image after scrambling encryption. The new algorithm uses a differential evolution algorithm with adaptive optimization to choose suitable scaling factors. Experimental results show that the proposed algorithm has better performance in terms of invisibility and robustness.
The design and application of a multi-band IR imager
NASA Astrophysics Data System (ADS)
Li, Lijuan
2018-02-01
Multi-band IR imaging systems have many applications in security, national defense, and the petroleum and gas industries, so the relevant technologies have been receiving more and more attention in recent years. When used in missile warning and missile seeker systems, multi-band IR imaging offers high target recognition capability and a low false alarm rate if suitable spectral bands are selected. Compared with a traditional single-band IR imager, a multi-band IR imager can exploit spectral features in addition to spatial and temporal features to discriminate targets from background clutter and decoys. One key task is therefore to select spectral bands in which the feature difference between targets and false targets is evident and can be well exploited. A multi-band IR imager is a useful instrument for collecting multi-band IR images of targets, backgrounds and decoys for spectral band selection studies, at low cost and with adjustable parameters compared with a commercial imaging spectrometer. In this paper, a multi-band IR imaging system is developed that can collect images in four spectral bands of a scene in each acquisition and can be extended to other short-wave and mid-wave IR band combinations by changing filter groups. The multi-band IR imaging system consists of a broad-band optical system, a cryogenic InSb large-array detector, a spinning filter wheel and an electronic processing system. The system's performance was tested in real data collection experiments.
Research on Synthetic Aperture Radar Processing for the Spaceborne Sliding Spotlight Mode.
Shen, Shijian; Nie, Xin; Zhang, Xinggan
2018-02-03
Gaofen-3 (GF-3) is China's first C-band multi-polarization synthetic aperture radar (SAR) satellite, and it also provides the sliding spotlight mode for the first time. Sliding spotlight is a novel mode that achieves imaging with both high resolution and wide swath. Several key technologies for the high-resolution sliding spotlight mode in spaceborne SAR are investigated in this paper, mainly including the imaging parameters, the methods of velocity estimation and ambiguity elimination, and the imaging algorithms. Based on the chosen Convolution BackProjection (CBP) and Polar Format Algorithm (PFA) imaging algorithms, a fast implementation method of CBP and a modified PFA method suitable for the sliding spotlight mode are proposed, and the processing flows are derived in detail. Finally, the algorithms are validated with simulations and measured data.
NASA Astrophysics Data System (ADS)
Izzaty Riwayat, Akhtar; Nazri, Mohd Ariff Ahmad; Hazreek Zainal Abidin, Mohd
2018-04-01
In recent years, Electrical Resistivity Imaging (ERI) has become an important method at the preliminary stage of investigations for gaining information on hidden water in underground layers. The problem faced by engineers is to determine the exact location of the groundwater zone in the subsurface layers. ERI is seen as one of the most suitable tools for groundwater exploration, as the method has been applied in geotechnical and geo-environmental investigations. This study was conducted using resistivity surveys at the UTHM campus to interpret the potential shallow aquifer and a potential borehole location for an observation well. A Schlumberger array was set up during data acquisition, as this array is capable of imaging deeper profiles and is suitable for areas with homogeneous layers. The raw data were processed using the RES2DINV software to produce 2D subsurface images. The results indicate that the thickness of the shallow aquifer for both spread lines varies between 7.5 m and 15 m. The analysis of the remaining raw data using induced polarization (IP) showed that the chargeability parameter is equal to 0, which strongly indicates the presence of a groundwater aquifer in the study area.
Recent Applications of Neutron Imaging Methods
NASA Astrophysics Data System (ADS)
Lehmann, E.; Mannes, D.; Kaestner, A.; Grünzweig, C.
Methodical progress in the field of neutron imaging is visible in general, but at different levels in particular labs. Access to the most suitable beam ports, the use of advanced imaging detector systems and professional image processing have made the technique competitive with other non-destructive tools such as X-ray imaging. Based on this performance gain and on new methodical approaches, several new application fields have emerged, in addition to the already established ones. Accordingly, new image data are now mostly available in three dimensions, in the format of tomography volumes. The radiography mode is still the basis of neutron imaging, but the information extracted from superimposed image data (as with a grating interferometer) enables completely new insights. As a consequence, many new applications have been created.
Novel iodinated tracers, MIBG and BMIPP, for nuclear cardiology.
Tamaki, Nagara; Yoshinaga, Keiichiro
2011-02-01
With the rapid growth of molecular biology, in vivo imaging of molecular processes (i.e., molecular imaging) has been well developed. Molecular imaging has focused on justifying advanced treatments and on assessing treatment effects. Most molecular imaging has been developed using PET cameras and suitable PET radiopharmaceuticals. However, this technique is not widely available, and alternative approaches are needed. ¹²³I-labeled compounds are also suitable for molecular imaging using single-photon emission computed tomography (SPECT). ¹²³I-labeled meta-iodobenzylguanidine (MIBG) has been used for assessing the severity of heart failure and prognosis. In addition, it has a potential role in predicting fatal arrhythmia, particularly for those who have received or are planned to receive implantable cardioverter-defibrillator treatment. ¹²³I-beta-methyl-iodophenylpentadecanoic acid (BMIPP) plays an important role in identifying ischemia at rest, based on its unique capability to represent persistent metabolic alteration after recovery from ischemia, so-called ischemic memory. Since BMIPP abnormalities may represent severe ischemia or jeopardized myocardium, it may permit risk analysis in CAD patients, particularly those with chronic kidney disease and/or on hemodialysis. This review discusses the recent development of these important iodinated compounds.
Use of lignocellulose materials as sorption media for phosphorus removal
K.G. Karthikeyan; Mandla A. Tshabalala; Dongmei Wang
2002-01-01
The suitability of modified bark or wood fiber derived from southern yellow pine to function as P sorbents was investigated. Sorbent preparation included grinding, size fractionation, extraction for surface activation, and treatment with polyallylamine hydrochloride (PAA HCl) or 3-chloro-2-hydroxypropyltrimethylammonium chloride. SEM images revealed surface...
Image Alignment for Multiple Camera High Dynamic Range Microscopy.
Eastwood, Brian S; Childs, Elisabeth C
2012-01-09
This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
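The paper evaluates specific exposure-robust descriptors computed on radiant power images from calibrated cameras; those are not reproduced here. The sketch below is only a generic version of the same overall pattern, descriptor-based alignment, using OpenCV ORB features and a RANSAC homography on already intensity-normalised 8-bit images; the toy data and parameter values are assumptions.

```python
import cv2
import numpy as np

def align_by_features(moving, reference, max_features=2000):
    """Warp `moving` onto `reference` using ORB descriptors and a RANSAC homography.
    Both inputs are 8-bit grayscale images (e.g. radiance images scaled to 0-255)."""
    orb = cv2.ORB_create(max_features)
    kp1, des1 = orb.detectAndCompute(moving, None)
    kp2, des2 = orb.detectAndCompute(reference, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = reference.shape
    return cv2.warpPerspective(moving, H, (w, h))

# Toy usage: align a shifted copy of a synthetic structured image back onto the original.
rng = np.random.default_rng(10)
base = np.zeros((240, 320), np.uint8)
for _ in range(40):                      # scatter bright rectangles to provide corners
    y, x = rng.integers(0, 200), rng.integers(0, 280)
    base[y:y + 20, x:x + 25] = rng.integers(80, 255)
shifted = np.roll(base, (5, 12), axis=(0, 1))
aligned = align_by_features(shifted, base)
```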
Portable laser speckle perfusion imaging system based on digital signal processor.
Tang, Xuejun; Feng, Nengyun; Sun, Xiaoli; Li, Pengcheng; Luo, Qingming
2010-12-01
The ability to monitor blood flow in vivo is of major importance in clinical diagnosis and in basic life science research. As a noninvasive full-field technique that requires no scanning, laser speckle contrast imaging (LSCI) is widely used to study blood flow with high spatial and temporal resolution. Current LSCI systems rely on personal computers for image processing and are large, which potentially limits their widespread clinical utility. A portable laser speckle contrast imaging system that does not compromise processing efficiency is therefore crucial for clinical diagnosis. However, the processing of laser speckle contrast images is time-consuming due to the heavy calculations required for the large volume of high-resolution image data. To address this problem, a portable laser speckle perfusion imaging system based on a digital signal processor (DSP), together with an algorithm suited to the DSP, is described. With a highly integrated DSP and this algorithm, we have markedly reduced the size, weight and energy consumption of the system while preserving high processing speed. In vivo experiments demonstrate that our portable laser speckle perfusion imaging system can obtain blood flow images at 25 frames per second with a resolution of 640 × 480 pixels. The portable and lightweight design makes it adaptable to a wide variety of settings such as the research laboratory, operating room, ambulance, and even disaster sites.
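The DSP-specific implementation is not reproduced here. The sketch below shows the underlying spatial speckle-contrast computation, K equal to the local standard deviation divided by the local mean over a small window, which is the per-frame workload an embedded processor has to sustain at 25 frames per second; the window size and the synthetic frame are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw_frame, window=7):
    """Spatial laser speckle contrast K = sigma / mean over a sliding window."""
    frame = raw_frame.astype(float)
    mean = uniform_filter(frame, window)
    mean_sq = uniform_filter(frame * frame, window)
    variance = np.clip(mean_sq - mean * mean, 0.0, None)
    return np.sqrt(variance) / np.clip(mean, 1e-9, None)

rng = np.random.default_rng(11)
frame = rng.poisson(lam=50, size=(480, 640)).astype(float)  # stand-in for a raw speckle frame
K = speckle_contrast(frame)   # lower K generally corresponds to higher flow (more blurring)
```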
High-density Schottky barrier IRCCD sensors for remote sensing applications
NASA Astrophysics Data System (ADS)
Elabd, H.; Tower, J. R.; McCarthy, B. M.
1983-01-01
It is pointed out that the ambitious goals envisaged for the next generation of space-borne sensors challenge the state of the art in solid-state imaging technology. Studies are being conducted with the aim of providing focal plane array technology suitable for use in future Multispectral Linear Array (MLA) earth resource instruments. An important new technology for IR image sensors involves the use of monolithic Schottky barrier infrared charge-coupled device arrays. This technology is suitable for earth sensing applications in which moderate quantum efficiency and intermediate operating temperatures are required. This IR sensor can be fabricated using standard integrated circuit (IC) processing techniques, and it is possible to employ commercial IC-grade silicon. For this reason, it is feasible to construct Schottky barrier area and line arrays with large numbers of elements and high-density designs. A Pd2Si Schottky barrier sensor for multispectral imaging in the 1 to 3.5 micron band is under development.
Image re-sampling detection through a novel interpolation kernel.
Hilal, Alaa
2018-06-01
Image re-sampling, involved in resizing and rotation transformations, is an essential building block of typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present two original contributions in this paper. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the most common interpolation kernels used in digital image re-sampling applications. Secondly, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process involves minimizing an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches.
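The proposed five-parameter kernel and its fitting procedure are not reproduced here. The sketch below only illustrates the general principle that re-sampling detectors exploit: interpolation leaves periodic correlations, visible as peaks in the Fourier spectrum of the residual of a fixed linear predictor. This is a classical simplified approach, not the paper's method, and the score is a crude illustrative heuristic.

```python
import numpy as np
from scipy.ndimage import zoom

def residual_spectrum(gray):
    """Spectrum of the residual of a fixed 4-neighbour linear predictor; re-sampled
    images tend to show strong periodic peaks away from the DC term."""
    g = gray.astype(float)
    predicted = 0.25 * (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
                        np.roll(g, 1, 1) + np.roll(g, -1, 1))
    spec = np.abs(np.fft.fftshift(np.fft.fft2(g - predicted)))
    return spec / spec.max()

def periodicity_score(spec, exclude=5):
    """Crude score: strongest off-centre peak relative to the median spectral level."""
    h, w = spec.shape
    s = spec.copy()
    s[h // 2 - exclude:h // 2 + exclude, w // 2 - exclude:w // 2 + exclude] = 0
    return float(s.max() / (np.median(s) + 1e-12))

rng = np.random.default_rng(12)
original = rng.random((256, 256))
resampled = zoom(original, 1.3, order=1)[:256, :256]   # bilinear up-sampling
print(periodicity_score(residual_spectrum(original)),
      periodicity_score(residual_spectrum(resampled)))  # resampled is usually higher
```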
NASA Astrophysics Data System (ADS)
Tian, Yu; Rao, Changhui; Wei, Kai
2008-07-01
Adaptive optics (AO) can only partially compensate for image blur caused by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. Frames suitable for blind deconvolution are selected from the recorded AO closed-loop frame series using a frame selection technique, and multi-frame blind deconvolution is then performed. No prior knowledge is used in the blind deconvolution except for the positivity constraint. The use of multiple frames improves the stability and convergence of the blind deconvolution algorithm. The method was applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with a 61-element adaptive optics system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.
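The sketch below illustrates only the frame-selection step: ranking recorded AO closed-loop frames by a simple sharpness metric and keeping the best ones for subsequent multi-frame blind deconvolution. The deconvolution itself is not reproduced, and the gradient-energy metric and keep-fraction are assumptions rather than the authors' exact criterion.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpness(frame):
    """Simple sharpness metric: mean squared intensity gradient."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(gy * gy + gx * gx))

def select_frames(frames, keep_fraction=0.2):
    """Return the best `keep_fraction` of frames, ranked by sharpness."""
    scores = np.array([sharpness(f) for f in frames])
    n_keep = max(1, int(round(keep_fraction * len(frames))))
    best = np.argsort(scores)[::-1][:n_keep]
    return [frames[i] for i in best], scores

# Toy usage: frames with varying amounts of blur (simulated by Gaussian smoothing).
rng = np.random.default_rng(13)
truth = rng.random((128, 128))
frames = [gaussian_filter(truth, sigma) + 0.01 * rng.normal(size=truth.shape)
          for sigma in rng.uniform(0.5, 4.0, size=30)]
selected, scores = select_frames(frames, keep_fraction=0.2)
```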
Imaging of breast cancer with mid- and long-wave infrared camera.
Joro, R; Lääperi, A-L; Dastidar, P; Soimakallio, S; Kuukasjärvi, T; Toivonen, T; Saaristo, R; Järvenpää, R
2008-01-01
In this novel study, the breasts of 15 women with palpable breast cancer were preoperatively imaged with three technically different infrared (IR) cameras - microbolometer (MB), quantum well (QWIP) and photovoltaic (PV) - to compare their ability to differentiate breast cancer from normal tissue. The IR images were processed, and the data for frequency analysis were collected from dynamic IR images by pixel-based analysis; a selectively windowed regional analysis was then carried out on each image, exploiting the vasomotor and cardiogenic frequency differences that the angiogenesis and nitric oxide production of cancer tissue cause relative to normal tissue. Our results show that the GaAs QWIP camera and the InSb PV camera demonstrate the frequency difference between normal and cancerous breast tissue, the PV camera more clearly. With selected image processing operations, more detailed frequency analyses could be applied to the suspicious area. The MB camera was not suitable for tissue differentiation, as the difference between noise and effective signal was unsatisfactory.
Networked vision system using a Prolog controller
NASA Astrophysics Data System (ADS)
Batchelor, B. G.; Caton, S. J.; Chatburn, L. T.; Crowther, R. A.; Miller, J. W. V.
2005-11-01
Prolog offers a very different style of programming compared to conventional languages; it can define object properties and abstract relationships in a way that Java, C, C++, etc. find awkward. In an accompanying paper, the authors describe how a distributed web-based vision system can be built using elements that may even be located on different continents. One particular system of this general type is described here. The top-level controller is a Prolog program, which operates one or more image processing engines. This type of function is natural to Prolog, since it is able to reason logically using symbolic (non-numeric) data. Although Prolog is not suitable for programming image processing functions directly, it is ideal for analysing the results derived by an image processor. This article describes the implementation of two systems, in which a Prolog program controls several image processing engines, a simple robot (a pneumatic pick-and-place arm), LED illumination modules and various mains-powered devices.
Using photoshop filters to create anatomic line-art medical images.
Kirsch, Jacobo; Geller, Brian S
2006-08-01
There are multiple ways to obtain anatomic drawings suitable for publication or presentations. This article demonstrates how to use Photoshop to alter digital radiologic images to create line-art illustrations in a quick and easy way. We present two simple-to-use methods; however, not every image can be adequately transformed, and personal preferences and specific changes need to be applied to each image to obtain the desired result. Medical illustrators have always played a major role in radiology and the medical education process. Whether used to teach a complex surgical or radiologic procedure, to define typical or atypical patterns of the spread of disease, or to illustrate normal or aberrant anatomy, medical illustration significantly affects learning. However, if you are not an accomplished illustrator, the alternatives can be expensive (contacting a professional medical illustrator or buying an already existing stock of digital images) or simply not applicable to what you are trying to communicate. The purpose of this article is to demonstrate how Photoshop (Adobe Systems, San Jose, CA) can be used to alter digital radiologic images to create line-art illustrations in a quick, inexpensive, and easy way in preparation for electronic presentations and publication.
Parallel processing approach to transform-based image coding
NASA Astrophysics Data System (ADS)
Normile, James O.; Wright, Dan; Chu, Ken; Yeh, Chia L.
1991-06-01
This paper describes a flexible parallel processing architecture designed for use in real-time video processing. The system consists of floating-point DSP processors connected to each other via fast serial links; each processor has access to a globally shared memory. A multiple-bus architecture in combination with a dual-ported memory allows communication with a host control processor. The system has been applied to prototyping of video compression and decompression algorithms. The decomposition of transform-based algorithms for decompression into a form suitable for parallel processing is described. A technique for automatic load balancing among the processors is developed and discussed, and results are presented with image statistics and data rates. Finally, techniques for accelerating the system throughput are analyzed and results from the application of one such modification are described.
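The block-wise nature of transform coding is what makes this decomposition natural: each block's transform is independent, so blocks can be farmed out to separate processors. The sketch below illustrates the idea with Python's multiprocessing and SciPy's DCT; it is a generic illustration, not the paper's DSP implementation, and the 8x8 block size and worker count are assumptions.

    import numpy as np
    from multiprocessing import Pool
    from scipy.fft import dctn

    def dct_block(block):
        # 2D type-II DCT of a single block, the core operation of
        # transform-based image coding.
        return dctn(block, norm='ortho')

    def parallel_block_dct(image, block=8, workers=4):
        # Split the image into independent blocks and transform them in
        # parallel; the results come back in raster order.
        h, w = image.shape
        blocks = [image[i:i + block, j:j + block]
                  for i in range(0, h, block)
                  for j in range(0, w, block)]
        with Pool(workers) as pool:
            return pool.map(dct_block, blocks)

    if __name__ == '__main__':
        coeffs = parallel_block_dct(np.random.rand(64, 64))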
Preparing images for publication: part 2.
Bengel, Wolfgang; Devigus, Alessandro
2006-08-01
The transition from conventional to digital photography presents many advantages for authors and photographers in the field of dentistry, but also many complexities and potential problems. No uniform procedures for authors and publishers exist at present for producing high-quality dental photographs. This two-part article aims to provide guidelines for preparing images for publication and improving communication between these two parties. Part 1 provided information about basic color principles, factors that can affect color perception, and digital color management. Part 2 describes the camera setup, discusses how to take a photograph suitable for publication, and outlines steps for the image editing process.
Impact of remote sensing upon the planning, management, and development of water resources
NASA Technical Reports Server (NTRS)
Castruccio, P. A.; Loats, H. L.; Fowler, T. R.; Frech, S. L.
1975-01-01
Principal water resources users were surveyed to determine the impact of remote data streams on hydrologic computer models. Analysis of responses demonstrated that: most water resources effort suitable to remote sensing inputs is conducted through federal agencies or through federally stimulated research; and, most hydrologic models suitable to remote sensing data are federally developed. Computer usage by major water resources users was analyzed to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project the future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era.
Photoacoustic imaging of lymphatic pumping
NASA Astrophysics Data System (ADS)
Forbrich, Alex; Heinmiller, Andrew; Zemp, Roger J.
2017-10-01
The lymphatic system is responsible for fluid homeostasis and immune cell trafficking and has been implicated in several diseases, including obesity, diabetes, and cancer metastasis. Despite its importance, the lack of suitable in vivo imaging techniques has hampered our understanding of the lymphatic system. This is, in part, due to the limited contrast of lymphatic fluids and structures. Photoacoustic imaging, in combination with optically absorbing dyes or nanoparticles, has great potential for noninvasively visualizing the lymphatic vessels deep in tissues. Multispectral photoacoustic imaging is capable of separating the components; however, the slow wavelength switching speed of most laser systems is inadequate for imaging lymphatic pumping without motion artifacts being introduced into the processed images. We investigate two approaches for visualizing lymphatic processes in vivo. First, single-wavelength differential photoacoustic imaging is used to visualize lymphatic pumping in the hindlimb of a mouse in real time. Second, a fast-switching multiwavelength photoacoustic imaging system was used to assess the propulsion profile of dyes through the lymphatics in real time. These approaches may have profound impacts in noninvasively characterizing and investigating the lymphatic system.
NASA Astrophysics Data System (ADS)
Ilovitsh, Tali; Ilovitsh, Asaf; Weiss, Aryeh M.; Meir, Rinat; Zalevsky, Zeev
2017-02-01
Optical sectioning microscopy can provide highly detailed three-dimensional (3D) images of biological samples. However, it requires the acquisition of many images per volume, and is therefore time consuming and may not be suitable for live-cell 3D imaging. We propose the use of the modified Gerchberg-Saxton phase retrieval algorithm to enable full 3D imaging of a gold-nanoparticle-tagged sample using only two images. The reconstructed field is free-space propagated to all other focus planes in post-processing, and the resulting 2D z-stack is merged to create a 3D image of the sample with high fidelity. Because the phase retrieval is applied to nanoparticles, the ambiguities typical of the Gerchberg-Saxton algorithm are eliminated. The proposed concept is then further extended to the tracking of single fluorescent particles within a three-dimensional (3D) cellular environment, based on image processing algorithms that can significantly increase the localization accuracy of the 3D point spread function with respect to regular Gaussian fitting. All proposed concepts are validated both on simulated data and experimentally.
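For reference, the classical two-plane Gerchberg-Saxton iteration alternates between enforcing the measured amplitude in the object plane and in the Fourier plane while keeping the current phase estimate. The sketch below shows that textbook form, not the authors' modified variant.

    import numpy as np

    def gerchberg_saxton(amp_object, amp_fourier, iterations=200):
        # Start from a random phase, then alternately enforce the measured
        # amplitudes in the Fourier plane and in the object plane.
        rng = np.random.default_rng(0)
        field = amp_object * np.exp(1j * 2 * np.pi * rng.random(amp_object.shape))
        for _ in range(iterations):
            far = np.fft.fft2(field)
            far = amp_fourier * np.exp(1j * np.angle(far))      # Fourier-plane constraint
            field = np.fft.ifft2(far)
            field = amp_object * np.exp(1j * np.angle(field))   # object-plane constraint
        return np.angle(field)   # recovered object-plane phase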
Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight
NASA Technical Reports Server (NTRS)
Suorsa, Raymond; Sridhar, Banavar
1991-01-01
A validation facility being used at the NASA Ames Research Center is described which is aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6-degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.
iMOSFLM: a new graphical interface for diffraction-image processing with MOSFLM
Battye, T. Geoff G.; Kontogiannis, Luke; Johnson, Owen; Powell, Harold R.; Leslie, Andrew G. W.
2011-01-01
iMOSFLM is a graphical user interface to the diffraction data-integration program MOSFLM. It is designed to simplify data processing by dividing the process into a series of steps, which are normally carried out sequentially. Each step has its own display pane, allowing control over parameters that influence that step and providing graphical feedback to the user. Suitable values for integration parameters are set automatically, but additional menus provide a detailed level of control for experienced users. The image display and the interfaces to the different tasks (indexing, strategy calculation, cell refinement, integration and history) are described. The most important parameters for each step and the best way of assessing success or failure are discussed. PMID:21460445
Review of image processing fundamentals
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1985-01-01
Image processing through convolution, transform coding, spatial frequency alteration, sampling, and interpolation is considered. It is postulated that convolution in one domain (real or frequency) is equivalent to multiplication in the other (frequency or real), and that the relative amplitudes of the Fourier components must be retained to reproduce any waveshape. It is suggested that all digital systems may be considered equivalent, with a frequency content approximately at the Nyquist limit, and with a Gaussian frequency response. An optimized cubic version of the interpolation continuum image is derived as a set of cubic splines. Pixel replication has been employed to enlarge the visible area of digital samples; however, suitable elimination of the extraneous high frequencies involved in the visible edges, by defocusing, is necessary to allow the underlying object represented by the data values to be seen.
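The convolution-multiplication equivalence mentioned above is easy to check numerically: zero-padding the FFTs makes circular convolution coincide with linear convolution, so the inverse FFT of the product of the transforms matches a direct convolution. A minimal check in NumPy:

    import numpy as np

    rng = np.random.default_rng(0)
    f = rng.random(256)
    h = rng.random(64)

    # Convolution in the real domain ...
    conv_direct = np.convolve(f, h)

    # ... equals multiplication in the frequency domain (with zero padding).
    n = len(f) + len(h) - 1
    conv_fft = np.real(np.fft.ifft(np.fft.fft(f, n) * np.fft.fft(h, n)))

    print(np.allclose(conv_direct, conv_fft))   # True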
A new approach towards image based virtual 3D city modeling by using close range photogrammetry
NASA Astrophysics Data System (ADS)
Singh, S. P.; Jain, K.; Mandla, V. R.
2014-05-01
A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, and close-range photogrammetry-based modeling. A study of the literature shows that, to date, there is no complete solution available to create a complete 3D city model using images, and these image-based methods also have limitations. This paper gives a new approach towards image-based virtual 3D city modeling by using close-range photogrammetry. The approach is divided into three sections: data acquisition, 3D data processing, and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required set of suitable video frames was selected for 3D processing. In the second section, a 3D model of the area was created based on close-range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding to and merging with other pieces of the larger area; scaling and alignment of the 3D model were performed. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created, which was then converted into a walk-through model or a movie. Most of the processing steps are automatic, so the method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries and high-resolution satellite images are costly; the proposed method is based only on simple video recording of the area and is thus suitable for 3D city modeling. A photo-realistic, scalable, geo-referenced virtual 3D city model is useful for various kinds of applications, such as planning for navigation, tourism, disaster management, transportation, municipalities, urban and environmental management, and the real-estate industry. Thus this study provides a good roadmap for the geomatics community to create photo-realistic virtual 3D city models by using close-range photogrammetry.
A modeling analysis program for the JPL Table Mountain Io sodium cloud data
NASA Technical Reports Server (NTRS)
Smyth, W. H.; Goldberg, B. A.
1986-01-01
Progress and achievements in the second year are discussed in three main areas: (1) data quality review of the 1981 Region B/C images; (2) data processing activities; and (3) modeling activities. The data quality review revealed that almost all 1981 Region B/C images are of sufficient quality to be valuable in the analyses of the JPL data set. In the second area, the major milestone reached was the successful development and application of complex image-processing software required to render the original image data suitable for modeling analysis studies. In the third area, the lifetime description of sodium atoms in the planet magnetosphere was improved in the model to include the offset dipole nature of the magnetic field as well as an east-west electric field. These improvements are important in properly representing the basic morphology as well as the east-west asymmetries of the sodium cloud.
Image fusion algorithm based on energy of Laplacian and PCNN
NASA Astrophysics Data System (ADS)
Li, Meili; Wang, Hongmei; Li, Yanjun; Zhang, Ke
2009-12-01
Owing to the global coupling and pulse synchronization characteristics of pulse coupled neural networks (PCNN), they have been proved suitable for image processing and successfully employed in image fusion. However, in almost all the image processing literature on PCNN, the linking strength of each neuron is assigned the same value, which is chosen by experiment. This is not consistent with the human vision system, in which the responses to regions with notable features are stronger than those to regions without notable features. It is more reasonable to derive the linking strength of each neuron from notable features rather than to use a single uniform value. In this paper, the energy of Laplacian (EOL) is used as the notable feature to obtain the value of the linking strength in the PCNN. Experimental results demonstrate that the proposed algorithm outperforms Laplacian-based, wavelet-based, and PCNN-based fusion algorithms.
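The energy of Laplacian is a simple local focus/saliency measure, so mapping it to a per-neuron linking strength takes only a few lines. The sketch below computes a windowed EOL and rescales it to a linking-strength range; the window size and the linear mapping to [beta_min, beta_max] are assumptions for illustration, not the paper's exact formulation.

    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def energy_of_laplacian(image, window=7):
        # Squared Laplacian response summed over a sliding window.
        lap = laplace(image.astype(float))
        return uniform_filter(lap ** 2, size=window) * window ** 2

    def linking_strength(image, beta_min=0.1, beta_max=1.0):
        # Map the normalized EOL to a per-pixel (per-neuron) linking strength.
        eol = energy_of_laplacian(image)
        eol = (eol - eol.min()) / (np.ptp(eol) + 1e-12)
        return beta_min + (beta_max - beta_min) * eol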
Blurred Star Image Processing for Star Sensors under Dynamic Conditions
Zhang, Weina; Quan, Wei; Guo, Lei
2012-01-01
The precision of star point location is important for identifying the star map and acquiring the aircraft attitude with star sensors. Under dynamic conditions, star images are not only corrupted by various noises but also blurred due to the angular rate of the star sensor. According to the different angular rates under dynamic conditions, a novel method is proposed in this article, which includes a denoising method based on an adaptive wavelet threshold and a restoration method for large angular rates. The adaptive threshold is adopted for denoising the star image when the angular rate is in the dynamic range. Then, the mathematical model of motion blur is deduced so as to restore the star map blurred by a large angular rate. Simulation results validate the effectiveness of the proposed method, which is suitable for blurred star image processing and practical for the attitude determination of satellites under dynamic conditions. PMID:22778666
High-resolution, continuous field-of-view (FOV), non-rotating imaging system
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance L. (Inventor); Stirbl, Robert C. (Inventor); Aghazarian, Hrand (Inventor); Padgett, Curtis W. (Inventor)
2010-01-01
A high resolution CMOS imaging system especially suitable for use in a periscope head. The imaging system includes a sensor head for scene acquisition, and a control apparatus inclusive of distributed processors and software for device control, data handling, and display. The sensor head encloses a combination of wide field-of-view CMOS imagers and narrow field-of-view CMOS imagers. Each bank of imagers is controlled by a dedicated processing module in order to handle information flow and image analysis of the outputs of the camera system. The imaging system also includes an automated or manually controlled display system and software for providing an interactive graphical user interface (GUI) that displays a full 360-degree field of view and allows the user or an automated ATR system to select regions for higher resolution inspection.
Detection of cracks on concrete surfaces by hyperspectral image processing
NASA Astrophysics Data System (ADS)
Santos, Bruno O.; Valença, Jonatas; Júlio, Eduardo
2017-06-01
All large infrastructures worldwide must have a suitable monitoring and maintenance plan, aiming to evaluate their behaviour and schedule interventions in a timely manner. In the particular case of concrete infrastructures, the detection and characterization of crack patterns is a major indicator of their structural response. In this scope, methods based on image processing have been applied and presented. Usually, these methods focus on image binarization followed by applications of mathematical morphology to identify cracks on the concrete surface. In most cases, publications focus on restricted areas of concrete surfaces and on a single crack. On site, the methods and algorithms have to deal with several factors that interfere with the results, namely dirt and biological colonization. Thus, the automation of a procedure for on-site characterization of crack patterns is of great interest. This advance may result in an effective tool to support maintenance strategies and intervention planning. This paper presents research based on the analysis and processing of hyperspectral images for the detection and classification of cracks on concrete structures. The objective of the study is to evaluate the applicability of several wavelengths of the electromagnetic spectrum for the classification of cracks in concrete surfaces. An image survey considering highly discretized wavelengths between 425 nm and 950 nm, with bandwidths of 25 nm, was performed on concrete specimens. The concrete specimens were produced with a crack pattern induced by applying a load under displacement control. The tests were conducted to simulate the usual on-site drawbacks; in this context, the surface of the specimen was subjected to biological colonization (leaves and moss). To evaluate the results and enhance crack patterns, a clustering method, namely the k-means algorithm, is applied. The research conducted makes it possible to assess the suitability of combining the k-means clustering algorithm with highly discretized hyperspectral images for crack detection on concrete surfaces, considering cracking combined with the most common concrete anomalies, namely biological colonization.
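Treating each pixel's spectrum as a feature vector makes the clustering step straightforward. A minimal sketch with scikit-learn is shown below; the number of clusters and the band count are illustrative assumptions, not values taken from the paper.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_hyperspectral(cube, n_clusters=3):
        # cube: (rows, cols, bands) hyperspectral image, e.g. bands sampled
        # every 25 nm between 425 nm and 950 nm. Each pixel spectrum is
        # clustered with k-means; one cluster is expected to gather crack
        # pixels, others background, dirt, or biological colonization.
        rows, cols, bands = cube.shape
        spectra = cube.reshape(-1, bands).astype(float)
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(spectra)
        return labels.reshape(rows, cols)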
NASA Astrophysics Data System (ADS)
Liu, Tao; Zhang, Wei; Yan, Shaoze
2015-10-01
In this paper, a multi-scale image enhancement algorithm based on low-pass filtering and nonlinear transformation is proposed for infrared testing images of de-bonding defects in solid propellant rocket motors. Infrared testing images, which exhibit high noise levels and low contrast, are the basis for identifying defects and calculating defect size. To improve the quality of the infrared image, and according to the distribution properties of the detection image, the approximation coefficients at a suitable decomposition level of the stationary wavelet transform are processed by index low-pass filtering using the Fourier transform; a nonlinear transformation is then applied to further improve the image contrast. To verify the validity of the algorithm, it was applied to infrared testing images of two specimens with de-bonding defects, one made of a high-strength steel and the other of a carbon fiber composite. The results show that, in the images processed by the proposed algorithm, most of the noise is eliminated and the contrast between defect areas and normal areas is greatly improved; in addition, continuous defect edges can be extracted from the binarized processed image. All of this demonstrates the validity of the algorithm. The paper thus provides a well-performing image enhancement algorithm for infrared thermography.
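A much simplified sketch of the two processing stages, omitting the stationary wavelet decomposition, is given below: a Gaussian low-pass transfer function applied in the Fourier domain followed by a nonlinear (gamma) stretch. The transfer function shape, cutoff, and gamma value are illustrative assumptions, not the paper's exact choices.

    import numpy as np

    def lowpass_and_stretch(image, cutoff=0.1, gamma=0.5):
        # (1) Low-pass filtering in the Fourier domain.
        img = image.astype(float)
        u = np.fft.fftfreq(img.shape[0])[:, None]
        v = np.fft.fftfreq(img.shape[1])[None, :]
        transfer = np.exp(-(u ** 2 + v ** 2) / (2.0 * cutoff ** 2))
        smoothed = np.real(np.fft.ifft2(np.fft.fft2(img) * transfer))
        # (2) Nonlinear (gamma) transformation on the normalized result
        #     to boost contrast in the low-intensity range.
        smoothed -= smoothed.min()
        smoothed /= smoothed.max() + 1e-12
        return smoothed ** gamma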
"Hungry Eyes": Visual Processing of Food Images in Adults with Prader-Willi Syndrome
ERIC Educational Resources Information Center
Key, A. P. F.; Dykens, E. M.
2008-01-01
Background: Prader-Willi syndrome (PWS) is a genetic disorder associated with intellectual disabilities, compulsivity, hyperphagia and increased risks of life-threatening obesity. Food preferences in people with PWS are well documented, but research has yet to focus on other properties of food in PWS, including composition and suitability for…
Ultrafast high-repetition imaging of fuel sprays using picosecond fiber laser.
Purwar, Harsh; Wang, Hongjie; Tang, Mincheng; Idlahcen, Saïd; Rozé, Claude; Blaisot, Jean-Bernard; Godin, Thomas; Hideur, Ammar
2015-12-28
Modern diesel injectors operate at very high injection pressures of about 2000 bar, resulting in injection velocities as high as 700 m/s near the nozzle outlet. In order to better predict the behavior of the atomization process at such high pressures, high-resolution spray images at high repetition rates must be recorded. However, due to the extremely high velocity in the near-nozzle region, high-speed cameras cannot avoid blurring of the structures in the spray images because of their exposure time. Ultrafast imaging featuring ultra-short laser pulses to freeze the motion of the spray appears as a well-suited solution to overcome this limitation. However, most commercial high-energy ultrafast sources are limited to repetition rates of a few kHz. In the present work, we report the development of a custom-designed picosecond fiber laser generating ∼ 20 ps pulses with an average power of 2.5 W at a repetition rate of 8.2 MHz, suitable for high-speed imaging of high-pressure fuel jets. This fiber source has been proof-tested by obtaining backlight images of diesel sprays issued from a single-orifice injector at an injection pressure of 300 bar. We observed a clear improvement in image resolution compared to standard white-light illumination. In addition, the compactness and stability against perturbations of our fiber laser system make it particularly suitable for harsh experimental conditions.
Global image analysis to determine suitability for text-based image personalization
NASA Astrophysics Data System (ADS)
Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Bouman, Charles A.; Allebach, Jan P.
2012-03-01
Lately, image personalization is becoming an interesting topic. Images with variable elements such as text usually appear much more appealing to the recipients. In this paper, we describe a method to pre-analyze the image and automatically suggest to the user the most suitable regions within an image for text-based personalization. The method is based on input gathered from experiments conducted with professional designers. It has been observed that regions that are spatially smooth and regions with existing text (e.g. signage, banners, etc.) are the best candidates for personalization. This gives rise to two sets of corresponding algorithms: one for identifying smooth areas, and one for locating text regions. Furthermore, based on the smooth and text regions found in the image, we derive an overall metric to rate the image in terms of its suitability for personalization (SFP).
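Of the two region types, spatially smooth areas are the easiest to illustrate: a local-variance map thresholded at a low value flags candidate regions for overlaying personalized text. The window size and threshold below are illustrative assumptions, not the values derived from the designer experiments.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def smooth_region_mask(gray, window=15, var_threshold=5.0):
        # Local variance over a sliding window; low variance marks smooth
        # regions that are good candidates for text-based personalization.
        img = gray.astype(float)
        mean = uniform_filter(img, size=window)
        mean_sq = uniform_filter(img ** 2, size=window)
        local_var = np.maximum(mean_sq - mean ** 2, 0.0)
        return local_var < var_threshold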
Tissue Cartography: Compressing Bio-Image Data by Dimensional Reduction
Heemskerk, Idse; Streichan, Sebastian J
2017-01-01
High data volumes produced by state-of-the-art optical microscopes encumber research. Taking advantage of the laminar structure of many biological specimens we developed a method that reduces data size and processing time by orders of magnitude, while disentangling signal. The Image Surface Analysis Environment that we implemented automatically constructs an atlas of 2D images for arbitrary shaped, dynamic, and possibly multi-layered “Surfaces of Interest”. Built-in correction for cartographic distortion assures no information on the surface is lost, making it suitable for quantitative analysis. We demonstrate our approach by application to 4D imaging of the D. melanogaster embryo and D. rerio beating heart. PMID:26524242
An Improved Filtering Method for Quantum Color Image in Frequency Domain
NASA Astrophysics Data System (ADS)
Li, Panchi; Xiao, Hong
2018-01-01
In this paper we investigate the use of the quantum Fourier transform (QFT) in the field of image processing. We consider QFT-based color image filtering operations and their applications in image smoothing, sharpening, and selective filtering using quantum frequency-domain filters. The underlying principle for constructing the proposed quantum filters is to use a quantum oracle to implement the filter function. Compared with existing methods, our method is not only suitable for color images but can also flexibly design notch filters. We provide the quantum circuit that implements the filtering task and present the results of several simulation experiments on color images. The major advantages of quantum frequency filtering lie in the exploitation of the efficient implementation of the quantum Fourier transform.
Moore, Christopher; Marchant, Thomas
2017-07-12
Reconstructive volumetric imaging permeates medical practice because of its apparently clear depiction of anatomy. However, the tell-tale signs of abnormality and its delineation for treatment demand that experts work at the threshold of visibility for hints of structure. Hitherto, a suitable assistive metric that chimes with clinical experience has been absent. This paper develops the complexity measure approximate entropy (ApEn) from its 1D physiological origin into a three-dimensional (3D) algorithm to fill this gap. The first 3D algorithm for this is presented in detail. Validation results for known test arrays are followed by a comparison of fan-beam and cone-beam x-ray computed tomography image volumes used in image-guided radiotherapy for cancer. The results show structural detail down to individual voxel level, the strength of which is calibrated by the ApEn process itself. The potential for application in machine-assisted manual interaction and automated image processing and interrogation, including radiomics associated with predictive outcome modeling, is discussed.
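For readers unfamiliar with the 1D origin of the measure, the sketch below implements Pincus' classical approximate entropy, which compares the regularity of length-m and length-(m+1) template matches within a tolerance r. The 3D extension described above generalizes the template to a voxel neighbourhood; only the original 1D form is shown here.

    import numpy as np

    def approximate_entropy(signal, m=2, r=None):
        x = np.asarray(signal, dtype=float)
        n = len(x)
        if r is None:
            r = 0.2 * x.std()           # common default tolerance

        def phi(m):
            # All overlapping templates of length m and their pairwise
            # Chebyshev distances; self-matches are included by convention.
            templates = np.array([x[i:i + m] for i in range(n - m + 1)])
            dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
            counts = np.mean(dist <= r, axis=1)
            return np.mean(np.log(counts))

        return phi(m) - phi(m + 1)

    # A regular signal scores lower than white noise.
    t = np.linspace(0, 10 * np.pi, 400)
    print(approximate_entropy(np.sin(t)), approximate_entropy(np.random.randn(400)))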
Apparatus for monitoring crystal growth
Sachs, Emanual M.
1981-01-01
A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body, wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can, in turn, be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
Method of monitoring crystal growth
Sachs, Emanual M.
1982-01-01
A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body, wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can, in turn, be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
Image Registration of High-Resolution Uav Data: the New Hypare Algorithm
NASA Astrophysics Data System (ADS)
Bahr, T.; Jin, X.; Lasica, R.; Giessel, D.
2013-08-01
Unmanned aerial vehicles play an important role in present-day civilian and military intelligence. Equipped with a variety of sensors, such as SAR imaging modes and E/O and IR sensor technology, they are, owing to their agility, suitable for many applications. Hence, the necessity arises to use fusion technologies and to develop them continuously. Here an exact image-to-image registration is essential. It serves as the basis for important image processing operations such as georeferencing, change detection, and data fusion. Therefore we developed the Hybrid Powered Auto-Registration Engine (HyPARE). HyPARE combines all available spatial reference information with a number of image registration approaches to improve the accuracy, performance, and automation of tie point generation and image registration. We demonstrate this approach by the registration of 39 still images from a high-resolution image stream, acquired with an Aeryon Photo3S™ camera on an Aeryon Scout micro-UAV™.
A software platform for the analysis of dermatology images
NASA Astrophysics Data System (ADS)
Vlassi, Maria; Mavraganis, Vlasios; Asvestas, Panteleimon
2017-11-01
The purpose of this paper is to present a software platform, developed in the Python programming environment, that can be used for the processing and analysis of dermatology images. The platform provides the capability to read a file that contains a dermatology image and supports image formats such as Windows bitmaps, JPEG, JPEG2000, portable network graphics, and TIFF. Furthermore, it provides suitable tools for selecting, either manually or automatically, a region of interest (ROI) on the image. The automated selection of a ROI includes filtering for smoothing the image and thresholding. The proposed software platform has a friendly and clear graphical user interface and could be a useful second-opinion tool for a dermatologist. Furthermore, it could be used to classify images from other anatomical parts, such as the breast or lung, after proper re-training of the classification algorithms.
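The automated ROI step (smoothing followed by thresholding) can be sketched in a few lines. The Gaussian filter width, the assumption that the lesion is darker than the surrounding skin, and the use of Otsu's threshold are illustrative choices, not details taken from the paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter, label
    from skimage.filters import threshold_otsu

    def automatic_roi(gray):
        # Smooth, threshold, and keep the largest connected component
        # as the candidate region of interest.
        smoothed = gaussian_filter(gray.astype(float), sigma=2.0)
        mask = smoothed < threshold_otsu(smoothed)
        labels, n = label(mask)
        if n == 0:
            return mask
        sizes = np.bincount(labels.ravel())[1:]
        return labels == (np.argmax(sizes) + 1)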
Milchenko, Mikhail; Snyder, Abraham Z; LaMontagne, Pamela; Shimony, Joshua S; Benzinger, Tammie L; Fouke, Sarah Jost; Marcus, Daniel S
2016-07-01
Neuroimaging research often relies on clinically acquired magnetic resonance imaging (MRI) datasets that can originate from multiple institutions. Such datasets are characterized by high heterogeneity of modalities and variability of sequence parameters. This heterogeneity complicates the automation of image processing tasks such as spatial co-registration and physiological or functional image analysis. Given this heterogeneity, conventional processing workflows developed for research purposes are not optimal for clinical data. In this work, we describe an approach called Heterogeneous Optimization Framework (HOF) for developing image analysis pipelines that can handle the high degree of clinical data non-uniformity. HOF provides a set of guidelines for configuration, algorithm development, deployment, interpretation of results and quality control for such pipelines. At each step, we illustrate the HOF approach using the implementation of an automated pipeline for Multimodal Glioma Analysis (MGA) as an example. The MGA pipeline computes tissue diffusion characteristics of diffusion tensor imaging (DTI) acquisitions, hemodynamic characteristics using a perfusion model of susceptibility contrast (DSC) MRI, and spatial cross-modal co-registration of available anatomical, physiological and derived patient images. Developing MGA within HOF enabled the processing of neuro-oncology MR imaging studies to be fully automated. MGA has been successfully used to analyze over 160 clinical tumor studies to date within several research projects. Introduction of the MGA pipeline improved image processing throughput and, most importantly, effectively produced co-registered datasets that were suitable for advanced analysis despite high heterogeneity in acquisition protocols.
NASA Astrophysics Data System (ADS)
Taylor, Christopher T.; Hutchinson, Simon; Salmon, Neil A.; Wilkinson, Peter N.; Cameron, Colin D.
2014-06-01
Image processing techniques can be used to improve the cost-effectiveness of future interferometric Passive MilliMetre Wave (PMMW) imagers. The implementation of such techniques will allow for a reduction in the number of collecting elements whilst ensuring adequate image fidelity is maintained. Various techniques have been developed by the radio astronomy community to enhance the imaging capability of sparse interferometric arrays. The most prominent are Multi-Frequency Synthesis (MFS) and non-linear deconvolution algorithms, such as the Maximum Entropy Method (MEM) and variations of the CLEAN algorithm. This investigation focuses on the implementation of these methods in the de facto standard for radio astronomy image processing, the Common Astronomy Software Applications (CASA) package, building upon the discussion presented in Taylor et al., SPIE 8362-0F. We describe the image conversion process into a CASA-suitable format, followed by a series of simulations that exploit the highlighted deconvolution and MFS algorithms assuming far-field imagery. The primary target application used for this investigation is an outdoor security scanner for soft-sided Heavy Goods Vehicles. A quantitative analysis of the effectiveness of the aforementioned image processing techniques is presented, with thoughts on the potential cost savings such an approach could yield. Consideration is also given to how the implementation of these techniques in CASA might be adapted to operate in a near-field target environment. This may enable much wider usability by the imaging community outside of radio astronomy and would thus be directly relevant to portal screening security systems in the microwave and millimetre wave bands.
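To make the deconvolution step concrete, the sketch below implements the textbook Högbom CLEAN loop: repeatedly find the brightest residual pixel, subtract a scaled, shifted copy of the dirty beam, and record the subtracted flux as a point-source component. This is a generic illustration, not CASA's implementation; the loop gain and stopping rule are assumptions, and the wrap-around introduced by np.roll is ignored for simplicity.

    import numpy as np

    def hogbom_clean(dirty_image, dirty_beam, gain=0.1, n_iter=500, threshold=0.0):
        residual = dirty_image.astype(float).copy()
        components = np.zeros_like(residual)
        beam_peak = np.unravel_index(np.argmax(dirty_beam), dirty_beam.shape)
        for _ in range(n_iter):
            peak = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
            flux = residual[peak]
            if np.abs(flux) <= threshold:
                break
            components[peak] += gain * flux
            shifted_beam = np.roll(np.roll(dirty_beam, peak[0] - beam_peak[0], axis=0),
                                   peak[1] - beam_peak[1], axis=1)
            residual -= gain * flux * shifted_beam
        return components, residual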
NASA Astrophysics Data System (ADS)
Zhao, Feng; Frietman, Edward E. E.; Han, Zhong; Chen, Ray T.
1999-04-01
A characteristic feature of a conventional von Neumann computer is that computing power is delivered by a single processing unit. Although increasing the clock frequency improves the performance of the computer, the switching speed of the semiconductor devices and the finite speed at which electrical signals propagate along the bus set the boundaries. Architectures containing large numbers of nodes can solve this performance dilemma, with the caveat that the main obstacles in designing such systems are the difficulties of finding solutions that guarantee efficient communication among the nodes. Exchanging data becomes a real bottleneck should all nodes be connected by a shared resource. Only optics, due to its inherent parallelism, could remove that bottleneck. Here, we explore a multi-faceted free-space image distributor to be used in optical interconnects for massively parallel processing. In this paper, physical and optical models of the image distributor are developed, from the diffraction theory of light waves to optical simulations. The general features and the performance of the image distributor are also described, and the new structure of an image distributor and the simulations for it are discussed. From the digital simulation and experiment, it is found that the multi-faceted free-space image distributing technique is quite suitable for free-space optical interconnection in massively parallel processing, and that the new structure of the multi-faceted free-space image distributor would perform better.
Medical-grade Sterilizable Target for Fluid-immersed Fetoscope Optical Distortion Calibration.
Nikitichev, Daniil I; Shakir, Dzhoshkun I; Chadebecq, François; Tella, Marcel; Deprest, Jan; Stoyanov, Danail; Ourselin, Sébastien; Vercauteren, Tom
2017-02-23
We have developed a calibration target for use with fluid-immersed endoscopes within the context of the GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery) project. One of the aims of this project is to engineer novel, real-time image processing methods for intra-operative use in the treatment of congenital birth defects, such as spina bifida and the twin-to-twin transfusion syndrome. The developed target allows for the sterility-preserving optical distortion calibration of endoscopes within a few minutes. Good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicing. This paper proposes a novel fabrication method to create an affordable, sterilizable calibration target suitable for use in a clinical setup. The method involves etching a calibration pattern by laser cutting a sandblasted stainless steel sheet. This target was validated using the camera calibration module provided by OpenCV, a state-of-the-art software library popular in the computer vision community.
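OpenCV's calibration module, referred to above, follows a standard workflow: detect the calibration pattern in several views, then estimate the camera matrix and distortion coefficients. The sketch below assumes a chessboard pattern for illustration; the actual GIFT-Surg target geometry and pattern-detection call may differ.

    import cv2
    import numpy as np

    def calibrate_from_images(image_paths, pattern_size=(9, 6), square_size=1.0):
        # Nominal 3D coordinates of the pattern corners (planar grid).
        objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size
        obj_points, img_points, shape = [], [], None
        for path in image_paths:
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            shape = gray.shape[::-1]
            found, corners = cv2.findChessboardCorners(gray, pattern_size)
            if found:
                obj_points.append(objp)
                img_points.append(corners)
        # Returns the RMS reprojection error, intrinsic matrix and distortion coefficients.
        rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, shape, None, None)
        return rms, K, dist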
Schmithorst, Vincent J; Brown, Rhonda Douglas
2004-07-01
The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.
Wang, Mi; Fan, Chengcheng; Yang, Bo; Jin, Shuying; Pan, Jun
2016-01-01
Satellite attitude accuracy is an important factor affecting the geometric processing accuracy of high-resolution optical satellite imagery. To address the problem that the accuracy of the Yaogan-24 remote sensing satellite's on-board attitude data processing is not high enough to meet its image geometry processing requirements, we developed an approach involving on-ground attitude data processing and verification against the digital orthophoto (DOM) and digital elevation model (DEM) of a geometric calibration field. The approach focuses on three modules: on-ground processing based on a bidirectional filter, overall weighted smoothing and fitting, and evaluation in the geometric calibration field. Our experimental results demonstrate that the proposed on-ground processing method is both robust and feasible, which ensures the reliability of the observation data quality and the convergence and stability of the parameter estimation model. In addition, both the Euler angle and the quaternion could be used to build a mathematical fitting model, while the orthogonal polynomial fitting model is more suitable for modeling the attitude parameters. Furthermore, compared to the image geometric processing results based on on-board attitude data, the accuracy of uncontrolled and relative geometric positioning of the imagery can be increased by about 50%. PMID:27483287
An evaluation of the directed flow graph methodology
NASA Technical Reports Server (NTRS)
Snyder, W. E.; Rajala, S. A.
1984-01-01
The applicability of the Directed Graph Methodology (DGM) to the design and analysis of special purpose image and signal processing hardware was evaluated. A special purpose image processing system was designed and described using DGM. The design, suitable for very large scale integration (VLSI), implements a region labeling technique. Two computer chips were designed, both using metal-nitride-oxide-silicon (MNOS) technology, as well as a functional system utilizing those chips to perform real-time region labeling. The system is described in terms of DGM primitives. As it is currently implemented, DGM is inappropriate for describing synchronous, tightly coupled, special purpose systems. The nature of the DGM formalism lends itself more readily to modeling networks of general purpose processors.
A Design Verification of the Parallel Pipelined Image Processings
NASA Astrophysics Data System (ADS)
Wasaki, Katsumi; Harai, Toshiaki
2008-11-01
This paper presents a case study of the design and verification of a parallel and pipelined image processing unit based on an extended Petri net, called a Logical Colored Petri net (LCPN). This net is suitable for Flexible Manufacturing System (FMS) modeling and for discussing structural properties. LCPN is another family of colored place/transition nets (CPN) with the addition of the following features: integer value assignment of marks, representation of firing conditions as formulae based on mark values, and coupling of output procedures with transition firing. Therefore, to study the behavior of a system modeled with this net, we provide a means of searching the reachability tree for markings.
Dense real-time stereo matching using memory efficient semi-global-matching variant based on FPGAs
NASA Astrophysics Data System (ADS)
Buder, Maximilian
2012-06-01
This paper presents a stereo image matching system that takes advantage of a global image matching method. The system is designed to provide depth information for mobile robotic applications; typical tasks of the proposed system are to assist in obstacle avoidance, SLAM and path planning. Mobile robots impose strong requirements on the size, energy consumption, reliability and output quality of the image matching subsystem. Currently available systems rely either on active sensors or on local stereo image matching algorithms. The former are only suitable in controlled environments, while the latter suffer from low-quality depth maps. Top-ranking quality results are only achieved by an iterative approach using global image matching and color segmentation techniques, which are computationally demanding and therefore difficult to execute in real time. Attempts have been made to still reach real-time performance with global methods by simplifying the routines, but the resulting depth maps end up being only about comparable to those of local methods. A semi-global algorithm of the same name was proposed earlier that shows both very good image matching results and relatively simple operations. A memory-efficient variant of the Semi-Global-Matching algorithm is reviewed and adopted for an implementation based on reconfigurable hardware; the implementation is suitable for real-time execution in the field of robotics. It is shown that the modified version of the efficient Semi-Global-Matching method delivers results equivalent to the original algorithm on the Middlebury dataset. The system has proven capable of processing VGA-sized images with a disparity resolution of 64 pixels at 33 frames per second on low-cost to mid-range hardware. If the focus is shifted to a higher image resolution, 1024×1024-sized stereo frames may be processed with the same hardware at 10 fps, with the disparity resolution settings unchanged. A mobile system that covers preprocessing, matching and interfacing operations is also presented.
Short cavity active mode locking fiber laser for optical sensing and imaging
NASA Astrophysics Data System (ADS)
Lee, Hwi Don; Han, Ga Hee; Jeong, Syung Won; Jeong, Myung Yung; Kim, Chang-Seok; Shin, Jun Geun; Lee, Byeong Ha; Eom, Tae Joong
2014-05-01
We demonstrate a highly linear wavenumber-swept active mode locking (AML) fiber laser for optical sensing and imaging without any wavenumber-space resampling process. In this all-electric AML wavenumber-swept mechanism, the conventional wavelength selection filter is eliminated; instead, a suitably programmed electric modulation signal is applied directly to the gain medium. Various types of wavenumber (or wavelength) tuning can be implemented because of the filter-less cavity configuration. We therefore successfully demonstrate a linearly wavenumber-swept AML fiber laser with 26.5 mW of output power and obtain an in-vivo OCT image at a 100 kHz sweep rate.
Image smoothing and enhancement via min/max curvature flow
NASA Astrophysics Data System (ADS)
Malladi, Ravikanth; Sethian, James A.
1996-03-01
We present a class of PDE-based algorithms suitable for a wide range of image processing applications. The techniques are applicable to both salt-and-pepper gray-scale noise and full-image continuous noise present in black-and-white images, gray-scale images, texture images and color images. At the core, the techniques rely on a level set formulation of evolving curves and surfaces and the viscosity in profile evolution. Essentially, the method consists of moving the iso-intensity contours in an image under curvature-dependent speed laws to achieve enhancement. Compared to existing techniques, our approach has several distinct advantages. First, it contains only one enhancement parameter, which in most cases is automatically chosen. Second, the scheme automatically stops smoothing at some optimal point; continued application of the scheme produces no further change. Third, the method is one of the fastest possible schemes based on a curvature-controlled approach.
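A rough numerical sketch of this kind of flow is given below: the image is evolved with speed equal to a clipped curvature times the gradient magnitude, with the min/max switch driven by a local average intensity. The switching rule and all step sizes here are illustrative approximations in the spirit of min/max flow, not the authors' exact scheme.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def curvature(u):
        # Curvature of the iso-intensity contours, kappa = div(grad u / |grad u|).
        uy, ux = np.gradient(u)
        uyy, uyx = np.gradient(uy)
        uxy, uxx = np.gradient(ux)
        num = uxx * uy ** 2 - 2.0 * ux * uy * uxy + uyy * ux ** 2
        return num / ((ux ** 2 + uy ** 2) ** 1.5 + 1e-12)

    def min_max_flow(image, n_iter=50, dt=0.1, radius=3):
        u = image.astype(float).copy()
        for _ in range(n_iter):
            k = curvature(u)
            uy, ux = np.gradient(u)
            grad_mag = np.sqrt(ux ** 2 + uy ** 2)
            local_mean = uniform_filter(u, size=2 * radius + 1)
            # Switch between min(kappa, 0) and max(kappa, 0) based on the
            # local average intensity relative to the pixel value.
            speed = np.where(local_mean < u, np.minimum(k, 0.0), np.maximum(k, 0.0))
            u += dt * speed * grad_mag
        return u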
Microlens array processor with programmable weight mask and direct optical input
NASA Astrophysics Data System (ADS)
Schmid, Volker R.; Lueder, Ernst H.; Bader, Gerhard; Maier, Gert; Siegordner, Jochen
1999-03-01
We present an optical feature extraction system with a microlens array processor. The system is suitable for online implementation of a variety of transforms, such as the Walsh transform and the DCT. Operating with incoherent light, our processor accepts direct optical input. Employing a sandwich-like architecture, we obtain a very compact design of the optical system. The key elements of the microlens array processor are a square array of 15 × 15 spherical microlenses on an acrylic substrate and a spatial light modulator serving as a transmissive mask. The light distribution behind the mask is imaged onto the pixels of a customized a-Si image sensor with adjustable gain. We obtain one output sample for each microlens image and its corresponding weight mask area as the summation of the transmitted intensity within one sensor pixel. The resulting architecture is very compact and robust, like a conventional camera lens, while incorporating a high degree of parallelism. We successfully demonstrate a Walsh transform into the spatial frequency domain as well as the implementation of a discrete cosine transform with digitized gray values. We provide results showing the transformation performance for both synthetic image patterns and images of natural texture samples. The extracted frequency features are suitable for neural classification of the input image. Other transforms and correlations can be implemented in real time, allowing adaptive optical signal processing.
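The digital counterpart of the optical masked summation is a separable 2D Walsh-Hadamard transform, which can be written as H * B * H^T for a block B and a normalized Hadamard matrix H. A minimal sketch follows; it uses the natural (Hadamard) ordering and omits the sequency reordering that a true Walsh transform would apply.

    import numpy as np
    from scipy.linalg import hadamard

    def walsh_hadamard_2d(block):
        # 2D Walsh-Hadamard transform of a square block whose side is a
        # power of two, computed as H @ block @ H.T with a normalized
        # Hadamard matrix (natural ordering).
        n = block.shape[0]
        H = hadamard(n) / np.sqrt(n)
        return H @ block.astype(float) @ H.T

    coeffs = walsh_hadamard_2d(np.random.rand(16, 16))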
Solar imaging with a Starlight X-Press CCD camera.
NASA Astrophysics Data System (ADS)
Bernhard, K.
1997-03-01
To image the Sun, most amateur astronomers use the photographic method. In this article the author shows that a CCD camera is also a suitable tool for this task. In particular, the ability to see the result of focusing immediately on the screen, together with electronic processing, is very useful for obtaining sharp, high-contrast images of the Sun.
Wei, Liping; Doughan, Samer; Han, Yi; DaCosta, Matthew V.; Krull, Ulrich J.; Ho, Derek
2014-01-01
Organic fluorophores and quantum dots are ubiquitous as contrast agents for bio-imaging and as labels in bioassays to enable the detection of biological targets and processes. Upconversion nanoparticles (UCNPs) offer a different set of opportunities as labels in bioassays and for bioimaging. UCNPs are excited at near-infrared (NIR) wavelengths where biological molecules are optically transparent, and their luminescence in the visible and ultraviolet (UV) wavelength range is suitable for detection using complementary metal-oxide-semiconductor (CMOS) technology. These nanoparticles provide multiple sharp emission bands, long lifetimes, tunable emission, high photostability, and low cytotoxicity, which render them particularly useful for bio-imaging applications and multiplexed bioassays. This paper surveys several key concepts surrounding upconversion nanoparticles and the systems that detect and process the corresponding luminescence signals. The principle of photon upconversion, tuning of emission wavelengths, UCNP bioassays, and UCNP time-resolved techniques are described. Electronic readout systems for signal detection and processing suitable for UCNP luminescence using CMOS technology are discussed. This includes recent progress in miniaturized detectors, integrated spectral sensing, and high-precision time-domain circuits. Emphasis is placed on the physical attributes of UCNPs that map strongly to the technical features that CMOS devices excel in delivering, exploring the interoperability between the two technologies. PMID:25211198
Secure distribution for high resolution remote sensing images
NASA Astrophysics Data System (ADS)
Liu, Jin; Sun, Jing; Xu, Zheng Q.
2010-09-01
The use of remote sensing images collected by space platforms is becoming more and more widespread. The increasing value of space data and its use in critical scenarios call for the adoption of proper security measures to protect these data against unauthorized access and fraudulent use. In this paper, based on the characteristics of remote sensing image data and application requirements for secure distribution, a secure distribution method is proposed, comprising user and region classification, hierarchical control and key generation, and region-based multi-level encryption. The combination of the three parts makes it possible to distribute the same multi-level-encrypted remote sensing images to users with different permissions through multicast, while users with different permissions obtain information of different degrees after decryption with their own keys. This meets the access control and security needs of the distribution of high-resolution remote sensing images well. The experimental results prove the effectiveness of the proposed method, which is suitable for practical use in the secure transmission of remote sensing images containing confidential information over the Internet.
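Hierarchical key generation of the kind mentioned above is often built from a one-way function, so that a higher-level key can re-derive the keys below it but not the other way round. The sketch below shows such a generic hash-chain construction; it is an assumption for illustration, not the paper's actual key scheme, and the labels and master key are placeholders.

    import hashlib

    def derive_key(parent_key: bytes, label: str) -> bytes:
        # One-way derivation: child = SHA-256(parent || label). Knowing the
        # child key does not reveal the parent key.
        return hashlib.sha256(parent_key + label.encode('utf-8')).digest()

    master = b'\x00' * 32                       # placeholder master key
    level1 = derive_key(master, 'permission-level-1')
    level2 = derive_key(level1, 'permission-level-2')
    region_key = derive_key(level2, 'region-07')   # key for one image region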
Exemplar-Based Image Inpainting Using a Modified Priority Definition.
Deng, Liang-Jian; Huang, Ting-Zhu; Zhao, Xi-Le
2015-01-01
Exemplar-based algorithms are a popular technique for image inpainting. They have two important phases: deciding the filling-in order and selecting good exemplars. Traditional exemplar-based algorithms search for suitable patches in the source regions to fill in the missing parts, but they face a problem: improper selection of exemplars. To alleviate this problem, we introduce an independent strategy by investigating the patch propagation process. We first define a new separated priority definition to propagate geometry and then synthesize image textures, aiming to recover image geometry and textures well. In addition, an automatic algorithm is designed to estimate the steps for the new separated priority definition. Compared with some competitive approaches, the new priority definition recovers image geometry and textures well.
Exemplar-Based Image Inpainting Using a Modified Priority Definition
Deng, Liang-Jian; Huang, Ting-Zhu; Zhao, Xi-Le
2015-01-01
Exemplar-based algorithms are a popular technique for image inpainting. They have two important phases: deciding the filling-in order and selecting good exemplars. Traditional exemplar-based algorithms search for suitable patches in the source region to fill in the missing parts, but they suffer from improper selection of exemplars. To address this problem, we introduce an independent strategy based on an investigation of how patches propagate. We first define a new separated priority definition that propagates geometry first and then synthesizes image textures, with the aim of recovering both image geometry and textures well. In addition, an automatic algorithm is designed to estimate the steps for the new separated priority definition. Compared with several competitive approaches, the new priority definition recovers image geometry and textures well. PMID:26492491
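For orientation, the sketch below computes the classical fill-front priority P(p) = C(p)·D(p) (confidence times data term) that exemplar-based inpainting builds on; the separated priority proposed in this paper modifies that scheme and is not reproduced here. All names and the patch size are illustrative.

```python
# Classical exemplar-based priority on the fill front (confidence x data term).
import numpy as np

def fill_front(mask):
    """Missing-region pixels that touch the known region (4-neighbourhood)."""
    known = ~mask
    neigh = np.zeros_like(mask)
    neigh[1:, :] |= known[:-1, :]
    neigh[:-1, :] |= known[1:, :]
    neigh[:, 1:] |= known[:, :-1]
    neigh[:, :-1] |= known[:, 1:]
    return mask & neigh

def priorities(gray, mask, confidence, half=4):
    """Return {(y, x): priority} for fill-front pixels of a grayscale image."""
    gy, gx = np.gradient(np.where(mask, 0.0, gray))   # gradient on known data
    ny, nx = np.gradient(mask.astype(float))          # normal of the fill front
    pr = {}
    for y, x in zip(*np.nonzero(fill_front(mask))):
        patch = (slice(max(y - half, 0), y + half + 1),
                 slice(max(x - half, 0), x + half + 1))
        c = confidence[patch][~mask[patch]].sum() / mask[patch].size  # confidence term
        # data term: isophote (rotated gradient) projected onto the front normal
        d = abs(-gy[y, x] * ny[y, x] + gx[y, x] * nx[y, x]) + 1e-3
        pr[(y, x)] = c * d
    return pr
```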
Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas
2014-01-01
The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. Biotechnol. Bioeng. 2014;111: 504–517. © 2013 Wiley Periodicals, Inc. PMID:24037521
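A minimal sketch of the core segmentation idea described above, local contrast thresholding of a phase contrast image (the halo-correction step and the actual PHANTAST parameters are not reproduced; the kernel size and threshold below are placeholders):

```python
# Local-contrast thresholding for PCM: cellular regions have high local variation,
# whereas the background is locally flat.
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_segmentation(img, size=15, threshold=0.05):
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    contrast = std / np.maximum(mean, 1e-9)   # local coefficient of variation
    return contrast > threshold               # boolean mask of cellular objects

# Confluency is then simply the covered fraction of the field of view:
# confluency = local_contrast_segmentation(img).mean()
```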
Mathematical models used in segmentation and fractal methods of 2-D ultrasound images
NASA Astrophysics Data System (ADS)
Moldovanu, Simona; Moraru, Luminita; Bibicu, Dorin
2012-11-01
Mathematical models are widely used in biomedical computing. The data extracted from images using mathematical techniques are a "pillar" of scientific progress in experimental, clinical, biomedical, and behavioural research. This article deals with the representation of 2-D images and highlights the mathematical support for the segmentation operation and fractal analysis in ultrasound images. A large number of mathematical techniques are suitable for application during the image processing stage. The addressed topics cover edge-based segmentation, more precisely gradient-based edge detection and the active contour model, and region-based segmentation, namely the Otsu method. Another interesting mathematical approach consists of analyzing the images using the Box Counting Method (BCM) to compute the fractal dimension. The results of the paper provide explicit examples produced by various combinations of these methods.
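As a worked illustration of the box-counting step (a generic estimator, not the paper's exact procedure), the sketch below counts occupied boxes of a binary structure map at several scales and fits the slope of log N(s) versus log(1/s):

```python
# Box Counting Method (BCM) sketch for a binary edge/structure map.
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(max(int(blocks.any(axis=(1, 3)).sum()), 1))  # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope  # estimated fractal dimension

# Example: a filled square gives a dimension close to 2, a thin line close to 1.
```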
Study of the urban evolution of Brasilia with the use of LANDSAT data
NASA Technical Reports Server (NTRS)
Deoliveira, M. D. N. (Principal Investigator); Foresti, C.; Niero, M.; Parreiras, E. M. D. F.
1984-01-01
The urban growth of Brasilia over the last ten years is analyzed with special emphasis on the utilization of orbital remote sensing data and automatic image processing. The urban spatial structure and its temporal changes were examined in a comprehensive and dynamic way using MSS-LANDSAT images from June 1973, 1978 and 1983. To aid data interpretation, a registration algorithm implemented on the Interactive Multispectral Image Analysis System (IMAGE-100) was used to overlap the multitemporal images. Suitable digital filters, combined with the image overlap, allowed rapid identification of areas of possible urban growth and guided the field work. The results obtained permitted an evaluation of the urban growth of Brasilia, taking as reference the plan proposed for the construction of the city.
Content-addressable read/write memories for image analysis
NASA Technical Reports Server (NTRS)
Snyder, W. E.; Savage, C. D.
1982-01-01
The commonly encountered image analysis problems of region labeling and clustering are shown to be cases of the search-and-rename problem, which can be solved in parallel by a system architecture that is inherently suitable for VLSI implementation. This architecture is a novel form of content-addressable memory (CAM) which provides parallel search and update functions, reducing processing time to constant time per operation. It has been proposed in related investigations by Hall (1981) that, with VLSI, CAM-based structures with enhanced instruction sets for general purpose processing will be feasible.
Crone, Damien L; Bode, Stefan; Murawski, Carsten; Laham, Simon M
2018-01-01
A major obstacle for the design of rigorous, reproducible studies in moral psychology is the lack of suitable stimulus sets. Here, we present the Socio-Moral Image Database (SMID), the largest standardized moral stimulus set assembled to date, containing 2,941 freely available photographic images, representing a wide range of morally (and affectively) positive, negative and neutral content. The SMID was validated with over 820,525 individual judgments from 2,716 participants, with normative ratings currently available for all images on affective valence and arousal, moral wrongness, and relevance to each of the five moral values posited by Moral Foundations Theory. We present a thorough analysis of the SMID regarding (1) inter-rater consensus, (2) rating precision, and (3) breadth and variability of moral content. Additionally, we provide recommendations for use aimed at efficient study design and reproducibility, and outline planned extensions to the database. We anticipate that the SMID will serve as a useful resource for psychological, neuroscientific and computational (e.g., natural language processing or computer vision) investigations of social, moral and affective processes. The SMID images, along with associated normative data and additional resources are available at https://osf.io/2rqad/.
NASA Astrophysics Data System (ADS)
Tesařová, M.; Zikmund, T.; Kaucká, M.; Adameyko, I.; Jaroš, J.; Paloušek, D.; Škaroupka, D.; Kaiser, J.
2016-03-01
Imaging of the increasingly complex cartilage in vertebrate embryos is one of the key tasks of developmental biology. This is especially important for studying shape-organizing processes during initial skeletal formation and growth. Advanced imaging techniques that reflect biological needs give a powerful impulse to push the boundaries of biological visualization. Recently, techniques for contrasting tissues and organs have improved considerably, extending traditional 2D imaging approaches to 3D. X-ray micro computed tomography (μCT), which allows 3D imaging of biological objects including their internal structures with a resolution in the micrometer range, in combination with contrasting techniques appears to be the most suitable approach for non-destructive imaging of embryonic developing cartilage. Although there are many software-based ways to visualize 3D data sets, having a real solid model of the studied object may give novel opportunities to fully understand the shape-organizing processes in the developing body. In this feasibility study we demonstrate the full procedure of creating a real 3D object of the mouse embryo nasal capsule, i.e., staining, μCT scanning combined with advanced data processing, and 3D printing.
Comparison of ring artifact removal methods using flat panel detector based CT images
2011-01-01
Background Ring artifacts are the concentric rings superimposed on tomographic images, often caused by defective and insufficiently calibrated detector elements as well as by damaged scintillator crystals of the flat panel detector. They may also be generated by objects that attenuate X-rays very differently in different projection directions. Ring artifact reduction techniques reported in the literature so far can be broadly classified into two groups. One category of approaches is based on sinogram processing, also known as pre-processing techniques, while the other category performs processing on the 2-D reconstructed images and is recognized as post-processing techniques in the literature. The strengths and weaknesses of these categories of approaches are yet to be explored from a common platform. Method In this paper, a comparative study of the two categories of ring artifact reduction techniques, basically designed for multi-slice CT instruments, is presented from a common platform. For comparison, two representative algorithms from each of the two categories are selected from the published literature. A very recently reported state-of-the-art sinogram domain ring artifact correction method that classifies the ring artifacts according to their strength and then corrects the artifacts using class adaptive correction schemes is also included in this comparative study. The first sinogram domain correction method uses a wavelet based technique to detect the corrupted pixels and then estimates the responses of the bad pixels using a simple linear interpolation technique. The second sinogram based correction method performs all the filtering operations in the transform domain, i.e., in the wavelet and Fourier domains. On the other hand, the two post-processing based correction techniques operate on the polar transform domain of the reconstructed CT images. The first method extracts the ring artifact template vector using a homogeneity test and then corrects the CT images by subtracting the artifact template vector from the uncorrected images. The second post-processing based correction technique performs median and mean filtering on the reconstructed images to produce the corrected images. Results The performance of the compared algorithms has been tested using both quantitative and perceptual measures. For quantitative analysis, two different numerical performance indices are chosen. In addition, different types of artifact patterns, e.g., single/band rings, artifacts from defective and mis-calibrated detector elements, rings in highly structured and in hard objects, and rings from different flat-panel detectors, are analyzed to perceptually investigate the strengths and weaknesses of the five methods. An investigation has also been carried out to compare the efficacy of these algorithms in correcting the volume images from a cone beam CT with the parameters determined from one particular slice. Finally, the capability of each correction technique to accurately retain image information (e.g., a small object at the iso-center) in the corrected CT image has also been tested. Conclusions The results show that the performance of the algorithms is limited and none is fully suitable for correcting the different types of ring artifacts without introducing processing distortion to the image structure. To achieve diagnostic quality in the corrected slices, a combination of the two approaches (sinogram- and post-processing) can be used. The compared methods are also not suitable for correcting the volume images from a cone beam flat-panel detector based CT. PMID:21846411
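For concreteness, the sketch below shows one generic post-processing correction in the spirit of the polar-domain methods compared above; it is not a verbatim reimplementation of any of the five algorithms, the filter sizes are placeholders, and the inverse polar-to-Cartesian mapping is omitted.

```python
# Generic polar-domain ring correction: estimate the ring template as the angular
# median of a radial high-pass residual and subtract it from the polar image.
import numpy as np
from scipy.ndimage import median_filter
from skimage.transform import warp_polar

def remove_rings(slice_img, radius=None):
    radius = radius or min(slice_img.shape) // 2
    polar = warp_polar(slice_img, radius=radius)     # rows: angle, cols: radius
    smooth = median_filter(polar, size=(1, 21))      # radial low-pass per angle
    residual = polar - smooth
    template = np.median(residual, axis=0)           # ring profile versus radius
    return polar - template[None, :]                 # corrected image in polar domain
```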
Fundamental performance differences between CMOS and CCD imagers: Part II
NASA Astrophysics Data System (ADS)
Janesick, James; Andrews, James; Tower, John; Grygon, Mark; Elliott, Tom; Cheng, John; Lesser, Michael; Pinter, Jeff
2007-09-01
A new class of CMOS imagers that compete with scientific CCDs is presented. The sensors are based on deep depletion backside illuminated technology to achieve high near infrared quantum efficiency and low pixel cross-talk. The imagers deliver very low read noise suitable for single photon counting - Fano-noise limited soft x-ray applications. Digital correlated double sampling signal processing necessary to achieve low read noise performance is analyzed and demonstrated for CMOS use. Detailed experimental data products generated by different pixel architectures (notably 3TPPD, 5TPPD and 6TPG designs) are presented including read noise, charge capacity, dynamic range, quantum efficiency, charge collection and transfer efficiency and dark current generation. Radiation damage data taken for the imagers is also reported.
A Study of Flood Evacuation Center Using GIS and Remote Sensing Technique
NASA Astrophysics Data System (ADS)
Mustaffa, A. A.; Rosli, M. F.; Abustan, M. S.; Adib, R.; Rosli, M. I.; Masiri, K.; Saifullizan, B.
2016-07-01
This research demonstrates the use of Remote Sensing techniques and GIS to determine the suitability of evacuation centers. The study was conducted in the Batu Pahat area, which is regularly hit by a series of floods. Digital Elevation Model (DEM) data were obtained from the ASTER database and used to extract contour lines and elevation. A Landsat 8 image was used for classification purposes, such as producing a land use map. Remote Sensing incorporated with GIS techniques was used to determine suitable locations for evacuation centers from the contour map of flood affected areas in Batu Pahat. GIS was used to calculate the elevation of the area together with information about the surrounding terrain, the road access and the percentage of the affected area. The flood affected area map can indicate the suitability of a flood evacuation center under several levels of flooding. The suitability of evacuation centers was determined based on several criteria, and the existing data on the evacuation centers were analysed. From the analysis of the 16 evacuation centers listed, only 8 are suitable for use during an emergency situation. The suitability analysis was based on the location of each evacuation center and its road access toward the flood affected area. Ten new locations meeting the suitability criteria are proposed in the study area to facilitate the process of rescuing and evacuating flood victims to safer and more suitable locations. The results of this study will help in decision making processes and indirectly will help organizations such as fire-fighters and the Department of Social Welfare in their work. Thus, this study can contribute more towards society.
Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong
2014-12-01
The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database, because collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and providing the capability of dynamically selecting suitable training cases based on the performance levels of trainees and the characteristics of cases, are critical for developing an efficient training system. A novel approach is proposed to develop a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database. It provides a Content-Boosted Collaborative Filtering (CBCF) algorithm for predicting the difficulty level of each case for each trainee when selecting suitable cases to meet individual needs, and a diagnostic simulation tool that enables trainees to analyze and diagnose lung nodules with the help of an image processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that developing a personalized training system for the interpretation of lung nodules is needed and useful for enhancing the professional skills of trainees. The approach of developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
Three-dimensional imaging using phase retrieval with two focus planes
NASA Astrophysics Data System (ADS)
Ilovitsh, Tali; Ilovitsh, Asaf; Weiss, Aryeh; Meir, Rinat; Zalevsky, Zeev
2016-03-01
This work presents a technique for full 3D imaging of biological samples tagged with gold nanoparticles (GNPs) using only two images, rather than the many images per volume currently needed for 3D optical sectioning microscopy. The proposed approach is based on the Gerchberg-Saxton (GS) phase retrieval algorithm. The reconstructed field is free-space propagated to all other focus planes in post-processing, and the 2D z-stack is merged to create a 3D image of the sample with high fidelity. Because the phase retrieval is applied to nanoparticles, the ambiguities typical of the Gerchberg-Saxton algorithm are eliminated. In addition, since the method requires the capture of only two images, it is suitable for 3D live cell imaging. The proposed concept is presented and validated both on simulated data and experimentally.
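A compact sketch of the underlying two-plane Gerchberg-Saxton iteration with angular-spectrum free-space propagation is given below; the wavelength, pixel size and plane separation are placeholder values, and the GNP-specific constraints of the paper are not reproduced.

```python
# Two-plane Gerchberg-Saxton phase retrieval with angular-spectrum propagation.
import numpy as np

def angular_spectrum(field, dz, wavelength=532e-9, dx=0.1e-6):
    k = 2 * np.pi / wavelength
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, dx)
    fy = np.fft.fftfreq(ny, dx)
    FX, FY = np.meshgrid(fx, fy)
    kz = np.sqrt(np.maximum(k**2 - (2*np.pi*FX)**2 - (2*np.pi*FY)**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def gerchberg_saxton(amp1, amp2, dz, n_iter=50):
    """Recover the complex field at plane 1 from amplitudes measured at two planes."""
    field = amp1.astype(np.complex128)
    for _ in range(n_iter):
        f2 = angular_spectrum(field, dz)            # propagate to plane 2
        f2 = amp2 * np.exp(1j * np.angle(f2))       # enforce measured amplitude
        f1 = angular_spectrum(f2, -dz)              # propagate back to plane 1
        field = amp1 * np.exp(1j * np.angle(f1))    # enforce plane-1 amplitude
    return field

# The recovered field can then be numerically propagated to any other focus plane
# to build the z-stack, as described in the abstract.
```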
The ImageJ ecosystem: an open platform for biomedical image analysis
Schindelin, Johannes; Rueden, Curtis T.; Hiner, Mark C.; Eliceiri, Kevin W.
2015-01-01
Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available – from commercial to academic, special-purpose to Swiss army knife, small to large–but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts life science, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. PMID:26153368
Fast variogram analysis of remotely sensed images in HPC environment
NASA Astrophysics Data System (ADS)
Pesquer, Lluís; Cortés, Anna; Masó, Joan; Pons, Xavier
2013-04-01
Exploring and describing the spatial variation of images is one of the main applications of geostatistics to remote sensing. The variogram is a very suitable tool for carrying out this spatial pattern analysis. Variogram analysis is composed of two steps: empirical variogram generation and fitting a variogram model. Empirical variogram generation is a very quick procedure for most analyses of irregularly distributed samples, but the computation time increases quite significantly for remotely sensed images, because the number of samples (pixels) involved is usually huge (more than 30 million for a Landsat TM scene), depending mainly on the extent and spatial resolution of the images. In several remote sensing applications this type of analysis is repeated for each image, sometimes for hundreds of scenes and sometimes for each radiometric band (a high number in the case of hyperspectral images), so there is a need for a fast implementation. In order to reduce this high execution time, we developed a parallel solution for the variogram analyses. The solution adopted is the master/worker programming paradigm, in which the master process distributes and coordinates the tasks executed by the worker processes. The code is written in ANSI-C, using MPI (Message Passing Interface) as the message-passing library to communicate the master with the workers. This solution (ANSI-C + MPI) guarantees portability between different computer platforms. The High Performance Computing (HPC) environment is formed by 32 nodes, each with two Dual Core Intel(R) Xeon(R) 3.0 GHz processors and 12 GB of RAM, connected by integrated dual gigabit Ethernet. This IBM cluster is located in the research laboratory of the Computer Architecture and Operating Systems Department of the Universitat Autònoma de Barcelona. The performance results for a 15 km x 15 km subscene of the 198-31 path-row Landsat TM image are shown in Table 1. The proximity between the empirical speedup behaviour and the theoretical linear speedup confirms that the parallel design and implementation are suitable.
Table 1. Landsat TM subscene (15 km x 15 km).
N workers   Time (s)   Speedup
0           2975.03    -
2           2112.33    1.41
4           1067.45    2.79
8           534.18     5.57
12          357.54     8.32
16          269.00     11.06
20          216.24     13.76
24          186.31     15.97
Furthermore, very similar performance results are obtained for CASI images (hyperspectral and of finer spatial resolution than Landsat), shown in Table 2, demonstrating that the distributed load design is not defined and optimized for a single type of image, but is a flexible design that maintains good balance and scalability for a range of image dimensions.
Table 2. CASI image.
N workers   Time (s)   Speedup
0           5485.03    -
2           3847.47    1.43
4           1921.62    2.85
8           965.55     5.68
12          644.26     8.51
16          483.40     11.35
20          393.67     13.93
24          347.15     15.80
28          306.33     17.91
32          304.39     18.02
Finally, we conclude that this significant time reduction underlines the utility of distributed environments for processing large amounts of data such as remotely sensed images.
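As a language-agnostic illustration of the master/worker decomposition (a Python multiprocessing analogue, not the authors' ANSI-C + MPI code; only horizontal lags are computed for brevity), the master splits the image into row chunks, each worker accumulates the squared-difference sums per lag, and the master merges the partial sums into the semivariogram:

```python
# Master/worker empirical semivariogram sketch for a gridded image.
import numpy as np
from multiprocessing import Pool

LAGS = range(1, 11)   # horizontal pixel lags only, for simplicity

def partial_variogram(rows):
    sums = {h: [0.0, 0] for h in LAGS}
    for h in LAGS:
        d = rows[:, h:] - rows[:, :-h]
        sums[h][0] += float((d * d).sum())
        sums[h][1] += d.size
    return sums

def empirical_variogram(image, n_workers=4):
    chunks = np.array_split(image, n_workers, axis=0)     # master splits the work
    with Pool(n_workers) as pool:
        partials = pool.map(partial_variogram, chunks)    # workers compute partial sums
    gamma = {}
    for h in LAGS:                                        # master merges the results
        s = sum(p[h][0] for p in partials)
        n = sum(p[h][1] for p in partials)
        gamma[h] = s / (2.0 * n)                          # semivariance at lag h
    return gamma

if __name__ == "__main__":
    img = np.random.rand(1024, 1024)
    print(empirical_variogram(img))
```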
Barua, Animesh; Yellapa, Aparna; Bahr, Janice M; Machado, Sergio A; Bitterman, Pincas; Basu, Sanjib; Sharma, Sameer; Abramowicz, Jacques S
2015-07-01
Tumor-associated neoangiogenesis (TAN) is an early event in ovarian cancer (OVCA) development. Increased expression of vascular endothelial growth factor receptor 2 (VEGFR2) by TAN vessels presents a potential target for early detection by ultrasound imaging. The goal of this study was to examine the suitability of VEGFR2-targeted ultrasound contrast agents in detecting spontaneous OVCA in laying hens. Effects of VEGFR2-targeted contrast agents in enhancing the intensity of ultrasound imaging from spontaneous ovarian tumors in hens were examined in a cross-sectional study. Enhancement in the intensity of ultrasound imaging was determined before and after injection of VEGFR2-targeted contrast agents. All ultrasound images were digitally stored and analyzed off-line. Following scanning, ovarian tissues were collected and processed for histology and detection of VEGFR2-expressing microvessels. Enhancement in visualization of ovarian morphology was detected by gray-scale imaging following injection of VEGFR2-targeted contrast agents. Compared with pre-contrast, contrast imaging enhanced the intensities of ultrasound imaging significantly (p < 0.0001) irrespective of the pathological status of ovaries. In contrast to normal hens, the intensity of ultrasound imaging was significantly (p < 0.0001) higher in hens with early stage OVCA and increased further in hens with late stage OVCA. Higher intensities of ultrasound imaging in hens with OVCA were positively correlated with increased (p < 0.0001) frequencies of VEGFR2-expressing microvessels. The results of this study suggest that VEGFR2-targeted contrast agents enhance the visualization of spontaneous ovarian tumors in hens at early and late stages of OVCA. The laying hen may be a suitable model to test new imaging agents and develop targeted therapeutics. © The Author(s) 2014.
Automatic patient alignment system using 3D ultrasound.
Kaar, Marcus; Figl, Michael; Hoffmann, Rainer; Birkfellner, Wolfgang; Stock, Markus; Georg, Dietmar; Goldner, Gregor; Hummel, Johann
2013-04-01
Recent developments in radiation therapy such as intensity modulated radiotherapy (IMRT) or dose painting promise to provide better dose distribution on the tumor. For effective application of these methods, the exact positioning of the patient and the localization of the irradiated organ and surrounding structures are crucial. Especially with respect to the treatment of the prostate, ultrasound (US) allows differentiation between soft tissues and has therefore been applied in various repositioning systems, such as BAT or Clarity. The authors built a new system which uses 3D US at both sites, the CT room and the intervention room, and applies a 3D/3D US/US registration for automatic repositioning. In a first step the authors applied image preprocessing methods to prepare the US images for an optimal registration process. For the 3D/3D registration procedure five different metrics were evaluated. To find the image metric which fits best for a particular patient, three 3D US images were taken at the CT site and registered to each other. From these results a US registration error was calculated. The most successful image metric was then applied for the US/US registration process. The success of the whole repositioning method was assessed by taking the results of an ExacTrac system as the gold standard. The US/US registration error was found to be 2.99 ± 1.54 mm with respect to the Mattes mutual information metric (eleven patients), which proved to be the most suitable of the assessed metrics. For the complete repositioning chain the error amounted to 4.15 ± 1.20 mm (ten patients). The authors developed a system for patient repositioning which works automatically, without the need for user interaction, with an accuracy which appears suitable for clinical application.
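A hedged sketch of a 3D/3D rigid registration with the Mattes mutual information metric using SimpleITK is shown below as a stand-in for the US/US registration step described above; the authors' preprocessing, metric comparison and parameter choices are not reproduced, and the file names are placeholders.

```python
# 3D/3D rigid registration with Mattes mutual information (SimpleITK sketch).
import SimpleITK as sitk

def register_us_volumes(fixed, moving):
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.2)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                                 minStep=1e-4,
                                                 numberOfIterations=200)
    init = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(init, inPlace=False)
    return reg.Execute(fixed, moving)   # rigid transform mapping moving -> fixed

# fixed = sitk.ReadImage("us_ct_room.nii", sitk.sitkFloat32)      # hypothetical files
# moving = sitk.ReadImage("us_treatment_room.nii", sitk.sitkFloat32)
# transform = register_us_volumes(fixed, moving)
```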
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.
2017-08-01
Self-learning equivalent-convolutional neural structures (SLECNS) for auto-coding-decoding and image clustering are discussed. The SLECNS architectures and their spatially invariant equivalent models (SI EMs), which use the corresponding matrix-matrix procedures with basic operations of continuous logic and non-linear processing, are proposed. These SI EMs have several advantages, such as the ability to recognize image fragments with better efficiency and strong cross-correlation. The proposed method for clustering fragments with regard to their structural features is suitable not only for binary but also for color images, and combines self-learning with the formation of weighted clustered matrix-patterns. Its model is constructed and designed on the basis of recursive processing algorithms and the k-average method. The experimental results confirmed that larger images and 2D binary fragments with a large number of elements can be clustered. For the first time, the possibility of generalizing these models to the space-invariant case is shown. An experiment was carried out for an image with dimensions of 256x256 (a reference array) and fragments with dimensions of 7x7 and 21x21. The experiments, performed in the Mathcad software environment, showed that the proposed method is universal, converges well within a small number of iterations, maps easily onto the matrix structure, and confirmed its prospects. Thus, it is very important to understand the mechanisms of self-learning equivalence-convolutional clustering, the accompanying competitive processes in neurons, and the principles of neural auto-encoding-decoding and recognition with self-learned cluster patterns; these mechanisms rely on the algorithm and principles of non-linear processing of two-dimensional spatial image-comparison functions. The SI EMs can simply describe the signal processing during all training and recognition stages and are suitable for unipolar-coded multilevel signals. We show that the implementation of SLECNS based on known equivalentors or traditional correlators is possible if they are based on the proposed equivalental two-dimensional functions of image similarity. The clustering efficiency of such models and their implementation depend on the discriminant properties of the neural elements of the hidden layers. Therefore, the main model and architecture parameters and characteristics depend on the types of non-linear processing applied and on the function used for image comparison or for adaptive-equivalental weighing of input patterns. Real model experiments in Mathcad are demonstrated, which confirm that non-linear processing with equivalent functions allows the winning neurons to be determined and the weight matrix to be adjusted. Experimental results have shown that such models can be successfully used for auto- and hetero-associative recognition. They can also be used to explain some mechanisms known as "focus" and the "competing gain-inhibition concept". The SLECNS architecture and hardware implementations of its basic nodes based on multi-channel convolvers and correlators with time integration are proposed. The parameters and performance of such architectures are estimated.
Fast Segmentation From Blurred Data in 3D Fluorescence Microscopy.
Storath, Martin; Rickert, Dennis; Unser, Michael; Weinmann, Andreas
2017-10-01
We develop a fast algorithm for segmenting 3D images from linear measurements based on the Potts model (or piecewise constant Mumford-Shah model). To that end, we first derive suitable space discretizations of the 3D Potts model, which are capable of dealing with 3D images defined on non-cubic grids. Our discretization allows us to utilize a specific splitting approach, which results in decoupled subproblems of moderate size. The crucial point in the 3D setup is that the number of independent subproblems is so large that we can reasonably exploit the parallel processing capabilities of graphics processing units (GPUs). Our GPU implementation is up to 18 times faster than the sequential CPU version. This allows even large volumes to be processed in acceptable runtimes. As a further contribution, we extend the algorithm in order to deal with non-negativity constraints. We demonstrate the efficiency of our method for combined image deconvolution and segmentation on simulated data and on real 3D wide field fluorescence microscopy data.
Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel
2014-01-01
The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored for efficiently reducing the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects. PMID:25195849
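For illustration only, the sketch below reproduces in software two of the focal-plane primitives named above, block-wise pixelation (for privacy protection of a region) and integral image computation; the chip performs these in mixed-signal hardware, so this is purely a functional model with placeholder parameters.

```python
# Functional models of block-wise pixelation and integral image computation.
import numpy as np

def pixelate_region(img, y0, y1, x0, x1, block=8):
    """Replace each block of the region [y0:y1, x0:x1] by its mean value."""
    out = img.astype(np.float64).copy()
    roi = out[y0:y1, x0:x1]
    h, w = roi.shape
    h_c, w_c = (h // block) * block, (w // block) * block
    blocks = roi[:h_c, :w_c].reshape(h_c // block, block, w_c // block, block)
    means = blocks.mean(axis=(1, 3), keepdims=True)
    roi[:h_c, :w_c] = np.broadcast_to(means, blocks.shape).reshape(h_c, w_c)
    return out

def integral_image(img):
    """S[y, x] = sum of all pixels above and to the left of (y, x), inclusive."""
    return img.astype(np.float64).cumsum(axis=0).cumsum(axis=1)
```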
NASA Astrophysics Data System (ADS)
Ströhl, Florian; Wong, Hovy H. W.; Holt, Christine E.; Kaminski, Clemens F.
2018-01-01
Fluorescence anisotropy imaging microscopy (FAIM) measures the depolarization properties of fluorophores to deduce molecular changes in their environment. For successful FAIM, several design principles have to be considered and a thorough system-specific calibration protocol is paramount. One important calibration parameter is the G factor, which describes the system-induced errors for different polarization states of light. The determination and calibration of the G factor is discussed in detail in this article. We present a novel measurement strategy, which is particularly suitable for FAIM with high numerical aperture objectives operating in TIRF illumination mode. The method makes use of evanescent fields that excite the sample with a polarization direction perpendicular to the image plane. Furthermore, we have developed an ImageJ/Fiji plugin, AniCalc, for FAIM data processing. We demonstrate the capabilities of our TIRF-FAIM system by measuring β-actin polymerization in human embryonic kidney cells and in retinal neurons.
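For reference, the sketch below applies the general G-factor-corrected anisotropy relation r = (I∥ − G·I⊥)/(I∥ + 2·G·I⊥) per pixel; the evanescent-field G-factor calibration specific to this article is not reproduced, and the reference-dye estimate mentioned in the comment is only one common alternative, not the authors' method.

```python
# Per-pixel fluorescence anisotropy with G-factor correction.
import numpy as np

def anisotropy(i_par, i_per, g_factor):
    i_par = i_par.astype(np.float64)
    i_per = i_per.astype(np.float64)
    num = i_par - g_factor * i_per
    den = i_par + 2.0 * g_factor * i_per
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

# One common (non-TIRF) way to estimate G is from a freely rotating reference dye
# with known anisotropy r_ref: G = (i_par / i_per) * (1 - r_ref) / (1 + 2 * r_ref),
# averaged over a calibration image.
```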
Poly(vinyl alcohol) cryogel phantoms for use in ultrasound and MR imaging
NASA Astrophysics Data System (ADS)
Surry, K. J. M.; Austin, H. J. B.; Fenster, A.; Peters, T. M.
2004-12-01
Poly(vinyl alcohol) cryogel, PVA-C, is presented as a tissue-mimicking material, suitable for application in magnetic resonance (MR) imaging and ultrasound imaging. A 10% by weight poly(vinyl alcohol) in water solution was used to form PVA-C, which is solidified through a freeze-thaw process. The number of freeze-thaw cycles affects the properties of the material. The ultrasound and MR imaging characteristics were investigated using cylindrical samples of PVA-C. The speed of sound was found to range from 1520 to 1540 m s^-1, and the attenuation coefficients were in the range of 0.075-0.28 dB (cm MHz)^-1. T1 and T2 relaxation values were found to be 718-1034 ms and 108-175 ms, respectively. We also present applications of this material in an anthropomorphic brain phantom, a multi-volume stenosed vessel phantom and breast biopsy phantoms. Some suggestions are made for how best to handle this material in the phantom design and development process.
ExoSOFT: Exoplanet Simple Orbit Fitting Toolbox
NASA Astrophysics Data System (ADS)
Mede, Kyle; Brandt, Timothy D.
2017-08-01
ExoSOFT provides orbital analysis of exoplanets and binary star systems. It fits any combination of astrometric and radial velocity data, and offers four parameter space exploration techniques, including MCMC. It is packaged with an automated set of post-processing and plotting routines to summarize results, and is suitable for performing orbital analysis during surveys with new radial velocity and direct imaging instruments.
A Video Lecture and Lab-Based Approach for Learning of Image Processing Concepts
ERIC Educational Resources Information Center
Chiu, Chiung-Fang; Lee, Greg C.
2009-01-01
The current practice of traditional in-class lecture for learning computer science (CS) in the high schools of Taiwan is in need of revamping. Teachers instruct on the use of commercial software instead of teaching CS concepts to students. The lack of more suitable teaching materials and limited classroom time are the main reasons for the…
Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S
2016-12-01
We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlining commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly taking into consideration advances in multi-resolution analysis and model based fractional order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided and the importance of preserving textural information is highlighted. Feature extraction and classification methods taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions are presented. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
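A generic sketch of the statistical-processing chain discussed above (dimensionality reduction followed by an SVM classifier) is given below using scikit-learn; it is not tied to the THz or DCE-MRI datasets of the review, and the random data and all parameters are placeholders.

```python
# Dimensionality reduction + SVM classification pipeline (illustrative data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per voxel/pixel time series (or THz pulse), y: tissue class labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=10),       # de-noising / feature compression
                    SVC(kernel="rbf", C=1.0))   # classification
print(cross_val_score(clf, X, y, cv=5).mean())
```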
Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.
Ganasala, Padma; Kumar, Vinod
2016-02-01
Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing the critical information of the source images required for better localization and definition of different organs and lesions. In state-of-the-art image fusion methods based on the nonsubsampled shearlet transform (NSST) and the pulse-coupled neural network (PCNN), authors have used the normalized coefficient value to motivate the PCNN processing of both low-frequency (LF) and high-frequency (HF) sub-bands. This makes the fused image blurred and decreases its contrast. The main objective of this work is to design an image fusion method that gives a fused image with better contrast and more detail information, suitable for clinical use. We propose a novel image fusion method utilizing a feature-motivated adaptive PCNN in the NSST domain for the fusion of anatomical images. The basic PCNN model is simplified, and an adaptive linking strength is used. Different features are used to motivate the PCNN processing of the LF and HF sub-bands. The proposed method is extended to the fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of the experimental results proved that the proposed method provides a satisfactory fusion outcome compared to other image fusion methods.
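To make the role of the PCNN concrete, the sketch below runs a generic simplified PCNN on a sub-band (feeding equal to the external stimulus, linking from neighbouring firings, and an exponentially decaying threshold); the adaptive linking strength and the specific motivating features of the paper are not reproduced, and all constants are placeholders.

```python
# Generic simplified PCNN; the firing count per pixel can serve as a fusion weight.
import numpy as np
from scipy.ndimage import convolve

def simplified_pcnn(stimulus, beta=0.2, alpha=0.7, v_theta=20.0, n_iter=100):
    s = np.asarray(stimulus, dtype=float)        # normalized sub-band coefficients
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])              # linking weights
    y = np.zeros_like(s)
    theta = np.ones_like(s)
    fire_count = np.zeros_like(s)
    for _ in range(n_iter):
        link = convolve(y, w, mode="constant")   # L: linking input from neighbours
        u = s * (1.0 + beta * link)              # U: internal activity
        y = (u > theta).astype(float)            # Y: firing map
        theta = np.exp(-alpha) * theta + v_theta * y   # threshold decay and reset
        fire_count += y
    return fire_count

# In a fusion rule, the sub-band coefficient with the larger firing count at a
# given location would typically be selected for the fused image.
```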
NASA Technical Reports Server (NTRS)
Campbell, W. J.; Goldberg, M.
1982-01-01
NASA's Eastern Regional Remote Sensing Applications Center (ERRSAC) has recognized the need to accommodate spatial analysis techniques in its remote sensing technology transfer program. A computerized Geographic Information System to incorporate remotely sensed data, specifically Landsat, with other relevant data was considered a realistic approach to address a given resource problem. Questions arose concerning the selection of a suitable available software system to demonstrate, train, and undertake demonstration projects with ERRSAC's user community. The very specific requirements for such a system are discussed. The solution found involved the addition of geographic information processing functions to the Interactive Digital Image Manipulation System (IDIMS). Details regarding the functions of the new integrated system are examined along with the characteristics of the software.
Buried Man-made Structure Imaging using 2-D Resistivity Inversion
NASA Astrophysics Data System (ADS)
Anderson Bery, Andy; Nordiana, M. M.; El Hidayah Ismail, Noer; Jinmin, M.; Nur Amalina, M. K. A.
2018-04-01
This study was carried out with the objective of determining a suitable resistivity inversion method for a buried man-made structure (bunker). The study was carried out in two stages. The first stage is the determination of a suitable array using a 2-D computerized modeling method. The chosen array is then used for the infield resistivity survey to determine the dimensions and location of the target. The 2-D resistivity inversion results showed that the robust inversion method is suitable for resolving the top and bottom parts of the buried bunker as the target. In addition, the dimensions of the buried bunker were successfully determined, with a height of 7 m and a length of 20 m. The target is located between -10 m and 10 m along the infield resistivity survey line. The 2-D resistivity inversion results obtained in this study showed that parameter selection is important for obtaining optimal results. These parameters are the array type, the survey geometry and the inversion method used in data processing.
Unsupervised tattoo segmentation combining bottom-up and top-down cues
NASA Astrophysics Data System (ADS)
Allen, Josef D.; Zhao, Nan; Yuan, Jiangbo; Liu, Xiuwen
2011-06-01
Tattoo segmentation is challenging due to the complexity and large variance of tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the remaining skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is thus transformed into a figure-ground segmentation. We have applied our segmentation algorithm to a tattoo dataset, and the results show that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.
Single Photon Counting Large Format Imaging Sensors with High Spatial and Temporal Resolution
NASA Astrophysics Data System (ADS)
Siegmund, O. H. W.; Ertley, C.; Vallerga, J. V.; Cremer, T.; Craven, C. A.; Lyashenko, A.; Minot, M. J.
High time resolution astronomical and remote sensing applications have been addressed with microchannel plate based imaging, photon time tagging detector sealed tube schemes. These are being realized with the advent of cross strip readout techniques with high performance encoding electronics and atomic layer deposited (ALD) microchannel plate technologies. Sealed tube devices up to 20 cm square have now been successfully implemented with sub-nanosecond timing and imaging. The objective is to provide sensors with large areas (25 cm^2 to 400 cm^2), spatial resolutions of <20 μm FWHM and timing resolutions of <100 ps for dynamic imaging. New high efficiency photocathodes for the visible regime are discussed, which also allow response down to below 150 nm for UV sensing. Borosilicate MCPs are providing high performance, and when processed with ALD techniques are providing order of magnitude lifetime improvements and enhanced photocathode stability. New developments include UV/visible photocathodes, ALD MCPs, and high resolution cross strip anodes for 100 mm detectors. Tests with 50 mm format cross strip readouts suitable for Planacon devices show spatial resolutions better than 20 μm FWHM, with good image linearity while using low gain (~10^6). Current cross strip encoding electronics can accommodate event rates of >5 MHz and event timing accuracy of 100 ps. High-performance ASIC versions of these electronics are in development with better event rate, power and mass suitable for spaceflight instruments.
A low cost imaging displacement measurement system for spacecraft thermal vacuum testing
NASA Technical Reports Server (NTRS)
Dempsey, Brian
2006-01-01
A low cost imaging displacement technique suitable for use in thermal vacuum testing was built and tested during thermal vacuum testing of the space infrared telescope facility (SIRTF, later renamed Spitzer infrared telescope facility). The problem was to measure the relative displacement of different portions of the spacecraft due to thermal expansion or contraction. Standard displacement measuring instrumentation could not be used because of the widely varying temperatures on the spacecraft and for fear of invalidating the thermal vacuum testing. The imaging system was conceived, designed, purchased, and installed in approximately 2 months at very low cost. The system performed beyond expectations proving that sub millimeter displacements could be measured from over 2 meters away. Using commercial optics it was possible to make displacement measurements down to 10 μm. An automated image processing tool was used to process the data, which not only speeded up data reduction, but showed that velocities and accelerations could also be measured. Details of the design and capabilities of the system are discussed along with the results of the test on the observatory. Several images from the actual test are presented.
Direct-Solve Image-Based Wavefront Sensing
NASA Technical Reports Server (NTRS)
Lyon, Richard G.
2009-01-01
A method of wavefront sensing (more precisely characterized as a method of determining the deviation of a wavefront from a nominal figure) has been invented as an improved means of assessing the performance of an optical system as affected by such imperfections as misalignments, design errors, and fabrication errors. The method is implemented by software running on a single-processor computer that is connected, via a suitable interface, to the image sensor (typically, a charge-coupled device) in the system under test. The software collects a digitized single image from the image sensor. The image is displayed on a computer monitor. The software directly solves for the wavefront in a time interval of a fraction of a second. A picture of the wavefront is displayed. The solution process involves, among other things, fast Fourier transforms. It has been reported to the effect that some measure of the wavefront is decomposed into modes of the optical system under test, but it has not been reported whether this decomposition is postprocessing of the solution or part of the solution process.
Video-rate imaging of microcirculation with single-exposure oblique back-illumination microscopy
NASA Astrophysics Data System (ADS)
Ford, Tim N.; Mertz, Jerome
2013-06-01
Oblique back-illumination microscopy (OBM) is a new technique for simultaneous, independent measurements of phase gradients and absorption in thick scattering tissues based on widefield imaging. To date, OBM has been used with sequential camera exposures, which reduces temporal resolution, and can produce motion artifacts in dynamic samples. Here, a variation of OBM that allows single-exposure operation with wavelength multiplexing and image splitting with a Wollaston prism is introduced. Asymmetric anamorphic distortion induced by the prism is characterized and corrected in real time using a graphics-processing unit. To demonstrate the capacity of single-exposure OBM to perform artifact-free imaging of blood flow, video-rate movies of microcirculation in ovo in the chorioallantoic membrane of the developing chick are presented. Imaging is performed with a high-resolution rigid Hopkins lens suitable for endoscopy.
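A hedged sketch of the basic OBM arithmetic is given below: with two images taken under opposed oblique back-illumination, the normalized difference approximates the phase gradient along the illumination axis and the sum carries the absorption contrast. The wavelength-multiplexed splitting and the anamorphic-distortion correction of the single-exposure variant are not shown, and the normalization is a simplifying assumption.

```python
# Differential phase and absorption contrast from two opposed oblique-illumination images.
import numpy as np

def obm_contrast(img_a, img_b, eps=1e-6):
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    total = a + b
    phase_gradient = (a - b) / np.maximum(total, eps)   # differential phase contrast
    absorption = total / total.mean()                   # flat-field-normalized absorption
    return phase_gradient, absorption
```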
Utilizing broadband X-rays in a Bragg coherent X-ray diffraction imaging experiment.
Cha, Wonsuk; Liu, Wenjun; Harder, Ross; Xu, Ruqing; Fuoss, Paul H; Hruszkewycz, Stephan O
2016-09-01
A method is presented to simplify Bragg coherent X-ray diffraction imaging studies of complex heterogeneous crystalline materials with a two-stage screening/imaging process that utilizes polychromatic and monochromatic coherent X-rays and is compatible with in situ sample environments. Coherent white-beam diffraction is used to identify an individual crystal particle or grain that displays desired properties within a larger population. A three-dimensional reciprocal-space map suitable for diffraction imaging is then measured for the Bragg peak of interest using a monochromatic beam energy scan that requires no sample motion, thus simplifying in situ chamber design. This approach was demonstrated with Au nanoparticles and will enable, for example, individual grains in a polycrystalline material of specific orientation to be selected, then imaged in three dimensions while under load.
Landsat 8 Multispectral and Pansharpened Imagery Processing on the Study of Civil Engineering Issues
NASA Astrophysics Data System (ADS)
Lazaridou, M. A.; Karagianni, A. Ch.
2016-06-01
Scientific and professional interests of civil engineering mainly include structures, hydraulics, geotechnical engineering, environment, and transportation issues. Topics in this context may concern urban environment issues, urban planning, hydrological modelling, hazard studies and road construction. Land cover information contributes significantly to the study of the above subjects. It can be acquired effectively by visual interpretation of satellite imagery, after applying enhancement routines, or by image classification. The Landsat Data Continuity Mission (LDCM - Landsat 8) is the latest satellite in the Landsat series, launched in February 2013. Landsat 8 medium-spatial-resolution multispectral imagery is of particular interest for extracting land cover because of its fine spectral resolution, its 12-bit radiometric quantization, the capability of merging the 15 m panchromatic band with the 30 m multispectral bands, and the free data policy. In this paper, Landsat 8 multispectral and panchromatic imagery covering the surroundings of a lake in north-western Greece is used. Land cover information is extracted using suitable digital image processing software. The rich spectral content of the multispectral image is combined with the high spatial resolution of the panchromatic image through image fusion (pansharpening), facilitating visual image interpretation for delineating land cover. Further processing concerns supervised image classification: the pansharpened image is classified first, followed by the multispectral image, and corresponding comparative considerations are presented.
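As an illustration of the fusion step described above, the sketch below applies a Brovey-type pan-sharpening to a multispectral stack and a panchromatic band; this is one common scheme chosen for brevity, not necessarily the algorithm implemented in the software used in the paper.

```python
# Brovey-type pan-sharpening sketch: the 30 m multispectral bands are
# resampled to the 15 m panchromatic grid and each band is rescaled by
# pan / intensity.
import numpy as np
import cv2

def brovey_pansharpen(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """ms: (bands, h, w) multispectral stack; pan: (H, W) panchromatic band,
    with H = 2*h and W = 2*w for Landsat 8 (30 m vs 15 m pixels)."""
    H, W = pan.shape
    up = np.stack([cv2.resize(b.astype(np.float32), (W, H),
                              interpolation=cv2.INTER_CUBIC) for b in ms])
    intensity = up.mean(axis=0) + 1e-6          # simple intensity estimate
    return up * (pan.astype(np.float32) / intensity)  # band-wise rescaling
```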
NASA Astrophysics Data System (ADS)
Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa K.; Miura, Masahiro; Yasuno, Yoshiaki
2017-02-01
Local statistics are widely utilized for quantification and image processing of OCT. For example, the local mean is used to reduce speckle, and the local variation of the polarization state (degree of polarization uniformity, DOPU) is used to visualize melanin. Conventionally, these statistics are calculated in a rectangular kernel whose size is uniform over the image. However, the fixed size and shape of the kernel result in a trade-off between image sharpness and statistical accuracy. A superpixel is a cluster of pixels generated by grouping image pixels based on spatial proximity and similarity of signal values. Superpixels have varying sizes and flexible shapes that preserve tissue structure. Here we demonstrate a new superpixel method tailored for multifunctional Jones matrix OCT (JM-OCT). This method forms superpixels by clustering image pixels in a six-dimensional (6-D) feature space (two spatial dimensions and four dimensions of optical features). All image pixels are clustered based on their spatial proximity and optical feature similarity. The optical features are scattering, OCT-A, birefringence and DOPU. The method is applied to retinal OCT. The generated superpixels preserve tissue structures such as retinal layers, sclera, vessels, and the retinal pigment epithelium. Hence, a superpixel can be utilized as a local statistics kernel that is more suitable than a uniform rectangular kernel. The superpixelized image can also be used for further image processing and analysis; since it reduces the number of pixels to be analyzed, it reduces the computational cost of such processing.
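The clustering idea can be sketched as follows. This is a simplified stand-in for the authors' tailored superpixel algorithm: pixels are clustered in the same 6-D space (two spatial coordinates plus scattering, OCT-A, birefringence and DOPU), but with plain k-means rather than a SLIC-style local search; the function name and weighting are illustrative.

```python
# Sketch of 6-D feature-space clustering for JM-OCT "superpixels".
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def jones_matrix_superpixels(scatt, octa, biref, dopu,
                             n_segments=500, spatial_weight=1.0):
    h, w = scatt.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.stack([spatial_weight * yy, spatial_weight * xx,
                      scatt, octa, biref, dopu], axis=-1).reshape(-1, 6)
    # normalize each feature so spatial and optical axes are comparable
    feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-9)
    labels = MiniBatchKMeans(n_clusters=n_segments, n_init=3,
                             random_state=0).fit_predict(feats)
    return labels.reshape(h, w)

# Per-segment statistics (e.g., a DOPU mean per superpixel) can then replace a
# fixed rectangular kernel:
# dopu_sp = np.array([dopu.ravel()[labels.ravel() == k].mean()
#                     for k in range(labels.max() + 1)])[labels]
```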
An interactive medical image segmentation framework using iterative refinement.
Kalshetti, Pratik; Bundele, Manas; Rahangdale, Parag; Jangra, Dinesh; Chattopadhyay, Chiranjoy; Harit, Gaurav; Elhence, Abhay
2017-04-01
Segmentation is often performed on medical images for identifying diseases in clinical evaluation, and hence has become a major research area. Conventional image segmentation techniques are unable to provide satisfactory results for medical images, as these images contain irregularities and need to be pre-processed before segmentation. To obtain a method well suited to medical image segmentation, we propose MIST (Medical Image Segmentation Tool), a two-stage algorithm. The first stage automatically generates a binary marker image of the region of interest using mathematical morphology. This marker serves as the mask image for the second stage, which uses GrabCut to yield an efficient segmented result. The obtained result can be further refined by user interaction through the proposed Graphical User Interface (GUI). Experimental results show that the proposed method is accurate and provides satisfactory segmentation with minimal user interaction on medical as well as natural images. Copyright © 2017 Elsevier Ltd. All rights reserved.
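A minimal sketch of the two-stage idea, assuming OpenCV as a stand-in for the published MIST implementation: an automatically generated morphological marker initializes GrabCut. The Otsu threshold and kernel size are illustrative choices, not the paper's parameters.

```python
# Morphological marker + mask-initialized GrabCut (assumes OpenCV >= 4).
import cv2
import numpy as np

def marker_then_grabcut(img_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    # Stage 1: rough binary marker via Otsu threshold + morphological opening
    _, marker = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    marker = cv2.morphologyEx(marker, cv2.MORPH_OPEN, kernel)

    # Stage 2: GrabCut initialized with the marker (sure/probable labels)
    mask = np.full(gray.shape, cv2.GC_PR_BGD, np.uint8)
    mask[marker > 0] = cv2.GC_PR_FGD
    mask[cv2.erode(marker, kernel, iterations=2) > 0] = cv2.GC_FGD
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                    255, 0).astype(np.uint8)
```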
Polarization-based compensation of astigmatism.
Chowdhury, Dola Roy; Bhattacharya, Kallol; Chakraborty, Ajay K; Ghosh, Raja
2004-02-01
One approach to aberration compensation of an imaging system is to introduce a suitable phase mask at the aperture plane of an imaging system. We utilize this principle for the compensation of astigmatism. A suitable polarization mask used on the aperture plane together with a polarizer-retarder combination at the input of the imaging system provides the compensating polarization-induced phase steps at different quadrants of the apertures masked by different polarizers. The aberrant phase can be considerably compensated by the proper choice of a polarization mask and suitable selection of the polarization parameters involved. The results presented here bear out our theoretical expectation.
Application of LANDSAT data to the study of urban development in Brasilia
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Deoliveira, M. D. L. N.; Foresti, C.; Niero, M.; Parreira, E. M. D. M. F.
1984-01-01
The urban growth of Brasilia within the last ten years is analyzed, with special emphasis on the use of orbital remote sensing data and automatic image processing. The urban spatial structure and its temporal changes were examined in a comprehensive and dynamic way using MSS-LANDSAT images from June 1973, 1978 and 1983. To aid data interpretation, a registration algorithm implemented in the Interactive Multispectral Image Analysis System (IMAGE-100) was used to overlay the multitemporal images. Suitable digital filters, combined with the image overlay, allowed rapid identification of areas of possible urban growth and guided the field work. The results permitted an evaluation of the urban growth of Brasilia, taking as reference the proposal for the construction of the city in the Pilot Plan elaborated by Lucio Costa.
NASA Astrophysics Data System (ADS)
Takehara, Hironari; Miyazawa, Kazuya; Noda, Toshihiko; Sasagawa, Kiyotaka; Tokuda, Takashi; Kim, Soo Hyeon; Iino, Ryota; Noji, Hiroyuki; Ohta, Jun
2014-01-01
A CMOS image sensor with stacked photodiodes was fabricated using 0.18 µm mixed signal CMOS process technology. Two photodiodes were stacked at the same position of each pixel of the CMOS image sensor. The stacked photodiodes consist of shallow high-concentration N-type layer (N+), P-type well (PW), deep N-type well (DNW), and P-type substrate (P-sub). PW and P-sub were shorted to ground. By monitoring the voltage of N+ and DNW individually, we can observe two monochromatic colors simultaneously without using any color filters. The CMOS image sensor is suitable for fluorescence imaging, especially contact imaging such as a lensless observation system of digital enzyme-linked immunosorbent assay (ELISA). Since the fluorescence increases with time in digital ELISA, it is possible to observe fluorescence accurately by calculating the difference from the initial relation between the pixel values for both photodiodes.
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale based fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the thus obtained fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focusing areas and then a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-11-26
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale based fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the thus obtained fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focusing areas and then a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs.
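The Sum-Modified-Laplacian focus measure that drives these fusion decisions can be sketched compactly; the window size and step below are illustrative choices, not the paper's parameters.

```python
# Sum-Modified-Laplacian (SML) focus measure.
import numpy as np
from scipy.ndimage import uniform_filter

def sml(img: np.ndarray, step: int = 1, window: int = 5) -> np.ndarray:
    i = img.astype(np.float64)
    # modified Laplacian: absolute second differences in x and y
    ml = (np.abs(2 * i - np.roll(i, step, axis=1) - np.roll(i, -step, axis=1))
          + np.abs(2 * i - np.roll(i, step, axis=0) - np.roll(i, -step, axis=0)))
    # summing the modified Laplacian over a local window gives the SML map
    return uniform_filter(ml, size=window) * window * window

# In a two-image multi-focus setting, the pixel (or coefficient) from the
# source with the larger SML response would be kept:
# fused = np.where(sml(a) >= sml(b), a, b)
```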
The Study of Residential Areas Extraction Based on GF-3 Texture Image Segmentation
NASA Astrophysics Data System (ADS)
Shao, G.; Luo, H.; Tao, X.; Ling, Z.; Huang, Y.
2018-04-01
The study uses standard stripmap, dual-polarization SAR images from GF-3 as the basic data. Processes and methods for residential area extraction based on texture segmentation of GF-3 images are compared and analyzed. GF-3 image processing includes radiometric calibration, complex data conversion, multi-look processing and image filtering; a suitability analysis of different filtering methods showed that the Kuan filter is efficient for extracting residential areas. We then calculated and analyzed texture feature vectors using the GLCM (Gray Level Co-occurrence Matrix), considering the moving window size, step size and angle; a window size of 11 x 11, a step of 1 and an angle of 0° proved effective and optimal for residential area extraction. Using the FNEA (Fractal Net Evolution Approach), we segmented the GLCM texture images and extracted the residential areas by threshold setting. The extraction result was verified and assessed with a confusion matrix: the overall accuracy is 0.897 and the kappa coefficient is 0.881. We also extracted residential areas by SVM classification of the GF-3 images; its overall accuracy is 0.09 lower than that of the texture segmentation approach. We conclude that residential area extraction based on multi-scale segmentation of GF-3 SAR texture images is simple and highly accurate. Because it is difficult to obtain multispectral remote sensing imagery in southern China, which is cloudy and rainy throughout the year, this work has a certain reference value.
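To make the texture step concrete, the sketch below computes GLCM features for one 11 x 11 window with the step (1) and angle (0°) reported above, assuming scikit-image ≥ 0.19 (where the functions are spelled graycomatrix/graycoprops); sliding the window over the despeckled scene yields the texture feature images that are subsequently segmented.

```python
# GLCM texture features for one despeckled SAR amplitude window.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window: np.ndarray, levels: int = 32) -> dict:
    """window: an 11x11 uint8 patch of the filtered SAR amplitude image,
    quantized to `levels` gray levels before co-occurrence counting."""
    q = (window.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0.0],
                        levels=levels, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")}
```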
A Minimal Optical Trapping and Imaging Microscopy System
Hernández Candia, Carmen Noemí; Tafoya Martínez, Sara; Gutiérrez-Medina, Braulio
2013-01-01
We report the construction and testing of a simple and versatile optical trapping apparatus, suitable for visualizing individual microtubules (∼25 nm in diameter) and performing single-molecule studies, using a minimal set of components. This design is based on a conventional, inverted microscope, operating under plain bright field illumination. A single laser beam enables standard optical trapping and the measurement of molecular displacements and forces, whereas digital image processing affords real-time sample visualization with reduced noise and enhanced contrast. We have tested our trapping and imaging instrument by measuring the persistence length of individual double-stranded DNA molecules, and by following the stepping of single kinesin motor proteins along clearly imaged microtubules. The approach presented here provides a straightforward alternative for studies of biomaterials and individual biomolecules. PMID:23451216
GENIE: a hybrid genetic algorithm for feature classification in multispectral images
NASA Astrophysics Data System (ADS)
Perkins, Simon J.; Theiler, James P.; Brumby, Steven P.; Harvey, Neal R.; Porter, Reid B.; Szymanski, John J.; Bloch, Jeffrey J.
2000-10-01
We consider the problem of pixel-by-pixel classification of a multispectral image using supervised learning. Conventional supervised classification techniques, such as maximum likelihood classification, and less conventional ones, such as neural networks, typically base such classifications solely on the spectral components of each pixel. It is easy to see why: the color of a pixel provides a nice, bounded, fixed-dimensional space in which these classifiers work well. It is often the case, however, that spectral information alone is not sufficient to correctly classify a pixel. Maybe spatial neighborhood information is required as well, or maybe the raw spectral components do not themselves make for easy classification, but some arithmetic combination of them would. In either of these cases we face the problem of selecting suitable spatial, spectral or spatio-spectral features that allow the classifier to do its job well. The number of all possible such features is extremely large; how can we select a suitable subset? We have developed GENIE, a hybrid learning system that combines a genetic algorithm, which searches a space of image processing operations for a set that can produce suitable feature planes, with a more conventional classifier that uses those feature planes to output a final classification. In this paper we show that the use of a hybrid GA provides significant advantages over using either a GA alone or conventional classification methods alone. We present results using high-resolution IKONOS data, looking for regions of burned forest and for roads.
Finding an Optimal Thermo-Mechanical Processing Scheme for a Gum-Type Ti-Nb-Zr-Fe-O Alloy
NASA Astrophysics Data System (ADS)
Nocivin, Anna; Cojocaru, Vasile Danut; Raducanu, Doina; Cinca, Ion; Angelescu, Maria Lucia; Dan, Ioan; Serban, Nicolae; Cojocaru, Mirela
2017-09-01
A gum-type alloy was subjected to a thermo-mechanical processing scheme to establish a suitable route for obtaining superior structural and behavioural characteristics. Three processes were applied: a homogenization treatment, a cold-rolling process and a solution treatment at three heating temperatures: 1073 K (800 °C), 1173 K (900 °C) and 1273 K (1000 °C). The results of all three processes were analyzed using X-ray diffraction and scanning electron microscopy imaging to establish and compare the structural modifications. The behavioural characterization was completed with micro-hardness and tensile strength tests. The optimal results were obtained for the solution treatment at 1073 K.
Enhancing scattering images for orientation recovery with diffusion map
Winter, Martin; Saalmann, Ulf; Rost, Jan M.
2016-02-12
We explore the possibility for orientation recovery in single-molecule coherent diffractive imaging with diffusion map. This algorithm approximates the Laplace-Beltrami operator, which we diagonalize with a metric that corresponds to the mapping of Euler angles onto scattering images. While suitable for images of objects with specific properties we show why this approach fails for realistic molecules. Here, we introduce a modification of the form factor in the scattering images which facilitates the orientation recovery and should be suitable for all recovery algorithms based on the distance of individual images. (C) 2016 Optical Society of America
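A generic diffusion-map construction (not the paper's specific metric) can be sketched in a few lines: pairwise distances between scattering images define a Gaussian kernel, the row-normalized kernel is diagonalized, and the leading non-trivial eigenvectors give the embedding used for orientation recovery.

```python
# Generic diffusion-map embedding of a set of scattering images.
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_map(images: np.ndarray, eps: float, n_coords: int = 3):
    """images: (n, npix) array of flattened scattering images;
    eps: kernel bandwidth (problem-dependent)."""
    d2 = cdist(images, images, "sqeuclidean")
    k = np.exp(-d2 / eps)                       # Gaussian affinity kernel
    p = k / k.sum(axis=1, keepdims=True)        # Markov (diffusion) matrix
    vals, vecs = np.linalg.eig(p)
    order = np.argsort(-vals.real)              # eigenvalue 1 is the trivial mode
    return vecs.real[:, order[1:n_coords + 1]], vals.real[order]
```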
Fast processing of microscopic images using object-based extended depth of field.
Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Pannarut, Montri; Shaw, Philip J; Tongsima, Sissades
2016-12-22
Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie on different depths of field necessitating capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depths of field (OEDoF) to address this issue. The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast followed by identification of foreground pixels. A composite image is constructed using only these foreground pixels, which dramatically reduces the computational time. We used 250 images obtained from 45 specimens of confirmed malaria infections to test our proposed algorithm. The resulting composite images with all in-focus objects were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required four times less processing time. This work presents a modification of the extended depth of field approach for efficiently enhancing microscopic images. This selective object processing scheme used in OEDoF can significantly reduce the overall processing time while maintaining the clarity of important image features. The empirical results from parasite-infected red cell images revealed that our proposed method efficiently and effectively produced in-focus composite images. With the speed improvement of OEDoF, this proposed algorithm is suitable for processing large numbers of microscope images, e.g., as required for medical diagnosis.
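The core selective-merging idea can be sketched as follows; this is a simplified stand-in for the published OEDoF modules (Otsu foreground detection and a Laplacian focus measure replace the paper's colour-conversion and contrast steps, and dark objects on a bright background are assumed).

```python
# Object-restricted extended depth of field: merge only foreground pixels.
import numpy as np
import cv2

def object_based_edof(stack: list) -> np.ndarray:
    """stack: list of BGR images of the same scene at different focal planes."""
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) for im in stack]
    # foreground = pixels darker than the Otsu threshold in at least one plane
    fg = np.zeros(grays[0].shape, bool)
    for g in grays:
        t, _ = cv2.threshold(g, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        fg |= g < t
    # per-pixel focus measure: absolute Laplacian response in each plane
    focus = np.stack([np.abs(cv2.Laplacian(g, cv2.CV_64F, ksize=3))
                      for g in grays])
    best = np.argmax(focus, axis=0)
    out = stack[0].copy()
    ys, xs = np.nonzero(fg)                 # only foreground pixels are merged
    out[ys, xs] = np.stack(stack)[best[ys, xs], ys, xs]
    return out
```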
Heterobimetallic Complexes for Theranostic Applications.
Fernández-Moreira, Vanesa; Gimeno, M Concepción
2018-03-07
The design of more efficient anticancer drugs requires a deeper understanding of their biodistribution and mechanism of action. Cell imaging agents could help to gain insight into biological processes and, consequently, the best strategy for attaining suitable scaffolds in which both biological and imaging properties are maximized. A new concept arises in this field that is the combination of two metal fragments as collaborative partners to provide the precise emissive properties to visualize the cell as well as the optimum cytotoxic activity to build more potent and selective chemotherapeutic agents. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Detecting brain tumor in pathological slides using hyperspectral imaging
Ortega, Samuel; Fabelo, Himar; Camacho, Rafael; de la Luz Plaza, María; Callicó, Gustavo M.; Sarmiento, Roberto
2018-01-01
Hyperspectral imaging (HSI) is an emerging technology for medical diagnosis. This research work presents a proof-of-concept on the use of HSI data to automatically detect human brain tumor tissue in pathological slides. The samples, consisting of hyperspectral cubes collected from 400 nm to 1000 nm, were acquired from ten different patients diagnosed with high-grade glioma. Based on the diagnosis provided by pathologists, a spectral library of normal and tumor tissues was created and processed using three different supervised classification algorithms. Results prove that HSI is a suitable technique to automatically detect high-grade tumors from pathological slides. PMID:29552415
Detecting brain tumor in pathological slides using hyperspectral imaging.
Ortega, Samuel; Fabelo, Himar; Camacho, Rafael; de la Luz Plaza, María; Callicó, Gustavo M; Sarmiento, Roberto
2018-02-01
Hyperspectral imaging (HSI) is an emerging technology for medical diagnosis. This research work presents a proof-of-concept on the use of HSI data to automatically detect human brain tumor tissue in pathological slides. The samples, consisting of hyperspectral cubes collected from 400 nm to 1000 nm, were acquired from ten different patients diagnosed with high-grade glioma. Based on the diagnosis provided by pathologists, a spectral library of normal and tumor tissues was created and processed using three different supervised classification algorithms. Results prove that HSI is a suitable technique to automatically detect high-grade tumors from pathological slides.
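The classification step can be illustrated generically; the paper evaluated three supervised classifiers, and the sketch below shows one plausible instance (an RBF-kernel SVM on labelled per-pixel spectra) rather than the authors' exact configuration.

```python
# Supervised classification of a labelled spectral library.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_spectral_classifier(spectra: np.ndarray, labels: np.ndarray):
    """spectra: (n_pixels, n_bands) reflectance vectors from the 400-1000 nm
    cubes; labels: pathologist-provided normal/tumor classes."""
    x_tr, x_te, y_tr, y_te = train_test_split(
        spectra, labels, test_size=0.3, stratify=labels, random_state=0)
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(x_tr, y_tr)
    return clf, clf.score(x_te, y_te)   # held-out accuracy
```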
NASA Technical Reports Server (NTRS)
Perry, Charleen; Driessen, Cornelius; Pasian, Fabio
1989-01-01
The Uniform Low Dispersion Archive (ULDA) is a software system which, in one sitting, allows one to obtain copies on one's personal computer of those International Ultraviolet Explorer (IUE) low dispersion spectra that are of interest to the user. Overviews and use instructions are given for two programs, one to search for and select spectra, and the other to convert those spectra into a form suitable for the user's image processing system.
Urological diagnosis using clinical PACS
NASA Astrophysics Data System (ADS)
Mills, Stephen F.; Spetz, Kevin S.; Dwyer, Samuel J., III
1995-05-01
Urological diagnosis using fluoroscopy images has traditionally been performed using radiographic films. Images are generally acquired in conjunction with the application of a contrast agent, processed to create analog films, and inspected to ensure satisfactory image quality prior to being provided to a radiologist for reading. In the case of errors the entire process must be repeated. In addition, the radiologist must then often go to a particular reading room, possibly in a remote part of the healthcare facility, to read the images. The integration of digital fluoroscopy modalities with clinical PACS has the potential to significantly improve the urological diagnosis process by providing high-speed access to images at a variety of locations within a healthcare facility without costly film processing. The PACS additionally provides a cost-effective and reliable means of long-term storage and allows several medical users to simultaneously view the same images at different locations. The installation of a digital data interface between the existing clinically operational PACS at the University of Virginia Health Sciences Center and a digital urology fluoroscope is described. Preliminary user interviews that have been conducted to determine the clinical effectiveness of PACS workstations for urological diagnosis are discussed. The specific suitability of the workstation medium is discussed, as are overall advantages and disadvantages of the hardcopy and softcopy media in terms of efficiency, timeliness and cost. Throughput metrics and some specific parameters of gray-scale viewing stations and the expected system impacts resulting from the integration of a urology fluoroscope with PACS are also discussed.
NASA Astrophysics Data System (ADS)
Bachche, Shivaji; Oka, Koichi
2013-06-01
This paper presents a comparative study of various color space models to determine the most suitable one for detecting green sweet peppers. Images were captured with CCD and infrared cameras and processed with Halcon image processing software. An LED ring around the camera neck was used as artificial lighting to enhance the feature parameters. For the color images, the CIELab, YIQ, YUV, HSI and HSV color spaces were evaluated, whereas the infrared images were processed in grayscale. For the color images, the HSV color space was the most significant, giving the highest percentage of green sweet pepper detection, followed by HSI, as both provide information in terms of hue/lightness/chroma or hue/lightness/saturation, which is often more relevant for discriminating fruit in an image at a specific threshold value. Overlapped fruits, or fruits covered by leaves, can be detected better with the HSV color space, as the reflection from fruits produced a higher histogram response than the reflection from leaves. The IR 80 optical filter failed to distinguish fruits in the images because the filter blocks useful feature information. Computation of the 3D coordinates of the recognized green sweet peppers was also conducted, in which the Halcon software provided the location and orientation of the fruits accurately. The depth accuracy along the Z axis was examined; a camera-to-fruit distance of 500 to 600 mm was found suitable for computing depth precisely when the distance between the two cameras was maintained at 100 mm.
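A minimal HSV-thresholding sketch of this kind of green-fruit masking is shown below; OpenCV is substituted for Halcon, and the hue/saturation/value bounds are illustrative rather than the thresholds used in the study.

```python
# Green-fruit mask via HSV thresholding and morphological clean-up.
import cv2
import numpy as np

def green_pepper_mask(img_bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    # OpenCV hue range is 0-179; roughly 35-85 covers green tones
    lower = np.array([35, 60, 40], np.uint8)
    upper = np.array([85, 255, 255], np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle
```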
Bode, Stefan; Murawski, Carsten; Laham, Simon M.
2018-01-01
A major obstacle for the design of rigorous, reproducible studies in moral psychology is the lack of suitable stimulus sets. Here, we present the Socio-Moral Image Database (SMID), the largest standardized moral stimulus set assembled to date, containing 2,941 freely available photographic images, representing a wide range of morally (and affectively) positive, negative and neutral content. The SMID was validated with over 820,525 individual judgments from 2,716 participants, with normative ratings currently available for all images on affective valence and arousal, moral wrongness, and relevance to each of the five moral values posited by Moral Foundations Theory. We present a thorough analysis of the SMID regarding (1) inter-rater consensus, (2) rating precision, and (3) breadth and variability of moral content. Additionally, we provide recommendations for use aimed at efficient study design and reproducibility, and outline planned extensions to the database. We anticipate that the SMID will serve as a useful resource for psychological, neuroscientific and computational (e.g., natural language processing or computer vision) investigations of social, moral and affective processes. The SMID images, along with associated normative data and additional resources are available at https://osf.io/2rqad/. PMID:29364985
Data processing from lobster eye type optics
NASA Astrophysics Data System (ADS)
Nentvich, Ondrej; Stehlikova, Veronika; Urban, Martin; Hudec, Rene; Sieger, Ladislav
2017-05-01
Wolter I optics are commonly used for imaging in the X-ray spectrum. This system uses two reflections; at higher energies it is not very efficient, but it has a very good optical resolution. Lobster eye optics are another type that also uses two reflections to focus rays, in Schmidt's or Angel's arrangement, and they can also be used as two independent one-dimensional optics. This paper describes the advantages of one-dimensional and two-dimensional lobster eye optics in Schmidt's arrangement and the associated data processing, namely finding the number of sources in a wide field of view. Two-dimensional (2D) optics are suitable for detecting the number of point X-ray sources and their magnitudes, but long exposures are necessary because a 2D system has much lower transmissivity than one-dimensional (1D) optics due to the double reflection. Not only for this reason, two 1D optics are better suited to sources of lower magnitude. In this case, additional image processing is necessary to achieve a 2D image. This article describes an approach to image reconstruction and the advantages of two 1D optics without significant losses of transmissivity.
A new method for mapping multidimensional data to lower dimensions
NASA Technical Reports Server (NTRS)
Gowda, K. C.
1983-01-01
A multispectral mapping method is proposed which is based on the new concept of BEND (Bidimensional Effective Normalised Difference). The method, which involves taking one sample point at a time and finding the interrelationships between its features, is found very economical from the point of view of storage and processing time. It has good dimensionality reduction and clustering properties, and is highly suitable for computer analysis of large amounts of data. The transformed values obtained by this procedure are suitable for either a planar 2-space mapping of geological sample points or for making grayscale and color images of geo-terrains. A few examples are given to justify the efficacy of the proposed procedure.
Wakes from submerged obstacles in an open channel flow
NASA Astrophysics Data System (ADS)
Smith, Geoffrey B.; Marmorino, George; Dong, Charles; Miller, W. D.; Mied, Richard
2015-11-01
Wakes from several submerged obstacles are examined via airborne remote sensing. The primary focus will be bathymetric features in the tidal Potomac river south of Washington, DC, but others may be included as well. In the Potomac the water depth is nominally 10 m with an obstacle height of 8 m, or 80% of the depth. Infrared imagery of the water surface reveals thermal structure suitable both for interpretation of the coherent structures and for estimating surface currents. A novel image processing technique is used to generate two independent scenes with a known time offset from a single overpass of the infrared imagery, suitable for velocity estimation. Color imagery of the suspended sediment also shows suitable texture. Both the 'mountain wave' regime and a traditional turbulent wake are observed, depending on flow conditions. Results are validated with in-situ ADCP transects. A computational model is used to further interpret the results.
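One way to realize the velocity estimation described above is patch-wise phase correlation between the two time-offset scenes; the sketch below assumes co-registered floating-point frames with a known ground sample distance and time offset, and is not the authors' actual processing chain.

```python
# Patch-wise surface-current estimate from two frames separated by dt seconds.
import cv2
import numpy as np

def patch_velocity(frame1, frame2, dt_s, gsd_m, patch=64):
    """frame1, frame2: co-registered single-channel IR images (float);
    gsd_m: metres per pixel. Returns (u, v) velocity fields in m/s."""
    h, w = frame1.shape
    u = np.zeros((h // patch, w // patch))
    v = np.zeros_like(u)
    for i in range(u.shape[0]):
        for j in range(u.shape[1]):
            a = frame1[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            b = frame2[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            # phase correlation returns the sub-pixel (dx, dy) shift
            (dx, dy), _ = cv2.phaseCorrelate(a.astype(np.float64),
                                             b.astype(np.float64))
            u[i, j] = dx * gsd_m / dt_s
            v[i, j] = dy * gsd_m / dt_s
    return u, v
```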
Trautz, Florian; Dreßler, Jan; Stassart, Ruth; Müller, Wolf; Ondruschka, Benjamin
2018-01-03
Immunohistochemistry (IHC) has become an integral part of forensic histopathology over the last decades. However, the underlying methods for IHC vary greatly between institutions, creating a lack of comparability. The aim of this study was to assess the optimal approach for different technical aspects of IHC, in order to improve and standardize the procedure. Therefore, qualitative results from manual and automatic IHC staining of brain samples were compared, as well as potential differences in the suitability of common IHC glass slides. Further, possibilities of image digitization and related issues were investigated. In our study, automatic staining showed more consistent staining results compared with manual staining procedures. Digitization and digital post-processing considerably facilitated direct analysis and analysis of reproducibility. No differences were found between the commercially available microscope glass slides regarding their suitability for IHC brain research, but a certain rate of tissue loss should be expected during the staining process.
Image Processing of Porous Silicon Microarray in Refractive Index Change Detection.
Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi
2017-06-08
A new method is proposed in this paper for extracting the dots from the reflected-light image of a porous silicon (PSi) microarray. The method consists of three parts: pretreatment, tilt correction and spot segmentation. First, based on the characteristics of the different components in HSV (Hue, Saturation, Value) space, a special pretreatment of the reflected-light image is proposed to obtain the contour edges of the array cells in the image. Second, using the geometric relationship of the target object between the initial external rectangle and the minimum bounding rectangle (MBR), a new tilt correction algorithm based on the MBR is proposed to adjust the image. Third, based on the specific requirements of reflected-light image segmentation, the array cells in the corrected image are segmented into dots that are as large as possible and equally spaced. Experimental results show that the pretreatment step effectively avoids the influence of complex backgrounds and completes the binarization of the image. The tilt correction algorithm has a short computation time, which makes it highly suitable for tilt correction of reflected-light images. The segmentation algorithm arranges the dots regularly and excludes the edges and bright spots. This method could be utilized for fast, accurate and automatic extraction of dots from the PSi microarray reflected-light image.
Image Processing of Porous Silicon Microarray in Refractive Index Change Detection
Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi
2017-01-01
A new method is proposed in this paper for extracting the dots from the reflected-light image of a porous silicon (PSi) microarray. The method consists of three parts: pretreatment, tilt correction and spot segmentation. First, based on the characteristics of the different components in HSV (Hue, Saturation, Value) space, a special pretreatment of the reflected-light image is proposed to obtain the contour edges of the array cells in the image. Second, using the geometric relationship of the target object between the initial external rectangle and the minimum bounding rectangle (MBR), a new tilt correction algorithm based on the MBR is proposed to adjust the image. Third, based on the specific requirements of reflected-light image segmentation, the array cells in the corrected image are segmented into dots that are as large as possible and equally spaced. Experimental results show that the pretreatment step effectively avoids the influence of complex backgrounds and completes the binarization of the image. The tilt correction algorithm has a short computation time, which makes it highly suitable for tilt correction of reflected-light images. The segmentation algorithm arranges the dots regularly and excludes the edges and bright spots. This method could be utilized for fast, accurate and automatic extraction of dots from the PSi microarray reflected-light image. PMID:28594383
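The MBR-based tilt correction can be illustrated with OpenCV primitives (an assumed re-implementation, not the authors' code): the angle of the minimum-area rectangle around the array contour defines the rotation that straightens the image.

```python
# Tilt correction from the minimum bounding rectangle (assumes OpenCV >= 4).
import cv2
import numpy as np

def mbr_tilt_correct(binary: np.ndarray, image: np.ndarray) -> np.ndarray:
    """binary: binarized array-cell mask from the pretreatment step;
    image: the reflected-light image to be rotated."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    pts = np.vstack([c.reshape(-1, 2) for c in contours])
    (cx, cy), (rw, rh), angle = cv2.minAreaRect(pts)
    if rw < rh:                 # heuristic for OpenCV's angle convention
        angle -= 90.0
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    return cv2.warpAffine(image, rot, (image.shape[1], image.shape[0]))
```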
Boix, Macarena; Cantó, Begoña
2013-04-01
Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, blood cells can be segmented with it. To that end, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate noise and prepare the image for suitable segmentation. In the wavelet denoising step we determine the best wavelet, i.e. the one that yields a segmentation with the largest area in the cell. We study different wavelet families and conclude that the db1 wavelet is the best; it can serve for later work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on a selection of blood cell images.
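A minimal sketch of the described pipeline, using PyWavelets and scikit-image in place of the authors' MATLAB code: db1 soft-thresholding followed by Otsu thresholding and morphological clean-up. The threshold rule and structuring-element sizes are illustrative assumptions.

```python
# Wavelet denoising (db1) followed by morphological segmentation.
import numpy as np
import pywt
from skimage import filters, morphology

def denoise_and_segment(img: np.ndarray) -> np.ndarray:
    coeffs = pywt.wavedec2(img.astype(np.float64), "db1", level=2)
    # universal threshold estimated from the finest diagonal detail band
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(img.size))
    coeffs = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode="soft")
                                  for d in lvl) for lvl in coeffs[1:]]
    den = pywt.waverec2(coeffs, "db1")[: img.shape[0], : img.shape[1]]
    # Otsu threshold plus morphological clean-up gives the cell mask
    # (assuming bright cells on a dark background; invert otherwise)
    mask = den > filters.threshold_otsu(den)
    mask = morphology.remove_small_objects(mask, min_size=64)
    return morphology.binary_closing(mask, morphology.disk(3))
```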
The Sheath Transport Observer for the Redistribution of Mass (STORM) Image
NASA Technical Reports Server (NTRS)
Kuntz, Kip; Collier, Michael; Sibeck, David G.; Porter, F. Scott; Carter, J. A.; Cravens, Thomas; Omidi, N.; Robertson, Ina; Sembay, S.; Snowden, Steven L.
2008-01-01
All of the solar wind energy that powers magnetospheric processes passes through the magnetosheath and magnetopause. Global images of the magnetosheath and magnetopause boundary layers will resolve longstanding controversy surrounding fundamental phenomena that occur at the magnetopause and provide information needed to improve operational space weather models. Recent developments showing that soft X-rays (0.15-1 keV) result from high charge state solar wind ions undergoing charge exchange recombination through collisions with exospheric neutral atoms has led to the realization that soft X-ray imaging can provide global maps of the high-density shocked solar wind within the magnetosheath and cusps, regions lying between the lower density solar wind and magnetosphere. We discuss an instrument concept called the Sheath Transport Observer for the Redistribution of Mass (STORM), an X-ray imager suitable for simultaneously imaging the dayside magnetosheath, the magnetopause boundary layers, and the cusps.
The Sheath Transport Observer for the Redistribution of Mass (STORM) Imager
NASA Technical Reports Server (NTRS)
Collier, Michael R.; Sibeck, David G.; Porter, F. Scott; Burch, J.; Carter, J. A.; Cravens, Thomas; Kuntz, Kip; Omidi, N.; Read, A.; Robertson, Ina;
2010-01-01
All of the solar wind energy that powers magnetospheric processes passes through the magnetosheath and magnetopause. Global images of the magnetosheath and magnetopause boundary layers will resolve longstanding controversies surrounding fundamental phenomena that occur at the magnetopause and provide information needed to improve operational space weather models. Recent developments showing that soft X-rays (0.15-1 keV) result from high charge state solar wind ions undergoing charge exchange recombination through collisions with exospheric neutral atoms has led to the realization that soft X-ray imaging can provide global maps of the high-density shocked solar wind within the magnetosheath and cusps, regions lying between the lower density solar wind and magnetosphere. We discuss an instrument concept called the Sheath Transport Observer for the Redistribution of Mass (STORM), an X-ray imager suitable for simultaneously imaging the dayside magnetosheath, the magnetopause boundary layers, and the cusps.
NASA Astrophysics Data System (ADS)
Lu, Dajiang; He, Wenqi; Liao, Meihua; Peng, Xiang
2017-02-01
A new method to eliminate the security risk of the well-known interference-based optical cryptosystem is proposed. In this method, which is suitable for security authentication application, two phase-only masks are separately placed at different distances from the output plane, where a certification image (public image) can be obtained. To further increase the security and flexibility of this authentication system, we employ one more validation image (secret image), which can be observed at another output plane, for confirming the identity of the user. Only if the two correct masks are properly settled at their positions one could obtain two significant images. Besides, even if the legal users exchange their masks (keys), the authentication process will fail and the authentication results will not reveal any information. Numerical simulations are performed to demonstrate the validity and security of the proposed method.
Ultrasound image edge detection based on a novel multiplicative gradient and Canny operator.
Zheng, Yinfei; Zhou, Yali; Zhou, Hao; Gong, Xiaohong
2015-07-01
To achieve fast and accurate segmentation of ultrasound images, a novel edge detection method for speckle-noised ultrasound images is proposed, based on the traditional Canny operator and a novel multiplicative gradient operator. The proposed technique combines a new multiplicative gradient operator of non-Newtonian type with the traditional Canny operator to generate the initial edge map, which is subsequently optimized by a following edge-tracing step. To verify the proposed method, we compared it with several other edge detection methods that have good robustness to noise, in experiments on simulated and in vivo medical ultrasound images. Experimental results showed that the proposed algorithm is fast enough for real-time processing, and the edge detection accuracy could reach 75% or more. Thus, the proposed method is well suited for fast and accurate edge detection of medical ultrasound images. © The Author(s) 2014.
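The paper's non-Newtonian multiplicative operator is not reproduced here, but the general flavour, a ratio-based gradient (robust to multiplicative speckle) fused with a standard Canny edge map, can be sketched as follows; the neighbourhood size and thresholds are illustrative.

```python
# Ratio (multiplicative) gradient combined with a Canny edge map.
import cv2
import numpy as np

def ratio_gradient_plus_canny(img: np.ndarray, k: int = 3) -> np.ndarray:
    """img: 8-bit speckle-noised ultrasound image."""
    f = img.astype(np.float64) + 1.0
    right, left = np.roll(f, -k, axis=1), np.roll(f, k, axis=1)
    down, up = np.roll(f, -k, axis=0), np.roll(f, k, axis=0)
    # ratio gradient: large where neighbouring intensities differ multiplicatively
    rg = np.maximum(np.maximum(right / left, left / right),
                    np.maximum(down / up, up / down))
    ratio_edges = rg > np.percentile(rg, 95)
    canny_edges = cv2.Canny(cv2.GaussianBlur(img, (5, 5), 0), 50, 150) > 0
    return (ratio_edges & canny_edges).astype(np.uint8) * 255
```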
Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y
2014-07-08
The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application - SpheroidSizer, which measures the major and minor axial length of the imaged 3D tumor spheroids automatically and accurately; calculates the volume of each individual 3D tumor spheroid; then outputs the results in two different forms in spreadsheets for easy manipulations in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy backgrounds that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and diverse quality images. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software is beneficial for 3D tumor spheroids to become a routine in vitro model for drug screens in industry and academia.
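The measurement step can be sketched with scikit-image region properties standing in for the Snakes-based contour: the major and minor axial lengths of the segmented spheroid give its volume under the common prolate-spheroid convention V = (π/6)·L·W²; whether this matches SpheroidSizer's exact volume formula is an assumption.

```python
# Major/minor axis lengths and spheroid volume from a binary segmentation.
import numpy as np
from skimage import measure

def spheroid_axes_and_volume(mask: np.ndarray, um_per_px: float):
    """mask: binary segmentation of one spheroid; returns (L, W, V) in
    micrometres and cubic micrometres."""
    props = max(measure.regionprops(measure.label(mask)),
                key=lambda p: p.area)          # keep the largest object
    L = props.major_axis_length * um_per_px
    W = props.minor_axis_length * um_per_px
    V = np.pi / 6.0 * L * W ** 2               # prolate-spheroid volume
    return L, W, V
```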
Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking †
Kiku, Daisuke; Okutomi, Masatoshi
2017-01-01
Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking. PMID:29194407
Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.
Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi
2017-12-01
Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.
Imaging workflow and calibration for CT-guided time-domain fluorescence tomography
Tichauer, Kenneth M.; Holt, Robert W.; El-Ghussein, Fadi; Zhu, Qun; Dehghani, Hamid; Leblond, Frederic; Pogue, Brian W.
2011-01-01
In this study, several key optimization steps are outlined for a non-contact, time-correlated single photon counting small animal optical tomography system, using simultaneous collection of both fluorescence and transmittance data. The system is presented for time-domain image reconstruction in vivo, illustrating the sensitivity from single photon counting and the calibration steps needed to accurately process the data. In particular, laser time- and amplitude-referencing, detector and filter calibrations, and collection of a suitable instrument response function are all presented in the context of time-domain fluorescence tomography and a fully automated workflow is described. Preliminary phantom time-domain reconstructed images demonstrate the fidelity of the workflow for fluorescence tomography based on signal from multiple time gates. PMID:22076264
Low Temperature Performance of High-Speed Neural Network Circuits
NASA Technical Reports Server (NTRS)
Duong, T.; Tran, M.; Daud, T.; Thakoor, A.
1995-01-01
Artificial neural networks, derived from their biological counterparts, offer a new and enabling computing paradigm specially suitable for such tasks as image and signal processing with feature classification/object recognition, global optimization, and adaptive control. When implemented in fully parallel electronic hardware, it offers orders of magnitude speed advantage. Basic building blocks of the new architecture are the processing elements called neurons implemented as nonlinear operational amplifiers with sigmoidal transfer function, interconnected through weighted connections called synapses implemented using circuitry for weight storage and multiply functions either in an analog, digital, or hybrid scheme.
Exploring an optimal wavelet-based filter for cryo-ET imaging.
Huang, Xinrui; Li, Sha; Gao, Song
2018-02-07
Cryo-electron tomography (cryo-ET) is one of the most advanced technologies for the in situ visualization of molecular machines by producing three-dimensional (3D) biological structures. However, cryo-ET imaging has two serious disadvantages-low dose and low image contrast-which result in high-resolution information being obscured by noise and image quality being degraded, and this causes errors in biological interpretation. The purpose of this research is to explore an optimal wavelet denoising technique to reduce noise in cryo-ET images. We perform tests using simulation data and design a filter using the optimum selected wavelet parameters (three-level decomposition, level-1 zeroed out, subband-dependent threshold, a soft-thresholding and spline-based discrete dyadic wavelet transform (DDWT)), which we call a modified wavelet shrinkage filter; this filter is suitable for noisy cryo-ET data. When testing using real cryo-ET experiment data, higher quality images and more accurate measures of a biological structure can be obtained with the modified wavelet shrinkage filter processing compared with conventional processing. Because the proposed method provides an inherent advantage when dealing with cryo-ET images, it can therefore extend the current state-of-the-art technology in assisting all aspects of cryo-ET studies: visualization, reconstruction, structural analysis, and interpretation.
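The described filter maps naturally onto a standard wavelet toolbox. The sketch below approximates it with PyWavelets, using a biorthogonal spline wavelet as a stand-in for the paper's spline-based discrete dyadic wavelet transform: three decomposition levels, the finest level zeroed, and a subband-dependent soft threshold on the remaining detail bands.

```python
# Approximation of the modified wavelet shrinkage filter for cryo-ET slices.
import numpy as np
import pywt

def modified_wavelet_shrinkage(img: np.ndarray, wavelet: str = "bior2.2"):
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=3)
    approx, details = coeffs[0], list(coeffs[1:])   # details[-1] = finest level
    details[-1] = tuple(np.zeros_like(d) for d in details[-1])  # zero level 1
    out = [approx]
    for lvl in details[:-1]:
        # subband-dependent universal threshold from each band's own statistics
        out.append(tuple(
            pywt.threshold(d,
                           (np.median(np.abs(d)) / 0.6745)
                           * np.sqrt(2.0 * np.log(d.size)),
                           mode="soft")
            for d in lvl))
    out.append(details[-1])
    rec = pywt.waverec2(out, wavelet)
    return rec[: img.shape[0], : img.shape[1]]
```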
System and method for optical fiber based image acquisition suitable for use in turbine engines
Baleine, Erwan; A V, Varun; Zombo, Paul J.; Varghese, Zubin
2017-05-16
A system and a method for image acquisition suitable for use in a turbine engine are disclosed. Light received from a field of view in an object plane is projected onto an image plane through an optical modulation device and is transferred through an image conduit to a sensor array. The sensor array generates a set of sampled image signals in a sensing basis based on light received from the image conduit. Finally, the sampled image signals are transformed from the sensing basis to a representation basis and a set of estimated image signals are generated therefrom. The estimated image signals are used for reconstructing an image and/or a motion-video of a region of interest within a turbine engine.
Li, Jian-fei; Li, Lin; Guo, Luo; Du, Shi-hong
2016-01-01
Urban landscape is spatially heterogeneous. Because urban constructive land and ecological land have different resistance values for expansion, a given land unit stimulates and promotes the expansion of ecological land with different intensity. To compare the promoting and hindering functions of the same land unit, we first compared the minimum cumulative resistance values of the promoting and hindering processes, and then looked for the balance between the two landscape processes under the same standard. Following the ecological principle of the minimum limiting factor, and taking the minimum cumulative resistance analysis of the two expansion processes as the evaluation method for urban land ecological suitability, this research took Zhuhai City as the study area and estimated urban ecological suitability by a relative evaluation method using remote sensing images, field surveys and statistical data. With the support of ArcGIS, five types of indicators (landscape type, ecological value, soil erosion sensitivity, geological disaster sensitivity and ecological function) were selected as input parameters of the minimum cumulative resistance model to compute urban ecological suitability. The results showed that the ecological suitability of the whole of Zhuhai City falls into five levels: constructive expansion prohibited zone (10.1%), constructive expansion restricted zone (32.9%), key construction zone (36.3%), priority development zone (2.3%) and basic cropland (18.4%). The ecological suitability of the central area of Zhuhai City falls into four levels: constructive expansion prohibited zone (11.6%), constructive expansion restricted zone (25.6%), key construction zone (52.4%) and priority development zone (10.4%). Finally, we put forward a sustainable development framework for Zhuhai City based on these conclusions: on one hand, the government should strictly control the development of the urban center area; on the other hand, secondary urban centers such as Junchang and Doumen need to improve their public infrastructure to relieve the imbalance between eastern and western development in Zhuhai City.
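The minimum-cumulative-resistance computation itself can be sketched generically (this is not the authors' exact parameterization): cumulative cost from source cells across a resistance raster, here via scikit-image's geometric minimum-cost-path solver; differencing the ecological and constructive cost surfaces then yields the suitability zoning.

```python
# Minimum cumulative resistance (MCR) surface from ecological source cells.
import numpy as np
from skimage.graph import MCP_Geometric

def minimum_cumulative_resistance(resistance: np.ndarray,
                                  source_mask: np.ndarray) -> np.ndarray:
    """resistance: per-cell resistance raster (strictly positive);
    source_mask: boolean raster of source cells (e.g., existing ecological
    or constructive land)."""
    mcp = MCP_Geometric(resistance.astype(np.float64))
    starts = list(zip(*np.nonzero(source_mask)))
    cum_costs, _ = mcp.find_costs(starts)
    return cum_costs   # lower values = easier expansion from the sources

# Suitability classes could then be obtained by differencing and binning the
# two cost surfaces, e.g.
# diff = mcr_eco - mcr_con
# zone = np.digitize(diff, bins=np.quantile(diff, [0.2, 0.4, 0.6, 0.8]))
```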
Improving the detection of cocoa bean fermentation-related changes using image fusion
NASA Astrophysics Data System (ADS)
Ochoa, Daniel; Criollo, Ronald; Liao, Wenzhi; Cevallos-Cevallos, Juan; Castro, Rodrigo; Bayona, Oswaldo
2017-05-01
Complex chemical processes occur during cocoa bean fermentation. To select well-fermented beans, experts take a sample of beans, cut them in half and visually check their color. Farmers often mix high- and low-quality beans, and chocolate properties are therefore difficult to control. In this paper, we explore how close-range hyperspectral (HS) data can be used to characterize the fermentation process of two types of cocoa beans (CCN51 and National). Our aim is to find spectral differences that allow bean classification. The main issue is extracting reliable spectral data, as openings resulting from the loss of water during fermentation can cover up to 40% of the bean surface. We exploit HS pan-sharpening techniques to increase the spatial resolution of HS images and filter out uneven surface regions, in particular the guided filter PCA approach, which has proved suitable for using high-resolution RGB data as the guide image. Our preliminary results show that this pre-processing step improves the separability of the classes corresponding to each fermentation stage compared to using the average spectrum of the bean surface.
Cameras and settings for optimal image capture from UAVs
NASA Astrophysics Data System (ADS)
Smith, Mike; O'Connor, James; James, Mike R.
2017-04-01
Aerial image capture has become very common within the geosciences due to the increasing affordability of low payload (<20 kg) Unmanned Aerial Vehicles (UAVs) for consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured from consumer grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion which can lead to experiments being difficult to accurately reproduce. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well exposed and suitable imagery are derived. This then leads to discussion of how to optimise the platform, camera, lens and imaging settings relevant to image quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and have these comparable with future studies. We recommend providing open access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.
Fast range estimation based on active range-gated imaging for coastal surveillance
NASA Astrophysics Data System (ADS)
Kong, Qingshan; Cao, Yinan; Wang, Xinwei; Tong, Youwan; Zhou, Yan; Liu, Yuliang
2012-11-01
Coastal surveillance is very important because it supports search and rescue, detection of illegal immigration, harbor security and similar tasks. Furthermore, range estimation is critical for precisely detecting a target. A range-gated laser imaging sensor is suitable for high-accuracy ranging, especially at night and without moonlight. Generally, before the target is detected, it is necessary to sweep the delay time until the target is captured. The sensor has two operating modes: a passive imaging mode and a gate-viewing mode. First, in the passive mode, scenes are captured only by the ICCD; once an object appears in the monitored area, its coarse range can be obtained from the imaging geometry and projective transform. Then, in the gate-viewing mode, applying microsecond laser pulses and a matching sensor gate width, the range of targets can be obtained from at least two consecutive images with trapezoid-shaped range-intensity profiles. This technique enables super-resolution depth mapping with a reduction in imaging data processing. Based on the first step, we can calculate a rough range value and quickly set the delay time at which the target is detected. This technique overcomes the depth resolution limitation of 3D active imaging and enables super-resolution depth mapping with reduced imaging data processing. By these two steps, we can quickly obtain the distance between the object and the sensor.
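The gate-timing arithmetic behind this two-step scheme is simple; the relations below are the generic range-gating formulas (slice centre R = c·τ/2, slice depth set by the gate and pulse widths), not parameters of the cited sensor.

```python
# Back-of-the-envelope gate timing for range-gated imaging.
C = 299_792_458.0  # speed of light, m/s

def gate_to_range(tau_s: float, gate_s: float, pulse_s: float):
    """tau_s: delay between laser emission and gate opening (s);
    gate_s: gate width (s); pulse_s: laser pulse width (s)."""
    r_center = C * tau_s / 2.0                     # centre of the imaged slice
    slice_depth = C * (gate_s + pulse_s) / 2.0     # depth of the imaged slice
    return r_center, slice_depth

# Example: a 1 microsecond delay with 100 ns gate and pulse widths images a
# slice centred about 150 m away and roughly 30 m deep.
print(gate_to_range(1e-6, 100e-9, 100e-9))   # approx. (149.9, 30.0)
```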
Spectral Analysis and Experimental Modeling of Ice Accretion Roughness
NASA Technical Reports Server (NTRS)
Orr, D. J.; Breuer, K. S.; Torres, B. E.; Hansman, R. J., Jr.
1996-01-01
A self-consistent scheme for relating wind tunnel ice accretion roughness to the resulting enhancement of heat transfer is described. First, a spectral technique for quantitative analysis of early ice roughness images is reviewed. The image processing scheme uses a spectral estimation technique (SET) which extracts physically descriptive parameters by comparing scan lines from the experimentally obtained accretion images to a prescribed test function. Analysis using this technique in both the streamwise and spanwise directions of data from the NASA Lewis Icing Research Tunnel (IRT) is presented. An experimental technique is then presented for constructing physical roughness models suitable for wind tunnel testing that match the SET parameters extracted from the IRT images. The icing castings and modeled roughness are tested for enhancement of boundary layer heat transfer using infrared techniques in a "dry" wind tunnel.
A Procedure for High Resolution Satellite Imagery Quality Assessment
Crespi, Mattia; De Vendictis, Laura
2009-01-01
Data products generated from High Resolution Satellite Imagery (HRSI) are routinely evaluated during the so-called in-orbit test period, in order to verify if their quality fits the desired features and, if necessary, to obtain the image correction parameters to be used at the ground processing center. Nevertheless, it is often useful to have tools to evaluate image quality also at the final user level. Image quality is defined by some parameters, such as the radiometric resolution and its accuracy, represented by the noise level, and the geometric resolution and sharpness, described by the Modulation Transfer Function (MTF). This paper proposes a procedure to evaluate these image quality parameters; the procedure was implemented in a suitable software and tested on high resolution imagery acquired by the QuickBird, WorldView-1 and Cartosat-1 satellites. PMID:22412312
Evolution of digital angiography systems.
Brigida, Raffaela; Misciasci, Teresa; Martarelli, Fabiola; Gangitano, Guido; Ottaviani, Pierfrancesco; Rollo, Massimo; Marano, Pasquale
2003-01-01
The innovations introduced by digital subtraction angiography in digital radiography are briefly illustrated, with a description of its components and functioning. The pros and cons of digital subtraction angiography are analyzed in light of present and future imaging technologies. In particular, among the advantages are: automatic exposure, digital image subtraction, digital post-processing, a high number of images per second, and possible changes in density and contrast. Among the disadvantages are: a small round field of view, geometric distortion at the image periphery, high sensitivity to patient movements, and not very high spatial resolution. At present, flat panel detectors represent the most suitable substitutes for digital subtraction angiography, with the introduction of novel solutions for those artifacts which for years have hindered its diagnostic validity. The concepts of temporal artifact and reset light, and possible future evolutions of this technology that may afford both diagnostic and radiation-protection advantages, are analyzed.
MISTICA: Minimum Spanning Tree-based Coarse Image Alignment for Microscopy Image Sequences
Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T.
2016-01-01
Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion that we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to re-order the images in shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate the error propagation. The anchor image is selected automatically making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries. PMID:26415193
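The re-ordering idea at the heart of the abstract can be illustrated with a minimal sketch: build a pairwise dissimilarity graph over frames, take its minimum spanning tree, and register each frame to its tree predecessor. This is not the MISTICA algorithm itself; the normalized-correlation dissimilarity and the root-selection rule are assumptions for illustration.

```python
"""Minimal MST-based ordering sketch for coarse sequence alignment."""
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

def mst_order(frames):
    n = len(frames)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            ncc = np.corrcoef(frames[i].ravel(), frames[j].ravel())[0, 1]
            d[i, j] = d[j, i] = max(1.0 - ncc, 1e-6)   # dissimilarity edge weight
    tree = minimum_spanning_tree(d)
    root = int(np.argmin(d.sum(axis=1)))               # crude automatic anchor choice
    order, pred = breadth_first_order(tree, root, directed=False,
                                      return_predecessors=True)
    # Register each frame to its predecessor in the tree, starting from `root`.
    return order, pred

# frames = [np.random.rand(64, 64) for _ in range(5)]
# order, pred = mst_order(frames)
```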
Live Cell Imaging and Measurements of Molecular Dynamics
Frigault, M.; Lacoste, J.; Swift, J.; Brown, C.
2010-01-01
Live cell microscopy is becoming widespread across all fields of the life sciences, as well as many areas of the physical sciences. In order to accurately obtain live cell microscopy data, the live specimens must be properly maintained on the imaging platform. In addition, the fluorescence light path must be optimized for efficient light transmission in order to reduce the intensity of excitation light impacting the living sample. With low incident light intensities, the processes under study should not be altered by phototoxic effects, allowing long-term visualization of viable living samples. Aspects of maintaining a suitable environment for the living sample, minimizing incident light and maximizing detection efficiency will be presented for various fluorescence-based live cell instruments. Raster Image Correlation Spectroscopy (RICS) is a technique that uses the intensity fluctuations within laser scanning confocal images, together with the well-characterized scanning dynamics of the laser beam, to extract the dynamics, concentrations and clustering of fluorescent molecules within the cell. In addition, two-color cross-correlation RICS can be used to determine protein-protein interactions in living cells without the many technical difficulties encountered in FRET-based measurements. RICS is an ideal live cell technique for measuring cellular dynamics because the potentially damaging high-intensity laser bursts required for photobleaching recovery measurements are not needed; rather, low laser powers suitable for imaging can be used. The RICS theory will be presented along with examples of live cell applications.
Yamamoto, Yuta; Iriyama, Yasutoshi; Muto, Shunsuke
2016-04-01
In this article, we propose a smart image-analysis method suitable for extracting target features with hierarchical dimension from original data. The method was applied to three-dimensional volume data of an all-solid lithium-ion battery, obtained by an automated sequential sample-milling and imaging process using a focused ion beam/scanning electron microscope, to investigate the spatial configuration of voids inside the battery. To automatically and fully extract the shape and location of the voids, three types of filters were applied consecutively: a median blur filter to extract relatively larger voids, a morphological opening filter for small dot-shaped voids, and a morphological closing filter for small voids with concave contrast. The three data cubes processed separately by these filters were integrated by a union operation into the final unified volume data, which confirmed the correct extraction of voids over the entire range of sizes contained in the original data.
Design of an integrated aerial image sensor
NASA Astrophysics Data System (ADS)
Xue, Jing; Spanos, Costas J.
2005-05-01
The subject of this paper is a novel integrated aerial image sensor (IAIS) system suitable for integration within the surface of an autonomous test wafer. The IAIS could be used as a lithography processing monitor, affording a "wafer's eye view" of the process and therefore facilitating advanced process control and diagnostics without integrating (and dedicating) the sensor to the processing equipment. The IAIS is composed of an aperture mask and an array of photo-detectors. In order to retrieve nanometer-scale resolution of the aerial image with a practical photo-detector pixel size, we propose a design of an aperture mask involving a series of spatial-phase "moving" aperture groups. We demonstrate a design example aimed at the 65 nm technology node through TEMPEST simulation. The optimized key design parameters include an aperture width of about 30 nm and an aperture thickness of about 70 nm, offering a spatial resolution of about 5 nm, all with comfortable fabrication tolerances. Our preliminary simulation work indicates the possibility of the IAIS being applied to immersion lithography. A bench-top far-field experiment verifies that our approach of spatial-frequency down-shifting by forming large Moire patterns is feasible.
Common hyperspectral image database design
NASA Astrophysics Data System (ADS)
Tian, Lixun; Liao, Ningfang; Chai, Ali
2009-11-01
This paper introduces a common hyperspectral image database designed with a demand-oriented database design method (CHIDB), which brings together ground-based spectra, standardized hyperspectral cubes and spectral analysis to serve a range of applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; data mining concepts and functions were built into CHIDB to make it better suited to agricultural, geological and environmental applications. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET framework and designed with an MVC architecture comprising five main functional modules: data importer/exporter, image/spectrum viewer, data processor, parameter extractor, and on-line analyzer. The original data are stored in SQL Server 2008 for efficient search, query and update, and advanced spectral image processing techniques such as parallel processing in C# are used. Finally, an application case in agricultural disease detection is presented.
Novel algorithm by low complexity filter on retinal vessel segmentation
NASA Astrophysics Data System (ADS)
Rostampour, Samad
2011-10-01
This article presents a new method to detect blood vessels in digital retinal images. Retinal vessel segmentation is important for detecting side effects of diabetes, because diabetes can form new capillaries which are very brittle. The research was carried out in two phases: preprocessing and processing. The preprocessing phase applies a new filter that produces a suitable output: it shows vessels in dark color on a white background and creates a good contrast between vessels and background. Its complexity is very low and extra images are eliminated. The second phase, processing, uses a Bayesian method, which is a supervised classification approach. This method uses the mean and variance of pixel intensities to calculate class probabilities. Finally, the pixels of the image are divided into two classes: vessels and background. The images used are from the DRIVE database. After performing this operation, the calculation gives an average efficiency of 95 percent. The method was also applied to an external sample outside the DRIVE database exhibiting retinopathy, and a perfect result was obtained.
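The Bayesian step described above can be sketched as a two-class Gaussian classifier built from the per-class mean and variance of pixel intensities. The training masks and the vessel prior below are assumptions for illustration, not the paper's settings.

```python
"""Two-class Gaussian (Bayesian) pixel classifier sketch for vessel masks."""
import numpy as np

def fit_class(intensities):
    return float(np.mean(intensities)), float(np.var(intensities) + 1e-12)

def log_gauss(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def classify(image, vessel_px, background_px, prior_vessel=0.1):
    mv, vv = fit_class(vessel_px)          # vessel intensity statistics
    mb, vb = fit_class(background_px)      # background intensity statistics
    log_post_v = log_gauss(image, mv, vv) + np.log(prior_vessel)
    log_post_b = log_gauss(image, mb, vb) + np.log(1 - prior_vessel)
    return log_post_v > log_post_b          # True where a pixel looks like vessel

# image = filtered_image                     # output of the preprocessing filter
# mask = classify(image, image[train_vessel_mask], image[train_bg_mask])
```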
Comparison of turbulence mitigation algorithms
NASA Astrophysics Data System (ADS)
Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric
2017-07-01
When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.
NASA Astrophysics Data System (ADS)
Molinari, Filippo; Acharya, Rajendra; Zeng, Guang; Suri, Jasjit S.
2011-03-01
The carotid intima-media thickness (IMT) is the most widely used marker for the progression of atherosclerosis and the onset of cardiovascular disease. Computer-aided measurements improve accuracy, but usually require user interaction. In this paper we characterized a new and completely automated technique for carotid segmentation and IMT measurement based on the merits of two previously developed techniques. We used an integrated approach of intelligent image feature extraction and line fitting for automatically locating the carotid artery in the image frame, followed by wall interface extraction based on a Gaussian edge operator. We call our system CARES. We validated CARES on a multi-institutional database of 300 carotid ultrasound images. The IMT measurement bias was 0.032 +/- 0.141 mm, better than other automated techniques and comparable to that of user-driven methodologies. CARES processed 96% of the images, leading to a figure of merit of 95.7%. CARES ensures complete automation and high accuracy in IMT measurement; hence it could be a suitable clinical tool for processing large datasets in multicenter studies involving atherosclerosis.
Steganalysis feature improvement using expectation maximization
NASA Astrophysics Data System (ADS)
Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.
2007-04-01
Images and data files provide an excellent opportunity for concealing illegal or clandestine material. Currently, there are over 250 different tools which embed data into an image without causing noticeable changes to the image. From a forensics perspective, when a system is confiscated or an image of a system is generated, the investigator needs a tool that can scan and accurately identify files suspected of containing malicious information. The identification process is termed the steganalysis problem, which covers both blind identification, in which only normal images are available for training, and multi-class identification, in which both clean and stego images at several embedding rates are available for training. In this paper, a clustering and classification technique (Expectation Maximization with mixture models) is investigated to determine whether a digital image contains hidden information. The steganalysis problem is addressed as both anomaly detection and multi-class detection. The various clusters represent clean images and stego images with between 1% and 10% embedding percentage. Based on the results, it is concluded that the EM classification technique is highly suitable for both blind detection and the multi-class problem.
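A minimal sketch of EM with Gaussian mixture models on image feature vectors is shown below, in the spirit of the approach above. The feature extraction step and the number of mixture components (clean plus several embedding rates) are assumptions, not the paper's configuration.

```python
"""EM clustering of steganalysis feature vectors with a Gaussian mixture."""
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_features(features, n_classes=11, seed=0):
    """features: (n_images, n_features) array of image features."""
    gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                          random_state=seed)
    gmm.fit(features)                 # EM fit
    labels = gmm.predict(features)    # hard assignment to mixture components
    return gmm, labels

# Anomaly-style (blind) use: fit on clean-image features only, then flag images
# whose log-likelihood under the fitted mixture falls below a chosen threshold.
# gmm, _ = cluster_features(clean_features, n_classes=3)
# suspicious = gmm.score_samples(test_features) < threshold
```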
NASA Astrophysics Data System (ADS)
Steinman, Joe; Koletar, Margaret; Stefanovic, Bojana; Sled, John G.
2016-03-01
This study evaluates two-photon fluorescence microscopy (2PFM) of in vivo and ex vivo cleared samples for visualizing cortical vasculature. Four mouse brains were imaged with in vivo 2PFM. The mice were then perfused with a FITC gel and cleared in fructose, and the same regions imaged in vivo were imaged ex vivo. Vessels were segmented automatically in both images using an in-house developed algorithm that accounts for the anisotropic and spatially varying PSF ex vivo. Through non-linear warping, the ex vivo image and tracing were aligned to the in vivo image, and the corresponding vessels were identified through a local search algorithm. This enabled comparison of identical vessels in vivo and ex vivo. A similar process was conducted on the in vivo tracing to determine the percentage of vessels perfused. Of all the vessels identified over the four brains in vivo, 98% were present ex vivo. There was a trend towards a 12.7% reduction in vessel diameter ex vivo, and the shrinkage varied between specimens (0% to 26%). Large-diameter surface vessels, through a process termed 'shadowing', attenuated the in vivo signal from deeper cortical vessels by 40% at 300 μm below the cortical surface; this does not occur ex vivo. In summary, though there is a mean diameter shrinkage ex vivo, ex vivo imaging has a reduced shadowing artifact. Additionally, since imaging depths are limited only by the working distance of the microscope objective, ex vivo imaging is more suitable for imaging large portions of the brain.
NASA Astrophysics Data System (ADS)
Kwee, Edward; Peterson, Alexander; Stinson, Jeffrey; Halter, Michael; Yu, Liya; Majurski, Michael; Chalfoun, Joe; Bajcsy, Peter; Elliott, John
2018-02-01
Induced pluripotent stem cells (iPSCs) are reprogrammed cells that can have heterogeneous biological potential. Quality assurance metrics of reprogrammed iPSCs will be critical to ensure reliable use in cell therapies and personalized diagnostic tests. We present a quantitative phase imaging (QPI) workflow which includes acquisition, processing, and stitching of multiple adjacent image tiles across a large field of view (LFOV) of a culture vessel. Low-magnification image tiles (10x) were acquired with a Phasics SID4BIO camera on a Zeiss microscope. iPSC cultures were maintained using a custom stage incubator on an automated stage. We implement an image acquisition strategy that compensates for non-flat illumination wavefronts to enable imaging of an entire well plate, including the meniscus region normally obscured in Zernike phase contrast imaging. Polynomial fitting and background mode correction were implemented to enable comparability and stitching between multiple tiles. LFOV imaging of reference materials indicated that the image acquisition and processing strategies did not affect quantitative phase measurements across the LFOV. Analysis of iPSC colony images demonstrated that the mass doubling time was significantly different from the area doubling time. These measurements were benchmarked with prototype microsphere beads and etched-glass gratings with specified spatial dimensions, designed to be QPI reference materials with optical pathlength shifts suitable for cell microscopy. This QPI workflow and the use of reference materials can provide a non-destructive, traceable imaging method for characterizing iPSC heterogeneity.
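The polynomial background flattening mentioned above can be sketched as a least-squares fit of a low-order 2-D polynomial to background pixels, subtracted before stitching. This is an illustration of the general idea, not the exact correction in the paper; the background mask and polynomial order are assumptions.

```python
"""Second-order polynomial background flattening for a phase tile (sketch)."""
import numpy as np

def flatten_phase(tile, bg_mask, order=2):
    h, w = tile.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx / w - 0.5
    y = yy / h - 0.5
    # Design matrix of polynomial terms up to the requested total order.
    terms = [x ** i * y ** j for i in range(order + 1)
             for j in range(order + 1 - i)]
    A = np.stack([t[bg_mask] for t in terms], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, tile[bg_mask], rcond=None)
    background = sum(c * t for c, t in zip(coeffs, terms))
    return tile - background

# Hypothetical usage: treat the dimmest 20% of pixels as background.
# flat = flatten_phase(phase_tile, phase_tile < np.percentile(phase_tile, 20))
```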
Panorama imaging for image-to-physical registration of narrow drill holes inside spongy bones
NASA Astrophysics Data System (ADS)
Bergmeier, Jan; Fast, Jacob Friedemann; Ortmaier, Tobias; Kahrs, Lüder Alexander
2017-03-01
Image-to-physical registration based on volumetric data like computed tomography on the one side and intraoperative endoscopic images on the other side is an important method for various surgical applications. In this contribution, we present methods to generate panoramic views from endoscopic recordings for image-to-physical registration of narrow drill holes inside spongy bone. One core application is the registration of drill poses inside the mastoid during minimally invasive cochlear implantations. Besides the development of image processing software for registration, investigations are performed on a miniaturized optical system achieving 360° radial imaging in one shot by extending a conventional, small, rigid rod-lens endoscope. A reflective cone geometry is used to deflect radially incoming light rays into the endoscope optics; therefore, a cone mirror is mounted in front of a conventional 0° endoscope. Furthermore, panoramic images of inner drill-hole surfaces in artificial bone material are created. Prior to drilling, cone beam computed tomography data is acquired from this artificial bone and simulated endoscopic views are generated from these data. A qualitative and quantitative image comparison of the resulting views in terms of image-to-image registration is performed. First results show that downsizing of the panoramic optics to a diameter of 3 mm is possible. Conventional rigid rod-lens endoscopes can be extended to produce suitable panoramic one-shot image data. Using unrolling and stitching methods, images of the inner drill-hole surface similar to computed tomography image data of the same surface were created. Registration is performed on ten perturbations of the search space and results in target registration errors of (0.487 +/- 0.438) mm at the entry point and (0.957 +/- 0.948) mm at the exit, as well as an angular error of (1.763 +/- 1.536)°. The results show the suitability of this image data for image-to-image registration. Analysis of the error components in different directions reveals a strong influence of the pattern structure, meaning that higher diversity results in smaller errors.
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.
2018-03-01
Biologically motivated self-learning equivalence-convolutional recurrent multilayer neural structures (BLM_SL_EC_RMNS) for the clustering and recognition of image fragments are discussed. We consider these neural structures and their spatially invariant equivalental models (SIEMs), which are based on proposed equivalent two-dimensional image-similarity functions and the corresponding matrix-matrix (or tensor) procedures, using continuous-logic and nonlinear processing as basic operations. These SIEMs give a simple description of signal processing at all training and recognition stages and are suitable for unipolar-coded multilevel signals. The clustering efficiency of such models and their implementation depend on the discriminant properties of the neural elements in the hidden layers; the main model and architecture parameters and characteristics therefore depend on the types of nonlinear processing and the functions used for image comparison or for adaptive equivalent weighting of input patterns. We show that these SL_EC_RMNSs have several advantages, such as self-learning and self-identification of fragment features and similarity cues, and the ability to cluster and recognize image fragments efficiently even when they are strongly mutually correlated. The proposed clustering method, which combines learning and recognition of fragments with regard to their structural features, is suitable not only for binary but also for color images, and combines self-learning with the formation of weighted clustered matrix patterns. Its model is built on recursive continuous-logic and nonlinear processing algorithms together with the k-average (k-means) method or the winner-takes-all (WTA) rule. Experimental results confirm that fragments with large numbers of elements can be clustered, and the possibility of generalizing these models to the space-invariant case is shown for the first time. Experiments with reference image arrays and fragments of different dimensions, carried out in the Mathcad software environment, showed that the proposed method is universal, converges within a small number of iterations, maps easily onto the matrix structure, and is promising. Understanding the mechanisms of self-learning equivalence-convolutional clustering, the accompanying competitive processes in the neurons, and the neural auto-encoding/decoding and recognition principles that use self-learned cluster patterns is therefore important; these rely on the algorithm and principles of nonlinear processing of two-dimensional image-comparison functions. The experimental results show that such models can be successfully used for auto- and hetero-associative recognition, and they can also help explain mechanisms known as the reinforcement-inhibition concept. We also present model experiments confirming that nonlinear processing with the equivalent function allows the neuron winners to be determined and the weight matrix to be adapted. At the end of the report, we show how the obtained results can be used to propose a new, more efficient hardware architecture for SL_EC_RMNS based on matrix-tensor multipliers, and we estimate the parameters and performance of such architectures.
NASA Astrophysics Data System (ADS)
Luo, Jiasai; Guo, Yongcai; Wang, Xin
2018-06-01
This paper puts forward a novel method for the fabrication of a sandwich-structured BCE using a detachable micro-hole array (MHA) prepared by 3D printing. Compared with most traditional methods, 3D printing enables effective direct micro-fabrication of the curved BCE without pattern transfer and substrate reshaping processes. This 3D fabrication method allows rapid fabrication of the curved BCE and automatic assembly of the detachable MHA using a custom-built mold under negative pressure. The formation of a multi-focusing micro-lens array (MLA) was realized by adjusting the parameters of the curved detachable MHA. The imaging performance was effectively enhanced by the sandwich structure, which consists of the multi-focusing MLA, the outer detachable MHA and the inner solidified MHA. This method is suitable for mass production due to its advantages as a time-saving, cost-effective and simple process. Optical design software was used to analyze the optical properties, and an imaging simulation was performed.
A Hitchhiker's Guide to Functional Magnetic Resonance Imaging
Soares, José M.; Magalhães, Ricardo; Moreira, Pedro S.; Sousa, Alexandre; Ganz, Edward; Sampaio, Adriana; Alves, Victor; Marques, Paulo; Sousa, Nuno
2016-01-01
Functional Magnetic Resonance Imaging (fMRI) studies have become increasingly popular both with clinicians and researchers as they are capable of providing unique insights into brain functions. However, multiple technical considerations (ranging from specifics of paradigm design to imaging artifacts, complex protocol definition, and a multitude of processing and analysis methods, as well as intrinsic methodological limitations) must be considered and addressed in order to optimize fMRI analysis and to arrive at the most accurate and grounded interpretation of the data. In practice, the researcher/clinician must choose, from many available options, the most suitable software tool for each stage of the fMRI analysis pipeline. Herein we provide a straightforward guide designed to address, for each of the major stages, the techniques and tools involved in the process. We have developed this guide both to help those new to the technique to overcome the most critical difficulties in its use, as well as to serve as a resource for the neuroimaging community. PMID:27891073
NASA Astrophysics Data System (ADS)
Hruszkewycz, Stephan; Cha, Wonsuk; Ulvestad, Andrew; Fuoss, Paul; Heremans, F. Joseph; Harder, Ross; Andrich, Paolo; Anderson, Christopher; Awschalom, David
The nitrogen-vacancy center in diamond has attracted considerable attention for nanoscale sensing due to unique optical and spin properties. Many of these applications require diamond nanoparticles, which contain large amounts of residual strain due to the detonation or milling process used in their fabrication. Here, we present experimental, in-situ observations of changes in the morphology and internal strain state of commercial nanodiamonds during high-temperature annealing, using Bragg coherent diffraction imaging to reconstruct a strain-sensitive 3D image of individual sub-micron-sized crystals. We find minimal structural changes to the nanodiamonds at temperatures below 650 °C, and that at higher temperatures up to 750 °C the diamond-structured volume fraction of the nanocrystals tends to shrink. The degree of internal lattice distortion within the nanodiamond particles also decreases during the anneal. Our findings potentially enable the design of efficient processing of commercial nanodiamonds into viable materials suitable for device design. We acknowledge support from U.S. DOE, Office of Science, BES, MSE.
Pu, Yuan-Yuan; Sun, Da-Wen
2015-12-01
Mango slices were dried by microwave-vacuum drying using a domestic microwave oven equipped with a vacuum desiccator inside. Two lab-scale hyperspectral imaging (HSI) systems were employed for moisture prediction. The Page and two-term thin-layer drying models were suitable for describing the drying process, with a goodness of fit of R² = 0.978. Partial least squares (PLS) regression was applied to correlate the mean spectrum of each slice with the reference moisture content. With three waveband selection strategies, optimal wavebands for moisture prediction were identified. The best model, RC-PLS-2 (Rp² = 0.972 and RMSEP = 4.611%), was implemented in the moisture visualization procedure. The moisture distribution map clearly showed that the moisture content in the central part of the mango slices was lower than that of the other parts. The present study demonstrated that hyperspectral imaging is a useful tool for non-destructively and rapidly measuring and visualizing moisture content during the drying process.
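The PLS calibration step described above can be sketched as follows: mean spectra are regressed against reference moisture values and evaluated by R² and RMSEP on held-out samples. The component count and the train/test split are assumptions, not the paper's settings.

```python
"""PLS calibration sketch: mean spectra vs. reference moisture content."""
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

def calibrate(spectra, moisture, n_components=8, seed=0):
    """spectra: (n_samples, n_wavelengths); moisture: (n_samples,) in percent."""
    X_tr, X_te, y_tr, y_te = train_test_split(spectra, moisture,
                                              test_size=0.3, random_state=seed)
    pls = PLSRegression(n_components=n_components).fit(X_tr, y_tr)
    y_hat = pls.predict(X_te).ravel()
    rmsep = mean_squared_error(y_te, y_hat) ** 0.5
    return pls, r2_score(y_te, y_hat), rmsep

# Moisture maps can then be produced by applying `pls.predict` pixel-wise to
# the hyperspectral cube at the selected wavebands.
```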
An effective hand vein feature extraction method.
Li, Haigang; Zhang, Qian; Li, Chengdong
2015-01-01
As an authentication method developed in recent years, vein recognition technology offers unique advantages as a biometric. This paper studies the specific procedure for extracting dorsal hand vein characteristics. Because different hand positions occur during image collection, a suitable vein region orientation method is put forward so that the located region is the same for all hand positions. In addition, to eliminate pseudo-vein areas, the valley-region shape extraction operator is improved and combined with multiple segmentation algorithms. The images are segmented step by step, making the vein texture appear clear and accurate. Lastly, the segmented images are filtered, eroded, and refined, which removes most of the pseudo-vein information. Finally, a clear vein skeleton diagram is obtained, demonstrating the effectiveness of the algorithm. This paper presents a dorsal hand vein region location method that makes it possible to rotate and correct the image by computing the inclination of the contour at the side of the back of the hand.
Podwysocki, Melvin H.; Power, Marty S.; Salisbury, Jack; Jones, O.D.
1984-01-01
Landsat-4 Thematic Mapper (TM) data of southern Nevada collected under conditions of low-angle solar illumination were digitally processed to identify hydroxyl-bearing minerals commonly associated with hydrothermal alteration in volcanic terrains. Digital masking procedures were used to exclude shadow areas and vegetation and thus to produce a CRC image suitable for testing the new TM bands as a means to map hydrothermally altered rocks. Field examination of a masked CRC image revealed that several different types of altered rocks displayed hues associated with spectral characteristics common to hydroxyl-bearing minerals. Several types of unaltered rocks also displayed similar hues.
Weiss, Lucien E; Naor, Tal; Shechtman, Yoav
2018-06-19
The structural organization and dynamics of DNA are known to be of paramount importance in countless cellular processes, but capturing these events poses a unique challenge. Fluorescence microscopy is well suited for these live-cell investigations, but requires attaching fluorescent labels to the species under investigation. Over the past several decades, a suite of techniques have been developed for labeling and imaging DNA, each with various advantages and drawbacks. Here, we provide an overview of the labeling and imaging tools currently available for visualizing DNA in live cells, and discuss their suitability for various applications.
Evaluation of radiometric and geometric characteristics of LANDSAT-D imaging system
NASA Technical Reports Server (NTRS)
Salisbury, J. W.; Podwysocki, M. H.; Bender, L. U.; Rowan, L. C. (Principal Investigator)
1983-01-01
With vegetation masked and noise sources eliminated or minimized, different carbonate facies could be discriminated in a south Florida scene. Laboratory spectra of grab samples indicate that a 20% change in the depth of the carbonate absorption band was detected despite the effects of atmospheric absorption. Both bright and dark hydrothermally altered volcanic rocks can be discriminated from their unaltered equivalents. A previously unrecognized altered area was identified on the basis of the TM images. The ability to map desert varnish in semi-arid terrains has economic significance, as it defines areas that are less susceptible to desert erosional processes and suitable for construction development.
Jin, Shuo; Li, Dengwang; Wang, Hongjun; Yin, Yong
2013-01-07
Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired from an 18F-FDG PET/CT scanner, but the two acquisition processes are separate and take a long time. As a result, there are global position errors and local deformable errors caused by respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT to PET image registration method in esophageal cancer, to eventually facilitate accurate positioning of the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first utilized to preprocess position errors between the PET and CT images, aligning the two images as a whole. The demons algorithm, based on the optical flow field, offers fast processing speed and high accuracy, and the gradient of mutual information-based demons (GMI demons) algorithm adds an additional external force based on the gradient of mutual information (GMI) between the two images, which is suitable for multimodality image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain its robustness, and avoid local extrema, a multiresolution image pyramid structure was used before deformable registration. By quantitatively and qualitatively analyzing cases with esophageal cancer, the registration scheme proposed in this paper improves registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy applications.
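For orientation, a single-resolution sketch of the classic Thirion demons update is given below; it illustrates the kind of deformable step described above but omits the gradient-of-mutual-information force and the multiresolution pyramid, so it is a didactic sketch rather than the authors' implementation.

```python
"""Classic (intensity-difference) demons update, single resolution, 2-D."""
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_2d(fixed, moving, n_iter=100, sigma=2.0):
    fixed = fixed.astype(float)
    moving = moving.astype(float)
    gy, gx = np.gradient(fixed)                      # gradients of the fixed image
    grid = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    uy = np.zeros_like(fixed)
    ux = np.zeros_like(fixed)
    for _ in range(n_iter):
        warped = map_coordinates(moving, [grid[0] + uy, grid[1] + ux], order=1)
        diff = warped - fixed
        denom = gx ** 2 + gy ** 2 + diff ** 2
        denom[denom == 0] = 1.0
        # Demons force, then Gaussian (diffusion-like) regularization of the field.
        uy = gaussian_filter(uy - diff * gy / denom, sigma)
        ux = gaussian_filter(ux - diff * gx / denom, sigma)
    return ux, uy

# ux, uy = demons_2d(ct_slice, pet_slice_resampled_to_ct_grid)
```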
NASA Astrophysics Data System (ADS)
Märk, Julia; Ruschke, Karen; Dortay, Hakan; Schreiber, Isabelle; Sass, Andrea; Qazi, Taimoor; Pumberger, Matthias; Laufer, Jan
2014-03-01
The capability to image stem cells in vivo in small animal models over extended periods of time is important to furthering our understanding of the processes involved in tissue regeneration. Photoacoustic imaging is suited to this application as it can provide high resolution (tens of microns) absorption-based images of superficial tissues (cm depths). However, stem cells are rare, highly migratory, and can divide into more specialised cells. Genetic labelling strategies are therefore advantageous for their visualisation. In this study, methods were developed for the transfection and viral transduction of mesenchymal stem cells with reporter genes for the co-expression of tyrosinase and a fluorescent protein (mCherry). Initial photoacoustic imaging experiments on tyrosinase-expressing cells in small animal models of tissue regeneration were also conducted. Lentiviral transduction methods were shown to result in stable expression of tyrosinase and mCherry in mesenchymal stem cells. The results suggest that photoacoustic imaging using reporter genes is suitable for the study of stem cell driven tissue regeneration in small animals.
Parallel Processing Systems for Passive Ranging During Helicopter Flight
NASA Technical Reports Server (NTRS)
Sridhar, Bavavar; Suorsa, Raymond E.; Showman, Robert D. (Technical Monitor)
1994-01-01
The complexity of rotorcraft missions involving operations close to the ground results in high pilot workload. In order to allow the pilot time to perform mission-oriented tasks, sensor aiding and automation of some of the guidance and control functions are highly desirable. Images from an electro-optical sensor provide a covert way of detecting objects in the flight path of a low-flying helicopter. Passive ranging consists of processing a sequence of images using techniques based on optical flow computation and recursive estimation. The passive ranging algorithm has to extract obstacle information from imagery at rates varying from five to thirty or more frames per second, depending on the helicopter speed. We have implemented and tested the passive ranging algorithm off-line using helicopter-collected images. However, the real-time data and computation requirements of the algorithm are beyond the capability of any off-the-shelf microprocessor or digital signal processor. This paper describes the computational requirements of the algorithm and uses parallel processing technology to meet them. Various issues in the selection of a parallel processing architecture are discussed, and four different computer architectures are evaluated regarding their suitability to process the algorithm in real time. Based on this evaluation, we conclude that real-time passive ranging is a realistic goal and can be achieved within a short time.
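As a worked illustration of why image motion carries range information, consider the simplified case of pure forward translation: a feature at image distance r from the focus of expansion, with radial image velocity r_dot, lies at range Z = V·r/r_dot (equivalently Z = V·tau with time-to-contact tau = r/r_dot). This simplification is for intuition only and is not the recursive estimator used in the paper.

```python
"""Range from optical flow under pure forward translation (illustrative)."""

def range_from_flow(speed_mps, r_px, r_dot_px_per_s):
    tau = r_px / r_dot_px_per_s       # time to contact, seconds
    return speed_mps * tau            # range along the flight path, meters

# A helicopter at 30 m/s sees a feature 100 px from the focus of expansion
# expanding at 10 px/s: the obstacle is roughly 300 m away.
print(range_from_flow(30.0, 100.0, 10.0))
```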
Ilunga-Mbuyamba, Elisee; Avina-Cervantes, Juan Gabriel; Cepeda-Negrete, Jonathan; Ibarra-Manzano, Mario Alberto; Chalopin, Claire
2017-12-01
Brain tumor segmentation is a routine process in a clinical setting and provides useful information for diagnosis and treatment planning. Manual segmentation, performed by physicians or radiologists, is a time-consuming task due to the large quantity of medical data generated at present. Hence, automatic segmentation methods are needed, and several approaches have been introduced in recent years, including Localized Region-based Active Contour Models (LRACMs). Many popular LRACMs exist, each with strengths and weaknesses. In this paper, the automatic selection of an LRACM based on image content and its application to brain tumor segmentation is presented. To this end, a framework to select one of three LRACMs, i.e., Local Gaussian Distribution Fitting (LGDF), localized Chan-Vese (C-V) and the Localized Active Contour Model with Background Intensity Compensation (LACM-BIC), is proposed. Twelve visual features are extracted to properly select the method that should process a given input image. The system is based on a supervised approach. Applied specifically to Magnetic Resonance Imaging (MRI) images, the experiments showed that the proposed system is able to correctly select the suitable LRACM to handle a specific image. Consequently, the selection framework achieves better accuracy than the three LRACMs used separately.
Artificial retina model for the retinally blind based on wavelet transform
NASA Astrophysics Data System (ADS)
Zeng, Yan-an; Song, Xin-qiang; Jiang, Fa-gang; Chang, Da-ding
2007-01-01
An artificial retina aims to stimulate the remaining retinal neurons in patients with degenerated photoreceptors. Microelectrode arrays have been developed for this purpose as part of the stimulator. Designing such microelectrode arrays first requires a suitable mathematical method for human retinal information processing. In this paper, a flexible and adjustable model for extracting human visual information is presented, based on the wavelet transform. Given the flexibility of the wavelet transform for image information processing and its consistency with human visual information extraction, wavelet transform theory is applied to an artificial retina model for the retinally blind. The response of the model to synthetic images is shown. The simulated experiment demonstrates that the model behaves in a manner qualitatively similar to biological retinas and thus may serve as a basis for the development of an artificial retina.
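A minimal multiresolution decomposition with PyWavelets illustrates the kind of wavelet-based information extraction such a model builds on. The choice of wavelet and decomposition level is an assumption, and this sketch is not the authors' retina model itself.

```python
"""Wavelet multiresolution decomposition of an input image (sketch)."""
import numpy as np
import pywt

def retinal_channels(image, wavelet="haar", level=3):
    """Split an image into a coarse approximation and oriented detail bands."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    approx = coeffs[0]        # low-pass summary of the scene
    details = coeffs[1:]      # (horizontal, vertical, diagonal) bands per level
    return approx, details

# image = np.random.rand(128, 128)
# approx, details = retinal_channels(image)
# A stimulation pattern could then be derived by quantizing `approx` down to
# the resolution of the electrode array.
```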
The Application of the Montage Image Mosaic Engine To The Visualization Of Astronomical Images
NASA Astrophysics Data System (ADS)
Berriman, G. Bruce; Good, J. C.
2017-05-01
The Montage Image Mosaic Engine was designed as a scalable toolkit, written in C for performance and portability across *nix platforms, that assembles FITS images into mosaics. This code is freely available and has been widely used in the astronomy and IT communities for research, product generation, and for developing next-generation cyber-infrastructure. Recently, it has begun finding applicability in the field of visualization. This development has come about because the toolkit design allows easy integration into scalable systems that process data for subsequent visualization in a browser or client. The toolkit includes a visualization tool suitable for automation and for integration into Python: mViewer creates, with a single command, complex multi-color images overlaid with coordinate displays, labels, and observation footprints, and includes an adaptive image histogram equalization method that preserves the structure of a stretched image over its dynamic range. The Montage toolkit contains functionality originally developed to support the creation and management of mosaics, but which also offers value to visualization: a background rectification algorithm that reveals the faint structure in an image; and tools for creating cutout and downsampled versions of large images. Version 5 of Montage offers support for visualizing data written in the HEALPix sky-tessellation scheme, and functionality for processing and organizing images to comply with the TOAST sky-tessellation scheme required for consumption by the World Wide Telescope (WWT). Four online tutorials allow readers to reproduce and extend all the visualizations presented in this paper.
Visidep (TM): A Three-Dimensional Imaging System For The Unaided Eye
NASA Astrophysics Data System (ADS)
McLaurin, A. Porter; Jones, Edwin R.; Cathey, LeConte
1984-05-01
The VISIDEP process for creating images in three dimensions on flat screens is suitable for photographic, electrographic and computer generated imaging systems. Procedures for generating these images vary from medium to medium due to the specific requirements of each technology. Imaging requirements for photographic and electrographic media are more directly tied to the hardware than are computer based systems. Applications of these technologies are not limited to entertainment, but have implications for training, interactive computer/video systems, medical imaging, and inspection equipment. Through minor modification the system can provide three-dimensional images with accurately measurable relationships for robotics and adds this factor for future developments in artificial intelligence. In almost any area requiring image analysis or critical review, VISIDEP provides the added advantage of three-dimensionality. All of this is readily accomplished without aids to the human eye. The system can be viewed in full color, false-color infra-red, and monochromatic modalities from any angle and is also viewable with a single eye. Thus, the potential of application for this developing system is extensive and covers the broad spectrum of human endeavor from entertainment to scientific study.
Effect of using different cover image quality to obtain robust selective embedding in steganography
NASA Astrophysics Data System (ADS)
Abdullah, Karwan Asaad; Al-Jawad, Naseer; Abdulla, Alan Anwer
2014-05-01
One of the common types of steganography is to conceal an image as a secret message in another image, normally called a cover image; the resulting image is called a stego image. The aim of this paper is to investigate the effect of using different cover image qualities, and also to analyse the use of different bit-planes in terms of robustness against well-known active attacks such as gamma correction, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e. other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps. First, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting conditions. The second step is to nominate the most useful blocks for embedding based on their entropy and average. The third is to select the right bit-plane for embedding. This kind of block selection makes the embedding process scatter the secret message(s) randomly across the cover image. Different tests have been performed to select a proper block size, which depends on the nature of the cover image used. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for the embedding. Experimental results demonstrate that the image quality used for the cover images has an effect when the stego image is attacked by different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognised visually in the stego image.
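Two ingredients of the scheme above, ranking blocks by entropy and writing message bits into a higher bit-plane, can be sketched as follows. The block size, bit-plane index and selection threshold are assumptions for illustration, not the paper's exact three-step scheme.

```python
"""Entropy-based block selection and higher bit-plane embedding (sketch)."""
import numpy as np

def block_entropy(block):
    hist = np.bincount(block.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def embed(cover, bits, block=16, plane=3, min_entropy=5.0):
    """cover: 8-bit grayscale array; bits: list of 0/1 message bits."""
    stego = cover.copy()
    k = 0
    clear_mask = np.uint8(0xFF ^ (1 << plane))
    for r in range(0, cover.shape[0] - block + 1, block):
        for c in range(0, cover.shape[1] - block + 1, block):
            blk = stego[r:r + block, c:c + block]
            if block_entropy(blk) < min_entropy or k >= len(bits):
                continue
            flat = blk.flatten()
            n = min(len(bits) - k, flat.size)
            chunk = np.asarray(bits[k:k + n], dtype=np.uint8)
            flat[:n] = (flat[:n] & clear_mask) | (chunk * np.uint8(1 << plane))
            stego[r:r + block, c:c + block] = flat.reshape(block, block)
            k += n
    return stego, k   # k = number of bits actually embedded
```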
Visualizing individual microtubules by bright field microscopy
NASA Astrophysics Data System (ADS)
Gutiérrez-Medina, Braulio; Block, Steven M.
2010-11-01
Microtubules are slender (~25 nm diameter), filamentous polymers involved in cellular structure and organization. Individual microtubules have been visualized via fluorescence imaging of dye-labeled tubulin subunits and by video-enhanced, differential interference-contrast microscopy of unlabeled polymers using sensitive CCD cameras. We demonstrate the imaging of unstained microtubules using a microscope with conventional bright field optics in conjunction with a webcam-type camera and a light-emitting diode illuminator. The light scattered by microtubules is image-processed to remove the background, reduce noise, and enhance contrast. The setup is based on a commercial microscope with a minimal set of inexpensive components, suitable for implementation in a student laboratory. We show how this approach can be used in a demonstration motility assay, tracking the gliding motions of microtubules driven by the motor protein kinesin.
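A sketch of the kind of background removal and contrast enhancement mentioned above is shown below; the specific pipeline (median background, frame averaging, filter sizes, percentile stretch) is an assumption for illustration, not the authors' exact processing.

```python
"""Background removal and contrast stretching for a bright-field stack (sketch)."""
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(frames, bg_sigma=25, average=10):
    """frames: (n, h, w) stack from the camera."""
    frames = np.asarray(frames, dtype=float)
    background = np.median(frames, axis=0)            # static background estimate
    flat = frames - background                        # remove illumination pattern
    avg = flat[:average].mean(axis=0)                 # frame averaging for noise
    detail = avg - gaussian_filter(avg, bg_sigma)     # keep fine filamentous signal
    lo, hi = np.percentile(detail, (1, 99))
    return np.clip((detail - lo) / (hi - lo), 0, 1)   # contrast-stretched frame
```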
A simplified focusing and astigmatism correction method for a scanning electron microscope
NASA Astrophysics Data System (ADS)
Lu, Yihua; Zhang, Xianmin; Li, Hai
2018-01-01
Defocus and astigmatism can lead to blurred images and poor resolution. This paper presents a simplified method for focusing and astigmatism correction of a scanning electron microscope (SEM). The method consists of two steps. In the first step, the fast Fourier transform (FFT) of the SEM image is computed and then thresholded to retain the dominant spectral content. In the second step, the thresholded FFT is used for ellipse fitting to determine the presence of defocus and astigmatism. The proposed method clearly provides the relationships between the defocus, the astigmatism and the direction of stretching of the FFT, and it can determine the astigmatism from a single image. Experimental studies are conducted to demonstrate the validity of the proposed method.
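The two steps can be sketched as thresholding the centred FFT magnitude and fitting an ellipse to the surviving points via second moments; an elongated fit then indicates astigmatism along the major axis. The percentile threshold rule is an assumption, and this is an illustration rather than the paper's exact fitting procedure.

```python
"""FFT thresholding and moment-based ellipse fit for an SEM image (sketch)."""
import numpy as np

def fft_ellipse(image, percentile=99.0):
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    logspec = np.log1p(spec)
    ys, xs = np.nonzero(logspec > np.percentile(logspec, percentile))
    yc, xc = ys.mean(), xs.mean()
    cov = np.cov(np.stack([xs - xc, ys - yc]))   # second moments of the kept points
    evals, evecs = np.linalg.eigh(cov)           # ascending: minor, major variance
    minor, major = np.sqrt(evals)
    angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))
    return major / minor, angle                   # elongation ratio, orientation

# A ratio near 1 suggests pure defocus (isotropic blur); a ratio well above 1
# suggests astigmatism with stretching along `angle`.
```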
Multispectral Wavefronts Retrieval in Digital Holographic Three-Dimensional Imaging Spectrometry
NASA Astrophysics Data System (ADS)
Yoshimori, Kyu
2010-04-01
This paper deals with a recently developed passive interferometric technique for retrieving a set of spectral components of wavefronts propagating from a spatially incoherent, polychromatic object. The technique is based on measurement of the 5-D spatial coherence function using a suitably designed interferometer. By applying signal processing, including aperture synthesis and spectral decomposition, one may obtain a set of wavefronts in different spectral bands. Since each wavefront is equivalent to the complex Fresnel hologram of the polychromatic object at a particular spectral component, application of the conventional Fresnel transform yields a 3-D image for each spectral component. Thus, this technique of multispectral wavefront retrieval provides a new type of 3-D imaging spectrometry based on fully passive interferometry. Experimental results are also shown to demonstrate the validity of the method.
Emulsions for pulsed holography: new and improved processing schemes
NASA Astrophysics Data System (ADS)
Rodin, Alexey M.; Taylor, Rob
2003-05-01
Recent improvements in the processing of commercially available holographic recording materials for pulsed holography are reviewed. Harmonics of pulsed Nd:YLF/Nd:Phosphate Glass, Nd:YLF and Nd:YAG lasers, and the fundamental wavelength of a pulsed Ruby laser, were used as radiation sources for recording transmission and reflection holographic gratings. It is shown that ultra-fine grain-size materials such as PFG-03C and Ultimate-15 can be successfully applied to small- and medium-format pulsed holography applications. These small grain-size emulsions are especially important in the areas of artistic archival portraiture and contact Denisyuk micro-holography of living objects, where noiseless image reconstruction is of primary concern. It is suggested that HOEs, such as full-color image projection screens, may be successfully recorded on PFG-03C holographic emulsions using a pulsed RGB laser. A range of commercial RGB pulsed lasers suitable for these applications is introduced; the visible wavelengths currently produced by these lasers cover the spectrum of 440-660 nm. The latest developments in a full range of pulsed holographic camera systems manufactured by GEOLA, suitable for medium- and large-format portraiture, medical imaging, museum artifact archival recording, and other types of holography, are also reviewed with particular reference to new integrated digital mastering features. Finally, the initial commercial production of a new photopolymer film with a sensitivity range of 625-680 nm is introduced. Initial CW exposure energies at 633 nm were 30-50 mJ/cm2, with diffraction efficiencies of 75-80% observed with this new material.
Multispectral imaging system based on laser-induced fluorescence for security applications
NASA Astrophysics Data System (ADS)
Caneve, L.; Colao, F.; Del Franco, M.; Palucci, A.; Pistilli, M.; Spizzichino, V.
2016-10-01
The development of portable sensors for fast screening of crime scenes is required to reduce the number of pieces of evidence that need to be collected, optimizing time and resources. Laser-based spectroscopic techniques are good candidates for this purpose due to their capability to operate in the field in a remote and rapid way. In this work, the prototype of a multispectral imaging LIF (Laser Induced Fluorescence) system able to detect evidence of different materials over large, crowded and visually confusing areas at distances up to some tens of meters is presented. Data collected as both 2D fluorescence images and LIF spectra are suitable for the identification and localization of the materials of interest. A reduced scan time that preserves the accuracy of the results was taken into account as a main requirement in the system design. An excimer laser with high energy and repetition rate, coupled to a gated high-sensitivity ICCD, assures very good performance for this purpose. Effort has been devoted to speeding up the data processing. The system has been tested in outdoor and indoor real scenarios and some results are reported. Evidence of the plastics polypropylene (PP) and polyethylene (PE) and of polyester has been identified, and their localization in the examined scenes has been highlighted through the data processing. With suitable emission bands, the instrument can be used for the rapid detection of other material classes (e.g., textiles, wood, varnishes). The activities of this work have been supported by the EU-FP7 FORLAB project (Forensic Laboratory for in-situ evidence analysis in a post blast scenario).
Hernández-Morera, Pablo; Castaño-González, Irene; Travieso-González, Carlos M.; Mompeó-Corredera, Blanca; Ortega-Santana, Francisco
2016-01-01
Purpose To develop a digital image processing method to quantify structural components (smooth muscle fibers and extracellular matrix) in the vessel wall stained with Masson’s trichrome, and a statistical method suitable for small sample sizes to analyze the results previously obtained. Methods The quantification method comprises two stages. The pre-processing stage improves tissue image appearance and the vessel wall area is delimited. In the feature extraction stage, the vessel wall components are segmented by grouping pixels with a similar color. The area of each component is calculated by normalizing the number of pixels of each group by the vessel wall area. Statistical analyses are implemented by permutation tests, based on resampling without replacement from the set of the observed data to obtain a sampling distribution of an estimator. The implementation can be parallelized on a multicore machine to reduce execution time. Results The methods have been tested on 48 vessel wall samples of the internal saphenous vein stained with Masson’s trichrome. The results show that the segmented areas are consistent with the perception of a team of doctors and demonstrate good correlation between the expert judgments and the measured parameters for evaluating vessel wall changes. Conclusion The proposed methodology offers a powerful tool to quantify some components of the vessel wall. It is more objective, sensitive and accurate than the biochemical and qualitative methods traditionally used. The permutation tests are suitable statistical techniques to analyze the numerical measurements obtained when the underlying assumptions of the other statistical techniques are not met. PMID:26761643
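As a concrete illustration of the resampling scheme described above, here is a minimal Python sketch of a two-sided permutation test for a difference in group means, resampling labels without replacement; the array names and toy values are hypothetical, not data from the study.

```python
import numpy as np

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in means,
    resampling group labels without replacement."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    observed = group_a.mean() - group_b.mean()
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # relabel without replacement
        diff = pooled[:n_a].mean() - pooled[n_a:].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return observed, count / n_perm

# Hypothetical smooth-muscle area fractions for two groups of vein segments
a = np.array([0.42, 0.38, 0.45, 0.40])
b = np.array([0.30, 0.33, 0.29, 0.35])
print(permutation_test(a, b))   # (observed difference, permutation p-value)
```

Because the null distribution is built from the observed data themselves, this test makes no normality assumption, which is why it suits small samples such as the 48 vessel specimens analyzed here.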
Kovalenko, Lyudmyla Y; Chaumon, Maximilien; Busch, Niko A
2012-07-01
Semantic processing of verbal and visual stimuli has been investigated in semantic violation or semantic priming paradigms in which a stimulus is either related or unrelated to a previously established semantic context. A hallmark of semantic priming is the N400 event-related potential (ERP)--a deflection of the ERP that is more negative for semantically unrelated target stimuli. The majority of studies investigating the N400 and semantic integration have used verbal material (words or sentences), and standardized stimulus sets with norms for semantic relatedness have been published for verbal but not for visual material. However, semantic processing of visual objects (as opposed to words) is an important issue in research on visual cognition. In this study, we present a set of 800 pairs of semantically related and unrelated visual objects. The images were rated for semantic relatedness by a sample of 132 participants. Furthermore, we analyzed low-level image properties and matched the two semantic categories according to these features. An ERP study confirmed the suitability of this image set for evoking a robust N400 effect of semantic integration. Additionally, using a general linear modeling approach of single-trial data, we also demonstrate that low-level visual image properties and semantic relatedness are in fact only minimally overlapping. The image set is available for download from the authors' website. We expect that the image set will facilitate studies investigating mechanisms of semantic and contextual processing of visual stimuli.
On-board landmark navigation and attitude reference parallel processor system
NASA Technical Reports Server (NTRS)
Gilbert, L. E.; Mahajan, D. T.
1978-01-01
An approach to autonomous navigation and attitude reference for Earth-observing spacecraft is described, along with a landmark identification technique based on a sequential similarity detection algorithm (SSDA). Laboratory experiments undertaken to determine whether better-than-one-pixel registration accuracy can be achieved, consistent with onboard processor timing and capacity constraints, are included. The SSDA is implemented using a multi-microprocessor system including synchronization logic and a chip library. The data are processed in parallel stages, effectively reducing the time required to match the small known image within a larger image as seen by the onboard imaging system. Shared memory is incorporated in the system to help communicate intermediate results among microprocessors. The functions include finding mean values and the summation of absolute differences over the image search area. The hardware is a low-power, compact unit suitable for onboard application, with the flexibility to provide for different parameters depending upon the environment.
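To make the matching step concrete, the following is a minimal, unoptimized Python sketch of the underlying sum-of-absolute-differences search over a larger image; it omits the early-abandon thresholding that makes a true SSDA fast, and all names are illustrative.

```python
import numpy as np

def sad_match(search, template):
    """Exhaustive sum-of-absolute-differences search.
    Returns the (row, col) offset with the smallest SAD score."""
    sh, sw = search.shape
    th, tw = template.shape
    best_offset, best_score = None, np.inf
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            window = search[r:r + th, c:c + tw]
            score = np.abs(window - template).sum()
            if score < best_score:
                best_offset, best_score = (r, c), score
    return best_offset, best_score

# Toy example: a landmark chip cut from the search image should match itself
rng = np.random.default_rng(1)
search = rng.random((64, 64))
template = search[20:28, 30:38].copy()
print(sad_match(search, template))   # expect offset (20, 30) with score ~0
```

In the flight processor, each row-column loop iteration is the kind of independent work unit that can be distributed across the parallel microprocessor stages described above.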
Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A
2017-07-25
Measurements of pressure-sensitive paint (PSP) have been performed using new, non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show the relevant imaging characteristics and the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.
A RESTful image gateway for multiple medical image repositories.
Valente, Frederico; Viana-Ferreira, Carlos; Costa, Carlos; Oliveira, José Luis
2012-05-01
Mobile technologies are increasingly important components in telemedicine systems and are becoming powerful decision support tools. Universal access to data may already be achieved by resorting to the latest generation of tablet devices and smartphones. However, the protocols employed for communicating with image repositories are not suited to exchange data with mobile devices. In this paper, we present an extensible approach to solving the problem of querying and delivering data in a format that is suitable for the bandwidth and graphic capacities of mobile devices. We describe a three-tiered component-based gateway that acts as an intermediary between medical applications and a number of Picture Archiving and Communication Systems (PACS). The interface with the gateway is accomplished using Hypertext Transfer Protocol (HTTP) requests following a Representational State Transfer (REST) methodology, which relieves developers from dealing with complex medical imaging protocols and allows the processing of data on the server side.
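As an illustration of the REST pattern described above, a client query might look like the sketch below; the base URL, endpoints and parameters are hypothetical placeholders, not the gateway's actual API.

```python
import requests

# Hypothetical gateway base URL (illustrative only, not the paper's endpoints)
BASE = "http://gateway.example.org/api"

# Query studies by patient name; a gateway of this kind would translate the
# HTTP request into DICOM queries against the configured PACS archives.
resp = requests.get(f"{BASE}/studies",
                    params={"PatientName": "DOE^JOHN"}, timeout=10)
resp.raise_for_status()

for study in resp.json():
    # Request a JPEG rendition scaled for a mobile screen width, so the
    # client never has to parse DICOM or download full-resolution data.
    img = requests.get(f"{BASE}/studies/{study['uid']}/thumbnail",
                       params={"width": 320, "format": "jpeg"}, timeout=10)
    img.raise_for_status()
    with open(f"{study['uid']}.jpg", "wb") as fh:
        fh.write(img.content)
```

The point of the design is visible in the sketch: the mobile client deals only with plain HTTP and compressed renditions, while format conversion and PACS communication happen on the server side.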
Alonso-Caneiro, David; Sampson, Danuta M.; Chew, Avenell L.; Collins, Michael J.; Chen, Fred K.
2018-01-01
Adaptive optics flood illumination ophthalmoscopy (AO-FIO) allows imaging of the cone photoreceptors in the living human retina. However, clinical interpretation of the AO-FIO image remains challenging due to suboptimal quality arising from residual uncorrected wavefront aberrations and rapid eye motion. An objective method of assessing image quality is necessary to determine whether an AO-FIO image is suitable for grading and diagnostic purposes. In this work, we explore the use of focus measure operators as a surrogate measure of AO-FIO image quality. A set of operators is tested on data sets acquired at different focal depths and different retinal locations from healthy volunteers. Our results demonstrate differences in focus measure operator performance in quantifying AO-FIO image quality. Further, we discuss the potential application of the selected focus operators in (i) selection of the best-quality AO-FIO image from a series of images collected at the same retinal location and (ii) assessment of longitudinal changes in the diseased retina. The focus function could be incorporated into real-time AO-FIO image processing and provide an initial automated quality assessment during image acquisition or reading-center grading. PMID:29552404
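As an illustration, the sketch below implements one commonly used focus measure operator (variance of the Laplacian) and uses it to pick the sharpest frame from a series; the paper evaluates a range of operators, and this particular choice and its use here are only assumptions.

```python
import numpy as np
from scipy.ndimage import laplace

def laplacian_variance(image):
    """Variance of the Laplacian: higher values indicate a sharper,
    better-focused image."""
    return float(laplace(image.astype(float)).var())

def pick_sharpest(frames):
    """Select the frame with the highest focus score from a series
    acquired at the same retinal location."""
    scores = [laplacian_variance(f) for f in frames]
    return int(np.argmax(scores)), scores
```

A measure of this kind is cheap enough to run during acquisition, which is what makes the real-time quality feedback mentioned above plausible.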
NASA Astrophysics Data System (ADS)
Du, Hongbo; Al-Jubouri, Hanan; Sellahewa, Harin
2014-05-01
Content-based image retrieval is an automatic process of retrieving images according to image visual content instead of textual annotations. It has many areas of application, from automatic image annotation and archiving, image classification and categorization, to homeland security and law enforcement. The key issues affecting the performance of such retrieval systems include sensible image features that can effectively capture the right amount of visual content and suitable similarity measures to find similar and relevant images ranked in a meaningful order. Many different approaches, methods and techniques have been developed as a result of very intensive research over the past two decades. Among the many existing approaches is a cluster-based approach, in which clustering methods are used to group local feature descriptors into homogeneous regions, and search is conducted by comparing the regions of the query image against those of the stored images. This paper serves as a review of work in this area. The paper first summarizes the existing work reported in the literature and then presents the authors' own investigations in this field. The paper intends to highlight not only the achievements of recent research but also the challenges and difficulties still remaining in this area.
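A minimal sketch of the cluster-based idea follows: local descriptors are grouped with k-means and images are compared via their cluster-occupancy histograms. The descriptor source, codebook size and L1 distance are illustrative assumptions, not a method from any particular paper reviewed.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(all_descriptors, k=64, seed=0):
    """Group local feature descriptors (e.g. SIFT-like vectors stacked
    row-wise) into k homogeneous clusters."""
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(all_descriptors)

def image_signature(descriptors, codebook):
    """Histogram of cluster assignments, normalized to sum to 1."""
    labels = codebook.predict(descriptors)
    hist = np.bincount(labels, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

def rank_by_similarity(query_sig, db_sigs):
    """Rank database images by L1 distance between signatures
    (smallest distance first)."""
    dists = [np.abs(query_sig - s).sum() for s in db_sigs]
    return np.argsort(dists)
```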
Rapid Decimation for Direct Volume Rendering
NASA Technical Reports Server (NTRS)
Gibbs, Jonathan; VanGelder, Allen; Verma, Vivek; Wilhelms, Jane
1997-01-01
An approach for eliminating unnecessary portions of a volume when producing a direct volume rendering is described. This reduction in volume size sacrifices some image quality in the interest of rendering speed. Since volume visualization is often used as an exploratory visualization technique, it is important to reduce rendering times, so the user can effectively explore the volume. The methods presented can speed up rendering by factors of 2 to 3 with minor image degradation. A family of decimation algorithms to reduce the number of primitives in the volume without altering the volume's grid in any way is introduced. This allows the decimation to be computed rapidly, making it easier to change decimation levels on the fly. Further, because very little extra space is required, this method is suitable for the very large volumes that are becoming common. The method is also grid-independent, so it is suitable for multiple overlapping curvilinear and unstructured, as well as regular, grids. The decimation process can proceed automatically, or can be guided by the user so that important regions of the volume are decimated less than unimportant regions. A formal error measure is described based on a three-dimensional analog of the Radon transform. Decimation methods are evaluated based on this metric and on direct comparison with reference images.
NASA Astrophysics Data System (ADS)
Dostal, P.; Krasula, L.; Klima, M.
2012-06-01
Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Because of spatial non-uniformity, different locations in an image are of different importance in terms of perception of the image. In other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest (ROI). The performance of such techniques is measured by subjective evaluation or by objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM utilizes low-level features to assign a different importance to each location in the image. But still none of these objective metrics utilizes an analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROI were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROI in fine quality while the rest of the image is reconstructed with low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.
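For reference, computing the structural metrics mentioned above with scikit-image might look like the sketch below, assuming grayscale images of equal size; it is a generic illustration, not the evaluation pipeline used by the authors.

```python
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def compare(reference, reconstructed, data_range=255):
    """Full-reference scores between an original image and a
    demosaiced/reconstructed one."""
    ssim = structural_similarity(reference, reconstructed, data_range=data_range)
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=data_range)
    return {"SSIM": ssim, "PSNR": psnr}
```

Note that both scores are computed uniformly over the whole frame, which is exactly the limitation discussed above: they have no notion of which pixels belong to a region of interest.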
Impervious surfaces mapping using high resolution satellite imagery
NASA Astrophysics Data System (ADS)
Shirmeen, Tahmina
In recent years, impervious surfaces (IS) have emerged not only as an indicator of the degree of urbanization, but also as an indicator of environmental quality. As impervious surface area increases, storm water runoff increases in velocity, quantity, temperature and pollution load. Any of these attributes can contribute to the degradation of natural hydrology and water quality. Various image processing techniques have been used to identify impervious surfaces; however, most existing impervious surface mapping tools use moderate-resolution imagery. In this project, the potential of standard image processing techniques to generate impervious surface data for change detection analysis using high-resolution satellite imagery was evaluated. The city of Oxford, MS was selected as the study site. Standard image processing techniques, including the Normalized Difference Vegetation Index (NDVI), Principal Component Analysis (PCA), a combination of NDVI and PCA, and image classification algorithms, were used to generate impervious surfaces from multispectral IKONOS and QuickBird imagery acquired in both leaf-on and leaf-off conditions. Accuracy assessments were performed, using truth data generated by manual classification, with Kappa statistics and zonal statistics to select the most appropriate image processing techniques for impervious surface mapping. The performance of the selected image processing techniques was enhanced by incorporating the Soil Brightness Index (SBI) and Greenness Index (GI) derived from Tasseled Cap Transformed (TCT) IKONOS and QuickBird imagery. A time series of impervious surfaces for the period between 2001 and 2007 was generated using the refined image processing techniques to analyze the changes in IS in Oxford. It was found that NDVI and the combined NDVI-PCA methods are the most suitable image processing techniques for mapping impervious surfaces in leaf-off and leaf-on conditions, respectively, using high-resolution multispectral imagery. It was also found that IS data generated by these techniques can be refined by removing conflicting dry soil patches using the SBI and GI obtained from the TCT of the same imagery used for IS data generation. The change detection analysis of the IS time series shows that Oxford experienced the major changes in IS from 2001 to 2004 and from 2006 to 2007.
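As an illustration of the NDVI step, a minimal sketch follows; the band inputs and the low-NDVI threshold used as a crude impervious-surface proxy are assumptions for 4-band IKONOS/QuickBird-style imagery, not the thresholds used in the study.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index, guarding against
    divide-by-zero pixels."""
    red = red.astype(float)
    nir = nir.astype(float)
    denom = nir + red
    with np.errstate(invalid="ignore", divide="ignore"):
        out = (nir - red) / denom
    return np.where(denom == 0, 0.0, out)

def impervious_candidates(red, nir, threshold=0.2):
    """Crude proxy: low-NDVI pixels are flagged as candidate impervious
    surfaces (the threshold is an illustrative assumption, not from the
    study; dry bare soil will also be flagged and needs later removal,
    e.g. via the SBI/GI refinement described above)."""
    return ndvi(red, nir) < threshold
```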
"Proximal Sensing" capabilities for snow cover monitoring
NASA Astrophysics Data System (ADS)
Valt, Mauro; Salvatori, Rosamaria; Plini, Paolo; Salzano, Roberto; Giusti, Marco; Montagnoli, Mauro; Sigismondi, Daniele; Cagnati, Anselmo
2013-04-01
The seasonal snow cover represents one of the most important land cover classes for environmental studies in mountain areas, especially considering its variation over time. Snow cover and its extent play a relevant role in studies of atmospheric dynamics and the evolution of climate. It is also important for the analysis and management of water resources and for the management of tourist activities in mountain areas. Recently, webcam images collected at daily or even hourly intervals have been used as tools to observe snow-covered areas; these images, properly processed, can be considered a very important environmental data source. Images captured by digital cameras become a useful tool at the local scale, providing images even when cloud cover makes observation by satellite sensors impossible. When suitably processed, these images can be used for scientific purposes, having a good resolution (at least 800x600 with 16 million colours) and a very good sampling frequency (hourly images taken throughout the whole year). Once stored in databases, these images therefore represent an important source of information for the study of recent climatic changes, for evaluating the available water resources and for analysing the daily surface evolution of the snow cover. The Snow-noSnow software has been specifically designed to automatically detect the extent of snow cover in webcam images with very limited human intervention. The software was tested on images collected in the Alps (ARPAV webcam network) and on the Apennines at a pilot station properly equipped for this project by CNR-IIA. The results obtained with Snow-noSnow are comparable to those achieved by photo-interpretation and can be considered better than those obtained using the image segmentation routines implemented in commercial image processing software. Additionally, Snow-noSnow operates in a semi-automatic way and has a reduced processing time. The analysis of this kind of imagery could represent a useful element to support the interpretation of remote sensing images, especially those provided by high spatial resolution sensors. Keywords: snow cover monitoring, digital images, software, Alps, Apennines.
Efficient Use of Video for 3d Modelling of Cultural Heritage Objects
NASA Astrophysics Data System (ADS)
Alsadik, B.; Gerke, M.; Vosselman, G.
2015-03-01
There is currently rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architecture and monuments. In practice, video imaging is much easier to apply than still image shooting in IBM techniques, because the latter needs thorough planning and proficiency. However, one is faced with mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects: the low resolution of video images, the need to process a large number of short-baseline video images, and blur due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images decreases the processing time and yields a reliable textured 3D model comparable with models produced by still imaging. Two experiments, modelling a building and a monument, are carried out using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find the final predicted accuracy and the model level of detail. Depending on the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
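As a concrete illustration of frame reduction, one simple strategy scores each frame with a Tenengrad (Sobel-gradient) sharpness measure and keeps only sufficiently sharp frames at a fixed stride; the scoring function and stride in the sketch below are illustrative assumptions, not the paper's selection method.

```python
import numpy as np
from scipy.ndimage import sobel

def tenengrad(gray):
    """Tenengrad sharpness: mean squared gradient magnitude.
    Blurred frames (e.g. from camera shake) score low."""
    gx = sobel(gray.astype(float), axis=1)
    gy = sobel(gray.astype(float), axis=0)
    return float(np.mean(gx ** 2 + gy ** 2))

def select_keyframes(frames, stride=10, min_sharpness=None):
    """Keep every `stride`-th frame, skipping frames below a sharpness floor.
    The stride thins out short-baseline redundancy; the floor rejects blur."""
    scores = [tenengrad(f) for f in frames]
    if min_sharpness is None:
        min_sharpness = float(np.median(scores))   # drop the blurrier half by default
    return [i for i in range(0, len(frames), stride) if scores[i] >= min_sharpness]
```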
Quantitative evaluation of phase processing approaches in susceptibility weighted imaging
NASA Astrophysics Data System (ADS)
Li, Ningzhi; Wang, Wen-Tung; Sati, Pascal; Pham, Dzung L.; Butman, John A.
2012-03-01
Susceptibility weighted imaging (SWI) takes advantage of the local variation in susceptibility between different tissues to enable highly detailed visualization of the cerebral venous system and sensitive detection of intracranial hemorrhages. Thus, it has been increasingly used in magnetic resonance imaging studies of traumatic brain injury as well as other intracranial pathologies. In SWI, magnitude information is combined with phase information to enhance the susceptibility induced image contrast. Because of global susceptibility variations across the image, the rate of phase accumulation varies widely across the image resulting in phase wrapping artifacts that interfere with the local assessment of phase variation. Homodyne filtering is a common approach to eliminate this global phase variation. However, filter size requires careful selection in order to preserve image contrast and avoid errors resulting from residual phase wraps. An alternative approach is to apply phase unwrapping prior to high pass filtering. A suitable phase unwrapping algorithm guarantees no residual phase wraps but additional computational steps are required. In this work, we quantitatively evaluate these two phase processing approaches on both simulated and real data using different filters and cutoff frequencies. Our analysis leads to an improved understanding of the relationship between phase wraps, susceptibility effects, and acquisition parameters. Although homodyne filtering approaches are faster and more straightforward, phase unwrapping approaches perform more accurately in a wider variety of acquisition scenarios.
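A minimal sketch of the homodyne high-pass approach follows: the complex image is divided by a low-pass-filtered copy of itself and the residual phase is taken. The boxcar kernel and its size stand in for whatever low-pass filter is chosen and are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def homodyne_highpass_phase(complex_image, kernel=32):
    """High-pass filtered phase via homodyne filtering.

    The complex image is divided by a low-pass (here: boxcar-smoothed)
    version of itself, removing slowly varying background phase and the
    wraps it causes; the angle of the quotient is the local phase."""
    low = (uniform_filter(complex_image.real, kernel)
           + 1j * uniform_filter(complex_image.imag, kernel))
    low = np.where(np.abs(low) == 0, 1, low)      # avoid division by zero
    return np.angle(complex_image / low)
```

The filter size trades off exactly as described above: a larger kernel preserves more local susceptibility contrast but is more likely to leave residual wraps, which is why the alternative of explicit phase unwrapping followed by high-pass filtering can be more robust.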
NASA Technical Reports Server (NTRS)
Ong, Cindy; Mueller, Andreas; Thome, Kurtis; Pierce, Leland E.; Malthus, Timothy
2016-01-01
Calibration is the process of quantitatively defining a system's responses to known, controlled signal inputs, and validation is the process of assessing, by independent means, the quality of the data products derived from those system outputs [1]. Similar to other Earth observation (EO) sensors, the calibration and validation of spaceborne imaging spectroscopy sensors is a fundamental underpinning activity. Calibration and validation determine the quality and integrity of the data provided by spaceborne imaging spectroscopy sensors and have enormous downstream impacts on the accuracy and reliability of products generated from these sensors. At least five imaging spectroscopy satellites are planned to be launched within the next five years, with the two most advanced scheduled to be launched in the next two years [2]. The launch of these sensors requires the establishment of suitable, standardized, and harmonized calibration and validation strategies to ensure that high-quality data are acquired and comparable between these sensor systems. Such activities are extremely important for the community of imaging spectroscopy users. Recognizing the need to focus on this underpinning topic, the Geoscience Spaceborne Imaging Spectroscopy (previously, the International Spaceborne Imaging Spectroscopy) Technical Committee launched a calibration and validation initiative at the 2013 International Geoscience and Remote Sensing Symposium (IGARSS) in Melbourne, Australia, and a post-conference activity of a vicarious calibration field trip at Lake Lefroy in Western Australia.
Bio-inspired color image enhancement
NASA Astrophysics Data System (ADS)
Meylan, Laurence; Susstrunk, Sabine
2004-06-01
Capturing and rendering an image that fulfills the observer's expectations is a difficult task. This is because the signal reaching the eye is processed by a complex mechanism before forming a percept, whereas a capturing device only retains the physical value of light intensities. It is especially difficult to render complex scenes with highly varying luminances. For example, a picture taken inside a room where objects are visible through the windows will not be rendered correctly by a global technique: either details in the dim room will be hidden in shadow, or the objects viewed through the window will be too bright. The image has to be treated locally to resemble more closely what the observer remembers. The purpose of this work is to develop a technique for rendering images based on human local adaptation. We take inspiration from a model of color vision called Retinex, which determines the perceived color given the spatial relationships of the captured signals. Retinex has been used as a computational model for image rendering. In this article, we propose a new solution inspired by Retinex that is based on a single filter applied to the luminance channel. All parameters are image-dependent, so the process requires no parameter tuning, which makes the method more flexible than other existing ones. The presented results show that our method suitably enhances high dynamic range images.
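A minimal single-scale, Retinex-style sketch on the luminance channel is shown below (log of the image minus log of a Gaussian-blurred surround); the surround scale derived from the image size is an illustrative stand-in for the authors' image-dependent parameters, not their actual rule.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_luminance(luminance, sigma=None):
    """Single-scale Retinex-style enhancement of a luminance channel.

    Output = log(L) - log(Gaussian surround of L), rescaled to [0, 1].
    Dim regions are lifted relative to their local surround, mimicking
    local adaptation."""
    L = luminance.astype(float) + 1.0              # avoid log(0)
    if sigma is None:
        sigma = 0.05 * max(L.shape)                # image-dependent surround size
    surround = gaussian_filter(L, sigma)
    out = np.log(L) - np.log(surround)
    return (out - out.min()) / (out.max() - out.min() + 1e-12)
```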
NASA Astrophysics Data System (ADS)
Nolte, David D.
2016-03-01
Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy that displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, exo- and endo-cytosis among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict clinical outcome of living biopsies to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Several different biodynamic imaging modalities include motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described that include process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.
Image quality assessment metric for frame accumulated image
NASA Astrophysics Data System (ADS)
Yu, Jianping; Li, Gang; Wang, Shaohui; Lin, Ling
2018-01-01
Medical image quality determines the accuracy of diagnosis, and gray-scale resolution is an important parameter for measuring image quality. However, current objective metrics are not well suited to assessing medical images obtained by frame accumulation technology: they pay little attention to gray-scale resolution, are based mainly on spatial resolution, and are limited to the 256-level gray scale of existing display devices. Thus, this paper proposes a metric, the "mean signal-to-noise ratio" (MSNR), based on signal-to-noise considerations, in order to evaluate frame-accumulated medical image quality more reasonably. We demonstrate its potential application through a series of images acquired under a constant illumination signal. Here, the mean image of a sufficiently large number of images was regarded as the reference image. Several groups of images with different numbers of accumulated frames were formed and their MSNR values were calculated. The results of the experiment show that, compared with other quality assessment methods, the metric is simpler, more effective, and more suitable for assessing frame-accumulated images that surpass the gray scale and precision of the original image.
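The abstract defines MSNR only loosely, so the sketch below is one plausible reading under stated assumptions: frames are acquired under constant illumination, the per-pixel mean over a long series serves as the reference, and MSNR is taken as the mean reference signal over the RMS deviation of the accumulated image, in dB. The actual formula used by the authors may differ.

```python
import numpy as np

def msnr(accumulated, reference):
    """One plausible reading of a mean signal-to-noise ratio:
    mean reference signal divided by the RMS deviation of the
    frame-accumulated image from the reference, expressed in dB."""
    accumulated = np.asarray(accumulated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rms_error = np.sqrt(np.mean((accumulated - reference) ** 2)) + 1e-12
    return 20.0 * np.log10(reference.mean() / rms_error)

# Usage sketch with a stack `frames` of shape (n, H, W) under constant light:
# reference = frames.mean(axis=0)              # mean of many frames
# accumulated_8 = frames[:8].mean(axis=0)      # accumulation of 8 frames
# print(msnr(accumulated_8, reference))        # expected to rise with more frames
```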
Content Based Image Retrieval by Using Color Descriptor and Discrete Wavelet Transform.
Ashraf, Rehan; Ahmed, Mudassar; Jabbar, Sohail; Khalid, Shehzad; Ahmad, Awais; Din, Sadia; Jeon, Gwangil
2018-01-25
Due to recent developments in technology, the complexity of multimedia has increased significantly and the retrieval of similar multimedia content is an open research problem. Content-Based Image Retrieval (CBIR) is a process that provides a framework for image search, and low-level visual features are commonly used to retrieve images from an image database. The basic requirement in any image retrieval process is to sort the images by close similarity in terms of visual appearance. Color, shape and texture are examples of low-level image features. Features play a significant role in image processing. The powerful representation of an image is known as the feature vector, and feature extraction techniques are applied to obtain features that will be useful in classifying and recognizing images. As features define the behavior of an image, they determine its storage requirements, its efficiency in classification and, of course, the time consumed. In this paper, we discuss various types of features and feature extraction techniques, and explain in which scenarios each feature extraction technique is preferable. The effectiveness of the CBIR approach is fundamentally based on feature extraction. In image processing tasks such as object recognition and image retrieval, the feature descriptor is one of the most essential steps. The main idea of CBIR is to search a dataset for images related to a query image using distance metrics. The proposed method for image retrieval is built on the YCbCr color space with a Canny edge histogram and the discrete wavelet transform. The combination of the edge histogram and the discrete wavelet transform increases the performance of the image retrieval framework for content-based search. The performance of different wavelets is also compared to determine the suitability of specific wavelet functions for image retrieval. The proposed algorithm is trained and tested on the Wang image database. For image retrieval, an Artificial Neural Network (ANN) is used and applied to a standard dataset in the CBIR domain. The performance of the proposed descriptors is assessed by computing both precision and recall values and comparing them with other proposed methods, demonstrating the superiority of our method. The efficiency and effectiveness of the proposed approach outperform existing research in terms of average precision and recall values.
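To make the descriptor construction concrete, the sketch below builds a feature vector from YCbCr-type color statistics, a Canny edge-density grid and Haar DWT sub-band energies; the library calls (OpenCV, PyWavelets) are standard, but the exact descriptor layout is an assumption rather than the paper's.

```python
import cv2
import numpy as np
import pywt

def cbir_features(bgr_image):
    """Illustrative feature vector: color statistics in OpenCV's YCrCb
    space (YCbCr with Cr/Cb swapped), a 4x4 Canny edge-density grid,
    and Haar DWT sub-band energies of the gray image."""
    ycc = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(float)
    color_stats = np.concatenate([ycc.mean(axis=(0, 1)), ycc.std(axis=(0, 1))])

    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    h, w = edges.shape
    # Fraction of edge pixels in each block of a 4x4 grid ("edge histogram")
    edge_hist = [edges[i*h//4:(i+1)*h//4, j*w//4:(j+1)*w//4].mean() / 255.0
                 for i in range(4) for j in range(4)]

    # Single-level Haar decomposition; mean absolute value of each sub-band
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), "haar")
    dwt_energy = [float(np.mean(np.abs(c))) for c in (cA, cH, cV, cD)]

    return np.concatenate([color_stats, edge_hist, dwt_energy])
```

Vectors of this form can then be compared with a distance metric for ranking, or fed to a classifier such as an ANN, as the paper describes.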
Different source image fusion based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Xiao; Piao, Yan
2016-03-01
Video image fusion technology combines the video streams obtained by different image sensors so that they complement each other, producing video that is rich in information and suited to the human visual system. Infrared cameras have good penetrating power in harsh environments such as smoke, fog and low light, but their ability to capture image detail is poor and does not match the human visual system. Visible-light imaging alone can provide detailed, high-resolution images that suit the visual system, but visible images are easily affected by the external environment. The fusion of infrared and visible video involves algorithms of high complexity and heavy computation, which occupy considerable memory resources and demand high clock rates; implementations in software (e.g., C or C++) are common, while hardware-platform implementations are less so. In this paper, based on the imaging characteristics of infrared and visible-light images, software and hardware are combined: the registration parameters are obtained in MATLAB, and the gray-level weighted average method is implemented on the hardware platform for information fusion. The resulting fused image effectively improves information acquisition by increasing the amount of information in the image.
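A minimal software sketch of the gray-level weighted-average fusion rule follows, assuming the infrared and visible frames are already registered and of equal size; the fixed weight is an illustrative parameter, and the paper's actual implementation is on an FPGA.

```python
import numpy as np

def weighted_average_fusion(visible_gray, infrared_gray, w_visible=0.6):
    """Pixel-wise weighted average of registered gray-level frames.

    A higher visible weight preserves scene detail; the infrared term
    contributes targets that are invisible in poor lighting or smoke."""
    vis = visible_gray.astype(float)
    ir = infrared_gray.astype(float)
    fused = w_visible * vis + (1.0 - w_visible) * ir
    return np.clip(fused, 0, 255).astype(np.uint8)
```

The per-pixel multiply-accumulate structure of this rule is what makes it attractive for a hardware pipeline: each output pixel depends only on the two corresponding input pixels and a constant weight.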
Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation
NASA Astrophysics Data System (ADS)
Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin
2018-04-01
Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
Radarsat Antarctic Mapping Project: Antarctic Imaging Campaign 2
NASA Technical Reports Server (NTRS)
2001-01-01
The Radarsat Antarctic Mapping Project is a collaboration between NASA and the Canadian Space Agency to map Antarctica using synthetic aperture radar (SAR). The first Antarctic Mapping Mission (AMM-1) was successfully completed in October 1997. Data from the acquisition phase of the 1997 campaign have been used to achieve the primary goal of producing the first high-resolution SAR image map of Antarctica. The limited amount of data suitable for interferometric analysis has also been used to produce remarkably detailed maps of surface velocity for a few selected regions. Most importantly, the results from AMM-1 are now available to the general science community in the form of various-resolution, radiometrically calibrated and geometrically accurate image mosaics. The second Antarctic imaging campaign occurred during the fall of 2000. Modified from AMM-1, the satellite remained in north-looking mode during AMM-2, restricting coverage to regions north of about -80 degrees latitude. However, AMM-2 utilized RADARSAT-1 fine beams for the first time, providing an unprecedented opportunity to image many of Antarctica's fast glaciers whose extent was revealed through AMM-1 data. AMM-2 also captured extensive data suitable for interferometric analysis of the surface velocity field. This report summarizes the science goals, mission objectives, and project status through the acquisition phase and the start of the processing phase. The report describes the efforts of team members including the Alaska SAR Facility, Jet Propulsion Laboratory, Vexcel Corporation, Goddard Space Flight Center, Wallops Flight Facility, Ohio State University, Environmental Research Institute of Michigan, White Sands Facility, Canadian Space Agency Mission Planning and Operations Groups, and the Antarctic Mapping Planning Group.
Status and Perspectives of Neutron Imaging Facilities
NASA Astrophysics Data System (ADS)
Lehmann, E.; Trtik, P.; Ridikas, D.
The methodology and the application range of neutron imaging techniques have been significantly improved at numerous facilities worldwide over the last decades. This progress has been achieved by new detector systems, the setup of dedicated, optimized and flexible beam lines, and a much better understanding of the complete imaging process thanks to complementary simulations. Furthermore, new applications and research topics have been found and implemented. However, since the quality and the number of neutron imaging facilities depend greatly on access to suitable beam ports, there is still enormous potential to implement state-of-the-art neutron imaging techniques at many more facilities. On the one hand, there are prominent and powerful sources at which the implementation of neutron imaging techniques is not intended or accepted because priority is given exclusively to neutron scattering and irradiation techniques. On the other hand, there are modern and useful devices which remain under-utilized and have neither the capacity nor the know-how to develop attractive user programs and/or industrial partnerships. In this overview of the international status of neutron imaging facilities, we specify details about the current situation.
X-ray phase-contrast imaging: the quantum perspective
NASA Astrophysics Data System (ADS)
Slowik, J. M.; Santra, R.
2013-08-01
Time-resolved phase-contrast imaging using ultrafast x-ray sources is an emerging method to investigate ultrafast dynamical processes in matter. Schemes to generate attosecond x-ray pulses have been proposed, bringing electronic timescales into reach and emphasizing the demand for a quantum description. In this paper, we present a method to describe propagation-based x-ray phase-contrast imaging in nonrelativistic quantum electrodynamics. We explain why the standard scattering treatment via Fermi's golden rule cannot be applied. Instead, the quantum electrodynamical treatment of phase-contrast imaging must be based on a different approach. It turns out that it is essential to select a suitable observable. Here, we choose the quantum-mechanical Poynting operator. We determine the expectation value of our observable and demonstrate that the leading order term describes phase-contrast imaging. It recovers the classical expression of phase-contrast imaging. Thus, it makes the instantaneous electron density of non-stationary electronic states accessible to time-resolved imaging. Interestingly, inelastic (Compton) scattering automatically does not contribute in leading order, explaining the success of the semiclassical description.
A fast fusion scheme for infrared and visible light images in NSCT domain
NASA Astrophysics Data System (ADS)
Zhao, Chunhui; Guo, Yunting; Wang, Yulei
2015-09-01
Fusion of infrared and visible light images is an effective way to obtain a simultaneous visualization of the background detail provided by the visible light image and the hidden target information provided by the infrared image, which is more suitable for browsing and further processing. Two crucial components of infrared and visible light image fusion are improving its fusion performance and reducing its computational burden. In this paper, a novel fusion algorithm named pixel information estimation is proposed, which determines the weights by evaluating the information of each pixel and is well suited to visible light and infrared image fusion, with better fusion quality and lower time consumption. Besides, a fast realization of the non-subsampled contourlet transform is also proposed in this paper to improve computational efficiency. To verify the advantage of the proposed method, this paper compares it with several popular ones on six evaluation metrics over four different image groups. Experimental results show that the proposed algorithm achieves a more effective result in much less time and performs well in both subjective evaluation and objective indicators.
Wire Detection Algorithms for Navigation
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia I.
2002-01-01
In this research we addressed the problem of obstacle detection for low-altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. Two approaches were explored for this purpose. The first approach involved a technique for sub-pixel edge detection and subsequent post-processing to reduce false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential for the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer-generated wire images. The performance of the algorithm was evaluated both at the pixel and at the wire level. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme, namely Support Vector Machines (SVMs). The purpose of this approach was to explore the feasibility of an example-based learning approach for the task of detecting wires from their images. SVMs have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and, of course, not suitable at all for sub-pixel-thick wires. High dimensionality of the data as such does not present a major problem for SVMs; however, it is desirable to have a large number of training examples, especially for high-dimensional data. The main difficulty in using SVMs (or any other example-based learning method) is the need for a very good set of positive and negative examples, since the performance depends on the quality of the training set.
NASA Astrophysics Data System (ADS)
Nagai, Yuichi; Kitagawa, Mayumi; Torii, Jun; Iwase, Takumi; Aso, Tomohiko; Ihara, Kanyu; Fujikawa, Mari; Takeuchi, Yumiko; Suzuki, Katsumi; Ishiguro, Takashi; Hara, Akio
2014-03-01
Recently, the double-contrast technique in gastrointestinal examinations and the transbronchial lung biopsy in examinations of the respiratory system [1-3] have made remarkable progress. Especially in the transbronchial lung biopsy, better quality x-ray fluoroscopic images are requested because this examination is performed under the guidance of x-ray fluoroscopic images. Meanwhile, various image processing methods [4] for x-ray fluoroscopic images have been developed as x-ray systems with flat panel detectors [5-7] have become widely used. Recursive filtering is an effective method for reducing random noise in x-ray fluoroscopic images. However, its effectiveness is limited when a moving object is present in the images, because recursive filtering reduces noise by adding the last few images. After recursive filtering, a residual signal is produced if a moving object exists in the x-ray images, and this residual signal disturbs the smooth conduct of the examination. To improve this situation, a new noise reduction method has been developed. Adaptive Noise Reduction (ANR) is a new noise reduction technique that reduces only the noise, regardless of moving objects in the x-ray fluoroscopic images. ANR is therefore a very suitable noise reduction method for the transbronchial lung biopsy under the guidance of x-ray fluoroscopic images, because the residual signal caused by moving objects is never produced after ANR. In this paper, we explain the advantage of ANR by comparing the performance of ANR images with that of conventional recursive-filtering images.
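The sketch below contrasts conventional recursive filtering with a crude motion-adaptive blend that trusts the new frame more where frame differences are large; the adaptive rule is an illustrative assumption and is not the ANR algorithm itself.

```python
import numpy as np

def recursive_filter(prev_out, new_frame, alpha=0.25):
    """Conventional recursive temporal filter: blend the new frame with
    the previous output. A small alpha gives strong noise reduction, but
    moving objects leave a residual lag signal."""
    return alpha * new_frame + (1.0 - alpha) * prev_out

def motion_adaptive_filter(prev_out, new_frame, alpha=0.25, motion_thresh=20.0):
    """Crude motion-adaptive blend (illustrative only, not the ANR):
    where the frame difference is large, trust the new frame more to
    avoid residual (lag) artifacts behind moving objects such as the
    biopsy forceps."""
    new = new_frame.astype(float)
    prev = prev_out.astype(float)
    diff = np.abs(new - prev)
    local_alpha = np.where(diff > motion_thresh, 0.9, alpha)
    return local_alpha * new + (1.0 - local_alpha) * prev
```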
Implanted Silicon Resistor Layers for Efficient Terahertz Absorption
NASA Technical Reports Server (NTRS)
Chervenak, J. A.; Abrahams, J.; Allen, C. A.; Benford, D. J.; Henry, R.; Stevenson, T.; Wollack, E.; Moseley, S. H.
2005-01-01
Broadband absorption structures are an essential component of large-format bolometer arrays for imaging GHz and THz radiation. We have measured the electrical and optical properties of implanted silicon resistor layers designed to be suitable for these absorbers. Implanted resistors offer a low-film-stress, buried absorber that is robust to long-term aging, temperature, and subsequent metals processing. Such an absorber layer is readily integrated with superconducting integrated circuits and standard micromachining, as demonstrated by the SCUBA II array built by ROE/NIST (1). We present a complete characterization of these layers, demonstrating the frequency regimes in which different recipes will be suitable for absorbers. Single-layer thin-film coatings have been demonstrated as effective absorbers at certain wavelengths, including semimetal (2,3), thin metal (4), and patterned metal films (5,6). Astronomical instrument examples include the SHARC II instrument, which images the submillimeter band using passivated Bi semimetal films, and the HAWC instrument for SOFIA, which employs ultrathin metal films to span 1-3 THz. Patterned metal films on spiderweb bolometers have also been proposed for broadband detection. In each case, the absorber structure matches the impedance of free space for optimal absorption in the detector configuration (typically 157 Ohms per square for high absorption with a single layer, or 377 Ohms per square in a resonant cavity or quarter-wave backshort). Resonant structures with ~20% bandwidth coupled to bolometers are also under development; stacks of such structures may benefit instruments imaging over a wide band. Each technique may enable effective absorbers in imagers. However, thin films tend to age, degrade or change during further processing, can be difficult to reproduce, and often exhibit an intrinsic granularity that creates complicated frequency dependence at THz frequencies. Thick metal films are more robust, but the requirement for patterning can limit their absorption at THz frequencies and their heat capacity can be high. Implanted silicon resistors provide patterned absorber structures that offer low heat capacity, absence of aging, and uniform, predictable behavior at THz frequencies. We have correlated DC electrical and THz optical measurements of a series of implanted layers and studied the frequency dependence of their optical absorption from 0.3 to 10 THz at cryogenic temperatures. We have modeled the optical response to determine the suitability of the implanted silicon resistor as a function of resistance in the range 10 Ohms/sq to 300 Ohms/sq.
Specimen preparation for high-resolution cryo-EM
Passmore, Lori A.; Russo, Christopher J.
2016-01-01
Imaging a material with electrons at near-atomic resolution requires a thin specimen that is stable in the vacuum of the transmission electron microscope. For biological samples, this comprises a thin layer of frozen aqueous solution containing the biomolecular complex of interest. The process of preparing a high-quality specimen is often the limiting step in the determination of structures by single-particle electron cryomicroscopy (cryo-EM). Here we describe a systematic approach for going from a purified biomolecular complex in aqueous solution to high-resolution electron micrographs that are suitable for 3D structure determination. This includes a series of protocols for the preparation of vitrified specimens on various specimen supports, including all-gold and graphene. We also describe techniques for troubleshooting when a preparation fails to yield suitable specimens, and common mistakes to avoid during each part of the process. Finally, we include recommendations for obtaining the highest quality micrographs from prepared specimens with current microscope, detector and support technology. PMID:27572723
2013-01-15
S48-E-007 (12 Sept 1991) --- Astronaut James F. Buchli, mission specialist, catches snack crackers as they float in the weightless environment of the earth-orbiting Discovery. This image was transmitted by the Electronic Still Camera, Development Test Objective (DTO) 648. The ESC is making its initial appearance on a Space Shuttle flight. Electronic still photography is a new technology that enables a camera to electronically capture and digitize an image with resolution approaching film quality. The digital image is stored on removable hard disks or small optical disks, and can be converted to a format suitable for downlink transmission or enhanced using image processing software. The Electronic Still Camera (ESC) was developed by the Man- Systems Division at the Johnson Space Center and is the first model in a planned evolutionary development leading to a family of high-resolution digital imaging devices. H. Don Yeates, JSC's Man-Systems Division, is program manager for the ESC. THIS IS A SECOND GENERATION PRINT MADE FROM AN ELECTRONICALLY PRODUCED NEGATIVE
NASA Astrophysics Data System (ADS)
Lal, Cerine; McGrath, James; Subhash, Hrebesh; Rani, Sweta; Ritter, Thomas; Leahy, Martin
2016-03-01
Optical Coherence Tomography (OCT) is a non-invasive three-dimensional optical imaging modality that enables high-resolution cross-sectional imaging of biological tissues and materials. Its high axial and lateral resolution combined with high sensitivity, imaging depth and wide field of view makes it suitable for a wide variety of high-resolution medical imaging applications at clinically relevant speed. With the advent of swept-source lasers, the imaging speed of OCT has increased considerably in recent years. OCT has been used in ophthalmology to study dynamic changes occurring in the cornea and iris, thereby revealing physiological and pathological changes that occur within anterior segment structures, such as in glaucoma, during refractive surgery, lamellar keratoplasty and corneal diseases. In this study, we assess the changes in corneal thickness in the anterior segment of the eye during the wound healing process in a rat corneal burn model following stem cell therapy, using high-speed swept-source OCT.
Electronic Still Camera Project on STS-48
NASA Technical Reports Server (NTRS)
1991-01-01
On behalf of NASA, the Office of Commercial Programs (OCP) has signed a Technical Exchange Agreement (TEA) with Autometric, Inc. (Autometric) of Alexandria, Virginia. The purpose of this agreement is to evaluate and analyze a high-resolution Electronic Still Camera (ESC) for potential commercial applications. During the mission, Autometric will provide unique photo analysis and hard-copy production. Once the mission is complete, Autometric will furnish NASA with an analysis of the ESC's capabilities. Electronic still photography is a developing technology providing the means by which a hand-held camera electronically captures and produces a digital image with resolution approaching film quality. The digital image, stored on removable hard disks or small optical disks, can be converted to a format suitable for downlink transmission, or it can be enhanced using image processing software. The on-orbit ability to enhance or annotate high-resolution images and then downlink these images in real time will greatly improve Space Shuttle and Space Station capabilities in Earth observations and on-board photo documentation.
Development of a Germanium Small-Animal SPECT System
NASA Astrophysics Data System (ADS)
Johnson, Lindsay C.; Ovchinnikov, Oleg; Shokouhi, Sepideh; Peterson, Todd E.
2015-10-01
Advances in fabrication techniques, electronics, and mechanical cooling systems have given rise to germanium detectors suitable for biomedical imaging. We are developing a small-animal SPECT system that uses a double-sided Ge strip detector. The detector's excellent energy resolution may help to reduce scatter and simplify processing of multi-isotope imaging, while its ability to measure depth of interaction has the potential to mitigate parallax error in pinhole imaging. The detector's energy resolution is <1% FWHM at 140 keV and its spatial resolution is approximately 1.5 mm FWHM. The prototype system described has a single-pinhole collimator with a 1-mm diameter and a 70-degree opening angle, with a focal length variable between 4.5 and 9 cm. Phantom images from the gantry-mounted system are presented, including the NEMA NU-2008 phantom and a hot-rod phantom. Additionally, the benefit of energy resolution is demonstrated by imaging a dual-isotope phantom with 99mTc and 123I without cross-talk correction.
VIEWDEX: an efficient and easy-to-use software for observer performance studies.
Håkansson, Markus; Svensson, Sune; Zachrisson, Sara; Svalkvist, Angelica; Båth, Magnus; Månsson, Lars Gunnar
2010-01-01
The development of investigation techniques, image processing, workstation monitors, analysing tools etc. within the field of radiology is vast, and the need for efficient tools in the evaluation and optimisation process of image and investigation quality is important. ViewDEX (Viewer for Digital Evaluation of X-ray images) is an image viewer and task manager suitable for research and optimisation tasks in medical imaging. ViewDEX is DICOM compatible and the features of the interface (tasks, image handling and functionality) are general and flexible. The configuration of a study and its output (for example, answers given) can be edited in any text editor. ViewDEX is developed in Java and can run from any disc area connected to a computer. It is free to use for non-commercial purposes and can be downloaded from http://www.vgregion.se/sas/viewdex. In the present work, an evaluation of the efficiency of ViewDEX for receiver operating characteristic (ROC) studies, free-response ROC (FROC) studies and visual grading (VG) studies was conducted. For VG studies, the total scoring rate was dependent on the number of criteria per case. A scoring rate of approximately 150 cases per hour can be expected for a typical VG study using single images and five anatomical criteria. For ROC and FROC studies using clinical images, the scoring rate was approximately 100 cases per hour using single images and approximately 25 cases per hour using image stacks (approximately 50 images per case). In conclusion, ViewDEX is an efficient and easy-to-use software for observer performance studies.
Hu, Jian Zhi [Richland, WA]; Sears, Jesse A., Jr.; Hoyt, David W. [Richland, WA]; Wind, Robert A. [Kennewick, WA]
2009-05-19
Described are a "Discrete Magic Angle Turning" (DMAT) system, devices, and processes that combine advantages of both magic angle turning (MAT) and magic angle hopping (MAH) suitable, e.g., for in situ magnetic resonance spectroscopy and/or imaging. In an exemplary system, device, and process, samples are rotated in a clockwise direction followed by an anticlockwise direction of exactly the same amount. Rotation proceeds through an angle that is typically greater than about 240 degrees but less than or equal to about 360 degrees at constant speed for a time applicable to the evolution dimension. Back and forth rotation can be synchronized and repeated with a special radio frequency (RF) pulse sequence to produce an isotropic-anisotropic shift 2D correlation spectrum. The design permits tubes to be inserted into the sample container without introducing plumbing interferences, further allowing control over such conditions as temperature, pressure, flow conditions, and feed compositions, thus permitting true in-situ investigations to be carried out.
Fish swarm intelligent to optimize real time monitoring of chips drying using machine vision
NASA Astrophysics Data System (ADS)
Hendrawan, Y.; Hawa, L. C.; Damayanti, R.
2018-03-01
This study attempted to apply a machine vision-based chip drying monitoring system able to optimise the drying process of cassava chips. The objective of this study is to propose a fish swarm intelligence (FSI) optimization algorithm to find the most significant set of image features suitable for predicting the water content of cassava chips during the drying process using an artificial neural network (ANN) model. Feature selection entails choosing the feature subset that maximizes the prediction accuracy of the ANN. Multi-objective optimization (MOO) was used in this study, consisting of prediction-accuracy maximization and feature-subset size minimization. The results showed that the best feature subset consisted of grey mean, L(Lab) mean, a(Lab) energy, red entropy, hue contrast, and grey homogeneity. This feature subset was tested successfully in the ANN model to describe the relationship between image features and the water content of cassava chips during drying, with an R2 between measured and predicted data of 0.9.
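The abstract describes a wrapper-style multi-objective fitness: ANN prediction accuracy to be maximized and feature-subset size to be minimized. A minimal sketch of such a fitness function is given below; the weights, ANN settings and the surrounding fish-swarm search loop are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a wrapper-style multi-objective fitness for feature selection.
# The size_weight, ANN settings and the surrounding fish-swarm search loop are
# illustrative assumptions, not taken from the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y, size_weight=0.1):
    """Score a binary feature mask: maximize cross-validated R^2 of the ANN,
    penalize the number of selected features."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return -np.inf
    ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    r2 = cross_val_score(ann, X[:, idx], y, cv=5, scoring="r2").mean()
    return r2 - size_weight * idx.size / X.shape[1]

# Example: X is an (n_samples, n_features) image-feature matrix, y the measured
# water content, and mask a candidate 0/1 feature-selection vector.
# print(fitness(mask, X, y))
```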
Evaluating Mobile Graphics Processing Units (GPUs) for Real-Time Resource Constrained Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meredith, J; Conger, J; Liu, Y
2005-11-11
Modern graphics processing units (GPUs) can provide tremendous performance boosts for some applications beyond what a single CPU can accomplish, and their performance is growing at a rate faster than CPUs as well. Mobile GPUs available for laptops have the small form factor and low power requirements suitable for use in embedded processing. We evaluated several desktop and mobile GPUs and CPUs on traditional and non-traditional graphics tasks, as well as on the most time consuming pieces of a full hyperspectral imaging application. Accuracy remained high despite small differences in arithmetic operations like rounding. Performance improvements are summarized here relative to a desktop Pentium 4 CPU.
Some new classification methods for hyperspectral remote sensing
NASA Astrophysics Data System (ADS)
Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia
2006-10-01
Hyperspectral remote sensing (HRS) is one of the most significant recent achievements of Earth observation technology, and classification is the most commonly employed processing methodology. In this paper, three new HRS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by back-propagation neural network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such methods, image segmentation first extracts regions from the pixel information based on homogeneity criteria; spectral parameters such as mean vector, texture and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness and orientation, are then calculated for each region; finally, the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). This shows that object-oriented methods can improve classification accuracy, since they utilize information and features from both the pixel and its neighborhood, and the processing unit is a polygon (in which all pixels are homogeneous and belong to the same class). HRS image classification based on information fusion initially divides all bands of the image into different groups and extracts features from every group according to its properties. Three levels of information fusion (data-level, feature-level and decision-level fusion) are applied to HRS image classification. An ANN can perform well in RS image classification; to advance the use of ANNs for HRS image classification, the BPNN, the most commonly used neural network, is applied to HRS image classification.
Custom large scale integrated circuits for spaceborne SAR processors
NASA Technical Reports Server (NTRS)
Tyree, V. C.
1978-01-01
The application of modern LSI technology to the development of a time-domain azimuth correlator for SAR processing is discussed. General design requirements for azimuth correlators for missions such as SEASAT-A, Venus orbital imaging radar (VOIR), and shuttle imaging radar (SIR) are summarized. Several azimuth correlator architectures that are suitable for implementation using custom LSI devices are described. Technical factors pertaining to selection of appropriate LSI technologies are discussed, and the maturity of alternative technologies for spacecraft applications is reported in the context of expected space mission launch dates. The preliminary design of a custom LSI time-domain azimuth correlator device (ACD) being developed for use in future SAR processors is detailed.
NASA Astrophysics Data System (ADS)
Sumriddetchkajorn, Sarun; Chaitavon, Kosom
2009-07-01
This paper introduces a parallel measurement approach for fast infrared-based human temperature screening suitable for use in a large public area. Our key idea is based on the combination of simple image processing algorithms, infrared technology, and human flow management. With this multidisciplinary concept, we arrange as many people as possible in a two-dimensional space in front of a thermal imaging camera and then highlight all human facial areas through simple image filtering, image morphology, and particle analysis processes. In this way, each individual's face in the live thermal image can be located and the maximum facial skin temperature can be monitored and displayed. Our experiment shows a measured 1 ms processing time for highlighting all human face areas. With a thermal imaging camera having an FOV lens of 24° × 18° and 320 × 240 active pixels, the maximum facial skin temperatures of three people's faces located 1.3 m from the camera can be simultaneously monitored and displayed at a measured rate of 31 fps, limited by the looping process that determines the coordinates of all faces. For our 3-day test under ambient temperatures of 24-30 °C, 57-72% relative humidity, and weak wind from outside the hospital building, hyperthermic patients could be identified with 100% sensitivity and 36.4% specificity when the temperature threshold level and the offset temperature value were appropriately chosen. Locating our system away from building doors, air conditioners and electric fans, in order to eliminate wind blowing toward the camera lens, can significantly improve the system's specificity.
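The filtering, morphology and particle-analysis chain described above can be sketched on a single radiometric thermal frame as follows; the skin-temperature threshold, structuring-element sizes and minimum region area are illustrative assumptions rather than the authors' calibrated values.

```python
# Sketch of the filter -> morphology -> particle-analysis chain on one
# radiometric thermal frame (values in deg C). Threshold and structuring-element
# sizes are illustrative assumptions.
import numpy as np
from scipy import ndimage

def facial_maxima(frame_degC, skin_thresh=33.0, min_area=50):
    smoothed = ndimage.median_filter(frame_degC, size=3)        # simple image filtering
    mask = smoothed > skin_thresh                               # candidate skin pixels
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))
    labels, n = ndimage.label(mask)                             # particle analysis
    results = []
    for i in range(1, n + 1):
        region = labels == i
        if region.sum() < min_area:
            continue
        tmax = frame_degC[region].max()                         # max facial skin temperature
        cy, cx = ndimage.center_of_mass(region)
        results.append(((int(cx), int(cy)), float(tmax)))
    return results
```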
Advances in dual-tone development for pitch frequency doubling
NASA Astrophysics Data System (ADS)
Fonseca, Carlos; Somervell, Mark; Scheer, Steven; Kuwahara, Yuhei; Nafus, Kathleen; Gronheid, Roel; Tarutani, Shinji; Enomoto, Yuuichiro
2010-04-01
Dual-tone development (DTD) has previously been proposed as a potentially cost-effective double patterning technique [1]; DTD was reported as early as the late 1990s [2]. The basic principle of dual-tone imaging involves processing exposed resist latent images in both positive-tone (aqueous base) and negative-tone (organic solvent) developers. Conceptually, DTD has attractive cost benefits since it enables pitch doubling without the need for multiple etch steps of patterned resist layers. While the concept of the DTD technique is simple to understand, many challenges must be overcome and understood in order to make it a manufacturing solution. Previous work by the authors demonstrated the feasibility of DTD imaging for 50 nm half-pitch features at 0.80 NA (k1 = 0.21) and discussed challenges lying ahead for printing sub-40 nm half-pitch features with DTD. While previous experimental results suggested that clever processing on the wafer track can be used to enable DTD beyond 50 nm half-pitch, they also suggest that identifying suitable resist materials or chemistries is essential for achieving successful imaging results with novel resist processing methods on the wafer track. In this work, we present recent advances in the search for resist materials that work in conjunction with novel resist processing methods on the wafer track to enable DTD. Recent experimental results with new resist chemistries, specifically designed for DTD, are presented. We also present simulation studies that help identify resist properties that could enable DTD imaging, ultimately leading to viable DTD resist materials.
A New Dusts Sensor for Cultural Heritage Applications Based on Image Processing
Proietti, Andrea; Leccese, Fabio; Caciotta, Maurizio; Morresi, Fabio; Santamaria, Ulderico; Malomo, Carmela
2014-01-01
In this paper, we propose a new sensor for the detection and analysis of dusts (seen as powders and fibers) in indoor environments, especially designed for applications in the field of Cultural Heritage or in other contexts where the presence of dust requires special care (surgery, clean rooms, etc.). The presented system relies on image processing techniques (enhancement, noise reduction, segmentation, metrics analysis) and it allows obtaining both qualitative and quantitative information on the accumulation of dust. This information aims to identify the geometric and topological features of the elements of the deposit. The curators can use this information in order to design suitable prevention and maintenance actions for objects and environments. The sensor consists of simple and relatively cheap tools, based on a high-resolution image acquisition system, a preprocessing software to improve the captured image and an analysis algorithm for the feature extraction and the classification of the elements of the dust deposit. We carried out some tests in order to validate the system operation. These tests were performed within the Sistine Chapel in the Vatican Museums, showing the good performance of the proposed sensor in terms of execution time and classification accuracy. PMID:24901977
Handheld ultrasound array imaging device
NASA Astrophysics Data System (ADS)
Hwang, Juin-Jet; Quistgaard, Jens
1999-06-01
A handheld ultrasound imaging device, one that weighs less than five pounds, has been developed for diagnosing trauma on the combat battlefield as well as for a variety of commercial mobile diagnostic applications. This handheld device consists of four component ASICs, each designed using state-of-the-art microelectronics technologies. These ASICs are integrated with a convex array transducer to allow high-quality imaging of soft tissues and blood flow in real time. The device is designed to be battery driven or AC powered, with built-in image storage and cineloop playback capability. Design methodologies for a handheld device are fundamentally different from those for a cart-based system. The system architecture, signal and image processing algorithms, and image control circuitry and software are designed for large-scale integration, and the imaging performance is designed to be adequate for the intended applications. To extend battery life, low-power design rules and power management circuits are incorporated in the design of each component ASIC. The performance of the prototype device is currently being evaluated for various applications, such as a primary image screening tool, fetal imaging in obstetrics, foreign object detection and wound assessment for emergency care, etc.
Time-lapse microscopy and image processing for stem cell research: modeling cell migration
NASA Astrophysics Data System (ADS)
Gustavsson, Tomas; Althoff, Karin; Degerman, Johan; Olsson, Torsten; Thoreson, Ann-Catrin; Thorlin, Thorleif; Eriksson, Peter
2003-05-01
This paper presents hardware and software procedures for automated cell tracking and migration modeling. A time-lapse microscopy system equipped with a computer-controllable motorized stage was developed. The performance of this stage was improved by incorporating software algorithms for stage motion displacement compensation and autofocus. The microscope is suitable for in-vitro stem cell studies and allows for multiple cell culture image sequence acquisition. This enables comparative studies concerning the rate of cell division, average cell motion velocity, cell motion as a function of cell sample density, and more. Several cell segmentation procedures are described, as well as a cell tracking algorithm. Statistical methods for describing cell migration patterns are presented; in particular, the Hidden Markov Model (HMM) was investigated. Results indicate that if the cell motion can be described as a non-stationary stochastic process, then the HMM can adequately model aspects of its dynamic behavior.
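A minimal sketch of fitting an HMM to tracked cell positions is given below, using the hmmlearn package; the choice of three hidden states and of per-frame displacement vectors as the observations is an illustrative assumption, not the authors' model specification.

```python
# Sketch: fit a Gaussian HMM to cell displacement sequences (hmmlearn).
# Three hidden states and displacement vectors as observations are
# illustrative assumptions.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_migration_hmm(tracks, n_states=3):
    """tracks: list of (T_i, 2) arrays of cell centroid positions per frame."""
    obs = [np.diff(t, axis=0) for t in tracks]        # per-frame displacement vectors
    X = np.concatenate(obs)
    lengths = [len(o) for o in obs]
    model = GaussianHMM(n_components=n_states, covariance_type="full", n_iter=100)
    model.fit(X, lengths)
    return model

# Usage: label the motion states of one track
# model = fit_migration_hmm(tracks)
# states = model.predict(np.diff(tracks[0], axis=0))
```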
Bittorf, A.; Diepgen, T. L.
1996-01-01
The World Wide Web (WWW) is becoming the major way of acquiring information in all scientific disciplines as well as in business. It is well suited to the fast distribution and exchange of up-to-date teaching resources. However, to date most teaching applications on the Web do not use its full power by integrating interactive components. We have set up a computer-based training (CBT) framework for Dermatology, which consists of dynamic lecture scripts, case reports, an atlas and a quiz system. All these components rely heavily on an underlying image database that permits the creation of dynamic documents. We used a daemon process that keeps the database open and can be accessed using HTTP, to achieve better performance and avoid the overhead involved in starting CGI processes. The result of our evaluation was very encouraging. PMID:8947625
A detail enhancement and dynamic range adjustment algorithm for high dynamic range images
NASA Astrophysics Data System (ADS)
Xu, Bo; Wang, Huachuang; Liang, Mingtao; Yu, Cong; Hu, Jinlong; Cheng, Hua
2014-08-01
Although high dynamic range (HDR) images contain large amounts of information, they have weak texture and low contrast, and they are difficult to reproduce on low-dynamic-range display media. If more information is to be conveyed when these images are displayed on PCs, specific transforms are needed, such as compressing the dynamic range, enhancing portions with little difference in original contrast, and highlighting texture details while preserving the parts of large contrast. To this end, a multi-scale guided-filter enhancement algorithm, derived from the single-scale guided filter through analysis of a non-physical model, is proposed in this paper. The algorithm first decomposes the original HDR image into a base image and detail images of different scales, and then adaptively selects a transform function that acts on the enhanced detail images and the original image. Comparing the results for HDR images and low dynamic range (LDR) images of different scene features shows that this algorithm, while maintaining the hierarchy and texture details of the images, not only improves contrast and enhances details but also adjusts the dynamic range well. It is therefore well suited for human observation or further machine analysis.
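A minimal base/detail decomposition of this kind can be sketched with the standard guided filter and per-scale detail gains; the radii, regularization and gains below are illustrative, and the paper's adaptive transform function is not reproduced.

```python
# Minimal guided-filter base/detail decomposition with detail boosting.
# Radii, eps and gains are illustrative; the paper's adaptive transform
# function is not reproduced here.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    w = 2 * r + 1
    mean_I, mean_p = uniform_filter(I, w), uniform_filter(p, w)
    var_I = uniform_filter(I * I, w) - mean_I ** 2
    cov_Ip = uniform_filter(I * p, w) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, w) * I + uniform_filter(b, w)

def enhance(hdr_lum, radii=(4, 16), eps=1e-3, gains=(2.0, 1.5)):
    """Decompose luminance into a base layer plus per-scale details, boost the details."""
    base, details = hdr_lum.astype(np.float64), []
    for r in radii:
        smooth = guided_filter(base, base, r, eps)
        details.append(base - smooth)
        base = smooth
    out = base                        # coarse base layer (could be further tone-mapped)
    for d, g in zip(details, gains):
        out += g * d                  # boosted detail layers
    return out
```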
A real-time multi-scale 2D Gaussian filter based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin
2014-11-01
The multi-scale 2-D Gaussian filter has been widely used in feature extraction (e.g. SIFT, edge detection), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, its computational complexity remains an issue for real-time image processing systems. To address this problem, we propose an FPGA-based framework for the multi-scale 2-D Gaussian filter in this paper. Firstly, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Secondly, in order to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Thirdly, a dedicated first-in first-out memory, named CAFIFO (Column Addressing FIFO), was designed to avoid error propagation induced by glitches on the clock. Finally, a shared-memory framework was designed to reduce memory costs. As a demonstration, we realized a 3-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute multi-scale 2-D Gaussian filtering within one pixel clock period and is therefore suitable for real-time image processing. Moreover, the main principle can be extended to other convolution-based operators, such as the Gabor filter, the Sobel operator and so on.
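The multiplier-saving separation mentioned above, a horizontal 1-D pass followed by a vertical 1-D pass, can be illustrated in software; this is only a functional sketch of the separable filtering, not the HDL design.

```python
# Software sketch of the separable 2-D Gaussian exploited by the FPGA design:
# one horizontal 1-D convolution followed by one vertical 1-D convolution.
import numpy as np
from scipy.ndimage import convolve1d

def gaussian_1d(sigma):
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

def gaussian_2d_separable(img, sigma):
    k = gaussian_1d(sigma)
    tmp = convolve1d(img.astype(np.float64), k, axis=1)   # row (horizontal) pass
    return convolve1d(tmp, k, axis=0)                     # column (vertical) pass

# Multi-scale output: one filtered image per sigma, as in the 3-scale FPGA instance
# pyramid = [gaussian_2d_separable(img, s) for s in (1.0, 2.0, 4.0)]
```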
Ramírez-Miquet, Evelio E; Cabrera, Humberto; Grassi, Hilda C; de J Andrades, Efrén; Otero, Isabel; Rodríguez, Dania; Darias, Juan G
2017-08-01
This paper reports on biospeckle processing of biological activity using a visualization scheme based on digital imaging information technology. Activity related to bacterial growth in agar plates and to parasites affected by a drug is monitored via the speckle patterns generated by a coherent source incident on the microorganisms. We present experimental results to demonstrate the potential of this methodology for following activity over time. The digital imaging information technology provides an alternative visualization for studying speckle dynamics, which is correlated with the activity of bacteria and parasites. In this method, changes in Red-Green-Blue (RGB) color component density are taken as markers of bacterial growth and of parasite motility in the presence of a drug. The RGB data were used to generate a two-dimensional surface plot allowing an analysis of the color distribution in the speckle images. The proposed visualization is compared to the outcomes of the generalized differences and temporal difference methods. A quantification of the activity is performed using a parameterization of the temporal difference method. The adopted digital image processing technique has been found suitable for monitoring motility and morphological changes in the bacterial population over time and for detecting and distinguishing a short-term drug action on parasites.
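A minimal sketch of the two quantities named above, a temporal-difference activity map and per-frame RGB component densities, is given below for a stack of speckle frames; the exact parameterization used in the paper is not reproduced.

```python
# Sketch: temporal-difference activity map and RGB density markers from a
# stack of speckle frames, frames shaped (T, H, W, 3) as uint8. The paper's
# exact parameterization of the temporal difference is not reproduced.
import numpy as np

def temporal_difference(frames):
    gray = frames.astype(np.float64).mean(axis=3)           # (T, H, W) intensity
    td = np.abs(np.diff(gray, axis=0)).sum(axis=0)          # summed frame-to-frame change per pixel
    return td / td.max()                                     # normalized activity map

def rgb_density(frames):
    """Mean R, G, B component density per frame, used as growth/motility markers."""
    return frames.reshape(frames.shape[0], -1, 3).mean(axis=1)

# activity_map = temporal_difference(frames)
# densities_over_time = rgb_density(frames)     # shape (T, 3)
```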
Pierzyńska-Mach, Agnieszka; Janowski, Paweł A; Dobrucki, Jurek W
2014-08-01
Acidic vesicles can be imaged and tracked in live cells after staining with several low molecular weight fluorescent probes, or with fluorescently labeled proteins. Three fluorescent dyes, acridine orange, LysoTracker Red DND-99, and quinacrine, were evaluated as acidic vesicle tracers for confocal fluorescence imaging and quantitative analysis. The stability of fluorescent signals, achievable image contrast, and phototoxicity were taken into consideration. The three tested tracers exhibit different advantages and pose different problems in imaging experiments. Acridine orange makes it possible to distinguish acidic vesicles with different internal pH but is fairly phototoxic and can cause spectacular bursts of the dye-loaded vesicles. LysoTracker Red is less phototoxic but its rapid photobleaching limits the range of useful applications considerably. We demonstrate that quinacrine is most suitable for long-term imaging when a high number of frames is required. This capacity made it possible to trace acidic vesicles for several hours, during a process of drug-induced apoptosis. An ability to record the behavior of acidic vesicles over such long periods opens a possibility to study processes like autophagy or long-term effects of drugs on endocytosis and exocytosis. © 2014 International Society for Advancement of Cytometry.
Lu, Hoang D; Lim, Tristan L; Javitt, Shoshana; Heinmiller, Andrew; Prud'homme, Robert K
2017-06-12
Optical imaging is a rapidly progressing medical technique that can benefit from the development of new and improved optical imaging agents suitable for use in vivo. However, the molecular rules detailing which optical agents can be processed and encapsulated into in vivo presentable forms are not known. We here present the screening of a series of highly hydrophobic porphyrin, phthalocyanine, and naphthalocyanine dye macrocycles through a self-assembling Flash NanoPrecipitation process to form a series of water-dispersible dye nanoparticles (NPs). Ten out of 19 tested dyes could be formed into poly(ethylene glycol)-coated nanoparticles 60-150 nm in size, and these results shed light on the dye structural criteria required to permit dye assembly into NPs. Dye NPs display a diverse range of absorbance profiles with absorbance maxima within the NIR region, and their absorbance can be tuned by varying the dye choice or by doping bulking materials into the NP core. Particle properties such as dye core load and the compositions of co-core dopants were varied, and the subsequent effects on photoacoustic and fluorescence signal intensities were measured. These results provide guidelines for designing NPs optimized for photoacoustic imaging and NPs optimized for fluorescence imaging. This work provides important details for dye NP engineering and expands the optical imaging tools available for use.
Henzler, Katja; Heilemann, Axel; Kneer, Janosch; Guttmann, Peter; Jia, He; Bartsch, Eckhard; Lu, Yan; Palzer, Stefan
2015-01-01
In order to take full advantage of novel functional materials in the next generation of sensing devices, scalable processes for their fabrication and utilization are of great importance. Understanding the processes that lend these materials their properties is also essential. Among the most sought-after sensor applications are low-cost, highly sensitive and selective metal-oxide-based gas sensors. Yet the surface reactions responsible for provoking a change in the electrical behavior of gas-sensitive layers are insufficiently understood. Here, we have used near-edge x-ray absorption fine structure spectroscopy in combination with x-ray microscopy (NEXAFS-TXM) for ex-situ measurements, in order to reveal the hydrogen sulfide induced processes at the surface of copper oxide nanoparticles, which are ultimately responsible for triggering a percolation phase transition. For the first time, these measurements allow the imaging of trace-gas-induced reactions and the effect they have on the chemical composition of the metal oxide surface and bulk. This makes the new technique suitable for elucidating adsorption processes in situ and under real operating conditions. PMID:26631608
Patwary, Nurmohammed; Doblas, Ana; Preza, Chrysanthe
2018-01-01
The performance of structured illumination microscopy (SIM) is hampered in many biological applications due to the inability to modulate the light when imaging deep into the sample. This is in part because sample-induced aberration reduces the modulation contrast of the structured pattern. In this paper, we present an image restoration approach suitable for processing raw incoherent-grid-projection SIM data with a low fringe contrast. Restoration results from simulated and experimental ApoTome SIM data show results with improved signal-to-noise ratio (SNR) and optical sectioning compared to the results obtained from existing methods, such as 2D demodulation and 3D SIM deconvolution. Our proposed method provides satisfactory results (quantified by the achieved SNR and normalized mean square error) even when the modulation contrast of the illumination pattern is as low as 7%. PMID:29675307
Applying LED in full-field optical coherence tomography for gastrointestinal endoscopy
NASA Astrophysics Data System (ADS)
Yang, Bor-Wen; Wang, Yu-Yen; Juan, Yu-Shan; Hsu, Sheng-Jie
2015-08-01
Optical coherence tomography (OCT) has become an important medical imaging technology due to its non-invasiveness and high resolution. Full-field optical coherence tomography (FF-OCT) is a scanning scheme especially suitable for en face imaging as it employs a CMOS/CCD device for parallel pixels processing. FF-OCT can also be applied to high-speed endoscopic imaging. Applying cylindrical scanning and a right-angle prism, we successfully obtained a 360° tomography of the inner wall of an intestinal cavity through an FF-OCT system with an LED source. The 10-μm scale resolution enables the early detection of gastrointestinal lesions, which can increase detection rates for esophageal, stomach, or vaginal cancer. All devices used in this system can be integrated by MOEMS technology to contribute to the studies of gastrointestinal medicine and advanced endoscopy technology.
Leucocyte classification for leukaemia detection using image processing techniques.
Putzu, Lorenzo; Caocci, Giovanni; Di Ruberto, Cecilia
2014-11-01
The counting and classification of blood cells allow for the evaluation and diagnosis of a vast number of diseases. The analysis of white blood cells (WBCs) allows for the detection of acute lymphoblastic leukaemia (ALL), a blood cancer that can be fatal if left untreated. Currently, the morphological analysis of blood cells is performed manually by skilled operators. However, this method has numerous drawbacks, such as slow analysis, non-standard accuracy, and dependence on the operator's skill. Few examples of automated systems that can analyse and classify blood cells have been reported in the literature, and most of these systems are only partially developed. This paper presents a complete and fully automated method for WBC identification and classification using microscopic images. In contrast to other approaches that identify the nuclei first, which are more prominent than other components, the proposed approach isolates the whole leucocyte and then separates the nucleus and cytoplasm. This approach is necessary to analyse each cell component in detail. From each cell component, different features, such as shape, colour and texture, are extracted using a new approach for background pixel removal. This feature set was used to train different classification models in order to determine which one is most suitable for the detection of leukaemia. Using our method, 245 of 267 total leucocytes were properly identified (92% accuracy) from 33 images taken with the same camera and under the same lighting conditions. Performing this evaluation using different classification models allowed us to establish that the support vector machine with a Gaussian radial basis kernel is the most suitable model for the identification of ALL, with an accuracy of 93% and a sensitivity of 98%. Furthermore, we evaluated the goodness of our new feature set, which displayed better performance with each evaluated classification model. The proposed method permits the analysis of blood cells automatically via image processing techniques, and it represents a medical tool to avoid the numerous drawbacks associated with manual observation. This process could also be used for counting, as it provides excellent performance and allows for early diagnostic suspicion, which can then be confirmed by a haematologist through specialised techniques. Copyright © 2014 Elsevier B.V. All rights reserved.
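The classification stage described above, a support vector machine with a Gaussian radial basis kernel trained on the extracted shape, colour and texture features, can be sketched as follows; the hyperparameters and cross-validation scheme are illustrative assumptions, and the feature extraction itself is not shown.

```python
# Sketch: Gaussian-RBF SVM on the extracted shape/colour/texture feature vectors.
# Hyperparameters and the cross-validation scheme are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def train_all_detector(X, y):
    """X: (n_cells, n_features) per-leucocyte features; y: 1 = lymphoblast (ALL), 0 = normal."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    sens = cross_val_score(clf, X, y, cv=5, scoring="recall").mean()
    clf.fit(X, y)
    return clf, acc, sens
```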
A Multiscale Surface Water Temperature Data Acquisition Platform: Tests on Lake Geneva, Switzerland
NASA Astrophysics Data System (ADS)
Barry, D. A.; Irani Rahaghi, A.; Lemmin, U.; Riffler, M.; Wunderle, S.
2015-12-01
An improved understanding of surface transport processes is necessary to predict sediment, pollutant and phytoplankton patterns in large lakes. Lake surface water temperature (LSWT), which varies in space and time, reflects meteorological and climatological forcing more than any other physical lake parameter. There are different data sources for LSWT mapping, including remote sensing and in situ measurements. Satellite data can be suitable for detecting large-scale thermal patterns, but not meso- or small scale processes. Lake surface thermography, investigated in this study, has finer resolution compared to satellite images. Thermography at the meso-scale provides the ability to ground-truth satellite imagery over scales of one to several satellite image pixels. On the other hand, thermography data can be used as a control in schemes to upscale local measurements that account for surface energy fluxes and the vertical energy budget. Independently, since such data can be collected at high frequency, they can be also useful in capturing changes in the surface signatures of meso-scale eddies and thus to quantify mixing processes. In the present study, we report results from a Balloon Launched Imaging and Monitoring Platform (BLIMP), which was developed in order to measure the LSWT at meso-scale. The BLIMP consists of a small balloon that is tethered to a boat and equipped with thermal and RGB cameras, as well as other instrumentation for location and communication. Several deployments were carried out on Lake Geneva. In a typical deployment, the BLIMP is towed by a boat, and collects high frequency data from different heights (i.e., spatial resolutions) and locations. Simultaneous ground-truthing of the BLIMP data is achieved using an autonomous craft that collects a variety of data, including in situ surface/near surface temperatures, radiation and meteorological data in the area covered by the BLIMP images. With suitable scaling, our results show good consistency between in situ, BLIMP and concurrent satellite data. In addition, the BLIMP thermography reveals (hydrodynamically-driven) structures in the LSWT - an obvious example being mixing of river discharges.
Qiu, Chenhui; Wang, Yuanyuan; Guo, Yanen; Xia, Shunren
2018-03-14
Image fusion techniques can integrate information from different imaging modalities to produce a composite image that is more suitable for human visual perception and further image processing tasks. Fusing green fluorescent protein (GFP) and phase contrast images is very important for subcellular localization, functional analysis of proteins and genome expression. A fusion method for GFP and phase contrast images based on the complex shearlet transform (CST) is proposed in this paper. Firstly, the GFP image is converted to the IHS model and its intensity component is obtained. Secondly, the CST is performed on the intensity component and the phase contrast image to acquire the low-frequency and high-frequency subbands. Then the high-frequency subbands are merged by the absolute-maximum rule while the low-frequency subbands are merged by the proposed Haar wavelet-based energy (HWE) rule. Finally, the fused image is obtained by performing the inverse CST on the merged subbands and conducting IHS-to-RGB conversion. The proposed fusion method is tested on a number of GFP and phase contrast images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation. © 2018 Wiley Periodicals, Inc.
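Because the complex shearlet transform is not widely packaged, the two merge rules can be illustrated with a single-level Haar wavelet decomposition (PyWavelets) as a stand-in transform: absolute-maximum selection for the high-frequency subbands and a simple energy weighting for the low-frequency subband. This is a substitution for illustration, not the paper's CST or its exact HWE rule.

```python
# Sketch of the two merge rules with a Haar DWT standing in for the complex
# shearlet transform: absolute-maximum for high-frequency subbands and
# energy-based weighting for the low-frequency subband.
import numpy as np
import pywt

def fuse(intensity_gfp, phase_contrast):
    cA1, (cH1, cV1, cD1) = pywt.dwt2(intensity_gfp.astype(float), "haar")
    cA2, (cH2, cV2, cD2) = pywt.dwt2(phase_contrast.astype(float), "haar")

    # Low-frequency subband: weight by subband energy (global here, for brevity)
    e1, e2 = (cA1 ** 2).mean(), (cA2 ** 2).mean()
    cA = (e1 * cA1 + e2 * cA2) / (e1 + e2)

    # High-frequency subbands: keep the coefficient with the larger magnitude
    def absmax(a, b):
        return np.where(np.abs(a) >= np.abs(b), a, b)

    fused = pywt.idwt2((cA, (absmax(cH1, cH2), absmax(cV1, cV2), absmax(cD1, cD2))), "haar")
    return fused   # replaces the intensity component before IHS-to-RGB conversion
```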
Fibre optic confocal imaging (FOCI) for subsurface microscopy of the colon in vivo.
Delaney, P M; King, R G; Lambert, J R; Harris, M R
1994-01-01
Fibre optic confocal imaging (FOCI) is a new type of microscopy which has been recently developed (Delaney et al. 1993). In contrast to conventional light microscopy, FOCI and other confocal techniques allow clear imaging of subsurface structures within translucent objects. However, unlike conventional confocal microscopes which are bulky (because of a need for accurate alignment of large components) FOCI allows the imaging end to be miniaturised and relatively mobile. FOCI is thus particularly suited for clear subsurface imaging of structures within living animals or subjects. The aim of the present study was to assess the suitability of using FOCI for imaging of subsurface structures within the colon, both in vitro (human and rat biopsies) and in vivo (in rats). Images were obtained in fluorescence mode (excitation 488 nm, detection above 515 nm) following topical application of fluorescein. By this technique the glandular structure of the colon was imaged. FOCI is thus suitable for subsurface imaging of the colon in vivo. PMID:8157487
NASA Astrophysics Data System (ADS)
Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias
2012-06-01
A Wireless Visual Sensor Network (WVSN) is an emerging field which combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. The energy budget in these networks is limited to the batteries because of the wireless nature of the application. Due to the limited availability of energy, the processing at the Visual Sensor Nodes (VSN) and the communication from VSN to server should consume as little energy as possible. Transmitting raw images wirelessly consumes a lot of energy and requires higher communication bandwidth. Data compression methods reduce data efficiently and hence are effective in reducing the communication cost in a WVSN. In this paper, we have compared the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine the compression algorithms which can efficiently compress bi-level images and whose computational complexity is suitable for the computational platforms used in WVSNs. These results can be used as a road map for the selection of compression methods for different sets of constraints in WVSNs.
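A simple way to reproduce this kind of comparison is to encode the same bi-level image with standard codecs and compare the compressed sizes; the sketch below uses CCITT Group 4 TIFF and PNG via Pillow as stand-ins for the six methods evaluated in the paper.

```python
# Sketch: compare compressed sizes of a bi-level image under two standard
# codecs (CCITT Group 4 TIFF and PNG), as stand-ins for the six methods
# compared in the paper.
import io
from PIL import Image

def compressed_sizes(bilevel_img):
    """bilevel_img: a Pillow image in mode '1' (1 bit per pixel)."""
    sizes = {}
    buf = io.BytesIO()
    bilevel_img.save(buf, format="TIFF", compression="group4")
    sizes["CCITT G4"] = buf.tell()
    buf = io.BytesIO()
    bilevel_img.save(buf, format="PNG", optimize=True)
    sizes["PNG"] = buf.tell()
    sizes["raw (bytes)"] = bilevel_img.width * bilevel_img.height // 8
    return sizes

# img = Image.open("segmented_mask.png").convert("1")
# print(compressed_sizes(img))
```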
Galaxy evolution in the densest environments: HST imaging
NASA Astrophysics Data System (ADS)
Jorgensen, Inger
2013-10-01
We propose to process in a consistent fashion all available HST/ACS and WFC3 imaging of seven rich clusters of galaxies at z=1.2-1.6. The clusters are part of our larger project aimed at constraining models for galaxy evolution in dense environments from observations of stellar populations in rich z=1.2-2 galaxy clusters. The main objective is to establish the star formation (SF) history and structural evolution over this epoch, during which large changes in SF rates and galaxy structure are expected to take place in cluster galaxies. The observational data required to meet our main objective are deep HST imaging and high S/N spectroscopy of individual cluster members. The HST imaging already exists for the seven rich clusters at z=1.2-1.6 included in this archive proposal. However, the data have not been consistently processed to derive colors, magnitudes, sizes and morphological parameters for all potential cluster members bright enough to be suitable for spectroscopic observations with 8-m class telescopes. We propose to carry out this processing and make all derived parameters publicly available. We will use the parameters derived from the HST imaging to (1) study the structural evolution of the galaxies, (2) select clusters and galaxies for spectroscopic observations, and (3) use the photometry and spectroscopy together for a unified analysis aimed at the SF history and structural changes. The analysis will also utilize data from the Gemini/HST Cluster Galaxy Project, which covers rich clusters at z=0.2-1.0 and for which we have similar HST imaging and high S/N spectroscopy available.
Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan
2015-11-01
To analyze, interpret and evaluate microscopic images used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedentedly high resolution makes it possible to see details that remain invisible in any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make them suitable for self-study. © 2015 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
Contrast in Terahertz Images of Archival Documents—Part II: Influence of Topographic Features
NASA Astrophysics Data System (ADS)
Bardon, Tiphaine; May, Robert K.; Taday, Philip F.; Strlič, Matija
2017-04-01
We investigate the potential of terahertz time-domain imaging in reflection mode to reveal archival information in documents in a non-invasive way. In particular, this study explores the parameters and signal processing tools that can be used to produce well-contrasted terahertz images of topographic features commonly found in archival documents, such as indentations left by a writing tool, as well as sieve lines. While the amplitude of the waveforms at a specific time delay can provide the most contrasted and legible images of topographic features on flat paper or parchment sheets, this parameter may not be suitable for documents that have a highly irregular surface, such as water- or fire-damaged documents. For analysis of such documents, cross-correlation of the time-domain signals can instead yield images with good contrast. Analysis of the frequency-domain representation of terahertz waveforms can also provide well-contrasted images of topographic features, with improved spatial resolution when utilising high-frequency content. Finally, we point out some of the limitations of these means of analysis for extracting information relating to topographic features of interest from documents.
Applied learning-based color tone mapping for face recognition in video surveillance system
NASA Astrophysics Data System (ADS)
Yew, Chuu Tian; Suandi, Shahrel Azmin
2012-04-01
In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and to remap the color or intensity of the input image so that its color or intensity statistics match those of the training dataset. It is well known that differences in commercial surveillance camera models and in the signal processing chipsets used by different manufacturers cause the color and intensity of the images to differ from one another, thus creating additional challenges for face recognition in video surveillance systems. Using multi-class Support Vector Machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance systems for face recognition.
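One simple realization of this idea is per-channel histogram matching: learn the cumulative histogram of the training images once, then remap each surveillance frame so its channel histograms match it. The sketch below shows that realization; it is not necessarily the authors' exact formulation.

```python
# Sketch: per-channel histogram matching of a surveillance frame to the
# cumulative histogram learned from a training set of photorealistic images.
# One simple realization of learning-based tone mapping, not necessarily the
# authors' exact method.
import numpy as np

def learn_reference_cdf(training_images):
    """training_images: list of (H, W, C) uint8 arrays; returns one CDF per channel."""
    stacked = np.concatenate([im.reshape(-1, im.shape[2]) for im in training_images])
    cdfs = []
    for c in range(stacked.shape[1]):
        hist = np.bincount(stacked[:, c], minlength=256).astype(np.float64)
        cdfs.append(np.cumsum(hist) / hist.sum())
    return cdfs

def remap(frame, cdfs):
    out = np.empty_like(frame)
    for c in range(frame.shape[2]):
        hist = np.bincount(frame[:, :, c].ravel(), minlength=256).astype(np.float64)
        src_cdf = np.cumsum(hist) / hist.sum()
        lut = np.searchsorted(cdfs[c], src_cdf).clip(0, 255).astype(np.uint8)
        out[:, :, c] = lut[frame[:, :, c]]
    return out

# cdfs = learn_reference_cdf(training_faces)
# matched_frame = remap(surveillance_frame, cdfs)
```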
Gopakumar, Gopalakrishna Pillai; Swetha, Murali; Sai Siva, Gorthi; Sai Subrahmanyam, Gorthi R K
2018-03-01
The present paper introduces a focus stacking-based approach for automated quantitative detection of Plasmodium falciparum malaria from blood smears. For the detection, a custom-designed convolutional neural network (CNN) operating on a focus stack of images is used. The cell counting problem is addressed as a segmentation problem, and we propose a 2-level segmentation strategy. The use of a CNN operating on a focus stack for the detection of malaria is the first of its kind; it not only improved the detection accuracy (both in terms of sensitivity [97.06%] and specificity [98.50%]) but also favored processing on cell patches and avoided the need for hand-engineered features. The slide images are acquired with a custom-built portable slide scanner made from low-cost, off-the-shelf components and suitable for point-of-care diagnostics. The proposed approach of employing sophisticated algorithmic processing together with inexpensive instrumentation can potentially benefit clinicians in enabling malaria diagnosis. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Stereo matching and view interpolation based on image domain triangulation.
Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce
2013-09-01
This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities generating a full 3D mesh related to each camera (view), which are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.
Reflective liquid crystal light valve with hybrid field effect mode
NASA Technical Reports Server (NTRS)
Boswell, Donald D. (Inventor); Grinberg, Jan (Inventor); Jacobson, Alexander D. (Inventor); Myer, Gary D. (Inventor)
1977-01-01
There is disclosed a high performance reflective mode liquid crystal light valve suitable for general image processing and projection and particularly suited for application to real-time coherent optical data processing. A preferred example of the device uses a CdS photoconductor, a CdTe light absorbing layer, a dielectric mirror, and a liquid crystal layer sandwiched between indium-tin-oxide transparent electrodes deposited on optical quality glass flats. The non-coherent light image is directed onto the photoconductor; this reduces the impedance of the photoconductor, thereby switching the AC voltage that is impressed across the electrodes onto the liquid crystal to activate the device. The liquid crystal is operated in a hybrid field effect mode. It utilizes the twisted nematic effect to create a dark off-state (voltage off the liquid crystal) and the optical birefringence effect to create the bright on-state. The liquid crystal thus modulates the polarization of the coherent read-out or projection light responsively to the non-coherent image. An analyzer is used to create an intensity modulated output beam.
Ferraz, Eduardo Gomes; Andrade, Lucio Costa Safira; dos Santos, Aline Rode; Torregrossa, Vinicius Rabelo; Rubira-Bullen, Izabel Regina Fischer; Sarmento, Viviane Almeida
2013-12-01
The aim of this study was to evaluate the accuracy of virtual three-dimensional (3D) reconstructions of human dry mandibles produced from two segmentation protocols ("outline only" and "all-boundary lines"). Twenty virtual 3D images were built from computed tomography (CT) exams of 10 dry mandibles, on which linear measurements between anatomical landmarks were obtained and compared at a significance level of 5%. The results showed no statistically significant difference between the dry mandibles and the virtual 3D reconstructions produced from the segmentation protocols tested (p = 0.24). When designing a virtual 3D reconstruction, both the "outline only" and the "all-boundary lines" segmentation protocols can therefore be used. Virtual processing of CT images is the most complex stage in the manufacture of the biomodel, and establishing a better protocol during this phase allows the construction of a biomodel with characteristics closer to the original anatomical structures. This is essential to ensure correct preoperative planning and suitable treatment.
NASA Astrophysics Data System (ADS)
Jobin, Benoît; Labrecque, Sandra; Grenier, Marcelle; Falardeau, Gilles
2008-01-01
The traditional method of identifying wildlife habitat distribution over large regions consists of pixel-based classification of satellite images into a suite of habitat classes used to select suitable habitat patches. Object-based classification is a newer method that can achieve the same objective based on the segmentation of the spectral bands of the image, creating polygons that are homogeneous with regard to spatial or spectral characteristics. The segmentation algorithm does not rely solely on the single pixel value, but also on shape, texture, and pixel spatial continuity. Object-based classification is a knowledge-based process in which an interpretation key is developed using ground control points and objects are assigned to specific classes according to threshold values of determined spectral and/or spatial attributes. We developed a model using the eCognition software to identify suitable habitats for the Grasshopper Sparrow, a rare and declining species found in southwestern Québec. The model was developed in a region with known breeding sites and applied to other images covering adjacent regions where potential breeding habitats may be present. We were successful in locating potential habitats in areas where dairy farming prevailed but failed in an adjacent region covered by a distinct Landsat scene and dominated by annual crops. We discuss the added value of this method, such as the possibility of using the contextual information associated with objects and the ability to eliminate unsuitable areas in the segmentation and land cover classification processes, as well as technical and logistical constraints. A series of recommendations on the use of this method and on conservation issues of Grasshopper Sparrow habitat is also provided.
Large-field-of-view wide-spectrum artificial reflecting superposition compound eyes
NASA Astrophysics Data System (ADS)
Huang, Chi-Chieh
The study of the imaging principles of natural compound eyes has become an active area of research and has fueled the advancement of modern optics with many attractive design features beyond those available with conventional technologies. Most prominent among all compound eyes are the reflecting superposition compound eyes (RSCEs) found in some decapods. They are extraordinary imaging systems with numerous optical features such as minimum chromatic aberration, wide-angle field of view (FOV), high sensitivity to light and superb acuity to motion. Inspired by their remarkable visual system, we were able to implement the unique lens-free, reflection-based imaging mechanisms in a miniaturized, large-FOV optical imaging device operating across the wide visible spectrum to minimize chromatic aberration without any additional post-image processing. First, two micro-transfer printing methods, a multiple and a shear-assisted transfer printing technique, were studied and discussed to realize life-sized artificial RSCEs. The processes exploited the differential adhesive tendencies of the microstructures formed between a donor and a transfer substrate to accomplish an efficient release and transfer process. These techniques enabled conformal wrapping of three-dimensional (3-D) microstructures, initially fabricated in two-dimensional (2-D) layouts with standard fabrication technology, onto a wide range of surfaces with complex and curvilinear shapes. The final part of this dissertation focused on implementing the key operational features of the natural RSCEs into large-FOV, wide-spectrum artificial RSCEs as an optical imaging device suitable for the wide visible spectrum. Our devices can form real, clear images based on reflection rather than refraction, hence avoiding chromatic aberration due to dispersion by the optical materials. Compared to the performance of conventional refractive lenses of comparable size, our devices demonstrated minimum chromatic aberration, exceptional FOV up to 165° without distortion, modest spherical aberrations and comparable imaging quality without any post-image processing. Together with an augmenting cruciform pattern surrounding each focused image, our devices possessed enhanced, dynamic motion-tracking capability ideal for diverse applications in military, security, search and rescue, night navigation, medical imaging and astronomy. In the future, due to their reflection-based operating principles, they can be further extended into the mid- and far-infrared for more demanding applications.
Enriching text with images and colored light
NASA Astrophysics Data System (ADS)
Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon
2008-01-01
We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories and subsequently the colors are computed using image processing. A prototype system based on this method is presented where the method is applied to song lyrics. In combination with a lyrics synchronization algorithm the system produces a rich multimedia experience. In order to identify terms within the text that may be associated with images and colors, we select noun phrases using a part-of-speech tagger. Large image repositories are queried with these terms. Per term, representative colors are extracted using the collected images. For this, we use either a histogram-based or a mean-shift-based algorithm. The representative color extraction uses the non-uniform distribution of the colors found in the large repositories. The images that are ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of the suitability of a term for color extraction based on KL divergence is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images or the manually annotated Flickr.com. Based on the results of these experiments, we conclude that using the presented method we can compute the relevant color for a term using a large image repository and image processing.
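The histogram-based variant of the representative-colour extraction can be sketched as picking the densest bin of a quantized RGB histogram over all images retrieved for one term; the 16-bins-per-channel quantization is an illustrative assumption, and the mean-shift variant is not shown.

```python
# Sketch of histogram-based representative-colour extraction over the images
# retrieved for one query term. The 16-bins-per-channel quantization is an
# illustrative assumption.
import numpy as np

def representative_color(images, bins=16):
    """images: list of (H, W, 3) uint8 arrays retrieved for one query term."""
    pixels = np.concatenate([im.reshape(-1, 3) for im in images]).astype(np.float64)
    hist, edges = np.histogramdd(pixels, bins=(bins, bins, bins),
                                 range=((0, 256), (0, 256), (0, 256)))
    idx = np.unravel_index(np.argmax(hist), hist.shape)      # densest colour bin
    return tuple((edges[d][idx[d]] + edges[d][idx[d] + 1]) / 2 for d in range(3))

# rgb = representative_color(retrieved_images)   # e.g. colour rendered on the lamps
```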
A novel false color mapping model-based fusion method of visual and infrared images
NASA Astrophysics Data System (ADS)
Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu
2013-12-01
A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. Firstly, a set of daytime color reference images is analyzed and a false color mapping principle is proposed according to human visual and emotional habits: object colors should remain invariant after color mapping operations, differences between infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. Then a novel nonlinear color mapping model is given by introducing the geometric average of the input visual and infrared gray values and a weighted-average algorithm. To determine the control parameters in the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new method achieves a near-natural appearance of the fused image and, compared with the traditional TNO algorithm, enhances color contrast and highlights bright infrared objects. Moreover, it has low complexity and readily supports real-time processing, so it is well suited for nighttime imaging apparatus.
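The abstract names the ingredients (geometric average of the visual/IR gray values and a weighted average) but not the full nonlinear model or its boundary conditions, so the sketch below is an assumption-laden illustration of a mapping built from those ingredients, not the paper's model.

```python
# Illustrative false-colour mapping built from the ingredients named in the
# abstract: geometric mean and weighted average of the visual/IR grey values,
# with the visual-IR difference driving the colour contrast. Not the paper's
# actual nonlinear model.
import numpy as np

def false_color(vis, ir, w=0.6):
    vis = vis.astype(np.float64) / 255.0
    ir = ir.astype(np.float64) / 255.0
    g = np.sqrt(vis * ir)                 # geometric average term
    base = w * vis + (1.0 - w) * ir       # weighted average term
    diff = ir - vis                       # enhances visual/IR differences
    r = np.clip(base + 0.5 * np.maximum(diff, 0), 0, 1)    # hot IR objects pushed to red
    gch = np.clip(g, 0, 1)
    b = np.clip(base + 0.5 * np.maximum(-diff, 0), 0, 1)   # visual-only detail pushed to blue
    return (np.dstack([r, gch, b]) * 255).astype(np.uint8)
```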
Providing Internet Access to High-Resolution Lunar Images
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2008-01-01
The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in the tagged image file format (TIFF) or the Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
Super-Resolution Imaging Strategies for Cell Biologists Using a Spinning Disk Microscope
Hosny, Neveen A.; Song, Mingying; Connelly, John T.; Ameer-Beg, Simon; Knight, Martin M.; Wheeler, Ann P.
2013-01-01
In this study we use a spinning disk confocal microscope (SD) to generate super-resolution images of multiple cellular features from any plane in the cell. We obtain super-resolution images by using stochastic intensity fluctuations of biological probes, combining Photoactivated Localization Microscopy (PALM)/Stochastic Optical Reconstruction Microscopy (STORM) methodologies. We compared different image analysis algorithms for processing super-resolution data to identify the most suitable for analysis of particular cell structures. SOFI was chosen for X and Y and was able to achieve a resolution of ca. 80 nm; however, higher resolution (>30 nm) was possible, dependent on the super-resolution image analysis algorithm used. Our method uses low laser power and fluorescent probes which are available either commercially or through the scientific community, and therefore it is gentle enough for biological imaging. Through comparative studies with structured illumination microscopy (SIM) and widefield epifluorescence imaging we identified that our methodology was advantageous for imaging cellular structures which are not immediately at the cell-substrate interface, including the nuclear architecture and mitochondria. We have shown that it is possible to obtain two-colour images, which highlights the potential this technique has for high-content screening, imaging of multiple epitopes and live cell imaging. PMID:24130668
Multicriteria analysis for sources of renewable energy using data from remote sensing
NASA Astrophysics Data System (ADS)
Matejicek, L.
2015-04-01
Renewable energy sources are major components of the strategy to reduce harmful emissions and to replace depleting fossil energy resources. Data from remote sensing can provide information for multicriteria analysis of sources of renewable energy. Advanced land cover quantification makes it possible to search for suitable sites. Multicriteria analysis, together with other data, is used to determine the energy potential and social acceptability of suggested locations. The described case study is focused on an area of surface coal mines in the northwestern region of the Czech Republic, where the impacts of surface mining and reclamation constitute a dominant force in land cover changes. High-resolution satellite images represent the main input datasets for identification of suitable sites. Solar mapping, wind predictions, the location of weirs in watersheds, road maps and demographic information complement the data from remote sensing for multicriteria analysis, which is implemented in a geographic information system (GIS). The input spatial datasets for multicriteria analysis in GIS are reclassified to a common scale and processed with raster algebra tools to identify suitable sites for sources of renewable energy. The selection of suitable sites is limited by the CORINE land cover database to mining and agricultural areas. The case study is focused on long-term land cover changes in the 1985-2015 period. Multicriteria analysis based on CORINE data shows moderate changes in mapping of suitable sites for utilization of selected sources of renewable energy in 1990, 2000, 2006 and 2012. The results are map layers showing the energy potential on a scale of a few preference classes (1-7), where the first class corresponds to minimum preference and the last class to maximum preference. The attached histograms show the moderate variability of preference classes due to land cover changes caused by mining activities. The results also show a slight increase in the more preferred classes for utilization of sources of renewable energy due to an increased area of reclaimed sites. Using data from remote sensing, such as multispectral images and the CORINE land cover datasets, can reduce the financial resources currently required for finding and assessing suitable areas.
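A minimal sketch of the raster-algebra step described above, assuming NumPy arrays as input criterion layers; the reclassification rule and the weights are illustrative, not those of the case study.

```python
import numpy as np

def weighted_overlay(layers, weights, n_classes=7):
    """Hedged sketch of the GIS raster-algebra step: each criterion layer
    (e.g. solar potential, wind, distance to roads) is rescaled to a common
    1..n_classes preference scale and combined by a weighted sum.
    Weights and the linear rescaling rule are illustrative assumptions."""
    assert len(layers) == len(weights)
    combined = np.zeros_like(np.asarray(layers[0], dtype=float))
    for layer, w in zip(layers, weights):
        layer = np.asarray(layer, dtype=float)
        lo, hi = np.nanmin(layer), np.nanmax(layer)
        # reclassify to the common 1..n_classes preference scale
        scaled = 1 + (n_classes - 1) * (layer - lo) / (hi - lo + 1e-12)
        combined += w * scaled
    # renormalise the weighted sum back to integer preference classes
    return np.clip(np.rint(combined / sum(weights)), 1, n_classes).astype(int)
```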
Development of 10×10 Matrix-anode MCP-PMT
NASA Astrophysics Data System (ADS)
Yang, Jie; Li, Yongbin; Xu, Pengxiao; Zhao, Wenjin
2018-02-01
A 10×10 matrix-anode is developed using high-temperature co-fired ceramic (HTCC) technology. Based on the new matrix-anode, a new kind of photon counting imaging detector, the 10×10 matrix-anode MCP-PMT, is developed, and its performance parameters are tested. HTCC technology is compatible with the MCP-PMT's air-impermeability requirements and its baking process. Its response uniformity is better than that of metal-ceramic or metal-glass sealed anodes, and it is also a promising route to higher-density matrix-anodes.
Proposal of an Algorithm to Synthesize Music Suitable for Dance
NASA Astrophysics Data System (ADS)
Morioka, Hirofumi; Nakatani, Mie; Nishida, Shogo
This paper proposes an algorithm for synthesizing music suitable for the emotions expressed in moving pictures. Our goal is to support multimedia content creation: web page design, animation films and so on. Here we adopt a human dance as the moving picture to examine the applicability of our method, because we consider dance images to have a high affinity with music. The algorithm is composed of three modules: the first computes emotions from an input dance image, the second computes emotions from music in the database, and the last selects music suitable for the input dance via an emotion-based interface.
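The abstract does not specify how the third module matches emotions, so the sketch below assumes each dance sequence and each database track is summarized by an emotion vector and selects the track with the highest cosine similarity; the vector layout and the library contents are hypothetical.

```python
import numpy as np

def select_music(dance_emotion, music_library):
    """Hedged sketch of the selection module: pick the database track whose
    emotion vector is closest (by cosine similarity) to the emotion vector
    computed from the input dance sequence."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    track, _ = max(music_library.items(),
                   key=lambda kv: cosine(dance_emotion, kv[1]))
    return track

# usage sketch with a hypothetical [valence, arousal, tension] layout
library = {"track_a": np.array([0.8, 0.3, 0.1]),
           "track_b": np.array([0.2, 0.9, 0.4])}
print(select_music(np.array([0.7, 0.4, 0.2]), library))
```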
The artificial retina processor for track reconstruction at the LHC crossing rate
Abba, A.; Bedeschi, F.; Citterio, M.; ...
2015-03-16
We present results of an R&D study for a specialized processor capable of precisely reconstructing, in pixel detectors, hundreds of charged-particle tracks from high-energy collisions at a 40 MHz rate. We apply a highly parallel pattern-recognition algorithm, inspired by studies of how the brain processes visual images, and describe in detail an efficient hardware implementation in high-speed, high-bandwidth FPGA devices. This is the first detailed demonstration of reconstruction of offline-quality tracks at 40 MHz and makes the device suitable for processing Large Hadron Collider events at the full crossing frequency.
Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun
2015-02-01
Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. © The Author(s) 2014.
Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun
2017-01-01
Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test–retest repeatability data for illustrative purposes. PMID:24872353
Hänni, Mari; Edvardsson, H; Wågberg, M; Pettersson, K; Smedby, O
2004-01-01
The need for a quantitative method to assess atherosclerosis in vivo is well known. This study tested, in a familiar animal model of atherosclerosis, a combination of magnetic resonance imaging (MRI) and image processing. Six spontaneously hyperlipidemic (Watanabe) rabbits were examined with a knee coil in a 1.5-T clinical MRI scanner. Inflow angio (2DI) and proton density weighted (PDW) images were acquired to examine 10 cm of the aorta immediately cranial to the aortic bifurcation. Examination of the thoracic aorta was added in four animals. To identify the inner and outer boundary of the arterial wall, a dynamic contour algorithm (Gradient Vector Flow snakes) was applied to the 2DI and PDW images, respectively, after which the vessel wall area was calculated. The results were compared with histopathological measurements of intima and intima-media cross-sectional area. The correlation coefficient between wall area measurements with MRI snakes and intima-media area was 0.879 when computed individual-wise for abdominal aortas, 0.958 for thoracic aortas, and 0.834 when computed segment-wise. When the algorithm was applied to the PDW images only, somewhat lower correlations were obtained. The MRI yielded significantly higher values than histopathology, which excludes the adventitia. Magnetic resonance imaging, in combination with dynamic contours, may be a suitable technique for quantitative assessment of atherosclerosis in vivo. Using two sequences for the measurement seems to be superior to using a single sequence.
Quantifying Particle Numbers and Mass Flux in Drifting Snow
NASA Astrophysics Data System (ADS)
Crivelli, Philip; Paterna, Enrico; Horender, Stefan; Lehning, Michael
2016-12-01
We compare two of the most common methods of quantifying mass flux, particle numbers and particle-size distribution for drifting snow events: the snow-particle counter (SPC), a laser-diode-based particle detector, and particle tracking velocimetry based on digital shadowgraphic imaging. The two methods were correlated for mass flux and particle number flux. For the SPC measurements, the device was calibrated by the manufacturer beforehand. The shadowgraphic imaging method measures particle size and velocity directly from consecutive images, and before each new test the image pixel length is newly calibrated. A calibration study with artificially scattered sand particles and glass beads provides suitable settings for the shadowgraphic imaging as well as a first correlation of the two methods in a controlled environment. In addition, using snow collected in trays during snowfall, several experiments were performed to observe drifting snow events in a cold wind tunnel. The results demonstrate a high correlation between the mass flux obtained for the calibration studies (r ≥ 0.93) and good correlation for the drifting snow experiments (r ≥ 0.81). The impact of measurement settings is discussed in order to reliably quantify particle numbers and mass flux in drifting snow. The study was designed and performed to optimize the settings of the digital shadowgraphic imaging system for both the acquisition and the processing of particles in a drifting snow event. Our results suggest that these optimal settings can be transferred to different imaging set-ups to investigate sediment transport processes.
Distributed multimodal data fusion for large scale wireless sensor networks
NASA Astrophysics Data System (ADS)
Ertin, Emre
2006-05-01
Sensor network technology has enabled new surveillance systems in which sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large-scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problem. The likelihood map presents the sensory information in a format that is easy for decision makers to interpret and is suitable for fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution. Each sensor data stream is transformed into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
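A minimal sketch of the gossip-based distributed averaging idea mentioned at the end of the abstract, assuming each node holds a local log-likelihood map as a NumPy array; the pairwise update rule and the parameters are illustrative, not the authors' algorithm.

```python
import numpy as np

def gossip_average(local_maps, adjacency, n_rounds=200, seed=0):
    """Pairwise gossip: at each round a random node averages its map with a
    random neighbour, so all maps drift toward the network-wide average
    likelihood map without any fusion centre. Illustrative sketch only."""
    maps = [np.asarray(m, dtype=float).copy() for m in local_maps]
    rng = np.random.default_rng(seed)
    for _ in range(n_rounds):
        i = int(rng.integers(len(maps)))
        neighbours = np.flatnonzero(adjacency[i])   # nodes i can talk to
        if neighbours.size == 0:
            continue
        j = int(rng.choice(neighbours))
        mean = 0.5 * (maps[i] + maps[j])            # pairwise gossip update
        maps[i], maps[j] = mean.copy(), mean.copy()
    return maps
```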
Liu, Zhi-jun; Song, Xiao-xia; Xu, Xian-zhu; Tang, Qun
2014-04-18
Nanoparticular MRI contrast agents are rapidly becoming suitable for use in clinical diagnosis. An ideal nanoparticular contrast agent should be endowed with high relaxivity, biocompatibility, proper plasma retention time, and tissue-specific or tumor-targeting imaging. Herein we introduce PEGylated KMnF3 nanoparticles as a new type of T1 contrast agent. Studies showed that the nanoparticular contrast agent revealed high bio-stability with bovine serum albumin in PBS buffer solution, and presented excellent biocompatibility (low cytotoxicity, undetectable hemolysis and hemagglutination). Meanwhile the new contrast agent possessed proper plasma retention time (circulation half-life t1/2 is approximately 2 h) in the body of the administrated mice. It can be delivered into brain vessels and maintained there for hours, and is mostly cleared from the body within 48 h, as demonstrated by time-resolved MRI and Mn-biodistribution analysis. Those distinguishing features make it suitable to obtain contrast-enhanced brain magnetic resonance angiography. Moreover, through the process of passive targeting delivery, the T1 contrast agent clearly illuminates a brain tumor (glioma) with high contrast image and defined shape. This study demonstrates that PEGylated KMnF3 nanoparticles represent a promising biocompatible vascular contrast agent for magnetic resonance angiography and can potentially be further developed into an active targeted tumor MRI contrast agent.
NASA Astrophysics Data System (ADS)
Jun, Won; Kim, Moon S.; Chao, Kaunglin; Lefcourt, Alan M.; Roberts, Michael S.; McNaughton, James L.
2009-05-01
We used a portable hyperspectral fluorescence imaging system to evaluate biofilm formation on four types of food processing surface materials: stainless steel, polypropylene used for cutting boards, and household counter-top materials such as formica and granite. The objective of this investigation was to determine a minimal number of spectral bands suitable to differentiate microbial biofilm formation from the four background materials typically used during food processing. Ultimately, the resultant spectral information will be used in the development of handheld portable imaging devices that can be used as visual aid tools for sanitation and safety inspection (microbial contamination) of food processing surfaces. Pathogenic E. coli O157:H7 and Salmonella cells were grown in low-strength M9 minimal medium on the various surfaces at 22 +/- 2 °C for 2 days for biofilm formation. Biofilm autofluorescence under UV excitation (320 to 400 nm) obtained by the hyperspectral fluorescence imaging system showed broad emissions in the blue-green regions of the spectrum with emission maxima at approximately 480 nm for both E. coli O157:H7 and Salmonella biofilms. Fluorescence images at 480 nm revealed that for background materials with near-uniform fluorescence responses, such as stainless steel and formica cutting board, biofilm formation can be distinguished regardless of the background intensity. This suggested that a broad spectral band in the blue-green region can be used in handheld imaging devices for sanitation inspection of stainless steel, cutting board, and formica surfaces. The non-uniform fluorescence responses of granite make distinctions between biofilm and background difficult. To further investigate potential detection of biofilm formation on granite surfaces with multispectral approaches, principal component analysis (PCA) was performed on the hyperspectral fluorescence image data. The resultant PCA score images revealed distinct contrast between biofilms and granite surfaces. This investigation demonstrated that biofilm formation on food processing surfaces, even for background materials with heterogeneous fluorescence responses, can be detected. Furthermore, a multispectral approach may be needed in developing handheld inspection devices for surface materials that exhibit non-uniform fluorescence.
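A minimal sketch of the PCA step applied to a hyperspectral fluorescence cube, assuming a rows × cols × bands NumPy array; it illustrates the score-image idea only and is not the authors' processing chain.

```python
import numpy as np

def pca_score_images(cube, n_components=3):
    """Hedged sketch of the PCA step: flatten the cube into a pixel-by-band
    matrix, project onto the leading principal components of the band
    covariance, and reshape the scores back into images in which
    biofilm/background contrast may appear."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)                          # centre each spectral band
    cov = np.cov(X, rowvar=False)                # band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    scores = X @ eigvecs[:, order]
    return scores.reshape(rows, cols, n_components)
```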
Jensen, Chad D; Duraccio, Kara M; Barnett, Kimberly A; Stevens, Kimberly S
2016-12-01
Research examining effects of visual food cues on appetite-related brain processes and eating behavior has proliferated. Recently, investigators have developed food image databases for use across experimental studies examining appetite and eating behavior. The food-pics image database represents a standardized, freely available image library originally validated in a large sample composed primarily of adults. The suitability of the images for use with adolescents has not been investigated. The aim of the present study was to evaluate the appropriateness of the food-pics image library for appetite and eating research with adolescents. Three hundred and seven adolescents (ages 12-17) provided ratings of recognizability, palatability, and desire to eat for images from the food-pics database. Moreover, participants rated the caloric content (high vs. low) and healthiness (healthy vs. unhealthy) of each image. Adolescents rated approximately 75% of the food images as recognizable. Approximately 65% of recognizable images were correctly categorized as high vs. low calorie and 63% were correctly classified as healthy vs. unhealthy in 80% or more of image ratings. These results suggest that a smaller subset of the food-pics image database is appropriate for use with adolescents. With some modifications to the included images, the food-pics image database appears to be appropriate for use in experimental appetite and eating-related research conducted with adolescents. Copyright © 2016 Elsevier Ltd. All rights reserved.
Design of an Airborne L-Band Cross-Track Scanning Scatterometer
NASA Technical Reports Server (NTRS)
Hilliard, Lawrence M. (Technical Monitor)
2002-01-01
In this report, we describe the design of an airborne L-band cross-track scanning scatterometer suitable for airborne operation aboard the NASA P-3 aircraft. The scatterometer is being designed for joint operation with existing L-band radiometers developed by NASA for soil moisture and ocean salinity remote sensing. In addition, design tradeoffs for a space-based radar system have been considered, with particular attention given to antenna architectures suitable for sharing the antenna between the radar and radiometer. During this study, we investigated a number of imaging techniques, including the use of real and synthetic aperture processing in both the along track and cross-track dimensions. The architecture selected will permit a variety of beamforming algorithms to be implemented, although real aperture processing, with hardware beamforming, provides better sidelobe suppression than synthetic array processing and superior signal-to-noise performance. In our discussions with the staff of NASA GSFC, we arrived at an architecture that employs complete transmit/receive modules for each subarray. Amplitude and phase control at each of the transmit modules will allow a low-sidelobe transmit pattern to be generated over scan angles of +/- 50 degrees. Each receiver module will include all electronics necessary to downconvert the received signal to an IF offset of 30 MHz where it will be digitized for further processing.
Iris unwrapping using the Bresenham circle algorithm for real-time iris recognition
NASA Astrophysics Data System (ADS)
Carothers, Matthew T.; Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.
2015-02-01
An efficient parallel architecture design for the iris unwrapping process in a real-time iris recognition system using the Bresenham Circle Algorithm is presented in this paper. Based on the characteristics of the model parameters, this algorithm was chosen over the widely used polar conversion technique as the iris unwrapping model. The architecture design is parallelized to increase the throughput of the system and is suitable for processing an input image size of 320 × 240 pixels in real time using Field Programmable Gate Array (FPGA) technology. Quartus software is used to implement, verify, and analyze the design's performance using the VHSIC Hardware Description Language. The system's predicted processing time is faster than that of modern iris unwrapping techniques in use today.
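For reference, a plain-Python sketch of the integer midpoint (Bresenham) circle algorithm that underlies the unwrapping: it rasterizes each sampling ring with additions only, which is what makes the approach attractive for FPGA implementation. The ring-assembly loop in the trailing comment is an assumption about how the unwrapped strip would be built.

```python
def bresenham_circle(cx, cy, r):
    """Integer midpoint (Bresenham) circle: returns the pixel coordinates of
    a circle of radius r centred at (cx, cy) using only integer additions."""
    pts = set()
    x, y, d = 0, r, 3 - 2 * r
    while y >= x:
        # plot the eight symmetric octant points
        for dx, dy in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            pts.add((cx + dx, cy + dy))
        x += 1
        if d > 0:
            y -= 1
            d += 4 * (x - y) + 10
        else:
            d += 4 * x + 6
    return sorted(pts)

# Unwrapping sketch (hypothetical radii): sample each ring between the pupil
# and iris boundary to form one row of the unwrapped iris strip.
# iris_strip = [[image[py, px] for (px, py) in bresenham_circle(cx, cy, r)]
#               for r in range(r_pupil, r_iris)]
```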
Effect of annealing temperature on physical properties of solution processed nickel oxide thin films
NASA Astrophysics Data System (ADS)
Sahoo, Pooja; Thangavel, R.
2018-05-01
In this report, NiO thin films were prepared at different annealing temperatures from a nickel acetate precursor by the sol-gel spin coating method. The films were characterized by different analytical techniques to obtain their structural, optical, morphological and electrical properties, using an X-ray diffractometer (XRD), field emission scanning electron microscopy (FESEM), a UV-Vis-NIR double beam spectrophotometer and a Keithley 2450 source meter, respectively. FESEM images clearly indicate the formation of homogeneous and porous films; owing to their porosity, they can be used in sensing applications. The optical absorption spectra show that the films are highly transparent and have a suitable band gap, in agreement with earlier reports. The current enhancement under illumination demonstrates the suitability of nanostructured NiO thin films for application in photovoltaics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishii, H.; Fujino, H.; Bian, Z.
In this study, two types of marker-based tracking methods for Augmented Reality have been developed. One is a method which employs line-shaped markers and the other is a method which employs circular-shaped markers. These two methods recognize the markers by means of image processing and calculate the relative position and orientation between the markers and the camera in real time. The line-shaped markers are suitable to be pasted in buildings such as NPPs where many pipes and tanks exist. The circular-shaped markers are suitable for cases where there are many obstacles and it is difficult to use line-shaped markers because the obstacles hide part of them. Both methods can extend the maximum distance between the markers and the camera compared to legacy marker-based tracking methods. (authors)
Leithner, Doris; Mahmoudi, Scherwin; Wichmann, Julian L; Martin, Simon S; Lenga, Lukas; Albrecht, Moritz H; Booz, Christian; Arendt, Christophe T; Beeres, Martin; D'Angelo, Tommaso; Bodelle, Boris; Vogl, Thomas J; Scholtz, Jan-Erik
2018-02-01
To investigate the impact of traditional (VMI) and noise-optimized virtual monoenergetic imaging (VMI+) algorithms on quantitative and qualitative image quality, and the assessment of stenosis in carotid and intracranial dual-energy CTA (DE-CTA). DE-CTA studies of 40 patients performed on a third-generation 192-slice dual-source CT scanner were included in this retrospective study. 120-kVp image-equivalent linearly-blended, VMI and VMI+ series were reconstructed. Quantitative analysis included evaluation of contrast-to-noise ratios (CNR) of the aorta, common carotid artery, internal carotid artery, middle cerebral artery, and basilar artery. VMI and VMI+ with highest CNR, and linearly-blended series were rated qualitatively. Three radiologists assessed artefacts and suitability for evaluation at shoulder height, carotid bifurcation, siphon, and intracranial using 5-point Likert scales. Detection and grading of stenosis were performed at carotid bifurcation and siphon. Highest CNR values were observed for 40-keV VMI+ compared to 65-keV VMI and linearly-blended images (P < 0.001). Artefacts were low in all qualitatively assessed series with excellent suitability for supraaortic artery evaluation at shoulder and bifurcation height. Suitability was significantly higher in VMI+ and VMI compared to linearly-blended images for intracranial and ICA assessment (P < 0.002). VMI and VMI+ showed excellent accordance for detection and grading of stenosis at carotid bifurcation and siphon with no differences in diagnostic performance. 40-keV VMI+ showed improved quantitative image quality compared to 65-keV VMI and linearly-blended series in supraaortic DE-CTA. VMI and VMI+ provided increased suitability for carotid and intracranial artery evaluation with excellent assessment of stenosis, but did not translate into increased diagnostic performance. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan
2018-03-01
Large-scale oceansat remote sensing images cover a large area of sea surface, whose fluctuation can be considered a non-stationary process. The Short-Time Fourier Transform (STFT) is a suitable analysis tool for time-varying non-stationary signals. In this paper, a novel ship detection method using 2-D STFT sea-background statistical modeling for large-scale oceansat remote sensing images is proposed. First, the large-scale oceansat remote sensing image is divided into small sub-blocks, and the 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectra of the sub-blocks are studied, and a clear difference in characteristics between sea background and non-sea background is found. Finally, a statistical model for all valid frequency points in the STFT spectrum of the sea background is given, and a ship detection method based on 2-D STFT spectrum modeling is proposed. The experimental results show that the proposed algorithm can detect ship targets with a high recall rate and a low missing rate.
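A minimal sketch of the block-wise 2-D STFT stage, assuming a 2-D NumPy image; the block size and window choice are illustrative, and the statistical modeling and detection steps are omitted.

```python
import numpy as np

def block_stft_spectra(image, block=64):
    """Hedged sketch of the 2-D STFT stage: tile a large ocean scene into
    sub-blocks, apply a separable 2-D Hann window, and take the magnitude
    spectrum of each block; sea-only blocks and blocks containing ships
    show different spectral statistics. Block size is illustrative."""
    win = np.outer(np.hanning(block), np.hanning(block))
    h, w = image.shape
    spectra = {}
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            tile = image[r:r + block, c:c + block].astype(float)
            spectra[(r, c)] = np.abs(np.fft.fftshift(np.fft.fft2(tile * win)))
    return spectra
```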
Raman Hyperspectral Imaging for Detection of Watermelon Seeds Infected with Acidovorax citrulli.
Lee, Hoonsoo; Kim, Moon S; Qin, Jianwei; Park, Eunsoo; Song, Yu-Rim; Oh, Chang-Sik; Cho, Byoung-Kwan
2017-09-23
The bacterial infection of seeds is one of the most important quality factors affecting yield. Conventional detection methods for bacteria-infected seeds, such as biological, serological, and molecular tests, are not feasible since they require expensive equipment, and furthermore, the testing processes are also time-consuming. In this study, we use the Raman hyperspectral imaging technique to distinguish bacteria-infected seeds from healthy seeds as a rapid, accurate, and non-destructive detection tool. We utilize Raman hyperspectral imaging data in the spectral range of 400-1800 cm⁻¹ to determine the optimal band-ratio for the discrimination of watermelon seeds infected by the bacteria Acidovorax citrulli using ANOVA. Two bands at 1076.8 cm⁻¹ and 437 cm⁻¹ are selected as the optimal Raman peaks for the detection of bacteria-infected seeds. The results demonstrate that the Raman hyperspectral imaging technique has a good potential for the detection of bacteria-infected watermelon seeds and that it could form a suitable alternative to conventional methods.
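A minimal sketch of the band-ratio idea, assuming a rows × cols × bands NumPy cube and a matching wavenumber axis; the decision threshold used in the paper is not reproduced here.

```python
import numpy as np

def band_ratio_map(cube, wavenumbers, band_a=1076.8, band_b=437.0):
    """Hedged sketch of a band-ratio classifier: for each pixel of a Raman
    hyperspectral cube, take the ratio of the intensities at the two
    selected peaks; thresholding this ratio would separate infected from
    healthy seed spectra (threshold not given here)."""
    wn = np.asarray(wavenumbers, dtype=float)
    ia = int(np.argmin(np.abs(wn - band_a)))   # index nearest to 1076.8 cm^-1
    ib = int(np.argmin(np.abs(wn - band_b)))   # index nearest to 437 cm^-1
    return cube[:, :, ia] / (cube[:, :, ib] + 1e-12)
```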
Joint transform correlators with spatially incoherent illumination
NASA Astrophysics Data System (ADS)
Bykovsky, Yuri A.; Karpiouk, Andrey B.; Markilov, Anatoly A.; Rodin, Vladislav G.; Starikov, Sergey N.
1997-03-01
Two variants of joint transform correlators with monochromatic spatially incoherent illumination are considered. The Fourier holograms of the reference and recognized images are recorded simultaneously, or separately in time, on the same spatial light modulator directly by monochromatic spatially incoherent light. To create the mutual-correlation signal of the images, a nonlinear transformation must be executed when the hologram is illuminated by coherent light. In the first correlator scheme this was achieved by a double pass of the restoring coherent wave through the hologram; in the second variant, the nonlinearity of the characteristic of the spatial light modulator used for hologram recording was exploited. Experimental schemes and results of processing test images with both variants of joint transform correlators with monochromatic spatially incoherent illumination are presented. The use of spatially incoherent light at the input of joint transform correlators relaxes the requirements on the optical quality of elements and on the accuracy of element positioning, and expands the range of devices suitable for image input into correlators.
Raman Hyperspectral Imaging for Detection of Watermelon Seeds Infected with Acidovorax citrulli
Lee, Hoonsoo; Kim, Moon S.; Qin, Jianwei; Park, Eunsoo; Song, Yu-Rim; Oh, Chang-Sik
2017-01-01
The bacterial infection of seeds is one of the most important quality factors affecting yield. Conventional detection methods for bacteria-infected seeds, such as biological, serological, and molecular tests, are not feasible since they require expensive equipment, and furthermore, the testing processes are also time-consuming. In this study, we use the Raman hyperspectral imaging technique to distinguish bacteria-infected seeds from healthy seeds as a rapid, accurate, and non-destructive detection tool. We utilize Raman hyperspectral imaging data in the spectral range of 400–1800 cm−1 to determine the optimal band-ratio for the discrimination of watermelon seeds infected by the bacteria Acidovorax citrulli using ANOVA. Two bands at 1076.8 cm−1 and 437 cm−1 are selected as the optimal Raman peaks for the detection of bacteria-infected seeds. The results demonstrate that the Raman hyperspectral imaging technique has a good potential for the detection of bacteria-infected watermelon seeds and that it could form a suitable alternative to conventional methods. PMID:28946608
Generative diffeomorphic modelling of large MRI data sets for probabilistic template construction.
Blaiotta, Claudia; Freund, Patrick; Cardoso, M Jorge; Ashburner, John
2018-02-01
In this paper we present a hierarchical generative model of medical image data, which can capture simultaneously the variability of both signal intensity and anatomical shapes across large populations. Such a model has a direct application for learning average-shaped probabilistic tissue templates in a fully automated manner. While in principle the generality of the proposed Bayesian approach makes it suitable to address a wide range of medical image computing problems, our work focuses primarily on neuroimaging applications. In particular we validate the proposed method on both real and synthetic brain MR scans including the cervical cord and demonstrate that it yields accurate alignment of brain and spinal cord structures, as compared to state-of-the-art tools for medical image registration. At the same time we illustrate how the resulting tissue probability maps can readily be used to segment, bias correct and spatially normalise unseen data, which are all crucial pre-processing steps for MR imaging studies. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Streak detection and analysis pipeline for space-debris optical images
NASA Astrophysics Data System (ADS)
Virtanen, Jenni; Poikonen, Jonne; Säntti, Tero; Komulainen, Tuomo; Torppa, Johanna; Granvik, Mikael; Muinonen, Karri; Pentikäinen, Hanna; Martikainen, Julia; Näränen, Jyri; Lehti, Jussi; Flohrer, Tim
2016-04-01
We describe a novel data-processing and analysis pipeline for optical observations of moving objects, either of natural (asteroids, meteors) or artificial origin (satellites, space debris). The monitoring of the space object populations requires reliable acquisition of observational data, to support the development and validation of population models and to build and maintain catalogues of orbital elements. The orbital catalogues are, in turn, needed for the assessment of close approaches (for asteroids, with the Earth; for satellites, with each other) and for the support of contingency situations or launches. For both types of populations, there is also increasing interest to detect fainter objects corresponding to the small end of the size distribution. The ESA-funded StreakDet (streak detection and astrometric reduction) activity has aimed at formulating and discussing suitable approaches for the detection and astrometric reduction of object trails, or streaks, in optical observations. Our two main focuses are objects in lower altitudes and space-based observations (i.e., high angular velocities), resulting in long (potentially curved) and faint streaks in the optical images. In particular, we concentrate on single-image (as compared to consecutive frames of the same field) and low-SNR detection of objects. Particular attention has been paid to the process of extraction of all necessary information from one image (segmentation), and subsequently, to efficient reduction of the extracted data (classification). We have developed an automated streak detection and processing pipeline and demonstrated its performance with an extensive database of semisynthetic images simulating streak observations both from ground-based and space-based observing platforms. The average processing time per image is about 13 s for a typical 2k-by-2k image. For long streaks (length >100 pixels), primary targets of the pipeline, the detection sensitivity (true positives) is about 90% for both scenarios for the bright streaks (SNR > 1), while in the low-SNR regime, the sensitivity is still 50% at SNR = 0.5 .
Note: Simple hysteresis parameter inspector for camera module with liquid lens
NASA Astrophysics Data System (ADS)
Chen, Po-Jui; Liao, Tai-Shan; Hwang, Chi-Hung
2010-05-01
A method to inspect the hysteresis parameter is presented in this article. The hysteresis of a whole camera module with a liquid lens can be measured, rather than that of a single lens only. Because the variation in focal length influences image quality, we propose using the sharpness of images captured from the camera module for hysteresis evaluation. Experiments reveal that the profile of the sharpness hysteresis corresponds to the contact-angle characteristic of the liquid lens. It can therefore be inferred that the hysteresis of the camera module is induced by the contact angle of the liquid lens. An inspection process takes only 20 s to complete; thus, compared with other instruments, this inspection method is more suitable for integration into mass production lines for online quality assurance.
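The abstract does not state which sharpness measure is used, so the sketch below assumes a simple variance-of-Laplacian focus score as a stand-in; the hysteresis loop would be traced by recording this score while sweeping the lens drive voltage up and down.

```python
import numpy as np

def sharpness(frame):
    """Variance-of-Laplacian focus score (an assumed stand-in metric)."""
    g = np.asarray(frame, dtype=float)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

# Hysteresis sketch: record sharpness(frame) while sweeping the liquid-lens
# drive voltage up and then down; the gap between the two branches at each
# voltage characterizes the hysteresis of the whole camera module.
```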
On a Chirplet Transform Based Method for Co-channel Voice Separation
NASA Astrophysics Data System (ADS)
Dugnol, B.; Fernández, C.; Galiano, G.; Velasco, J.
We use signal and image theory based algorithms to estimate the number of wolves emitting howls or barks in a given field recording, as an alternative to traditional trace-collecting methodologies for counting individuals. We proceed in two steps. Firstly, we clean and enhance the signal by applying PDE-based image processing algorithms to the signal spectrogram. Secondly, assuming that the wolves' chorus may be modelled as a sum of nonlinear chirps, we use the quadratic energy distribution corresponding to the Chirplet Transform of the signal to produce estimates of the corresponding instantaneous frequencies, chirp rates and amplitudes at each instant of the recording. We finally establish suitable criteria to decide how such estimates are connected in time.
Review on Microstructure Analysis of Metals and Alloys Using Image Analysis Techniques
NASA Astrophysics Data System (ADS)
Rekha, Suganthini; Bupesh Raja, V. K.
2017-05-01
Metals and alloys find vast application in engineering and domestic sectors. The mechanical properties of metals and alloys are influenced by their microstructure; hence microstructural investigation is critical. Traditionally the microstructure is studied using an optical microscope after suitable metallurgical preparation. Over the past few decades, computers have been applied to the capture and analysis of optical micrographs. The advent of software for digital image processing and computer vision is a boon to the analysis of microstructure. In this paper, a literature survey of the various developments in microstructural analysis is presented. The conventional optical microscope is complemented by the use of the Scanning Electron Microscope (SEM) and other high-end equipment.
Smart Cameras for Remote Science Survey
NASA Technical Reports Server (NTRS)
Thompson, David R.; Abbey, William; Allwood, Abigail; Bekker, Dmitriy; Bornstein, Benjamin; Cabrol, Nathalie A.; Castano, Rebecca; Estlin, Tara; Fuchs, Thomas; Wagstaff, Kiri L.
2012-01-01
Communication with remote exploration spacecraft is often intermittent and bandwidth is highly constrained. Future missions could use onboard science data understanding to prioritize downlink of critical features [1], draft summary maps of visited terrain [2], or identify targets of opportunity for followup measurements [3]. We describe a generic approach to classify geologic surfaces for autonomous science operations, suitable for parallelized implementations in FPGA hardware. We map these surfaces with texture channels - distinctive numerical signatures that differentiate properties such as roughness, pavement coatings, regolith characteristics, sedimentary fabrics and differential outcrop weathering. This work describes our basic image analysis approach and reports an initial performance evaluation using surface images from the Mars Exploration Rovers. Future work will incorporate these methods into camera hardware for real-time processing.
Field-Portable Pixel Super-Resolution Colour Microscope
Greenbaum, Alon; Akbari, Najva; Feizi, Alborz; Luo, Wei; Ozcan, Aydogan
2013-01-01
Based on partially-coherent digital in-line holography, we report a field-portable microscope that can render lensfree colour images over a wide field-of-view of e.g., >20 mm2. This computational holographic microscope weighs less than 145 grams with dimensions smaller than 17×6×5 cm, making it especially suitable for field settings and point-of-care use. In this lensfree imaging design, we merged a colorization algorithm with a source shifting based multi-height pixel super-resolution technique to mitigate ‘rainbow’ like colour artefacts that are typical in holographic imaging. This image processing scheme is based on transforming the colour components of an RGB image into YUV colour space, which separates colour information from brightness component of an image. The resolution of our super-resolution colour microscope was characterized using a USAF test chart to confirm sub-micron spatial resolution, even for reconstructions that employ multi-height phase recovery to handle dense and connected objects. To further demonstrate the performance of this colour microscope Papanicolaou (Pap) smears were also successfully imaged. This field-portable and wide-field computational colour microscope could be useful for tele-medicine applications in resource poor settings. PMID:24086742
An Evaluation of Feature Learning Methods for High Resolution Image Classification
NASA Astrophysics Data System (ADS)
Tokarczyk, P.; Montoya, J.; Schindler, K.
2012-07-01
Automatic image classification is one of the fundamental problems of remote sensing research. The classification problem is even more challenging in high-resolution images of urban areas, where the objects are small and heterogeneous. Two questions arise, namely which features to extract from the raw sensor data to capture the local radiometry and image structure at each pixel or segment, and which classification method to apply to the feature vectors. While classifiers are nowadays well understood, selecting the right features remains a largely empirical process. Here we concentrate on the features. Several methods are evaluated which allow one to learn suitable features from unlabelled image data by analysing the image statistics. In a comparative study, we evaluate unsupervised feature learning with different linear and non-linear learning methods, including principal component analysis (PCA) and deep belief networks (DBN). We also compare these automatically learned features with popular choices of ad-hoc features including raw intensity values, standard combinations like the NDVI, a few PCA channels, and texture filters. The comparison is done in a unified framework using the same images, the target classes, reference data and a Random Forest classifier.
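As an illustration of one such pipeline (unsupervised PCA feature learning followed by a Random Forest classifier), the sketch below uses scikit-learn; patch size, component count and tree count are illustrative and not the study's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def pca_rf_pipeline(patches, labels, n_components=16, n_trees=100):
    """Hedged sketch: learn features from raw image patches with
    unsupervised PCA, then classify the projected feature vectors with a
    Random Forest, as one instance of the compared feature/classifier
    combinations."""
    X = np.asarray(patches, dtype=float).reshape(len(patches), -1)  # flatten
    pca = PCA(n_components=n_components).fit(X)
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    clf.fit(pca.transform(X), labels)
    return pca, clf
```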
Field-portable pixel super-resolution colour microscope.
Greenbaum, Alon; Akbari, Najva; Feizi, Alborz; Luo, Wei; Ozcan, Aydogan
2013-01-01
Based on partially-coherent digital in-line holography, we report a field-portable microscope that can render lensfree colour images over a wide field-of-view of e.g., >20 mm(2). This computational holographic microscope weighs less than 145 grams with dimensions smaller than 17×6×5 cm, making it especially suitable for field settings and point-of-care use. In this lensfree imaging design, we merged a colorization algorithm with a source shifting based multi-height pixel super-resolution technique to mitigate 'rainbow' like colour artefacts that are typical in holographic imaging. This image processing scheme is based on transforming the colour components of an RGB image into YUV colour space, which separates colour information from brightness component of an image. The resolution of our super-resolution colour microscope was characterized using a USAF test chart to confirm sub-micron spatial resolution, even for reconstructions that employ multi-height phase recovery to handle dense and connected objects. To further demonstrate the performance of this colour microscope Papanicolaou (Pap) smears were also successfully imaged. This field-portable and wide-field computational colour microscope could be useful for tele-medicine applications in resource poor settings.
Retinal image quality assessment based on image clarity and content
NASA Astrophysics Data System (ADS)
Abdel-Hamid, Lamiaa; El-Rafei, Ahmed; El-Ramly, Salwa; Michelson, Georg; Hornegger, Joachim
2016-09-01
Retinal image quality assessment (RIQA) is an essential step in automated screening systems to avoid misdiagnosis caused by processing poor quality retinal images. A no-reference transform-based RIQA algorithm is introduced that assesses images based on five clarity and content quality issues: sharpness, illumination, homogeneity, field definition, and content. Transform-based RIQA algorithms have the advantage of considering retinal structures while being computationally inexpensive. Wavelet-based features are proposed to evaluate the sharpness and overall illumination of the images. A retinal saturation channel is designed and used along with wavelet-based features for homogeneity assessment. The presented sharpness and illumination features are utilized to assure adequate field definition, whereas color information is used to exclude nonretinal images. Several publicly available datasets of varying quality grades are utilized to evaluate the feature sets resulting in area under the receiver operating characteristic curve above 0.99 for each of the individual feature sets. The overall quality is assessed by a classifier that uses the collective features as an input vector. The classification results show superior performance of the algorithm in comparison to other methods from literature. Moreover, the algorithm addresses efficiently and comprehensively various quality issues and is suitable for automatic screening systems.
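A minimal sketch of a wavelet-based sharpness feature of the kind described, using PyWavelets; the actual feature definitions in the algorithm differ and are not reproduced here.

```python
import numpy as np
import pywt

def wavelet_sharpness(gray, wavelet="db4", level=2):
    """Hedged sketch of a transform-based sharpness feature: the fraction
    of total wavelet energy that falls in the detail subbands, which tends
    to be higher for sharp retinal images than for blurred ones."""
    coeffs = pywt.wavedec2(np.asarray(gray, dtype=float), wavelet, level=level)
    approx = coeffs[0]
    detail_energy = sum(float((d ** 2).sum())
                        for lvl in coeffs[1:] for d in lvl)
    total_energy = detail_energy + float((approx ** 2).sum())
    return detail_energy / (total_energy + 1e-12)
```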
NASA Astrophysics Data System (ADS)
Cao, Ning; Liang, Xuwei; Zhuang, Qi; Zhang, Jun
2009-02-01
Magnetic Resonance Imaging (MRI) techniques have achieved much importance in providing visual and quantitative information about the human body. Diffusion MRI is the only non-invasive tool for obtaining information about the neural fiber networks of the human brain. Traditional Diffusion Tensor Imaging (DTI) is only capable of characterizing Gaussian diffusion; High Angular Resolution Diffusion Imaging (HARDI) extends this ability to model more complex diffusion processes. A spherical harmonic series truncated to a certain degree is used in recent studies to describe the measured non-Gaussian Apparent Diffusion Coefficient (ADC) profile. In this study, we use the sampling theorem for band-limited spherical harmonics to choose a suitable truncation degree for the spherical harmonic series in the sense of the Signal-to-Noise Ratio (SNR), and use Monte Carlo integration to compute the spherical harmonic transform of human brain data obtained with an icosahedral sampling scheme.
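A minimal sketch of the Monte Carlo spherical-harmonic transform, assuming the ADC profile is available as a callable f(theta, phi); SciPy's sph_harm convention (order m, degree l, azimuth, polar angle) is used, and the SNR-based choice of the truncation degree is not shown.

```python
import numpy as np
from scipy.special import sph_harm

def mc_sh_coefficients(f, lmax, n_samples=20000, seed=0):
    """Hedged sketch of Monte Carlo estimation of spherical-harmonic
    coefficients of an ADC profile f(theta, phi): draw points uniformly on
    the sphere and average f * conj(Y_lm), scaled by the sphere area 4*pi."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_samples)      # azimuth
    phi = np.arccos(rng.uniform(-1.0, 1.0, n_samples))    # polar angle
    vals = f(theta, phi)
    coeffs = {}
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            y = sph_harm(m, l, theta, phi)                # SciPy order: (m, l, az, polar)
            coeffs[(l, m)] = 4.0 * np.pi * np.mean(vals * np.conj(y))
    return coeffs
```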
NASA Technical Reports Server (NTRS)
Moses, J. Daniel
1989-01-01
Three improvements in photographic x-ray imaging techniques for solar astronomy are presented. The testing and calibration of a new film processor were conducted; the resulting product will allow photometric development of sounding rocket flight film immediately upon recovery at the missile range. Two fine-grained photographic films were calibrated and flight tested to provide alternative detector choices when the need for high resolution is greater than the need for high sensitivity. An analysis technique used to obtain the characteristic curve directly from photographs of UV solar spectra was applied to the analysis of soft x-ray photographic images. The resulting procedure provides a more complete and straightforward determination of the parameters describing the x-ray characteristic curve than previous techniques. These improvements fall into the category of refinements rather than revolutions, indicating the fundamental suitability of the photographic process for x-ray imaging in solar astronomy.
Selecting a digital camera for telemedicine.
Patricoski, Chris; Ferguson, A Stewart
2009-06-01
The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.
SU-G-BRA-01: A Real-Time Tumor Localization and Guidance Platform for Radiotherapy Using US and MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bednarz, B; Culberson, W; Bassetti, M
Purpose: To develop and validate a real-time motion management platform for radiotherapy that directly tracks tumor motion using ultrasound and MRI. This will be a cost-effective and non-invasive real-time platform combining the excellent temporal resolution of ultrasound with the excellent soft-tissue contrast of MRI. Methods: A 4D planar ultrasound acquisition during treatment is coupled to a pre-treatment calibration training image set consisting of a simultaneous 4D ultrasound and 4D MRI acquisition. The image sets will be rapidly matched using advanced image and signal processing algorithms, allowing the display of virtual MR images of the tumor/organ motion in real time from an ultrasound acquisition. Results: The completion of this work will result in several innovations, including: a (2D) patch-like, MR- and LINAC-compatible 4D planar ultrasound transducer that is electronically steerable for hands-free operation to provide real-time virtual MR and ultrasound imaging for motion management during radiation therapy; a multi-modal tumor localization strategy that uses ultrasound and MRI; and fast and accurate image processing algorithms that provide real-time information about the motion and location of the tumor or related soft-tissue structures within the patient. Conclusion: If successful, the proposed approach will provide real-time guidance for radiation therapy without degrading image or treatment plan quality. The approach would be equally suitable for image-guided proton beam or heavy ion-beam therapy. This work is partially funded by NIH grant R01CA190298.
Computerized tomography using video recorded fluoroscopic images
NASA Technical Reports Server (NTRS)
Kak, A. C.; Jakowatz, C. V., Jr.; Baily, N. A.; Keller, R. A.
1975-01-01
A computerized tomographic imaging system is examined which employs video-recorded fluoroscopic images as input data. By hooking the video recorder to a digital computer through a suitable interface, such a system permits very rapid construction of tomograms.
The determination of high-resolution spatio-temporal glacier motion fields from time-lapse sequences
NASA Astrophysics Data System (ADS)
Schwalbe, Ellen; Maas, Hans-Gerd
2017-12-01
This paper presents a comprehensive method for the determination of glacier surface motion vector fields at high spatial and temporal resolution. These vector fields can be derived from monocular terrestrial camera image sequences and are a valuable data source for glaciological analysis of the motion behaviour of glaciers. The measurement concepts for the acquisition of image sequences are presented, and an automated monoscopic image sequence processing chain is developed. Motion vector fields can be derived with high precision by applying automatic subpixel-accuracy image matching techniques on grey value patterns in the image sequences. Well-established matching techniques have been adapted to the special characteristics of the glacier data in order to achieve high reliability in automatic image sequence processing, including the handling of moving shadows as well as motion effects induced by small instabilities in the camera set-up. Suitable geo-referencing techniques were developed to transform image measurements into a reference coordinate system. The result of monoscopic image sequence analysis is a dense raster of glacier surface point trajectories for each image sequence. Each translation vector component in these trajectories can be determined with an accuracy of a few centimetres for points at a distance of several kilometres from the camera. Extensive practical validation experiments have shown that motion vector and trajectory fields derived from monocular image sequences can be used for the determination of high-resolution velocity fields of glaciers, including the analysis of tidal effects on glacier movement, the investigation of a glacier's motion behaviour during calving events, the determination of the position and migration of the grounding line and the detection of subglacial channels during glacier lake outburst floods.
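A minimal sketch of the core matching step, normalised cross-correlation with parabolic subpixel peak refinement, assuming small NumPy grey-value patches; the published processing chain adds shadow handling, outlier rejection and geo-referencing that are not reproduced here.

```python
import numpy as np

def subpixel_match(template, search):
    """Normalised cross-correlation of a template over a search window,
    with parabolic interpolation of the correlation peak for subpixel
    accuracy (illustrative, slow pure-NumPy loops)."""
    th, tw = template.shape
    sh, sw = search.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    scores = np.full((sh - th + 1, sw - tw + 1), -1.0)
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            patch = search[r:r + th, c:c + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            scores[r, c] = float((t * p).mean())
    r0, c0 = np.unravel_index(np.argmax(scores), scores.shape)

    def refine(s_m, s_0, s_p):               # 1-D parabolic peak refinement
        denom = s_m - 2.0 * s_0 + s_p
        return 0.0 if abs(denom) < 1e-12 else 0.5 * (s_m - s_p) / denom

    dr = refine(*scores[r0 - 1:r0 + 2, c0]) if 0 < r0 < scores.shape[0] - 1 else 0.0
    dc = refine(*scores[r0, c0 - 1:c0 + 2]) if 0 < c0 < scores.shape[1] - 1 else 0.0
    return r0 + dr, c0 + dc                   # displacement of the best match
```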
A low-power small-area ADC array for IRFPA readout
NASA Astrophysics Data System (ADS)
Zhong, Shengyou; Yao, Libin
2013-09-01
The readout integrated circuit (ROIC) is a bridge between the infrared focal plane array (IRFPA) and the image processing circuit in an infrared imaging system. The ROIC is the first part of the signal processing chain and is connected directly to the detectors, so its performance greatly affects the detector and even the whole imaging system. With the development of CMOS technologies, it is possible to digitize the signal inside the ROIC and develop a digital ROIC. A digital ROIC can reduce the complexity of the whole system and improve system reliability. More importantly, it can accommodate a variety of digital signal processing techniques that a traditional analog ROIC cannot. The analog-to-digital converter (ADC) is the most important building block in the digital ROIC. The requirements for ADCs inside the ROIC are low power, high dynamic range and small area. In this paper we propose an RC hybrid Successive Approximation Register (SAR) ADC as the column ADC for a digital ROIC. In the proposed ADC structure, a resistor ladder is used to generate several reference voltages. The proposed RC hybrid structure not only reduces the area of the capacitor array but also relaxes the requirement for capacitor array matching. Theoretical analysis and simulation show that the RC hybrid SAR ADC is suitable for ADC array applications.
Flexible ultrathin-body single-photon avalanche diode sensors and CMOS integration.
Sun, Pengfei; Ishihara, Ryoichi; Charbon, Edoardo
2016-02-22
We proposed the world's first flexible ultrathin-body single-photon avalanche diode (SPAD) as a photon-counting device, providing a suitable solution for advanced implantable bio-compatible chronic medical monitoring, diagnostics and other applications. In this paper, we investigate the Geiger-mode performance of this flexible ultrathin-body SPAD comprehensively and extend this work to the first flexible SPAD image sensor with in-pixel and off-pixel electronics integrated in CMOS. Experimental results show that the dark count rate (DCR) due to band-to-band tunneling can be reduced by optimizing the multiplication doping. DCR due to trap-assisted avalanche, which is believed to originate from the trench etching process, could be further reduced, resulting in a DCR density of tens to hundreds of Hertz per square micrometer at cryogenic temperature. The influence of the trench etching process on DCR is also demonstrated by comparison with planar ultrathin-body SPAD structures without trenches. Higher photon detection probability (PDP) can be achieved with wider depletion and drift regions and by carefully optimizing the body thickness. PDP in frontside-illumination (FSI) and backside-illumination (BSI) modes is comparable, making this technology suitable for both modes of illumination. Afterpulsing and crosstalk are negligible at a 2 µs dead time, while it has been proved, for the first time, that a CMOS SPAD pixel of this kind can work in a cryogenic environment. With an appropriate choice of substrate, this technology is amenable to implantation for biocompatible photon-counting applications and wherever bent imaging sensors are essential.
Geometric Characterization of Multi-Axis Multi-Pinhole SPECT
DiFilippo, Frank P.
2008-01-01
A geometric model and calibration process are developed for SPECT imaging with multiple pinholes and multiple mechanical axes. Unlike the typical situation where pinhole collimators are mounted directly to rotating gamma ray detectors, this geometric model allows for independent rotation of the detectors and pinholes, for the case where the pinhole collimator is physically detached from the detectors. This geometric model is applied to a prototype small animal SPECT device with a total of 22 pinholes and which uses dual clinical SPECT detectors. All free parameters in the model are estimated from a calibration scan of point sources and without the need for a precision point source phantom. For a full calibration of this device, a scan of four point sources with 360° rotation is suitable for estimating all 95 free parameters of the geometric model. After a full calibration, a rapid calibration scan of two point sources with 180° rotation is suitable for estimating the subset of 22 parameters associated with repositioning the collimation device relative to the detectors. The high accuracy of the calibration process is validated experimentally. Residual differences between predicted and measured coordinates are normally distributed with 0.8 mm full width at half maximum and are estimated to contribute 0.12 mm root mean square to the reconstructed spatial resolution. Since this error is small compared to other contributions arising from the pinhole diameter and the detector, the accuracy of the calibration is sufficient for high resolution small animal SPECT imaging. PMID:18293574
Evaluation of Sparfloxacin Distribution by Mass Spectrometry Imaging in a Phototoxicity Model
NASA Astrophysics Data System (ADS)
Boudon, Stéphanie Marie; Morandi, Grégory; Prideaux, Brendan; Staab, Dieter; Junker, Ursula; Odermatt, Alex; Stoeckli, Markus; Bauer, Daniel
2014-10-01
Mass spectrometry imaging (MSI) was applied to samples from mouse skin and from a human in vitro 3D skin model in order to assess its suitability in the context of photosafety evaluation. MSI proved to be a suitable method for the detection of the model compound sparfloxacin in biological tissues following systemic administration (oral gavage, 100 mg/kg) and subsequent exposure to simulated sunlight. In the human in vitro 3D skin model, a concentration-dependent increase as well as an irradiation-dependent decrease of sparfloxacin was observed. The MSI data on samples from mouse skin showed high signals of sparfloxacin 8 h after dosing. In contrast, animals irradiated with simulated sunlight showed significantly lower signals for sparfloxacin starting already at 1 h postirradiation, with no measurable intensity at the later time points (3 h and 6 h), suggesting a time- and irradiation-dependent degradation of sparfloxacin. The acquisition resolution of 100 μm proved to be adequate for visualizing the distribution of sparfloxacin in the gross ear tissue samples, but distinct skin compartments could not be resolved. The label-free detection of intact sparfloxacin was only the first step in an attempt to gain a deeper understanding of the phototoxic processes. Further work is needed to identify the degradation products of sparfloxacin implicated in the observed inflammatory processes in order to better understand the origin and the mechanism of the phototoxic reaction.
Real-time restoration of white-light confocal microscope optical sections
Balasubramanian, Madhusudhanan; Iyengar, S. Sitharama; Beuerman, Roger W.; Reynaud, Juan; Wolenski, Peter
2009-01-01
Confocal microscopes (CM) are routinely used for building 3-D images of microscopic structures. Nonideal imaging conditions in a white-light CM introduce additive noise and blur. The optical section images need to be restored prior to quantitative analysis. We present an adaptive noise filtering technique using the Karhunen–Loève expansion (KLE) by the method of snapshots, together with a ringing metric to quantify the ringing artifacts introduced in the images restored at various iterations of the iterative Lucy–Richardson deconvolution algorithm. The KLE provides a set of basis functions that comprise the optimal linear basis for an ensemble of empirical observations. We show that most of the noise in the scene can be removed by reconstructing the images using the KLE basis vector with the largest eigenvalue. The prefiltering scheme presented is faster and does not require prior knowledge about image noise. Optical sections processed using the KLE prefilter can be restored using a simple inverse restoration algorithm; thus, the methodology is suitable for real-time image restoration applications. The KLE image prefilter outperforms the temporal-average prefilter in restoring CM optical sections. The ringing metric developed uses simple binary morphological operations to quantify the ringing artifacts and is consistent with the visual observation of ringing artifacts in the restored images. PMID:20186290
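To make the prefiltering idea concrete, here is a minimal sketch of a KLE (method-of-snapshots) prefilter that keeps only the basis vector with the largest eigenvalue; the variable names, the random test stack and the final averaging are assumptions, not the authors' implementation.

```python
# Minimal sketch of denoising an image stack with the Karhunen-Loeve expansion
# ("method of snapshots"), keeping only the dominant basis vector; the random
# test data and the averaging at the end are assumptions.
import numpy as np

def kle_prefilter(frames):
    """frames: (n_snapshots, H, W) stack of repeated noisy acquisitions."""
    n, h, w = frames.shape
    X = frames.reshape(n, -1)                 # each snapshot as a row vector
    mean = X.mean(axis=0)
    Xc = X - mean
    C = Xc @ Xc.T / n                         # small n x n snapshot covariance
    evals, evecs = np.linalg.eigh(C)          # eigenvalues in ascending order
    a = evecs[:, -1]                          # coefficients of the dominant mode
    phi = Xc.T @ a                            # dominant spatial basis vector
    phi /= np.linalg.norm(phi)
    recon = mean + np.outer(Xc @ phi, phi)    # project snapshots onto that mode
    return recon.mean(axis=0).reshape(h, w)   # averaged, denoised section

if __name__ == "__main__":
    noisy = np.random.rand(8, 64, 64)
    print(kle_prefilter(noisy).shape)         # (64, 64)
```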
2012-11-08
S48-E-013 (15 Sept 1991) --- The Upper Atmosphere Research Satellite (UARS) in the payload bay of the earth-orbiting Discovery. UARS is scheduled for deployment on flight day three of the STS-48 mission. Data from UARS will enable scientists to study ozone depletion in the stratosphere, or upper atmosphere. This image was transmitted by the Electronic Still Camera (ESC), Development Test Objective (DTO) 648. The ESC is making its initial appearance on a Space Shuttle flight. Electronic still photography is a new technology that enables a camera to electronically capture and digitize an image with resolution approaching film quality. The digital image is stored on removable hard disks or small optical disks, and can be converted to a format suitable for downlink transmission or enhanced using image processing software. The Electronic Still Camera (ESC) was developed by the Man-Systems Division at the Johnson Space Center and is the first model in a planned evolutionary development leading to a family of high-resolution digital imaging devices. H. Don Yeates, JSC's Man-Systems Division, is program manager for the ESC. THIS IS A SECOND GENERATION PRINT MADE FROM AN ELECTRONICALLY PRODUCED NEGATIVE.
Sliding Window-Based Region of Interest Extraction for Finger Vein Images
Yang, Lu; Yang, Gongping; Yin, Yilong; Xiao, Rongyang
2013-01-01
Region of Interest (ROI) extraction is a crucial step in an automatic finger vein recognition system. The aim of ROI extraction is to decide which part of the image is suitable for finger vein feature extraction. This paper proposes a finger vein ROI extraction method that is robust to finger displacement and rotation. First, we determine the middle line of the finger, which is used to correct the image skew. Then, a sliding window is used to detect the phalangeal joints and thereby ascertain the height of the ROI. Last, for the corrected image of a given height, we obtain the ROI by using the internal tangents of the finger edges as the left and right boundaries. The experimental results show that the proposed method extracts the ROI more accurately and effectively than other methods, and thus improves the performance of the finger vein identification system. In addition, to acquire high-quality finger vein images during the capture process, we propose eight capture criteria covering different aspects; these criteria should be helpful for finger vein capture. PMID:23507824
Meckes, Brian; Arce, Fernando Teran; Connelly, Laura S.; Lal, Ratnesh
2014-01-01
Biological membranes contain ion channels, which are nanoscale pores allowing controlled ionic transport and mediating key biological functions underlying normal/abnormal living. Synthetic membranes with defined pores are being developed to control various processes, including filtration of pollutants, charge transport for energy storage, and separation of fluids and molecules. Although ionic transport (currents) can be measured with single channel resolution, imaging their structure and ionic currents simultaneously is difficult. Atomic force microscopy enables high resolution imaging of nanoscale structures and can be modified to measure ionic currents simultaneously. Moreover, the ionic currents can also be used to image structures. A simple method for fabricating conducting AFM cantilevers to image pore structures at high resolution is reported. Tungsten microwires with nanoscale tips are insulated except at the apex. This allows simultaneous imaging via cantilever deflections in normal AFM force feedback mode as well as measurement of localized ionic currents. These novel probes measure ionic currents as small as a picoampere while providing nanoscale-resolution surface topography, and are suitable for measuring the ionic currents and conductance of biological ion channels. PMID:24663394
The Ilac-Project Supporting Ancient Coin Classification by Means of Image Analysis
NASA Astrophysics Data System (ADS)
Kavelar, A.; Zambanini, S.; Kampel, M.; Vondrovec, K.; Siegl, K.
2013-07-01
This paper presents the ILAC project, which aims at the development of an automated image-based classification system for ancient Roman Republican coins. The benefits of such a system are manifold: operating at the interface between computer vision and numismatics, ILAC can reduce the day-to-day workload of numismatists by assisting them in classification tasks and providing a preselection of suitable coin classes. This is especially helpful for large coin hoard findings comprising several thousand coins. Furthermore, the system could be implemented in an online platform for hobby numismatists, allowing them to access background information about their coin collection by simply uploading a photo of the obverse and reverse of the coin of interest. ILAC explores different computer vision techniques and their combinations for image-based coin recognition. Some of these methods, such as image matching, use the entire coin image in the classification process, while symbol or legend recognition exploits certain characteristics of the coin imagery. An overview of the methods explored so far and the respective experiments is given, as well as an outlook on the next steps of the project.
Strategies for the Segmentation of Subcutaneous Vascular Patterns in Thermographic Images
NASA Astrophysics Data System (ADS)
Chan, Eric K. Y.; Pearce, John A.
1989-05-01
Computer-assisted segmentation of vascular patterns in thermographic images provides the clinician with graphic outlines of thermally significant subcutaneous blood vessels. The segmentation strategies compared here consist of image smoothing protocols followed by thresholding and zero-crossing edge detectors. Median prefiltering followed by the Frei-Chen algorithm gave the most reproducible results, with an execution time of 143 seconds for 256 × 256 images. The Laplacian of Gaussian operator was not suitable due to streak artifacts in the thermographic imaging system. This computerized process may be adopted in a fast-paced clinical environment to aid in the diagnosis and assessment of peripheral circulatory diseases, Raynaud's disease, phlebitis, varicose veins, as well as diseases of the autonomic nervous system. The same methodology may be applied to enhance the appearance of abnormal breast vascular patterns, and hence serve as an adjunct to mammography in the diagnosis of breast cancer. The automatically segmented vascular patterns, which have a hand-drawn appearance, may also be used as a data reduction precursor to higher-level pattern analysis and classification tasks.
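A minimal sketch of the best-performing pipeline reported above (median prefiltering followed by a Frei-Chen gradient operator and thresholding) might look as follows; the kernel normalization, filter size and threshold are assumptions rather than the paper's exact settings.

```python
# Hedged sketch: median prefiltering, Frei-Chen gradient masks, thresholding.
# Normalization and threshold value are assumptions, not the paper's settings.
import numpy as np
from scipy.ndimage import median_filter, convolve

SQRT2 = np.sqrt(2.0)
FC_Y = np.array([[1, SQRT2, 1], [0, 0, 0], [-1, -SQRT2, -1]]) / (2 + SQRT2)
FC_X = FC_Y.T

def segment_vessels(img, median_size=5, threshold=0.1):
    smoothed = median_filter(img.astype(float), size=median_size)  # median prefilter
    gx = convolve(smoothed, FC_X)
    gy = convolve(smoothed, FC_Y)
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold * magnitude.max()                 # binary edge map

if __name__ == "__main__":
    thermogram = np.random.rand(256, 256)
    print(segment_vessels(thermogram).sum(), "edge pixels")
```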
Rizk, Aurélien; Paul, Grégory; Incardona, Pietro; Bugarski, Milica; Mansouri, Maysam; Niemann, Axel; Ziegler, Urs; Berger, Philipp; Sbalzarini, Ivo F
2014-03-01
Detection and quantification of fluorescently labeled molecules in subcellular compartments is a key step in the analysis of many cell biological processes. Pixel-wise colocalization analyses, however, are not always suitable, because they do not provide object-specific information, and they are vulnerable to noise and background fluorescence. Here we present a versatile protocol for a method named 'Squassh' (segmentation and quantification of subcellular shapes), which is used for detecting, delineating and quantifying subcellular structures in fluorescence microscopy images. The workflow is implemented in freely available, user-friendly software. It works on both 2D and 3D images, accounts for the microscope optics and for uneven image background, computes cell masks and provides subpixel accuracy. The Squassh software enables both colocalization and shape analyses. The protocol can be applied in batch, on desktop computers or computer clusters, and it usually requires <1 min and <5 min for 2D and 3D images, respectively. Basic computer-user skills and some experience with fluorescence microscopy are recommended to successfully use the protocol.
Research and applications of infrared thermal imaging systems suitable for developing countries
NASA Astrophysics Data System (ADS)
Weili, Zhang; Danyu, Cai
1986-01-01
It is a common situation in most developing countries that the utilization ratio of energy sources is low, the service reliability of equipment is poor, the cost of installation and maintenance is high, the losses due to fire are heavy, and so on. Therefore, these countries are in urgent need of infrared thermal imaging techniques to improve energy saving, equipment diagnosis and fire searching. However, the infrared thermal imaging systems on the world market so far are not suitable for their use. This paper summarizes research on two-dimensional real-time infrared thermal imaging systems based on electron beam scanning and pyroelectric detection, as well as their industrial applications in China.
NASA Astrophysics Data System (ADS)
Basavarajappa, T. H.
2012-07-01
Landfill site selection is a complex process involving geological, hydrological, environmental and technical parameters as well as government regulations. As such, it requires the processing of a large amount of geospatial data. Landfill site selection techniques have been analyzed to identify their suitability. A Geographic Information System (GIS) is well suited to finding the best locations for such installations using multiple-criteria analysis. The use of artificial intelligence methods, such as expert systems, can also be very helpful in solid waste planning and management. Waste disposal and the associated pollution around major cities in Karnataka are important environmental problems, and Mysore is one of these major cities; proper landfill site selection is an effective way to control such pollution. The main aim is to develop a geographic information system to study land use/land cover, the natural drainage system, water bodies, the extents of villages around Mysore city, transportation, topography, geomorphology, lithology, structures, vegetation and forest information for landfill site selection. GIS combines spatial data (maps, aerial photographs and satellite images) with a quantitative, qualitative and descriptive information database, and can support a wide range of spatial queries. For the site selection of industrial waste and normal daily urban waste of a city, town or village, combining GIS with the Analytical Hierarchy Process (AHP) is more appropriate. This method is innovative because it establishes general indices to quantify overall environmental impact as well as individual indices for specific environmental components (i.e. surface water, groundwater, atmosphere, soil and human health). Since this method requires processing large quantities of spatial data, a suitable methodology was developed to automate the processes of establishing composite evaluation criteria, performing multiple-criteria analysis and carrying out spatial clustering. Layered data for assessing the feasibility of site selection in the study area under different criteria were obtained by integrating remote sensing and GIS. This methodology is also suitable for practical application in other cities.
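As an illustration of how AHP could be combined with such GIS criteria, the sketch below derives criterion weights from a pairwise comparison matrix via its principal eigenvector; the 3x3 example matrix and the criteria it compares are invented, and the consistency-ratio check is only noted in a comment.

```python
# Hedged sketch of AHP criterion weighting via the principal eigenvector of a
# pairwise comparison matrix; the example matrix and criteria are invented.
import numpy as np

def ahp_weights(pairwise):
    evals, evecs = np.linalg.eig(pairwise)
    principal = np.real(evecs[:, np.argmax(np.real(evals))])
    weights = principal / principal.sum()
    # a consistency-ratio check on the comparison matrix would normally follow here
    return weights

if __name__ == "__main__":
    # invented comparisons: surface water vs. groundwater vs. soil
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])
    print(ahp_weights(A))   # relative weights of the three criteria
```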
[INVITED] Evaluation of process observation features for laser metal welding
NASA Astrophysics Data System (ADS)
Tenner, Felix; Klämpfl, Florian; Nagulin, Konstantin Yu.; Schmidt, Michael
2016-06-01
In the present study we show how fast the fluid dynamics change when the laser power is changed at different feed rates during laser metal welding. Using two high-speed cameras and a data acquisition system, we determine how fast the process must be imaged to measure the fluid dynamics with very high certainty. Our experiments show that not all process features that can be measured during laser welding represent the process behavior equally well. Despite the good visibility of the vapor plume, monitoring its movement is less suitable as an input signal for a closed-loop control. The features measured inside the keyhole show a good correlation with changes in the process parameters. Due to its low noise, the area of the keyhole opening is well suited as an input signal for a closed-loop control of the process.
Sroka-Bartnicka, Anna; Kimber, James A; Borkowski, Leszek; Pawlowska, Marta; Polkowska, Izabela; Kalisz, Grzegorz; Belcarz, Anna; Jozwiak, Krzysztof; Ginalska, Grazyna; Kazarian, Sergei G
2015-10-01
The spectroscopic approaches of FTIR imaging and Raman mapping were applied to the characterisation of a new carbon hydroxyapatite/β-glucan composite developed for bone tissue engineering. The composite is an artificial bone material with an apatite-forming ability for the bone repair process. Rabbit bone samples with implanted bioactive material were tested over a period of several months. Using spectroscopic and chemometric methods, we were able to determine the presence of amides and phosphates and the distribution of lipid-rich domains in the bone tissue, providing an assessment of the composite's bioactivity. Samples were also imaged in transmission using an infrared microscope combined with a focal plane array detector. CaF2 lenses were used on the infrared microscope to improve spectral quality by reducing scattering artefacts, thereby improving the chemometric analysis. The presence of collagen and lipids at the bone/composite interface confirmed biocompatibility and demonstrated the suitability of FTIR microscopic imaging with lenses for studying these samples. It also confirmed that the composite is a very good substrate for collagen growth and increases collagen maturity over the course of the bone growth process. The results indicate the bioactive and biocompatible properties of this composite and demonstrate how Raman and FTIR spectroscopic imaging can be used as an effective tool for tissue characterisation.
NASA Astrophysics Data System (ADS)
Schulze, Martin H.; Heuer, Henning
2012-04-01
Carbon fiber based materials are used in many lightweight applications in aeronautical, automotive, machine and civil engineering. With the increasing automation of the production process for CFRP laminates, manual optical inspection of each resin transfer molding (RTM) layer is not practicable. Because they are limited to surface inspection, optical systems cannot observe the quality parameters of multilayer, three-dimensional materials. Imaging eddy-current (EC) NDT is the only suitable inspection method for non-resin materials in the textile state that allows inspection of surface and hidden layers in parallel. The HF-ECI method has the capability to measure layer displacements (misaligned angle orientations) and gap sizes in a multilayer carbon fiber structure. The EC technique uses the variation of the electrical conductivity of carbon based materials to obtain material properties. Besides the determination of textural parameters such as layer orientation and gap sizes between rovings, the method can also detect foreign polymer particles and fuzzy balls and visualize undulations. For all of these typical parameters, an imaging classification process chain based on a high-resolution directional EC-imaging device named EddyCus® MPECS and a 2D FFT with adapted preprocessing algorithms is developed.
Real-time image processing for passive mmW imagery
NASA Astrophysics Data System (ADS)
Kozacik, Stephen; Paolini, Aaron; Bonnett, James; Harrity, Charles; Mackrides, Daniel; Dillon, Thomas E.; Martin, Richard D.; Schuetz, Christopher A.; Kelmelis, Eric; Prather, Dennis W.
2015-05-01
The transmission characteristics of millimeter waves (mmWs) make them suitable for many applications in defense and security, from airport preflight scanning to penetrating degraded visual environments such as brownout or heavy fog. While the cold sky provides sufficient illumination for these images to be taken passively in outdoor scenarios, this utility comes at a cost; the diffraction limit of the longer wavelengths involved leads to lower resolution imagery compared to the visible or IR regimes, and the low power levels inherent to passive imagery make the data more susceptible to degradation by noise. Recent techniques leveraging optical upconversion have shown significant promise, but are still subject to fundamental limits in resolution and signal-to-noise ratio. To address these issues we have applied techniques developed for visible and IR imagery to decrease noise and increase resolution in mmW imagery. We have developed these techniques into fieldable software, making use of GPU platforms for real-time operation of computationally complex image processing algorithms. We present data from a passive, 77 GHz, distributed-aperture imaging platform captured during field tests at full video rate. These videos demonstrate the increase in situational awareness that can be gained by applying computational techniques in real time without requiring changes in detection hardware.
SET: a pupil detection method using sinusoidal approximation
Javadi, Amir-Homayoun; Hakimi, Zahra; Barati, Morteza; Walsh, Vincent; Tcheang, Lili
2015-01-01
Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye-tracking devices calls for the development of analysis tools that enable non-technical researchers to process their image output. We have developed a fast and accurate method (known as “SET”) that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations (“Natural”); and images of less challenging indoor scenes (“CASIA-Iris-Thousand”). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low-cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library (“DLL”), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk). PMID:25914641
A robust embedded vision system feasible white balance algorithm
NASA Astrophysics Data System (ADS)
Wang, Yuan; Yu, Feihong
2018-01-01
White balance is a very important part of the color image processing pipeline. In order to meet the needs of efficiency and accuracy in an embedded machine vision processing system, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. First, in order to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the subsequent iterative method. After that, a bilinear interpolation algorithm is used to implement the demosaicing procedure. Finally, an adaptive step adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. In order to verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 is designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm avoids the color deviation problem effectively, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
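A hedged sketch of an iterative, gray-world-style white balance with an adaptive step, loosely following the three-part structure described above, is given below; the gain update rule, step size and stopping tolerance are assumptions, not the proposed algorithm.

```python
# Hedged sketch of an iterative gray-world white balance with an adaptive-style
# step; update rule, step size and tolerance are assumptions.
import numpy as np

def iterative_white_balance(rgb, max_iter=20, tol=1e-3, step=0.5):
    """rgb: float image (H, W, 3). Returns gain-corrected image."""
    gains = np.ones(3)
    for _ in range(max_iter):
        means = (rgb * gains).reshape(-1, 3).mean(axis=0)   # channel statistics
        target = means.mean()                               # gray-world target
        error = target / means - 1.0
        if np.abs(error).max() < tol:
            break
        gains *= 1.0 + step * error                         # adjust channel gains
    return np.clip(rgb * gains, 0.0, 1.0)

if __name__ == "__main__":
    img = np.clip(np.random.rand(120, 160, 3) * np.array([1.2, 1.0, 0.8]), 0, 1)
    balanced = iterative_white_balance(img)
    print(balanced.reshape(-1, 3).mean(axis=0))   # channel means roughly equalized
```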
NASA Astrophysics Data System (ADS)
Nasir, Ahmad Fakhri Ab; Suhaila Sabarudin, Siti; Majeed, Anwar P. P. Abdul; Ghani, Ahmad Shahrizan Abdul
2018-04-01
Chicken eggs are a food in high demand by humans. Human operators cannot work perfectly and continuously when conducting egg grading. Instead of an egg grading system based on weight measurement, an automatic system for egg grading using computer vision (based on egg shape parameters) can be used to improve the productivity of egg grading. However, an early hypothesis indicated that more eggs will change class when graded by shape parameters than when graded by weight. This paper presents a comparison of egg classification by the two above-mentioned methods. First, 120 images of chicken eggs of various grades (A–D) produced in Malaysia are captured. Then, the egg images are processed using image pre-processing techniques such as image cropping, smoothing and segmentation. Thereafter, eight egg shape features, including area, major axis length, minor axis length, volume, diameter and perimeter, are extracted. Lastly, feature selection (information gain ratio) and feature extraction (principal component analysis) are performed, and a k-nearest neighbour classifier is used in the classification process. Two methods, namely supervised learning (using the weight measurement as graded by the egg supplier) and unsupervised learning (using egg shape parameters as graded by ourselves), are used to conduct the experiment. Clustering results reveal many changes in egg classes after performing shape-based grading. On average, the best recognition result using the shape-based grading labels is 94.16%, while that using the weight-based labels is 44.17%. In conclusion, an automated egg grading system using computer vision performs better with shape-based features, since it works from images, whereas the weight parameter is better suited to a weight-based grading system.
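The classification stage described above could be sketched as follows with a PCA plus k-nearest-neighbour pipeline; the synthetic feature matrix, random labels and parameter values are placeholders, and the information-gain-ratio feature selection step is omitted.

```python
# Hedged sketch of the classification stage: shape features -> PCA -> kNN.
# The synthetic feature matrix and parameter values are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))          # 120 eggs x 8 shape features (area, axes, ...)
y = rng.integers(0, 4, size=120)       # grades A-D encoded as 0-3

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=4),
                    KNeighborsClassifier(n_neighbors=5))
print(cross_val_score(clf, X, y, cv=5).mean())   # recognition-rate estimate
```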
syris: a flexible and efficient framework for X-ray imaging experiments simulation.
Faragó, Tomáš; Mikulík, Petr; Ershov, Alexey; Vogelgesang, Matthias; Hänschke, Daniel; Baumbach, Tilo
2017-11-01
An open-source framework for conducting a broad range of virtual X-ray imaging experiments, syris, is presented. The simulated wavefield created by a source propagates through an arbitrary number of objects until it reaches a detector. The objects in the light path and the source are time-dependent, which enables simulations of dynamic experiments, e.g. four-dimensional time-resolved tomography and laminography. The high-level interface of syris is written in Python and its modularity makes the framework very flexible. The computationally demanding parts behind this interface are implemented in OpenCL, which enables fast calculations on modern graphics processing units. The combination of flexibility and speed opens new possibilities for studying novel imaging methods and systematic search of optimal combinations of measurement conditions and data processing parameters. This can help to increase the success rates and efficiency of valuable synchrotron beam time. To demonstrate the capabilities of the framework, various experiments have been simulated and compared with real data. To show the use case of measurement and data processing parameter optimization based on simulation, a virtual counterpart of a high-speed radiography experiment was created and the simulated data were used to select a suitable motion estimation algorithm; one of its parameters was optimized in order to achieve the best motion estimation accuracy when applied on the real data. syris was also used to simulate tomographic data sets under various imaging conditions which impact the tomographic reconstruction accuracy, and it is shown how the accuracy may guide the selection of imaging conditions for particular use cases.
Detection and display of acoustic window for guiding and training cardiac ultrasound users
NASA Astrophysics Data System (ADS)
Huang, Sheng-Wen; Radulescu, Emil; Wang, Shougang; Thiele, Karl; Prater, David; Maxwell, Douglas; Rafter, Patrick; Dupuy, Clement; Drysdale, Jeremy; Erkamp, Ramon
2014-03-01
Successful ultrasound data collection strongly relies on the skills of the operator. Among different scans, echocardiography is especially challenging as the heart is surrounded by ribs and lung tissue. Less experienced users might acquire compromised images because of suboptimal hand-eye coordination and less awareness of artifacts. Clearly, there is a need for a tool that can guide and train less experienced users to position the probe optimally. We propose to help users with hand-eye coordination by displaying lines overlaid on B-mode images. The lines indicate the edges of blockages (e.g., ribs) and are updated in real time according to movement of the probe relative to the blockages. They provide information about how probe positioning can be improved. To distinguish between blockage and acoustic window, we use coherence, an indicator of channel data similarity after applying focusing delays. Specialized beamforming was developed to estimate coherence. Image processing is applied to coherence maps to detect unblocked beams and the angle of the lines for display. We built a demonstrator based on a Philips iE33 scanner, from which beamsummed RF data and video output are transferred to a workstation for processing. The detected lines are overlaid on B-mode images and fed back to the scanner display to provide users real-time guidance. Using such information in addition to B-mode images, users will be able to quickly find a suitable acoustic window for optimal image quality, and improve their skill.
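To illustrate the kind of coherence measure that can separate acoustic windows from blockages, the sketch below computes a standard coherence factor from focusing-delayed channel data and thresholds its mean per beam; the array shapes, threshold and synthetic data are assumptions, not the specialized beamforming used in the demonstrator.

```python
# Hedged sketch of a per-beam coherence estimate: ratio of coherent-sum energy
# to incoherent-sum energy of delayed channel data; shapes and threshold are
# assumptions.
import numpy as np

def coherence_factor(channel_data):
    """channel_data: (n_channels, n_samples) focusing-delayed RF data for one beam."""
    coherent = np.abs(channel_data.sum(axis=0)) ** 2
    incoherent = (np.abs(channel_data) ** 2).sum(axis=0)
    n = channel_data.shape[0]
    return coherent / (n * incoherent + 1e-12)        # in [0, 1] per sample

def beam_is_blocked(channel_data, threshold=0.3):
    return coherence_factor(channel_data).mean() < threshold   # low coherence: rib/lung

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = np.tile(np.sin(np.linspace(0, 20, 512)), (64, 1)) + 0.1 * rng.normal(size=(64, 512))
    noisy = rng.normal(size=(64, 512))
    print(beam_is_blocked(clean), beam_is_blocked(noisy))   # expected: False, True
```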
Experimental measurement of cooling tower emissions using image processing of sensitive papers
NASA Astrophysics Data System (ADS)
Ruiz, J.; Kaiser, A. S.; Ballesta, M.; Gil, A.; Lucas, M.
2013-04-01
Cooling tower emissions are harmful for several reasons, such as air pollution, wetting, icing and solid particle deposition, but mainly because of human health hazards (e.g. Legionella). There are several methods for measuring drift drops. This paper is focussed on the sensitive paper technique, which is suitable in low-drift scenarios and real conditions. The lack of an automatic classification method motivated the development of a digital image processing algorithm for the sensitive paper method. This paper presents a detailed description of this method, in which drop-like elements are identified by means of the Canny edge detector combined with some morphological operations. Afterwards, the application of a J48 decision tree is proposed as one of the most relevant contributions. This classification method allows us to discern between stains whose origin is a drop and stains whose origin is not a drop. The method is applied to a real case and results are presented in terms of drift and PM10 emissions. This involves the calculation of the main features of the droplet distribution at the cooling tower exit surface in terms of drop size distribution data, the cumulative mass distribution curve and characteristic drop diameters. The Log-normal and Rosin-Rammler distribution functions have been fitted to the experimental data collected in the tests, and it can be concluded that the former is the more suitable of the functions tested (whereas the latter is less suitable). Realistic PM10 calculations include the measurement of drift emissions and Total Dissolved Solids as well as the size and number of drops. Results are compared with the method proposed by the U.S. Environmental Protection Agency, assessing its overestimation. Drift emissions were found to be 0.0517% of the recirculating water, which is above the Spanish standard limit (0.05%).
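A hedged sketch of the stain-detection front end (Canny edges plus morphological operations to isolate drop-like elements) is shown below; parameter values are assumptions and the J48 stain/non-stain classification step is omitted.

```python
# Hedged sketch: Canny edge detection plus morphological closing/filling to
# isolate drop-like elements on a scanned sensitive paper; parameters are
# assumptions and the decision-tree classification is omitted.
import numpy as np
from skimage import feature, morphology, measure
from scipy import ndimage

def detect_drop_candidates(gray, sigma=2.0, min_area=20):
    edges = feature.canny(gray, sigma=sigma)                # Canny edge detector
    closed = morphology.binary_closing(edges, morphology.disk(2))
    filled = ndimage.binary_fill_holes(closed)              # close stain contours
    labels = measure.label(filled)
    regions = [r for r in measure.regionprops(labels) if r.area >= min_area]
    return [(r.centroid, r.area) for r in regions]          # candidate stains

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    paper = rng.random((200, 200))
    paper[80:100, 80:100] = 0.0                             # synthetic dark stain
    print(len(detect_drop_candidates(paper)))
```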
NASA Technical Reports Server (NTRS)
Hammoudeh, Mona (Inventor); Flynn, Michael T. (Inventor); Gormly, Sherwin J. (Inventor); Richardson, Tra-My Justine (Inventor)
2017-01-01
A method and associated system for processing waste gases, liquids and solids, produced by human activity, to separate (i) liquids suitable for processing to produce potable water, (ii) solids and liquids suitable for construction of walls suitable for enclosing a habitat volume and for radiation shielding, and (iii) other fluids and solids that are not suitable for processing. A forward osmosis process and a reverse osmosis process are sequentially combined to reduce fouling and to permit accumulation of different processable substances. The invention may be used for long term life support of human activity.
Cell edge detection in JPEG2000 wavelet domain - analysis on sigmoid function edge model.
Punys, Vytenis; Maknickas, Ramunas
2011-01-01
Big virtual microscopy images (80K x 60K pixels and larger) are usually stored using the JPEG2000 image compression scheme. Diagnostic quantification based on image analysis might be faster if performed on the compressed data (approximately 20 times smaller than the original), which represent the coefficients of the wavelet transform. An analysis of possible edge detection without the reverse wavelet transform is presented in the paper. Two edge detection methods, suitable for JPEG2000 bi-orthogonal wavelets, are proposed. The methods are adjusted according to calculated parameters of a sigmoid edge model. The results of the model analysis indicate which method is more suitable for a given bi-orthogonal wavelet.
Rapid Prototyping Integrated With Nondestructive Evaluation and Finite Element Analysis
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Baaklini, George Y.
2001-01-01
Most reverse engineering approaches involve imaging or digitizing an object then creating a computerized reconstruction that can be integrated, in three dimensions, into a particular design environment. Rapid prototyping (RP) refers to the practical ability to build high-quality physical prototypes directly from computer aided design (CAD) files. Using rapid prototyping, full-scale models or patterns can be built using a variety of materials in a fraction of the time required by more traditional prototyping techniques (refs. 1 and 2). Many software packages have been developed and are being designed to tackle the reverse engineering and rapid prototyping issues just mentioned. For example, image processing and three-dimensional reconstruction visualization software such as Velocity2 (ref. 3) are being used to carry out the construction process of three-dimensional volume models and the subsequent generation of a stereolithography file that is suitable for CAD applications. Producing three-dimensional models of objects from computed tomography (CT) scans is becoming a valuable nondestructive evaluation methodology (ref. 4). Real components can be rendered and subjected to temperature and stress tests using structural engineering software codes. For this to be achieved, accurate high-resolution images have to be obtained via CT scans and then processed, converted into a traditional file format, and translated into finite element models. Prototyping a three-dimensional volume of a composite structure by reading in a series of two-dimensional images generated via CT and by using and integrating commercial software (e.g. Velocity2, MSC/PATRAN (ref. 5), and Hypermesh (ref. 6)) is being applied successfully at the NASA Glenn Research Center. The building process from structural modeling to the analysis level is outlined in reference 7. Subsequently, a stress analysis of a composite cooling panel under combined thermomechanical loading conditions was performed to validate this process.
Development of a universal medical X-ray imaging phantom prototype.
Groenewald, Annemari; Groenewald, Willem A
2016-11-08
Diagnostic X-ray imaging depends on the maintenance of image quality that allows for proper diagnosis of medical conditions. Maintenance of image quality requires quality assurance programs on the various X-ray modalities, which consist of projection radiography (including mobile X-ray units), fluoroscopy, mammography, and computed tomography (CT) scanning. Currently a variety of modality-specific phantoms are used to perform quality assurance (QA) tests. These phantoms are not only expensive, but suitably trained personnel are needed to successfully use them and interpret the results. The question arose as to whether a single universal phantom could be designed and applied to all of the X-ray imaging modalities. A universal phantom would reduce initial procurement cost, possibly reduce the time spent on QA procedures and simplify training of staff on the single device. The aim of the study was to design and manufacture a prototype of a universal phantom, suitable for image quality assurance in general X-rays, fluoroscopy, mammography, and CT scanning. The universal phantom should be easy to use and would enable automatic data analysis, pass/fail reporting, and corrective action recommendation. In addition, a universal phantom would especially be of value in low-income countries where finances and human resources are limited. The design process included a thorough investigation of commercially available phantoms. Image quality parameters necessary for image quality assurance in the different X-ray imaging modalities were determined. Based on information obtained from the above-mentioned investigations, a prototype of a universal phantom was developed, keeping ease of use and reduced cost in mind. A variety of possible phantom housing and insert materials were investigated, considering physical properties, machinability, and cost. A three-dimensional computer model of the first phantom prototype was used to manufacture the prototype housing and inserts. Some of the inserts were 3D-printed, others were machined from different materials. The different components were assembled to form the first prototype of the universal X-ray imaging phantom. The resulting prototype of the universal phantom conformed to the aims of a single phantom for multiple imaging modalities, which would be easy to use and manufacture at a reduced cost. A PCT International Patent Application No. PCT/IB2016/051165 has been filed for this technology. © 2016 The Authors.
A new template matching method based on contour information
NASA Astrophysics Data System (ADS)
Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong
2014-11-01
Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time consuming that they cannot be used in many real-time applications. Closed-contour matching is a popular kind of template matching. This paper presents a new closed-contour template matching method which is suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve the matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. In the process of model construction, triples and a distance image are obtained from the template image. A number of triples, each composed of three points, are created from the contour information extracted from the template image. The rule for selecting the three points is that they divide the template contour into three equal parts. The distance image is obtained by a distance transform. Each point on the distance image represents the nearest distance between the current point and the template contour. During matching, triples of the search image are created with the same rule as the triples of the model. Using the similarity between triangles, which is invariant to rotation, translation and scaling, the triples corresponding to the triples of the model are found. We can then obtain the initial RST (rotation, translation and scaling) parameters mapping the search contour to the template contour. In order to speed up the searching process, the points on the search contour are sampled to reduce the number of triples. To verify the RST parameters, the search contour is projected into the distance image, and the mean distance can be computed rapidly by simple operations of addition and multiplication. In the fine searching process, the initial RST parameters are refined over a discrete set of candidate values to obtain the final accurate pose of the object. Experimental results show that the proposed method is reasonable and efficient, and can be used in many real-time applications.
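The verification step lends itself to a short sketch: a candidate RST pose is scored by transforming the search contour and averaging the template's distance image at the transformed points; the names, the synthetic square contour and the scoring convention are assumptions.

```python
# Hedged sketch of distance-image pose verification: small mean distance means
# a good match; names and the synthetic contour are assumptions.
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_distance_image(template_mask):
    """Distance from every pixel to the nearest template-contour pixel."""
    return distance_transform_edt(~template_mask)

def pose_score(contour_xy, scale, angle, tx, ty, dist_img):
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    pts = scale * contour_xy @ R.T + np.array([tx, ty])      # apply candidate RST
    rows = np.clip(np.round(pts[:, 1]).astype(int), 0, dist_img.shape[0] - 1)
    cols = np.clip(np.round(pts[:, 0]).astype(int), 0, dist_img.shape[1] - 1)
    return dist_img[rows, cols].mean()                        # mean contour distance

if __name__ == "__main__":
    mask = np.zeros((100, 100), dtype=bool)
    mask[30, 30:70] = mask[70, 30:70] = mask[30:70, 30] = mask[30:71, 70] = True  # square contour
    dist_img = build_distance_image(mask)
    square = np.array([[x, 30] for x in range(30, 70)] + [[70, y] for y in range(30, 70)])
    print(pose_score(square, 1.0, 0.0, 0.0, 0.0, dist_img))   # near 0 for the true pose
```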
The automated counting of beating rates in individual cultured heart cells.
Collins, G A; Dower, R; Walker, M J
1981-12-01
The effect of drugs on the beating rate of cultured heart cells can be monitored in a number of ways. The simultaneous automated measurement of beating rates of a number of cells allows drug effects to be rapidly quantified. A photoresistive detector placed on a television image of a cell, when coupled to operational amplifiers, gives binary signals that can be processed by a microprocessor. On this basis, we have devised a system that is capable of simultaneously monitoring the individual beating of six single cultured heart cells. A microprocessor automatically processes data obtained under different experimental conditions and records it in suitable descriptive formats such as dose-response curves and double reciprocal plots.
Neuromorphic vision sensors and preprocessors in system applications
NASA Astrophysics Data System (ADS)
Kramer, Joerg; Indiveri, Giacomo
1998-09-01
A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high-dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.
NASA Technical Reports Server (NTRS)
Feinstein, S. P.; Girard, M. A.
1979-01-01
An automated technique for measuring particle diameters and their spatial coordinates from holographic reconstructions is being developed. Preliminary tests on actual cold-flow holograms of impinging jets indicate that a suitable discriminant algorithm consists of a Fourier-Gaussian noise filter and a contour thresholding technique. This process identifies circular as well as noncircular objects. The desired objects (in this case, circular or possibly ellipsoidal) are then selected automatically from the above set and stored with their parametric representations. From this data, dropsize distributions as a function of spatial coordinates can be generated and combustion effects due to hardware and/or physical variables studied.
Tropical Tropospheric Ozone: New Insights from Remote Sensing and Field Studies
NASA Technical Reports Server (NTRS)
Thompson, Anne
1999-01-01
This talk will summarize our recent research in tropical tropospheric ozone studies in the field and from space. New tropospheric ozone and aerosol products from the TOMS (Total Ozone Mapping Spectrometer) satellite instrument will be highlighted (Hudson and Thompson, 1998; Thompson and Hudson, 1999). These are suitable for studying processes like ozone pollution resulting from biomass fires, seasonal and interannual variations and trends. Archived maps of tropospheric ozone over the tropics, from the Nimbus 7 observing period (1979-1992) are available in digital form at our website. Real-time processing of TOMS data has produced images of tropical tropospheric ozone (TTO) since early 1997, using Earth-Probe TOMS; these maps are also available on the homepage.
[Digital imaging and robotics in endoscopic surgery].
Go, P M
1998-05-23
The introduction of endoscopic surgery has, among other things, influenced technical developments in surgery. Owing to digitalisation, major progress will be made in imaging and in the sophisticated technology sometimes called robotics. Digital storage makes the results of imaging diagnostics (e.g. the results of radiological examination) suitable for transmission via video conference systems for telediagnostic purposes. The availability of digital video technique also makes possible the processing, storage and retrieval of moving images. During endoscopic operations, a robot arm may be used to replace the camera operator. The arm does not grow tired and provides a stable image. The surgeon himself can operate or address the arm, and it can remember fixed image positions to which it can return on command. The next step is to carry out surgical manipulations via a robot arm. This may make operations more patient-friendly. A robot arm can also be operated by remote control: telerobotics. At the Internet site of this journal a number of supplements to this article can be found, for instance three-dimensional (3D) illustrations (which is the purpose of the 3D spectacles enclosed with this issue) and a quiz (http:@appendix.niwi. knaw.nl).
NASA Astrophysics Data System (ADS)
Birkfellner, Wolfgang; Seemann, Rudolf; Figl, Michael; Hummel, Johann; Ede, Christopher; Homolka, Peter; Yang, Xinhui; Niederer, Peter; Bergmann, Helmar
2005-05-01
3D/2D registration, the automatic assignment of a global rigid-body transformation matching the coordinate systems of patient and preoperative volume scan using projection images, is an important topic in image-guided therapy and radiation oncology. A crucial part of most 3D/2D registration algorithms is the fast computation of digitally rendered radiographs (DRRs) to be compared iteratively to radiographs or portal images. Since registration is an iterative process, fast generation of DRRs (which are perspective summed voxel renderings) is desired. In this note, we present a simple and rapid method for generating DRRs based on splat rendering. As opposed to conventional splatting, antialiasing of the resulting images is not achieved by computing a discrete point spread function (a so-called footprint), but by stochastic distortion of the voxel positions in the volume scan or by simulation of an x-ray tube focal spot with non-zero diameter. Our method generates slightly blurred DRRs suitable for registration purposes at frame rates of approximately 10 Hz when rendering volume images with a size of 30 MB.
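A simplified sketch of a splat-style DRR, with random jitter of voxel positions standing in for the stochastic antialiasing idea, is given below; the pinhole geometry, parameter names and test volume are assumptions and do not reproduce the authors' renderer.

```python
# Hedged sketch of a splat-style DRR: project each voxel through a pinhole model
# and accumulate its value on the detector; geometry and parameters are assumptions.
import numpy as np

def splat_drr(volume, spacing, src_dist, det_dist, det_shape, det_pitch, jitter=0.5):
    """Perspective summed-voxel rendering onto a virtual detector."""
    zi, yi, xi = np.nonzero(volume > 0)
    vals = volume[zi, yi, xi]
    coords = np.stack([xi, yi, zi], axis=1).astype(float) * spacing
    coords += jitter * (np.random.rand(*coords.shape) - 0.5) * spacing   # stochastic distortion
    coords[:, 2] += src_dist                       # place volume between source and detector
    mag = (src_dist + det_dist) / coords[:, 2]     # perspective magnification per voxel
    u = np.round(coords[:, 0] * mag / det_pitch).astype(int) + det_shape[1] // 2
    v = np.round(coords[:, 1] * mag / det_pitch).astype(int) + det_shape[0] // 2
    drr = np.zeros(det_shape)
    ok = (u >= 0) & (u < det_shape[1]) & (v >= 0) & (v < det_shape[0])
    np.add.at(drr, (v[ok], u[ok]), vals[ok])       # splat (accumulate) voxel values
    return drr

if __name__ == "__main__":
    vol = np.zeros((64, 64, 64)); vol[24:40, 24:40, 24:40] = 1.0
    print(splat_drr(vol, 1.0, 500.0, 500.0, (256, 256), 1.0).max())
```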
A coarse-to-fine approach for medical hyperspectral image classification with sparse representation
NASA Astrophysics Data System (ADS)
Chang, Lan; Zhang, Mengmeng; Li, Wei
2017-10-01
A coarse-to-fine approach with sparse representation is proposed for medical hyperspectral image classification in this work. A segmentation technique with different scales is employed to exploit the edges of the input image: coarse super-pixel patches provide global classification information, while fine ones provide additional detail. Unlike a common RGB image, a hyperspectral image has many bands, which allows the cluster centers to be adjusted with higher precision. After segmentation, each super-pixel is classified by the recently developed sparse representation-based classification (SRC), which assigns a label to the testing samples in a local patch by means of a sparse linear combination of all the training samples. Furthermore, segmentation with multiple scales is employed because a single scale is not suitable for the complicated distribution of medical hyperspectral imagery. Finally, the classification results for different super-pixel sizes are fused by a fusion strategy, offering at least two benefits: (1) the final result is clearly superior to that of single-scale segmentation, and (2) the fusion process significantly simplifies the choice of scales. Experimental results using real medical hyperspectral images demonstrate that the proposed method outperforms the state-of-the-art SRC.
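The SRC step can be sketched as follows: a test spectrum is coded as a sparse combination of training spectra (here via orthogonal matching pursuit) and assigned to the class with the smallest class-wise reconstruction residual; the synthetic dictionary and sparsity level are assumptions.

```python
# Hedged sketch of sparse representation-based classification (SRC); the
# synthetic dictionary, sparsity level and solver choice are assumptions.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(D, labels, x, n_nonzero=10):
    """D: (n_bands, n_train) training dictionary; labels: (n_train,); x: (n_bands,)."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(D, x)
    alpha = omp.coef_
    residuals = {}
    for c in np.unique(labels):
        alpha_c = np.where(labels == c, alpha, 0.0)    # keep only class-c coefficients
        residuals[c] = np.linalg.norm(x - D @ alpha_c)
    return min(residuals, key=residuals.get)           # class with smallest residual

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    D = rng.normal(size=(50, 40))                      # 50 bands, 40 training spectra
    labels = np.repeat([0, 1], 20)
    x = D[:, 5] + 0.05 * rng.normal(size=50)           # noisy copy of a class-0 spectrum
    print(src_classify(D, labels, x))                  # expected: 0
```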
Geyer, Peter; Blank, Hilbert; Alheit, Horst
2006-03-01
The suitability of the storage phosphor plate system ACR 2000 RT (Eastman Kodak Corp., Rochester, MN, USA), which is intended for portal verification as well as portal simulation imaging in radiotherapy, had to be demonstrated by comparison with a highly sensitive verification film. The comparison included portal verification images of different regions (head and neck, thorax, abdomen, and pelvis) irradiated with 6- and 15-MV photons and electrons. Each portal verification image was acquired with both the storage screen and the EC film, using the EC-L cassettes (both: Eastman Kodak Corp., Rochester, MN, USA) for the two systems. The soft-tissue and bony contrast and the brightness were evaluated and compared in a ranking of the two images. Different phantoms were irradiated to investigate the high- and low-contrast resolution. With quality assurance applications in mind, the brief exposure of the unpacked, irradiated storage screen to green and red room lasers was also investigated. In general, the quality of the processed ACR images was slightly higher than that of the films, mostly due to cases of insufficient film exposure. The storage screen was able to verify electron portals even for low electron energies with only minor photon contamination. The laser lines were sharply and clearly visible on the ACR images. The ACR system may replace film without any noticeable decrease in image quality, thereby reducing processing time, saving film costs and avoiding incorrect exposures.
A cometary ion mass spectrometer
NASA Technical Reports Server (NTRS)
Shelley, E. G.; Simpson, D. A.
1984-01-01
The development of flight-suitable analyzer units for the part of the GIOTTO Ion Mass Spectrometer (IMS) experiment designated the High Energy Range Spectrometer (HERS) is discussed. Topics covered include: design of the total ion-optical system for the HERS analyzer; preparation of the design of the analyzing magnet; evaluation of microchannel plate detectors and associated two-dimensional anode arrays; and fabrication and evaluation of two flight-suitable units of the complete ion-optical analyzer system, including two-dimensional imaging detectors and associated image encoding electronics.
Sharp, G C; Kandasamy, N; Singh, H; Folkert, M
2007-10-07
This paper shows how to significantly accelerate cone-beam CT reconstruction and 3D deformable image registration using the stream-processing model. We describe data-parallel designs for the Feldkamp, Davis and Kress (FDK) reconstruction algorithm and the demons deformable registration algorithm, suitable for use on a commodity graphics processing unit. The streaming versions of these algorithms are implemented using the Brook programming environment and executed on an NVidia 8800 GPU. Performance results using CT data of a preserved swine lung indicate that the GPU-based implementations of the FDK and demons algorithms achieve a substantial speedup: up to 80 times for FDK and 70 times for demons when compared to an optimized reference implementation on a 2.8 GHz Intel processor. In addition, the accuracy of the GPU-based implementations was found to be excellent. Compared with CPU-based implementations, the RMS differences were less than 0.1 Hounsfield unit for reconstruction and less than 0.1 mm for deformable registration.
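For orientation, the sketch below shows one iteration of the classic Thirion demons update for 2D images, whose per-pixel arithmetic is what makes the algorithm map well onto GPUs; this is a textbook CPU sketch, not the authors' Brook/GPU implementation.

```python
# Hedged sketch of one classic demons iteration in 2D; per-pixel updates plus
# Gaussian regularization. Not the authors' GPU implementation.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, moving, ux, uy, sigma=1.5, eps=1e-6):
    gy, gx = np.gradient(fixed)
    rows, cols = np.indices(fixed.shape).astype(float)
    warped = map_coordinates(moving, [rows + uy, cols + ux], order=1, mode="nearest")
    diff = warped - fixed
    denom = gx ** 2 + gy ** 2 + diff ** 2 + eps
    ux -= diff * gx / denom                      # per-pixel (data-parallel) update
    uy -= diff * gy / denom
    return gaussian_filter(ux, sigma), gaussian_filter(uy, sigma)   # regularize field

if __name__ == "__main__":
    fixed = np.zeros((64, 64)); fixed[20:40, 20:40] = 1.0
    moving = np.roll(fixed, 3, axis=1)
    ux = np.zeros_like(fixed); uy = np.zeros_like(fixed)
    for _ in range(50):
        ux, uy = demons_step(fixed, moving, ux, uy)
    rows, cols = np.indices(fixed.shape).astype(float)
    warped = map_coordinates(moving, [rows + uy, cols + ux], order=1, mode="nearest")
    print(np.abs(warped - fixed).mean())   # remaining mismatch, well below the initial value
```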
An Asymmetric Image Encryption Based on Phase Truncated Hybrid Transform
NASA Astrophysics Data System (ADS)
Khurana, Mehak; Singh, Hukum
2017-09-01
To enhance the security of the system and to protect it from attackers, this paper proposes a new asymmetric cryptosystem based on a hybrid approach of the Phase-Truncated Fourier and Discrete Cosine Transform (PTFDCT), which adds nonlinearity by including cube and cube-root operations in the encryption and decryption paths, respectively. In this cryptosystem, random phase masks are used as encryption keys, the phase masks generated after the cube operation in the encryption process are reserved as decryption keys, and a cube-root operation is required to decrypt the image in the decryption process. The cube and cube-root operations introduced in the encryption and decryption paths make the system resistant to standard attacks. The robustness of the proposed cryptosystem has been analysed and verified on the basis of various parameters by simulation in MATLAB 7.9.0 (R2008a). Experimental results are provided to highlight the effectiveness and suitability of the proposed cryptosystem and to show that the system is secure.
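A heavily simplified, single-stage sketch of phase-truncated Fourier encryption with a cube nonlinearity is given below; it keeps the truncated amplitude as ciphertext and the phase as the decryption key, and it does not reproduce the paper's hybrid DCT stage or exact key arrangement.

```python
# Hedged, simplified sketch of phase-truncated Fourier encryption with a cube
# nonlinearity on the truncated amplitude; not the paper's full PTFDCT scheme.
import numpy as np

def encrypt(img, rpm):
    spectrum = np.fft.fft2(img * rpm)                  # random phase mask as encryption key
    amp, phase = np.abs(spectrum), np.exp(1j * np.angle(spectrum))
    cipher = amp ** 3                                  # cube operation in the encryption path
    return cipher, phase                               # truncated phase kept as decryption key

def decrypt(cipher, phase_key, rpm):
    amp = cipher ** (1.0 / 3.0)                        # cube-root operation in decryption
    return np.abs(np.fft.ifft2(amp * phase_key) / rpm)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    img = rng.random((64, 64))
    rpm = np.exp(2j * np.pi * rng.random((64, 64)))
    cipher, key = encrypt(img, rpm)
    print(np.allclose(decrypt(cipher, key, rpm), img))   # True
```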
NASA Astrophysics Data System (ADS)
Aycock, Kenneth I.; Hariharan, Prasanna; Craven, Brent A.
2017-11-01
For decades, the study of biomedical fluid dynamics using optical flow visualization and measurement techniques has been limited by the inability to fabricate transparent physical models that realistically replicate the complex morphology of biological lumens. In this study, we present an approach for producing optically transparent anatomical models that are suitable for particle image velocimetry (PIV) using a common 3D inkjet printing process (PolyJet) and stock resin (VeroClear). By matching the index of refraction of the VeroClear material using a room-temperature mixture of water, sodium iodide, and glycerol, and by printing the part in an orientation such that the flat, optical surfaces are at an approximately 45° angle to the build plane, we overcome the challenges associated with using this 3D printing technique for PIV. Here, we summarize our methodology and demonstrate the process and the resultant PIV measurements of flow in an optically transparent anatomical model of the human inferior vena cava.
Some error bounds for K-iterated Gaussian recursive filters
NASA Astrophysics Data System (ADS)
Cuomo, Salvatore; Galletti, Ardelio; Giunta, Giulio; Marcellino, Livia
2016-10-01
Recursive filters (RFs) have achieved a central role in several research fields over the last few years. For example, they are used in image processing, in data assimilation and in electrocardiogram denoising. In particular, among RFs, Gaussian RFs are an efficient computational tool for approximating Gaussian-based convolutions and are suitable for digital image processing and applications of scale-space theory. As is common knowledge, Gaussian RFs applied to signals with support in a finite domain generate distortions and artifacts, mostly localized at the boundaries. Heuristic and theoretical improvements have been proposed in the literature to deal with this issue (namely, boundary conditions). They include the case in which a Gaussian RF is applied more than once, i.e. the so-called K-iterated Gaussian RFs. In this paper, starting from a summary of the comprehensive mathematical background, we consider the case of the K-iterated first-order Gaussian RF and provide a study of its numerical stability and some component-wise theoretical error bounds.
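A minimal sketch of a K-iterated first-order Gaussian RF (each iteration being a forward sweep followed by a backward sweep) is shown below; the smoothing coefficient alpha is left free, and its relation to the Gaussian sigma and to K, as well as the boundary treatment analysed in the paper, are not reproduced.

```python
# Hedged sketch of a K-iterated first-order Gaussian recursive filter; the
# coefficient alpha and the (default) boundary handling are assumptions.
import numpy as np

def gaussian_rf(signal, alpha, K=1):
    y = np.asarray(signal, dtype=float).copy()
    for _ in range(K):
        # forward sweep: y[i] = (1 - alpha) * y[i] + alpha * y[i - 1]
        for i in range(1, len(y)):
            y[i] = (1.0 - alpha) * y[i] + alpha * y[i - 1]
        # backward sweep: y[i] = (1 - alpha) * y[i] + alpha * y[i + 1]
        for i in range(len(y) - 2, -1, -1):
            y[i] = (1.0 - alpha) * y[i] + alpha * y[i + 1]
    return y

if __name__ == "__main__":
    impulse = np.zeros(101); impulse[50] = 1.0
    print(gaussian_rf(impulse, alpha=0.6, K=4).round(3)[45:56])  # bell-shaped response
```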
Towards Image Documentation of Grave Coverings and Epitaphs for Exhibition Purposes
NASA Astrophysics Data System (ADS)
Pomaska, G.; Dementiev, N.
2015-08-01
Epitaphs and memorials, as immovable items in sacred spaces, provide with their inscriptions valuable historical documents. Today, photography is not the only suitable presentation material for cultural assets in museums. Computer vision and photogrammetry provide methods for recording, 3D modelling and rendering under artificial light conditions, as well as further options for the analysis and investigation of artistry. For exhibition purposes, epitaphs have been recorded with the structure-from-motion (SfM) method. A comparison of different SfM software distributions could thereby be worked out. The question of whether open-source software is suitable for the mesh processing chain, from modelling up to display on computer monitors, should be answered. A Raspberry Pi, a computer built in SoC technology, works as a media server under Linux running Python scripts. Will the little computer meet the requirements of a museum, and is the handling comfortable enough for staff and visitors? This contribution reports on the case study.
Chemical Characterization of Bed Material Coatings by LA-ICP-MS and SEM-EDS
NASA Astrophysics Data System (ADS)
Piispanen, M. H.; Mustonen, A. J.; Tiainen, M. S.; Laitinen, R. S.
Bed material coatings and the consequent agglomeration of bed material are the main ash-related problems in FB boilers. Bed agglomeration is a particular problem when combusting biofuels and waste materials. Whereas SEM-EDS together with automated image processing has proven to be a convenient method to study the compositional distribution in coating layers and agglomerates, it is a relatively expensive technique and is not necessarily widely available. In this contribution, we explore the suitability of LA-ICP-MS to provide analogous information about the bed.
A thermophone on porous polymeric substrate
NASA Astrophysics Data System (ADS)
Chitnis, G.; Kim, A.; Song, S. H.; Jessop, A. M.; Bolton, J. S.; Ziaie, B.
2012-07-01
In this Letter, we present a simple, low-temperature method for fabricating a wide-band (>80 kHz) thermo-acoustic sound generator on a porous polymeric substrate. We were able to achieve up to 80 dB of sound pressure level with an input power of 0.511 W. No significant surface temperature increase was observed in the device even at an input power level of 2.5 W. Wide-band ultrasonic performance, simplicity of structure, and scalability of the fabrication process make this device suitable for many ranging and imaging applications.
Gbadebo, Adenowo A; Turitsyna, Elena G; Williams, John A R
2018-01-22
We demonstrate the design and fabrication of multichannel fibre Bragg gratings (FBGs) with aperiodic channel spacings. These will be suitable for the suppression of specific spectral lines such as OH emission lines in the near infrared (NIR) which degrade ground based astronomical imaging. We discuss the design process used to meet a given specification and the fabrication challenges that can give rise to errors in the final manufactured device. We propose and demonstrate solutions to meet these challenges.
A joint asymmetric watermarking and image encryption scheme
NASA Astrophysics Data System (ADS)
Boato, G.; Conotter, V.; De Natale, F. G. B.; Fontanari, C.
2008-02-01
Here we introduce a novel watermarking paradigm designed to be both asymmetric, i.e., involving a private key for embedding and a public key for detection, and commutative with a suitable encryption scheme, allowing both to cipher watermarked data and to mark encrypted data without interfering with the detection process. In order to demonstrate the effectiveness of the above principles, we present an explicit example where the watermarking part, based on elementary linear algebra, and the encryption part, exploiting a secret random permutation, are integrated in a commutative scheme.
Khuri-Yakub, B T; Oralkan, Omer; Nikoozadeh, Amin; Wygant, Ira O; Zhuang, Steve; Gencel, Mustafa; Choe, Jung Woo; Stephens, Douglas N; de la Rama, Alan; Chen, Peter; Lin, Feng; Dentinger, Aaron; Wildes, Douglas; Thomenius, Kai; Shivkumar, Kalyanam; Mahajan, Aman; Seo, Chi Hyung; O'Donnell, Matthew; Truong, Uyen; Sahn, David J
2010-01-01
Capacitive micromachined ultrasonic transducer (CMUT) arrays are conveniently integrated with front-end integrated circuits either monolithically or in a hybrid multichip form. This integration helps reduce the number of active data processing channels for 2D arrays. This approach also preserves the signal integrity for arrays with small elements. Therefore, CMUT arrays integrated with electronic circuits are well suited to implementing the miniaturized probes required for many intravascular, intracardiac, and endoscopic applications. This paper presents examples of miniaturized CMUT probes utilizing 1D, 2D, and ring arrays with integrated electronics.
Going fully digital: Perspective of a Dutch academic pathology lab
Stathonikos, Nikolas; Veta, Mitko; Huisman, André; van Diest, Paul J.
2013-01-01
In recent years, whole slide imaging has become more affordable and widely accepted in pathology labs. Digital slides are increasingly being used for digital archiving of routinely produced clinical slides, remote consultation and tumor boards, and quantitative image analysis for research purposes and education. However, the implementation of a fully digital Pathology Department requires an in-depth look into the suitability of digital slides for routine clinical use (the image quality of the produced digital slides and the factors that affect it) and the required infrastructure to support such use (the storage requirements and integration with lab management and hospital information systems). Optimization of the digital pathology workflow requires communication between several systems, which can be facilitated by the use of open standards for digital slide storage and scanner management. Consideration of these aspects, along with appropriate validation of the use of digital slides for routine pathology, can pave the way for pathology departments to go “fully digital.” In this paper, we summarize our experiences so far in the process of implementing a fully digital workflow at our Pathology Department and the steps that are needed to complete this process. PMID:23858390
Correction of projective distortion in long-image-sequence mosaics without prior information
NASA Astrophysics Data System (ADS)
Yang, Chenhui; Mao, Hongwei; Abousleman, Glen; Si, Jennie
2010-04-01
Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually leads to the inability to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor that is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. The proposed method is shown to be effective and suitable for real-time implementation.
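The following sketch illustrates the core step described above: approximate the inter-frame transformation with an affine model, estimate its overall scale from the singular values of the 2x2 linear part, and reset that scale to 1 so that pasted frames keep their original size. Taking the geometric mean of the two singular values as "the" scale is an illustrative choice, not necessarily the exact rule used by the authors.

```python
import numpy as np

def remove_affine_scale(A):
    """Given a 2x3 affine transform estimated between consecutive frames,
    deduce its overall scale via SVD of the 2x2 linear part and reset that
    scale to 1, so transformed frames keep their original size.

    The geometric mean of the singular values is used as the scale factor
    (an illustrative choice)."""
    L = A[:, :2]                              # 2x2 linear part (rotation/shear/scale)
    t = A[:, 2:]                              # translation column
    U, S, Vt = np.linalg.svd(L)
    scale = float(np.sqrt(S[0] * S[1]))       # overall scale, usually close to 1
    return np.hstack([L / scale, t]), scale   # affine with unit scale

if __name__ == "__main__":
    # Nearly rigid affine with a slight shrink (~0.97), as accumulates over long sequences.
    c, s = np.cos(0.02), np.sin(0.02)
    A = np.array([[0.97 * c, -0.97 * s, 12.3],
                  [0.97 * s,  0.97 * c, -4.1]])
    A_unit, scale = remove_affine_scale(A)
    print("estimated scale:", round(scale, 4))   # ~0.97, reset to 1 in A_unit
```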
Investigations of image fusion
NASA Astrophysics Data System (ADS)
Zhang, Zhong
1999-12-01
The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for the purpose of human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, are used to guide the fusion process. The idea of multiscale grouping is also introduced and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition- based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms. So we proposed a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time- varying images is employed with a coarse-to-fine multi- resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation- Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance. Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information of objects in the space monitored by the system.
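A minimal sketch of the generic multiscale-decomposition fusion idea referred to above, using PyWavelets: both co-registered source images are decomposed, the coarse approximations are averaged, and at each position the detail coefficient with the larger magnitude is kept. This is the plain coefficient-level rule, not the region-based algorithm developed in the thesis.

```python
import numpy as np
import pywt

def fuse_wavelet(img_a, img_b, wavelet="db2", level=3):
    """Fuse two co-registered images: average the coarse approximations and
    keep, per position, the detail coefficient with the larger magnitude."""
    ca = pywt.wavedec2(np.asarray(img_a, float), wavelet, level=level)
    cb = pywt.wavedec2(np.asarray(img_b, float), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                     # approximation: mean
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)                # fused image

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random((128, 128)), rng.random((128, 128))
    print(fuse_wavelet(a, b).shape)                     # same size as the inputs
```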
NASA Astrophysics Data System (ADS)
Kang, Jin; Huo, Fangjun; Chao, Jianbin; Yin, Caixia
2018-04-01
Small-molecule biothiols, including cysteine (Cys), homocysteine (Hcy), and glutathione (GSH), play many crucial roles in physiological processes. In this work, we have prepared a nitroolefin-based BODIPY fluorescent probe with excellent water solubility for detecting thiols, which displays a ratiometric fluorescent signal for thiols. Incorporation of a nitroolefin unit into the BODIPY dye transforms it into a strong Michael acceptor, which is highly susceptible to sulfhydryl nucleophiles. This probe shows an obvious ratio change upon reaction with thiols: an increase of the emission at 517 nm along with a concomitant decrease of the fluorescence peak at 573 nm. Moreover, successful intracellular imaging experiments in A549 cells indicated that this probe is suitable for imaging of ex-/endogenous thiols in living cells.
Activities of the Center for Nondestructive Evaluation, Iowa State University
NASA Technical Reports Server (NTRS)
Gray, Joe
2002-01-01
The final report of NASA-funded activities at Iowa State University (ISU) for the period between 1/96 and 1/99 covers two main areas of activity. The first is the development and delivery of an x-ray simulation package suitable for evaluating how parameters affect the inspectability of an assembly of parts. The second area was the development of image processing tools to remove reconstruction artifacts in x-ray laminagraphy images. The x-ray simulation portion of this work was done by J. Gray and the x-ray laminagraphy work was done by J. Basart. The report is divided into two sections covering the two activities respectively. In addition to the work reported here, the funding also covered NASA's membership in the NSF University/Industrial Cooperative Research Center.
NASA Astrophysics Data System (ADS)
Deán-Ben, Xosé Luís.; Ermolayev, Vladimir; Mandal, Subhamoy; Ntziachristos, Vasilis; Razansky, Daniel
2016-03-01
Imaging plays an increasingly important role in clinical management and preclinical studies of cancer. Application of optical molecular imaging technologies, in combination with highly specific contrast agent approaches, has contributed substantially to the understanding of functional and histological properties of tumors and anticancer therapies. Yet, optical imaging suffers from deterioration in spatial resolution and other performance metrics due to light scattering in deep living tissues. High-resolution molecular imaging at the whole-organ or whole-body scale may therefore bring additional understanding of vascular networks, blood perfusion and microenvironment gradients of malignancies. In this work, we constructed a volumetric multispectral optoacoustic tomography (vMSOT) scanner for cancer imaging in preclinical models and explored its capacity for real-time 3D intravital imaging of whole breast cancer allografts in mice. Intrinsic tissue properties, such as blood oxygenation gradients, along with the distribution of externally administered liposomes carrying clinically approved indocyanine green dye (lipo-ICG), were visualized in order to study vascularization, probe penetration and extravasation kinetics in different regions of interest within solid tumors. The use of vMSOT, along with the application of volumetric image analysis and perfusion tracking tools for studies of pathophysiological processes within microenvironment gradients of solid tumors, demonstrated superior volumetric imaging system performance, with sustained competitive resolution and imaging depth suitable for investigations in preclinical cancer models.
A biological phantom for evaluation of CT image reconstruction algorithms
NASA Astrophysics Data System (ADS)
Cammin, J.; Fung, G. S. K.; Fishman, E. K.; Siewerdsen, J. H.; Stayman, J. W.; Taguchi, K.
2014-03-01
In recent years, iterative algorithms have become popular in diagnostic CT imaging to reduce noise or radiation dose to the patient. The non-linear nature of these algorithms leads to non-linearities in the imaging chain. However, the methods to assess the performance of CT imaging systems were developed assuming the linear process of filtered backprojection (FBP). Those methods may not be suitable any longer when applied to non-linear systems. In order to evaluate the imaging performance, a phantom is typically scanned and the image quality is measured using various indices. For reasons of practicality, cost, and durability, those phantoms often consist of simple water containers with uniform cylinder inserts. However, these phantoms do not represent the rich structure and patterns of real tissue accurately. As a result, the measured image quality or detectability performance for lesions may not reflect the performance on clinical images. The discrepancy between estimated and real performance may be even larger for iterative methods which sometimes produce "plastic-like", patchy images with homogeneous patterns. Consequently, more realistic phantoms should be used to assess the performance of iterative algorithms. We designed and constructed a biological phantom consisting of porcine organs and tissue that models a human abdomen, including liver lesions. We scanned the phantom on a clinical CT scanner and compared basic image quality indices between filtered backprojection and an iterative reconstruction algorithm.
Threshold selection for classification of MR brain images by clustering method
NASA Astrophysics Data System (ADS)
Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita
2015-12-01
Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool for separating objects from the background and, further, in classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known methods for binarization. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy subjects and multiple sclerosis disease. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images, consisting of 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (i.e., the area of white objects in the binary image) has been determined. These pixel counts represent the objects in the clustering operation. The following optimum threshold values are obtained: T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the studied groups, healthy subjects and multiple sclerosis patients.
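A small sketch of the described pipeline under stated assumptions: each image is binarized at a candidate threshold, its white pixels are counted, and the counts are grouped with hierarchical (dendrogram-style) clustering to see how cleanly the two groups separate. The synthetic images, the average-linkage choice and the agreement score are illustrative stand-ins, not the study's data or exact criterion.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def white_pixel_counts(images, threshold):
    """Number of white pixels after binarizing each grey-level image at `threshold`."""
    return np.array([(img >= threshold).sum() for img in images], dtype=float)

def separation_score(images, labels, threshold):
    """Cluster the white-pixel counts into two groups (average linkage) and
    report how often the grouping matches the known labels (1 = healthy,
    2 = multiple sclerosis). Illustrative criterion only."""
    counts = white_pixel_counts(images, threshold).reshape(-1, 1)
    groups = fcluster(linkage(counts, method="average"), t=2, criterion="maxclust")
    labels = np.asarray(labels)
    return max(np.mean(groups == labels), np.mean(groups == (3 - labels)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    healthy = [rng.integers(0, 120, (64, 64)) for _ in range(5)]   # stand-in scans
    lesion = [rng.integers(0, 180, (64, 64)) for _ in range(5)]
    imgs, labs = healthy + lesion, [1] * 5 + [2] * 5
    for t in (30, 80, 150):
        print("T =", t, "agreement =", separation_score(imgs, labs, t))
```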
Wu, Chunyan; Wang, Xuefeng
2017-01-01
This paper presents a survey of a system that uses digital image processing techniques to identify anthracnose and powdery mildew diseases of sandalwood from digital images. Our main objective is to find the most suitable identification technology for anthracnose and powdery mildew diseases of the sandalwood leaf, which provides algorithmic support for real-time machine judgment of the health status and disease level of sandalwood. We conducted real-time monitoring of Hainan sandalwood leaves with varying severity levels of anthracnose and powdery mildew beginning in March 2014. We used image segmentation, feature extraction, and digital image classification and recognition technology to carry out a comparative experimental study of the image analysis of powdery mildew, anthracnose and healthy leaves in the field. Testing on a large number of diseased leaves pointed to three conclusions: (1) Among the classical methods, the BP (Back Propagation) neural network distinguishes sandalwood leaf anthracnose and powdery mildew relatively well; the estimated lesion areas were closest to the actual ones. (2) The differences between the two diseases are shown well by the shape, color and texture features of the disease image. (3) An SVM based on a radial basis kernel function gave good results for identifying and diagnosing the diseased leaves; the identification rate was 92% for both anthracnose and healthy leaves, and 84% for powdery mildew. This disease identification technology lays the foundation for remote disease monitoring and diagnosis, preparing for remote transmission of the disease images, and provides a useful guide and reference for further research on disease identification and diagnosis systems in sandalwood and other tree species.
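Conclusion (3) describes an SVM with a radial basis kernel trained on shape, colour and texture features. The sketch below shows that setup with scikit-learn on a synthetic feature table; the feature dimensions, labels and hyperparameters (C, gamma) are placeholders, not the values used in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical feature table: one row per leaf image, columns standing in for
# shape, colour and texture descriptors; labels 0 = healthy, 1 = anthracnose,
# 2 = powdery mildew (synthetic, loosely tied to the first two features).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = np.digitize(X[:, 0] + 0.5 * X[:, 1], bins=[-0.8, 0.8])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF-kernel SVM as in conclusion (3); C and gamma are illustrative defaults.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```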
NASA Astrophysics Data System (ADS)
Ahani Amineh, Zainab Banoo; Hashemian, Seyyed Jamal Al-Din; Magholi, Alireza
2017-08-01
Hamoon-Jazmoorian plain is located in southeast Iran. Overexploitation of groundwater in this plain has led to water level decline and caused serious problems such as land subsidence, aquifer destruction and water quality degradation. The increasing population and agricultural development, along with drought and climate change, have further increased the pressure on water resources in this region over the last years. In order to overcome such a crisis, introduction of surface water into an aquifer at particular locations can be a suitable solution. A wide variety of methods have been developed to recharge groundwater, one of which is aquifer storage and recovery (ASR). One of the fundamental principles of building such systems is the delineation of suitable areas based on scientific and natural facts in order to achieve the relevant objectives. To that end, Multi Criteria Decision Making (MCDM) in conjunction with Geographic Information Systems (GIS) was applied in this study. More specifically, nine main parameters, including depth of runoff as the considered source of water, morphology of the earth surface features such as geology, geomorphology, land use and land cover, drainage and aquifer characteristics, along with quality of water in the aquifer, were considered as the main layers in GIS. The runoff water available for artificial recharge in the basin was estimated through the Soil Conservation Service (SCS) curve number method. The weighted curve number for each watershed was derived through spatial intersection of the land use and hydrological soil group layers. Other thematic layers were extracted from satellite images, topographical maps, and other collateral data sources, and then weighted according to their influence in the siting process. The Analytical Hierarchy Process (AHP) method was then used to calculate the weights of the individual parameters. The normalized weighted layers were then overlaid to build up the recharge potential map. The results revealed that 34% of the total area is suitable or very suitable for groundwater recharge.
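The two computational steps named above, AHP weighting and weighted overlay of normalized layers, can be sketched as follows. The pairwise-comparison judgements, the reduced set of four criteria and the toy raster stack are illustrative assumptions; the study used nine parameters and its own expert judgements.

```python
import numpy as np

# Pairwise-comparison matrix for a reduced set of criteria (illustrative
# judgements): runoff depth, geology, land use/cover, groundwater quality.
P = np.array([[1.0, 3.0, 5.0, 4.0],
              [1 / 3.0, 1.0, 3.0, 2.0],
              [1 / 5.0, 1 / 3.0, 1.0, 1 / 2.0],
              [1 / 4.0, 1 / 2.0, 2.0, 1.0]])

# AHP weights: principal eigenvector of P, normalised to sum to 1.
vals, vecs = np.linalg.eig(P)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()
print("criterion weights:", np.round(w, 3))

# Weighted overlay of normalised raster layers (a toy 4-layer stack standing
# in for the GIS layers); higher score = more suitable for ASR recharge.
rng = np.random.default_rng(2)
layers = rng.random((4, 50, 50))                 # each layer scaled to [0, 1]
suitability = np.tensordot(w, layers, axes=1)    # weighted sum per cell
print("share of cells above 0.7:", float((suitability > 0.7).mean()))
```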
Hajizadeh-Safar, M; Ghorbani, M; Khoshkharam, S; Ashrafi, Z
2014-07-01
The gamma camera is an important apparatus in nuclear medicine imaging. Its detection part consists of a scintillation detector with a heavy collimator. Substituting semiconductor detectors for the scintillator in these cameras has been studied extensively. In this study, we aim to introduce a new design of a P-N semiconductor detector array for nuclear medicine imaging. A P-N semiconductor detector composed of N-SnO2:F and P-NiO:Li has been introduced through simulation with the MCNPX Monte Carlo code. Its sensitivity was investigated with respect to different factors such as thickness, dimensions, and the direction of the emitted photons. It was then used to configure a new design of a one-dimensional array and to study its spatial resolution for nuclear medicine imaging. A one-dimensional array with 39 detectors was simulated to measure a predefined linear distribution of Tc(99m) activity and its spatial resolution. The activity distribution was calculated from the detector responses through mathematical linear optimization using the LINPROG code in MATLAB. Three different configurations of the one-dimensional detector array were simulated: horizontal, vertical single-sided, and vertical double-sided. In all of these configurations, the energy window around the photopeak was ±1%. The results show that the detector response increases with increasing dimensions and thickness of the detector, with the highest sensitivity for emission photons 15-30° above the surface. The horizontal configuration of the detector array is not suitable for imaging of line activity sources. The measured activity distribution with the vertical, double-sided configuration bears no similarity to the emission sources and hence is not suitable for imaging purposes. The measured activity distribution using the vertical, single-sided configuration shows good similarity to the sources. Therefore, it could be introduced as a suitable configuration for nuclear medicine imaging. It has been shown that using semiconductor P-N detectors such as P-NiO:Li and N-SnO2:F for gamma detection could be applicable to the design of a one-dimensional array configuration with a suitable spatial resolution of 2.7 mm for nuclear medicine imaging.
Vision-Based Geo-Monitoring - A New Approach for an Automated System
NASA Astrophysics Data System (ADS)
Wagner, A.; Reiterer, A.; Wasmeier, P.; Rieke-Zapp, D.; Wunderlich, T.
2012-04-01
The necessity for monitoring geo-risk areas such as rock slides is growing due to the increasing probability of such events caused by environmental change. Geodetic deformation monitoring turns life under threat into a calculable risk. An in-depth monitoring concept with modern measurement technologies allows the estimation of the hazard potential and the prediction of life-threatening situations. The movements can be monitored by sensors placed in the unstable slope area. In most cases, it is necessary to enter the regions at risk in order to place the sensors and maintain them. Using long-range monitoring systems (e.g. terrestrial laser scanners, total stations, ground-based synthetic aperture radar) allows this risk to be avoided. To close the gap between the existing low-resolution, medium-accuracy sensors and conventional (co-operative target-based) surveying methods, image-assisted total stations (IATS) are a promising solution. IATS offer the user (e.g. a metrology expert) an image capturing system (CCD/CMOS camera) in addition to 3D point measurements. The images of the telescope's visual field are projected onto the camera's chip. With appropriate calibration, these images are accurately geo-referenced and oriented, since the horizontal and vertical angles of rotation are continuously recorded. The oriented images can directly be used for direction measurements with no need for object control points or further photogrammetric orientation processes. IATS are able to provide high-density deformation fields with high accuracy (down to the mm range) in all three coordinate directions. Tests have shown that with suitable image processing measurements a precision of 0.05 pixel ± 0.04·σ is possible (which corresponds to 0.03 mgon ± 0.04·σ). These results have to be seen under the consideration that such measurements are image-based only. For measuring in 3D object space, the precision of pointing has to be taken into account. IATS can be used in two different ways: (1) combining two measurement systems and measuring object points by spatial intersection, or (2) using one measurement system and combining image-based techniques with the integrated distance measurement unit. Besides the system configuration, the detection of features inside the captured images can be done on the basis of different approaches, e.g. template-, edge-, and/or point-based methods. Our system is able to select a suitable algorithm based on different object characteristics, such as object geometry, texture, behaviour, etc. The long-term objective is the research, development and installation of a fully automated measurement system, including a data analysis and interpretation component. Acknowledgments: The presented research has been supported by the Alexander von Humboldt Foundation and by the European Science Foundation (ESF).
An iterative method for near-field Fresnel region polychromatic phase contrast imaging
NASA Astrophysics Data System (ADS)
Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.
2017-07-01
We present an iterative method for polychromatic phase contrast imaging that is suitable for broadband illumination and which allows for the quantitative determination of the thickness of an object given the refractive index of the sample material. Experimental and simulation results suggest the iterative method provides comparable image quality and quantitative object thickness determination when compared to the analytical polychromatic transport of intensity and contrast transfer function methods. The ability of the iterative method to work over a wider range of experimental conditions means the iterative method is a suitable candidate for use with polychromatic illumination and may deliver more utility for laboratory-based x-ray sources, which typically have a broad spectrum.
Cheng, Jun-Hu; Sun, Da-Wen; Pu, Hong-Bin; Wang, Qi-Jun; Chen, Yu-Nan
2015-03-15
The suitability of the hyperspectral imaging technique (400-1000 nm) was investigated for determining the thiobarbituric acid (TBA) value for monitoring lipid oxidation in fish fillets during cold storage at 4°C for 0, 2, 5, and 8 days. A PLSR calibration model relating the spectral data extracted from the hyperspectral images to the reference TBA values was established over the full spectral region and showed good performance for predicting the TBA value, with a determination coefficient (R(2)P) of 0.8325 and a root-mean-square error of prediction (RMSEP) of 0.1172 mg MDA/kg flesh. Two simplified PLSR and MLR models were built and compared using the ten most important selected wavelengths. The optimised MLR model yielded satisfactory results, with an R(2)P of 0.8395 and an RMSEP of 0.1147 mg MDA/kg flesh, and was used to visualise the distribution of TBA values in fish fillets. The overall results confirmed that the hyperspectral imaging technique is suitable as a rapid and non-destructive tool for the determination of TBA values for monitoring lipid oxidation and evaluating fish freshness.
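A minimal sketch of the full-spectrum PLSR calibration step with scikit-learn, assuming mean spectra extracted from the hyperspectral images as predictors and reference TBA values as the response. The synthetic data, the 30% hold-out split and the choice of eight latent variables are placeholders rather than the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Stand-in data: mean reflectance spectra (400-1000 nm, 121 bands) extracted
# from hyperspectral images of fillets, plus reference TBA values (mg MDA/kg).
rng = np.random.default_rng(3)
X = rng.random((120, 121))
y = X[:, 40:50].mean(axis=1) * 2.0 + rng.normal(0, 0.05, 120)   # synthetic link

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8)        # number of latent variables (assumed)
pls.fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
print("R2_P :", round(r2_score(y_te, pred), 4))
print("RMSEP:", round(mean_squared_error(y_te, pred) ** 0.5, 4))
```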
Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera
Qu, Yufu; Huang, Jianyu; Zhang, Xuan
2018-01-01
In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed becomes more noticeable. PMID:29342908
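One plausible reading of the "three principal component points" step is sketched below: each image's 2-D feature-point cloud is summarized by its centroid and two points offset along the principal axes, and a cheap distance between summaries serves as an interrelationship proxy for key-image selection. The exact construction and the overlap measure used by the authors may differ.

```python
import numpy as np

def principal_component_points(points):
    """Summarise a 2-D feature-point cloud by three points: its centroid plus
    the centroid offset by one standard deviation along each principal axis."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov((pts - c).T))   # ascending eigenvalues
    major = c + np.sqrt(vals[1]) * vecs[:, 1]
    minor = c + np.sqrt(vals[0]) * vecs[:, 0]
    return np.vstack([c, major, minor])

def interrelation_proxy(summary_a, summary_b):
    """Mean distance between two images' summary points (smaller = more related)."""
    return float(np.linalg.norm(summary_a - summary_b, axis=1).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    pts1 = rng.normal([200.0, 300.0], [40.0, 15.0], size=(500, 2))   # image 1 features
    pts2 = pts1 + [25.0, 5.0]                                        # slightly shifted view
    s1, s2 = principal_component_points(pts1), principal_component_points(pts2)
    print("summary points of image 1:\n", np.round(s1, 1))
    print("interrelation proxy:", round(interrelation_proxy(s1, s2), 2))
```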
Liu, Liwei; Lin, Guimiao; Yin, Feng; Law, Wing-Cheung; Yong, Ken-Tye
2016-04-01
Optical imaging techniques are becoming increasingly important for early detection and for monitoring the progression of tumor development. However, tumor vasculature imaging has so far been largely unexplored because of the lack of suitable optical probes. In this study, we demonstrate the preparation of near-infrared (NIR) fluorescent RGD peptide probes for noninvasive imaging of tumor vasculature during tumor angiogenesis. The peptide optical probes combine the advantages of NIR emission, which undergoes minimal biological absorption, and the RGD peptide, which specifically targets integrins that are highly expressed on activated tumor endothelial cells. In vivo optical imaging of nude mice bearing pancreatic tumors showed that the systemically delivered NIR probes enabled us to visualize the tumors at 24 hours post-injection. In addition, we performed an in vivo toxicity study on the prepared fluorescent RGD peptide probe formulation. The blood test results and histological analysis demonstrated that no obvious toxicity was found in mice treated with the RGD peptide probes for two weeks. These studies suggest that the NIR fluorescent peptide probes can be further designed and employed for ultrasensitive fluorescence imaging of angiogenic tumor vasculature, as well as imaging of other pathophysiological processes accompanied by activation of endothelial cells.
Truncation-based energy weighting string method for efficiently resolving small energy barriers
NASA Astrophysics Data System (ADS)
Carilli, Michael F.; Delaney, Kris T.; Fredrickson, Glenn H.
2015-08-01
The string method is a useful numerical technique for resolving minimum energy paths in rare-event barrier-crossing problems. However, when applied to systems with relatively small energy barriers, the string method becomes inconvenient since many images trace out physically uninteresting regions where the barrier has already been crossed and recrossing is unlikely. Energy weighting alleviates this difficulty to an extent, but typical implementations still require the string's endpoints to evolve to stable states that may be far from the barrier, and deciding upon a suitable energy weighting scheme can be an iterative process dependent on both the application and the number of images used. A second difficulty arises when treating nucleation problems: for later images along the string, the nucleus grows to fill the computational domain. These later images are unphysical due to confinement effects and must be discarded. In both cases, computational resources associated with unphysical or uninteresting images are wasted. We present a new energy weighting scheme that eliminates all of the above difficulties by actively truncating the string as it evolves and forcing all images, including the endpoints, to remain within and cover uniformly a desired barrier region. The calculation can proceed in one step without iterating on strategy, requiring only an estimate of an energy value below which images become uninteresting.
An automatic panoramic image reconstruction scheme from dental computed tomography images
Papakosta, Thekla K; Savva, Antonis D; Economopoulos, Theodore L; Gröhndal, H G
2017-01-01
Objectives: Panoramic images of the jaws are extensively used for dental examinations and/or surgical planning because they provide a general overview of the patient's maxillary and mandibular regions. Panoramic images are two-dimensional projections of three-dimensional (3D) objects. Therefore, it should be possible to reconstruct them from 3D radiographic representations of the jaws, produced by CBCT scanning, obviating the need for additional exposure to X-rays, should there be a need of panoramic views. The aim of this article is to present an automated method for reconstructing panoramic dental images from CBCT data. Methods: The proposed methodology consists of a series of sequential processing stages for detecting a fitting dental arch which is used for projecting the 3D information of the CBCT data to the two-dimensional plane of the panoramic image. The detection is based on a template polynomial which is constructed from a training data set. Results: A total of 42 CBCT data sets of real clinical pre-operative and post-operative representations from 21 patients were used. Eight data sets were used for training the system and the rest for testing. Conclusions: The proposed methodology was successfully applied to CBCT data sets, producing corresponding panoramic images, suitable for examining pre-operatively and post-operatively the patients' maxillary and mandibular regions. PMID:28112548
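A very reduced sketch of the two stages named in the Methods: fitting a polynomial dental arch to points detected on an axial slice, then projecting the CBCT volume along that arch to build a panoramic-like image. The polynomial degree, the fixed-width band summed around the arch and the stand-in volume are all illustrative assumptions; the paper's template-polynomial fitting and projection geometry are more elaborate.

```python
import numpy as np

def fit_dental_arch(xs, ys, degree=4):
    """Fit a template-style polynomial y = p(x) through arch points detected
    on an axial slice (degree 4 is an assumption)."""
    return np.poly1d(np.polyfit(xs, ys, degree))

def panoramic_from_volume(volume, arch_poly, x_range, n_samples=256, band=3):
    """Walk along the fitted arch and, at every sample, sum the voxels in a
    thin band around the curve through all axial slices, producing a 2-D
    panoramic-like projection of the volume."""
    z_dim, y_dim, x_dim = volume.shape
    xs = np.linspace(x_range[0], x_range[1], n_samples)
    ys = np.clip(arch_poly(xs), 0, y_dim - 1)
    pano = np.zeros((z_dim, n_samples))
    for j, (x, y) in enumerate(zip(xs, ys)):
        xi, yi = int(round(x)), int(round(y))
        y0, y1 = max(0, yi - band), min(y_dim, yi + band + 1)
        pano[:, j] = volume[:, y0:y1, xi].sum(axis=1)
    return pano

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    vol = rng.random((60, 128, 128))                     # stand-in CBCT volume
    arch_x = np.linspace(10, 117, 30)                    # detected arch points
    arch_y = 60 + 0.012 * (arch_x - 64) ** 2 + rng.normal(0, 1.0, 30)
    poly = fit_dental_arch(arch_x, arch_y)
    print(panoramic_from_volume(vol, poly, (10, 117)).shape)   # (60, 256)
```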
Third party EPID with IGRT capability retrofitted onto an existing medical linear accelerator
Odero, DO; Shimm, DS
2009-01-01
Radiation therapy requires precision to avoid unintended irradiation of normal organs. Electronic Portal Imaging Devices (EPIDs) can help with precise patient positioning for accurate treatment. EPIDs are now bundled with new linear accelerators, or they can be purchased from the Linac manufacturer for retrofit. Retrofitting a third-party EPID to a linear accelerator can pose challenges. The authors describe a relatively inexpensive third-party CCD camera-based EPID manufactured by TheraView (Cablon Medical B.V.), installed onto a Siemens Primus linear accelerator and integrated with a Lantis record-and-verify system, an Oldelft simulator with a Digital Therapy Imaging (DTI) unit, and a Philips ADAC Pinnacle treatment planning system (TPS). This system integrates well with existing equipment, and its software can process DICOM images from other sources. The system provides a complete imaging solution that eliminates the need for separate software for portal image viewing, interpretation, analysis, archiving, image-guided radiation therapy and other image management applications. It can also be accessed remotely via safe VPN tunnels. The TheraView EPID retrofit therefore presents an example of a less expensive alternative to linear accelerator manufacturers' proprietary EPIDs, suitable for implementation in radiation therapy departments in third-world countries, which are often faced with limited financial resources. PMID:21611056
Reconfigurable metasurface aperture for security screening and microwave imaging
NASA Astrophysics Data System (ADS)
Sleasman, Timothy; Imani, Mohammadreza F.; Boyarsky, Michael; Pulido-Mancera, Laura; Reynolds, Matthew S.; Smith, David R.
2017-05-01
Microwave imaging systems have seen growing interest in recent decades for applications ranging from security screening to space/earth observation. However, the hardware architectures commonly used for this purpose have not seen drastic changes. With the advent of metamaterials, a wealth of opportunities has emerged for honing metasurface apertures for microwave imaging systems. Recent thrusts have introduced dynamic reconfigurability directly into the aperture layer, providing powerful capabilities from the physical layer with considerable simplicity. The waveforms generated by such dynamic metasurfaces make them suitable for application in synthetic aperture radar (SAR) and, more generally, computational imaging. In this paper, we investigate a dynamic metasurface aperture capable of performing microwave imaging in the K-band (17.5-26.5 GHz). The proposed aperture is planar and promises an inexpensive fabrication process via printed circuit board techniques. These traits are further augmented by the tunability of dynamic metasurfaces, which provides the dexterity necessary to generate field patterns ranging from a sequence of steered beams to a series of uncorrelated radiation patterns. Imaging is experimentally demonstrated with a voltage-tunable metasurface aperture. We also demonstrate the aperture's utility in real-time measurements and perform volumetric SAR imaging. The capabilities of a prototype are detailed and the future prospects of general dynamic metasurface apertures are discussed.
Welding Penetration Control of Fixed Pipe in TIG Welding Using Fuzzy Inference System
NASA Astrophysics Data System (ADS)
Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo
This paper presents a study on welding penetration control of a fixed pipe in Tungsten Inert Gas (TIG) welding using a fuzzy inference system. Welding penetration control is essential to the production of quality welds with a specified geometry. For pipe welding using constant arc current and welding speed, the bead width becomes wider as the circumferential welding of small-diameter pipes progresses. With the pipe welded in a fixed position, excessive arc current obviously yields burn-through of the metal; in contrast, insufficient arc current produces imperfect welding. In order to avoid these errors and to obtain a uniform weld bead over the entire circumference of the pipe, the welding conditions should be controlled as the welding proceeds. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 in a fixed position using an AC welding machine. The monitoring system used a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image was processed to recognize the edge of the molten pool by an image processing algorithm. A simulation of welding control using a fuzzy inference system was constructed to simulate the welding control process. The simulation results show that the fuzzy controller was suitable for controlling the welding speed and appropriate for implementation in the welding system. A series of experiments was conducted to evaluate the performance of the fuzzy controller. The experimental results show the effectiveness of the control system, which is confirmed by sound welds.
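The sketch below is a toy Mamdani-style fuzzy controller of the kind described: the deviation of the measured backside bead width from its target is fuzzified with triangular membership functions, three rules adjust the welding speed, and the output is obtained by centroid defuzzification. The membership functions, rule base and output range are invented for illustration and are not the paper's controller.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return max(0.0, min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)))

def fuzzy_speed_correction(bead_width_error):
    """Toy Mamdani controller: error = measured backside bead width minus the
    target (mm); returns a welding-speed correction (mm/s), obtained by
    max-aggregation of three rules and centroid defuzzification."""
    e = bead_width_error
    narrow = tri(e, -2.0, -1.0, 0.0)      # bead too narrow -> slow down
    on_target = tri(e, -0.5, 0.0, 0.5)    # bead as desired -> hold speed
    wide = tri(e, 0.0, 1.0, 2.0)          # bead too wide   -> speed up
    u = np.linspace(-1.0, 1.0, 201)       # speed-correction universe (mm/s)
    slow = np.minimum(narrow, [tri(v, -1.0, -0.6, -0.2) for v in u])
    hold = np.minimum(on_target, [tri(v, -0.2, 0.0, 0.2) for v in u])
    fast = np.minimum(wide, [tri(v, 0.2, 0.6, 1.0) for v in u])
    agg = np.maximum.reduce([slow, hold, fast])
    return 0.0 if agg.sum() == 0.0 else float((u * agg).sum() / agg.sum())

if __name__ == "__main__":
    for err in (-1.2, 0.0, 0.8):
        print("width error", err, "-> speed correction", round(fuzzy_speed_correction(err), 3))
```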
Molecular imaging in the framework of personalized cancer medicine.
Belkić, Dzevad; Belkić, Karen
2013-11-01
With our increased understanding of cancer cell biology, molecular imaging offers a strategic bridge to oncology. This complements anatomic imaging, particularly magnetic resonance (MR) imaging, which is sensitive but not specific. Among the potential harms of false positive findings is lowered adherence to recommended surveillance post-therapy and by persons at increased cancer risk. Positron emission tomography (PET) plus computerized tomography (CT) is the molecular imaging modality most widely used in oncology. In up to 40% of cases, PET-CT leads to changes in therapeutic management. Newer PET tracers can detect tumor hypoxia, bone metastases in androgen-sensitive prostate cancer, and human epidermal growth factor receptor type 2 (HER2)-expressive tumors. Magnetic resonance spectroscopy provides insight into several metabolites at the same time. Combined with MRI, this yields magnetic resonance spectroscopic imaging (MRSI), which does not entail ionizing radiation and is thus suitable for repeated monitoring. Using advanced signal processing, quantitative information can be gleaned about molecular markers of brain, breast, prostate and other cancers. Radiation oncology has benefited from molecular imaging via PET-CT and MRSI. Advanced mathematical approaches can improve dose planning in stereotactic radiosurgery, stereotactic body radiotherapy and high dose-rate brachytherapy. Molecular imaging will likely impact profoundly on clinical decision making in oncology. Molecular imaging via MR could facilitate early detection especially in persons at high risk for specific cancers.
Karaçalı, Bilge; Vamvakidou, Alexandra P; Tözeren, Aydın
2007-01-01
Background: Three-dimensional in vitro cultures of cancer cells are used to predict the effects of prospective anti-cancer drugs in vivo. In this study, we present an automated image analysis protocol for detailed morphological protein marker profiling of tumoroid cross-section images. Methods: Histologic cross sections of breast tumoroids developed in co-culture suspensions of breast cancer cell lines, stained for E-cadherin and progesterone receptor, were digitized, and the pixels in these images were classified into five categories using k-means clustering. Automated segmentation was used to identify image regions composed of cells expressing a given biomarker. Synthesized images were created to check the accuracy of the image processing system. Results: The accuracy of automated segmentation was over 95% in identifying regions of interest in synthesized images. Image analysis of adjacent histology slides stained, respectively, for Ecad and PR accurately predicted regions of different cell phenotypes. Image analysis of tumoroid cross sections from different tumoroids obtained under the same co-culture conditions indicated the variation of cellular composition from one tumoroid to another. Variations in the compositions of cross sections obtained from the same tumoroid were established by parallel analysis of Ecad- and PR-stained cross-section images. Conclusion: The proposed image analysis methods offer standardized high-throughput profiling of the molecular anatomy of tumoroids based on both membrane and nuclear markers, suitable for rapid large-scale investigations of anti-cancer compounds for drug development. PMID:17822559
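The pixel-classification step of the Methods can be sketched with scikit-learn's k-means as below, assuming raw RGB values as the per-pixel features; the protocol's actual feature set, colour normalization and downstream segmentation are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_pixels(rgb_image, n_classes=5, seed=0):
    """Cluster the pixels of a stained cross-section image into `n_classes`
    colour categories with k-means (features here are raw RGB values only)."""
    h, w, _ = rgb_image.shape
    features = rgb_image.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed)
    labels = km.fit_predict(features)
    return labels.reshape(h, w), km.cluster_centers_

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    tile = rng.integers(0, 256, size=(120, 160, 3))          # stand-in histology tile
    label_map, centres = classify_pixels(tile)
    print(label_map.shape, np.bincount(label_map.ravel()))   # pixels per class
```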
Chen, Yan; James, Jonathan J; Turnbull, Anne E; Gale, Alastair G
2015-10-01
To establish whether lower resolution, lower cost viewing devices have the potential to deliver mammographic interpretation training. On three occasions over eight months, fourteen consultant radiologists and reporting radiographers read forty challenging digital mammography screening cases on three different displays: a digital mammography workstation, a standard LCD monitor, and a smartphone. Standard image manipulation software was available for use on all three devices. Receiver operating characteristic (ROC) analysis and ANOVA (Analysis of Variance) were used to determine the significance of differences in performance between the viewing devices with/without the application of image manipulation software. The effect of reader's experience was also assessed. Performance was significantly higher (p < .05) on the mammography workstation compared to the other two viewing devices. When image manipulation software was applied to images viewed on the standard LCD monitor, performance improved to mirror levels seen on the mammography workstation with no significant difference between the two. Image interpretation on the smartphone was uniformly poor. Film reader experience had no significant effect on performance across all three viewing devices. Lower resolution standard LCD monitors combined with appropriate image manipulation software are capable of displaying mammographic pathology, and are potentially suitable for delivering mammographic interpretation training. • This study investigates potential devices for training in mammography interpretation. • Lower resolution standard LCD monitors are potentially suitable for mammographic interpretation training. • The effect of image manipulation tools on mammography workstation viewing is insignificant. • Reader experience had no significant effect on performance in all viewing devices. • Smart phones are not suitable for displaying mammograms.
Automated Coronal Loop Identification using Digital Image Processing Techniques
NASA Astrophysics Data System (ADS)
Lee, J. K.; Gary, G. A.; Newman, T. S.
2003-05-01
The results of a Master's thesis study of computer algorithms for automatic extraction and identification (i.e., collectively, "detection") of optically-thin, 3-dimensional, (solar) coronal-loop center "lines" from extreme ultraviolet and X-ray 2-dimensional images will be presented. The center lines, which can be considered to be splines, are proxies of magnetic field lines. Detecting the loops is challenging because there are no unique shapes, the loop edges are often indistinct, and because photon and detector noise heavily influence the images. Three techniques for detecting the projected magnetic field lines have been considered and will be described in the presentation. The three techniques used are (i) linear feature recognition of local patterns (related to the inertia-tensor concept), (ii) parametric space inferences via the Hough transform, and (iii) topological adaptive contours (snakes) that constrain curvature and continuity. Since coronal loop topology is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information that has also been incorporated into the detection process. Synthesized images have been generated to benchmark the suitability of the three techniques, and the performance of the three techniques on both synthesized and solar images will be presented and numerically evaluated in the presentation. The process of automatic detection of coronal loops is important in the reconstruction of the coronal magnetic field, where the derived magnetic field lines provide a boundary condition for magnetic models (cf. Gary (2001, Solar Phys., 203, 71) and Wiegelmann & Neukirch (2002, Solar Phys., 208, 233)). This work was supported by NASA's Office of Space Science - Solar and Heliospheric Physics Supporting Research and Technology Program.
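Technique (ii), parameter-space inference via the Hough transform, can be illustrated with scikit-image as below. For simplicity the example detects straight bright segments in a noisy synthetic image; the actual loops are curved splines, so this only demonstrates the voting-and-peak-picking mechanism, not the thesis implementation.

```python
import numpy as np
from skimage.draw import line
from skimage.transform import hough_line, hough_line_peaks

# Synthetic image: two bright straight segments in noise, standing in for
# loop-like features (real loops are curved; this only shows the mechanism).
img = np.zeros((200, 200), dtype=float)
rr, cc = line(20, 30, 180, 60)
img[rr, cc] = 1.0
rr, cc = line(10, 150, 190, 120)
img[rr, cc] = 1.0
img += 0.2 * np.random.default_rng(7).random(img.shape)   # photon-like noise

# Vote in (angle, distance) parameter space and pick the two strongest peaks.
angles = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
h, theta, d = hough_line(img > 0.5, theta=angles)
for _, angle, dist in zip(*hough_line_peaks(h, theta, d, num_peaks=2)):
    print(f"detected line: angle = {np.degrees(angle):6.1f} deg, offset = {dist:7.1f} px")
```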
Moore, Craig S.; Liney, Gary P.; Beavis, Andrew W.
2004-01-01
We are implementing the use of magnetic resonance (MR) images for head and neck radiotherapy planning, which involves their registration with computed tomography (CT). Quality assurance (QA) of the registration process was an initial step of this program. A phantom was built, and appropriate materials were identified to produce clinically relevant MR T1 and T2 contrast for its constituent “anatomy.” We performed a characterization of the distortion detectable within our phantom. Finally, we assessed the accuracy of image registration by contouring structures in the registered/fused data sets using the treatment planning system. Each structure was contoured using each modality in turn, blind to the other. The position, area, and perimeter of each structure were assessed as a measure of accuracy of the entire image registration process. Distortion effects in the MR image were shown to be minimized by choosing a suitable (≥±30 kHz) receiver bandwidth. The remaining distortion was deemed clinically acceptable within ±15 cm of the magnetic field isocenter. A coefficient of agreement (A) analysis gave values within 9% of unity, where A = Ra or Rp, and Ra/p denotes the ratio of the area or perimeter, respectively, of a particular structure on CT to that on MR. The center of each structure of interest agreed to within 1.8 mm. A QA process has been developed to assess the accuracy of using multimodality image registration in the planning of radiotherapy for the head and neck; we believe its introduction is feasible and safe. PMID:15753931
NASA Astrophysics Data System (ADS)
Eguizabal, Alma; Real, Eusebio; Pontón, Alejandro; Calvo Diez, Marta; Val-Bernal, J. Fernando; Mayorga, Marta; Revuelta, José M.; López-Higuera, José M.; Conde, Olga M.
2014-05-01
Optical coherence tomography (OCT) is a natural candidate for imaging biological structures just under the tissue surface. Human thoracic aortas from aneurysms reveal elastin disorders and smooth muscle cell alterations when visualizing the media layer of the aortic wall, which lies only some tens of microns below the surface. The resulting images require suitable processing to enhance the disorder features of interest and to use them as indicators of wall degradation, turning OCT into a hallmark for diagnosing the risk of aneurysm under intraoperative conditions. This work proposes gradient-based digital image processing approaches to assess this risk. These techniques are believed to be useful in such applications because aortic wall disorders directly affect the refractive index of the tissue, and thus the gradient of the tissue reflectivity that forms the OCT image. Preliminary results show that the direction of the gradient contains information for estimating the tissue abnormality score. The detection of the edges of the OCT image is performed using the Canny algorithm. The edges delineate tissue disorders in the region of interest and isolate the abnormalities. These edges can be quantified to estimate a degradation score. Furthermore, the direction of the gradient appears to be a promising enhancement technique, as it detects areas of homogeneity in the region of interest. Automatic results from gradient-based strategies are finally compared to the histopathological global aortic score, which accounts for the presence and seriousness of each risk factor.
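The two cues discussed, Canny edges delineating disorders and the gradient direction as a homogeneity indicator, are sketched below with scikit-image on a synthetic layered image. The edge-density "degradation score" is purely illustrative and is not the histopathological score used for comparison.

```python
import numpy as np
from skimage.feature import canny
from skimage.filters import sobel_h, sobel_v

def edge_and_gradient_features(oct_image, sigma=2.0):
    """Return Canny edges, per-pixel gradient orientation, and an edge-density
    score for an OCT region of interest (score is purely illustrative)."""
    img = np.asarray(oct_image, dtype=float)
    edges = canny(img, sigma=sigma)                 # binary edge map
    gy, gx = sobel_h(img), sobel_v(img)             # gradient components
    direction = np.arctan2(gy, gx)                  # orientation per pixel
    return edges, direction, float(edges.mean())    # fraction of edge pixels

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    layered = np.tile(np.sin(np.linspace(0, 6 * np.pi, 256)), (128, 1))  # layered "wall"
    noisy = layered + 0.3 * rng.standard_normal((128, 256))
    _, _, score = edge_and_gradient_features(noisy)
    print("illustrative degradation score:", round(score, 3))
```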
NASA Astrophysics Data System (ADS)
Armanetti, Paolo; Flori, Alessandra; Avigo, Cinzia; Conti, Luca; Valtancoli, Barbara; Petroni, Debora; Doumett, Saer; Cappiello, Laura; Ravagli, Costanza; Baldi, Giovanni; Bencini, Andrea; Menichetti, Luca
2018-06-01
Recently, a number of photoacoustic (PA) agents with increased tissue penetration and fine spatial resolution have been developed for molecular imaging and mapping of pathophysiological features at the molecular level. Here, we present bio-conjugated near-infrared light-absorbing magnetic nanoparticles as a new agent for PA imaging. These nanoparticles exhibit suitable absorption in the near-infrared region, with good photoacoustic signal generation efficiency and high photo-stability. Furthermore, these encapsulated iron oxide nanoparticles exhibit strong super-paramagnetic behavior and nuclear relaxivities that make them useful as magnetic resonance imaging (MRI) contrast media as well. Their simple bio-conjugation strategy, optical and chemical stability, and straightforward manipulation could enable the development of a PA probe with magnetic and spectroscopic properties suitable for in vitro and in vivo real-time imaging of relevant biological targets.
Evaluation of a Fully 3-D Bpf Method for Small Animal PET Images on Mimd Architectures
NASA Astrophysics Data System (ADS)
Bevilacqua, A.
Positron Emission Tomography (PET) images can be reconstructed using Fourier transform methods. This paper describes the performance of a fully 3-D Backprojection-Then-Filter (BPF) algorithm on the Cray T3E machine and on a cluster of workstations. PET reconstruction of small animals is a class of problems characterized by poor counting statistics. The low-count nature of these studies necessitates 3-D reconstruction in order to improve the sensitivity of the PET system: by including axially oblique Lines Of Response (LORs), 3-D acquisition and reconstruction significantly improve the system sensitivity. The BPF method is widely used in clinical studies because of its speed and easy implementation. Moreover, the BPF method is suitable for on-line 3-D reconstruction, as it does not need any sinogram or rearranged data. In order to investigate the possibility of on-line processing, we reconstruct a phantom using the data stored in list-mode format by the data acquisition system. We show how the intrinsically parallel nature of the BPF method makes it suitable for on-line reconstruction on an MIMD system such as the Cray T3E. Lastly, we analyze the performance of this algorithm on a cluster of workstations.
NASA Astrophysics Data System (ADS)
Sima, A. A.; Baeck, P.; Nuyts, D.; Delalieux, S.; Livens, S.; Blommaert, J.; Delauré, B.; Boonen, M.
2016-06-01
This paper gives an overview of the new COmpact hyperSpectral Imaging (COSI) system recently developed at the Flemish Institute for Technological Research (VITO, Belgium) and suitable for remotely piloted aircraft systems. A hyperspectral dataset captured from a multirotor platform over a strawberry field is presented and explored in order to assess the co-registration quality of the spectral bands. Thanks to the application of line-based interference filters deposited directly on the detector wafer, the COSI camera is compact and lightweight (total mass of 500 g) and captures 72 narrow (FWHM: 5 nm to 10 nm) bands in the spectral range of 600-900 nm. Covering the red-edge region (680 nm to 730 nm) allows plant chlorophyll content, biomass, and hydric status indicators to be derived, making the camera suitable for agricultural purposes. In addition to the orthorectified hypercube, a digital terrain model can be derived, enabling analyses that require object height, e.g., plant height in vegetation growth monitoring. Geometric data quality assessment shows that the COSI camera and the dedicated data processing chain are capable of delivering very high resolution data (centimetre level) from which spectral information can be correctly derived. The results obtained are comparable to or better than those reported in similar studies for an alternative system based on the Fabry-Pérot interferometer.
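As a small illustration of how red-edge bands of such a hypercube can be turned into a chlorophyll-related indicator, the sketch below computes a normalized difference red-edge index from the two bands nearest 790 nm and 720 nm. The band pair is a common NDRE choice and an assumption here, not necessarily the bands used with COSI data.

```python
import numpy as np

def red_edge_index(hypercube, wavelengths, band_a=790.0, band_b=720.0):
    """Normalized difference red-edge index from a (rows, cols, bands) hypercube.

    The 790/720 nm pair is a common NDRE choice, not necessarily the bands
    used in the COSI study; the nearest available bands are selected.
    """
    wavelengths = np.asarray(wavelengths, dtype=float)
    ia = int(np.argmin(np.abs(wavelengths - band_a)))   # band closest to 790 nm
    ib = int(np.argmin(np.abs(wavelengths - band_b)))   # band closest to 720 nm
    nir = hypercube[..., ia].astype(float)
    red_edge = hypercube[..., ib].astype(float)
    return (nir - red_edge) / (nir + red_edge + 1e-9)
```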
Gaussian mixtures on tensor fields for segmentation: applications to medical imaging.
de Luis-García, Rodrigo; Westin, Carl-Fredrik; Alberola-López, Carlos
2011-01-01
In this paper, we introduce a new approach for tensor field segmentation based on the definition of mixtures of Gaussians on tensors as a statistical model. Working within the well-known Geodesic Active Regions segmentation framework, this scheme presents several interesting advantages. First, it yields a more flexible model than a single Gaussian distribution, which enables the method to better adapt to the complexity of the data. Second, it can work directly on tensor-valued images or, through a parallel scheme that independently processes the intensity and the local structure tensor, on scalar textured images. Two different applications are considered to show the suitability of the proposed method for medical image segmentation. First, we address DT-MRI segmentation on a dataset of 32 volumes, showing a successful segmentation of the corpus callosum and favourable comparisons with related approaches in the literature. Second, the segmentation of bones from hand radiographs is studied, and a complete automatic-semiautomatic approach is developed that makes use of anatomical prior knowledge to produce accurate segmentation results.
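A minimal sketch of the mixture-of-Gaussians idea applied to per-pixel tensor-derived features is given below, using scikit-learn's GaussianMixture; it ignores the Geodesic Active Regions machinery and any spatial regularization, and the feature layout is an assumption, not the paper's model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(feature_image, n_classes=2, seed=0):
    """Cluster per-pixel tensor-derived features with a Gaussian mixture.

    feature_image : (rows, cols, d) array, e.g. the d=6 unique entries of a
                    local structure tensor or diffusion tensor per pixel.
    Returns an integer label image of shape (rows, cols).
    """
    rows, cols, d = feature_image.shape
    X = feature_image.reshape(-1, d)
    gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                          random_state=seed).fit(X)
    return gmm.predict(X).reshape(rows, cols)
```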
Design of integrated eye tracker-display device for head mounted systems
NASA Astrophysics Data System (ADS)
David, Y.; Apter, B.; Thirer, N.; Baal-Zedaka, I.; Efron, U.
2009-08-01
We propose an eye tracker/display system based on a novel, dual-function device termed ETD, which allows the eye tracker and the display to share the same optical path and provides on-chip processing. The proposed ETD design is based on a CMOS chip combining Liquid-Crystal-on-Silicon (LCoS) micro-display technology with a near-infrared (NIR) Active Pixel Sensor imager. In eye-tracking operation, the device captures the NIR light back-reflected from the eye's retina. The retinal image is then used to detect the current direction of the eye's gaze. The design of the eye-tracking imager is based on "deep p-well" pixel technology, providing low crosstalk while shielding the active pixel circuitry, which serves the imaging and display drivers, from the photo charges generated in the substrate. The use of the ETD in the HMD design enables a very compact system suitable for smart goggle applications. A preliminary optical, electronic, and digital design of the goggle and its associated ETD chip and digital control is presented.
The design of composite monitoring scheme for multilevel information in crop early diseases
NASA Astrophysics Data System (ADS)
Zhang, Yan; Meng, Qinglong; Shang, Jing
2018-02-01
Crop diseases are difficult to monitor and predict at an early stage because monitoring is usually based on visible-light images, which at present offer poor early-warning capability. This paper analyzes the features of common nondestructive testing technologies applied to crop diseases. Based on how the characteristics of the virus change from the incubation period to the onset period, a multilevel composite information monitoring scheme is designed that applies infrared thermal imaging, visible and near-infrared hyperspectral imaging, and micro-imaging to monitor multilevel information on crop disease infection comprehensively. The early-warning process and the key monitoring parameters of the composite scheme are given, taking the temperature, color, structure, and texture of the crops as the key disease-monitoring characteristics. By overcoming the limitation that conventional monitoring is only suitable for observing diseases visible to the naked eye, the composite monitoring scheme described in this paper enables monitoring and early warning during the incubation and early onset stages of infection.
An extended algebraic reconstruction technique (E-ART) for dual spectral CT.
Zhao, Yunsong; Zhao, Xing; Zhang, Peng
2015-03-01
Compared with standard computed tomography (CT), dual spectral CT (DSCT) has many advantages for object separation, contrast enhancement, artifact reduction, and material composition assessment. However, it is generally difficult to reconstruct images from the polychromatic projections acquired by DSCT because of the nonlinear relation between the polychromatic projections and the images to be reconstructed. This paper first models the DSCT reconstruction problem as a nonlinear system and then extends the classic ART method to solve it. One feature of the proposed method is its flexibility: it fits any commonly used scanning configuration and does not require consistent rays for the different X-ray spectra. Another feature is its high degree of parallelism, which makes the method suitable for acceleration on GPUs (graphics processing units) or other parallel systems. The method is validated with numerical experiments on simulated noise-free and noisy data. High-quality images are reconstructed with the proposed method from the polychromatic projections of DSCT. The reconstructed images remain satisfactory even when there are certain errors in the estimated X-ray spectra.
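For reference, the classical linear ART (Kaczmarz) iteration that the paper extends to the nonlinear DSCT system looks as follows; this is the standard textbook update, not the extended E-ART method itself, and the relaxation factor is an arbitrary choice.

```python
import numpy as np

def art(A, p, n_iters=10, relax=0.5, x0=None):
    """Classical (linear) ART / Kaczmarz iteration: row-by-row updates of x
    so that A @ x approaches the projection data p.

    A : (n_rays, n_pixels) system matrix, p : (n_rays,) projection values.
    """
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    row_norms = np.einsum("ij,ij->i", A, A)          # squared row norms
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            if row_norms[i] == 0.0:
                continue
            residual = p[i] - A[i] @ x               # mismatch for ray i
            x += relax * residual / row_norms[i] * A[i]
    return x
```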
NASA Astrophysics Data System (ADS)
Wáng, Yì Xiáng J.; Idée, Jean-Marc; Corot, Claire
2015-10-01
The design of theranostics and of dual- or multi-modality contrast agents are currently two of the hottest topics in biotechnology and biomaterials science. However, for single-entity theranostics, the right ratio between the diagnostic component and the therapeutic component may not always be achievable in a composite suitable for clinical application. For dual- or multi-modality molecular imaging agents, there is an optimal time window for imaging after in vivo administration: when an agent is imaged by one modality, its pharmacokinetics may not allow imaging by another modality. Because of reticuloendothelial system clearance, efficient in vivo delivery of nanoparticles to the lesion site is sometimes difficult. The toxicity of these entities also remains poorly understood. While the medical need for theranostics is acknowledged, the business model remains to be established. There is an urgent need for a global and internationally harmonized re-evaluation of the approval and marketing processes for theranostics. Nevertheless, there is a reasonable expectation that, in the near future, the current obstacles will be removed, allowing the wide use of these very promising agents.
Compact wearable dual-mode imaging system for real-time fluorescence image-guided surgery.
Zhu, Nan; Huang, Chih-Yu; Mondal, Suman; Gao, Shengkui; Huang, Chongyuan; Gruev, Viktor; Achilefu, Samuel; Liang, Rongguang
2015-09-01
A wearable all-plastic imaging system for real-time fluorescence image-guided surgery is presented. The compact size of the system is especially suitable for applications in the operating room. The system consists of a dual-mode imaging system, see-through goggle, autofocusing, and auto-contrast tuning modules. The paper will discuss the system design and demonstrate the system performance.
Determining the Molecular Growth Mechanisms of Protein Crystal faces by Atomic Force Microscopy
NASA Technical Reports Server (NTRS)
Li, Huayu; Nadarajah, Arunan; Pusey, Marc L.
1998-01-01
A high-resolution atomic force microscopy (AFM) study had shown that the molecular packing on the tetragonal lysozyme (110) face corresponded to only one of two possible packing arrangements, suggesting that growth layers on this face are of bimolecular height (Li et al., 1998). Theoretical analyses of the packing had also indicated that growth of this face should proceed by the addition of growth units of at least tetramer size, corresponding to the 43 helices in the crystal. In this study, an AFM linescan technique was devised to measure the dimensions of individual growth units on protein crystal faces. The growth of tetragonal lysozyme crystals was slowed by employing very low supersaturations. As a result, images of individual growth events on the (110) face were observed, shown by jump discontinuities in the growth step in the linescan images. The growth unit dimension in the scanned direction was obtained by suitably averaging these images. A large number of scans in two directions on the (110) face were performed, and the distribution of lysozyme aggregate sizes was obtained. A variety of growth units, all of which were 43 helical lysozyme aggregates, were shown to participate in the growth process, with a 43 tetramer being the minimum observed size. This technique represents a new application of AFM, allowing time-resolved studies of molecular processes to be carried out.
Retrieval of radiology reports citing critical findings with disease-specific customization.
Lacson, Ronilda; Sugarbaker, Nathanael; Prevedello, Luciano M; Ivan, Ip; Mar, Wendy; Andriole, Katherine P; Khorasani, Ramin
2012-01-01
Communication of critical results from diagnostic procedures between caregivers is a Joint Commission national patient safety goal. Evaluating critical result communication often requires manual analysis of voluminous data, especially when reviewing unstructured textual results of radiologic findings. Information retrieval (IR) tools can facilitate this process by enabling automated retrieval of radiology reports that cite critical imaging findings. However, IR tools that have been developed for one disease or imaging modality often need substantial reconfiguration before they can be utilized for another disease entity. This paper: 1) describes the process of customizing two Natural Language Processing (NLP) and Information Retrieval/Extraction applications - an open-source toolkit, A Nearly New Information Extraction system (ANNIE), and an application developed in-house, Information for Searching Content with an Ontology-Utilizing Toolkit (iSCOUT) - to illustrate the varying levels of customization required for different disease entities; and 2) evaluates each application's performance in identifying and retrieving radiology reports citing critical imaging findings for three distinct diseases: pulmonary nodule, pneumothorax, and pulmonary embolus. Both applications can be utilized for retrieval. iSCOUT and ANNIE had precision values between 0.90 and 0.98 and recall values between 0.79 and 0.94. ANNIE had consistently higher precision but required more customization. Understanding the customizations involved in utilizing NLP applications for various diseases will enable users to select the most suitable tool for specific tasks.
Retrieval of Radiology Reports Citing Critical Findings with Disease-Specific Customization
Lacson, Ronilda; Sugarbaker, Nathanael; Prevedello, Luciano M; Ivan, IP; Mar, Wendy; Andriole, Katherine P; Khorasani, Ramin
2012-01-01
Background: Communication of critical results from diagnostic procedures between caregivers is a Joint Commission national patient safety goal. Evaluating critical result communication often requires manual analysis of voluminous data, especially when reviewing unstructured textual results of radiologic findings. Information retrieval (IR) tools can facilitate this process by enabling automated retrieval of radiology reports that cite critical imaging findings. However, IR tools that have been developed for one disease or imaging modality often need substantial reconfiguration before they can be utilized for another disease entity. Purpose: This paper: 1) describes the process of customizing two Natural Language Processing (NLP) and Information Retrieval/Extraction applications – an open-source toolkit, A Nearly New Information Extraction system (ANNIE); and an application developed in-house, Information for Searching Content with an Ontology-Utilizing Toolkit (iSCOUT) – to illustrate the varying levels of customization required for different disease entities; and 2) evaluates each application's performance in identifying and retrieving radiology reports citing critical imaging findings for three distinct diseases, pulmonary nodule, pneumothorax, and pulmonary embolus. Results: Both applications can be utilized for retrieval. iSCOUT and ANNIE had precision values between 0.90 and 0.98 and recall values between 0.79 and 0.94. ANNIE had consistently higher precision but required more customization. Conclusion: Understanding the customizations involved in utilizing NLP applications for various diseases will enable users to select the most suitable tool for specific tasks. PMID:22934127
NASA Astrophysics Data System (ADS)
Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.
2012-10-01
Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high-accuracy interpolation benefits the consumer experience, but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based 1-D control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
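The decomposition idea can be illustrated with a toy separable resize in which each dimension is handled by an independent 1-D pass; plain linear interpolation stands in for the registration-based 1-D control grid interpolator, so this is only a sketch of the structure, not of DMCGI itself.

```python
import numpy as np

def resize_by_1d_passes(image, new_rows, new_cols):
    """Resize a 2-D image with two independent 1-D interpolation passes
    (columns within each row first, then rows within each column),
    illustrating the decomposition idea with plain linear interpolation."""
    rows, cols = image.shape
    # Pass 1: interpolate each row to the new column count
    xs_old = np.linspace(0.0, 1.0, cols)
    xs_new = np.linspace(0.0, 1.0, new_cols)
    tmp = np.stack([np.interp(xs_new, xs_old, image[r]) for r in range(rows)])
    # Pass 2: interpolate each column to the new row count
    ys_old = np.linspace(0.0, 1.0, rows)
    ys_new = np.linspace(0.0, 1.0, new_rows)
    out = np.stack([np.interp(ys_new, ys_old, tmp[:, c])
                    for c in range(new_cols)], axis=1)
    return out
```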
Fuzzy-based propagation of prior knowledge to improve large-scale image analysis pipelines
Mikut, Ralf
2017-01-01
Many automatically analyzable scientific questions are well-posed and a variety of information about expected outcomes is available a priori. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept that increases the result quality awareness of image analysis operators by estimating and distributing the degree of uncertainty involved in their output based on prior knowledge. This allows the use of simple processing operators that are suitable for analyzing large-scale spatiotemporal (3D+t) microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. These concepts are illustrated on a typical bioimage analysis pipeline comprised of seed point detection, segmentation, multiview fusion and tracking. The functionality of the proposed approach is further validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploit prior knowledge to improve the result quality of image analysis pipelines. The generality of the concept makes it applicable to practically any field with processing strategies that are arranged as linear pipelines. The automated analysis of terabyte-scale microscopy data will especially benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. PMID:29095927
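One simple way to turn prior knowledge into a mathematical representation, in the spirit described above, is a trapezoidal fuzzy membership function that assigns each detection a plausibility weight. The sketch below is generic, and the numeric ranges are invented for illustration rather than taken from the paper.

```python
import numpy as np

def trapezoid_membership(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a / above d, 1 between b and c."""
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - a) / max(b - a, 1e-12), 0.0, 1.0)
    falling = np.clip((d - x) / max(d - c, 1e-12), 0.0, 1.0)
    return np.minimum(rising, falling)

# Example: prior knowledge says nucleus radii of 3-8 px are typical, while
# radii below 1 px or above 12 px are implausible (values are made up).
radii = np.array([0.5, 2.0, 5.0, 9.0, 15.0])
weights = trapezoid_membership(radii, 1.0, 3.0, 8.0, 12.0)
print(weights)   # each detection keeps a plausibility weight in [0, 1]
```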
NASA Technical Reports Server (NTRS)
Dominquez, Jesus A.; Tate, Lanetra C.; Wright, M. Clara; Caraccio, Anne
2013-01-01
Accomplishing the best-performing composite matrix (resin) requires that both the processing method and the cure cycle generate low-void-content structures. If voids are present, the performance of the composite matrix will be significantly reduced, which usually appears as significant reductions in matrix-dominated properties such as compression and shear strength. Voids in composite materials are areas devoid of the composite components, matrix and fibers. Accurately characterizing and estimating the voids is critical for high-performance composite structures. One widely used method of performing void analysis on a composite sample is to acquire optical micrographs or Scanning Electron Microscope (SEM) images of lateral sides of the sample and retrieve the void areas within the micrographs/images using an image analysis technique. Segmentation for the retrieval and subsequent computation of void areas is challenging because the gray-scale values of the void areas are close to those of the matrix, so the segmentation usually has to be performed manually based on the histogram of the micrographs/images. An algorithm developed by NASA and based on Fuzzy Reasoning (FR) proved able to overcome the difficulty of differentiating void and matrix image areas with similar gray-scale values, leading not only to a more accurate estimation of void areas in composite matrix micrographs but also to a faster void analysis process, as the algorithm is fully autonomous.
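The NASA FR algorithm itself is not reproduced here; the sketch below only illustrates the general idea of resolving overlapping gray levels by comparing a "void" and a "matrix" fuzzy membership per pixel. The reference gray levels and membership width are assumptions, not calibrated values.

```python
import numpy as np

def segment_voids(micrograph, void_gray=60.0, matrix_gray=110.0, width=25.0):
    """Label a pixel as 'void' where its membership in a Gaussian-shaped
    'void gray level' fuzzy set exceeds its membership in the 'matrix' set.
    Reference gray levels and width are illustrative, not calibrated values."""
    g = micrograph.astype(float)
    mu_void = np.exp(-((g - void_gray) / width) ** 2)      # void membership
    mu_matrix = np.exp(-((g - matrix_gray) / width) ** 2)   # matrix membership
    void_mask = mu_void > mu_matrix
    void_fraction = void_mask.mean()                         # void area fraction
    return void_mask, void_fraction
```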
Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa; Yasuno, Yoshiaki
2017-01-01
Jones-matrix-based polarization-sensitive optical coherence tomography (JM-OCT) simultaneously measures optical intensity, birefringence, degree of polarization uniformity, and OCT angiography. The statistics of the optical features in a local region, such as the local mean of the OCT intensity, are frequently used for image processing and the quantitative analysis of JM-OCT. Conventionally, local statistics have been computed with fixed-size rectangular kernels. However, this results in a trade-off between image sharpness and statistical accuracy. We introduce a superpixel method to JM-OCT for generating flexible kernels for local statistics. A superpixel is a cluster of image pixels formed according to the pixels' spatial and signal-value proximities. An algorithm for superpixel generation specialized for JM-OCT and its optimization methods are presented in this paper. The spatial proximity is defined in two-dimensional cross-sectional space and the signal values are the four optical features; hence, the superpixel method is a six-dimensional clustering technique for JM-OCT pixels. The performance of the JM-OCT superpixels and the optimization methods is evaluated in detail using JM-OCT datasets of posterior eyes. The superpixels were found to preserve tissue structures well, such as layer structures, sclera, vessels, and retinal pigment epithelium, and are therefore more suitable as local statistics kernels than conventional uniform rectangular kernels. PMID:29082073
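A rough approximation of six-dimensional superpixel clustering can be obtained with a SLIC-like k-means over joint spatial and optical-feature vectors, as sketched below; the compactness weight, normalization, and use of plain k-means are assumptions and differ from the specialized algorithm described in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def jmoct_superpixels(features, n_superpixels=200, compactness=0.1, seed=0):
    """Cluster B-scan pixels in a 6-D space: 2 spatial axes plus 4 optical
    features (intensity, birefringence, DOPU, OCTA), SLIC-style via k-means.

    features : (rows, cols, 4) array of the four JM-OCT signal values.
    Returns an integer superpixel label per pixel."""
    rows, cols, _ = features.shape
    yy, xx = np.mgrid[0:rows, 0:cols]
    spatial = np.stack([yy, xx], axis=-1).astype(float) / max(rows, cols)
    feats = (features - features.mean((0, 1))) / (features.std((0, 1)) + 1e-9)
    X = np.concatenate([compactness * spatial, feats], axis=-1).reshape(-1, 6)
    labels = KMeans(n_clusters=n_superpixels, n_init=4,
                    random_state=seed).fit_predict(X)
    return labels.reshape(rows, cols)
```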
Dalton, J.B.; Bove, D.J.; Mladinich, C.S.
2005-01-01
Visible-wavelength and near-infrared image cubes of the Animas River watershed in southwestern Colorado have been acquired by the Jet Propulsion Laboratory's Airborne Visible and InfraRed Imaging Spectrometer (AVIRIS) instrument and processed using the U.S. Geological Survey Tetracorder v3.6a2 implementation. The Tetracorder expert system utilizes a spectral reference library containing more than 400 laboratory and field spectra of end-member minerals, mineral mixtures, vegetation, manmade materials, atmospheric gases, and additional substances to generate maps of mineralogy, vegetation, snow, and other material distributions. Major iron-bearing, clay, mica, carbonate, sulfate, and other minerals were identified, among which are several minerals associated with acid rock drainage, including pyrite, jarosite, alunite, and goethite. Distributions of minerals such as calcite and chlorite indicate a relationship between acid-neutralizing assemblages and stream geochemistry within the watershed. Images denoting material distributions throughout the watershed have been orthorectified against digital terrain models to produce georeferenced image files suitable for inclusion in Geographic Information System databases. Results of this study are of use to land managers, stakeholders, and researchers interested in understanding a number of characteristics of the Animas River watershed.
Dark-field microscopic image stitching method for surface defects evaluation of large fine optics.
Liu, Dong; Wang, Shitong; Cao, Pin; Li, Lu; Cheng, Zhongtao; Gao, Xin; Yang, Yongying
2013-03-11
One of the challenges in surface defect evaluation of large fine optics is detecting defects of a few microns on surfaces of tens or hundreds of millimeters. Sub-aperture scanning and stitching is considered a practical and efficient method. However, since there are usually few defects on large-aperture fine optics, many sub-aperture images contain no defects or only one run-through line feature, and traditional stitching methods encounter mismatch problems. In this paper, a feature-based multi-cycle image stitching algorithm is proposed to solve this problem. The overlapping areas of sub-apertures are categorized based on the features they contain. Different types of overlapping areas are then stitched in different cycles with different methods. The stitching trace is changed to follow the one determined by the features. The whole stitching procedure is a region-growing-like process: sub-aperture blocks grow bigger after each cycle, and finally the full-aperture image is obtained. A comparison experiment shows that the proposed method is well suited to stitching sub-apertures whose overlapping areas contain very little feature information and can stitch the dark-field microscopic sub-aperture images very well.
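For contrast with the multi-cycle scheme, a generic feature-based pairwise stitch (ORB keypoints plus a RANSAC homography in OpenCV) is sketched below; this is exactly the kind of conventional step that fails when the overlap contains almost no features, which is the problem the paper addresses.

```python
import cv2
import numpy as np

def stitch_pair(img_a, img_b, min_matches=10):
    """Estimate a homography from ORB feature matches and warp img_b into
    img_a's frame. A generic pairwise step, not the paper's multi-cycle scheme."""
    orb = cv2.ORB_create(2000)
    ka, da = orb.detectAndCompute(img_a, None)
    kb, db = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(da, db), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise ValueError("not enough feature matches in the overlap area")
    src = np.float32([kb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([ka[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = img_a.shape[:2]
    return cv2.warpPerspective(img_b, H, (w * 2, h))   # simple canvas choice
```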
Lyu, Tao; Yao, Suying; Nie, Kaiming; Xu, Jiangtao
2014-11-17
A 12-bit high-speed column-parallel two-step single-slope (SS) analog-to-digital converter (ADC) for CMOS image sensors is proposed. The proposed ADC employs a single ramp voltage and multiple reference voltages, and the conversion is divided into a coarse phase and a fine phase to improve the conversion rate. An error calibration scheme is proposed to correct errors caused by offsets among the reference voltages. The digital-to-analog converter (DAC) used for the ramp generator is based on a split-capacitor array with an attenuation capacitor. An analysis of the DAC's linearity versus capacitor mismatch and parasitic capacitance is presented. A prototype 1024 × 32 Time Delay Integration (TDI) CMOS image sensor with the proposed ADC architecture has been fabricated in a standard 0.18 μm CMOS process. The proposed ADC has an average power consumption of 128 μW and a conversion rate 6 times higher than that of the conventional SS ADC. A high-quality image, captured at a line rate of 15.5 k lines/s, shows that the proposed ADC is suitable for high-speed CMOS image sensors.
360 degree vision system: opportunities in transportation
NASA Astrophysics Data System (ADS)
Thibault, Simon
2007-09-01
Panoramic technologies are experiencing new and exciting opportunities in the transportation industries. The advantages of panoramic imagers are numerous: increased area coverage with fewer cameras, imaging of multiple targets simultaneously, instantaneous full-horizon detection, easier integration of various applications on the same imager, and others. This paper reports our work on panomorph optics and their potential usage in transportation applications. The novel panomorph lens is a new type of high-resolution panoramic imager perfectly suited to the transportation industries. The panomorph lens uses optimization techniques to improve the performance of a customized optical system for specific applications. By adding a custom angle-to-pixel relation at the optical design stage, the optical system provides ideal image coverage that is designed to reduce and optimize the processing. The optics can be customized for the visible, near-infrared (NIR), or infrared (IR) wavebands. The panomorph lens is designed to optimize the cost per pixel, which is particularly important in the IR. We discuss the use of the 360-degree vision system, which can enhance on-board collision avoidance systems, intelligent cruise control, and parking assistance. 360-degree panoramic vision systems might enable safer highways and a significant reduction in casualties.
Multimodal 3D cancer-mimicking optical phantom
Smith, Gennifer T.; Lurie, Kristen L.; Zlatev, Dimitar V.; Liao, Joseph C.; Ellerbee Bowden, Audrey K.
2016-01-01
Three-dimensional (3D) organ-mimicking phantoms provide realistic imaging environments for testing various aspects of optical systems, including for evaluating new probe designs, characterizing the diagnostic potential of new technologies, and assessing novel image processing algorithms prior to validation in real tissue. We introduce and characterize the use of a new material, Dragon Skin (Smooth-On Inc.), and fabrication technique, air-brushing, for fabrication of a 3D phantom that mimics the appearance of a real organ under multiple imaging modalities. We demonstrate the utility of the material and technique by fabricating the first 3D, hollow bladder phantom with realistic normal and multi-stage pathology features suitable for endoscopic detection using the gold standard imaging technique, white light cystoscopy (WLC), as well as the complementary imaging modalities of optical coherence tomography and blue light cystoscopy, which are aimed at improving the sensitivity and specificity of WLC to bladder cancer detection. The flexibility of the material and technique used for phantom construction allowed for the representation of a wide range of diseased tissue states, ranging from inflammation (benign) to high-grade cancerous lesions. Such phantoms can serve as important tools for trainee education and evaluation of new endoscopic instrumentation. PMID:26977369
MRXCAT: Realistic numerical phantoms for cardiovascular magnetic resonance
2014-01-01
Background: Computer simulations are important for validating novel image acquisition and reconstruction strategies. In cardiovascular magnetic resonance (CMR), numerical simulations need to combine anatomical information and the effects of cardiac and/or respiratory motion. To this end, a framework for realistic CMR simulations is proposed and its use for image reconstruction from undersampled data is demonstrated. Methods: The extended Cardiac-Torso (XCAT) anatomical phantom framework with various motion options was used as a basis for the numerical phantoms. Different tissue, dynamic contrast and signal models, multiple receiver coils and noise are simulated. Arbitrary trajectories and undersampled acquisition can be selected. The utility of the framework is demonstrated for accelerated cine and first-pass myocardial perfusion imaging using k-t PCA and k-t SPARSE. Results: MRXCAT phantoms allow for realistic simulation of CMR including optional cardiac and respiratory motion. Example reconstructions from simulated undersampled k-t parallel imaging demonstrate the feasibility of simulated acquisition and reconstruction using the presented framework. Myocardial blood flow assessment from simulated myocardial perfusion images highlights the suitability of MRXCAT for quantitative post-processing simulation. Conclusion: The proposed MRXCAT phantom framework enables versatile and realistic simulations of CMR including breathhold and free-breathing acquisitions. PMID:25204441
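A minimal NumPy sketch of the kind of undersampled k-t data such a framework produces is given below: a dynamic phantom is Fourier-transformed, an interleaved phase-encode mask is applied, and complex noise is added. It is single-coil and Cartesian, and it is not the MRXCAT code; acceleration factor and noise level are assumptions.

```python
import numpy as np

def simulate_undersampled_kspace(frames, accel=4, noise_std=0.01, seed=0):
    """Turn a (t, rows, cols) dynamic phantom into undersampled k-t data.

    Keeps every `accel`-th phase-encode line with a time-shifted (interleaved)
    pattern and adds complex Gaussian noise. Single coil; purely illustrative."""
    rng = np.random.default_rng(seed)
    t, rows, cols = frames.shape
    kspace = np.fft.fftshift(np.fft.fft2(frames, axes=(-2, -1)), axes=(-2, -1))
    kspace += noise_std * (rng.standard_normal(kspace.shape)
                           + 1j * rng.standard_normal(kspace.shape))
    mask = np.zeros((t, rows, 1))
    for frame in range(t):
        mask[frame, frame % accel::accel, 0] = 1.0   # interleaved line pattern
    return kspace * mask, mask
```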
Al-Ruzouq, Rami; Shanableh, Abdallah; Omar, Maher; Al-Khayyat, Ghadeer
2018-02-17
Waste management involves various procedures and resources for the proper handling of waste materials in compliance with health codes and environmental regulations. Landfills are one of the oldest, most convenient, and cheapest methods of depositing waste. However, landfill utilization involves social, environmental, geotechnical, cost, and regulatory considerations. For instance, landfills are considered a source of hazardous air pollutants that can cause health and environmental problems related to landfill gas and non-methane organic compounds. The increasing number of sensors and the availability of remotely sensed images, along with the rapid development of spatial technology, are helping with effective landfill site selection. The present study used fuzzy membership and the analytical hierarchy process (AHP) in a geo-spatial environment for landfill site selection in the city of Sharjah, United Arab Emirates. Macro- and micro-level factors were considered; the macro level contained social and economic factors, while the micro level accounted for geo-environmental factors. The weighted spatial layers were combined to generate landfill suitability and overall suitability index maps. A sensitivity analysis was then carried out to rectify the initial theoretical weights. The results showed that 30.25% of the study area had a high suitability index for landfill sites in Sharjah, and the most suitable site was selected based on the weighted factors. The developed fuzzy-AHP methodology can be applied in neighboring regions with similar geo-natural conditions.
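The AHP weighting step can be illustrated with the standard principal-eigenvector computation and consistency ratio shown below; the example comparison matrix and criteria are hypothetical and are not taken from the study.

```python
import numpy as np

def ahp_weights(pairwise):
    """Criteria weights from an AHP pairwise comparison matrix via the
    principal eigenvector, plus the consistency ratio (CR)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (eigvals[k].real - n) / (n - 1)                  # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}.get(n, 1.41)
    return w, ci / ri

# Hypothetical 3-criterion comparison (distance to urban area, groundwater
# depth, road access) on Saaty's 1-9 scale; values are illustrative only.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, cr = ahp_weights(A)
print(weights, cr)   # CR < 0.1 is the usual acceptance threshold
```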
Performance of PHOTONIS' low light level CMOS imaging sensor for long range observation
NASA Astrophysics Data System (ADS)
Bourree, Loig E.
2014-05-01
Identification of potential threats in low-light conditions through imaging is commonly achieved through closed-circuit television (CCTV) and surveillance cameras by combining the extended near infrared (NIR) response (800-10000nm wavelengths) of the imaging sensor with NIR LED or laser illuminators. Consequently, camera systems typically used for purposes of long-range observation often require high-power lasers in order to generate sufficient photons on targets to acquire detailed images at night. While these systems may adequately identify targets at long-range, the NIR illumination needed to achieve such functionality can easily be detected and therefore may not be suitable for covert applications. In order to reduce dependency on supplemental illumination in low-light conditions, the frame rate of the imaging sensors may be reduced to increase the photon integration time and thus improve the signal to noise ratio of the image. However, this may hinder the camera's ability to image moving objects with high fidelity. In order to address these particular drawbacks, PHOTONIS has developed a CMOS imaging sensor (CIS) with a pixel architecture and geometry designed specifically to overcome these issues in low-light level imaging. By combining this CIS with field programmable gate array (FPGA)-based image processing electronics, PHOTONIS has achieved low-read noise imaging with enhanced signal-to-noise ratio at quarter moon illumination, all at standard video frame rates. The performance of this CIS is discussed herein and compared to other commercially available CMOS and CCD for long-range observation applications.
ACIR: automatic cochlea image registration
NASA Astrophysics Data System (ADS)
Al-Dhamari, Ibraheem; Bauer, Sabine; Paulus, Dietrich; Lissek, Friedrich; Jacob, Roland
2017-02-01
Efficient cochlear implant (CI) surgery requires prior knowledge of the cochlea's size and its characteristics. This information helps to select suitable implants for different patients. To obtain these measurements, a segmentation method for cochlea medical images is needed, and an important pre-processing step for good cochlea segmentation is efficient image registration. The cochlea's small size and complex structure, together with the different resolutions and head positions during imaging, make automated registration of the different image modalities a big challenge. In this paper, an Automatic Cochlea Image Registration (ACIR) method for multi-modal human cochlea images is proposed. The method is based on using small areas that have clear structures in both input images instead of registering the complete image. It uses the Adaptive Stochastic Gradient Descent optimizer (ASGD) and Mattes's Mutual Information metric (MMI) to estimate 3D rigid transform parameters. State-of-the-art medical image registration optimizers published over the last two years are studied and compared quantitatively using the standard Dice Similarity Coefficient (DSC). ACIR requires only 4.86 seconds on average to align cochlea images automatically and to put all the modalities in the same spatial locations without human interference. The source code is based on the tool elastix and is provided for free as a 3D Slicer plugin. Another contribution of this work is a proposed public cochlea standard dataset, which can be downloaded for free from a public XNAT server.
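The published implementation is based on elastix; the sketch below only approximates the same ingredients (Mattes mutual information, a stochastically sampled gradient-descent optimizer, a 3D rigid transform) with SimpleITK, so the parameter values and optimizer choice are assumptions rather than the paper's configuration.

```python
import SimpleITK as sitk

def rigid_register(fixed_path, moving_path):
    """3-D rigid registration with Mattes mutual information; a SimpleITK
    approximation of the paper's elastix/ASGD setup, not its exact settings."""
    fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    init = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(init, inPlace=False)
    return reg.Execute(fixed, moving)    # estimated rigid transform
```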
Automation of Technology for Cancer Research.
van der Ent, Wietske; Veneman, Wouter J; Groenewoud, Arwin; Chen, Lanpeng; Tulotta, Claudia; Hogendoorn, Pancras C W; Spaink, Herman P; Snaar-Jagalska, B Ewa
2016-01-01
Zebrafish embryos can be obtained for research purposes in large numbers at low cost, and embryos develop externally in limited space, making them highly suitable for high-throughput cancer studies and drug screens. Non-invasive live imaging of various processes within the larvae is possible due to their transparency during development and a multitude of available fluorescent transgenic reporter lines. To perform high-throughput studies, handling large numbers of embryos and larvae is required. With such a high number of individuals, even minute tasks may become time-consuming and arduous. In this chapter, an overview is given of the developments in the automation of various steps of large-scale zebrafish cancer research for discovering important cancer pathways and drugs for the treatment of human disease. The focus lies on various tools developed for cancer cell implantation, embryo handling and sorting, microfluidic systems for imaging and drug treatment, and image acquisition and analysis. Examples are given of the employment of these technologies within the fields of toxicology research and cancer research.
Endoscopic fluorescence imaging for early assessment of anastomotic recurrence of Crohn's disease
NASA Astrophysics Data System (ADS)
Mordon, Serge R.; Maunoury, Vincent; Geboes, K.; Klein, Olivier; Desreumaux, P.; Debaert, A.; Colombel, Jean-Frederic
1999-02-01
Crohn's disease is an inflammatory bowel disease of unknown etiology. The mechanism of the initial mucosal alterations is still unclear: ulcerations overlying lymphoid follicles and/or vasculitis have been proposed as the early lesions. We have developed a new and original method combining endoscopy with fluorescence angiography for identifying the early pathological lesions occurring in the neo-terminal ileum after right ileocolonic resection. The patient population consisted of 10 subjects enrolled in a prospective protocol of endoscopic follow-up at 3 and 12 months after surgery. Fluorescence imaging showed small, brightly fluorescent spots distributed singly in mucosa that appeared normal in routine endoscopy. Histopathological examination demonstrated that the fluorescence of the small spots originated from small, usually superficial, erosive lesions. In several cases, these erosive lesions occurred over lymphoid follicles. Endoscopic fluorescence imaging provides a suitable means of investigating the initial aspect of the Crohn's disease process, displaying correlations between fluorescent aspects and early pathological mucosal alterations.
Active control of jet flowfields
NASA Astrophysics Data System (ADS)
Kibens, Valdis; Wlezien, Richard W.
1987-06-01
Passive and active control of jet shear layer development were investigated as mechanisms for modifying the global characteristics of jet flowfields. Slanted and stepped indeterminate origin (I.O.) nozzles were used as passive, geometry-based control devices which modified the flow origins. Active control techniques were also investigated, in which periodic acoustic excitation signals were injected into the I.O. nozzle shear layers. Flow visualization techniques based on a pulsed copper-vapor laser were used in a phase-conditioned image acquisition mode to assemble optically averaged sets of images acquired at known times throughout the repetition cycle of the basic flow oscillation period. Hot wire data were used to verify the effect of the control techniques on the mean and fluctuating flow properties. The flow visualization images were digitally enhanced and processed to show the locations of prominent vorticity concentrations. Three-dimensional vortex interaction patterns were assembled in a format suitable for movie mode on a graphic display workstation, showing the evolution of the three-dimensional vortex system in time.
Large Area Microencapsulated Reflective Guest-Host Liquid Crystal Displays and Their Applications
NASA Astrophysics Data System (ADS)
Nakai, Yutaka; Tanaka, Masao; Enomoto, Shintaro; Iwanaga, Hiroki; Hotta, Aira; Kobayashi, Hitoshi; Oka, Toshiyuki; Kizaki, Yukio; Kidzu, Yuko; Naito, Katsuyuki
2002-07-01
We have developed reflective liquid crystal displays using microencapsulated guest-host liquid crystals, whose size was sufficiently large for viewing documents. A high-brightness image can be realized because there is no need for polarizers. Easy fabrication processes, consisting of screen-printing of microencapsulated liquid crystal and film adhesion, have enabled the realization of thinner and lighter cell structures. It has been confirmed that the display is tolerant of the pressures to which it would be subject in actual use. The optimization of fabrication processes has enabled the realization of reflectance uniformity in the display area and reduction of the driving voltage. Our developed display is suitable for portable information systems, such as electronic book applications.
Böttger, T; Grunewald, K; Schöbinger, M; Fink, C; Risse, F; Kauczor, H U; Meinzer, H P; Wolf, Ivo
2007-03-07
Recently it has been shown that regional lung perfusion can be assessed using time-resolved contrast-enhanced magnetic resonance (MR) imaging. Quantification of the perfusion images has been attempted based on the definition of small regions of interest (ROIs). Using complete lung segmentations instead of ROIs could possibly increase quantification accuracy. Due to the low signal-to-noise ratio, automatic segmentation algorithms cannot be applied; on the other hand, manual segmentation of the lung tissue is very time consuming and can become inaccurate, as the borders of the lung with adjacent tissues are not always clearly visible. We propose a new workflow for semi-automatic segmentation of the lung from additionally acquired morphological HASTE MR images. First the lung is delineated semi-automatically in the HASTE image. Next the HASTE image is automatically registered with the perfusion images. Finally, the transformation resulting from the registration is used to align the lung segmentation from the morphological dataset with the perfusion images. We evaluated rigid, affine, and locally elastic transformations, suitable optimizers, and different implementations of mutual information (MI) metrics to determine the best possible registration algorithm. We identified the shortcomings of the registration procedure and the conditions under which automatic registration will succeed or fail. Segmentation results were evaluated using overlap and distance measures. Integration of the new workflow reduces the time needed for post-processing of the data, simplifies perfusion quantification, and reduces interobserver variability in the segmentation process. In addition, the matched morphological dataset can be used to identify morphologic changes as the source of the perfusion abnormalities.
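The final step of the workflow, carrying the lung mask onto the perfusion grid with the estimated transform, could look like the following SimpleITK sketch; nearest-neighbour interpolation is used so label values are preserved. Function and variable names are illustrative, not the authors' code.

```python
import SimpleITK as sitk

def propagate_mask(lung_mask, perfusion_frame, transform):
    """Resample a lung label mask (drawn on the morphological HASTE image)
    into the grid of a perfusion frame using the transform found by the
    registration. Nearest-neighbour interpolation keeps labels intact."""
    return sitk.Resample(lung_mask, perfusion_frame, transform,
                         sitk.sitkNearestNeighbor, 0, lung_mask.GetPixelID())

# usage (paths and transform are placeholders):
# mask_on_perfusion = propagate_mask(haste_mask, perfusion_frame, rigid_tx)
```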
Reach out to one and you reach out to many: social touch affects third-party observers.
Schirmer, Annett; Reece, Christy; Zhao, Claris; Ng, Erik; Wu, Esther; Yen, Shih-Cheng
2015-02-01
Casual social touch influences emotional perceptions, attitudes, and behaviours of interaction partners. We asked whether these influences extend to third-party observers. To this end, we developed the Social Touch Picture Set comprising line drawings of dyadic interactions, half of which entailed publicly acceptable casual touch and half of which served as no-touch controls. In Experiment 1, participants provided basic image norms by rating how frequently they observed a displayed touch gesture in everyday life and how comfortable they were observing it. Results implied that some touch gestures were observed more frequently and with greater comfort than others (e.g., handshake vs. hug). All gestures, however, obtained rating scores suitable for inclusion in Experiments 2 and 3. In Experiment 2, participants rated perceived valence, arousal, and likeability of randomly presented touch and no-touch images without being explicitly informed about touch. Image characters seemed more positive, aroused, and likeable when they touched as compared to when they did not touch. Image characters seemed more negative and aroused, but were equally likeable, when they received touch as compared to when there was no physical contact. In Experiment 3, participants passively viewed touch and no-touch images while their eye movements were recorded. Differential gazing at touch as compared to no-touch images emerged within the first 500 ms following image exposure and was largely restricted to the characters' upper body. Gazing at the touching body parts (e.g., hands) was minimal and largely unaffected by touch, suggesting that touch processing occurred outside the focus of visual attention. Together, these findings establish touch as an important visual cue and provide novel insights into how this cue modulates socio-emotional processing in third-party observers. © 2014 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Laher, Russ
2012-08-01
Aperture Photometry Tool (APT) is software for astronomers and students interested in manually exploring the photometric qualities of astronomical images. It has a graphical user interface (GUI) which allows the image data associated with aperture photometry calculations for point and extended sources to be visualized and, therefore, more effectively analyzed. Mouse-clicking on a source in the displayed image draws a circular or elliptical aperture and sky annulus around the source and computes the source intensity and its uncertainty, along with several commonly used measures of the local sky background and its variability. The results are displayed and can be optionally saved to an aperture-photometry-table file and plotted on graphs in various ways using functions available in the software. APT is geared toward processing sources in a small number of images and is not suitable for bulk processing a large number of images, unlike other aperture photometry packages (e.g., SExtractor). However, APT does have a convenient source-list tool that enables calculations for a large number of detections in a given image. The source-list tool can be run either in automatic mode to generate an aperture photometry table quickly or in manual mode to permit inspection and adjustment of the calculation for each individual detection. APT displays a variety of useful graphs, including the image histogram, aperture slices, source scatter plot, sky scatter plot, sky histogram, radial profile, curve of growth, and aperture-photometry-table scatter plots and histograms. APT has functions for customizing calculations, including outlier rejection, pixel "picking" and "zapping," and a selection of source and sky models. The radial-profile-interpolation source model, accessed via the radial-profile-plot panel, allows recovery of source intensity from pixels with missing data and can be especially beneficial in crowded fields.
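The basic measurement APT performs for a single source can be written in a few lines of NumPy, as sketched below: sum the pixels in a circular aperture and subtract a median sky level estimated in an annulus. This is a generic illustration, not APT's actual algorithm or uncertainty model, and the radii are arbitrary defaults.

```python
import numpy as np

def circular_aperture_photometry(image, x0, y0, r_ap=5.0, r_in=8.0, r_out=12.0):
    """Circular-aperture source intensity with a median sky estimated in an
    annulus; a plain NumPy sketch of the basic measurement, not APT's code."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    in_aperture = r <= r_ap
    in_annulus = (r >= r_in) & (r <= r_out)
    sky_per_pixel = np.median(image[in_annulus])     # local sky background
    sky_sigma = np.std(image[in_annulus])            # sky variability
    net_flux = image[in_aperture].sum() - sky_per_pixel * in_aperture.sum()
    return net_flux, sky_per_pixel, sky_sigma
```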
NASA Astrophysics Data System (ADS)
Rosu-Hamzescu, Mihnea; Polonschii, Cristina; Oprea, Sergiu; Popescu, Dragos; David, Sorin; Bratu, Dumitru; Gheorghiu, Eugen
2018-06-01
Electro-optical measurements, i.e., optical waveguide and plasmonic-based electrochemical impedance spectroscopy (P-EIS), rely on the sensitive dependence of the refractive index of electro-optical sensors on surface charge density, modulated by an AC electrical field applied to the sensor surface. Recently, P-EIS has emerged as a new analytical tool that can resolve local impedance with high, optical spatial resolution, without using microelectrodes. This study describes a high-speed image acquisition and processing system for electro-optical measurements, based on a high-speed complementary metal-oxide semiconductor (CMOS) sensor and a field-programmable gate array (FPGA) board. The FPGA is used to configure the CMOS parameters, as well as to receive and locally process the acquired images by performing a Fourier analysis for each pixel, deriving the real and imaginary parts of the Fourier coefficients at the AC field frequencies. An AC field generator, for single- or multi-sine signals, is synchronized with the high-speed acquisition system for phase measurements. The system was successfully used for real-time angle-resolved electro-plasmonic measurements from 30 Hz up to 10 kHz, providing results consistent with those obtained by a conventional electrical impedance approach. The system was able to detect relative amplitude variations of ±1%, even for rather low sampling rates per period (i.e., 8 samples per period). The PC (personal computer) acquisition and control software allows synchronized acquisition from multiple FPGA boards, making the system also suitable for simultaneous angle-resolved P-EIS imaging.
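The per-pixel Fourier analysis performed on the FPGA can be prototyped offline as below: each pixel's time series is correlated with cosine and sine references at the modulation frequency to obtain the real and imaginary coefficients. The frame rate, frequency, and scaling conventions here are assumptions, not the system's actual parameters.

```python
import numpy as np

def lockin_coefficients(frames, f_mod, frame_rate):
    """Per-pixel real/imaginary Fourier coefficients at the AC modulation
    frequency, computed offline in NumPy as a reference for the FPGA output.

    frames : (n_frames, rows, cols) image stack sampled at `frame_rate` (Hz).
    Scaled so that `amplitude` recovers the peak amplitude of a pure sinusoid."""
    n = frames.shape[0]
    t = np.arange(n) / frame_rate
    cos_ref = np.cos(2 * np.pi * f_mod * t)[:, None, None]
    sin_ref = np.sin(2 * np.pi * f_mod * t)[:, None, None]
    real_part = 2.0 / n * (frames * cos_ref).sum(axis=0)
    imag_part = -2.0 / n * (frames * sin_ref).sum(axis=0)
    amplitude = np.hypot(real_part, imag_part)
    phase = np.arctan2(imag_part, real_part)
    return real_part, imag_part, amplitude, phase
```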
In-Vivo Imaging of Cell Migration Using Contrast Enhanced MRI and SVM Based Post-Processing.
Weis, Christian; Hess, Andreas; Budinsky, Lubos; Fabry, Ben
2015-01-01
The migration of cells within a living organism can be observed with magnetic resonance imaging (MRI) in combination with iron oxide nanoparticles as an intracellular contrast agent. This method, however, suffers from low sensitivity and specificity. Here, we developed a quantitative non-invasive in-vivo cell localization method using contrast-enhanced multiparametric MRI and support vector machine (SVM) based post-processing. Imaging phantoms consisting of agarose with compartments containing different concentrations of cancer cells labeled with iron oxide nanoparticles were used to train and evaluate the SVM for cell localization. From the magnitude and phase data acquired with a series of T2*-weighted gradient-echo scans at different echo times, we extracted features that are characteristic of the presence of superparamagnetic nanoparticles, in particular hyper- and hypointensities, relaxation rates, short-range phase perturbations, and perturbation dynamics. High detection quality was achieved by SVM analysis of the multiparametric feature space. The in-vivo applicability was validated in animal studies. The SVM detected the presence of iron oxide nanoparticles in the imaging phantoms with high specificity and sensitivity, with a detection limit of 30 labeled cells per mm3, corresponding to 19 μM of iron oxide. As proof of concept, we applied the method to follow the migration of labeled cancer cells injected in rats. The combination of iron oxide labeled cells, multiparametric MRI, and SVM-based post-processing provides high spatial resolution, specificity, and sensitivity, and is therefore suitable for non-invasive in-vivo cell detection and cell migration studies over prolonged time periods.
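A minimal version of the SVM post-processing step, assuming per-voxel feature vectors have already been extracted from labeled phantom scans, is sketched below with scikit-learn; the kernel and parameters are defaults, not the values used in the study, and the feature names in the comments are illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_voxel_svm(features, labels):
    """Train an RBF-kernel SVM on per-voxel feature vectors (e.g. relaxation
    rate, magnitude at several echo times, local phase variance) extracted
    from labeled phantom scans. Feature choice and parameters are illustrative.

    features : (n_voxels, n_features) array
    labels   : (n_voxels,) array, 0/1 for 'no particles' / 'labeled cells'."""
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(features, labels)
    return clf

# later, per-voxel prediction over a new multiparametric scan:
# detection_mask = train_voxel_svm(X_train, y_train).predict(X_new_scan)
```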
Direct write of microlens array using digital projection photopolymerization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu Yi; Chen Shaochen
Microlens arrays are a key element in the fields of information processing, optoelectronics, and integrated optics. Many existing fabrication processes remain expensive and complicated even though relatively low-cost replication processes have been developed. Here, we demonstrate the fabrication of microlens arrays through projection photopolymerization using a digital micromirror device (DMD) as a dynamic photomask. The DMD projects grayscale images, which are designed in a computer, onto a photocurable resin. The resin is then solidified, with its thickness determined by the grayscale ultraviolet light and exposure time. Therefore, various geometries can be formed in a single-step, massively parallel fashion. We present microlens arrays made of an acrylate-based polymer precursor. The physical and optical characteristics of the resulting lenses suggest that this fabrication technique is potentially suitable for applications in integrated optics.
Processing Ti-25Ta-5Zr Bioalloy via Anodic Oxidation Procedure at High Voltage
NASA Astrophysics Data System (ADS)
Ionita, Daniela; Grecu, Mihaela; Dilea, Mirela; Cojocaru, Vasile Danut; Demetrescu, Ioana
2011-12-01
The current paper reports the processing of the Ti-25Ta-5Zr bioalloy via anodic oxidation in NH4BF4 solution under constant potentiostatic conditions at high voltage to obtain properties more suitable for biomedical applications. The maximum efficiency of the procedure is reached at the highest applied voltage, when the corrosion rate in Hank's solution is decreased approximately six times. The topography of the anodic layer was studied using atomic force microscopy (AFM), and the results indicated that the anodic oxidation process increases the surface roughness. The AFM images also indicated a different porosity for the anodized surfaces. After anodizing, the hydrophilic character of the Ti-25Ta-5Zr samples increased. A good correlation was obtained between the corrosion rate derived from potentiodynamic curves and the corrosion rate from ion release analysis.
The design of multi-core DSP parallel model based on message passing and multi-level pipeline
NASA Astrophysics Data System (ADS)
Niu, Jingyu; Hu, Jian; He, Wenjing; Meng, Fanrong; Li, Chuanrong
2017-10-01
Currently, the design of embedded signal processing systems is often based on a specific application, an approach that is not conducive to the rapid development of signal processing technology. In this paper, a parallel processing model architecture based on a multi-core DSP platform is designed; it is mainly suitable for complex algorithms composed of different modules. The model combines the ideas of multi-level pipeline parallelism and message passing and draws on the advantages of the mainstream multi-core DSP models (the Master-Slave model and the Data Flow model), giving it better performance. This paper uses a three-dimensional image generation algorithm to validate the efficiency of the proposed model by comparing it with the Master-Slave and Data Flow models.
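The combination of message passing and pipelining can be illustrated in plain Python with multiprocessing queues, as below: each stage is a worker that receives a data block, processes it, and forwards the result, so different blocks occupy different stages concurrently. This is a conceptual sketch of the model, not DSP code, and the stage functions are placeholders.

```python
import multiprocessing as mp

def stage(func, q_in, q_out):
    """One pipeline stage: receive a data block, process it, pass it on."""
    while True:
        item = q_in.get()
        if item is None:              # poison pill shuts the stage down
            q_out.put(None)
            break
        q_out.put(func(item))

def preprocess(x):
    return x * 2                      # stand-in for a processing module

def transform(x):
    return x + 1                      # stand-in for a second module

if __name__ == "__main__":
    q0, q1, q2 = mp.Queue(), mp.Queue(), mp.Queue()
    workers = [mp.Process(target=stage, args=(preprocess, q0, q1)),
               mp.Process(target=stage, args=(transform, q1, q2))]
    for w in workers:
        w.start()
    for block in range(5):            # blocks flow through the two-level pipeline
        q0.put(block)
    q0.put(None)
    results = []
    while (item := q2.get()) is not None:
        results.append(item)
    for w in workers:
        w.join()
    print(results)                    # [1, 3, 5, 7, 9]
```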
NASA Technical Reports Server (NTRS)
Johnson, J. R. (Principal Investigator)
1974-01-01
The author has identified the following significant results. The broad scale vegetation classification was developed for a 3,200 sq mile area in southeastern Arizona. The 31 vegetation types were derived from association tables which contained information taken at about 500 ground sites. The classification provided an information base that was suitable for use with small scale photography. A procedure was developed and tested for objectively comparing photo images. The procedure consisted of two parts, image groupability testing and image complexity testing. The Apollo and ERTS photos were compared for relative suitability as first stage stratification bases in two stage proportional probability sampling. High altitude photography was used in common at the second stage.
Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering.
Mars, Kamel; Lioe, De Xing; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro; Hashimoto, Mamoru
2017-11-09
Raman imaging eliminates the need for staining procedures, providing label-free imaging for studying biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speed and hyperspectral imaging. However, there has been a lack of detectors suitable for MHz-modulation-rate parallel detection, i.e., detecting multiple small SRS signals while eliminating the extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering that is capable of obtaining the difference of the Stokes-on and Stokes-off signals at a modulation frequency of 20 MHz in the pixel before readout. The generated small SRS signal is extracted and amplified in the pixel using a high-speed, large-area lateral electric field charge modulator (LEFM) employing two-step ion implantation and an in-pixel chain consisting of a low-pass filter, a sample-and-hold circuit, and a switched-capacitor integrator using a fully differential amplifier. A prototype chip was fabricated in a 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples were successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system.
How to Build a Hybrid Neurofeedback Platform Combining EEG and fMRI
Mano, Marsel; Lécuyer, Anatole; Bannier, Elise; Perronnet, Lorraine; Noorzadeh, Saman; Barillot, Christian
2017-01-01
Multimodal neurofeedback estimates brain activity using information acquired with more than one neurosignal measurement technology. In this paper we describe how to set up and use a hybrid platform based on simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), then we illustrate how to use it for conducting bimodal neurofeedback experiments. The paper is intended for those willing to build a multimodal neurofeedback system, to guide them through the different steps of the design, setup, and experimental applications, and help them choose a suitable hardware and software configuration. Furthermore, it reports practical information from bimodal neurofeedback experiments conducted in our lab. The platform presented here has a modular parallel processing architecture that promotes real-time signal processing performance and simple future addition and/or replacement of processing modules. Various unimodal and bimodal neurofeedback experiments conducted in our lab showed high performance and accuracy. Currently, the platform is able to provide neurofeedback based on electroencephalography and functional magnetic resonance imaging, but the architecture and the working principles described here are valid for any other combination of two or more real-time brain activity measurement technologies. PMID:28377691
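As a rough illustration of the modular parallel architecture described above, the sketch below runs two stand-in acquisition modules (EEG-like and fMRI-like) in separate threads, each pushing activity estimates into its own queue, while a feedback module combines the latest values. Module names, rates, and the weighted combination are illustrative assumptions, not the platform's actual API.

    # Minimal sketch of a modular, parallel bimodal pipeline: two acquisition modules push
    # estimates into queues and a feedback module combines them at its own update rate.
    import queue, threading, time, random

    eeg_q, fmri_q = queue.Queue(), queue.Queue()

    def acquire(q, rate_hz):
        for _ in range(int(rate_hz * 2)):            # simulate 2 seconds of acquisition
            q.put(random.random())                   # stand-in for a per-sample activity estimate
            time.sleep(1.0 / rate_hz)

    def feedback_loop(duration_s=2.0):
        last_eeg = last_fmri = 0.0
        end = time.time() + duration_s
        while time.time() < end:
            while not eeg_q.empty():
                last_eeg = eeg_q.get()
            while not fmri_q.empty():
                last_fmri = fmri_q.get()
            score = 0.5 * last_eeg + 0.5 * last_fmri  # bimodal feedback as a simple weighted sum
            time.sleep(0.1)                           # update the feedback display at ~10 Hz
        return score

    threads = [threading.Thread(target=acquire, args=(eeg_q, 50)),
               threading.Thread(target=acquire, args=(fmri_q, 1))]
    for th in threads:
        th.start()
    print("final feedback score:", feedback_loop())
    for th in threads:
        th.join()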
Fast, Accurate and Shift-Varying Line Projections for Iterative Reconstruction Using the GPU
Pratx, Guillem; Chinn, Garry; Olcott, Peter D.; Levin, Craig S.
2013-01-01
List-mode processing provides an efficient way to deal with sparse projections in iterative image reconstruction for emission tomography. An issue often reported is the tremendous amount of computation required by such algorithms. Each recorded event requires several back- and forward line projections. We investigated the use of the programmable graphics processing unit (GPU) to accelerate the line-projection operations and implement fully-3D list-mode ordered-subsets expectation-maximization for positron emission tomography (PET). We designed a reconstruction approach that incorporates resolution kernels, which model the spatially-varying physical processes associated with photon emission, transport and detection. Our development is particularly suitable for applications where the projection data is sparse, such as high-resolution, dynamic, and time-of-flight PET reconstruction. The GPU approach runs more than 50 times faster than an equivalent CPU implementation while image quality and accuracy are virtually identical. This paper describes in detail how the GPU can be used to accelerate the line projection operations, even when the lines-of-response have arbitrary endpoint locations and shift-varying resolution kernels are used. A quantitative evaluation is included to validate the correctness of this new approach. PMID:19244015
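As a concrete illustration of the per-event structure described above, the following is a minimal CPU sketch of a list-mode MLEM update, in which each recorded event contributes one forward projection along its line of response and one back projection of the reciprocal. The toy system matrix and event list are synthetic; the paper's GPU implementation additionally uses ordered subsets and shift-varying resolution kernels.

    # Minimal sketch of a list-mode MLEM update: each event contributes one forward
    # projection along its line-of-response (LOR) and one back projection of the ratio.
    import numpy as np

    n_vox, n_events = 16, 200
    rng = np.random.default_rng(0)
    # Each event's LOR is a sparse row of voxel intersection weights (toy model).
    lor = rng.random((n_events, n_vox)) * (rng.random((n_events, n_vox)) < 0.2)

    x = np.ones(n_vox)                          # initial image
    sens = lor.sum(axis=0) + 1e-12              # sensitivity image (backprojection of ones)

    for it in range(10):
        fwd = lor @ x + 1e-12                   # forward project the image along each event's LOR
        back = lor.T @ (1.0 / fwd)              # back project the reciprocal of the projections
        x *= back / sens                        # multiplicative list-mode MLEM update
    print(x.round(3))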
Compact wearable dual-mode imaging system for real-time fluorescence image-guided surgery
Zhu, Nan; Huang, Chih-Yu; Mondal, Suman; Gao, Shengkui; Huang, Chongyuan; Gruev, Viktor; Achilefu, Samuel; Liang, Rongguang
2015-01-01
A wearable all-plastic imaging system for real-time fluorescence image-guided surgery is presented. The compact size of the system makes it especially suitable for applications in the operating room. The system consists of a dual-mode imaging system, a see-through goggle, and autofocusing and auto-contrast tuning modules. The paper discusses the system design and demonstrates the system performance. PMID:26358823
Fluorescence Live Cell Imaging
Ettinger, Andreas
2014-01-01
Fluorescence microscopy of live cells has become an integral part of modern cell biology. Fluorescent protein tags, live cell dyes, and other methods to fluorescently label proteins of interest provide a range of tools to investigate virtually any cellular process under the microscope. The two main experimental challenges in collecting meaningful live cell microscopy data are to minimize photodamage while retaining a useful signal-to-noise ratio, and to provide a suitable environment for cells or tissues to replicate physiological cell dynamics. This chapter aims to give a general overview on microscope design choices critical for fluorescence live cell imaging that apply to most fluorescence microscopy modalities, and on environmental control with a focus on mammalian tissue culture cells. In addition, we provide guidance on how to design and evaluate fluorescent protein constructs by spinning disk confocal microscopy. PMID:24974023
Research and implementation of SATA protocol link layer based on FPGA
NASA Astrophysics Data System (ADS)
Liu, Wen-long; Liu, Xue-bin; Qiang, Si-miao; Yan, Peng; Wen, Zhi-gang; Kong, Liang; Liu, Yong-zheng
2018-02-01
To meet the high-performance, real-time, high-speed storage requirements of the image data generated by the detector, this work selects a portable SATA-interface hard disk as the image storage medium. Compared with existing storage media, it offers large capacity, a high transfer rate, low cost, retention of data on power loss, and many other advantages. This paper focuses on the link layer of the protocol: it analyzes the implementation process of the SATA 2.0 protocol and builds the corresponding state machines. It then analyzes the resources of the Kintex-7 FPGA family, implements the link-layer modules in Verilog according to the protocol state machines, and runs simulation tests. Finally, the design is tested on a Kintex-7 development board and essentially meets the requirements of the SATA 2.0 protocol.
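For orientation, the sketch below models a drastically simplified transmit-side link-layer handshake as a state machine, using only a small subset of SATA primitives (X_RDY, R_RDY, SOF/EOF, WTRM, R_OK/R_ERR). It is a behavioral illustration in Python, not the paper's Verilog design; a real SATA 2.0 link layer also handles CRC, scrambling, flow control (HOLD/HOLDA), and error recovery.

    # Drastically simplified sketch of a SATA link-layer transmit handshake as a state machine.
    # Only a few primitives/states are modeled; the actual design in the paper is Verilog on FPGA.
    class LinkTx:
        def __init__(self):
            self.state = "IDLE"

        def step(self, rx_primitive=None, have_frame=False):
            if self.state == "IDLE" and have_frame:
                self.state = "SEND_X_RDY"          # request to transmit a frame
            elif self.state == "SEND_X_RDY" and rx_primitive == "R_RDY":
                self.state = "SEND_DATA"           # receiver ready: send SOF, payload, CRC, EOF
            elif self.state == "SEND_DATA":
                self.state = "WAIT_STATUS"         # after EOF, send WTRM and wait for status
            elif self.state == "WAIT_STATUS" and rx_primitive in ("R_OK", "R_ERR"):
                self.state = "IDLE"                # frame accepted (R_OK) or rejected (R_ERR)
            return self.state

    tx = LinkTx()
    for stim in [dict(have_frame=True), dict(rx_primitive="R_RDY"), {}, dict(rx_primitive="R_OK")]:
        print(tx.step(**stim))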
Nanofabrication of insulated scanning probes for electromechanical imaging in liquid solutions
Noh, Joo Hyon; Nikiforov, Maxim; Kalinin, Sergei V.; Vertegel, Alexey A.; Rack, Philip D.
2011-01-01
In this paper, the fabrication of insulated scanning probes and their electrical and electromechanical characterization in liquid solutions are demonstrated. The silicon cantilevers were sequentially coated with chromium and silicon dioxide, and the silicon dioxide was selectively etched at the tip apex using focused electron-beam-induced etching (FEBIE) with XeF2. The chromium layer acted not only as the conductive path from the tip, but also as an etch-resistant layer. This insulated scanning probe fabrication process is compatible with any commercial AFM tip and can be used to easily tailor the scanning probe tip properties because FEBIE does not require lithography. The suitability of the fabricated probes is demonstrated by imaging of a standard topographical calibration grid as well as by piezoresponse force microscopy (PFM) and electrical measurements in ambient and liquid environments. PMID:20702930
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; McCrea, Andrew C.; Gruber, Jennifer R.; Hensley, Doyle W.; Verstynen, Harry A.; Oram, Timothy D.; Berger, Karen T.; Splinter, Scott C.; Horvath, Thomas J.; Kerns, Robert V.
2011-01-01
The Hypersonic Thermodynamic Infrared Measurements (HYTHIRM) project has been responsible for obtaining spatially resolved, scientifically calibrated in-flight thermal imagery of the Space Shuttle Orbiter during reentry. Starting with STS-119 in March 2009 and continuing through the majority of the final Space Shuttle flights, the HYTHIRM team has to date deployed during seven Shuttle missions with a mix of airborne and ground-based imaging platforms. Each deployment of the HYTHIRM team has resulted in imagery suitable for processing and comparison with computational models and wind tunnel data at Mach numbers ranging from above Mach 18 to below Mach 5. This paper will discuss the detailed mission planning and coordination with the NASA Johnson Space Center Mission Control Center that the HYTHIRM team undergoes to prepare for and execute each mission.
Confocal Laser Scanning Microscopy, a New In Vivo Diagnostic Tool for Schistosomiasis
Holtfreter, Martha Charlotte; Nohr-Łuczak, Constanze; Guthoff, Rudolf Friedrich; Reisinger, Emil Christian
2012-01-01
Background: The gold standard for the diagnosis of schistosomiasis is the detection of the parasite's characteristic eggs in urine, stool, or rectal and bladder biopsy specimens. Direct detection of eggs is difficult and not always possible in patients with low egg-shedding rates. Confocal laser scanning microscopy (CLSM) permits non-invasive cell imaging in vivo and is an established way of obtaining high-resolution images and 3-dimensional reconstructions. Recently, CLSM was shown to be a suitable method to visualize Schistosoma mansoni eggs within the mucosa of dissected mouse gut. In this case, we evaluated the suitability of CLSM to detect eggs of Schistosoma haematobium in a patient with urinary schistosomiasis and low egg-shedding rates. Methodology/Principal Findings: The confocal laser scanning microscope used in this study was based on a scanning laser system for imaging the retina of a living eye, the Heidelberg Retina Tomograph II, in combination with a lens system (imaging modality). Standard light cystoscopy was performed using a rigid cystoscope under general anaesthesia. The CLSM endoscope was then passed through the working channel of the rigid cystoscope. The mucosal tissue of the bladder was scanned using CLSM. Schistosoma haematobium eggs appeared as bright structures, with the characteristic egg shape and typical terminal spine. Conclusion/Significance: We were able to detect schistosomal eggs in the urothelium of a patient with urinary schistosomiasis. Thus, CLSM may be a suitable tool for the diagnosis of schistosomiasis in humans, especially in cases where standard diagnostic tools are not suitable. PMID:22529947
Image-based modelling of skeletal muscle oxygenation
Clough, G. F.
2017-01-01
The supply of oxygen in sufficient quantity is vital for the correct functioning of all organs in the human body, in particular for skeletal muscle during exercise. Disease is often associated with an inhibition of the microvascular supply capability and is thought to relate to changes in the structure of blood vessel networks. Different methods exist to investigate the influence of the microvascular structure on tissue oxygenation, spanning a range of application areas, i.e. biological in vivo and in vitro experiments, imaging and mathematical modelling. Ideally, all of these methods should be combined within the same framework in order to fully understand the processes involved. This review discusses the mathematical models of skeletal muscle oxygenation currently available that are based upon images taken of the muscle microvasculature in vivo and ex vivo. Imaging systems suitable for capturing the blood vessel networks are discussed and the respective contrasting methods presented. The review further discusses the association between anatomical characteristics in health and disease. With this review we give the reader a tool to understand and establish the workflow of developing an image-based model of skeletal muscle oxygenation. Finally, we give an outlook on the improvements needed in measurement and imaging techniques to adequately investigate the microvascular capability for oxygen exchange. PMID:28202595
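As a toy illustration of the kind of oxygen transport model such image-based frameworks generalize to whole microvascular networks, the sketch below evaluates the classic Krogh-cylinder solution for steady-state radial diffusion with uniform consumption around a single capillary. All parameter values are illustrative assumptions, not values from the review.

    # Minimal sketch of the classic Krogh-cylinder building block: steady-state radial O2
    # diffusion with uniform consumption around one capillary. Parameter values are illustrative.
    import numpy as np

    P_cap = 40.0      # capillary PO2 (mmHg), illustrative
    M     = 5.0e-4    # O2 consumption rate (mL O2 per mL tissue per s), illustrative
    K     = 9.4e-10   # Krogh diffusion coefficient (mL O2 cm^-1 s^-1 mmHg^-1), illustrative
    r_c   = 3e-4      # capillary radius (cm)
    r_t   = 30e-4     # Krogh tissue-cylinder radius (cm)

    r = np.linspace(r_c, r_t, 50)
    # Krogh-Erlang solution of (1/r) d/dr(r K dP/dr) = M with P(r_c) = P_cap, dP/dr(r_t) = 0:
    P = P_cap + (M / (4 * K)) * (r**2 - r_c**2) - (M * r_t**2 / (2 * K)) * np.log(r / r_c)
    print(f"PO2 at the cylinder edge: {P[-1]:.1f} mmHg")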
A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data
Venkat, A.; Christensen, C.; Gyulassy, A.; Summa, B.; Federer, F.; Angelucci, A.; Pascucci, V.
2017-01-01
The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large-scale panoramic images. The platform is organized around a hierarchical cache-oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the ViSUS framework used in an interactive setting with the microscopy data. PMID:28638896
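The hierarchical cache-oblivious layout mentioned above is built on a hierarchical Z-order traversal of the sample grid. As a rough illustration, the sketch below computes a plain Z-order (Morton) index by bit interleaving; the actual IDX/ViSUS HZ ordering additionally groups samples by resolution level and organizes them into blocks and files, so this shows only the underlying idea, not the format itself.

    # Minimal sketch of Z-order (Morton) bit interleaving, the building block of hierarchical
    # cache-oblivious layouts such as IDX; the real format adds level grouping and file layout.
    def morton2d(x, y, bits=16):
        """Interleave the bits of (x, y) into a single Z-order index."""
        z = 0
        for i in range(bits):
            z |= ((x >> i) & 1) << (2 * i)
            z |= ((y >> i) & 1) << (2 * i + 1)
        return z

    # Neighboring samples in 2D stay close in the 1D ordering, which keeps
    # coarse-to-fine region queries cache friendly:
    for (x, y) in [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (2, 1)]:
        print((x, y), "->", morton2d(x, y))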
A statistical pixel intensity model for segmentation of confocal laser scanning microscopy images.
Calapez, Alexandre; Rosa, Agostinho
2010-09-01
Confocal laser scanning microscopy (CLSM) has been widely used in the life sciences for the characterization of cell processes because it allows the recording of the distribution of fluorescence-tagged macromolecules on a section of the living cell. It is in fact the cornerstone of many molecular transport and interaction quantification techniques where the identification of regions of interest through image segmentation is usually a required step. In many situations, because of the complexity of the recorded cellular structures or because of the amounts of data involved, image segmentation is either too difficult or too inefficient to be done by hand, and automated segmentation procedures have to be considered. Given the nature of CLSM images, statistical segmentation methodologies appear as natural candidates. In this work we propose a model to be used for statistical unsupervised CLSM image segmentation. The model is derived from the CLSM image formation mechanics and its performance is compared to the existing alternatives. Results show that it provides a much better description of the data on classes characterized by their mean intensity, making it suitable not only for segmentation methodologies with a known number of classes but also for use with schemes aiming at the estimation of the number of classes through the application of cluster selection criteria.
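To illustrate the general class of approach, the sketch below performs unsupervised segmentation of pixel intensities with an EM-fitted mixture in which each class is characterized by its mean intensity. A Poisson mixture is used here purely as a stand-in; the paper derives its own pixel-intensity model from the CLSM image-formation mechanics.

    # Minimal sketch of unsupervised intensity-based segmentation via EM on a mixture model
    # whose classes are characterized by their mean intensity (Poisson mixture as a stand-in).
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(1)
    pixels = np.concatenate([rng.poisson(4, 3000), rng.poisson(25, 1000)])  # synthetic 2-class image

    K = 2
    w = np.full(K, 1.0 / K)                        # mixing weights
    mu = np.quantile(pixels, [0.25, 0.75]) + 0.5   # initial class mean intensities

    for _ in range(50):                            # EM iterations
        # E-step: posterior responsibility of each class for each pixel
        logp = np.log(w) + poisson.logpmf(pixels[:, None], mu)
        logp -= logp.max(axis=1, keepdims=True)
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights and class means
        w = resp.mean(axis=0)
        mu = (resp * pixels[:, None]).sum(axis=0) / resp.sum(axis=0)

    labels = resp.argmax(axis=1)                   # hard segmentation of the pixels
    print("estimated class means:", mu.round(2), " weights:", w.round(2))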
NASA Astrophysics Data System (ADS)
Shi, Jiyong; Chen, Wu; Zou, Xiaobo; Xu, Yiwei; Huang, Xiaowei; Zhu, Yaodi; Shen, Tingting
2018-01-01
Hyperspectral images (431-962 nm) and partial least squares (PLS) regression were used to detect the distribution of triterpene acids within loquat (Eriobotrya japonica) leaves. Seventy-two fresh loquat leaves from the young, mature and old groups were collected for hyperspectral imaging, and the triterpene acid content of the leaves was analyzed using high-performance liquid chromatography (HPLC). The spectral data from the loquat leaf hyperspectral images and the triterpene acid contents were then used to build calibration models. After spectral pre-processing and wavelength selection, an optimum calibration model (Rp = 0.8473, RMSEP = 2.61 mg/g) for predicting triterpene acids was obtained by synergy interval partial least squares (siPLS). Finally, the spectral data of each pixel in the loquat leaf hyperspectral image were extracted and substituted into the optimum calibration model to predict the triterpene acid content of each pixel, yielding a distribution map of triterpene acid content. As shown in the distribution map, triterpene acids accumulate mainly in the leaf mesophyll regions near the main veins, and the triterpene acid concentration of the young group is lower than that of the mature and old groups. This study showed that hyperspectral imaging is suitable for determining the distribution of active constituent content in medicinal herbs in a rapid and non-invasive manner.
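The pixel-wise mapping step described above can be sketched as follows: fit a PLS calibration on (mean leaf spectrum, HPLC reference value) pairs, then apply the model to every pixel spectrum of a hyperspectral cube. All data below are synthetic placeholders, and the actual study additionally applies spectral pre-processing and siPLS wavelength selection.

    # Minimal sketch of PLS calibration followed by pixel-wise prediction to build a
    # chemical distribution map. All spectra and reference values are synthetic.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n_leaves, n_bands = 72, 120
    true_coef = rng.normal(size=n_bands)

    X_cal = rng.normal(size=(n_leaves, n_bands))                       # mean spectrum per leaf
    y_cal = X_cal @ true_coef + rng.normal(scale=0.1, size=n_leaves)   # "HPLC" reference values

    pls = PLSRegression(n_components=8).fit(X_cal, y_cal)

    h, w = 50, 40                                                      # toy hyperspectral image
    cube = rng.normal(size=(h, w, n_bands))
    dist_map = pls.predict(cube.reshape(-1, n_bands)).reshape(h, w)    # predicted content per pixel
    print("distribution map shape:", dist_map.shape, " mean:", dist_map.mean().round(2))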
Zarkevich, Nikolai A.; Johnson, Duane D.
2015-01-09
The nudged-elastic band (NEB) method is modified with concomitant two climbing images (C2-NEB) to find a transition state (TS) in complex energy landscapes, such as those with a serpentine minimal energy path (MEP). If a single climbing image (C1-NEB) successfully finds the TS, then C2-NEB finds it too. Improved stability of C2-NEB makes it suitable for more complex cases, where C1-NEB misses the TS because the MEP and NEB directions near the saddle point are different. Generally, C2-NEB not only finds the TS, but guarantees, by construction, that the climbing images approach it from the opposite sides along the MEP. In addition, C2-NEB provides an accuracy estimate from the three images: the highest-energy one and its climbing neighbors. C2-NEB is suitable for fixed-cell NEB and the generalized solid-state NEB.
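For context, the standard single-climbing-image construction (which C2-NEB extends to two adjacent images approaching the saddle from opposite sides) drives the climbing image uphill by reversing the component of the true force along the local path tangent and omitting its spring forces; in the usual notation,

    \mathbf{F}_i^{\mathrm{CI}} = \mathbf{F}_i - 2\,(\mathbf{F}_i \cdot \hat{\boldsymbol{\tau}}_i)\,\hat{\boldsymbol{\tau}}_i

where F_i is the true force on climbing image i and τ̂_i is the unit tangent to the band at that image. This expression is quoted from the standard climbing-image NEB literature rather than from the abstract above.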
Experimental basis of myocardial imaging with 123I-labeled hexadecenoic acid.
Poe, N D; Robinson, G D; Graham, L S; MacDonald, N S
1976-12-01
Progress in myocardial perfusion imaging has been slowed by the lack of radiopharmaceuticals with suitable physical and biologic characteristics. Hexadecenoic acid, terminally labeled with 123I, partially overcomes these limitations by providing a compound that concentrates in the myocardium in proportion to relative regional blood flow and carries a gamma-emitter with desirable detection and imaging qualities. After intravenous injection in experimental animals, the clearance half-times of hexadecenoic acid for blood and myocardium are 1.7 and 20 min, respectively. These values compare favorably with 18-carbon fatty-acid analogs labeled with 11C. In acute and chronic infarction, similar distribution patterns are found for hexadecenoic acid and 43K, which indicates that hexadecenoic acid is a suitable substitute for the potassium analogs now in use for myocardial imaging. Because of the high count rates obtainable with 123I-hexadecenoic acid, good-quality images can be acquired in as little as 2-3 min per view. Iodine-123-hexadecenoic acid is potentially a useful radiopharmaceutical for clinical application.
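As a rough back-of-the-envelope reading of the quoted half-times (assuming mono-exponential clearance, which the abstract does not state explicitly), the fraction of activity remaining after time t is

    A(t)/A(0) = 2^{-t/T_{1/2}}, \qquad 2^{-3/20} \approx 0.90, \qquad 2^{-3/1.7} \approx 0.29

so over a 3-minute view the myocardial activity (half-time about 20 min) retains roughly 90% of its value while the blood activity (half-time about 1.7 min) falls to roughly 29%, which is consistent with short per-view acquisitions remaining practical.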
ERIC Educational Resources Information Center
McNiff, Shaun
1995-01-01
Discusses the studio as a therapeutic community of images in which the therapist functions as keeper of the space. It is not physical suitability alone that determines the suitability of the space; rather, distractions and imperfections in the space may more accurately mirror the state of the psyche and so induce the passionate engagement that calls forth…