Sample records for single input image

  1. Morphological-transformation-based technique of edge detection and skeletonization of an image using a single spatial light modulator

    NASA Astrophysics Data System (ADS)

    Munshi, Soumika; Datta, A. K.

    2003-03-01

    A technique of optically detecting the edge and skeleton of an image by defining shift operations for morphological transformation is described. A (2 × 2) source array, which acts as the structuring element of the morphological operations, casts four angularly shifted optical projections of the input image. The resulting dilated image, when superimposed with the complementary input image, produces the edge image. For skeletonization, the source array casts four partially overlapped images of the negated input image, and the resultant image is recorded by a CCD camera. This overlapped eroded image is eroded again and then dilated, producing an opened image. The difference between the eroded and opened images is then computed, resulting in a thinner image. This procedure is iterated until the difference image becomes zero, while maintaining the connectivity conditions. The technique has been implemented optically using a single spatial light modulator and has the advantage of single-instruction parallel processing of the image. It has been tested for both binary and grey-level images.
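
    The digital analogue of the edge step can be sketched in a few lines: dilate the binary image with a 2 × 2 structuring element (the four shifted projections) and AND the result with the complement of the input. This is an illustrative NumPy sketch, not the optical implementation; the function names are ours.

```python
import numpy as np

def dilate(img, se):
    """Binary dilation of img by structuring element se (both 0/1 uint8 arrays)."""
    H, W = img.shape
    h, w = se.shape
    out = np.zeros_like(img)
    for dy in range(h):
        for dx in range(w):
            if se[dy, dx]:
                shifted = np.zeros_like(img)
                shifted[dy:, dx:] = img[:H - dy, :W - dx]
                out |= shifted
    return out

def edge(img):
    """External edge: the dilated image ANDed with the complement of the input."""
    se = np.ones((2, 2), dtype=np.uint8)   # the (2 x 2) source array
    return dilate(img, se) & (1 - img)
```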

  2. Single-Image Super-Resolution Based on Rational Fractal Interpolation.

    PubMed

    Zhang, Yunfeng; Fan, Qinglan; Bao, Fangxun; Liu, Yifang; Zhang, Caiming

    2018-08-01

    This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.

  3. Fast single image dehazing based on image fusion

    NASA Astrophysics Data System (ADS)

    Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian

    2015-01-01

    In images captured in foggy weather, the colors of observed objects are faded and their contrast is reduced. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts the assumption that the degradation level caused by haze is the same within each region, which is similar to the Retinex theory, and uses a simple Gaussian filter to obtain the coarse medium transmission. Then, pixel-level fusion is performed between the initial and coarse medium transmissions. The proposed method can recover a high-quality haze-free image based on the physical model, and its complexity is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method allows a very fast implementation and achieves better restoration of visibility and color fidelity than some state-of-the-art methods.
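
    The first step, estimating the transmission from the dark channel prior, can be sketched as follows. This is a hedged NumPy illustration of the standard dark-channel estimate only; the Gaussian-filtered coarse transmission and the pixel-level fusion step of the paper are omitted, and the parameter values (patch size, omega, t0) are typical choices, not the authors'.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over RGB, then minimum over a patch x patch window."""
    mins = img.min(axis=2)
    H, W = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for y in range(H):
        for x in range(W):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def dehaze(img, A=1.0, omega=0.95, t0=0.1):
    """Recover scene radiance J from hazy image I via the model I = J*t + A*(1-t)."""
    t = 1.0 - omega * dark_channel(img / A)    # initial medium transmission
    t = np.maximum(t, t0)                      # floor to avoid division blow-up
    return (img - A) / t[..., None] + A
```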

  4. Neural network diagnosis of avascular necrosis from magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Manduca, Armando; Christy, Paul S.; Ehman, Richard L.

    1993-09-01

    We have explored the use of artificial neural networks to diagnose avascular necrosis (AVN) of the femoral head from magnetic resonance images. We have developed multi-layer perceptron networks, trained with conjugate gradient optimization, which diagnose AVN from single sagittal images of the femoral head with 100% accuracy on the training data and 97% accuracy on test data. These networks use only the raw image as input (with minimal preprocessing to average the images down to 32 × 32 size and to scale the input data values) and learn to extract their own features for the diagnosis decision. Various experiments with these networks are described.
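
    The minimal preprocessing described, block-averaging each image down to 32 × 32 and scaling the values, might look like this in NumPy (the function name and the [0, 1] scaling target are our assumptions):

```python
import numpy as np

def preprocess(img, size=32):
    """Block-average a square image down to size x size and scale into [0, 1]."""
    H, W = img.shape
    by, bx = H // size, W // size
    img = img[:by * size, :bx * size].astype(float)
    blocks = img.reshape(size, by, size, bx).mean(axis=(1, 3))
    lo, hi = blocks.min(), blocks.max()
    return (blocks - lo) / (hi - lo + 1e-12)
```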

  5. Automated imaging system for single molecules

    DOEpatents

    Schwartz, David Charles; Runnheim, Rodney; Forrest, Daniel

    2012-09-18

    There is provided a high throughput automated single molecule image collection and processing system that requires minimal initial user input. The unique features embodied in the present disclosure allow automated collection and initial processing of optical images of single molecules and their assemblies. Correct focus may be automatically maintained while images are collected. Uneven illumination in fluorescence microscopy is accounted for, and an overall robust imaging operation is provided yielding individual images prepared for further processing in external systems. Embodiments described herein are useful in studies of any macromolecules such as DNA, RNA, peptides and proteins. The automated image collection and processing system and method of same may be implemented and deployed over a computer network, and may be ergonomically optimized to facilitate user interaction.

  6. An Investigation of the Application of Artificial Neural Networks to Adaptive Optics Imaging Systems

    DTIC Science & Technology

    1991-12-01

    neural network and the feedforward neural network studied is the single-layer perceptron artificial neural network. The recurrent artificial neural network input ... features are the wavefront sensor slope outputs and neighboring actuator feedback commands. The feedforward artificial neural network input ...

  7. Material appearance acquisition from a single image

    NASA Astrophysics Data System (ADS)

    Zhang, Xu; Cui, Shulin; Cui, Hanwen; Yang, Lin; Wu, Tao

    2017-01-01

    The scope of this paper is to present a method of material appearance acquisition (MAA) from a single image. In this paper, material appearance is represented by a spatially varying bidirectional reflectance distribution function (SVBRDF). MAA can therefore be reduced to the problem of recovering each pixel's BRDF parameters from an original input image, which, based on the Blinn-Phong model, include the diffuse coefficient, specular coefficient, normal, and glossiness. In our method, the workflow of MAA includes five main phases: highlight removal, estimation of intrinsic images, shape from shading (SFS), initialization of glossiness, and refinement of the SVBRDF parameters based on IPOPT. The results indicate that the proposed technique can effectively extract the material appearance from a single image.

  8. Single-image super-resolution based on Markov random field and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Liu, Zheng; Gueaieb, Wail; He, Xiaohai

    2011-04-01

    Learning-based methods are well adopted in image super-resolution. In this paper, we propose a new learning-based approach using the contourlet transform and a Markov random field. The proposed algorithm employs the contourlet transform rather than the conventional wavelet to represent image features and takes into account the correlation between adjacent pixels or image patches through the Markov random field (MRF) model. The input low-resolution (LR) image is decomposed with the contourlet transform and fed to the MRF model together with the contourlet transform coefficients from the low- and high-resolution image pairs in the training set. The unknown high-frequency components/coefficients for the input low-resolution image are inferred by a belief propagation algorithm. Finally, the inverse contourlet transform converts the LR input and the inferred high-frequency coefficients into the super-resolved image. The effectiveness of the proposed method is demonstrated with experiments on facial, vehicle plate, and real scene images. Better visual quality is achieved in terms of peak signal-to-noise ratio and the image structural similarity measure.

  9. Reducible dictionaries for single image super-resolution based on patch matching and mean shifting

    NASA Astrophysics Data System (ADS)

    Rasti, Pejman; Nasrollahi, Kamal; Orlova, Olga; Tamberg, Gert; Moeslund, Thomas B.; Anbarjafari, Gholamreza

    2017-03-01

    A single-image super-resolution (SR) method is proposed. The proposed method uses a dictionary generated from pairs of high resolution (HR) images and their corresponding low resolution (LR) representations. First, the HR images and the corresponding LR ones are divided into HR and LR patches, respectively, which are collected into separate dictionaries. Afterward, when performing SR, the distance between every patch of the input LR image and each of the available LR patches in the LR dictionary is calculated. The LR dictionary patch at minimum distance from the input LR patch is taken, and its counterpart from the HR dictionary is passed through an illumination enhancement process. By this technique, the noticeable change of illumination between neighboring patches in the super-resolved image is significantly reduced. The enhanced HR patch represents the HR patch of the super-resolved image. Finally, to remove the blocking effect caused by merging the patches, the obtained HR image is averaged with the image obtained using bicubic interpolation. The quantitative and qualitative analyses show the superiority of the proposed technique over conventional and state-of-the-art methods.
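
    The patch lookup at the heart of the method can be sketched as a nearest-neighbour search over the LR dictionary, returning the HR counterpart. The mean-shift illumination adjustment shown is a crude stand-in for the paper's illumination enhancement process; the names and dictionary layout (one flattened patch per row) are our assumptions.

```python
import numpy as np

def best_match(lr_patch, lr_dict, hr_dict):
    """Return the HR counterpart of the closest LR dictionary patch,
    with a simple illumination (mean) adjustment.
    lr_dict: (N, p*p) flattened LR patches; hr_dict: (N, P*P) HR patches."""
    d = ((lr_dict - lr_patch.ravel()) ** 2).sum(axis=1)   # squared distances
    k = int(np.argmin(d))                                 # nearest LR patch
    hr = hr_dict[k].copy()
    # crude illumination matching: shift the HR patch to the LR patch's mean
    return hr + (lr_patch.mean() - lr_dict[k].mean())
```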

  10. Three-Dimensional Terahertz Coded-Aperture Imaging Based on Single Input Multiple Output Technology.

    PubMed

    Chen, Shuo; Luo, Chenggao; Deng, Bin; Wang, Hongqiang; Cheng, Yongqiang; Zhuang, Zhaowen

    2018-01-19

    As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporally independent signals with coded apertures. In this paper, we propose a three-dimensional (3D) TCAI architecture based on single input multiple output (SIMO) technology, which can sharply reduce the coding and sampling times. The coded aperture applied in the proposed TCAI architecture loads either a purposive or a random phase modulation factor. In the transmitting process, the purposive phase modulation factor drives the terahertz beam to scan the divided 3D imaging cells. In the receiving process, the random phase modulation factor is adopted to modulate the terahertz wave to be spatiotemporally independent for high resolution. Considering human-scale targets, images of each 3D imaging cell are reconstructed one by one to decompose the global computational complexity, and then are synthesized together to obtain the complete high-resolution image. For each imaging cell, the multi-resolution imaging method helps to reduce the computational burden of a large-scale reference-signal matrix. The experimental results demonstrate that the proposed architecture can achieve high-resolution imaging of 3D targets in much less time and has great potential in applications such as security screening, nondestructive detection, and medical diagnosis.

  11. Single image super-resolution based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia

    2018-03-01

    We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network that takes the LR image as input and outputs the HR image. Our network uses five convolution layers, with kernel sizes of 5×5, 3×3, and 1×1. In the proposed network, we use residual learning and combine different sizes of convolution kernels at the same layer. The experimental results show that our proposed method outperforms existing methods on benchmark images, both in reconstruction quality indices and in human visual assessment.
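
    The two building blocks, a 'same'-padded convolution and a residual connection, can be illustrated for a single channel in NumPy. This sketch does not train anything and is not the authors' network; it only shows the layer arithmetic that a five-layer residual CNN with 5×5, 3×3, and 1×1 kernels would stack.

```python
import numpy as np

def conv2d(x, k):
    """'Same' 2-D convolution (strictly, cross-correlation) with zero padding,
    for odd-sized kernels such as 5x5, 3x3, and 1x1."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    H, W = x.shape
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * xp[i:i + H, j:j + W]
    return out

def residual_layer(x, k):
    """Residual learning: the layer predicts a correction added to its input."""
    return x + np.maximum(conv2d(x, k), 0)   # ReLU on the learned residual
```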

  12. Integrated infrared and visible image sensors

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)

    2000-01-01

    Semiconductor imaging devices integrating an array of visible detectors and another array of infrared detectors into a single module to simultaneously detect both the visible and infrared radiation of an input image. The visible detectors and the infrared detectors may be formed either on two separate substrates or on the same substrate by interleaving visible and infrared detectors.

  13. Satellite Image Mosaic Engine

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2006-01-01

    A computer program automatically builds large, full-resolution mosaics of multispectral images of Earth landmasses from images acquired by Landsat 7, complete with matching of colors and blending between adjacent scenes. While the code has been used extensively for Landsat, it could also be used for other data sources. A single mosaic of as many as 8,000 scenes, represented by more than 5 terabytes of data and the largest set produced in this work, demonstrated what the code could do to provide global coverage. The program first statistically analyzes input images to determine areas of coverage and data-value distributions. It then transforms the input images from their original universal transverse Mercator coordinates to other geographical coordinates, with scaling. It applies a first-order polynomial brightness correction to each band in each scene. It uses a data-mask image for selecting data and blending of input scenes. Under control by a user, the program can be made to operate on small parts of the output image space, with check-point and restart capabilities. The program runs on SGI IRIX computers. It is capable of parallel processing using shared-memory code, large memories, and tens of central processing units. It can retrieve input data and store output data at locations remote from the processors on which it is executed.
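
    The per-band first-order polynomial brightness correction amounts to fitting a gain and offset by least squares over overlapping pixels; a minimal sketch (function names are ours):

```python
import numpy as np

def fit_brightness(src, ref):
    """Least-squares first-order (gain/offset) correction so that
    a*src + b best matches ref over the overlap pixels."""
    a, b = np.polyfit(src.ravel(), ref.ravel(), 1)
    return a, b

def apply_brightness(band, a, b):
    """Apply the fitted first-order correction to a whole band."""
    return a * band + b
```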

  14. Single input state, single-mode fiber-based polarization sensitive optical frequency domain imaging by eigenpolarization referencing

    PubMed Central

    Lippok, Norman; Villiger, Martin; Jun, Chang-Su; Bouma, Brett E.

    2015-01-01

    Fiber-based polarization sensitive OFDI is more challenging than free-space implementations. Using multiple input states, fiber-based systems provide sample birefringence information with the benefit of a flexible sample arm, but come at the cost of increased system and acquisition complexity, and either reduce acquisition speed or require increased acquisition bandwidth. Here we show that with the calibration of a single polarization state, fiber-based configurations can approach the conceptual simplicity of traditional free-space configurations. We remotely control the polarization state of the light incident at the sample using the eigenpolarization states of a wave plate as a reference, and determine the Jones matrix of the output fiber. We demonstrate this method for polarization sensitive imaging of biological samples. PMID:25927775

  15. Development of a novel 2D color map for interactive segmentation of histological images.

    PubMed

    Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H; Wang, May D

    2012-05-01

    We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad hoc. Many algorithms, like K-means, use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our interactive 2D color map segmentation technique, based on human color perception and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. The results show good accuracy with low response and computational times, making the method feasible for user-interactive applications involving segmentation of histological images.

  16. Video-to-film color-image recorder.

    NASA Technical Reports Server (NTRS)

    Montuori, J. S.; Carnes, W. R.; Shim, I. H.

    1973-01-01

    A precision video-to-film recorder for use in image data processing systems, being developed for NASA, will convert three video input signals (red, blue, green) into a single full-color light beam for image recording on color film. Argon ion and krypton lasers are used to produce three spectral lines which are independently modulated by the appropriate video signals, combined into a single full-color light beam, and swept over the recording film in a raster format for image recording. A rotating multi-faceted spinner mounted on a translating carriage generates the raster, and an annotation head is used to record up to 512 alphanumeric characters in a designated area outside the image area.

  17. Local structure of subcellular input retinotopy in an identified visual interneuron

    NASA Astrophysics Data System (ADS)

    Zhu, Ying; Gabbiani, Fabrizio; Fabrizio Gabbiani's lab Team

    2015-03-01

    How does the spatial layout of the projections that a neuron receives impact its synaptic integration and computation? What is the topography of subcellular wiring at the single-neuron level? The LGMD (lobula giant movement detector) neuron in the locust is an identified neuron that responds preferentially to objects approaching on a collision course. It receives excitatory inputs from the entire visual hemifield through calcium-permeable nicotinic acetylcholine receptors. Previous work, employing in vivo widefield calcium imaging, showed that the projection from the locust compound eye to the LGMD preserves retinotopy down to the level of a single ommatidium (facet). Because widefield imaging relies on global excitation of the preparation and has a relatively low resolution, that work could not investigate this retinotopic mapping at the level of individual thin dendritic branches. Our current work employs a custom-built two-photon microscope with sub-micron resolution, in conjunction with a single-facet stimulation setup that delivers visual stimuli to a single ommatidium of the locust eye, to explore the local structure of this retinotopy at a finer level. We thank NIMH for funding this research.

  18. Relative optical navigation around small bodies via Extreme Learning Machine

    NASA Astrophysics Data System (ADS)

    Law, Andrew M.

    To perform close-proximity operations in a low-gravity environment, relative and absolute positions are vital information for the maneuver; navigation is therefore inseparable from space travel. Extreme Learning Machine (ELM) is presented as an optical navigation method around small celestial bodies. Optical navigation uses visual observation instruments such as a camera to acquire useful data and determine spacecraft position. The required input data for operation are merely a single image strip and a nadir image. ELM is a machine-learning single-hidden-layer feedforward network (SLFN), a type of neural network (NN). The algorithm is built on the premise that the input weights and biases can be randomly assigned and do not require back-propagation; the learned model is the set of output-layer weights, which are used to calculate a prediction. Together, Extreme Learning Machine Optical Navigation (ELM OpNav) utilizes optical images and the ELM algorithm to train the machine to navigate around a target body. In this thesis the asteroid Vesta is the designated celestial body. The trained ELMs estimate the position of the spacecraft during operation from a single data set. The results show the approach is promising and potentially suitable for on-board navigation.
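
    The ELM training step described, random input weights and biases with output weights solved in closed form rather than by back-propagation, can be sketched in NumPy (the hidden-layer size and tanh activation are illustrative choices, not those of the thesis):

```python
import numpy as np

def elm_train(X, Y, hidden=50, rng=None):
    """Extreme Learning Machine: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(rng)
    W = rng.normal(size=(X.shape[1], hidden))   # random input weights (fixed)
    b = rng.normal(size=hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                      # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                # output weights, no back-propagation
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass through the trained ELM."""
    return np.tanh(X @ W + b) @ beta
```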

  19. The STScI STIS Pipeline V: Cosmic Ray Rejection

    NASA Astrophysics Data System (ADS)

    Baum, Stefi; Hsu, J. C.; Hodge, Phil; Ferguson, Harry

    1996-07-01

    In this ISR we describe calstis-2, the calstis calibration module which combines CRSPLIT exposures to produce a single cosmic-ray-rejected image. Cosmic ray rejection in the STIS pipeline follows the same basic philosophy as the STSDAS task crrej: a series of separate CRSPLIT exposures are combined to produce a single summed image, where discrepant pixels (those differing by more than some number of sigma from the guess value) are discarded in forming the output image. The calstis pipeline is able to perform this cosmic ray rejection because the individually commanded exposures are associated together into a single dataset by TRANS and generic conversion. crrej will also exist as a task in STSDAS to allow users to re-run the cosmic ray rejection with altered input parameters.
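
    The rejection scheme can be sketched as follows: compare each CRSPLIT exposure to a guess image (here the pixelwise median), discard pixels deviating by more than a chosen number of sigma, and form a scaled sum from the survivors. This is a simplified single-pass illustration, not the actual calstis-2/crrej code, and the flat noise model is an assumption.

```python
import numpy as np

def crrej(stack, noise_sigma=1.0, nsigma=3.0):
    """Combine CRSPLIT exposures into a cosmic-ray-rejected summed image.
    Pixels deviating from the guess (median) image by more than
    nsigma * noise_sigma are discarded; survivors are averaged and the
    mean is scaled back up to a sum over all exposures."""
    stack = np.asarray(stack, dtype=float)
    guess = np.median(stack, axis=0)                  # guess image
    good = np.abs(stack - guess) <= nsigma * noise_sigma
    n = np.maximum(good.sum(axis=0), 1)               # survivors per pixel
    mean_good = (stack * good).sum(axis=0) / n
    return mean_good * stack.shape[0]                 # scaled summed image
```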

  20. Logarithmic r-θ mapping for hybrid optical neural network filter for multiple objects recognition within cluttered scenes

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.; Birch, Phil M.

    2009-04-01

    The window unit in the design of the complex logarithmic r-θ mapping for the hybrid optical neural network filter allows multiple objects of the same class to be detected within the input image. Additionally, the architecture of the neural network unit of the complex logarithmic r-θ mapping for the hybrid optical neural network filter is attractive for accommodating the recognition of multiple objects of different classes within the input image by modifying the output layer of the unit. We test the overall filter for the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The logarithmic r-θ mapping for the hybrid optical neural network filter is shown to exhibit, with a single pass over the input data, simultaneous in-plane rotation, out-of-plane rotation, scale, log r-θ map translation and shift invariance, and good clutter tolerance, recognizing correctly the different objects within the cluttered scenes. We also record additional information extracted from the cluttered scenes about the objects' relative position, scale, and in-plane rotation.
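
    A log r-θ map resamples the image on log-spaced radii and uniform angles about the centre, so that scaling and in-plane rotation of the input become translations of the map, which is what gives such filters their scale and rotation invariance. A minimal nearest-neighbour sketch (the bin counts are arbitrary choices, not the paper's):

```python
import numpy as np

def log_polar(img, r_bins=32, theta_bins=64):
    """Nearest-neighbour log r-theta resampling about the image centre.
    Rotation and scaling of the input become shifts in this map."""
    H, W = img.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    r_max = np.log(min(cy, cx))
    out = np.zeros((r_bins, theta_bins), dtype=img.dtype)
    for i in range(r_bins):
        r = np.exp(r_max * (i + 1) / r_bins)     # log-spaced radii
        for j in range(theta_bins):
            th = 2 * np.pi * j / theta_bins
            y = int(round(cy + r * np.sin(th)))
            x = int(round(cx + r * np.cos(th)))
            if 0 <= y < H and 0 <= x < W:
                out[i, j] = img[y, x]
    return out
```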

  1. Modified-hybrid optical neural network filter for multiple object recognition within cluttered scenes

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.

    2009-08-01

    Motivated by the non-linear interpolation and generalization abilities of the hybrid optical neural network filter between the reference and non-reference images of the true-class object, we designed the modified-hybrid optical neural network filter. We applied an optical mask to the input of the hybrid optical neural network filter. The mask was built with the constant weight connections of a randomly chosen image included in the training set. The resulting design of the modified-hybrid optical neural network filter is optimized to perform best in cluttered scenes of the true-class object. Due to the shift-invariance properties inherited from its correlator unit, the filter can accommodate multiple objects of the same class to be detected within an input cluttered image. Additionally, the architecture of the neural network unit of the general hybrid optical neural network filter allows the recognition of multiple objects of different classes within the input cluttered image by modifying the output layer of the unit. We test the modified-hybrid optical neural network filter for the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The filter is shown to exhibit, with a single pass over the input data, simultaneous out-of-plane rotation and shift invariance and good clutter tolerance. It is able to successfully detect and correctly classify true-class objects within background clutter for which there has been no previous training.

  2. Toward magnetic resonance-guided electroanatomical voltage mapping for catheter ablation of scar-related ventricular tachycardia: a comparison of registration methods.

    PubMed

    Tao, Qian; Milles, Julien; van Huls van Taxis, Carine; Lamb, Hildo J; Reiber, Johan H C; Zeppenfeld, Katja; van der Geest, Rob J

    2012-01-01

    Integration of preprocedural delayed enhanced magnetic resonance imaging (DE-MRI) with electroanatomical voltage mapping (EAVM) may provide additional high-resolution substrate information for catheter ablation of scar-related ventricular tachycardias (VT). Accurate and fast image integration of DE-MRI with EAVM is desirable for MR-guided ablation. Twenty-six VT patients with large transmural scar underwent catheter ablation and preprocedural DE-MRI. With different registration models and EAVM input, 3 image integration methods were evaluated and compared to the commercial registration module CartoMerge. The performance was evaluated both in terms of a distance measure that describes surface matching and a correlation measure that describes actual scar correspondence. Compared to CartoMerge, the method that uses the translation-and-rotation model and high-density EAVM input resulted in a registration error of 4.32 ± 0.69 mm versus 4.84 ± 1.07 mm (P < 0.05); the method that uses the translation model and high-density EAVM input resulted in a registration error of 4.60 ± 0.65 mm (P = NS); and the method that uses the translation model and a single anatomical landmark input resulted in a registration error of 6.58 ± 1.63 mm (P < 0.05). No significant difference in scar correlation was observed between the 3 methods and CartoMerge (P = NS). During VT ablation procedures, accurate integration of EAVM and DE-MRI can be achieved using a translation registration model and a single anatomical landmark. This model allows for image integration in minimal mapping time and is likely to reduce fluoroscopy time and increase procedure efficacy. © 2011 Wiley Periodicals, Inc.
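
    The translation-and-rotation registration model compared in the study can be fitted in closed form from corresponding points with the standard Kabsch/Procrustes algorithm; the sketch below is a generic illustration of that model, not the authors' implementation.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q
    (the translation-and-rotation registration model), via SVD."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - pc).T @ (Q - qc)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = qc - R @ pc
    return R, t
```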

  3. Single-exposure two-dimensional superresolution in digital holography using a vertical cavity surface-emitting laser source array.

    PubMed

    Granero, Luis; Zalevsky, Zeev; Micó, Vicente

    2011-04-01

    We present a new implementation capable of producing two-dimensional (2D) superresolution (SR) imaging in a single exposure by aperture synthesis in digital lensless Fourier holography when using angular multiplexing provided by a vertical cavity surface-emitting laser source array. The system performs the recording in a single CCD snapshot of a multiplexed hologram coming from the incoherent addition of multiple subholograms, where each contains information about a different 2D spatial frequency band of the object's spectrum. Thus, a set of nonoverlapping bandpass images of the input object can be recovered by Fourier transformation (FT) of the multiplexed hologram. The SR is obtained by coherent addition of the information contained in each bandpass image while generating an enlarged synthetic aperture. Experimental results demonstrate improvement in resolution and image quality.

  4. Exploiting core knowledge for visual object recognition.

    PubMed

    Schurgin, Mark W; Flombaum, Jonathan I

    2017-03-01

    Humans recognize thousands of objects, and with relative tolerance to variable retinal inputs. The acquisition of this ability is not fully understood, and it remains an area in which artificial systems have yet to surpass people. We sought to investigate the memory process that supports object recognition, specifically the association of inputs that co-occur over short periods of time. We tested the hypothesis that human perception exploits expectations about object kinematics to limit the scope of association to inputs that are likely to have the same token as a source. In several experiments we exposed participants to images of objects, and we then tested recognition sensitivity. Using motion, we manipulated whether successive encounters with an image took place through kinematics that implied the same or a different token as the source of those encounters. Images were injected with noise, or shown at varying orientations, and we included two manipulations of motion kinematics. Across all experiments, memory performance was better for images that had been previously encountered with kinematics that implied a single token. A model-based analysis similarly showed greater memory strength when images were shown via kinematics that implied a single token. These results suggest that constraints from physics, often characterized as 'Core Knowledge', are built into the mechanisms that support memory about objects. Such constraints are known to support perception and cognition broadly, even in young infants, but they have never been considered as a mechanism for recognition memory.

  5. Identification and control of a multizone crystal growth furnace

    NASA Technical Reports Server (NTRS)

    Batur, C.; Sharpless, R. B.; Duval, W. M. B.; Rosenthal, B. N.; Singh, N. B.

    1992-01-01

    This paper presents an intelligent adaptive control system for the control of a solid-liquid interface of a crystal while it is growing via directional solidification inside a multizone transparent furnace. The task of the process controller is to establish a user-specified axial temperature profile and to maintain a desirable interface shape. Both single-input-single-output and multi-input-multi-output adaptive pole placement algorithms have been used to control the temperature. Also described is an intelligent measurement system to assess the shape of the crystal while it is growing. A color video imaging system observes the crystal in real time and determines the position and the shape of the interface. This information is used to evaluate the crystal growth rate, and to analyze the effects of translational velocity and temperature profiles on the shape of the interface. Creation of this knowledge base is the first step to incorporate image processing into furnace control.

  6. A Compact 600 GHz Electronically Tunable Vector Measurement System for Submillimeter Wave Imaging

    NASA Technical Reports Server (NTRS)

    Dengler, Robert J.; Maiwald, Frank; Siegel, Peter H.

    2006-01-01

    A compact submillimeter wave transmission/reflection measurement system has been demonstrated at 560-635 GHz, with electronic tuning over the entire band. The maximum dynamic range measured at a single frequency is 90 dB (60 dB typical), and phase noise is less than ±2°. By using a frequency-steerable lens at the source output and mixer input, the frequency agility of the system can be used to scan the source and receive beams, resulting in near real-time imaging capability using only a single pixel.

  7. Mosaicing of single plane illumination microscopy images using groupwise registration and fast content-based image fusion

    NASA Astrophysics Data System (ADS)

    Preibisch, Stephan; Rohlfing, Torsten; Hasak, Michael P.; Tomancak, Pavel

    2008-03-01

    Single Plane Illumination Microscopy (SPIM; Huisken et al., Science 305(5686):1007-1009, 2004) is an emerging microscopic technique that enables live imaging of large biological specimens in their entirety. By imaging the living biological sample from multiple angles, SPIM has the potential to achieve isotropic resolution throughout even relatively large biological specimens. For every angle, however, only a relatively shallow section of the specimen is imaged with high resolution, whereas deeper regions appear increasingly blurred. In order to produce a single, uniformly high-resolution image, we propose here an image mosaicing algorithm that combines state-of-the-art groupwise image registration for alignment with content-based image fusion to prevent degradation of the fused image due to regional blurring of the input images. For the registration stage, we introduce an application-specific groupwise transformation model that incorporates per-image as well as groupwise transformation parameters. We also propose a new fusion algorithm based on Gaussian filters, which is substantially faster than fusion based on local image entropy. We demonstrate the performance of our mosaicing method on data acquired from living embryos of the fruit fly, Drosophila, using four- and eight-angle acquisitions.
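
The entropy-free fusion idea, weighting each registered view by Gaussian-smoothed local contrast, can be sketched as follows. This is a generic illustration of Gaussian-filter-based content weighting, not the authors' exact implementation:

```python
import numpy as np

def gauss_blur(img, sigma=2.0):
    """Separable Gaussian blur (numpy-only, 'same'-size output)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, out)

def fuse(views, sigma=2.0, eps=1e-8):
    """Weight each registered view by its smoothed local contrast
    (local variance), then take the weighted average per pixel."""
    weights = []
    for v in views:
        local_mean = gauss_blur(v, sigma)
        contrast = gauss_blur((v - local_mean) ** 2, sigma)  # local variance
        weights.append(contrast + eps)
    w = np.array(weights)
    return (w * np.array(views)).sum(axis=0) / w.sum(axis=0)
```

Sharp (high-contrast) regions dominate the weighted average, so blurred regions of any one view contribute little to the mosaic.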

  8. Focal ratio degradation in lightly fused hexabundles

    NASA Astrophysics Data System (ADS)

    Bryant, J. J.; Bland-Hawthorn, J.; Fogarty, L. M. R.; Lawrence, J. S.; Croom, S. M.

    2014-02-01

    We are now moving into an era where multi-object wide-field surveys, which traditionally use single fibres to observe many targets simultaneously, can exploit compact integral field units (IFUs) in place of single fibres. Current multi-object integral field instruments such as the Sydney-AAO Multi-object Integral field spectrograph have driven the development of new imaging fibre bundles (hexabundles) for multi-object spectrographs. We have characterized the performance of hexabundles with different cladding thicknesses and compared them to that of the same type of bare fibre, across the range of fill fractions and input f-ratios likely in an IFU instrument. Hexabundles with 7 cores and 61 cores were tested for focal ratio degradation (FRD), throughput and cross-talk when fed with inputs from F/3.4 to >F/8. The five 7-core bundles have cladding thicknesses ranging from 1 to 8 μm, and the 61-core bundles have 5 μm cladding. As expected, the FRD improves as the input focal ratio decreases. We find that the FRD and throughput of the cores in the hexabundles match the performance of single fibres of the same material at low input f-ratios. The performance results presented can be used to set a limit on the f-ratio of a system based on the maximum loss allowable for a planned instrument. Our results confirm that hexabundles are a successful alternative for fibre imaging devices for multi-object spectroscopy on wide-field telescopes and have prompted further development of hexabundle designs with hexagonal packing and square cores.

  9. PET and MRI image fusion based on combination of 2-D Hilbert transform and IHS method.

    PubMed

    Haddadpour, Mozhdeh; Daneshvar, Sabalan; Seyedarabi, Hadi

    2017-08-01

    Medical image fusion combines two or more medical images, such as a Magnetic Resonance Image (MRI) and a Positron Emission Tomography (PET) image, and maps them to a single fused image; the purpose of our study is to assist physicians in diagnosing and treating disease in the least possible time. We used MRI and PET scans as input images and fused them based on a combination of the two-dimensional Hilbert transform (2-D HT) and the Intensity Hue Saturation (IHS) method. To evaluate the performance of our method we used three common metrics: Discrepancy (D_k) for assessing spectral features, Average Gradient (AG_k) for evaluating spatial features, and Overall Performance (O.P) to verify the validity of the proposed method. Simulated and numerical results demonstrate the desired performance of the proposed method. Since the main purpose of medical image fusion is to preserve both the spatial and spectral features of the input images, the numerical results for AG_k, D_k, and O.P, together with the simulation results, indicate that our proposed method preserves both. Copyright © 2017 Chang Gung University. Published by Elsevier B.V. All rights reserved.
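
The two quality metrics can be illustrated with their commonly used definitions; the paper's exact formulas are not given in the abstract, so the forms below are assumptions: Average Gradient as the mean local gradient magnitude, and Discrepancy as the mean absolute difference between the fused image and a source image.

```python
import numpy as np

def average_gradient(img):
    """Average Gradient: mean magnitude of local intensity gradients.
    Higher values indicate richer spatial detail."""
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return np.mean(np.sqrt((gx**2 + gy**2) / 2.0))

def discrepancy(fused, source):
    """Discrepancy D_k: mean absolute difference between the fused image
    and source image k. Lower values indicate better spectral preservation."""
    return np.mean(np.abs(fused.astype(float) - source.astype(float)))
```

A fused image should score a high Average Gradient while keeping the Discrepancy to each source low.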

  10. Multi-modality image fusion based on enhanced fuzzy radial basis function neural networks.

    PubMed

    Chao, Zhen; Kim, Dohyeon; Kim, Hee-Joung

    2018-04-01

    In clinical applications, single-modality images do not provide sufficient diagnostic information. Therefore, it is necessary to combine the advantages or complementarities of different imaging modalities. Recently, neural network techniques have been applied to medical image fusion by many researchers, but there are still many deficiencies. In this study, we propose a novel fusion method to combine multi-modality medical images based on an enhanced fuzzy radial basis function neural network (Fuzzy-RBFNN), which includes five layers: input, fuzzy partition, front combination, inference, and output. Moreover, we propose a hybrid of the gravitational search algorithm (GSA) and the error back-propagation algorithm (EBPA) to train the network by updating its parameters. Two different patterns of images are used as inputs to the neural network, and the output is the fused image. A comparison with conventional fusion methods and another neural network method, through subjective observation and objective evaluation indexes, reveals that the proposed method effectively synthesizes the information of the input images and achieves better results. Meanwhile, we also trained the network using the EBPA and GSA individually. The results reveal that the hybrid EBPGSA not only outperformed both EBPA and GSA but also trained the neural network more accurately according to the same evaluation indexes. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  11. A fuzzy optimal threshold technique for medical images

    NASA Astrophysics Data System (ADS)

    Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.

    2012-01-01

    A new fuzzy-based thresholding method for medical images, especially cervical cytology images having blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm can do both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique was tested on various cervical cytology images having blob or mosaic structures, compared with various existing algorithms, and proved better than the existing algorithms.
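
The abstract does not define the Fuzzy Gaussian Index, so the sketch below illustrates the general scheme with an assumed index (Gaussian membership of each pixel to its class mean, averaged fuzziness), choosing the threshold that minimizes it:

```python
import numpy as np

def fuzzy_gaussian_index(img, t, sigma=None):
    """Assumed form of a fuzzy Gaussian index: Gaussian membership of each
    pixel to its class mean (foreground/background split at threshold t);
    the index is the mean fuzziness 1 - membership (lower = crisper split)."""
    fg, bg = img[img > t], img[img <= t]
    if fg.size == 0 or bg.size == 0:
        return np.inf
    if sigma is None:
        sigma = img.std() + 1e-8
    mu = np.where(img > t, fg.mean(), bg.mean())
    membership = np.exp(-((img - mu) ** 2) / (2 * sigma**2))
    return np.mean(1.0 - membership)

def optimal_threshold(img):
    """Scan all candidate thresholds and keep the one with minimum index."""
    ts = range(int(img.min()) + 1, int(img.max()))
    return min(ts, key=lambda t: fuzzy_gaussian_index(img, t))
```

On a bimodal intensity histogram the index is minimized when the threshold falls in the valley between the two modes.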

  12. Fiber optic combiner and duplicator

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The investigation of the possible development of two optical devices is described: one to take two images as inputs and present their arithmetic sum as a single output, the other to take one image as input and present two identical images as outputs. Significant engineering time was invested in establishing precision fiber optics drawing capabilities, real-time monitoring of the fiber size, and exact measuring of fiber optics ribbons. Various assembly procedures and tooling designs were investigated, and prototype models were built and evaluated that established technical assurance that the device was feasible and could be fabricated. Although the interleaver specification in its entirety was not achieved, the techniques developed in the course of the program improved the quality of images transmitted by fiber optic arrays by at least an order of magnitude. These techniques are already being applied to the manufacture of precise fiber optic components.

  13. Spotsizer: High-throughput quantitative analysis of microbial growth.

    PubMed

    Bischof, Leanne; Převorovský, Martin; Rallis, Charalampos; Jeffares, Daniel C; Arzhaeva, Yulia; Bähler, Jürg

    2016-10-01

    Microbial colony growth can serve as a useful readout in assays for studying complex genetic interactions or the effects of chemical compounds. Although computational tools for acquiring quantitative measurements of microbial colonies have been developed, their utility can be compromised by inflexible input image requirements, non-trivial installation procedures, or complicated operation. Here, we present the Spotsizer software tool for automated colony size measurements in images of robotically arrayed microbial colonies. Spotsizer features a convenient graphical user interface (GUI), has both single-image and batch-processing capabilities, and works with multiple input image formats and different colony grid types. We demonstrate how Spotsizer can be used for high-throughput quantitative analysis of fission yeast growth. The user-friendly Spotsizer tool provides rapid, accurate, and robust quantitative analyses of microbial growth in a high-throughput format. Spotsizer is freely available at https://data.csiro.au/dap/landingpage?pid=csiro:15330 under a proprietary CSIRO license.

  14. The possibility of identifying the spatial location of single dislocations by topo-tomography on laboratory setups

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zolotov, D. A., E-mail: zolotovden@crys.ras.ru; Buzmakov, A. V.; Elfimov, D. A.

    2017-01-15

    The spatial arrangement of single linear defects in a Si single crystal (input surface (111)) has been investigated by X-ray topo-tomography using laboratory X-ray sources. The experimental technique and the procedure for reconstructing a 3D image of dislocation half-loops near the Si crystal surface are described. The sizes of the observed linear defects are estimated with a spatial resolution of about 10 μm.

  15. Single-Side Two-Location Spotlight Imaging for Building Based on MIMO Through-Wall-Radar.

    PubMed

    Jia, Yong; Zhong, Xiaoling; Liu, Jiangang; Guo, Yong

    2016-09-07

    Through-wall-radar imaging is of interest for mapping the wall layout of buildings and for the detection of stationary targets within buildings. In this paper, we present an easy single-side two-location spotlight imaging method for both wall layout mapping and stationary target detection utilizing multiple-input multiple-output (MIMO) through-wall-radar. Rather than imaging the building walls directly, images of all building corners are generated to infer the wall layout indirectly, by successively deploying the MIMO through-wall-radar at two appropriate locations on only one side of the building and then carrying out spotlight imaging with two different squint-views. In addition to its ease of implementation, the single-side two-location squint-view detection has two other advantages for stationary target imaging: first, fewer multi-path ghosts, and second, a smaller region of side-lobe interference from the corner images in comparison to the wall images. Based on Computer Simulation Technology (CST) electromagnetic simulation software, we provide multiple sets of validation results in which binary panorama images with clear images of all corners and stationary targets are obtained by combining two single-location images using incoherent additive fusion and two-dimensional cell-averaging constant-false-alarm-rate (2D CA-CFAR) detection.

  16. Positron emission tomography/magnetic resonance hybrid scanner imaging of cerebral blood flow using 15O-water positron emission tomography and arterial spin labeling magnetic resonance imaging in newborn piglets

    PubMed Central

    Andersen, Julie B; Henning, William S; Lindberg, Ulrich; Ladefoged, Claes N; Højgaard, Liselotte; Greisen, Gorm; Law, Ian

    2015-01-01

    Abnormality in cerebral blood flow (CBF) distribution can lead to hypoxic-ischemic cerebral damage in newborn infants. The aim of the study was to investigate minimally invasive approaches to measuring CBF by comparing simultaneous 15O-water positron emission tomography (PET) and single-TI pulsed arterial spin labeling (ASL) magnetic resonance imaging (MR) on a hybrid PET/MR scanner in seven newborn piglets. Positron emission tomography was performed with IV injections of 20 MBq and 100 MBq 15O-water to confirm CBF reliability at low activity. Cerebral blood flow was quantified using a one-tissue-compartment model with two input functions: an arterial input function (AIF) or an image-derived input function (IDIF). The mean global CBF (95% CI) for PET-AIF, PET-IDIF, and ASL at baseline was 27 (23; 32), 34 (31; 37), and 27 (22; 32) mL/100 g per minute, respectively. Under acetazolamide stimulus, PET-AIF, PET-IDIF, and ASL were 64 (55; 74), 76 (70; 83) and 79 (67; 92) mL/100 g per minute, respectively. At baseline, the differences between PET-AIF and PET-IDIF, and between PET-AIF and ASL, were 22% (P<0.0001) and -0.7% (P=0.9), respectively. Under acetazolamide, the corresponding differences were 19% (P=0.001) and 24% (P=0.0003). In conclusion, PET-IDIF overestimated CBF. An injected activity of 20 MBq 15O-water had acceptable concordance with 100 MBq, without compromising image quality. Single-TI ASL was questionable for regional CBF measurements. Global ASL CBF and PET CBF were congruent during baseline but not during hyperperfusion. PMID:26058699
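
The one-tissue-compartment quantification can be illustrated with a minimal Kety-model simulation and fit; the input function, time grid, and partition coefficient below are hypothetical stand-ins, not the study's values:

```python
import numpy as np

def tissue_curve(F, aif, dt, lam=0.9):
    """One-tissue-compartment (Kety) model: dCt/dt = F*Ca - (F/lam)*Ct,
    integrated with simple Euler steps. F is flow in mL/g/min."""
    ct = np.zeros_like(aif)
    for i in range(1, len(aif)):
        ct[i] = ct[i - 1] + dt * (F * aif[i - 1] - (F / lam) * ct[i - 1])
    return ct

# Hypothetical gamma-variate-like arterial input function
dt = 1.0 / 60.0                        # 1-s steps, in minutes
t = np.arange(0, 3, dt)
aif = (t / 0.2) * np.exp(1 - t / 0.2)  # peaks at t = 0.2 min
ct_meas = tissue_curve(0.27, aif, dt)  # "measured" curve, CBF = 0.27 mL/g/min

# Grid-search fit of F against the measured tissue curve
grid = np.arange(0.05, 1.0, 0.005)
fits = [np.sum((tissue_curve(F, aif, dt) - ct_meas) ** 2) for F in grid]
F_hat = grid[int(np.argmin(fits))]
```

The fitted flow (0.27 mL/g/min, i.e. 27 mL/100 g per minute) matches the baseline range reported in the abstract.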

  17. Single image super-resolution based on approximated Heaviside functions and iterative refinement

    PubMed Central

    Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian

    2018-01-01

    One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as "smooth" and "non-smooth", describing these with approximated Heaviside functions (AHFs), and iterating with l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1-regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298
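
The residual-based iterative refinement between the low-resolution input and the downsampled estimate can be sketched as a generic iterative back-projection loop (not the authors' exact scheme):

```python
import numpy as np

def downsample(img, s=2):
    """Box-average downsampling by factor s."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(img, s=2):
    """Nearest-neighbour upsampling by factor s."""
    return np.kron(img, np.ones((s, s)))

def refine(sr, lr, iters=20, step=1.0, s=2):
    """Iterative refinement: push the residual between the LR input and the
    downsampled SR estimate back into the estimate until they agree."""
    for _ in range(iters):
        residual = lr - downsample(sr, s)
        sr = sr + step * upsample(residual, s)
    return sr
```

After refinement, downsampling the super-resolved estimate reproduces the low-resolution input, which is the consistency the residual loop enforces.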

  18. Multi-flux-transformer MRI detection with an atomic magnetometer.

    PubMed

    Savukov, Igor; Karaulanov, Todor

    2014-12-01

    Recently, anatomical ultra-low field (ULF) MRI has been demonstrated with an atomic magnetometer (AM). A flux-transformer (FT) has been used for decoupling MRI fields and gradients to avoid their negative effects on AM performance. The field of view (FOV) was limited because of the need to compromise between the size of the FT input coil and MRI sensitivity per voxel. Multi-channel acquisition is a well-known solution to increase FOV without significantly reducing sensitivity. In this paper, we demonstrate twofold FOV increase with the use of three FT input coils. We also show that it is possible to use a single atomic magnetometer and single acquisition channel to acquire three independent MRI signals by applying a frequency-encoding gradient along the direction of the detection array span. The approach can be generalized to more channels and can be critical for imaging applications of non-cryogenic ULF MRI where FOV needs to be large, including head, hand, spine, and whole-body imaging. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Multi-flux-transformer MRI detection with an atomic magnetometer

    PubMed Central

    Savukov, Igor; Karaulanov, Todor

    2014-01-01

    Recently, anatomical ultra-low field (ULF) MRI has been demonstrated with an atomic magnetometer (AM). A flux-transformer (FT) has been used for decoupling MRI fields and gradients to avoid their negative effects on AM performance. The field of view (FOV) was limited because of the need to compromise between the size of the FT input coil and MRI sensitivity per voxel. Multi-channel acquisition is a well-known solution to increase FOV without significantly reducing sensitivity. In this paper, we demonstrate two-fold FOV increase with the use of three FT input coils. We also show that it is possible to use a single atomic magnetometer and single acquisition channel to acquire three independent MRI signals by applying a frequency-encoding gradient along the direction of the detection array span. The approach can be generalized to more channels and can be critical for imaging applications of non-cryogenic ULF MRI where FOV needs to be large, including head, hand, spine, and whole-body imaging. PMID:25462946

  20. A method to synchronize signals from multiple patient monitoring devices through a single input channel for inclusion in list-mode acquisitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Connor, J. Michael; Pretorius, P. Hendrik; Johnson, Karen

    2013-12-15

    Purpose: This technical note documents a method that the authors developed for combining a signal to synchronize a patient-monitoring device with a second physiological signal for inclusion into list-mode acquisition. Our specific application requires synchronizing an external patient motion-tracking system with a medical imaging system by multiplexing the tracking input with the ECG input. The authors believe that their methodology can be adapted for use in a variety of medical imaging modalities including single photon emission computed tomography (SPECT) and positron emission tomography (PET). Methods: The authors insert a unique pulse sequence into a single physiological input channel. This sequence is then recorded in the list-mode acquisition along with the R-wave pulse used for ECG gating. The specific form of our pulse sequence allows for recognition of the time point being synchronized even when portions of the pulse sequence are lost due to collisions with R-wave pulses. This was achieved by altering our software used in binning the list-mode data to recognize even a portion of our pulse sequence. Limitations on heart rates at which our pulse sequence could be reliably detected were investigated by simulating the mixing of the two signals as a function of heart rate and the time point during the cardiac cycle at which our pulse sequence is mixed with the cardiac signal. Results: The authors have successfully achieved accurate temporal synchronization of our motion-tracking system with acquisition of SPECT projections used in 17 recent clinical research cases. In our simulation analysis the authors determined that synchronization to enable compensation for body and respiratory motion could be achieved for heart rates up to 125 beats-per-minute (bpm). Conclusions: Synchronization of list-mode acquisition with external patient monitoring devices such as those employed in motion-tracking can reliably be achieved using a simple method that can be implemented with minimal external hardware and software modification through a single input channel, while still recording cardiac gating signals.

  1. Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation

    PubMed Central

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-01-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage, and they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense-stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternately to the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers apply convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of commonly used segmentation methods on a set of manually segmented isointense-stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829
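
A minimal forward-pass sketch of the multi-modality idea (stacking T1, T2, and FA as input channels of a small CNN) is shown below; the filter counts and sizes are arbitrary and the weights are untrained, so this illustrates only the data flow from modalities to per-pixel tissue labels:

```python
import numpy as np

def conv2d(x, w):
    """'Valid' 2-D cross-correlation (a CNN 'conv' layer):
    x is (C, H, W), w is (K, C, 3, 3) -> output (K, H-2, W-2)."""
    C, H, W = x.shape
    K = w.shape[0]
    out = np.zeros((K, H - 2, W - 2))
    for k in range(K):
        for c in range(C):
            for i in range(3):
                for j in range(3):
                    out[k] += w[k, c, i, j] * x[c, i:i + H - 2, j:j + W - 2]
    return out

rng = np.random.default_rng(0)
t1, t2, fa = rng.random((3, 16, 16))          # toy T1, T2, FA patches
x = np.stack([t1, t2, fa])                    # multi-modality input (3, 16, 16)

w1 = rng.standard_normal((8, 3, 3, 3)) * 0.1  # 8 filters over the 3 modalities
h = np.maximum(conv2d(x, w1), 0.0)            # conv + ReLU -> (8, 14, 14)
w2 = rng.standard_normal((3, 8, 3, 3)) * 0.1  # map features to WM/GM/CSF scores
scores = conv2d(h, w2)                        # (3, 12, 12)
probs = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)  # softmax
segmentation = probs.argmax(axis=0)           # per-pixel tissue label
```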

  2. Phase-and-amplitude recovery from a single phase-contrast image using partially spatially coherent x-ray radiation

    NASA Astrophysics Data System (ADS)

    Beltran, Mario A.; Paganin, David M.; Pelliccia, Daniele

    2018-05-01

    A simple method of phase-and-amplitude extraction is derived that corrects for image blurring induced by partially spatially coherent incident illumination using only a single intensity image as input. The method is based on Fresnel diffraction theory for the case of high Fresnel number, merged with the space-frequency description formalism used to quantify partially coherent fields, and assumes the object under study is composed of a single material. A priori knowledge of the object's complex refractive index and information obtained by characterizing the spatial coherence of the source is required. The algorithm was applied to propagation-based phase-contrast data measured with a laboratory-based micro-focus x-ray source. The blurring due to the finite spatial extent of the source is embedded within the algorithm as a simple correction term to the so-called Paganin algorithm and is also numerically stable in the presence of noise.
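
The underlying single-material filter (the Paganin algorithm that the paper's correction term modifies) can be sketched as follows; the partial-coherence correction itself is omitted, so this is the fully coherent baseline only:

```python
import numpy as np

def paganin_thickness(I, I0, pixel, dist, delta, mu):
    """Single-material phase retrieval (Paganin-type low-pass filter):
    T = -(1/mu) * ln( IFFT[ FFT[I / I0] / (1 + dist*delta/mu * |k|^2) ] ).
    I: measured intensity, I0: flat field, pixel: pixel size,
    dist: propagation distance, delta: refractive index decrement,
    mu: linear attenuation coefficient (consistent units assumed)."""
    ky = 2 * np.pi * np.fft.fftfreq(I.shape[0], d=pixel)
    kx = 2 * np.pi * np.fft.fftfreq(I.shape[1], d=pixel)
    k2 = ky[:, None] ** 2 + kx[None, :] ** 2
    filtered = np.fft.fft2(I / I0) / (1.0 + dist * delta / mu * k2)
    return -np.log(np.fft.ifft2(filtered).real.clip(1e-12)) / mu
```

For a uniform slab the filter reduces to the Beer-Lambert law, so the recovered thickness equals -ln(I/I0)/mu.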

  3. Depth map generation using a single image sensor with phase masks.

    PubMed

    Jang, Jinbeum; Park, Sangwoo; Jo, Jieun; Paik, Joonki

    2016-06-13

    Conventional stereo matching systems generate a depth map using two or more digital imaging sensors, which makes them difficult to use in small camera systems because of their high cost and bulky size. In order to solve this problem, this paper presents a stereo matching system using a single image sensor with phase masks for phase-difference auto-focusing. A novel pattern of phase mask array is proposed to simultaneously acquire two pairs of stereo images. Furthermore, a noise-invariant depth map is generated from the raw-format sensor output. The proposed method consists of four steps to compute the depth map: (i) acquisition of stereo images using the proposed mask array, (ii) variational segmentation using merging criteria to simplify the input image, (iii) disparity map generation using hierarchical block matching for disparity measurement, and (iv) image matting to fill holes and generate the dense depth map. The proposed system can be used in small digital cameras without additional lenses or sensors.
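
Step (iii), disparity estimation by block matching, can be sketched in a simple non-hierarchical form (the paper's hierarchical variant and the phase-mask specifics are omitted):

```python
import numpy as np

def block_match(left, right, block=8, max_disp=8):
    """Brute-force block matching: for each block in the left image, find the
    horizontal shift in the right image minimising the sum of absolute
    differences (SAD). Returns one integer disparity per block."""
    H, W = left.shape
    disp = np.zeros((H // block, W // block), dtype=int)
    for bi in range(H // block):
        for bj in range(W // block):
            y, x = bi * block, bj * block
            ref = left[y:y + block, x:x + block]
            best, best_d = np.inf, 0
            for d in range(0, max_disp + 1):
                if x - d < 0:
                    break                       # candidate window off-image
                cand = right[y:y + block, x - d:x - d + block]
                cost = np.abs(ref - cand).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[bi, bj] = best_d
    return disp
```

With a textured scene shifted horizontally between the two views, the recovered disparity equals the shift wherever the search window fits.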

  4. Simulation of speckle patterns with pre-defined correlation distributions.

    PubMed

    Song, Lipei; Zhou, Zhen; Wang, Xueyan; Zhao, Xing; Elson, Daniel S

    2016-03-01

    We put forward a method to easily generate a single or a sequence of fully developed speckle patterns with pre-defined correlation distribution by utilizing the principle of coherent imaging. The few-to-one mapping between the input correlation matrix and the correlation distribution between simulated speckle patterns is realized and there is a simple square relationship between the values of these two correlation coefficient sets. This method is demonstrated both theoretically and experimentally. The square relationship enables easy conversion from any desired correlation distribution. Since the input correlation distribution can be defined by a digital matrix or a gray-scale image acquired experimentally, this method provides a convenient way to simulate real speckle-related experiments and to evaluate data processing techniques.
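
The square relationship can be checked numerically with a toy model: two correlated circular-Gaussian speckle fields with field correlation rho yield intensity patterns whose correlation is close to rho squared. The sketch below uses point-wise random fields rather than a full coherent-imaging simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Two correlated circular-Gaussian speckle fields with field correlation rho
rho = 0.8
e1 = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
e_ind = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
e2 = rho * e1 + np.sqrt(1 - rho**2) * e_ind

i1, i2 = np.abs(e1)**2, np.abs(e2)**2     # fully developed speckle intensities
c_intensity = np.corrcoef(i1, i2)[0, 1]   # ~ rho**2: the 'square relationship'
```

For fully developed speckle, the intensity correlation equals the squared magnitude of the field correlation, which is the relationship the paper exploits to map input correlations to output speckle correlations.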

  5. Simulation of speckle patterns with pre-defined correlation distributions

    PubMed Central

    Song, Lipei; Zhou, Zhen; Wang, Xueyan; Zhao, Xing; Elson, Daniel S.

    2016-01-01

    We put forward a method to easily generate a single or a sequence of fully developed speckle patterns with pre-defined correlation distribution by utilizing the principle of coherent imaging. The few-to-one mapping between the input correlation matrix and the correlation distribution between simulated speckle patterns is realized and there is a simple square relationship between the values of these two correlation coefficient sets. This method is demonstrated both theoretically and experimentally. The square relationship enables easy conversion from any desired correlation distribution. Since the input correlation distribution can be defined by a digital matrix or a gray-scale image acquired experimentally, this method provides a convenient way to simulate real speckle-related experiments and to evaluate data processing techniques. PMID:27231589

  6. Fiber-optic polarization diversity detection for rotary probe optical coherence tomography.

    PubMed

    Lee, Anthony M D; Pahlevaninezhad, Hamid; Yang, Victor X D; Lam, Stephen; MacAulay, Calum; Lane, Pierre

    2014-06-15

    We report a polarization diversity detection scheme for optical coherence tomography with a new, custom, miniaturized fiber coupler with single mode (SM) fiber inputs and polarization maintaining (PM) fiber outputs. The SM fiber inputs obviate matching the optical lengths of the X and Y OCT polarization channels prior to interference and the PM fiber outputs ensure defined X and Y axes after interference. Advantages for this scheme include easier alignment, lower cost, and easier miniaturization compared to designs with free-space bulk optical components. We demonstrate the utility of the detection system to mitigate the effects of rapidly changing polarization states when imaging with rotating fiber optic probes in Intralipid suspension and during in vivo imaging of human airways.

  7. Motion video compression system with neural network having winner-take-all function

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi (Inventor); Sheu, Bing J. (Inventor)

    1997-01-01

    A motion video data system includes a compression system comprising: an image compressor; an image decompressor correlative to the image compressor, having an input connected to an output of the image compressor; a feedback summing node having one input connected to an output of the image decompressor; a picture memory having an input connected to an output of the feedback summing node; and apparatus for comparing an image stored in the picture memory with a received input image, deducing therefrom the pixels that differ between the stored image and the received image, retrieving from the picture memory a partial image including only those pixels, and applying the partial image to another input of the feedback summing node, whereby an updated decompressed image is produced at the output of the feedback summing node. A subtraction node has one input connected to receive the received image and another input connected to receive the partial image so as to generate a difference image; the image compressor has an input connected to receive the difference image, whereby a compressed difference image is produced at the output of the image compressor.

  8. Progressive multi-atlas label fusion by dictionary evolution.

    PubMed

    Song, Yantao; Wu, Guorong; Bahrami, Khosro; Sun, Quansen; Shen, Dinggang

    2017-02-01

    Accurate segmentation of anatomical structures in medical images is important in recent imaging-based studies. In the past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between the patch appearance in the image domain and the patch structure in the label domain, the estimated (patch) representation coefficients from the image domain may not be optimal for the final label fusion, thus reducing the labeling accuracy. To address this issue, we propose a novel label fusion framework that seeks suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance to steer the transition of (patch) representation coefficients from the image domain to the label domain. Our proposed multi-layer label fusion framework is flexible enough to be applied to existing labeling methods to improve their label fusion performance, i.e., by extending their single-layer static dictionary to a multi-layer dynamic dictionary. The experimental results show that our proposed progressive label fusion method achieves more accurate hippocampal segmentation results on the ADNI dataset, compared to the counterpart methods using only the single-layer static dictionary. Copyright © 2016 Elsevier B.V. All rights reserved.
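
For context, the single-layer static-dictionary baseline that the paper extends can be sketched as a similarity-weighted patch label vote (a generic form, not the paper's exact formulation):

```python
import numpy as np

def patch_label_fusion(target_patch, atlas_patches, atlas_labels, sigma=0.5):
    """Single-layer patch-based label fusion: weight each atlas patch by its
    appearance similarity to the target patch (Gaussian of squared distance),
    then take a weighted vote over the atlas labels."""
    d2 = np.array([np.sum((target_patch - p) ** 2) for p in atlas_patches])
    w = np.exp(-d2 / (2 * sigma**2))
    w /= w.sum()
    labels = np.unique(atlas_labels)
    votes = {lab: w[atlas_labels == lab].sum() for lab in labels}
    return max(votes, key=votes.get)
```

The paper's point is that these weights are estimated purely in the image (appearance) domain; its multi-layer dynamic dictionary progressively steers them toward the label domain.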

  9. Fuzzy logic particle tracking velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    1993-01-01

    Fuzzy logic has proven to be a simple and robust method for process control. Instead of requiring a complex model of the system, a user-defined rule base is used to control the process. In this paper the principles of fuzzy logic control are applied to Particle Tracking Velocimetry (PTV). Two frames of digitally recorded, single-exposure particle imagery are used as input. The fuzzy processor uses the local particle displacement information to determine the correct particle tracks. Fuzzy PTV is an improvement over traditional PTV techniques, which typically require a sequence (greater than two) of image frames to accurately track particles. The fuzzy processor executes in software on a PC without the use of specialized array or fuzzy logic processors. A pair of sample input images with roughly 300 particle images each results in more than 200 velocity vectors in under 8 seconds of processing time.
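
A crisp two-frame matching sketch conveys the setting; the fuzzy rule base itself is replaced here by a simple nearest-neighbour rule with a displacement bound:

```python
import numpy as np

def track(frame1, frame2, max_disp=5.0):
    """Two-frame particle matching: pair each particle in frame 1 with its
    nearest neighbour in frame 2, accepting the match only if the implied
    displacement is below max_disp (crisp stand-in for fuzzy rules)."""
    vectors = []
    for p in frame1:
        d = np.linalg.norm(frame2 - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            vectors.append((p, frame2[j] - p))
    return vectors

rng = np.random.default_rng(2)
gx, gy = np.meshgrid(np.arange(10) * 10.0, np.arange(5) * 10.0)
pts = np.stack([gx.ravel(), gy.ravel()], axis=1) + rng.random((50, 2))
flow = np.array([2.0, 1.0])            # uniform displacement between exposures
vecs = track(pts, pts + flow)          # recovered velocity vectors
```

The fuzzy processor replaces the hard nearest-neighbour decision with graded membership in displacement-consistency rules, which is what lets it resolve ambiguous matches from only two frames.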

  10. Non Contacting Evaluation of Strains and Cracking Using Optical and Infrared Imaging Techniques

    DTIC Science & Technology

    1988-08-22

    Compatible Zenith Z-386 microcomputer with plotter II. 3-D Motion Measuring System 1. Complete OPTOTRAK three dimensional digitizing system. System includes...acquisition unit - 16 single-ended analog input channels 3. Data Analysis Package software (KINEPLOT) 4. Extra OPTOTRAK Camera (max 224 per system

  11. Multi-Layered Feedforward Neural Networks for Image Segmentation

    DTIC Science & Technology

    1991-12-01

    the Gram-Schmidt Network ...................... 80 xi Preface WILLIAM SHAKESPEARE 1564-1616 Is this a dagger which I see before me, The handle toward...any input-output mapping with a single hidden layer of non-linear nodes, the result may be like proving that a monkey could write Hamlet . Certainly it

  12. Single image super-resolution reconstruction algorithm based on edge selection

    NASA Astrophysics Data System (ADS)

    Zhang, Yaolan; Liu, Yijun

    2017-05-01

    Super-resolution (SR) has become increasingly important because it can generate high-quality high-resolution (HR) images from low-resolution (LR) input images. At present, much work concentrates on developing sophisticated image priors to improve image quality, while paying much less attention to estimating and incorporating the blur model, which also affects the reconstruction results. We present a new reconstruction method based on edge selection. This method takes full account of the factors that affect blur kernel estimation and accurately estimates the blur process. Compared with the state-of-the-art methods, our method has comparable performance.

  13. Information retrieval based on single-pixel optical imaging with quick-response code

    NASA Astrophysics Data System (ADS)

    Xiao, Yin; Chen, Wen

    2018-04-01

    The quick-response (QR) code technique is combined with ghost imaging (GI) to recover original information with high quality. An image is first transformed into a QR code. Then the QR code is treated as an input image in the input plane of a ghost imaging setup. After measurements, the traditional correlation algorithm of ghost imaging is used to reconstruct a low-quality image (in QR code form). With this low-quality image as an initial guess, a Gerchberg-Saxton-like algorithm is used to improve its contrast, which is a postprocessing step. Taking advantage of the high error-correction capability of QR codes, the original information can be recovered with high quality. Compared to the previous method, ours obtains a high-quality image with comparatively fewer measurements, which means that the time-consuming postprocessing procedure can be avoided to some extent. In addition, for conventional ghost imaging, the larger the image size is, the more measurements are needed. However, for our method, images of different sizes can be converted into QR codes of the same small size by using a QR generator. Hence, for larger images, the time required to recover the original information with high quality will be dramatically reduced. Our method also makes it easy to recover a color image in a ghost imaging setup, because it is not necessary to split the color image into three channels and recover them separately.
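
The traditional intensity-correlation reconstruction mentioned above can be sketched as follows, with a toy binary object and random patterns standing in for the QR code and the speckle fields (the Gerchberg-Saxton-like refinement is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
n, M = 16, 4000                        # image side, number of measurements
obj = np.zeros((n, n))
obj[4:12, 6:10] = 1.0                  # stand-in for a QR module pattern

patterns = rng.random((M, n, n))       # random illumination patterns
bucket = (patterns * obj).sum(axis=(1, 2))   # single-pixel "bucket" values

# Traditional correlation reconstruction, per pixel over the ensemble:
# G = <B * S> - <B><S>
g = (bucket[:, None, None] * patterns).mean(0) - bucket.mean() * patterns.mean(0)
corr = np.corrcoef(g.ravel(), obj.ravel())[0, 1]
print(corr > 0.5)   # -> True: the low-quality estimate already resembles obj
```

The estimate is noisy for small M, which is exactly why the abstract leans on the QR code's error correction rather than on more measurements.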

  14. A Simulation Model Of A Picture Archival And Communication System

    NASA Astrophysics Data System (ADS)

    D'Silva, Vijay; Perros, Harry; Stockbridge, Chris

    1988-06-01

    A PACS architecture was simulated to quantify its performance. The model consisted of reading stations, acquisition nodes, communication links, a database management system, and a storage system consisting of magnetic and optical disks. Two levels of storage were simulated: a high-speed magnetic disk system for short-term storage, and optical disk jukeboxes for long-term storage. The communications link was a single bus via which image data were requested and delivered. Real input data to the simulation model were obtained from surveys of radiology procedures (Bowman Gray School of Medicine). From these the following inputs were calculated: the size of short-term storage necessary, the amount of long-term storage required, the frequency of access of each store, and the distribution of the number of films requested per diagnosis. The performance measures obtained were the mean retrieval time for an image, mean queue lengths, and the utilization of each device. Parametric analysis was done for the bus speed, the packet size for the communications link, the record size on the magnetic disk, the compression ratio, the influx of new images, DBMS time, and diagnosis think times. Plots give the values of input speed and device performance sufficient to achieve subsecond image retrieval times.
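
As a back-of-the-envelope companion to such a simulation, a single shared image bus can be approximated as an M/M/1 queue; the image size and bus speed below are hypothetical figures, not the survey's:

```python
def mm1_mean_response(arrival_rate, service_rate):
    """Mean time in system for an M/M/1 queue (requests/s), a crude
    stand-in for a single shared image bus; requires utilization < 1."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("bus saturated: utilization >= 1")
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical: a 4 MB image over a 100 Mb/s bus takes ~0.32 s,
# i.e. mu ~ 3.125 images/s; requests arrive at 2 images/s.
print(round(mm1_mean_response(2.0, 3.125), 3))   # -> 0.889
```

An analytic estimate like this only bounds the answer; the discrete-event simulation in the record captures the queueing between bus, disks, and DBMS that the formula ignores.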

  15. Multifunction Imaging and Spectroscopic Instrument

    NASA Technical Reports Server (NTRS)

    Mouroulis, Pantazis

    2004-01-01

    A proposed optoelectronic instrument would perform several different spectroscopic and imaging functions that, heretofore, have been performed by separate instruments. The functions would be reflectance, fluorescence, and Raman spectroscopies; variable-color confocal imaging at two different resolutions; and wide-field color imaging. The instrument was conceived for use in examination of minerals on remote planets. It could also be used on Earth to characterize material specimens. The conceptual design of the instrument emphasizes compactness and economy, to be achieved largely through sharing of components among subsystems that perform different imaging and spectrometric functions. The input optics for the various functions would be mounted in a single optical head. With the exception of a targeting lens, the input optics would all be aimed at the same spot on a specimen, thereby both (1) eliminating the need to reposition the specimen to perform different imaging and/or spectroscopic observations and (2) ensuring that data from such observations can be correlated with respect to known positions on the specimen. The figure schematically depicts the principal components and subsystems of the instrument. The targeting lens would collect light into a multimode optical fiber, which would guide the light through a fiber-selection switch to a reflection/fluorescence spectrometer. The switch would have four positions, enabling selection of spectrometer input from the targeting lens, from either of one or two multimode optical fibers coming from a reflectance/fluorescence-microspectrometer optical head, or from a dark calibration position (no fiber). The switch would be the only moving part within the instrument.

  16. Functional transformations of odor inputs in the mouse olfactory bulb.

    PubMed

    Adam, Yoav; Livneh, Yoav; Miyamichi, Kazunari; Groysman, Maya; Luo, Liqun; Mizrahi, Adi

    2014-01-01

    Sensory inputs from the nasal epithelium to the olfactory bulb (OB) are organized as a discrete map in the glomerular layer (GL). This map is then modulated by distinct types of local neurons and transmitted to higher brain areas via mitral and tufted cells. Little is known about the functional organization of the circuits downstream of glomeruli. We used in vivo two-photon calcium imaging for large-scale functional mapping of distinct neuronal populations in the mouse OB, at single cell resolution. Specifically, we imaged odor responses of mitral cells (MCs), tufted cells (TCs) and glomerular interneurons (GL-INs). MC population activity was heterogeneous and only mildly correlated with the olfactory receptor neuron (ORN) inputs, supporting the view that discrete input maps undergo significant transformations at the output level of the OB. In contrast, population activity profiles of TCs were dense, and highly correlated with the odor inputs in both space and time. Glomerular interneurons were also highly correlated with the ORN inputs, but showed higher activation thresholds, suggesting that these neurons are driven by strongly activated glomeruli. Temporally, upon persistent odor exposure, TCs quickly adapted. In contrast, both MCs and GL-INs showed diverse temporal response patterns, suggesting that GL-INs could contribute to the transformations MCs undergo at slow time scales. Our data suggest that sensory odor maps are transformed by TCs and MCs in different ways, forming two distinct and parallel information streams.

  17. Comparison of gesture and conventional interaction techniques for interventional neuroradiology.

    PubMed

    Hettig, Julian; Saalfeld, Patrick; Luz, Maria; Becker, Mathias; Skalej, Martin; Hansen, Christian

    2017-09-01

    Interaction with radiological image data and volume renderings within a sterile environment is a challenging task. Clinically established methods such as joystick control and task delegation can be time-consuming and error-prone and interrupt the workflow. New touchless input modalities may have the potential to overcome these limitations, but their value compared to established methods is unclear. We present a comparative evaluation to analyze the value of two gesture input modalities (Myo Gesture Control Armband and Leap Motion Controller) versus two clinically established methods (task delegation and joystick control). A user study was conducted with ten experienced radiologists by simulating a diagnostic neuroradiological vascular treatment with two frequently used interaction tasks in an experimental operating room. The input modalities were assessed using task completion time, perceived task difficulty, and subjective workload. Overall, the clinically established method of task delegation performed best under the study conditions. In general, gesture control failed to exceed the clinical input approach. However, the Myo Gesture Control Armband showed potential for the simple image selection task. Novel input modalities have the potential to take over single tasks more efficiently than clinically established methods. The results of our user study show the relevance of task characteristics such as task complexity on performance with specific input modalities. Accordingly, future work should consider task characteristics to provide a useful gesture interface for a specific use case instead of an all-in-one solution.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Chao

    Sparx, a new environment for Cryo-EM image processing; Cryo-EM, Single particle reconstruction, principal component analysis; Hardware Req.: PC, MAC, Supercomputer, Mainframe, Multiplatform, Workstation. Software Req.: operating system is Unix; Compiler C++; type of files: source code, object library, executable modules, compilation instructions; sample problem input data. Location/transmission: http://sparx-em.org; User manual & paper: http://sparx-em.org;

  19. Minimal Power Latch for Single-Slope ADCs

    NASA Technical Reports Server (NTRS)

    Hancock, Bruce R.

    2013-01-01

    Column-parallel analog-to-digital converters (ADCs) for imagers involve simultaneous operation of many ADCs. Single-slope ADCs are well adapted to this use because of their simplicity. Each ADC contains a comparator, comparing its input signal level to an increasing reference signal (ramp). When the ramp is equal to the input, the comparator triggers a latch that captures an encoded counter value (code). Knowing the captured code, the ramp value and hence the input signal are determined. In a column-parallel ADC, each column contains only the comparator and the latches; the ramp and code generation are shared. In conventional latch or flip-flop circuits, there is an input stage that tracks the input signal, and this stage consumes switching current every time the input changes. With many columns, many bits, and high code rates, this switching current can be substantial. It will also generate noise that may corrupt the analog signals. A latch was designed that does not track the input, and consumes power only at the instant of latching the data value. The circuit consists of two S-R (set-reset) latches, gated by the comparator. One is set by high data values and the other by low data values. The latches are cross-coupled so that the first one to set blocks the other. In order that the input data not need an inversion, which would consume power, the two latches are made in complementary polarity. This requires complementary gates from the comparator, instead of complementary data values, but the comparator only triggers once per conversion, and usually has complementary outputs to begin with. An efficient CMOS (complementary metal oxide semiconductor) implementation of this circuit is shown in the figure, where C is the comparator output, D is the data (code), and Q0 and Q1 are the outputs indicating the capture of a zero or one value. 
The latch for Q0 has a negative-true set signal and output, and is implemented using OR-AND-INVERT logic, while the latch for Q1 uses positive-true signals and is implemented using AND-OR-INVERT logic. In this implementation, both latches are cleared when the comparator is reset. Two redundant transistors are removed from the reset side of each latch, making for a compact layout. CMOS imagers with column-parallel ADCs have demonstrated high performance for remote sensing applications. With this latch circuit, the power consumption and noise can be further reduced. This innovation can be used in CMOS imagers and very-low-power electronics.
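
The capture behavior of the cross-coupled pair can be sketched at the behavioral level (a simulation of the logic only, not the CMOS implementation):

```python
class MinimalPowerLatch:
    """Behavioral sketch of the cross-coupled S-R pair: on a comparator
    firing, the latch set by the current data value blocks the other;
    until then nothing toggles (no power-hungry input-tracking stage)."""

    def __init__(self):
        self.q0 = self.q1 = 0          # both cleared while comparator is reset

    def comparator_fire(self, d):
        if self.q0 or self.q1:         # first latch to set blocks the other
            return
        if d:
            self.q1 = 1                # captured a one
        else:
            self.q0 = 1                # captured a zero

lat = MinimalPowerLatch()
lat.comparator_fire(1)     # ramp crosses the input while code bit = 1
lat.comparator_fire(0)     # later values are ignored: already captured
print(lat.q0, lat.q1)      # -> 0 1
```

The point of the circuit is exactly what the guard clause models: after the first capture, further code transitions cause no switching, so no dynamic power is drawn.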

  20. Image enhancement by non-linear extrapolation in frequency space

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Greenspan, Hayit K. (Inventor)

    1998-01-01

    An input image is enhanced to include spatial frequency components having frequencies higher than those in the input image. To this end, an edge map is generated from the input image using a high-band-pass filtering technique. An enhancing map is subsequently generated from the edge map, with the enhancing map having spatial frequencies exceeding the initial maximum spatial frequency of the input image. The enhancing map is generated by applying a non-linear operator to the edge map in a manner which preserves the phase transitions of the edges of the input image. The enhancing map is added to the input image to achieve a resulting image having spatial frequencies greater than those in the input image. Simplicity of computations and ease of implementation allow for image sharpening after enlargement and for real-time applications such as videophones, advanced definition television, zooming, and restoration of old motion pictures.
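
A rough sketch of the pipeline, with a 3x3 Laplacian as the high-band-pass filter and clipping as the sign-preserving non-linear operator (both stand-ins for the patent's exact choices):

```python
import numpy as np

def enhance(img, alpha=0.6, clip=0.1):
    # High-band-pass edge map via a 3x3 Laplacian (edge-padded convolution).
    p = np.pad(img, 1, mode="edge")
    edge = 4 * img - p[:-2, 1:-1] - p[2:, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:]
    # Sign-preserving non-linearity: clipping keeps the phase of each edge
    # transition while limiting its amplitude.
    edge = np.clip(edge, -clip, clip)
    return img + alpha * edge   # add the enhancing map back to the input

img = np.zeros((8, 8))
img[:, 4:] = 1.0                      # a vertical step edge
out = enhance(img)
print(round(float(out[0, 4]), 2))     # -> 1.06 (overshoot sharpens the step)
```

The clipped edge map overshoots on the bright side of the step and undershoots on the dark side, which is how frequencies above the input's original band appear in the result.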

  1. Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.

    PubMed

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D

    2015-05-08

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
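
Assuming the mode-aligned IMFs have already been computed by MEMD, the pixel-level fusion step can be sketched with a max-absolute-value selection rule (one common choice for multi-scale fusion, not necessarily the paper's exact rule):

```python
import numpy as np

def fuse_from_imfs(imfs_a, imfs_b):
    # At each scale keep the coefficient with the larger absolute value,
    # then sum the fused scales back into a single image.
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(imfs_a, imfs_b)]
    return np.sum(fused, axis=0)

# Two toy 1x2 "images", each decomposed into two aligned scales.
imfs_a = [np.array([[1.0, -3.0]]), np.array([[0.5, 0.0]])]
imfs_b = [np.array([[-2.0, 1.0]]), np.array([[0.0, 1.0]])]
fused = fuse_from_imfs(imfs_a, imfs_b)
print(fused)   # -> [[-1.5 -2. ]]
```

The step MEMD contributes is precisely the alignment this sketch takes for granted: without it, same-indexed IMFs from different images may carry different frequency content, and the per-scale comparison above is meaningless.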

  2. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    PubMed Central

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.

    2015-01-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714

  3. A Neural-Dynamic Architecture for Concurrent Estimation of Object Pose and Identity

    PubMed Central

    Lomp, Oliver; Faubel, Christian; Schöner, Gregor

    2017-01-01

    Handling objects or interacting with a human user about objects on a shared tabletop requires that objects be identified after learning from a small number of views and that object pose be estimated. We present a neurally inspired architecture that learns object instances by storing features extracted from a single view of each object. Input features are color and edge histograms from a localized area that is updated during processing. The system finds the best-matching view for the object in a novel input image while concurrently estimating the object’s pose, aligning the learned view with current input. The system is based on neural dynamics, computationally operating in real time, and can handle dynamic scenes directly off live video input. In a scenario with 30 everyday objects, the system achieves recognition rates of 87.2% from a single training view for each object, while also estimating pose quite precisely. We further demonstrate that the system can track moving objects, and that it can segment the visual array, selecting and recognizing one object while suppressing input from another known object in the immediate vicinity. Evaluation on the COIL-100 dataset, in which objects are depicted from different viewing angles, revealed recognition rates of 91.1% on the first 30 objects, each learned from four training views. PMID:28503145
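
The feature-matching core (not the neural dynamics) can be sketched with histogram intersection over stored views; the histograms below are made up for illustration:

```python
import numpy as np

def best_view(query_hist, stored_hists):
    # Histogram intersection: sum of element-wise minima; larger = better match.
    scores = [np.minimum(query_hist, h).sum() for h in stored_hists]
    return int(np.argmax(scores))

views = [np.array([0.7, 0.2, 0.1]),   # learned view of object 0
         np.array([0.1, 0.8, 0.1]),   # learned view of object 1
         np.array([0.3, 0.3, 0.4])]   # learned view of object 2
query = np.array([0.15, 0.7, 0.15])   # current (slightly changed) input
print(best_view(query, views))        # -> 1
```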

  4. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task

    PubMed Central

    Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel

    2016-01-01

    When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
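
The key normalization step can be sketched as divisive inhibition over a pooled response; the feature maps and gains below are toy values chosen to show a salient distractor being suppressed:

```python
import numpy as np

def priority_map(feature_maps, target_gains, sigma=1e-3):
    """Minimal sketch of the model's priority map.
    feature_maps: (F, H, W) bottom-up responses per shape feature
    target_gains: (F,) top-down, target-specific gain per feature"""
    modulated = (target_gains[:, None, None] * feature_maps).sum(0)
    pooled = feature_maps.sum(0) + sigma     # local inhibitory pool
    return modulated / pooled                # divisive normalization

fmap = np.array([[[1.0, 0.0, 3.0]],    # responses of the target-like feature
                 [[0.0, 0.0, 9.0]]])   # another feature: a salient distractor
pm = priority_map(fmap, np.array([1.0, 0.0]))
# Without normalization, the raw target-feature map peaks on the distractor
# location (3 > 1); dividing by the pooled response restores the true target.
print(int(pm.argmax()))   # -> 0
```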

  5. Gradient-based multiresolution image fusion.

    PubMed

    Petrović, Vladimir S; Xydeas, Costas S

    2004-02-01

    A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
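
One level of the gradient-domain fusion can be sketched with a simple per-pixel maximum-magnitude selection rule (the paper's actual combination and QMF-based reconstruction are more involved):

```python
import numpy as np

def fuse_gradients(imgs):
    # Per pixel, keep the (gx, gy) vector of whichever input image has the
    # largest gradient magnitude there (a simple selection rule).
    gx = np.stack([np.gradient(im, axis=1) for im in imgs])
    gy = np.stack([np.gradient(im, axis=0) for im in imgs])
    pick = np.hypot(gx, gy).argmax(0)           # winning input per pixel
    rows, cols = np.indices(imgs[0].shape)
    return gx[pick, rows, cols], gy[pick, rows, cols]

a = np.zeros((4, 4))                   # featureless input
b = np.zeros((4, 4)); b[:, 2:] = 1.0   # input with a vertical edge
fx, fy = fuse_gradients([a, b])
print(np.allclose(fx, np.gradient(b, axis=1)))   # -> True: b's edge survives
```

Selecting in the gradient domain, rather than among raw pixel values, is what lets this family of methods keep contrast information that pixel-domain averaging would wash out.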

  6. Collaborative Research and Development (CR&D) III Task Order 0090: Image Processing Framework: From Acquisition and Analysis to Archival Storage

    DTIC Science & Technology

    2013-05-01

    contract or a PhD dissertation typically are a "proof-of-concept" code base that can only read a single set of inputs and are not designed ...AFRL-RX-WP-TR-2013-0210 COLLABORATIVE RESEARCH AND DEVELOPMENT (CR&D) III Task Order 0090: Image Processing Framework: From...public release; distribution unlimited. See additional restrictions described on inside pages. STINFO COPY AIR FORCE RESEARCH LABORATORY

  7. Cortical Merging in S1 as a Substrate for Tactile Input Grouping

    PubMed Central

    Zennou-Azogui, Yoh’I; Xerri, Christian

    2018-01-01

    Abstract Perception is a reconstruction process guided by rules based on knowledge about the world. Little is known about the neural implementation of the rules of object formation in the tactile sensory system. When two close tactile stimuli are delivered simultaneously on the skin, subjects feel a unique sensation, spatially centered between the two stimuli. Voltage-sensitive dye imaging (VSDi) and electrophysiological recordings [local field potentials (LFPs) and single units] were used to extract the cortical representation of two-point tactile stimuli in the primary somatosensory cortex of anesthetized Long-Evans rats. Although layer 4 LFP responses to brief costimulation of the distal region of two digits resembled the sum of individual responses, approximately one-third of single units demonstrated merging-compatible changes. In contrast to previous intrinsic optical imaging studies, VSD activations reflecting layer 2/3 activity were centered between the representations of the digits stimulated alone. This merging was found for every tested distance between the stimulated digits. We discuss this laminar difference as evidence that merging occurs through a buildup stream and depends on the superposition of inputs, which increases with successive stages of sensory processing. These findings show that layers 2/3 are involved in the grouping of sensory inputs. This process that could be inscribed in the cortical computing routine and network organization is likely to promote object formation and implement perception rules. PMID:29354679

  8. Depth-aware image seam carving.

    PubMed

    Shen, Jianbing; Wang, Dapeng; Li, Xuelong

    2013-10-01

    An image seam carving algorithm should preserve important and salient objects as much as possible when changing the image size, while not removing the secondary objects in the scene. However, it is still difficult to determine the important and salient objects in a way that avoids distorting them after resizing the input image. In this paper, we develop a novel depth-aware single image seam carving approach by taking advantage of modern depth cameras such as the Kinect sensor, which captures the RGB color image and its corresponding depth map simultaneously. By considering both the depth information and the just noticeable difference (JND) model, we develop an efficient JND-based significance computation approach using multiscale graph-cut-based energy optimization. Our method achieves better seam carving performance by cutting fewer seams through near objects and removing more seams through distant objects. To the best of our knowledge, our algorithm is the first work to use the true depth map captured by the Kinect depth camera for single image seam carving. The experimental results demonstrate that the proposed approach produces better seam carving results than previous content-aware seam carving methods.
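
The seam search that all such methods share is a small dynamic program; a depth-aware variant would simply add a depth term to the energy map before running it:

```python
import numpy as np

def min_vertical_seam(energy):
    """Dynamic-programming seam search used by seam carving: returns one
    column index per row tracing the connected vertical path of minimum
    total energy."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for i in range(1, h):
        left = np.r_[np.inf, cost[i - 1, :-1]]     # shifted predecessors
        right = np.r_[cost[i - 1, 1:], np.inf]
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    seam = [int(cost[-1].argmin())]                # backtrack from the bottom
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        seam.append(lo + int(cost[i, lo:hi].argmin()))
    return seam[::-1]

E = np.array([[9, 1, 9],
              [9, 1, 9],
              [9, 9, 1]])
print(min_vertical_seam(E))   # -> [1, 1, 2]
```

Removing one pixel per row along the returned seam narrows the image by one column while avoiding high-energy (here: important or near) regions.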

  9. Single-shot dual-wavelength in-line and off-axis hybrid digital holography

    NASA Astrophysics Data System (ADS)

    Wang, Fengpeng; Wang, Dayong; Rong, Lu; Wang, Yunxin; Zhao, Jie

    2018-02-01

    We propose an in-line and off-axis hybrid holographic real-time imaging technique. The in-line and off-axis digital holograms are generated simultaneously by two lasers with different wavelengths, and they are recorded using a color camera with a single shot. The reconstruction is carried out using an iterative algorithm whose initial input is designed to include the intensity of the in-line hologram and the approximate phase distribution obtained from the off-axis hologram. In this way, the complex field in the object plane output by the iterative procedure yields higher-quality amplitude and phase images than traditional iterative phase retrieval. The performance of the technique has been demonstrated by acquiring the amplitude and phase images of a green lacewing's wing and a living moon jellyfish.
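
A generic Gerchberg-Saxton-style loop of the kind described (alternating an object-domain constraint with the measured amplitude; not the paper's exact hybrid scheme) looks like this; seeded with the exact phase, it is simply a fixed point:

```python
import numpy as np

def gs_refine(meas_amp, init_phase, n_iter=50):
    # Alternate: object plane (force real, non-negative) <-> detector plane
    # (replace amplitude with the measured one, keep the current phase).
    field = meas_amp * np.exp(1j * init_phase)
    obj = np.zeros_like(meas_amp)
    for _ in range(n_iter):
        obj = np.clip(np.fft.ifft2(field).real, 0, None)
        spec = np.fft.fft2(obj)
        field = meas_amp * np.exp(1j * np.angle(spec))
    return obj

true = np.zeros((8, 8)); true[2:5, 3:6] = 1.0
spec = np.fft.fft2(true)
rec = gs_refine(np.abs(spec), np.angle(spec))  # perfect phase seed: fixed point
print(np.allclose(rec, true, atol=1e-7))       # -> True
```

In the hybrid technique, the approximate off-axis phase plays the role of `init_phase`: the better the seed, the fewer iterations the refinement needs.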

  10. Adaptive Intuitionistic Fuzzy Enhancement of Brain Tumor MR Images

    NASA Astrophysics Data System (ADS)

    Deng, He; Deng, Wankai; Sun, Xianping; Ye, Chaohui; Zhou, Xin

    2016-10-01

    Image enhancement techniques are able to improve the contrast and visual quality of magnetic resonance (MR) images. However, conventional methods cannot compensate for the deficiencies of individual brain tumor MR imaging modes. In this paper, we propose an adaptive intuitionistic fuzzy sets-based scheme, called AIFE, which takes information provided by different MR acquisitions and tries to enhance the normal and abnormal structural regions of the brain while displaying the enhanced results as a single image. The AIFE scheme first separates an input image into several sub-images, then divides each sub-image into object and background areas. After that, different novel fuzzification, hyperbolization and defuzzification operations are implemented on each object/background area, and finally an enhanced result is achieved via nonlinear fusion operators. The fuzzy implementations can be processed in parallel. Experiments on real data demonstrate that the AIFE scheme not only effectively fuses information from images acquired with different MR sequences into a single image, but also achieves better enhancement performance than conventional baseline algorithms. This indicates that the proposed AIFE scheme has potential for improving the detection and diagnosis of brain tumors.
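
The fuzzification / hyperbolization / defuzzification chain can be sketched for a single region (a minimal stand-in for the full AIFE pipeline; the INT contrast operator and the beta exponent are common fuzzy-enhancement choices, not necessarily the paper's):

```python
import numpy as np

def fuzzy_enhance(region, beta=0.8):
    lo, hi = float(region.min()), float(region.max())
    mu = (region - lo) / max(hi - lo, 1e-12)                   # fuzzification
    mu = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)   # INT contrast op
    mu = mu**beta                                              # hyperbolic-style tweak
    return lo + (hi - lo) * mu                                 # defuzzification

region = np.array([0.0, 64.0, 128.0, 192.0, 255.0])
out = fuzzy_enhance(region)
print(float(out[0]), float(out[-1]))   # -> 0.0 255.0 (dynamic range preserved)
```

Within the preserved range, dark pixels are pushed darker and bright pixels brighter, which is the contrast-stretching effect the membership operators provide.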

  11. Medical Image Intensifier In 1980 (What Really Happened)

    NASA Astrophysics Data System (ADS)

    Balter, Stephen; Kuhl, Walter

    1980-08-01

    In 1972, at the first SPIE seminar covering the application of optical instrumentation in medicine, Balter and Stanton presented a paper forecasting the status of x-ray image intensifiers in the year 1980. Now, eight years later, it is 1980, and it seems a good idea to evaluate these forecasts in the light of what has actually happened. The x-ray sensitive image intensifier tube (with cesium iodide as an input phosphor) is used nearly universally. Input screen sizes range from 15 cm to 36 cm in diameter. Real time monitoring of both fluoroscopic and fluorographic examinations is generally performed via closed circuit television. Archival recording of images is carried out using cameras with film formats of approximately 100 mm for single exposure or serial fluorography and 35 mm for cine fluorography. With the detective quantum efficiency of image intensifier tubes remaining near 50% throughout the decade, the noise content of most fluorographic and fluoroscopic images is still determined by the input exposure. Consequently, patient doses today, in 1980, have not substantially changed in the last ten years. There is, however, interest in uncoupling the x-ray dose and the image brightness by providing a variable optical diaphragm between the output of the image intensifier tube and the recording devices. During the past eight years, there has been a major philosophical change in the approach to imaging systems. It is now realized that medical image quality is much more dependent on the reduction of large area contrast losses than on the limiting resolution of the imaging system. It has also been clear that much diagnostic information is carried by spatial frequencies in the neighborhood of one line pair per millimeter (referred to the patient). The design of modern image intensifiers has been directed toward improvement in the large area contrast by minimizing x-ray and optical scatter in both the image intensifier tube and its associated components.

  12. Two Pathways to Stimulus Encoding in Category Learning?

    PubMed Central

    Davis, Tyler; Love, Bradley C.; Maddox, W. Todd

    2008-01-01

    Category learning theorists tacitly assume that stimuli are encoded by a single pathway. Motivated by theories of object recognition, we evaluate a dual-pathway account of stimulus encoding. The part-based pathway establishes mappings between sensory input and symbols that encode discrete stimulus features, whereas the image-based pathway applies holistic templates to sensory input. Our experiments use rule-plus-exception structures in which one exception item in each category violates a salient regularity and must be distinguished from other items. In Experiment 1, we find that discrete representations are crucial for recognition of exceptions following brief training. Experiments 2 and 3 involve multi-session training regimens designed to encourage either part or image-based encoding. We find that both pathways are able to support exception encoding, but have unique characteristics. We speculate that one advantage of the part-based pathway is the ability to generalize across domains, whereas the image-based pathway provides faster and more effortless recognition. PMID:19460948

  13. Solid-state Image Sensor with Focal-plane Digital Photon-counting Pixel Array

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.; Pain, Bedabrata

    1997-01-01

    A solid-state focal-plane imaging system comprises an NxN array of high-gain, low-noise unit cells, each unit cell being connected to a different one of the photovoltaic detector diodes, one for each unit cell, interspersed in the array for ultralow-level image detection, and a plurality of digital counters coupled to the outputs of the unit cells by a multiplexer (either a separate counter for each unit cell or a row of N counters time-shared with N rows of unit cells). Each unit cell includes two self-biasing cascode amplifiers in cascade for a high charge-to-voltage conversion gain (greater than 1 mV/e(-)) and an electronic switch to reset the input capacitance to a reference potential, so that detection of an incident photon can be discriminated by the photoelectron (e(-)) generated in the detector diode at the input of the first cascode amplifier, allowing incident photons to be counted individually in a digital counter connected to the output of the second cascode amplifier. Resetting the input capacitance and initiating self-biasing of the amplifiers occurs every clock cycle of an integrating period, enabling ultralow-light-level image detection by the array of photovoltaic detector diodes under conditions in which the photon flux statistically provides only a single photon at a time incident on any one detector diode during any clock cycle.
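    The per-cycle discrimination logic described above can be illustrated in a few lines. This is a behavioral sketch only, not the patented circuit; the gain, threshold, and event values are hypothetical.

```python
# Behavioral sketch of per-pixel digital photon counting by threshold
# discrimination. Gain, threshold, and event sequences are hypothetical.

def count_photons(photoelectrons_per_cycle, gain_mv_per_e=1.5, threshold_mv=0.75):
    """Count clock cycles in which the amplified charge exceeds the threshold.

    photoelectrons_per_cycle: 0/1 photoelectron events per clock cycle
    (ultralow light: at most one photon per cycle, as in the abstract).
    """
    count = 0
    for e in photoelectrons_per_cycle:
        # The input capacitance is reset to a reference each cycle, so the
        # voltage step reflects only this cycle's photoelectron.
        voltage_mv = e * gain_mv_per_e  # >1 mV/e- conversion gain
        if voltage_mv > threshold_mv:
            count += 1  # the digital counter increments once per detected photon
    return count

print(count_photons([1, 0, 0, 1, 1, 0]))  # 3
```

    Because the node is reset every cycle, each comparison is independent, which is what lets the counter tally photons one at a time.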

  14. Derivation of formulas for root-mean-square errors in location, orientation, and shape in triangulation solution of an elongated object in space

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1974-01-01

    Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.

  15. Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.

    PubMed

    Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui

    2017-01-01

    Underwater inherent optical properties (IOPs) are the fundamental clues to many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered as the ideal IOPs measuring methods. But these methods are inflexible and expensive to be deployed. To overcome this problem, we aim to develop a novel measuring method using only a single underwater image with the help of deep artificial neural network. The power of artificial neural network has been proved in image processing and computer vision fields with deep learning technology. However, image-based IOPs estimation is a quite different and challenging task. Unlike the traditional applications such as image classification or localization, IOP estimation looks at the transparency of the water between the camera and the target objects to estimate multiple optical properties simultaneously. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOPs estimation based on a single RGB image that is even noisy. The imaging depth information is considered as an aided input to help our model make better decision.

  16. Experiments on sparsity assisted phase retrieval of phase objects

    NASA Astrophysics Data System (ADS)

    Gaur, Charu; Lochab, Priyanka; Khare, Kedar

    2017-05-01

    Iterative phase retrieval algorithms such as the Gerchberg-Saxton method and the Fienup hybrid input-output method are known to suffer from the twin image stagnation problem, particularly when the solution to be recovered is complex valued and has centrosymmetric support. Recently we showed that the twin image stagnation problem can be addressed using image sparsity ideas (Gaur et al 2015 J. Opt. Soc. Am. A 32 1922). In this work we test this sparsity assisted phase retrieval method with experimental single shot Fourier transform intensity data frames corresponding to phase objects displayed on a spatial light modulator. The standard iterative phase retrieval algorithms are combined with an image sparsity based penalty in an adaptive manner. Illustrations for both binary and continuous phase objects are provided. It is observed that image sparsity constraint has an important role to play in obtaining meaningful phase recovery without encountering the well-known stagnation problems. The results are valuable for enabling single shot coherent diffraction imaging of phase objects for applications involving illumination wavelengths over a wide range of electromagnetic spectrum.
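    For readers unfamiliar with the iterative schemes named above, here is a minimal 1-D error-reduction (Gerchberg-Saxton-type) loop. This is not the paper's sparsity-assisted algorithm, only the classical baseline it builds on; the signal, support, and iteration count are synthetic choices, and the error-reduction error metric is known to be non-increasing across iterations.

```python
# Minimal 1-D error-reduction sketch: alternate between the measured Fourier
# magnitudes and object-domain constraints (support + nonnegativity).
# Synthetic data; the paper adds an adaptive sparsity penalty on top of this.
import cmath

def dft(x, sign=-1):
    n = len(x)
    return [sum(x[j] * cmath.exp(sign * 2j * cmath.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def error_reduction(measured_mag, support, iters=20):
    n = len(measured_mag)
    g = [1.0 if s else 0.0 for s in support]        # flat start on the support
    for _ in range(iters):
        G = dft(g)
        # Fourier-domain constraint: keep the phase, impose measured magnitudes.
        G = [m * (v / abs(v)) if abs(v) > 1e-12 else complex(m)
             for m, v in zip(measured_mag, G)]
        g = [v.real / n for v in dft(G, sign=+1)]   # inverse DFT, real part
        # Object-domain constraints: zero outside the support, clip negatives.
        g = [max(v, 0.0) if s else 0.0 for v, s in zip(g, support)]
    return g

truth = [0.0, 2.0, 1.0, 3.0, 0.0, 0.0, 0.0, 0.0]   # synthetic nonnegative object
support = [1 <= i <= 3 for i in range(len(truth))]
mag = [abs(v) for v in dft(truth)]                  # "measured" magnitudes
rec = error_reduction(mag, support)

def mag_err(g):
    return sum((abs(v) - m) ** 2 for v, m in zip(dft(g), mag))
```

    The Fourier-magnitude error `mag_err` never increases from one iteration to the next, which is the property the stagnation discussion above is about: the error stops decreasing at a twin-image stalemate rather than at the true solution.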

  17. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
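    The look-up-table mechanism at the heart of the remapper is simple to sketch. The 1-D layout and the "zoom" table below are hypothetical illustrations, not the patent's actual transforms.

```python
# Toy sketch of table-driven coordinate remapping (hypothetical 1-D layout):
# each output pixel fetches an input pixel through a precomputed look-up
# table, so swapping tables changes the transformation without changing the
# code; repeated table entries give a one-to-many mapping of input pixels.

def remap(image, lut):
    """Return output where output[i] = image[lut[i]]."""
    return [image[src] for src in lut]

image = [10, 20, 30, 40]
# A 2x horizontal "zoom" expressed as a LUT: two output pixels per input pixel.
zoom_lut = [0, 0, 1, 1, 2, 2, 3, 3]
print(remap(image, zoom_lut))  # [10, 10, 20, 20, 30, 30, 40, 40]
```

    The device described above additionally averages many input pixels into one output pixel (the collective processor) and interpolates in the one-to-many direction, but the operator-selectable table is the common ingredient.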

  18. Robust image retrieval from noisy inputs using lattice associative memories

    NASA Astrophysics Data System (ADS)

    Urcid, Gonzalo; Nieves-V., José Angel; García-A., Anmi; Valdiviezo-N., Juan Carlos

    2009-02-01

    Lattice associative memories, also known as morphological associative memories, are fully connected feedforward neural networks with no hidden layers, whose computation at each node is carried out with lattice algebra operations. These networks are a relatively recent development in the field of associative memories that has proven to be an alternative way to work with sets of pattern pairs, for which the storage and retrieval stages use minimax algebra. Different associative memory models have been proposed to cope with the problem of pattern recall under input degradations, such as occlusions or random noise, where input patterns can be composed of binary or real-valued entries. In comparison to these and other artificial neural network memories, lattice-algebra-based memories display better storage and recall capability; however, the computational techniques devised to achieve that purpose require additional processing or provide only partial success when inputs are presented with undetermined noise levels. Robust retrieval capability of an associative memory model is usually expressed as a high percentage of perfect recalls from non-perfect input. The procedure described here uses noise masking, defined by simple lattice operations together with appropriate metrics such as the normalized mean squared error or signal-to-noise ratio, to boost the recall performance of either the min or the max lattice auto-associative memory. Using a single lattice associative memory, illustrative examples are given that demonstrate the enhanced retrieval of correct gray-scale image associations from inputs corrupted with random noise.
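    The minimax storage and recall operations mentioned above are compact enough to sketch directly. The integer patterns are made up for illustration; the min-memory shown here recalls stored patterns perfectly and tolerates erosive (downward) noise, which is the behavior the noise-masking procedure exploits.

```python
# Minimal sketch of a lattice (morphological) auto-associative memory.
# Storage takes a min of pairwise differences; recall is a max-plus product.
# Patterns are illustrative integers (e.g., gray levels).

def store_min_memory(patterns):
    n = len(patterns[0])
    return [[min(x[i] - x[j] for x in patterns) for j in range(n)]
            for i in range(n)]

def recall(W, x):
    # Max-plus product: y_i = max_j (W_ij + x_j).
    return [max(W[i][j] + x[j] for j in range(len(x))) for i in range(len(x))]

patterns = [[3, 7, 2, 9], [5, 1, 8, 4]]
W = store_min_memory(patterns)
assert all(recall(W, x) == x for x in patterns)  # perfect recall of stored patterns

eroded = [3, 7, 0, 9]          # first pattern with erosive noise at index 2
print(recall(W, eroded))       # [3, 7, 2, 9]
```

    The dual max-memory, built with `max` in storage and a min-plus product in recall, tolerates dilative noise instead; combining the two with noise masking is what the procedure above uses for inputs with mixed random noise.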

  19. Deletion of Ten-m3 Induces the Formation of Eye Dominance Domains in Mouse Visual Cortex

    PubMed Central

    Merlin, Sam; Horng, Sam; Marotte, Lauren R.; Sur, Mriganka; Sawatari, Atomu

    2013-01-01

    The visual system is characterized by precise retinotopic mapping of each eye, together with exquisitely matched binocular projections. In many species, the inputs that represent the eyes are segregated into ocular dominance columns in primary visual cortex (V1), whereas in rodents, this does not occur. Ten-m3, a member of the Ten-m/Odz/Teneurin family, regulates axonal guidance in the retinogeniculate pathway. Significantly, ipsilateral projections are expanded in the dorsal lateral geniculate nucleus and are not aligned with contralateral projections in Ten-m3 knockout (KO) mice. Here, we demonstrate the impact of altered retinogeniculate mapping on the organization and function of V1. Transneuronal tracing and c-fos immunohistochemistry demonstrate that the subcortical expansion of ipsilateral input is conveyed to V1 in Ten-m3 KOs: Ipsilateral inputs are widely distributed across V1 and are interdigitated with contralateral inputs into eye dominance domains. Segregation is confirmed by optical imaging of intrinsic signals. Single-unit recording shows ipsilateral, and contralateral inputs are mismatched at the level of single V1 neurons, and binocular stimulation leads to functional suppression of these cells. These findings indicate that the medial expansion of the binocular zone together with an interocular mismatch is sufficient to induce novel structural features, such as eye dominance domains in rodent visual cortex. PMID:22499796

  20. Image scale measurement with correlation filters in a volume holographic optical correlator

    NASA Astrophysics Data System (ADS)

    Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan

    2013-08-01

    A search engine containing various target images or different parts of a large scene area is of great use for many applications, including object detection, biometric recognition, and image registration. The input image, captured in real time, is compared with all the template images in the search engine. A volume holographic correlator is one such search engine: it performs thousands of comparisons among the images at very high speed, with the correlation task accomplished mainly in optics. However, the input target image always contains some scale variation relative to the template images, in which case the correlation values cannot properly reflect the similarity of the images. It is therefore essential to estimate and eliminate the scale variation of the input target image. Scale measurement can be performed in three domains: spatial, spectral, and time. Most methods dealing with the scale factor are based on the spatial or spectral domains. In this paper, a time-domain method, called the time-sequential scaled method, is proposed to measure the scale factor of the input image. The method utilizes the relationship between the scale variation and the correlation value of two images: it sends a few artificially scaled input images to be compared with the template images. The correlation value increases with the scale factor over the interval 0.8~1 and decreases over the interval 1~1.2. The original scale of the input image can therefore be measured by locating the largest correlation value obtained when correlating the artificially scaled input images with the template images. The measurement range for the scale is 0.8~4.8: a scale factor beyond 1.2 is measured by scaling the input image by factors of 1/2, 1/3, and 1/4, correlating the rescaled images with the template images, and estimating the remaining scale factor inside 0.8~1.2.
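    The time-sequential idea — rescale the input by several candidate factors and keep the factor with the largest correlation — can be shown on a 1-D toy problem. The signals, candidate set, and linear resampler below are made-up stand-ins for the optical correlator.

```python
# Illustrative 1-D sketch of scale search by correlation peak: rescale the
# input by candidate factors and pick the factor with the largest normalized
# correlation against the template. All data here are synthetic.

def resample(sig, scale, length):
    """Linearly interpolate sig at positions j*scale, clamped to the ends."""
    out = []
    for j in range(length):
        p = min(j * scale, len(sig) - 1)
        lo = int(p)
        hi = min(lo + 1, len(sig) - 1)
        out.append(sig[lo] + (p - lo) * (sig[hi] - sig[lo]))
    return out

def ncc(a, b):
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

template = [1.0, 3.0, 2.0, 5.0, 4.0, 7.0, 0.0, 6.0]
true_scale = 2.0                                   # input is the template, stretched 2x
inputs = resample(template, 1 / true_scale, 16)
candidates = [1.0, 2.0, 4.0]
best = max(candidates,
           key=lambda s: ncc(resample(inputs, s, len(template)), template))
print(best)  # 2.0
```

    In the real system the rescaled inputs are presented sequentially to the optical correlator, and the candidate grid is refined within 0.8~1.2 after the coarse 1/2, 1/3, 1/4 prescaling.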

  1. A general framework for face reconstruction using single still image based on 2D-to-3D transformation kernel.

    PubMed

    Fooprateepsiri, Rerkchai; Kurutach, Werasak

    2014-03-01

    Face authentication is a biometric classification method that verifies the identity of a user based on an image of the face. Accuracy of the authentication is reduced when the pose, illumination, and expression of the training face images differ from those of the testing image. The methods in this paper are designed to improve the accuracy of a features-based face recognition system when the poses of the input images and training images differ. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Second, realistic virtual faces with different poses are synthesized based on the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted based on these representative virtual faces. Compared with other related works, this framework has the following advantages: (1) only one single frontal face is required for face recognition, which avoids the burdensome enrollment work; and (2) the synthesized face samples provide the capability to conduct recognition under difficult conditions such as complex pose, illumination, and expression. From the experimental results, we conclude that the proposed method improves the accuracy of face recognition under varying pose, illumination, and expression. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Image-derived input function with factor analysis and a-priori information.

    PubMed

    Simončič, Urban; Zanotti-Fregonara, Paolo

    2015-02-01

    Quantitative PET studies often require the cumbersome and invasive procedure of arterial cannulation to measure the input function. This study sought to minimize the number of necessary blood samples by developing a factor-analysis-based image-derived input function (IDIF) methodology for dynamic PET brain studies. IDIF estimation was performed as follows: (a) carotid and background regions were segmented manually on an early PET time frame; (b) blood-weighted and tissue-weighted time-activity curves (TACs) were extracted with factor analysis; (c) factor analysis results were denoised and scaled using the voxels with the highest blood signal; (d) using population data and one blood sample at 40 min, the whole-blood TAC was estimated from the postprocessed factor analysis results; and (e) the parent concentration was finally estimated by correcting the whole-blood curve with measured radiometabolite concentrations. The methodology was tested using data from 10 healthy individuals imaged with [(11)C](R)-rolipram. The accuracy of the IDIFs was assessed against full arterial sampling by comparing the areas under the curve of the input functions and by calculating the total distribution volume (VT). The shape of the image-derived whole-blood TAC matched the reference arterial curves well, and the whole-blood areas under the curve were accurately estimated (mean error 1.0±4.3%). The relative Logan-V(T) error was -4.1±6.4%. Compartmental modeling and spectral analysis gave less accurate V(T) results compared with Logan. A factor-analysis-based IDIF for [(11)C](R)-rolipram brain PET studies that relies on a single blood sample and population data can be used for accurate quantification of Logan-V(T) values.
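    The single-sample anchoring in steps (c)-(d) reduces to a one-point rescaling, sketched below with synthetic numbers. The time grid, activity values, and units are hypothetical; the real pipeline also uses population data and metabolite correction.

```python
# Hedged sketch of the single-blood-sample scaling step: the unscaled
# factor-analysis curve is multiplied by the ratio between the measured
# blood activity at 40 min and the curve's value at that time.
# All numbers below are synthetic illustrations.

def scale_idif(times_min, unscaled_tac, sample_time, sample_activity):
    curve_at_sample = unscaled_tac[times_min.index(sample_time)]
    k = sample_activity / curve_at_sample
    return [k * v for v in unscaled_tac]

times = [1, 5, 20, 40, 60]
unscaled = [8.0, 4.0, 2.0, 1.0, 0.8]           # arbitrary factor-analysis units
scaled = scale_idif(times, unscaled, 40, 5.5)  # one blood sample at 40 min
print(scaled[3])  # 5.5
```

    By construction the scaled curve passes exactly through the measured sample, while its shape comes entirely from the image-derived factor.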

  3. Minimal Power Latch for Single-Slope ADCs

    NASA Technical Reports Server (NTRS)

    Hancock, Bruce R. (Inventor)

    2015-01-01

    A latch circuit that uses two interoperating latches. The latch circuit has the beneficial feature that it switches only a single time during a measurement that uses a stair step or ramp function as an input signal in an analog to digital converter. This feature minimizes the amount of power that is consumed in the latch and also minimizes the amount of high frequency noise that is generated by the latch. An application using a plurality of such latch circuits in a parallel decoding ADC for use in an image sensor is given as an example.

  4. An improved artifact removal in exposure fusion with local linear constraints

    NASA Astrophysics Data System (ADS)

    Zhang, Hai; Yu, Mali

    2018-04-01

    In exposure fusion, it is challenging to remove artifacts caused by camera motion and moving objects in the scene. An improved artifact removal method is proposed in this paper, which performs local linear adjustment during the artifact removal process. After determining a reference image, we first perform high-dynamic-range (HDR) deghosting to generate an intermediate image stack from the input image stack. Then, a linear Intensity Mapping Function (IMF) in each window is extracted based on the intensities of the intermediate image and reference image, together with the intensity mean and variance of the reference image. Finally, with the extracted local linear constraints, we reconstruct a target image stack, which can be directly fused into a single HDR-like image. Experimental results demonstrate that the proposed method is robust and effective in removing artifacts, especially in the saturated regions of the reference image.
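    A local linear intensity mapping of the kind described can be fitted per window by least squares. This is a generic sketch of fitting y ≈ a·x + b in one window, not necessarily the paper's exact formulation; the window intensities are made up.

```python
# Sketch of fitting a local linear intensity mapping y = a*x + b in one
# window by least squares, using the window's means, variance, and
# covariance. Window values are synthetic.

def fit_linear_imf(x, y):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    var = sum((v - mx) ** 2 for v in x) / n
    cov = sum((u - mx) * (v - my) for u, v in zip(x, y)) / n
    a = cov / var            # slope from covariance over variance
    b = my - a * mx          # intercept from the window means
    return a, b

intermediate = [10, 20, 30, 40]                # window from the deghosted stack
reference = [2 * v + 5 for v in intermediate]  # reference differs by a*x + b
a, b = fit_linear_imf(intermediate, reference)
print(a, b)  # 2.0 5.0
```

    Fitting one such (a, b) per window and applying it to the intermediate stack yields the locally adjusted target stack that is then fused.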

  5. Quantitative Image Restoration in Bright Field Optical Microscopy.

    PubMed

    Gutiérrez-Medina, Braulio; Sánchez Miranda, Manuel de Jesús

    2017-11-07

    Bright field (BF) optical microscopy is regarded as a poor method to observe unstained biological samples due to intrinsically low image contrast. We introduce quantitative image restoration in bright field (QRBF), a digital image processing method that restores out-of-focus BF images of unstained cells. Our procedure is based on deconvolution, using a point spread function modeled from theory. By comparing with reference images of bacteria observed in fluorescence, we show that QRBF faithfully recovers shape and enables quantifying the size of individual cells, even from a single input image. We applied QRBF in a high-throughput image cytometer to assess shape changes in Escherichia coli during hyperosmotic shock, finding size heterogeneity. We demonstrate that QRBF is also applicable to eukaryotic cells (yeast). Altogether, digital restoration emerges as a straightforward alternative to methods designed to generate contrast in BF imaging for quantitative analysis. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
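    Deconvolution with a known PSF can be illustrated with the classical Richardson-Lucy iteration. The paper's QRBF method is deconvolution-based but not necessarily this algorithm; the 1-D signal and PSF below are synthetic.

```python
# Illustrative 1-D Richardson-Lucy deconvolution (a stand-in for the paper's
# deconvolution step). Signal and PSF values are synthetic.

def conv_same(x, k):
    h = len(k) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, kv in enumerate(k):
            idx = i + j - h
            if 0 <= idx < len(x):
                s += x[idx] * kv
        out.append(s)
    return out

def richardson_lucy(observed, psf, iters=25):
    est = [1.0] * len(observed)      # flat nonnegative starting estimate
    psf_m = psf[::-1]                # mirrored PSF
    for _ in range(iters):
        blur = conv_same(est, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blur)]
        corr = conv_same(ratio, psf_m)
        est = [e * c for e, c in zip(est, corr)]  # multiplicative update
    return est

truth = [0.0, 0.0, 0.0, 5.0, 0.0, 0.0, 0.0]      # a point-like object
psf = [0.25, 0.5, 0.25]
blurred = conv_same(truth, psf)
restored = richardson_lucy(blurred, psf)
sse = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
```

    The multiplicative update preserves nonnegativity, and for this clean blur the restored signal is closer to the original spike than the blurred observation is.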

  6. Full-color stereoscopic single-pixel camera based on DMD technology

    NASA Astrophysics Data System (ADS)

    Salvador-Balaguer, Eva; Clemente, Pere; Tajahuerce, Enrique; Pla, Filiberto; Lancis, Jesús

    2017-02-01

    Imaging systems based on microstructured illumination and single-pixel detection offer several advantages over conventional imaging techniques. They are an effective method for imaging through scattering media, even in the dynamic case; they work efficiently under low light levels; and the simplicity of the detector makes it easy to design imaging systems working outside the visible spectrum and to acquire multidimensional information. In particular, several approaches have been proposed to record 3D information. The technique is based on sampling the object with a sequence of microstructured light patterns codified onto a programmable spatial light modulator while the light intensity is measured with a single-pixel detector. The image is retrieved computationally from the photocurrent fluctuations provided by the detector. In this contribution we describe an optical system able to produce full-color stereoscopic images using few and simple optoelectronic components. In our setup we use an off-the-shelf digital light projector (DLP) based on a digital micromirror device (DMD) to generate the light patterns. To capture the color of the scene we take advantage of the codification procedure used by the DLP for color video projection. To record stereoscopic views we use a 90° beam splitter and two mirrors, allowing us to project the patterns from two different viewpoints. By using a single monochromatic photodiode we obtain a pair of color images that can be used as input to a 3-D display. To reduce the time needed to project the patterns we use a compressive sampling algorithm. Experimental results are shown.
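    The pattern-projection-and-reconstruction principle can be shown on a toy 4-pixel scene. This sketch is not the authors' DMD/DLP hardware: it uses ±1 Hadamard-type patterns for simplicity (real modulators realize these differentially from 0/1 patterns), and the scene values are made up.

```python
# Toy single-pixel acquisition sketch: project orthogonal patterns, record
# one "bucket" intensity per pattern, then reconstruct from the sequence.
# Scene and patterns are synthetic.

# 4-element scene and a 4x4 Hadamard-type orthogonal pattern set.
scene = [3.0, 1.0, 4.0, 2.0]
patterns = [
    [1, 1, 1, 1],
    [1, -1, 1, -1],
    [1, 1, -1, -1],
    [1, -1, -1, 1],
]

# Single-pixel measurements: one number per projected pattern.
measurements = [sum(p * s for p, s in zip(pat, scene)) for pat in patterns]

# Reconstruction: the patterns are orthogonal with squared norm 4.
recon = [sum(m * pat[i] for m, pat in zip(measurements, patterns)) / 4
         for i in range(4)]
print(recon)  # [3.0, 1.0, 4.0, 2.0]
```

    Compressive sampling, mentioned above, replaces the full orthogonal set with fewer patterns and a sparsity-promoting solver, which is what shortens the acquisition time.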

  7. Image super-resolution via sparse representation.

    PubMed

    Yang, Jianchao; Wright, John; Huang, Thomas S; Ma, Yi

    2010-11-01

    This paper presents a new approach to single-image super-resolution, based on sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low resolution and high resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low resolution image patch can be applied with the high resolution image patch dictionary to generate a high resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs, reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle super-resolution with noisy inputs in a more unified framework.
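    The coupled-dictionary idea — code the LR patch over the LR dictionary, reuse the coefficients with the HR dictionary — can be sketched with a 1-sparse greedy coder. Matching pursuit stands in here for the paper's sparse solver, and the tiny dictionaries are hand-made examples.

```python
# Minimal sketch of coupled-dictionary super-resolution with a 1-sparse
# greedy code. Dictionaries and the patch are synthetic illustrations.

def mp_1sparse(patch, dictionary):
    """Pick the unit-norm atom with the largest correlation to the patch."""
    best_idx, best_coef = 0, 0.0
    for idx, atom in enumerate(dictionary):
        coef = sum(a * p for a, p in zip(atom, patch))
        if abs(coef) > abs(best_coef):
            best_idx, best_coef = idx, coef
    return best_idx, best_coef

# Jointly indexed LR/HR dictionaries (LR atoms are unit-norm).
D_lr = [[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]]
D_hr = [[2.0, 0.0, 1.0, 0.0], [0.0, 2.0, 0.0, 1.0], [1.0, 2.0, 2.0, 1.0]]

lr_patch = [0.3, 0.4]                      # = 0.5 * D_lr[2]
idx, coef = mp_1sparse(lr_patch, D_lr)
hr_patch = [coef * v for v in D_hr[idx]]   # same sparse code, HR dictionary
print(idx)  # 2
```

    Here the LR patch is half of the third LR atom, so the coder selects index 2 with coefficient 0.5, and the HR patch comes out as approximately [0.5, 1.0, 1.0, 0.5] — half of the paired HR atom.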

  8. Classification of footwear outsole patterns using Fourier transform and local interest points.

    PubMed

    Richetelli, Nicole; Lee, Mackenzie C; Lasky, Carleen A; Gump, Madison E; Speir, Jacqueline A

    2017-06-01

    Successful classification of questioned footwear has tremendous evidentiary value; the result can minimize the potential suspect pool and link a suspect to a victim, a crime scene, or even multiple crime scenes to each other. With this in mind, several different automated and semi-automated classification models have been applied to the forensic footwear recognition problem, with superior performance commonly associated with two different approaches: correlation of image power (magnitude) or phase, and the use of local interest points transformed using the Scale Invariant Feature Transform (SIFT) and compared using Random Sample Consensus (RANSAC). Despite the distinction associated with each of these methods, the three have not previously been cross-compared using a single dataset of limited quality (i.e., characteristic of crime scene-like imagery) created using a wide combination of image inputs. To address this question, the research presented here examines the classification performance of the Fourier-Mellin transform (FMT), phase-only correlation (POC), and local interest points (transformed using SIFT and compared using RANSAC) as a function of inputs that include mixed media (blood and dust), transfer mechanisms (gel lifters), enhancement techniques (digital and chemical), and variations in print substrate (ceramic tiles, vinyl tiles, and paper). Results indicate that POC outperforms both FMT and SIFT+RANSAC regardless of image input (type, quality, and totality), and that the difference in stochastic dominance detected for POC is significant across all image comparison scenarios evaluated in this study. Copyright © 2017 Elsevier B.V. All rights reserved.
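    Phase-only correlation, the best performer above, normalizes the cross-spectrum to unit magnitude so that only phase contributes, giving a sharp peak at the relative displacement. The 1-D sketch below uses a naive DFT on synthetic signals; real implementations use 2-D FFTs on the footwear images.

```python
# Sketch of phase-only correlation (POC) on 1-D signals with a naive DFT.
# Tiny synthetic data for illustration only.
import cmath

def dft(x, sign=-1):
    n = len(x)
    return [sum(x[j] * cmath.exp(sign * 2j * cmath.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def poc_shift(f, g):
    """Estimate the circular shift d such that g[x] = f[(x - d) % n]."""
    n = len(f)
    F, G = dft(f), dft(g)
    cross = [a * b.conjugate() for a, b in zip(F, G)]
    # Phase-only normalization: keep only the phase of the cross-spectrum.
    phase = [c / abs(c) if abs(c) > 1e-12 else 0j for c in cross]
    r = [v.real / n for v in dft(phase, sign=+1)]   # correlation surface
    peak = max(range(n), key=lambda i: r[i])
    return (n - peak) % n

f = [1.0, 2.0, 3.0, 5.0, 4.0, 0.0, 2.0, 7.0]
d = 3
g = [f[(x - d) % len(f)] for x in range(len(f))]
print(poc_shift(f, g))  # 3
```

    Because the magnitude spectrum is discarded, POC is insensitive to uniform contrast changes between the two prints, one plausible reason for its robustness to the degraded inputs studied above.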

  9. Retrieval of Sentence Sequences for an Image Stream via Coherence Recurrent Convolutional Networks.

    PubMed

    Park, Cesc Chunseong; Kim, Youngjin; Kim, Gunhee

    2018-04-01

    We propose an approach for retrieving a sequence of natural sentences for an image stream. Since general users often take a series of pictures of their experiences, much online visual information exists in the form of image streams, for which it is better to take the whole image stream into consideration when producing natural language descriptions. While almost all previous studies have dealt with the relation between a single image and a single natural sentence, our work extends both the input and output dimensions to a sequence of images and a sequence of sentences. For retrieving a coherent flow of multiple sentences for a photo stream, we propose a multimodal neural architecture called the coherence recurrent convolutional network (CRCN), which consists of convolutional neural networks, bidirectional long short-term memory (LSTM) networks, and an entity-based local coherence model. Our approach directly learns from the vast user-generated resource of blog posts as text-image parallel training data. We collect more than 22K unique blog posts with 170K associated images for the travel topics of NYC, Disneyland, Australia, and Hawaii. We demonstrate that our approach outperforms other state-of-the-art image captioning methods for text sequence generation, using both quantitative measures and user studies via Amazon Mechanical Turk.

  10. Supervised learning of tools for content-based search of image databases

    NASA Astrophysics Data System (ADS)

    Delanoy, Richard L.

    1996-03-01

    A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically-constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.

  11. Stereo and IMU-Assisted Visual Odometry for Small Robots

    NASA Technical Reports Server (NTRS)

    2012-01-01

    This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320 × 240) resolution, or 8 fps at VGA (Video Graphics Array, 640 × 480) resolution, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating-point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
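    The disparity computation at the core of function (1) can be illustrated with block matching on one scanline pair. This is a simplified sketch with synthetic data and a sum-of-absolute-differences cost; the flight software uses cross-correlation over full images plus subpixel refinement.

```python
# Minimal SAD block-matching sketch for one rectified scanline pair.
# Synthetic pixel values; the real system correlates full image blocks.

def disparity_sad(left, right, x, window=3, max_d=4):
    """Find d minimizing the sum of absolute differences between the block
    around left[x] and the block around right[x - d]. The caller must keep
    x at least window//2 pixels away from the left image's borders."""
    half = window // 2
    best_d, best_cost = 0, float("inf")
    for d in range(max_d + 1):
        if x - d - half < 0:
            break  # the right-image block would fall off the image
        cost = sum(abs(left[x + o] - right[x - d + o])
                   for o in range(-half, half + 1))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

left = [5, 9, 1, 7, 3, 8, 2, 6, 4, 0]
true_d = 2
right = left[true_d:] + [0, 0]   # right view: scene shifted by the disparity
print(disparity_sad(left, right, x=5))  # 2
```

    Repeating this search for every pixel yields the disparity map, and the 146 Mde/s figure above counts exactly these per-candidate cost evaluations.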

  12. Identifying the arterial input function from dynamic contrast-enhanced magnetic resonance images using an apex-seeking technique

    NASA Astrophysics Data System (ADS)

    Martel, Anne L.

    2004-04-01

    In order to extract quantitative information from dynamic contrast-enhanced MR images (DCE-MRI), it is usually necessary to identify an arterial input function. This is not a trivial problem if there are no major vessels present in the field of view. Most existing techniques rely on operator intervention or use various curve parameters to identify suitable pixels, but these are often specific to the anatomical region or the acquisition method used. They also require the signal from several pixels to be averaged in order to improve the signal-to-noise ratio; however, this introduces errors due to partial volume effects. We have described previously how factor analysis can be used to automatically separate arterial and venous components from DCE-MRI studies of the brain, but although that method works well for single-slice images through the brain when the blood-brain barrier is intact, it runs into problems for multi-slice images with more complex dynamics. This paper will describe a factor analysis method that is more robust in such situations and is relatively insensitive to the number of physiological components present in the data set. The technique is very similar to that used to identify spectral end-members from multispectral remote sensing images.

  13. Interactive High-Relief Reconstruction for Organic and Double-Sided Objects from a Photo.

    PubMed

    Yeh, Chih-Kuo; Huang, Shi-Yang; Jayaraman, Pradeep Kumar; Fu, Chi-Wing; Lee, Tong-Yee

    2017-07-01

    We introduce an interactive user-driven method to reconstruct high-relief 3D geometry from a single photo. In particular, we consider two novel but challenging reconstruction issues: i) common non-rigid objects whose shapes are organic rather than polyhedral/symmetric, and ii) double-sided structures, where the front and back sides of some curvy object parts are revealed simultaneously in the image. To address these issues, we develop a three-stage computational pipeline. First, we construct a 2.5D model from the input image by user-driven segmentation, automatic layering, and region completion, handling three common types of occlusion. Second, users can interactively mark up slope and curvature cues on the image to guide our constrained optimization model to inflate and lift up the image layers. We provide a real-time preview of the inflated geometry to allow interactive editing. Third, we stitch and optimize the inflated layers to produce a high-relief 3D model. Compared to previous work, we can generate high-relief geometry with large viewing angles, handle complex organic objects with multiple occluded regions and varying shape profiles, and reconstruct objects with double-sided structures. Lastly, we demonstrate the applicability of our method on a wide variety of input images with humans, animals, flowers, etc.

  14. A coarse-to-fine approach for medical hyperspectral image classification with sparse representation

    NASA Astrophysics Data System (ADS)

    Chang, Lan; Zhang, Mengmeng; Li, Wei

    2017-10-01

    A coarse-to-fine approach with sparse representation is proposed for medical hyperspectral image classification in this work. A segmentation technique with different scales is employed to exploit edges of the input image, where coarse super-pixel patches provide global classification information while fine ones further provide detail information. Unlike a common RGB image, a hyperspectral image has multiple bands, which allows the cluster centers to be adjusted with higher precision. After segmentation, each super-pixel is classified by the recently developed sparse representation-based classification (SRC), which assigns a label to the testing samples in one local patch by means of a sparse linear combination of all the training samples. Furthermore, segmentation with multiple scales is employed because a single scale is not suitable for the complicated distribution of medical hyperspectral imagery. Finally, classification results for different sizes of super-pixel are fused by a fusion strategy, offering at least two benefits: (1) the final result is clearly superior to that of segmentation with a single scale, and (2) the fusion process significantly simplifies the choice of scales. Experimental results using real medical hyperspectral images demonstrate that the proposed method outperforms the state-of-the-art SRC.
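
Sparse representation-based classification assigns a test sample the label of the class whose training samples reconstruct it best. A minimal sketch follows, with per-class least squares standing in for the l1-regularized sparse coding of true SRC (the spectral signatures and noise levels below are made up):

```python
import numpy as np

def src_classify(test, train, labels):
    """Assign the label whose class sub-dictionary reconstructs `test` best.
    Simplification: per-class least squares instead of a global l1 solve."""
    residuals = {}
    for c in set(labels):
        A = train[:, [i for i, l in enumerate(labels) if l == c]]
        coef, *_ = np.linalg.lstsq(A, test, rcond=None)
        residuals[c] = np.linalg.norm(test - A @ coef)
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(1)
# Two synthetic "spectral" classes built around distinct mean signatures.
m0, m1 = np.sin(np.linspace(0, 3, 40)), np.cos(np.linspace(0, 3, 40))
train = np.column_stack([m0 + 0.05 * rng.standard_normal(40) for _ in range(5)]
                        + [m1 + 0.05 * rng.standard_normal(40) for _ in range(5)])
labels = [0] * 5 + [1] * 5
pred = src_classify(m1 + 0.05 * rng.standard_normal(40), train, labels)
```

A noisy class-1 signature is reconstructed far better by the class-1 sub-dictionary, so the minimum-residual rule recovers the correct label.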

  15. Parallel Information Processing (Image Transmission via Fiber Bundle and Multimode Fiber)

    NASA Technical Reports Server (NTRS)

    Kukhtarev, Nicholai

    2003-01-01

    Growing demand for visual, user-friendly representation of information inspires a search for new methods of image transmission. Currently used in-series (sequential) methods of information processing are inherently slow and are designed mainly for transmission of one- or two-dimensional arrays of data. Conventional transmission of data by fibers requires many fibers with an array of laser diodes and photodetectors. In practice, fiber bundles are also used for transmission of images. An image is formed on the fiber-optic bundle entrance surface, and each fiber transmits the incident image to the exit surface. Since the fibers do not preserve phase, only a 2D intensity distribution can be transmitted in this way. Each single-mode fiber transmits only one pixel of an image. Multimode fibers may also be used, so that each mode represents a different pixel element. Direct transmission of an image through a multimode fiber is hindered by mode scrambling and phase randomization. To overcome these obstacles, wavelength- and time-division multiplexing have been used, with each pixel transmitted on a separate wavelength or time interval. Phase-conjugate techniques have also been tested, but only in an impractical scheme in which the reconstructed image returns to the fiber input end. Another method of three-dimensional imaging over single-mode fibers has been demonstrated using laser light of reduced spatial coherence. Coherence encoding, needed for transmission of images by this method, was realized with a grating interferometer or with the help of an acousto-optic deflector. We suggest a simple, practical holographic method of image transmission over a single multimode fiber or over a fiber bundle with coherent light, using filtering by holographic optical elements. Originally, this method was successfully tested for a single multimode fiber. In this research, we have modified the holographic method for transmission of laser-illuminated images over a commercially available fiber bundle (fiber endoscope, or fiberscope).

  16. Asynchronous transfer mode distribution network by use of an optoelectronic VLSI switching chip.

    PubMed

    Lentine, A L; Reiley, D J; Novotny, R A; Morrison, R L; Sasian, J M; Beckman, M G; Buchholz, D B; Hinterlong, S J; Cloonan, T J; Richards, G W; McCormick, F B

    1997-03-10

    We describe a new optoelectronic switching system demonstration that implements part of the distribution fabric for a large asynchronous transfer mode (ATM) switch. The system uses a single optoelectronic VLSI modulator-based switching chip with more than 4000 optical input-outputs. The optical system images the input fibers from a two-dimensional fiber bundle onto this chip. A new optomechanical design allows the system to be mounted in a standard electronic equipment frame. A large section of the switch was operated as a 208-Mbits/s time-multiplexed space switch, which can serve as part of an ATM switch by use of an appropriate out-of-band controller. A larger section with 896 input light beams and 256 output beams was operated at 160 Mbits/s as a slowly reconfigurable space switch.

  17. Automated search and retrieval of information from imaged documents using optical correlation techniques

    NASA Astrophysics Data System (ADS)

    Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.

    1999-10-01

    Litton PRC and Litton Data Systems Division are developing a system, the Imaged Document Optical Correlation and Conversion System (IDOCCS), to provide a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides the search and retrieval of information from imaged documents. IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited; e.g., imaged documents containing an agency's seal or logo can be singled out. In this paper, we present a description of IDOCCS as well as preliminary performance results and theoretical projections.
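
The digital analogue of the optical correlation at the heart of IDOCCS is matched filtering: cross-correlating a keyword or logo template with the page image and looking for peaks. A minimal FFT-based sketch, with a hypothetical glyph pasted onto a synthetic blank page:

```python
import numpy as np

def correlate2d_fft(image, template):
    """Full 2-D cross-correlation via FFT (matched filtering), the digital
    analogue of an optical correlator. Peaks mark candidate matches."""
    H, W = image.shape
    h, w = template.shape
    tz = template - template.mean()  # zero-mean so flat regions do not respond
    F = np.fft.rfft2(image, s=(H + h - 1, W + w - 1))
    T = np.fft.rfft2(tz[::-1, ::-1], s=(H + h - 1, W + w - 1))
    return np.fft.irfft2(F * T, s=(H + h - 1, W + w - 1))

# Hypothetical "page": blank except for a small glyph at row 30, column 50.
page = np.zeros((100, 120))
glyph = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], float)
page[30:33, 50:53] = glyph
corr = correlate2d_fft(page, glyph)
peak = np.unravel_index(np.argmax(corr), corr.shape)
```

In the full-correlation output the peak lands at (row + h - 1, col + w - 1), i.e. (32, 52) here; real document search adds normalization and thresholding over many templates.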

  18. Quantitative Imaging of Single Unstained Magnetotactic Bacteria by Coherent X-ray Diffraction Microscopy.

    PubMed

    Fan, Jiadong; Sun, Zhibin; Zhang, Jian; Huang, Qingjie; Yao, Shengkun; Zong, Yunbing; Kohmura, Yoshiki; Ishikawa, Tetsuya; Liu, Hong; Jiang, Huaidong

    2015-06-16

    Coherent diffraction microscopy provides a powerful lensless imaging method for obtaining a better understanding of microorganisms at the nanoscale. Here we demonstrated quantitative imaging of intact unstained magnetotactic bacteria using coherent X-ray diffraction microscopy combined with an iterative phase retrieval algorithm. Although the signal-to-noise ratio of the X-ray diffraction pattern from a single magnetotactic bacterium is weak due to the low scattering ability of biomaterials, an 18.6 nm half-period resolution of the reconstructed image was achieved by using a hybrid input-output phase retrieval algorithm. On the basis of the quantitative reconstructed images, the morphology and some intracellular structures, such as the nucleoid, poly-β-hydroxybutyrate granules, and magnetosomes, were identified, which were also confirmed by scanning electron microscopy and energy dispersive spectroscopy. Benefiting from the quantifiability of coherent diffraction imaging, for the first time to our knowledge, an average density of magnetotactic bacteria was calculated to be ∼1.19 g/cm³. This technique has a wide range of applications, especially in quantitative imaging of low-scattering biomaterials and multicomponent materials at nanoscale resolution. Combined with cryogenic techniques or X-ray free electron lasers, the method could image cells in a hydrated condition, which helps to maintain their natural structure.
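
The hybrid input-output algorithm named above alternates between enforcing the measured Fourier moduli and penalizing violations of object-space constraints. A minimal 1-D sketch with support and positivity constraints (the real reconstruction is 2-D, noisy, and far more involved):

```python
import numpy as np

def hio(magnitudes, support, iters=500, beta=0.9, seed=0):
    """1-D hybrid input-output phase retrieval sketch.
    magnitudes: measured |FFT| of the unknown object; support: boolean mask
    where the object may be nonzero. Returns the current object estimate."""
    rng = np.random.default_rng(seed)
    g = rng.random(len(magnitudes)) * support
    for _ in range(iters):
        G = np.fft.fft(g)
        Gp = magnitudes * np.exp(1j * np.angle(G))   # impose measured moduli
        gp = np.real(np.fft.ifft(Gp))                # back to object space
        violate = (~support) | (gp < 0)              # support + positivity
        g = np.where(violate, g - beta * gp, gp)     # HIO feedback update
    return g

# Hypothetical test object: a small positive bump inside a tight support.
n = 64
true = np.zeros(n)
true[5:12] = [1, 3, 5, 6, 5, 3, 1]
support = np.zeros(n, bool)
support[0:16] = True
est = hio(np.abs(np.fft.fft(true)), support)
```

Translation within the support and conjugate-flip ambiguities are inherent to phase retrieval, so success is judged on Fourier-modulus agreement rather than pointwise equality with the true object.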

  19. The Drizzling Cookbook

    NASA Astrophysics Data System (ADS)

    Gonzaga, S.; Biretta, J.; Wiggs, M. S.; Hsu, J. C.; Smith, T. E.; Bergeron, L.

    1998-12-01

    The drizzle software combines dithered images while preserving photometric accuracy, enhancing resolution, and removing geometric distortion. A recent upgrade also allows removal of cosmic rays from single images at each dither pointing. This document gives detailed examples illustrating drizzling procedures for six cases: WFPC2 observations of a deep field, a crowded field, a large galaxy, and a planetary nebula; STIS/CCD observations of an HDF-North field; and NICMOS/NIC2 observations of the Egg Nebula. Command scripts and input images for each example are available on the WFPC2 website. Users are encouraged to retrieve the data for the case that most closely resembles their own data and then practice and experiment with drizzling on the example.
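
Conceptually, drizzling maps each input pixel onto a finer output grid according to its dither offset and accumulates weighted counts. The sketch below shows the limiting "interlacing" case with point-like pixel drops; the real drizzle software additionally handles fractional pixel overlap (the pixfrac parameter), geometric distortion, and weight maps:

```python
import numpy as np

def interlace(images, offsets, factor=2):
    """Toy shift-and-add onto a finer grid: each input pixel is dropped as a
    point onto a `factor`-times finer grid at its dithered subpixel position.
    offsets are (dy, dx) in input-pixel units (0.5 = half-pixel dither)."""
    H, W = images[0].shape
    out = np.zeros((H * factor, W * factor))
    weight = np.zeros_like(out)
    for img, (dy, dx) in zip(images, offsets):
        ys = np.arange(H) * factor + round(dy * factor)
        xs = np.arange(W) * factor + round(dx * factor)
        yy, xx = np.meshgrid(ys.clip(0, H * factor - 1),
                             xs.clip(0, W * factor - 1), indexing='ij')
        np.add.at(out, (yy, xx), img)      # accumulate flux drops
        np.add.at(weight, (yy, xx), 1.0)   # and their weights
    return out / np.maximum(weight, 1)

# Two hypothetical dithers of a flat source, offset by half an input pixel.
a = np.ones((4, 4))
b = np.ones((4, 4))
fine = interlace([a, b], [(0.0, 0.0), (0.5, 0.5)])
```

With a half-pixel diagonal dither, the first exposure fills the even-even fine pixels and the second the odd-odd ones, illustrating how dithering recovers sub-pixel sampling.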

  20. Image-derived and arterial blood sampled input functions for quantitative PET imaging of the angiotensin II subtype 1 receptor in the kidney

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Tao; Tsui, Benjamin M. W.; Li, Xin

    Purpose: The radioligand (11)C-KR31173 has been introduced for positron emission tomography (PET) imaging of the angiotensin II subtype 1 receptor in the kidney in vivo. To study the biokinetics of (11)C-KR31173 with a compartmental model, the input function is needed. Collection and analysis of arterial blood samples are the established approach to obtain the input function, but they are not feasible in patients with renal diseases. The goal of this study was to develop a quantitative technique that can provide an accurate image-derived input function (ID-IF) to replace the conventional invasive arterial sampling, and to test the method in pigs with the goal of translation into human studies. Methods: The experimental animals were injected with (11)C-KR31173 and scanned up to 90 min with dynamic PET. Arterial blood samples were collected for the artery-derived input function (AD-IF) and used as a gold standard for the ID-IF. Before PET, magnetic resonance angiography of the kidneys was obtained to provide the anatomical information required for derivation of the recovery coefficients in the abdominal aorta, a requirement for partial volume correction of the ID-IF. Different image reconstruction methods, filtered back projection (FBP) and ordered subset expectation maximization (OS-EM), were investigated for the best trade-off between bias and variance of the ID-IF. The effects of kidney uptake on the quantitative accuracy of the ID-IF were also studied. Biological variables such as red blood cell binding and radioligand metabolism were also taken into consideration. A single blood sample was used for calibration in the later phase of the input function. Results: In the first 2 min after injection, the OS-EM based ID-IF was found to be biased, and the bias was found to be induced by the kidney uptake. No such bias was found with the FBP based image reconstruction method. However, the OS-EM based image reconstruction was found to reduce variance in the subsequent phase of the ID-IF. The combined use of FBP and OS-EM resulted in reduced bias and noise. After performing all the necessary corrections, the areas under the curves (AUCs) of the ID-IF were close to those of the AD-IF (average AUC ratio = 1 ± 0.08) during the early phase. When applied in a two-tissue-compartment kinetic model, the average difference between the estimated model parameters from the ID-IF and the AD-IF was 10%, which was within the error of the estimation method. Conclusions: The bias of radioligand concentration in the aorta from the OS-EM image reconstruction is significantly affected by radioligand uptake in the adjacent kidney and cannot be neglected for quantitative evaluation. With careful calibrations and corrections, the ID-IF derived from quantitative dynamic PET images can be used as the input function of the compartmental model to quantify the renal kinetics of (11)C-KR31173 in experimental animals; the authors intend to evaluate this method in future human studies.
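
The partial volume correction and single-sample calibration steps can be illustrated in a few lines (all numbers below are hypothetical; in the study the recovery coefficient is derived from MR angiography of the aorta):

```python
# Illustrative partial-volume correction and single-sample calibration for an
# image-derived input function. The recovery coefficient, activity values, and
# sampling times are invented for the example.
measured_idif = [0.0, 8.2, 14.1, 9.5, 5.0, 3.1, 2.4]  # kBq/mL from PET aorta ROI
recovery_coefficient = 0.62      # fraction of true activity recovered in the ROI
pv_corrected = [v / recovery_coefficient for v in measured_idif]

# One late arterial blood sample pins the tail of the curve to ground truth.
late_index, late_blood_sample = 6, 4.1               # kBq/mL, drawn sample
scale = late_blood_sample / pv_corrected[late_index]
calibrated_idif = [v * scale for v in pv_corrected]
```

After calibration the curve agrees with the blood sample at the sampled time point by construction; the shape of the earlier phase is carried by the image data.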

  1. Fourier domain image fusion for differential X-ray phase-contrast breast imaging.

    PubMed

    Coello, Eduardo; Sperl, Jonathan I; Bequé, Dirk; Benz, Tobias; Scherer, Kai; Herzen, Julia; Sztrókay-Gaul, Anikó; Hellerhoff, Karin; Pfeiffer, Franz; Cozzini, Cristina; Grandl, Susanne

    2017-04-01

    X-ray phase-contrast (XPC) imaging is a novel technology with great potential for applications in clinical practice, with breast imaging being of special interest. This work introduces an intuitive methodology to combine and visualize relevant diagnostic features, present in the X-ray attenuation, phase-shift, and scattering information retrieved in XPC imaging, using a Fourier domain fusion algorithm. The method makes it possible to present complementary information from the three acquired signals in one single image, minimizing the noise component and maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement in diagnostic features, details, and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and the fused image. This assessment validated that the combination of all the relevant diagnostic features contained in the XPC images was present in the fused image as well.

  2. Feature maps driven no-reference image quality prediction of authentically distorted images

    NASA Astrophysics Data System (ADS)

    Ghadiyaram, Deepti; Bovik, Alan C.

    2015-03-01

    Current blind image quality prediction models rely on benchmark databases composed of singly and synthetically distorted images, thereby learning image features that are only adequate for predicting human-perceived visual quality on such inauthentic distortions. However, real-world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real-world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database, and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state of the art.
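
A representative example of the model-based statistical features in this line of work is the mean-subtracted contrast-normalized (MSCN) coefficient map. The sketch below uses a simple 3x3 box neighborhood rather than the Gaussian window of the BRISQUE-family models, so it is an approximation of the idea, not the authors' exact feature:

```python
import numpy as np

def mscn(image, eps=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients with a 3x3 box
    neighborhood (BRISQUE-family models use a Gaussian-weighted window)."""
    p = np.pad(image.astype(float), 1, mode='edge')
    shifts = [p[i:i + image.shape[0], j:j + image.shape[1]]
              for i in range(3) for j in range(3)]
    stack = np.stack(shifts)            # the 9 shifted views of the image
    mu = stack.mean(axis=0)             # local mean
    sigma = stack.std(axis=0)           # local contrast
    return (image - mu) / (sigma + eps)

rng = np.random.default_rng(0)
img = rng.normal(128, 20, (64, 64))     # stand-in for a luminance image
coeffs = mscn(img)
```

For pristine natural images the MSCN coefficients are famously close to zero-mean Gaussian; deviations of their empirical distribution from that shape are what quality models of this family exploit.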

  3. Optimization of input parameters of acoustic-transfection for the intracellular delivery of macromolecules using FRET-based biosensors

    NASA Astrophysics Data System (ADS)

    Yoon, Sangpil; Wang, Yingxiao; Shung, K. K.

    2016-03-01

    We have developed, for the first time, an acoustic-transfection technique by integrating a high-frequency ultrasonic transducer and a fluorescence microscope. High-frequency ultrasound with a center frequency over 150 MHz can focus the acoustic field into a confined area 10 μm or less in diameter. This focusing capability was used to perturb the lipid bilayer of the cell membrane to induce intracellular delivery of macromolecules. Single-cell-level imaging was performed to investigate the behavior of a targeted single cell after acoustic-transfection. A FRET-based Ca2+ biosensor was used to monitor the intracellular concentration of Ca2+ after acoustic-transfection, and the fluorescence intensity of propidium iodide (PI) was used to observe the influx of PI molecules. We changed peak-to-peak voltages and pulse duration to optimize the input parameters of an acoustic pulse. Input parameters that induce strong perturbations of the cell membrane were found, and size-dependent intracellular delivery of macromolecules was explored. To increase the amount of delivered molecules by acoustic-transfection, we applied several acoustic pulses, and the intensity of PI fluorescence increased stepwise. Finally, the optimized input parameters of the acoustic-transfection system were used to deliver pMax-E2F1 plasmid into HeLa cells, and GFP expression was confirmed 24 hours after intracellular delivery.

  4. Effective seat-to-head transmissibility in whole-body vibration: Effects of posture and arm position

    NASA Astrophysics Data System (ADS)

    Rahmatalla, Salam; DeShaw, Jonathan

    2011-12-01

    Seat-to-head transmissibility is a biomechanical measure that has been widely used for many decades to evaluate seat dynamics and human response to vibration. Traditionally, transmissibility has been used to correlate single-input or multiple-input with single-output motion; it has not been effectively used for multiple-input and multiple-output scenarios due to the complexity of dealing with the coupled motions caused by the cross-axis effect. This work presents a novel approach to use transmissibility effectively for single- and multiple-input and multiple-output whole-body vibration. In this regard, the full transmissibility matrix is transformed into a single graph, such as those for single-input and single-output motions. Singular value decomposition and maximum distortion energy theory were used to achieve the latter goal. Seat-to-head transmissibility matrices for single-input/multiple-output in the fore-aft direction, single-input/multiple-output in the vertical direction, and multiple-input/multiple-output directions are investigated in this work. A total of ten subjects participated in this study. Discrete frequencies of 0.5-16 Hz were used for the fore-aft direction using supported and unsupported back postures. Random ride files from a dozer machine were used for the vertical and multiple-axis scenarios considering two arm postures: using the armrests or grasping the steering wheel. For single-input/multiple-output, the results showed that the proposed method was very effective in showing the frequencies where the transmissibility is most sensitive for the two sitting postures and two arm positions. For multiple-input/multiple-output, the results showed that the proposed effective transmissibility indicated higher values for the armrest-supported posture than for the steering-wheel-supported posture.
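
One way to collapse a frequency-by-frequency transmissibility matrix into a single curve, in the spirit of the singular-value approach described above, is to take the largest singular value at each frequency. The matrices and frequencies below are hypothetical, and the paper's exact scalarization (via maximum distortion energy theory) may differ:

```python
import numpy as np

# Hypothetical 3x3 (x, y, z) transmissibility matrices at a few frequencies;
# off-diagonal entries represent cross-axis coupling.
freqs = [0.5, 1.0, 2.0, 4.0]
T = {
    0.5: np.array([[1.0, 0.1, 0.0], [0.1, 1.1, 0.1], [0.0, 0.1, 1.2]]),
    1.0: np.array([[1.3, 0.2, 0.1], [0.2, 1.5, 0.2], [0.1, 0.2, 1.8]]),
    2.0: np.array([[1.1, 0.3, 0.2], [0.3, 1.2, 0.3], [0.2, 0.3, 1.4]]),
    4.0: np.array([[0.8, 0.2, 0.1], [0.2, 0.7, 0.2], [0.1, 0.2, 0.9]]),
}
# Largest singular value = worst-case amplification over all input directions.
effective = {f: np.linalg.svd(T[f], compute_uv=False)[0] for f in freqs}
```

Plotting `effective` against frequency yields a single curve that can be read like a classical single-input/single-output transmissibility plot, with the peak marking the most amplified frequency.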

  5. Single-photon-level quantum image memory based on cold atomic ensembles

    PubMed Central

    Ding, Dong-Sheng; Zhou, Zhi-Yuan; Shi, Bao-Sen; Guo, Guang-Can

    2013-01-01

    A quantum memory is a key component for quantum networks, which will enable the distribution of quantum information. Its successful development requires storage of single-photon light. Encoding photons with spatial shape through higher-dimensional states significantly increases their information-carrying capability and network capacity. However, constructing such quantum memories is challenging. Here we report the first experimental realization of storing a true single photon carrying orbital angular momentum via electromagnetically induced transparency in a cold atomic ensemble. Our experiments show that the non-classical pair correlation between the trigger photon and the retrieved photon is retained, and the spatial structures of the input and retrieved photons exhibit strong similarity. More importantly, we demonstrate that single-photon coherence is preserved during storage. The ability to store spatial structure at the single-photon level opens the possibility for high-dimensional quantum memories. PMID:24084711

  6. Pinched-flow hydrodynamic stretching of single-cells.

    PubMed

    Dudani, Jaideep S; Gossett, Daniel R; Tse, Henry T K; Di Carlo, Dino

    2013-09-21

    Reorganization of cytoskeletal networks, condensation and decondensation of chromatin, and other whole-cell structural changes often accompany changes in cell state and can reflect underlying disease processes. As such, the observable mechanical properties, or mechanophenotype, which is closely linked to intracellular architecture, can be a useful label-free biomarker of disease. In order to make use of this biomarker, a tool to measure cell mechanical properties should accurately characterize clinical specimens that consist of heterogeneous cell populations or contain small diseased subpopulations. Because of the heterogeneity and potential for rare populations in clinical samples, single-cell, high-throughput assays are ideally suited. Hydrodynamic stretching has recently emerged as a powerful method for carrying out mechanical phenotyping. Importantly, this method operates independently of molecular probes, reducing cost and sample preparation time, and yields information-rich signatures of cell populations through significant image analysis automation, promoting more widespread adoption. In this work, we present an alternative mode of hydrodynamic stretching where inertially focused cells are squeezed in flow by perpendicular high-speed pinch flows that are extracted from the single input cell suspension. The pinched-flow stretching method reveals expected differences in cell deformability in two model systems. Furthermore, hydraulic circuit design is used to tune stretching forces and carry out multiple stretching modes (pinched-flow and extensional) in the same microfluidic channel with a single fluid input. The ability to create a self-sheathing flow from a single input solution should have general utility for other cytometry systems, and the pinched-flow design enables an order of magnitude higher throughput (65,000 cells/s) compared to our previously reported deformability cytometry method, which will be especially useful for identification of rare cell populations in clinical body fluids in the future.

  7. CHRONIS: an animal chromosome image database.

    PubMed

    Toyabe, Shin-Ichi; Akazawa, Kouhei; Fukushi, Daisuke; Fukui, Kiichi; Ushiki, Tatsuo

    2005-01-01

    We have constructed a database system named CHRONIS (CHROmosome and Nano-Information System) to collect images of animal chromosomes and related nanotechnological information. CHRONIS enables rapid sharing of information on chromosome research among cell biologists and researchers in other fields via the Internet. CHRONIS is also intended to serve as a liaison tool for researchers who work in different centers. The image database contains more than 3,000 color microscopic images, including karyotypic images obtained from more than 1,000 species of animals. Researchers can browse the contents of the database through a standard Web interface at the following URL: http://chromosome.med.niigata-u.ac.jp/chronis/servlet/chronisservlet. The system enables users to input new images into the database, to locate images of interest by keyword searches, and to display the images with detailed information. CHRONIS has a wide range of applications, such as searching for appropriate probes for fluorescent in situ hybridization, comparing various kinds of microscopic images of a single species, and finding researchers working in the same field of interest.

  8. Super-resolution method for face recognition using nonlinear mappings on coherent features.

    PubMed

    Huang, Hua; He, Huiting

    2011-01-01

    The low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition rates with nearest neighbor (NN) classifiers for a single LR face image. Canonical correlation analysis is applied to establish coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. Then, a nonlinear mapping between HR/LR features can be built by radial basis functions (RBFs), with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can efficiently and accurately compute super-resolved coherent features corresponding to an input LR image according to the trained RBF model. Face identity can then be obtained by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the Facial Recognition Technology, University of Manchester Institute of Science and Technology, and Olivetti Research Laboratory databases show that the proposed method outperforms state-of-the-art face recognition algorithms for a single LR image in terms of both recognition rate and robustness to facial variations of pose and expression.

  9. Optimization of DSC MRI Echo Times for CBV Measurements Using Error Analysis in a Pilot Study of High-Grade Gliomas.

    PubMed

    Bell, L C; Does, M D; Stokes, A M; Baxter, L C; Schmainda, K M; Dueck, A C; Quarles, C C

    2017-09-01

    The optimal TE must be calculated to minimize the variance in CBV measurements made with DSC MR imaging. Simulations can be used to determine the influence of the TE on CBV, but they may not adequately recapitulate the in vivo heterogeneity of precontrast T2*, contrast agent kinetics, and the biophysical basis of contrast agent-induced T2* changes. The purpose of this study was to combine quantitative multiecho DSC MRI T2* time curves with error analysis in order to compute the optimal TE for a traditional single-echo acquisition. Eleven subjects with high-grade gliomas were scanned at 3T with a dual-echo DSC MR imaging sequence to quantify contrast agent-induced T2* changes in this retrospective study. Optimized TEs were calculated with propagation-of-error analysis for high-grade glial tumors, normal-appearing white matter, and arterial input function estimation. The optimal TE is a weighted average of the T2* values that occur as a contrast agent bolus traverses a voxel. The mean optimal TEs were 30.0 ± 7.4 ms for high-grade glial tumors, 36.3 ± 4.6 ms for normal-appearing white matter, and 11.8 ± 1.4 ms for arterial input function estimation (repeated-measures ANOVA, P < .001). Greater heterogeneity was observed in the optimal TE values for high-grade gliomas, and the differences among the mean values of the 3 ROIs were statistically significant. The optimal TE for arterial input function estimation is much shorter; this finding implies that quantitative DSC MR imaging acquisitions would benefit from multiecho acquisitions. In the case of a single-echo acquisition, the optimal TE prescribed should be 30-35 ms (without a preload) and 20-30 ms (with a standard full-dose preload).
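
The stated relationship, optimal TE as a weighted average of the T2* values during bolus passage, can be illustrated directly. The T2* curve and the uniform weighting below are assumptions for illustration; the paper derives its weights from propagation-of-error analysis:

```python
# Hypothetical T2*(t) samples (ms) as a contrast bolus transits a voxel:
# baseline ~50 ms, dipping near 18 ms at peak concentration, then recovering.
t2star_ms = [50, 45, 30, 20, 18, 25, 38, 47]
weights = [1.0] * len(t2star_ms)   # assumed uniform-in-time weighting

# Weighted average of the T2* values seen during the bolus passage.
te_opt = sum(w * x for w, x in zip(weights, t2star_ms)) / sum(weights)
```

With these made-up numbers the weighted average lands in the low-to-mid 30s of milliseconds, consistent in rough magnitude with the 30-35 ms single-echo recommendation quoted above.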

  10. Fast template matching with polynomials.

    PubMed

    Omachi, Shinichiro; Omachi, Masako

    2007-08-01

    Template matching is widely used for many applications in image and signal processing. This paper proposes a novel template matching algorithm, called algebraic template matching. Given a template and an input image, algebraic template matching efficiently calculates similarities between the template and the partial images of the input image, for various widths and heights. The partial image most similar to the template image is detected from the input image for any location, width, and height. In the proposed algorithm, a polynomial that approximates the template image is used to match the input image instead of the template image itself. The proposed algorithm is especially effective when the width and height of the template image differ from those of the partial image to be matched. An algorithm using the Legendre polynomial is proposed for efficient approximation of the template image. This algorithm not only reduces computational costs but also improves the quality of the approximated image. It is shown theoretically and experimentally that the computational cost of the proposed algorithm is much smaller than that of existing methods.
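
The key trick, approximating the template with Legendre polynomials so it can be re-evaluated at any size, can be sketched in 1-D (the paper operates on 2-D templates; the bump profile below is made up):

```python
import numpy as np

# A hypothetical 1-D template profile (e.g. one row of a glyph template).
template = np.array([0.0, 0.2, 0.9, 1.0, 0.9, 0.2, 0.0])
x = np.linspace(-1, 1, len(template))

# Fit a low-order Legendre expansion to the template samples.
coef = np.polynomial.legendre.legfit(x, template, deg=4)

# The same coefficients re-render the template at a different width,
# which is what makes scale-varying matching cheap.
x_wide = np.linspace(-1, 1, 15)
resampled = np.polynomial.legendre.legval(x_wide, coef)
fit_err = np.max(np.abs(np.polynomial.legendre.legval(x, coef) - template))
```

Once the template lives in coefficient space, similarity against a candidate window of any width reduces to operations on a handful of coefficients instead of per-pixel comparisons at every scale.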

  11. Evaluating total inorganic nitrogen in coastal waters through fusion of multi-temporal RADARSAT-2 and optical imagery using random forest algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Meiling; Liu, Xiangnan; Li, Jin; Ding, Chao; Jiang, Jiale

    2014-12-01

    Satellites routinely provide frequent, large-scale, near-surface views of many oceanographic variables pertinent to plankton ecology. However, the nutrient fertility of water can be challenging to detect accurately using remote sensing technology. This research explored an approach to estimate nutrient fertility in coastal waters through the fusion of synthetic aperture radar (SAR) images and optical images using the random forest (RF) algorithm. The estimation of total inorganic nitrogen (TIN) in the Hong Kong Sea, China, was used as a case study. In March 2009 and in May and August 2010, a sequence of multi-temporal in situ data, CCD images from China's HJ-1 satellite, and RADARSAT-2 images were acquired. Four sensitive parameters were selected as input variables to evaluate TIN: single-band reflectance, a normalized difference spectral index (NDSI), and HV and VH polarizations. The RF algorithm was used to merge the different input variables from the SAR and optical imagery to generate a new dataset (i.e., the TIN outputs). The results showed the temporal-spatial distribution of TIN. The TIN values decreased from coastal waters to the open water areas, and TIN values in the northeast area were higher than those found in the southwest region of the study area. The maximum TIN values occurred in May. Additionally, the accuracy of TIN estimation improved significantly when the SAR and optical data were used in combination rather than either data type alone. This study suggests that this method of estimating nutrient fertility in coastal waters by effectively fusing data from multiple sensors is very promising.

  12. Quantitative myocardial perfusion from static cardiac and dynamic arterial CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Branch, Kelley R.; Alessio, Adam M.

    2018-05-01

    Quantitative myocardial blood flow (MBF) estimation by dynamic contrast enhanced cardiac computed tomography (CT) requires multi-frame acquisition of contrast transit through the blood pool and myocardium to inform the arterial input and tissue response functions. Both the input and the tissue response functions for the entire myocardium are sampled with each acquisition. However, the long breath holds and frequent sampling can result in significant motion artifacts and relatively high radiation dose. To address these limitations, we propose and evaluate a new static cardiac and dynamic arterial (SCDA) quantitative MBF approach where (1) the input function is well sampled using either prediction from pre-scan timing bolus data or measured from dynamic thin slice ‘bolus tracking’ acquisitions, and (2) the whole-heart tissue response data is limited to one contrast enhanced CT acquisition. A perfusion model uses the dynamic arterial input function to generate a family of possible myocardial contrast enhancement curves corresponding to a range of MBF values. Combined with the timing of the single whole-heart acquisition, these curves generate a lookup table relating myocardial contrast enhancement to quantitative MBF. We tested the SCDA approach in 28 patients that underwent a full dynamic CT protocol both at rest and vasodilator stress conditions. Using measured input function plus single (enhanced CT only) or plus double (enhanced and contrast free baseline CT’s) myocardial acquisitions yielded MBF estimates with root mean square (RMS) error of 1.2 ml/min/g and 0.35 ml/min/g, and radiation dose reductions of 90% and 83%, respectively. The prediction of the input function based on timing bolus data and the static acquisition had an RMS error compared to the measured input function of 26.0% which led to MBF estimation errors greater than threefold higher than using the measured input function. 
SCDA presents a new, simplified approach for quantitative perfusion imaging with an acquisition strategy offering substantial radiation dose and computational complexity savings over dynamic CT.
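
The lookup-table idea at the core of SCDA can be sketched in a few lines. The sketch below is not the authors' perfusion model: the synthetic input function, the one-compartment update, the washout constant and the flow grid are all illustrative stand-ins.

```python
import math

def aif(t):
    """Synthetic arterial input function (gamma-variate shape; units illustrative)."""
    return 0.0 if t <= 0 else 400.0 * (t / 8.0) ** 2 * math.exp(-(t - 8.0) / 4.0)

def enhancement(flow, t_acq, dt=0.1):
    """Simulated myocardial enhancement at acquisition time t_acq for a given flow,
    from a toy one-compartment model (uptake proportional to flow, fixed washout)."""
    c, t = 0.0, 0.0
    while t < t_acq:
        c += dt * (flow * aif(t) - 0.05 * c)
        t += dt
    return c

def build_lookup(t_acq, flows):
    """SCDA-style lookup table: simulated enhancement -> candidate flow value."""
    return [(enhancement(f, t_acq), f) for f in flows]

def estimate_flow(measured, table):
    """Invert the table: pick the flow whose simulated enhancement is nearest."""
    return min(table, key=lambda ef: abs(ef[0] - measured))[1]

flows = [0.005 * i for i in range(1, 101)]        # candidate flow grid
table = build_lookup(t_acq=20.0, flows=flows)
measured = enhancement(0.20, 20.0)                # pretend this came from the static CT
print(estimate_flow(measured, table))
```

Because enhancement is monotone in flow here, the table inversion is unambiguous; the real method builds the curve family from the measured or predicted patient input function instead.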

  13. Quantification of 11C-Laniquidar Kinetics in the Brain.

    PubMed

    Froklage, Femke E; Boellaard, Ronald; Bakker, Esther; Hendrikse, N Harry; Reijneveld, Jaap C; Schuit, Robert C; Windhorst, Albert D; Schober, Patrick; van Berckel, Bart N M; Lammertsma, Adriaan A; Postnov, Andrey

    2015-11-01

    Overexpression of the multidrug efflux transport P-glycoprotein may play an important role in pharmacoresistance. (11)C-laniquidar is a newly developed tracer of P-glycoprotein expression. The aim of this study was to develop a pharmacokinetic model for quantification of (11)C-laniquidar uptake and to assess its test-retest variability. Two (test-retest) dynamic (11)C-laniquidar PET scans were obtained in 8 healthy subjects. Plasma input functions were obtained using online arterial blood sampling with metabolite corrections derived from manual samples. Coregistered T1 MR images were used for region-of-interest definition. Time-activity curves were analyzed using various plasma input compartmental models. (11)C-laniquidar was metabolized rapidly, with a parent plasma fraction of 50% at 10 min after tracer injection. In addition, the first-pass extraction of (11)C-laniquidar was low. (11)C-laniquidar time-activity curves were best fitted to an irreversible single-tissue compartment (1T1K) model using conventional models. Nevertheless, significantly better fits were obtained using 2 parallel single-tissue compartments, one for parent tracer and the other for labeled metabolites (dual-input model). Robust K1 results were also obtained by fitting the first 5 min of PET data to the 1T1K model, at least when 60-min plasma input data were used. For both models, the test-retest variability of (11)C-laniquidar rate constant for transfer from arterial plasma to tissue (K1) was approximately 19%. The accurate quantification of (11)C-laniquidar kinetics in the brain is hampered by its fast metabolism and the likelihood that labeled metabolites enter the brain. Best fits for the entire 60 min of data were obtained using a dual-input model, accounting for uptake of (11)C-laniquidar and its labeled metabolites. Alternatively, K1 could be obtained from a 5-min scan using a standard 1T1K model. In both cases, the test-retest variability of K1 was approximately 19%. 
© 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
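
The 1T1K fallback described above is linear in K1, which makes the fit trivial. A minimal sketch, with a hypothetical plasma curve and noise-free synthetic tissue data rather than the paper's measured input functions:

```python
def one_tissue_irreversible(k1, cp, dt):
    """1T1K model: with irreversible uptake, tissue activity is K1 times the
    running integral of the plasma input curve cp (sampled every dt minutes)."""
    integral, tissue = 0.0, []
    for c in cp:
        integral += c * dt
        tissue.append(k1 * integral)
    return tissue

def fit_k1(tissue, cp, dt):
    """Closed-form least-squares K1: the model is linear in K1, so no
    iterative optimizer is needed."""
    basis = one_tissue_irreversible(1.0, cp, dt)
    return sum(b * y for b, y in zip(basis, tissue)) / sum(b * b for b in basis)

# Hypothetical plasma input (rapidly falling, mimicking fast metabolism) and a
# synthetic tissue curve generated with K1 = 0.1.
dt = 0.25
cp = [100.0 * 0.8 ** i for i in range(40)]
tissue = one_tissue_irreversible(0.1, cp, dt)
print(round(fit_k1(tissue, cp, dt), 3))   # recovers 0.1
```

The dual-input model in the paper adds a second parallel compartment for labeled metabolites; this sketch covers only the single-input 1T1K case.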

  14. A low-power CMOS trans-impedance amplifier for FM/cw ladar imaging system

    NASA Astrophysics Data System (ADS)

    Hu, Kai; Zhao, Yi-qiang; Sheng, Yun; Zhao, Hong-liang; Yu, Hai-xia

    2013-09-01

A scannerless ladar imaging system based on a unique frequency modulation/continuous wave (FM/cw) technique is able to capture the entire target environment, using a focal plane array to construct a 3D picture of the target. This paper presents a low-power trans-impedance amplifier (TIA) designed and implemented in 0.18 μm CMOS technology, which is used in the FM/cw imaging ladar with a 64×64 metal-semiconductor-metal (MSM) self-mixing detector array. The input stage of the operational amplifier (op amp) in the TIA is realized with a folded cascode structure to achieve large open-loop gain and low offset. The simulation and test results of the TIA with MSM detectors indicate that the single-ended trans-impedance gain is beyond 100 kΩ, and the -3 dB bandwidth of the op amp is beyond 60 MHz. The input common mode voltage ranges from 0.2 V to 1.5 V, and the power dissipation is reduced to 1.8 mW with a supply voltage of 3.3 V. The performance test results show that the TIA is a suitable preamplifier for the read-out integrated circuit (ROIC) in the FM/cw scannerless ladar imaging system.

  15. Asic developments for radiation imaging applications: The medipix and timepix family

    NASA Astrophysics Data System (ADS)

    Ballabriga, Rafael; Campbell, Michael; Llopart, Xavier

    2018-01-01

Hybrid pixel detectors were developed to meet the requirements for tracking in the inner layers of the LHC experiments. With low input capacitance per channel (10-100 fF), it is relatively straightforward to design pulse-processing readout electronics with input-referred noise of ∼100 e- rms and pulse shaping times consistent with tagging of events to a single LHC bunch crossing, providing clean 'images' of the ionising tracks generated. In the Medipix Collaborations the same concept has been adapted to provide practically noise-hit-free imaging in a wide range of applications. This paper reports on the development of three generations of readout ASICs. Two distinct streams of development can be identified: the Medipix ASICs, which integrate data from multiple hits on a pixel and provide the images in the form of frames, and the Timepix ASICs, which aim to send as much information as possible about individual interactions off-chip for further processing. A notable aspect of these devices has been their numerous successful applications, thanks to a large and active community of developers and users. That process has in turn enabled new detector developments for High Energy Physics. This paper reviews the ASICs themselves and details some of the many applications.

  16. WiseView: Visualizing motion and variability of faint WISE sources

    NASA Astrophysics Data System (ADS)

    Caselden, Dan; Westin, Paul, III; Meisner, Aaron; Kuchner, Marc; Colin, Guillaume

    2018-06-01

    WiseView renders image blinks of Wide-field Infrared Survey Explorer (WISE) coadds spanning a multi-year time baseline in a browser. The software allows for easy visual identification of motion and variability for sources far beyond the single-frame detection limit, a key threshold not surmounted by many studies. WiseView transparently gathers small image cutouts drawn from many terabytes of unWISE coadds, facilitating access to this large and unique dataset. Users need only input the coordinates of interest and can interactively tune parameters including the image stretch, colormap and blink rate. WiseView was developed in the context of the Backyard Worlds: Planet 9 citizen science project, and has enabled hundreds of brown dwarf candidate discoveries by citizen scientists and professional astronomers.

  17. Deep supervised dictionary learning for no-reference image quality assessment

    NASA Astrophysics Data System (ADS)

    Huang, Yuge; Liu, Xuesong; Tian, Xiang; Zhou, Fan; Chen, Yaowu; Jiang, Rongxin

    2018-03-01

We propose a deep convolutional neural network (CNN) for general no-reference image quality assessment (NR-IQA), i.e., accurate prediction of image quality without a reference image. The proposed model consists of three components: a local feature extractor that is a fully convolutional network, an encoding module with an inherent dictionary that aggregates local features to output a fixed-length global quality-aware image representation, and a regression module that maps the representation to an image quality score. Our model can be trained in an end-to-end manner, and all of the parameters, including the weights of the convolutional layers, the dictionary, and the regression weights, are simultaneously learned from the loss function. In addition, the model can predict quality scores for input images of arbitrary sizes in a single step. We tested our method on commonly used image quality databases and showed that its performance is comparable with that of state-of-the-art general-purpose NR-IQA algorithms.

  18. Remote sensing image segmentation based on Hadoop cloud platform

    NASA Astrophysics Data System (ADS)

    Li, Jie; Zhu, Lingling; Cao, Fubin

    2018-01-01

To address the slow speed and poor real-time performance of remote sensing image segmentation, this paper studies a segmentation method based on the Hadoop platform. Building on an analysis of the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, this paper proposes an image segmentation method that combines OpenCV with the Hadoop cloud platform. First, the MapReduce image processing model for the Hadoop cloud platform is designed: the image input and output formats are customized and the splitting method for the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, a segmentation experiment is carried out on a remote sensing image and compared against a MATLAB implementation of the Mean Shift algorithm applied to the same image. The experimental results show that, while maintaining good segmentation quality, segmentation on the Hadoop cloud platform is much faster than single-machine MATLAB segmentation, and the effectiveness of the segmentation is also greatly improved.
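
The Mean Shift step can be illustrated independently of the Hadoop machinery. A toy one-dimensional sketch on gray levels (the real implementation operates on OpenCV images inside MapReduce tasks; the bandwidth and pixel values here are made up):

```python
def mean_shift_1d(values, bandwidth=20.0, iters=30):
    """Toy mean-shift on pixel gray levels: each value hill-climbs to the mode
    of its local density using a flat kernel. In the MapReduce setting, each
    mapper would run this on its own image split and a reducer would merge modes."""
    modes = []
    for x in values:
        for _ in range(iters):
            window = [v for v in values if abs(v - x) <= bandwidth]
            x = sum(window) / len(window)   # shift to the local mean
        modes.append(round(x, 1))
    return modes

pixels = [10, 12, 14, 200, 202, 204]        # two well-separated gray-level clusters
print(mean_shift_1d(pixels))                # [12.0, 12.0, 12.0, 202.0, 202.0, 202.0]
```

Pixels converging to the same mode belong to the same segment, which is the grouping principle the cluster-based segmentation relies on.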

  19. Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.

    PubMed

    Choi, Jae-Seok; Kim, Munchurl

    2017-03-01

Super-resolution (SR) has become more vital because of its capability to generate high-quality ultra-high-definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to implement for up-scaling of full-high-definition input images into UHD-resolution images. Nevertheless, our previous super-interpolation (SI) method showed a good compromise between peak signal-to-noise ratio (PSNR) performance and computational complexity. However, since SI utilizes only simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method, which inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM). Thus, our new SR method is called GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Next, based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch using a global regressor. The local linear mappings are learned cluster-wise in our off-line training phase. The main contribution of this paper is as follows: previously, linear-mapping-based conventional SR methods, including SI, applied only one simple yet coarse linear mapping to each patch to reconstruct its HR version. In contrast, for each LR input patch, our GLM-SI is the first to apply a combination of multiple local linear mappings, where each local linear mapping is found according to the local properties of the current LR patch. Therefore, it can better approximate nonlinear LR-to-HR mappings for HR patches with complex texture.
Experimental results show that the proposed GLM-SI method outperforms most of the state-of-the-art methods, and shows comparable PSNR performance with much lower computational complexity when compared with a super-resolution method based on convolutional neural nets (SRCNN15). Compared with the previous SI method, which is limited to a scale factor of 2, GLM-SI achieves an average PSNR gain of 0.79 dB and can also be used for scale factors of 3 or higher.
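
The core GLM-SI idea, multiple local linear mappings regressed into one result by global weights, can be sketched on a toy patch. The mappings and weights below are invented for illustration; in the paper both are learned cluster-wise offline, and there are 25 candidates rather than 3:

```python
# Hypothetical local linear mappings (gain, offset), one per subpatch cluster,
# and hypothetical global regression weights.
local_maps = [(2.0, 0.0), (1.5, 10.0), (2.5, -5.0)]
global_weights = [0.5, 0.3, 0.2]

def glm_si_patch(lr_patch):
    """Apply every local linear mapping to the LR patch, then regress the
    candidate HR patches into one final HR patch with the global weights."""
    candidates = [[a * p + b for p in lr_patch] for a, b in local_maps]
    return [sum(w * c[i] for w, c in zip(global_weights, candidates))
            for i in range(len(lr_patch))]

print(glm_si_patch([10.0, 20.0]))   # [21.5, 41.0]
```

The weighted combination of several locally adapted linear maps is what lets the scheme approximate a nonlinear LR-to-HR mapping while keeping per-patch cost low.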

  20. High resolution OCT image generation using super resolution via sparse representation

    NASA Astrophysics Data System (ADS)

    Asif, Muhammad; Akram, Muhammad Usman; Hassan, Taimur; Shaukat, Arslan; Waqar, Razi

    2017-02-01

In this paper we propose a technique for obtaining a high resolution (HR) image from a single low resolution (LR) image, using a jointly learned dictionary, building on research into image statistics. That research suggests that, with an appropriate choice of an over-complete dictionary, image patches can be well represented as a sparse linear combination of its atoms. Medical imaging for clinical analysis and medical intervention is used to create visual representations of the interior of the body, as well as of the function of some organs or tissues (physiology). A number of medical imaging techniques are in use, such as MRI, CT scan, X-rays and Optical Coherence Tomography (OCT). OCT is one of the newer technologies in medical imaging; one of its uses in ophthalmology is the analysis of choroidal thickness in healthy and diseased eyes, in conditions such as age-related macular degeneration, central serous chorioretinopathy, diabetic retinopathy and inherited retinal dystrophies. We propose a technique for enhancing OCT images so that particular diseases can be identified and analyzed more clearly. Our method uses dictionary learning to generate a high resolution image from a single input LR image. We train two joint dictionaries, one with OCT images and the second with multiple different natural images, and compare the results with a previous SR technique. With both dictionaries, the proposed method produces HR images of superior quality compared with the earlier SR method. The proposed technique is particularly effective for noisy OCT images, producing up-sampled and enhanced OCT images.

  1. Improving the mapping of crop types in the Midwestern U.S. by fusing Landsat and MODIS satellite data

    NASA Astrophysics Data System (ADS)

    Zhu, Likai; Radeloff, Volker C.; Ives, Anthony R.

    2017-06-01

Mapping crop types is of great importance for assessing agricultural production, land-use patterns, and the environmental effects of agriculture. Indeed, both the radiometric and the spatial resolution of Landsat's sensors are well suited to cropland monitoring. However, accurate mapping of crop types requires frequent cloud-free images during the growing season, which are often not available, and this raises the question of whether Landsat data can be combined with data from other satellites. Here, our goal is to evaluate to what degree fusing Landsat with MODIS Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) data can improve crop-type classification. Choosing either one or two images from all cloud-free Landsat observations available for the Arlington Agricultural Research Station area in Wisconsin from 2010 to 2014, we generated 87 combinations of images, and used each combination as input to the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) algorithm to predict Landsat-like images at the nominal dates of each 8-day MODIS NBAR product. Both the original Landsat and STARFM-predicted images were then classified with a support vector machine (SVM), and we compared the classification errors of three scenarios: 1) classifying the one or two original Landsat images of each combination only, 2) classifying the one or two original Landsat images plus all STARFM-predicted images, and 3) classifying the one or two original Landsat images together with STARFM-predicted images for key dates. Our results indicated that using two Landsat images as input to STARFM did not significantly improve the STARFM predictions compared to using only one, and predictions using Landsat images between July and August as input were most accurate.
Including all STARFM-predicted images together with the Landsat images significantly increased average classification error by 4 percentage points (from 21% to 25%) compared to using only Landsat images. However, incorporating only STARFM-predicted images for key dates decreased average classification error by 2 percentage points (from 21% to 19%) compared to using only Landsat images. In particular, if only a single Landsat image was available, adding STARFM predictions for key dates significantly decreased the average classification error by 4 percentage points, from 30% to 26% (p < 0.05). We conclude that adding STARFM-predicted images can be effective for improving crop-type classification when only limited Landsat observations are available, but carefully selecting images from the full set of STARFM predictions is crucial. We developed an approach to identify the optimal subsets of all STARFM predictions, which offers an alternative method of feature selection for future research.

  2. Application of SEU imaging for analysis of device architecture using a 25 MeV/u 86Kr ion microbeam at HIRFL

    NASA Astrophysics Data System (ADS)

    Liu, Tianqi; Yang, Zhenlei; Guo, Jinlong; Du, Guanghua; Tong, Teng; Wang, Xiaohui; Su, Hong; Liu, Wenjing; Liu, Jiande; Wang, Bin; Ye, Bing; Liu, Jie

    2017-08-01

The heavy-ion imaging of single event upsets (SEU) in a flash-based field programmable gate array (FPGA) device was carried out for the first time at the Heavy Ion Research Facility in Lanzhou (HIRFL). Three shift register chains with separated input and output configurations in the device under test (DUT) were used to rapidly identify the corresponding logical area once an upset occurred. The logic units in the DUT were partly configured in order to distinguish the registers in SEU images. Based on these settings, the partial architecture of the shift register chains in the DUT was imaged by employing a 25 MeV/u 86Kr ion microbeam in air. The results showed that the physical distribution of registers in the DUT was highly consistent with their logical arrangement, as established by comparing the SEU image with the logic configuration in the scanned area.

  3. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    PubMed Central

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers collectively solve the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
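
The decomposition step can be sketched without any networking: split the image into sub-images with a small halo so neighbouring workers can reconcile labels across borders. A toy row-wise version (the actual implementation distributes 3D sub-images across machines; the halo width and tile size here are illustrative):

```python
def split_with_halo(image, tile_rows, halo=1):
    """Decompose an image (a list of pixel rows) into sub-images for distribution,
    each padded with halo rows so neighbouring processes can agree on labels at
    sub-image borders (a stand-in for the distributed-memory decomposition)."""
    tiles = []
    for top in range(0, len(image), tile_rows):
        lo = max(0, top - halo)                      # halo row above, if any
        hi = min(len(image), top + tile_rows + halo) # halo row below, if any
        tiles.append(image[lo:hi])
    return tiles

rows = [[r] * 4 for r in range(6)]        # a tiny 6-row "image"
tiles = split_with_halo(rows, tile_rows=2)
print([len(t) for t in tiles])            # [3, 4, 3]: edge tiles get one halo row
```

Each worker then segments its own tile, and only the halo regions need to be exchanged to stitch the global result.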

  4. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    PubMed

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers collectively solve the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  5. Use of collateral information to improve LANDSAT classification accuracies

    NASA Technical Reports Server (NTRS)

    Strahler, A. H. (Principal Investigator)

    1981-01-01

Methods to improve LANDSAT classification accuracies were investigated, including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification, which permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system, exercised to model a desired output information layer as a function of input layers of raster-format collateral and image database layers.
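
Item (1), priors in maximum likelihood classification, amounts to a maximum a posteriori rule. A minimal sketch with two hypothetical spectral classes, in which a prior taken from collateral data overrides the raw spectral likelihood:

```python
import math

def normal_pdf(x, mu, sigma):
    """Gaussian likelihood of an image density value under one class model."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def map_classify(x, classes):
    """Maximum a posteriori rule: spectral likelihood times a prior probability
    derived from collateral data (e.g. terrain class frequencies)."""
    return max(classes,
               key=lambda name: classes[name]["prior"]
                                * normal_pdf(x, classes[name]["mu"], classes[name]["sigma"]))

# Illustrative classes: "brush" is spectrally closer to the observed value,
# but collateral data says "forest" is far more common at this location.
classes = {
    "forest": {"mu": 40.0, "sigma": 10.0, "prior": 0.7},
    "brush":  {"mu": 55.0, "sigma": 10.0, "prior": 0.3},
}
print(map_classify(50.0, classes))   # forest: the prior outweighs the likelihood gap
```

With equal priors the same pixel would be labeled brush; the prior is exactly the mechanism by which discrete collateral layers reshape the decision.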

  6. Detection of changes in semi-natural grasslands by cross correlation analysis with WorldView-2 images and new Landsat 8 data.

    PubMed

    Tarantino, Cristina; Adamo, Maria; Lucas, Richard; Blonda, Palma

    2016-03-15

Focusing on a Mediterranean Natura 2000 site in Italy, the effectiveness of the cross correlation analysis (CCA) technique for quantifying change in the area of semi-natural grasslands at different spatial resolutions (grain) was evaluated. In a fine scale analysis (2 m), inputs to the CCA were a) a semi-natural grasslands layer extracted from an existing validated land cover/land use (LC/LU) map (1:5000, time T1) and b) a more recent single-date very high resolution (VHR) WorldView-2 image (time T2), with T2 > T1. The changes identified through the CCA were compared against those detected by applying a traditional post-classification comparison (PCC) technique to the same reference T1 map and an updated T2 map obtained by a knowledge-driven classification of four multi-seasonal WorldView-2 input images. Specific changes observed were those associated with agricultural intensification and fires. The study concluded that prior knowledge (spectral class signatures, awareness of local agricultural practices and pressures) was needed for the selection of the most appropriate image (in terms of seasonality) to be acquired at T2. CCA was also applied to the comparison of the existing T1 map with recent high resolution (HR) Landsat 8 OLI images. The areas of change detected at VHR and HR were broadly similar, with larger error values in the HR change images.
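
One way to picture a CCA-style test: compare each mapped polygon's response in the new image against the class signature from the old map, and flag polygons whose z-score is large. This is a simplified stand-in for the technique, and the signature statistics and polygon values below are hypothetical:

```python
import math

def change_z(region_pixels, class_mean, class_std):
    """Z-score of a map polygon's mean response in the new image against its
    mapped class signature; a large |z| flags the polygon as changed."""
    n = len(region_pixels)
    mean = sum(region_pixels) / n
    return (mean - class_mean) / (class_std / math.sqrt(n))

# Hypothetical grassland signature at time T2, and two polygons from the T1 map.
sig_mean, sig_std = 100.0, 20.0
stable = [98.0, 102.0, 100.0, 101.0, 99.0]   # still grassland
plowed = [62.0, 58.0, 61.0, 60.0, 59.0]      # converted to bare soil
print(round(change_z(stable, sig_mean, sig_std), 2),
      round(change_z(plowed, sig_mean, sig_std), 2))
```

A threshold on |z| (e.g. 2 or 3) then separates unchanged polygons from candidates for agricultural intensification or fire damage.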

  7. Optical neural network system for pose determination of spinning satellites

    NASA Technical Reports Server (NTRS)

    Lee, Andrew; Casasent, David

    1990-01-01

    An optical neural network architecture and algorithm based on a Hopfield optimization network are presented for multitarget tracking. This tracker utilizes a neuron for every possible target track, and a quadratic energy function of neural activities which is minimized using gradient descent neural evolution. The neural net tracker is demonstrated as part of a system for determining position and orientation (pose) of spinning satellites with respect to a robotic spacecraft. The input to the system is time sequence video from a single camera. Novelty detection and filtering are utilized to locate and segment novel regions from the input images. The neural net multitarget tracker determines the correspondences (or tracks) of the novel regions as a function of time, and hence the paths of object (satellite) parts. The path traced out by a given part or region is approximately elliptical in image space, and the position, shape and orientation of the ellipse are functions of the satellite geometry and its pose. Having a geometric model of the satellite, and the elliptical path of a part in image space, the three-dimensional pose of the satellite is determined. Digital simulation results using this algorithm are presented for various satellite poses and lighting conditions.

  8. Automated Delineation of Lung Tumors from CT Images Using a Single Click Ensemble Segmentation Approach

    PubMed Central

    Gu, Yuhua; Kumar, Virendra; Hall, Lawrence O; Goldgof, Dmitry B; Li, Ching-Yen; Korn, René; Bendtsen, Claus; Velazquez, Emmanuel Rios; Dekker, Andre; Aerts, Hugo; Lambin, Philippe; Li, Xiuli; Tian, Jie; Gatenby, Robert A; Gillies, Robert J

    2012-01-01

A single click ensemble segmentation (SCES) approach based on an existing “Click&Grow” algorithm is presented. The SCES approach requires only one operator-selected seed point, as compared with the multiple operator inputs that are typically needed, which facilitates processing large numbers of cases. The approach was evaluated on a set of 129 CT lung tumor images using a similarity index (SI). The average SI is above 93% using 20 different start seeds, showing stability. The average SI for 2 different readers was 79.53%. We then compared the SCES algorithm with the two readers, the level set algorithm and the skeleton graph cut algorithm, obtaining average SIs of 78.29%, 77.72%, 63.77% and 63.76%, respectively. We conclude that the newly developed automatic lung lesion segmentation algorithm is stable, accurate and automated. PMID:23459617

  9. Compressive spectral testbed imaging system based on thin-film color-patterned filter arrays.

    PubMed

    Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R

    2016-11-20

    Compressive spectral imaging systems can reliably capture multispectral data using far fewer measurements than traditional scanning techniques. In this paper, a thin-film patterned filter array-based compressive spectral imager is demonstrated, including its optical design and implementation. The use of a patterned filter array entails a single-step three-dimensional spatial-spectral coding on the input data cube, which provides higher flexibility on the selection of voxels being multiplexed on the sensor. The patterned filter array is designed and fabricated with micrometer pitch size thin films, referred to as pixelated filters, with three different wavelengths. The performance of the system is evaluated in terms of references measured by a commercially available spectrometer and the visual quality of the reconstructed images. Different distributions of the pixelated filters, including random and optimized structures, are explored.

  10. Experimental Optoelectronic Associative Memory

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin

    1992-01-01

The optoelectronic associative memory responds to an input image by displaying one of M remembered images. Which image to display is determined by optoelectronic analog computation of the resemblance between the input image and each remembered image. The memory does not rely on precomputation and storage of an outer-product synapse matrix, reducing the size of the memory needed to store and process images.
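
The resemblance computation can be sketched digitally even though the system performs it optically. A toy version on flattened bipolar images (the patterns are invented for illustration):

```python
def recall(input_img, memory):
    """Associative recall: return the remembered image that most resembles the
    input, with resemblance computed as an inner product (the analog computation
    performed optoelectronically in the system described above)."""
    return max(memory, key=lambda m: sum(x * y for x, y in zip(input_img, m)))

# Two remembered bipolar (+1/-1) images, flattened to vectors for illustration.
m1 = [1, 1, -1, -1]
m2 = [-1, 1, -1, 1]
noisy = [1, 1, 1, -1]                  # m1 with one pixel flipped
print(recall(noisy, [m1, m2]) == m1)   # True
```

Because recall only needs M inner products against the stored images, no M×N-sized outer-product synapse matrix has to be precomputed or stored.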

  11. The Temporal Tuning of the Drosophila Motion Detectors Is Determined by the Dynamics of Their Input Elements.

    PubMed

    Arenz, Alexander; Drews, Michael S; Richter, Florian G; Ammer, Georg; Borst, Alexander

    2017-04-03

    Detecting the direction of motion contained in the visual scene is crucial for many behaviors. However, because single photoreceptors only signal local luminance changes, motion detection requires a comparison of signals from neighboring photoreceptors across time in downstream neuronal circuits. For signals to coincide on readout neurons that thus become motion and direction selective, different input lines need to be delayed with respect to each other. Classical models of motion detection rely on non-linear interactions between two inputs after different temporal filtering. However, recent studies have suggested the requirement for at least three, not only two, input signals. Here, we comprehensively characterize the spatiotemporal response properties of all columnar input elements to the elementary motion detectors in the fruit fly, T4 and T5 cells, via two-photon calcium imaging. Between these input neurons, we find large differences in temporal dynamics. Based on this, computer simulations show that only a small subset of possible arrangements of these input elements maps onto a recently proposed algorithmic three-input model in a way that generates a highly direction-selective motion detector, suggesting plausible network architectures. Moreover, modulating the motion detection system by octopamine-receptor activation, we find the temporal tuning of T4 and T5 cells to be shifted toward higher frequencies, and this shift can be fully explained by the concomitant speeding of the input elements. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Automatic image equalization and contrast enhancement using Gaussian mixture modeling.

    PubMed

    Celik, Turgay; Tjahjadi, Tardi

    2012-01-01

    In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or set of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces better or comparable enhanced images than several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
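
The partitioning step, finding where neighbouring Gaussian components of the gray-level mixture intersect, can be sketched with a simple grid search between the two means. The two components below are invented for illustration, not fitted to a real histogram:

```python
import math

def component_pdf(x, mu, sigma, weight):
    """One weighted Gaussian component of the gray-level mixture model."""
    return weight * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def crossing(c1, c2):
    """Gray level between the two component means where their weighted densities
    intersect; such points partition the dynamic range into input intervals."""
    lo, hi = int(min(c1[0], c2[0])), int(max(c1[0], c2[0]))
    return min(range(lo, hi + 1),
               key=lambda x: abs(component_pdf(x, *c1) - component_pdf(x, *c2)))

# Illustrative bimodal model: a dark and a bright component, as (mu, sigma, weight).
dark, bright = (60.0, 15.0, 0.5), (180.0, 25.0, 0.5)
print(crossing(dark, bright))   # partition point, near gray level 107
```

Each resulting input interval would then be remapped to an output interval using the dominant component and the interval's cumulative distribution, as the abstract describes.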

  13. Image based SAR product simulation for analysis

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.

    1987-01-01

SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new method of product simulation is described that also employs a real SAR input image; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented, and their advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit, and the results were compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  14. Surface- and Contour-Preserving Origamic Architecture Paper Pop-Ups.

    PubMed

    Le, Sang N; Leow, Su-Jun; Le-Nguyen, Tuong-Vu; Ruiz, Conrado; Low, Kok-Lim

    2013-08-02

    Origamic architecture (OA) is a form of papercraft that involves cutting and folding a single sheet of paper to produce a 3D pop-up, and is commonly used to depict architectural structures. Because of the strict geometric and physical constraints, OA design requires considerable skill and effort. In this paper, we present a method to automatically generate an OA design that closely depicts an input 3D model. Our algorithm is guided by a novel set of geometric conditions to guarantee the foldability and stability of the generated pop-ups. The generality of the conditions allows our algorithm to generate valid pop-up structures that are previously not accounted for by other algorithms. Our method takes a novel image-domain approach to convert the input model to an OA design. It performs surface segmentation of the input model in the image domain, and carefully represents each surface with a set of parallel patches. Patches are then modified to make the entire structure foldable and stable. Visual and quantitative comparisons of results have shown our algorithm to be significantly better than the existing methods in the preservation of contours, surfaces and volume. The designs have also been shown to more closely resemble those created by real artists.

  15. Surface and contour-preserving origamic architecture paper pop-ups.

    PubMed

    Le, Sang N; Leow, Su-Jun; Le-Nguyen, Tuong-Vu; Ruiz, Conrado; Low, Kok-Lim

    2014-02-01

    Origamic architecture (OA) is a form of papercraft that involves cutting and folding a single sheet of paper to produce a 3D pop-up, and is commonly used to depict architectural structures. Because of the strict geometric and physical constraints, OA design requires considerable skill and effort. In this paper, we present a method to automatically generate an OA design that closely depicts an input 3D model. Our algorithm is guided by a novel set of geometric conditions to guarantee the foldability and stability of the generated pop-ups. The generality of the conditions allows our algorithm to generate valid pop-up structures that are previously not accounted for by other algorithms. Our method takes a novel image-domain approach to convert the input model to an OA design. It performs surface segmentation of the input model in the image domain, and carefully represents each surface with a set of parallel patches. Patches are then modified to make the entire structure foldable and stable. Visual and quantitative comparisons of results have shown our algorithm to be significantly better than the existing methods in the preservation of contours, surfaces, and volume. The designs have also been shown to more closely resemble those created by real artists.

  16. HCP: A Flexible CNN Framework for Multi-label Image Classification.

    PubMed

    Wei, Yunchao; Xia, Wei; Lin, Min; Huang, Junshi; Ni, Bingbing; Dong, Jian; Zhao, Yao; Yan, Shuicheng

    2015-10-26

    Convolutional Neural Network (CNN) has demonstrated promising performance in single-label image classification tasks. However, how CNN best copes with multi-label images still remains an open problem, mainly due to the complex underlying object layouts and insufficient multi-label training images. In this work, we propose a flexible deep CNN infrastructure, called Hypotheses-CNN-Pooling (HCP), where an arbitrary number of object segment hypotheses are taken as the inputs, then a shared CNN is connected with each hypothesis, and finally the CNN output results from different hypotheses are aggregated with max pooling to produce the ultimate multi-label predictions. Some unique characteristics of this flexible deep CNN infrastructure include: 1) no ground-truth bounding box information is required for training; 2) the whole HCP infrastructure is robust to possibly noisy and/or redundant hypotheses; 3) the shared CNN is flexible and can be well pre-trained with a large-scale single-label image dataset, e.g., ImageNet; and 4) it may naturally output multi-label prediction results. Experimental results on Pascal VOC 2007 and VOC 2012 multi-label image datasets well demonstrate the superiority of the proposed HCP infrastructure over other state-of-the-arts. In particular, the mAP reaches 90.5% by HCP only and 93.2% after the fusion with our complementary result in [44] based on hand-crafted features on the VOC 2012 dataset.
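The hypothesis-pooling step can be illustrated compactly. Below is a minimal sketch of the cross-hypothesis max pooling the abstract describes, using hypothetical score values; the shared CNN that produces the scores is omitted:

```python
import numpy as np

def hcp_max_pool(hypothesis_scores):
    """Fuse per-hypothesis label confidences into one multi-label prediction.

    hypothesis_scores: (n_hypotheses, n_labels) array; each row holds the
    shared CNN's label scores for one object-segment hypothesis.
    """
    # Max pooling keeps the strongest evidence for each label across
    # hypotheses, which tolerates noisy or redundant hypotheses.
    return np.max(np.asarray(hypothesis_scores), axis=0)

# Hypothetical scores: three hypotheses over four labels.
scores = np.array([[0.9, 0.1, 0.2, 0.0],
                   [0.2, 0.8, 0.1, 0.1],
                   [0.1, 0.2, 0.1, 0.05]])
pred = hcp_max_pool(scores)  # → [0.9, 0.8, 0.2, 0.1]
```

Because only the per-label maximum survives, adding a redundant hypothesis that scores low everywhere leaves the prediction unchanged.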

  17. Rectification of curved document images based on single view three-dimensional reconstruction.

    PubMed

    Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang

    2016-10-01

Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role in document digitization systems that use a camera for image capture. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption about the corresponding 3D shape, our framework is more flexible and practical since it only requires a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text lines, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. To examine the performance of our proposed method, both qualitative and quantitative evaluations and comparisons with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements in both visual distortion removal and OCR accuracy.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodigas, Timothy J.; Hinz, Philip M.; Malhotra, Renu, E-mail: rodigas@as.arizona.edu

Planets can affect debris disk structure by creating gaps, sharp edges, warps, and other potentially observable signatures. However, there is currently no simple way for observers to deduce a disk-shepherding planet's properties from the observed features of the disk. Here we present a single equation that relates a shepherding planet's maximum mass to the debris ring's observed width in scattered light, along with a procedure to estimate the planet's eccentricity and minimum semimajor axis. We accomplish this by performing dynamical N-body simulations of model systems containing a star, a single planet, and an exterior disk of parent bodies and dust grains to determine the resulting debris disk properties over a wide range of input parameters. We find that the relationship between planet mass and debris disk width is linear, with increasing planet mass producing broader debris rings. We apply our methods to five imaged debris rings to constrain the putative planet masses and orbits in each system. Observers can use our empirically derived equation as a guide for future direct imaging searches for planets in debris disk systems. In the fortuitous case of an imaged planet orbiting interior to an imaged disk, the planet's maximum mass can be estimated independent of atmospheric models.

  19. Applicability of common measures in multifocus image fusion comparison

    NASA Astrophysics Data System (ADS)

    Vajgl, Marek

    2017-11-01

Image fusion is an image processing area aimed at fusing multiple input images to achieve an output image better than each of the inputs. In the case of "multifocus fusion", the input images capture the same scene but differ in focus distance. The aim is to obtain an image which is sharp in all its areas. There are several different approaches and methods used to solve this problem; however, which one is best remains a common question. This work describes research covering the field of common measures, asking whether some of them can be used as a quality measure for evaluating the fusion result.
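As a concrete example of the kind of measure such a comparison can consider, the variance of a discrete Laplacian is a common no-reference sharpness score (offered here as an illustration, not necessarily one of the measures examined in the paper):

```python
import numpy as np

def laplacian_variance(img):
    """No-reference sharpness score: variance of a discrete Laplacian.
    Better-focused images tend to score higher, which is one simple way
    to compare multifocus fusion outputs."""
    img = np.asarray(img, dtype=float)
    # 5-point Laplacian over the image interior.
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

# A crisp step edge outscores the same edge after a mild blur.
sharp = np.zeros((8, 8))
sharp[:, 4:] = 1.0
blurred = sharp.copy()
blurred[:, 3] = 1.0 / 3.0
blurred[:, 4] = 2.0 / 3.0
```

Here `laplacian_variance(sharp)` exceeds `laplacian_variance(blurred)`, matching the intuition that a well-fused multifocus image should be sharp everywhere.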

  20. Recognition of lesion correspondence on two mammographic views: a new method of false-positive reduction for computerized mass detection

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Petrick, Nicholas; Chan, Heang-Ping; Paquerault, Sophie; Helvie, Mark A.; Hadjiiski, Lubomir M.

    2001-07-01

    We used the correspondence of detected structures on two views of the same breast for false-positive (FP) reduction in computerized detection of mammographic masses. For each initially detected object on one view, we considered all possible pairings with objects on the other view that fell within a radial band defined by the nipple-to-object distances. We designed a 'correspondence classifier' to classify these pairs as either the same mass (a TP-TP pair) or a mismatch (a TP-FP, FP-TP or FP-FP pair). For each pair, similarity measures of morphological and texture features were derived and used as input features in the correspondence classifier. Two-view mammograms from 94 cases were used as a preliminary data set. Initial detection provided 6.3 FPs/image at 96% sensitivity. Further FP reduction in single view resulted in 1.9 FPs/image at 80% sensitivity and 1.1 FPs/image at 70% sensitivity. By combining single-view detection with the correspondence classifier, detection accuracy improved to 1.5 FPs/image at 80% sensitivity and 0.7 FPs/image at 70% sensitivity. Our preliminary results indicate that the correspondence of geometric, morphological, and textural features of a mass on two different views provides valuable additional information for reducing FPs.
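The radial-band pairing that feeds the correspondence classifier can be sketched in a few lines; the band width and the distances below are illustrative values, not the paper's:

```python
def candidate_pairs(view_a, view_b, band=0.1):
    """Pair detections across two views whose nipple-to-object distances
    agree within a radial band; the 10% band width is illustrative,
    not the paper's setting."""
    return [(a['id'], b['id']) for a in view_a for b in view_b
            if abs(a['r'] - b['r']) <= band * max(a['r'], b['r'])]

# Detections with their nipple-to-object distance r (arbitrary units).
cc_view = [{'id': 1, 'r': 5.0}, {'id': 2, 'r': 9.0}]
mlo_view = [{'id': 3, 'r': 5.2}, {'id': 4, 'r': 12.0}]
pairs = candidate_pairs(cc_view, mlo_view)   # only the (1, 3) pairing survives
```

Each surviving pair would then be scored by the correspondence classifier using the similarity of its morphological and texture features.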

  1. Optoelectronic associative memory

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor)

    1993-01-01

    An associative optical memory including an input spatial light modulator (SLM) in the form of an edge enhanced liquid crystal light valve (LCLV) and a pair of memory SLM's in the form of liquid crystal televisions (LCTV's) forms a matrix array of an input image which is cross correlated with a matrix array of stored images. The correlation product is detected and nonlinearly amplified to illuminate a replica of the stored image array to select the stored image correlating with the input image. The LCLV is edge enhanced by reducing the bias frequency and voltage and rotating its orientation. The edge enhancement and nonlinearity of the photodetection improves the orthogonality of the stored image. The illumination of the replicate stored image provides a clean stored image, uncontaminated by the image comparison process.

  2. Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments

    PubMed Central

    Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun

    2017-01-01

    In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where the commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult in the case of images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images of stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out the real robot manipulation task. PMID:28629139

  3. Parallel phase-sensitive three-dimensional imaging camera

    DOEpatents

    Smithpeter, Colin L.; Hoover, Eddie R.; Pain, Bedabrata; Hancock, Bruce R.; Nellums, Robert O.

    2007-09-25

    An apparatus is disclosed for generating a three-dimensional (3-D) image of a scene illuminated by a pulsed light source (e.g. a laser or light-emitting diode). The apparatus, referred to as a phase-sensitive 3-D imaging camera utilizes a two-dimensional (2-D) array of photodetectors to receive light that is reflected or scattered from the scene and processes an electrical output signal from each photodetector in the 2-D array in parallel using multiple modulators, each having inputs of the photodetector output signal and a reference signal, with the reference signal provided to each modulator having a different phase delay. The output from each modulator is provided to a computational unit which can be used to generate intensity and range information for use in generating a 3-D image of the scene. The 3-D camera is capable of generating a 3-D image using a single pulse of light, or alternately can be used to generate subsequent 3-D images with each additional pulse of light.
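The per-pixel computation behind this kind of multi-phase demodulation can be sketched with the standard four-phase formulas; the patent's actual processing may differ, and the modulation frequency below is illustrative:

```python
import numpy as np

def demodulate_four_phase(i0, i90, i180, i270, mod_freq_hz):
    """Standard four-phase demodulation: recover phase, amplitude, and
    range from photodetector samples mixed with reference signals delayed
    by 0/90/180/270 degrees (one modulator per phase delay)."""
    c = 2.998e8                                    # speed of light, m/s
    phase = np.arctan2(i90 - i270, i0 - i180)      # radians
    amplitude = 0.5 * np.hypot(i90 - i270, i0 - i180)
    # Round-trip delay maps phase to range over one ambiguity interval.
    dist = c * np.mod(phase, 2.0 * np.pi) / (4.0 * np.pi * mod_freq_hz)
    return phase, amplitude, dist

# A return with a known 1.0 rad phase shift is recovered.
true_phase = 1.0
samples = [np.cos(true_phase - d) for d in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
ph, amp, dist = demodulate_four_phase(*samples, mod_freq_hz=10e6)
```

Running each pixel's four modulator outputs through this arithmetic yields the intensity and range maps used to build the 3-D image.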

  4. Strategies for mapping synaptic inputs on dendrites in vivo by combining two-photon microscopy, sharp intracellular recording, and pharmacology

    PubMed Central

    Levy, Manuel; Schramm, Adrien E.; Kara, Prakash

    2012-01-01

    Uncovering the functional properties of individual synaptic inputs on single neurons is critical for understanding the computational role of synapses and dendrites. Previous studies combined whole-cell patch recording to load neurons with a fluorescent calcium indicator and two-photon imaging to map subcellular changes in fluorescence upon sensory stimulation. By hyperpolarizing the neuron below spike threshold, the patch electrode ensured that changes in fluorescence associated with synaptic events were isolated from those caused by back-propagating action potentials. This technique holds promise for determining whether the existence of unique cortical feature maps across different species may be associated with distinct wiring diagrams. However, the use of whole-cell patch for mapping inputs on dendrites is challenging in large mammals, due to brain pulsations and the accumulation of fluorescent dye in the extracellular milieu. Alternatively, sharp intracellular electrodes have been used to label neurons with fluorescent dyes, but the current passing capabilities of these high impedance electrodes may be insufficient to prevent spiking. In this study, we tested whether sharp electrode recording is suitable for mapping functional inputs on dendrites in the cat visual cortex. We compared three different strategies for suppressing visually evoked spikes: (1) hyperpolarization by intracellular current injection, (2) pharmacological blockade of voltage-gated sodium channels by intracellular QX-314, and (3) GABA iontophoresis from a perisomatic electrode glued to the intracellular electrode. We found that functional inputs on dendrites could be successfully imaged using all three strategies. However, the best method for preventing spikes was GABA iontophoresis with low currents (5–10 nA), which minimally affected the local circuit. Our methods advance the possibility of determining functional connectivity in preparations where whole-cell patch may be impractical. 
PMID:23248588

  5. Push-broom imaging spectrometer based on planar lightwave circuit MZI array

    NASA Astrophysics Data System (ADS)

    Yang, Minyue; Li, Mingyu; He, Jian-Jun

    2017-05-01

We propose a large aperture static imaging spectrometer (LASIS) based on a planar lightwave circuit (PLC) MZI array. The imaging spectrometer works in the push-broom mode, with the spectrum obtained by interferometry. While the satellite/aircraft is orbiting, the same source, seen from the satellite/aircraft, moves across the aperture and enters different MZIs, while adjacent sources enter adjacent MZIs at the same time. The on-chip spectrometer consists of 256 input mode converters, followed by 256 MZIs with linearly increasing optical path delays and a detector array. Multiple chips are stacked together to form the 2D image surface and receive light from the imaging lens. Two MZI arrays are proposed: one works in the wavelength range from 500 nm to 900 nm with SiON (refractive index 1.6) waveguides, and the other from 1100 nm to 1700 nm on an SOI platform. To meet the requirements of imaging spectrometer applications, we choose a large cross-section ridge waveguide to achieve polarization insensitivity, maintain single-mode propagation over a broad spectrum, and increase production tolerance. The SiON on-chip spectrometer has a spectral resolution of 80 cm⁻¹ with a footprint of 17×15 mm², and the SOI-based on-chip spectrometer has a resolution of 38 cm⁻¹ with a size of 22×19 mm². The spectral and spatial resolution of the imaging spectrometer can be further improved by simply adding more MZIs. The on-chip waveguide MZI array based Fourier transform imaging spectrometer can provide a highly compact solution for remote sensing on unmanned aerial vehicles or satellites, with the advantages of small size, light weight, no moving parts, and a large input aperture.
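The spectrum recovery underlying such an interferometric design can be sketched as a discrete Fourier transform of the interferogram sampled across the MZIs' linearly increasing delays; the step size and input line below are illustrative numbers, not the device parameters:

```python
import numpy as np

# Each MZI samples the interferogram at a linearly increasing optical
# path delay (OPD); a discrete Fourier transform of those samples
# recovers the spectrum.
n_mzi = 256
delay_step = 0.2e-6                      # OPD increment per MZI, metres (assumed)
opd = np.arange(n_mzi) * delay_step

wavenumber = 1.25e6                      # one spectral line at 800 nm, cycles/m
interferogram = 0.5 * (1.0 + np.cos(2.0 * np.pi * wavenumber * opd))

# Remove the DC bias, then transform back to the spectral domain.
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
axis = np.fft.rfftfreq(n_mzi, d=delay_step)   # wavenumber axis, cycles/m
peak = axis[np.argmax(spectrum)]              # recovered line position
```

The recovered peak lands back at 1.25×10⁶ cycles/m (800 nm), and adding more MZIs lengthens the interferogram, which is why resolution improves with array size.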

  6. X-ray propagation microscopy of biological cells using waveguides as a quasipoint source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giewekemeyer, K.; Krueger, S. P.; Kalbfleisch, S.

    2011-02-15

We have used x-ray waveguides as highly confining optical elements for nanoscale imaging of unstained biological cells using the simple geometry of in-line holography. The well-known twin-image problem is effectively circumvented by a simple and fast iterative reconstruction. The algorithm, which combines elements of the classical Gerchberg-Saxton scheme and the hybrid input-output algorithm, is optimized for phase-contrast samples, well justified for imaging of cells at multi-keV photon energies. The experimental scheme allows for a quantitative phase reconstruction from a single holographic image without detailed knowledge of the complex illumination function incident on the sample, as demonstrated for freeze-dried cells of the eukaryotic amoeba Dictyostelium discoideum. The accessible resolution range is explored by simulations, indicating that resolutions on the order of 20 nm are within reach applying illumination times on the order of minutes at present synchrotron sources.

  7. A system for verifying models and classification maps by extraction of information from a variety of data sources

    NASA Technical Reports Server (NTRS)

    Norikane, L.; Freeman, A.; Way, J.; Okonek, S.; Casey, R.

    1992-01-01

    Recent updates to a geographical information system (GIS) called VICAR (Video Image Communication and Retrieval)/IBIS are described. The system is designed to handle data from many different formats (vector, raster, tabular) and many different sources (models, radar images, ground truth surveys, optical images). All the data are referenced to a single georeference plane, and average or typical values for parameters defined within a polygonal region are stored in a tabular file, called an info file. The info file format allows tracking of data in time, maintenance of links between component data sets and the georeference image, conversion of pixel values to `actual' values (e.g., radar cross-section, luminance, temperature), graph plotting, data manipulation, generation of training vectors for classification algorithms, and comparison between actual measurements and model predictions (with ground truth data as input).

  8. Image-Based Reverse Engineering and Visual Prototyping of Woven Cloth.

    PubMed

    Schroder, Kai; Zinke, Arno; Klein, Reinhard

    2015-02-01

    Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture cloth models, specifically when considering computer aided design of cloth. Previous methods produce highly realistic images, however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.

  9. Astronomical Image Processing with Hadoop

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-07-01

    In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. 
We will end by briefly describing the longer term goals of our work, namely detection and classification of transient objects and automated object classification.

  10. Machine Learning Based Single-Frame Super-Resolution Processing for Lensless Blood Cell Counting

    PubMed Central

    Huang, Xiwei; Jiang, Yu; Liu, Xu; Xu, Hang; Han, Zhi; Rong, Hailong; Yang, Haiping; Yan, Mei; Yu, Hao

    2016-01-01

    A lensless blood cell counting system integrating microfluidic channel and a complementary metal oxide semiconductor (CMOS) image sensor is a promising technique to miniaturize the conventional optical lens based imaging system for point-of-care testing (POCT). However, such a system has limited resolution, making it imperative to improve resolution from the system-level using super-resolution (SR) processing. Yet, how to improve resolution towards better cell detection and recognition with low cost of processing resources and without degrading system throughput is still a challenge. In this article, two machine learning based single-frame SR processing types are proposed and compared for lensless blood cell counting, namely the Extreme Learning Machine based SR (ELMSR) and Convolutional Neural Network based SR (CNNSR). Moreover, lensless blood cell counting prototypes using commercial CMOS image sensors and custom designed backside-illuminated CMOS image sensors are demonstrated with ELMSR and CNNSR. When one captured low-resolution lensless cell image is input, an improved high-resolution cell image will be output. The experimental results show that the cell resolution is improved by 4×, and CNNSR has 9.5% improvement over the ELMSR on resolution enhancing performance. The cell counting results also match well with a commercial flow cytometer. Such ELMSR and CNNSR therefore have the potential for efficient resolution improvement in lensless blood cell counting systems towards POCT applications. PMID:27827837

  11. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
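The background-elimination step lends itself to a short sketch. The version below approximates the patent's common-pixel elimination with a thresholded frame difference and a centroid, using synthetic frames and an illustrative reference point:

```python
import numpy as np

def locate_laser_spot(frame_off, frame_on, threshold=50):
    """Centroid of pixels that brighten past `threshold` when the laser
    turns on; a thresholded frame difference stands in for the patent's
    elimination of pixels common to both images."""
    diff = frame_on.astype(int) - frame_off.astype(int)
    rows, cols = np.nonzero(diff > threshold)
    return rows.mean(), cols.mean()

# Synthetic frames: static background, then the same scene with a
# bright 3x3 laser spot centred at row 40, column 70.
gen = np.random.default_rng(0)
background = gen.integers(0, 40, size=(100, 100))
with_spot = background.copy()
with_spot[39:42, 69:72] += 200

r, c = locate_laser_spot(background, with_spot)
reference = (50.0, 50.0)          # illustrative reference point in the frame
disparity = (r - reference[0], c - reference[1])  # input to the ranging analysis
```

The disparity between the spot and the reference point is the quantity the system then feeds into its range computation.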

  12. Identification of winter wheat from ERTS-1 imagery

    NASA Technical Reports Server (NTRS)

    Williams, D. L.; Morain, S. A.; Barker, B.; Coiner, J. C.

    1973-01-01

    Continuing interpretation of the test area in Finney County, Kansas, has revealed that winter wheat can be successfully identified. This successful identification is based on human recognition of tonal signatures on MSS images. Several different but highly successful interpretation strategies have been employed. These strategies involve the use of both spectral and temporal inputs. Good results have been obtained from a single MSS-5 image acquired at a critical time in the crop cycle (planting). On a test sample of 54,612 acres, 89 percent of the acreage was correctly classified as wheat or non-wheat and the estimated wheat acreage (19,516 acres) was 99 percent of the actual acreage of wheat in the sample area.

  13. Contour detection improved by context-adaptive surround suppression.

    PubMed

    Sang, Qiang; Cai, Biao; Chen, Hao

    2017-01-01

    Recently, many image processing applications have taken advantage of a psychophysical and neurophysiological mechanism, called "surround suppression" to extract object contour from a natural scene. However, these traditional methods often adopt a single suppression model and a fixed input parameter called "inhibition level", which needs to be manually specified. To overcome these drawbacks, we propose a novel model, called "context-adaptive surround suppression", which can automatically control the effect of surround suppression according to image local contextual features measured by a surface estimator based on a local linear kernel. Moreover, a dynamic suppression method and its stopping mechanism are introduced to avoid manual intervention. The proposed algorithm is demonstrated and validated by a broad range of experimental results.

  14. Virtual three-dimensional blackboard: three-dimensional finger tracking with a single camera

    NASA Astrophysics Data System (ADS)

    Wu, Andrew; Hassan-Shafique, Khurram; Shah, Mubarak; da Vitoria Lobo, N.

    2004-01-01

    We present a method for three-dimensional (3D) tracking of a human finger from a monocular sequence of images. To recover the third dimension from the two-dimensional images, we use the fact that the motion of the human arm is highly constrained owing to the dependencies between elbow and forearm and the physical constraints on joint angles. We use these anthropometric constraints to derive a 3D trajectory of a gesticulating arm. The system is fully automated and does not require human intervention. The system presented can be used as a visualization tool, as a user-input interface, or as part of some gesture-analysis system in which 3D information is important.

  15. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor.

    PubMed

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-Ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-03-05

    The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode from a single pixel into the different taps of the exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes.
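The classical least-squares formulation that this dynamic method builds on can be written in a few lines; this is the textbook computation, independent of the multi-tap sensor hardware:

```python
import numpy as np

def surface_normal(intensities, light_dirs):
    """Textbook photometric stereo for one pixel: solve I = L @ g for the
    albedo-scaled normal g given k >= 3 known lighting directions, then
    normalize to a unit normal."""
    L = np.asarray(light_dirs, dtype=float)    # (k, 3) lighting directions
    I = np.asarray(intensities, dtype=float)   # (k,) measured intensities
    g, *_ = np.linalg.lstsq(L, I, rcond=None)  # least-squares for k > 3
    return g / np.linalg.norm(g)

# A Lambertian pixel with a known normal, rendered under three lights,
# should give that normal back.
n_true = np.array([0.0, 0.6, 0.8])
lights = np.eye(3)                  # three orthogonal (hypothetical) lights
measured = lights @ n_true          # ideal Lambertian intensities
n_est = surface_normal(measured, lights)
```

The multi-tap sensor's contribution is to capture the k differently lit intensities at almost identical times, so this same solve remains valid for a moving scene.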

  16. Bio-inspired approach to multistage image processing

    NASA Astrophysics Data System (ADS)

    Timchenko, Leonid I.; Pavlov, Sergii V.; Kokryatskaya, Natalia I.; Poplavska, Anna A.; Kobylyanska, Iryna M.; Burdenyuk, Iryna I.; Wójcik, Waldemar; Uvaysova, Svetlana; Orazbekov, Zhassulan; Kashaganova, Gulzhan

    2017-08-01

Multistage integration of visual information in the brain allows people to respond quickly to most significant stimuli while preserving the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing, described in this paper, comprises main types of cortical multistage convergence. One of these types occurs within each visual pathway and the other between the pathways. This approach maps input images into a flexible hierarchy which reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical terms. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image which encapsulates, in a compact manner, structure on different hierarchical levels in the image. At each processing stage a single output result is computed to allow a very quick response from the system. The result is represented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match.

  17. Personal identification based on blood vessels of retinal fundus images

    NASA Astrophysics Data System (ADS)

    Fukuta, Keisuke; Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Hara, Takeshi; Fujita, Hiroshi

    2008-03-01

Biometric techniques have been implemented in place of conventional identification methods such as passwords in computers, automatic teller machines (ATMs), and entrance and exit management systems. We propose a personal identification (PI) system using color retinal fundus images, which are unique to each individual. The proposed identification procedure is based on comparison of an input fundus image with reference fundus images in a database. In the first step, registration between the input image and the reference image is performed; this step includes translational and rotational movement. The PI is based on a measure of similarity between blood vessel images generated from the input and reference images. The similarity measure is defined as the cross-correlation coefficient calculated from the pixel values. When the similarity is greater than a predetermined threshold, the input image is identified, meaning that the input and reference images belong to the same person. Four hundred sixty-two fundus images, including forty-one image pairs from the same person, were used to evaluate the proposed technique. The false rejection rate and the false acceptance rate were 9.9×10⁻⁵% and 4.3×10⁻⁵%, respectively. The results indicate that the proposed method has higher performance than other biometrics except for DNA. For practical public use, a device that can capture retinal fundus images easily is needed. The proposed method is applicable not only to PI but also to a system that warns about misfiling of fundus images in medical facilities.
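The similarity test can be sketched directly from the description: a cross-correlation coefficient over registered vessel images, compared against a threshold (the threshold value and the random vessel maps below are illustrative, not the paper's):

```python
import numpy as np

def vessel_similarity(img_a, img_b):
    """Cross-correlation coefficient of two registered vessel images,
    computed from the pixel values."""
    a = np.array(img_a, dtype=float).ravel()
    b = np.array(img_b, dtype=float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(input_img, reference_img, threshold=0.7):
    """Accept the identity when similarity exceeds a preset threshold;
    0.7 is an illustrative value, not the paper's setting."""
    return vessel_similarity(input_img, reference_img) > threshold

# A vessel map matches itself and is far from an unrelated random map.
gen = np.random.default_rng(1)
vessels = (gen.random((32, 32)) > 0.8).astype(float)
other = (gen.random((32, 32)) > 0.8).astype(float)
```

Raising the threshold trades a lower false acceptance rate for a higher false rejection rate, which is the balance the reported error rates quantify.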

  18. Quickbird Satellite in-orbit Modulation Transfer Function (MTF) Measurement Using Edge, Pulse and Impulse Methods for Summer 2003

    NASA Technical Reports Server (NTRS)

    Helder, Dennis; Choi, Taeyoung; Rangaswamy, Manjunath

    2005-01-01

    The spatial characteristics of an imaging system cannot be expressed by a single number or simple statement. However, the Modulation Transfer Function (MTF) is one approach to measuring the spatial quality of an imaging system. Basically, the MTF is the normalized spatial frequency response of the imaging system. The frequency response of the system can be evaluated by applying an impulse input. The resulting impulse response is termed the Point Spread Function (PSF). This function is a measure of the amount of blurring present in the imaging system and is itself a useful measure of spatial quality. An underlying assumption is that the imaging system is linear and shift-invariant. The Fourier transform of the PSF is called the Optical Transfer Function (OTF), and the normalized magnitude of the OTF is the MTF. In addition to the impulse input, a knife-edge input technique has also been used in this project. A sharp edge exercises an imaging system at all spatial frequencies. The profile of an edge response from an imaging system is called an Edge Spread Function (ESF). Differentiation of the ESF results in a one-dimensional version of the PSF. Finally, the MTF can be calculated via the Fourier transform of the PSF, as stated previously. Every image includes noise to some degree, which makes MTF or PSF estimation more difficult. To reduce noise effects, many MTF estimation approaches fit smooth numerical models; historically, Gaussian models and Fermi functions have been applied to reduce the random noise in the output profiles. The pulse-input method was used to measure the MTF of the Landsat Thematic Mapper (TM) using 8th-order even functions over the San Mateo Bridge in San Francisco Bay, California. Because the bridge width was smaller than the 30-meter ground sample distance (GSD) of the TM, the Nyquist frequency was located before the first zero-crossing point of the sinc function obtained from the Fourier transform of the bridge pulse. To avoid the zero-crossing points in the frequency domain, the pulse width should be less than two pixels (2 GSDs), but the short extent of the pulse results in a poor signal-to-noise ratio. Similarly, for a high-resolution satellite imaging system such as Quickbird, the input pulse width is critical because of the zero-crossing points and the noise present in the background area. It is important, therefore, that the width of the input pulse be appropriately sized. Finally, the MTF was calculated by taking the ratio of the Fourier transform of the output to the Fourier transform of the input. Regardless of whether the edge, pulse, or impulse target method is used, the orientation of the targets is critical in order to obtain uniformly spaced sub-pixel data points. When the orientation is incorrect, sample data points tend to cluster, resulting in poor reconstruction of the edge or pulse profiles. Thus, a compromise orientation must be selected so that all spectral bands can be accommodated. This report continues by outlining the objectives in Section 2, procedures in Section 3, descriptions of the field campaigns in Section 4, results in Section 5, and a brief summary in Section 6.
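
    The ESF-to-MTF chain described in the abstract (edge profile, differentiation to a line-spread function, Fourier transform, normalization) can be sketched as follows; the Gaussian-blurred step edge is a synthetic stand-in for a real edge target.

```python
import numpy as np

def mtf_from_esf(esf, dx):
    """ESF -> differentiate -> line spread function -> |FFT| -> MTF."""
    lsf = np.gradient(esf, dx)           # derivative of the edge profile
    mtf = np.abs(np.fft.rfft(lsf))       # magnitude of one-sided spectrum
    return mtf / mtf[0]                  # normalize so that MTF(0) = 1

# Synthetic edge: a step blurred by a Gaussian PSF (sigma = 1, assumed).
x = np.arange(-10.0, 10.0, 0.1)
psf = np.exp(-x**2 / 2.0)
esf = np.cumsum(psf) / psf.sum()         # the ESF is the integral of the PSF
mtf = mtf_from_esf(esf, dx=0.1)
```

    For a Gaussian PSF the resulting MTF is itself Gaussian, decaying monotonically from 1 at zero frequency; real edge profiles would first be fit with a smooth model (Gaussian or Fermi function) to suppress noise, as the abstract notes.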

  19. Image display device in digital TV

    DOEpatents

    Choi, Seung Jong [Seoul, KR

    2006-07-18

    Disclosed is an image display device in a digital TV that is capable of converting to various resolutions using single bit map data. The image display device includes: a data processing part for executing bit map conversion, compression, restoration, and format conversion for text data; a memory for storing the bit map data obtained by the bit map conversion and compression in the data processing part, as well as image data input from an arbitrary receiving part, the receiving part receiving either digital or analog image data; an image outputting part for reading the image data from the memory; and a display processing part for mixing the image data read from the image outputting part with the format-converted bit map data from the data processing part. Therefore, the image display device according to the present invention can convert text data to correspond to various resolutions, compress the bit map data, thereby reducing memory space, and support text data in HTML format, thereby providing images with text data of various shapes.

  20. Continuous-time ΣΔ ADC with implicit variable gain amplifier for CMOS image sensor.

    PubMed

    Tang, Fang; Bermak, Amine; Abbes, Amira; Benammar, Mohieddine Amor

    2014-01-01

    This paper presents a column-parallel continuous-time sigma-delta (CTSD) ADC for mega-pixel resolution CMOS image sensors (CIS). The sigma-delta modulator is implemented with a 2nd-order resistor/capacitor-based loop filter. The first integrator uses a conventional operational transconductance amplifier (OTA) to achieve high power-supply noise rejection. The second integrator is realized with a single-ended inverter-based amplifier instead of a standard OTA; as a result, power consumption is reduced without sacrificing noise performance. Moreover, the variable gain amplifier of the traditional column-parallel readout circuit is merged into the front-end of the CTSD modulator. By programming the input resistance, the amplitude range of the input current can be tuned over 8 scales, which is equivalent to a traditional 2-bit preamplification function without consuming extra power or chip area. A test chip prototype was fabricated in a 0.18 μm CMOS process, and measurements show an ADC power consumption below 63.5 μW under a 1.4 V supply and a 50 MHz clock frequency.
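
    As a behavioural illustration of a 2nd-order single-bit sigma-delta loop (a discrete-time simulation sketch, not the paper's continuous-time resistor/capacitor circuit; the half-unit integrator gains are an assumed, commonly used choice):

```python
import numpy as np

def sigma_delta_2nd(x):
    """Discrete-time 2nd-order single-bit sigma-delta modulator:
    two cascaded integrators and a 1-bit quantizer in a feedback loop.
    The loop forces the average of the output bits to track the input."""
    i1 = i2 = 0.0
    bits = np.empty(x.size)
    for n, v in enumerate(x):
        y = 1.0 if i2 >= 0.0 else -1.0   # 1-bit quantizer decision
        bits[n] = y
        i1 += 0.5 * (v - y)              # first integrator
        i2 += 0.5 * (i1 - y)             # second integrator
    return bits
```

    Decimating (low-pass filtering) the bit stream recovers the input value, which is how the column ADC digitizes the pixel signal.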

  1. Improved spatial and temporal characteristics of ionospheric irregularities and polar mesospheric summer echoes using coherent MIMO and aperture synthesis radar imaging

    NASA Astrophysics Data System (ADS)

    Chau, J. L.; Urco, J. M.; Milla, M. A.; Vierinen, J.

    2017-12-01

    We have recently implemented multiple-input multiple-output (MIMO) radar techniques to resolve temporal and spatial ambiguities of ionospheric and atmospheric irregularities, with improved capabilities compared to previous experiments using single-input multiple-output (SIMO) techniques. In the atmospheric and ionospheric coherent-scatter radar field, SIMO techniques are usually called aperture synthesis radar imaging. Our implementations have been carried out at the Jicamarca Radio Observatory (JRO) in Lima, Peru, and at the Middle Atmosphere Alomar Radar System (MAARSY) in Andenes, Norway, to study equatorial electrojet (EEJ) field-aligned irregularities and polar mesospheric summer echoes (PMSE), respectively. Figure 1 shows an example of a configuration used at MAARSY and a comparison between the resulting SIMO and MIMO antenna point spread functions. Although we present the details of the implementations at each facility, we focus on the observed peculiarities of each phenomenon, emphasizing the underlying physical mechanisms that govern their existence and their spatial and temporal modulation. For example, what are the typical horizontal scales of PMSE variability in both intensity and wind field?

  2. Image processing tool for automatic feature recognition and quantification

    DOEpatents

    Chen, Xing; Stoddard, Ryan J.

    2017-05-02

    A system for defining structures within an image is described. The system includes reading of an input file, preprocessing the input file while preserving metadata such as scale information and then detecting features of the input file. In one version the detection first uses an edge detector followed by identification of features using a Hough transform. The output of the process is identified elements within the image.
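
    The edge-detect-then-Hough pipeline can be sketched with a minimal line-detecting Hough accumulator (a generic textbook version, not the patented implementation):

```python
import numpy as np

def hough_lines(edge_img, n_theta=180):
    """Vote in (rho, theta) space for every edge pixel; a peak in the
    accumulator corresponds to a line rho = x*cos(theta) + y*sin(theta)."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))           # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1  # one vote per angle
    return acc, thetas, diag
```

    In the patent's pipeline the binary input to this step would come from an edge detector applied to the preprocessed image, and the accumulator peaks become the identified elements.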

  3. Effect of random phase mask on input plane in photorefractive authentic memory with two-wave encryption method

    NASA Astrophysics Data System (ADS)

    Mita, Akifumi; Okamoto, Atsushi; Funakoshi, Hisatoshi

    2004-06-01

    We have proposed an all-optical authentic memory based on a two-wave encryption method. In the recording process, the image data are encrypted into white noise by random phase masks added to the input beam carrying the image data and to the reference beam. Only a reading beam with the phase-conjugate distribution of the reference beam can decrypt the encrypted data. If the encrypted data are read out with an incorrect phase distribution, the output data are transformed into white noise. Moreover, during readout, reconstructions of the encrypted data interfere destructively, resulting in zero intensity; our memory therefore has the merit that unlawful access can be detected easily by measuring the output beam intensity. In our encryption method, the random phase mask on the input plane plays two important roles: transforming the input image into white noise, and preventing the white noise from being decrypted back to the input image by blind deconvolution. Without this mask, when unauthorized users observe the output beam with a CCD during readout with a plane wave, they obtain exactly the intensity distribution of the Fourier transform of the input image, so the encrypted image can be decrypted easily by blind deconvolution. With the mask, even if unauthorized users observe the output beam in the same way, the encrypted image cannot be decrypted, because the observed intensity distribution is randomly dispersed by the mask; the robustness is thus increased. In this report, we compare the correlation coefficient between the output image and the input image, which represents the degree to which the output resembles white noise, with and without the mask. We show that the robustness of the encryption method is increased, as the correlation coefficient improves from 0.3 to 0.1 when the mask is used.
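
    A numerical analogue of the two-mask idea can be sketched with FFTs (a simplified digital model, not the holographic two-wave setup): the input-plane mask whitens the Fourier spectrum so the stored data resemble white noise, while the Fourier-plane phase must be conjugated exactly to recover the image.

```python
import numpy as np

def encrypt(img, mask_in, mask_ref):
    """Apply a random phase in the input plane, Fourier transform,
    then apply a second random phase in the transform plane."""
    field = img * np.exp(2j * np.pi * mask_in)
    return np.fft.fft2(field) * np.exp(2j * np.pi * mask_ref)

def decrypt(cipher, mask_in, mask_ref):
    """Undo both phases; taking the magnitude recovers the image.
    Note the input-plane mask's role here is spectral whitening --
    an intensity detector alone cannot see it, which is why it blocks
    the blind-deconvolution attack described in the abstract."""
    field = np.fft.ifft2(cipher * np.exp(-2j * np.pi * mask_ref))
    return np.abs(field * np.exp(-2j * np.pi * mask_in))
```

    Reading out with a wrong Fourier-plane phase leaves a residual random phase in the transform domain, so the reconstruction is scrambled, mirroring the paper's white-noise output for incorrect reading beams.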

  4. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm

    PubMed Central

    McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
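
    The paper's key enhancement, sparse random 'receptive field' input weights with the output layer solved in one shot by least squares, can be sketched as follows; the fixed patch size, ReLU nonlinearity, and plain least-squares solve are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden, patch, side):
    """ELM with receptive-field sampling: each hidden unit's input
    weights are nonzero only inside one randomly placed patch,
    so the input weight matrix W is sparse."""
    W = np.zeros((side * side, n_hidden))
    for j in range(n_hidden):
        r, c = rng.integers(0, side - patch, size=2)
        mask = np.zeros((side, side), dtype=bool)
        mask[r:r + patch, c:c + patch] = True
        W[mask.ravel(), j] = rng.standard_normal(patch * patch)
    H = np.maximum(X @ W, 0.0)                   # random hidden features
    beta = np.linalg.lstsq(H, Y, rcond=None)[0]  # one-shot output solve
    return W, beta

def elm_predict(X, W, beta):
    return np.maximum(X @ W, 0.0) @ beta
```

    There is no iterative training of the input weights: only the linear output layer `beta` is fit, which is what makes ELM training so fast.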

  5. Real-time determination of sarcomere length of a single cardiomyocyte during contraction

    PubMed Central

    Kalda, Mari; Vendelin, Marko

    2013-01-01

    Sarcomere length of a cardiomyocyte is an important control parameter for physiology studies at the single-cell level; for instance, its accurate determination in real time is essential for performing single-cardiomyocyte contraction experiments. The aim of this work is to develop an efficient and accurate method for estimating the mean sarcomere length of a contracting cardiomyocyte using microscopy images as input. The novelty of the developed method lies in 1) using an unbiased measure of similarity to eliminate the systematic errors of conventional autocorrelation function (ACF)-based methods when applied to a region of interest of an image, 2) using a semianalytical, seminumerical approach to evaluate the similarity measure, taking into account the spatial dependence of neighboring image pixels, and 3) using a detrending algorithm to extract the sarcomere striation pattern from the microscopy images. The developed sarcomere length estimation procedure has superior computational efficiency and estimation accuracy compared with conventional ACF- and spectral-analysis-based methods using the fast Fourier transform. As shown by analyzing synthetic images with known periodicity, the estimates obtained by the developed method are more accurate at the subpixel level than those obtained using ACF analysis. When applied in practice to rat cardiomyocytes, our method was found to be robust to the choice of the region of interest, which may 1) include projections of carbon fibers and the nucleus, 2) have an uneven background, and 3) be slightly disoriented with respect to the average direction of the sarcomere striation pattern. The developed method is implemented in open-source software. PMID:23255581
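
    For contrast, the conventional spectral approach the paper improves upon, estimating the striation period from the FFT peak of a detrended intensity profile, can be sketched as (the pixel size is an assumed value):

```python
import numpy as np

def sarcomere_length(profile, px_um):
    """Estimate the dominant striation period of a 1-D intensity
    profile from the peak of its FFT magnitude spectrum."""
    p = profile - profile.mean()           # crude detrend: remove DC
    spec = np.abs(np.fft.rfft(p))
    freqs = np.fft.rfftfreq(p.size, d=px_um)
    k = 1 + np.argmax(spec[1:])            # skip the zero-frequency bin
    return 1.0 / freqs[k]                  # period in micrometres
```

    The estimate is quantized to the FFT bin spacing, which is exactly the subpixel limitation the paper's unbiased similarity measure is designed to overcome.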

  6. A deep learning framework for the automated inspection of complex dual-energy x-ray cargo imagery

    NASA Astrophysics Data System (ADS)

    Rogers, Thomas W.; Jaccard, Nicolas; Griffin, Lewis D.

    2017-05-01

    Previously, we investigated the use of Convolutional Neural Networks (CNNs) to detect so-called Small Metallic Threats (SMTs) hidden amongst legitimate goods inside a cargo container. We trained a CNN from scratch on data produced by a Threat Image Projection (TIP) framework that generates images with realistic variation to improve robustness. The system achieved 90% detection of containers that contained a single SMT, while raising 6% false positives on benign containers. The best CNN architecture used the raw high-energy image (single-energy) and its logarithm as input channels. Use of the logarithm improved performance, echoing studies on human operator performance; however, it is an unexpected result with CNNs. In this work, we (i) investigate methods to exploit the material information captured in dual-energy images, and (ii) introduce a new CNN training scheme that generates `spot-the-difference' benign and threat pairs on the fly. To the best of our knowledge, this is the first time that CNNs have been applied directly to raw dual-energy X-ray imagery in any field. To exploit dual energy, we experiment with adapting several physics-derived approaches to material discrimination from the cargo literature, and introduce three novel variants. We hypothesise that CNNs can implicitly learn the material characteristics of objects from the raw dual-energy images and use this to suppress false positives. The best-performing method detects 95% of containers containing a single SMT, while raising 0.4% false positives on benign containers. This is a step-change improvement in performance over our prior work.

  7. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on a detailed analysis of the LiveWire interactive boundary extraction algorithm, this paper proposes a new algorithm focused on improving the speed of LiveWire. First, a Haar wavelet transform is applied to the input image, and the boundary is extracted on the low-resolution image obtained from the wavelet transform. Second, the LiveWire shortest path is calculated using a direction search over the control point set, exploiting the spatial relationship between the two control points the user provides in real time. Third, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool when optimizing shortest-path values, reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region-iterative backward-projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image into a single-pixel boundary after the inverse Haar wavelet transform. The proposed algorithm combines the advantages of the Haar wavelet transform, whose fast image decomposition and reconstruction are well matched to the texture features of the image, with those of the optimal path search based on the control point set direction search, which reduces the time complexity of the original algorithm. The algorithm therefore improves the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively. Together, these methods substantially improve the execution efficiency and robustness of the algorithm.
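
    The first step, a one-level 2-D Haar decomposition whose low-pass (LL) band supplies the low-resolution image for the fast path search, can be sketched as follows (the 1/4 normalization is one common convention):

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform: average (LL) plus
    horizontal/vertical/diagonal detail sub-bands, each half size."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0   # low-resolution image for the path search
    lh = (a - b + c - d) / 4.0
    hl = (a + b - c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of the decomposition above."""
    h, w = ll.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = ll + lh + hl + hh
    img[0::2, 1::2] = ll - lh + hl - hh
    img[1::2, 0::2] = ll + lh - hl - hh
    img[1::2, 1::2] = ll - lh - hl + hh
    return img
```

    Extracting the boundary on `ll` quarters the number of pixels the shortest-path search must visit; the inverse transform then maps the result back to full resolution, where the backward-projection step thins it to a single-pixel boundary.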

  8. Ultrafast all-optical imaging technique using low-temperature grown GaAs/AlxGa1 - xAs multiple-quantum-well semiconductor

    NASA Astrophysics Data System (ADS)

    Gao, Guilong; Tian, Jinshou; Wang, Tao; He, Kai; Zhang, Chunmin; Zhang, Jun; Chen, Shaorong; Jia, Hui; Yuan, Fenfang; Liang, Lingliang; Yan, Xin; Li, Shaohui; Wang, Chao; Yin, Fei

    2017-11-01

    We report and experimentally demonstrate an ultrafast all-optical imaging technique capable of single-shot ultrafast recording with a picosecond-scale temporal resolution and a micron-order two-dimensional spatial resolution. A GaAs/AlxGa1 - xAs multiple-quantum-well (MQW) semiconductor with a picosecond response time, grown using molecular beam epitaxy (MBE) at a low temperature (LT), is used for the first time in ultrafast imaging technology. The semiconductor transforms the signal beam information to the probe beam, the birefringent delay crystal time-serializes the input probe beam, and the beam displacer maps different polarization probe beams onto different detector locations, resulting in two frames with an approximately 9 ps temporal separation and approximately 25 lp/mm spatial resolution in the visible range.

  9. Pulse-Echo Ultrasonic Imaging Method for Eliminating Sample Thickness Variation Effects

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1997-01-01

    A pulse-echo immersion method for ultrasonic evaluation of a material, which accounts for and eliminates non-levelness in the equipment setup and sample thickness variation effects, employs a single transducer with automatic scanning and digital imaging to obtain an image of a material property, such as pore fraction. The non-levelness and thickness variation effects are accounted for by pre-scan adjustments of the time window to ensure that the echoes received at each scan point are gated in the center of the window. This information is input into the scan file so that, during automatic scanning for the material evaluation, each received echo is centered in its time window. A cross-correlation function calculates the velocity at each scan point, which is then mapped proportionally to a color or grey scale and displayed on a video screen.
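
    The velocity measurement rests on cross-correlation time-delay estimation between echoes; a minimal sketch with synthetic pulses (sample-resolution lags only, whereas a practical system would interpolate to sub-sample precision):

```python
import numpy as np

def echo_delay(ref_echo, echo, fs):
    """Time shift between two echoes, from the peak of their
    cross-correlation; fs is the sampling rate in Hz."""
    xc = np.correlate(echo, ref_echo, mode="full")
    lag = int(np.argmax(xc)) - (ref_echo.size - 1)
    return lag / fs
```

    Given the delay between successive back-wall echoes and the local sample thickness, velocity follows as 2·thickness/delay, which is the per-scan-point quantity mapped to the grey scale.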

  10. Convolution Operation of Optical Information via Quantum Storage

    NASA Astrophysics Data System (ADS)

    Li, Zhixiang; Liu, Jianji; Fan, Hongming; Zhang, Guoquan

    2017-06-01

    We proposed a novel method to achieve optical convolution of two input images via quantum storage based on electromagnetically induced transparency (EIT) effect. By placing an EIT media in the confocal Fourier plane of the 4f-imaging system, the optical convolution of the two input images can be achieved in the image plane.
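
    The optical principle at work is the convolution theorem: multiplication in the Fourier (confocal) plane corresponds to convolution in the image plane. A numerical counterpart, with circular boundary conditions assumed:

```python
import numpy as np

def convolve_4f(img1, img2):
    """Circular convolution of two images by pointwise multiplication
    of their 2-D Fourier transforms -- the numerical counterpart of
    modulating the field in the confocal plane of a 4f system."""
    return np.real(np.fft.ifft2(np.fft.fft2(img1) * np.fft.fft2(img2)))
```

    Convolving with a shifted delta function simply shifts the image, a convenient sanity check on the FFT conventions.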

  11. Online image classification under monotonic decision boundary constraint

    NASA Astrophysics Data System (ADS)

    Lu, Cheng; Allebach, Jan; Wagner, Jerry; Pitta, Brandi; Larson, David; Guo, Yandong

    2015-01-01

    Image classification is a prerequisite for copy-quality enhancement in an all-in-one (AIO) device, which comprises a printer and a scanner and can be used to scan, copy, and print. Different processing pipelines are provided in an AIO printer, each designed specifically for one type of input image to achieve optimal output image quality. A typical approach to this problem is to apply a Support Vector Machine (SVM) to classify the input image and feed it to the corresponding processing pipeline. Online SVM training can help improve classification performance as input images accumulate. At the same time, we want to make a quick decision on the input image to speed up classification, which means the AIO device sometimes does not need to scan the entire image before making a final decision. These two constraints, online SVM and quick decision, raise two questions: 1) what features are suitable for classification, and 2) how the decision boundary should be controlled in online SVM training. This paper discusses the compatibility of online SVM with quick-decision capability.
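
    The abstract does not give its update rule, so as one plausible sketch of online SVM training, here is a Pegasos-style stochastic subgradient step for a linear SVM (the regularization constant is illustrative):

```python
import numpy as np

def online_svm_step(w, x, y, t, lam=0.01):
    """One Pegasos-style online update: shrink the weights, and if the
    sample (x, y in {-1,+1}) violates the margin, move toward it.
    t is the 1-based step counter controlling the learning rate."""
    eta = 1.0 / (lam * t)
    if y * (w @ x) < 1.0:                    # margin violated
        return (1.0 - eta * lam) * w + eta * y * x
    return (1.0 - eta * lam) * w             # margin satisfied: decay only
```

    Each newly accumulated scan would contribute one such update, letting the decision boundary adapt to the user's document mix without retraining from scratch.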

  12. Generating Mosaics of Astronomical Images

    NASA Technical Reports Server (NTRS)

    Bergou, Attila; Berriman, Bruce; Good, John; Jacob, Joseph; Katz, Daniel; Laity, Anastasia; Prince, Thomas; Williams, Roy

    2005-01-01

    "Montage" is the name of a service of the National Virtual Observatory (NVO), and of software being developed to implement the service via the World Wide Web. Montage generates science-grade custom mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. "Science-grade" in this context signifies that terrestrial and instrumental features are removed from images in a way that can be described quantitatively. "Custom" refers to user-specified parameters of projection, coordinates, size, rotation, and spatial sampling. The greatest value of Montage is expected to lie in its ability to analyze images at multiple wavelengths, delivering them on a common projection, coordinate system, and spatial sampling, and thereby enabling further analysis as though they were part of a single, multi-wavelength image. Montage will be deployed as a computation-intensive service through existing astronomy portals and other Web sites. It will be integrated into the emerging NVO architecture and will be executed on the TeraGrid. The Montage software will also be portable and publicly available.

  13. Single-Frame Terrain Mapping Software for Robotic Vehicles

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.

    2011-01-01

    This software is a component of an unmanned ground vehicle (UGV) perception system that builds compact, single-frame terrain maps for distribution to other systems, such as a world model or an operator control unit, over a local area network (LAN). Each cell in the map encodes an elevation value, terrain classification, object classification, terrain traversability, terrain roughness, and a confidence value into four bytes of memory. The input to this software component is a range image (from a lidar or stereo vision system) and, optionally, a terrain classification image and an object classification image, both registered to the range image. The single-frame terrain map generates estimates of the support surface elevation, ground cover elevation, and minimum canopy elevation; generates terrain traversability cost; detects low overhangs and high-density obstacles; and can perform geometry-based terrain classification (ground, ground cover, unknown). A new origin is automatically selected for each single-frame terrain map in global coordinates such that it coincides with the corner of a world map cell; that way, single-frame terrain maps line up correctly with the world map, facilitating the merging of map data into the world map. Instead of using 32 bits to store a floating-point elevation for each map cell, the vehicle elevation is assigned to the map origin elevation, and each cell reports its change in elevation (from the origin elevation) as a number of discrete steps. The single-frame terrain map elevation resolution is 2 cm; at that resolution, terrain elevation from -20.5 to +20.5 m (with respect to the vehicle's elevation) is encoded in 11 bits. For each four-byte map cell, bits are assigned to encode elevation, terrain roughness, terrain classification, object classification, terrain traversability cost, and a confidence value. The vehicle's current position and orientation, the map origin, and the map cell resolution are all included in a header for each map. The map is compressed into a vector prior to delivery to another system.
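
    The four-byte cell encoding can be sketched as a bit-packing scheme. The 11-bit signed elevation field follows the text; the widths chosen for the remaining fields are our assumption for illustration, not the published layout.

```python
def pack_cell(elev_steps, roughness, terrain_cls, obj_cls, cost, conf):
    """Pack one map cell into a 32-bit word:
    11-bit signed elevation steps (2 cm each, about +/-20.5 m),
    then 5-bit roughness and four 4-bit fields (assumed widths)."""
    assert -1024 <= elev_steps < 1024          # fits in 11 bits
    word  = (elev_steps & 0x7FF)               # bits 0-10: elevation
    word |= (roughness   & 0x1F) << 11         # bits 11-15: roughness
    word |= (terrain_cls & 0x0F) << 16         # bits 16-19: terrain class
    word |= (obj_cls     & 0x0F) << 20         # bits 20-23: object class
    word |= (cost        & 0x0F) << 24         # bits 24-27: traversability
    word |= (conf        & 0x0F) << 28         # bits 28-31: confidence
    return word

def unpack_elev(word):
    """Recover the signed elevation step count (sign-extend 11 bits)."""
    e = word & 0x7FF
    return e - 2048 if e >= 1024 else e
```

    With 2 cm steps, 11 bits span 2048 steps, i.e. roughly -20.5 m to +20.5 m relative to the map origin elevation, matching the range quoted in the text.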

  14. The effect of input data transformations on object-based image analysis

    PubMed Central

    LIPPITT, CHRISTOPHER D.; COULTER, LLOYD L.; FREEMAN, MARY; LAMANTIA-BISHOP, JEFFREY; PANG, WYSON; STOW, DOUGLAS A.

    2011-01-01

    The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest. PMID:21673829

  15. Natural image sequences constrain dynamic receptive fields and imply a sparse code.

    PubMed

    Häusler, Chris; Susemihl, Alex; Nawrot, Martin P

    2013-11-06

    In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  16. Adaptive WTA with an analog VLSI neuromorphic learning chip.

    PubMed

    Häfliger, Philipp

    2007-03-01

    In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.

  17. Reconstruction of input functions from a dynamic PET image with sequential administration of 15O2 and [Formula: see text] for noninvasive and ultra-rapid measurement of CBF, OEF, and CMRO2.

    PubMed

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Hiroyuki; Yamamoto, Yuka; Hatakeyama, Tetsuhiro; Nishiyama, Yoshihiro

    2018-05-01

    CBF, OEF, and CMRO2 images can be quantitatively assessed using PET. Their calculation requires arterial input functions, which normally require an invasive procedure. The aim of the present study was to develop a non-invasive approach with image-derived input functions (IDIFs) using an image from an ultra-rapid 15O2 and C15O2 protocol. Our technique uses a formula that expresses the input in terms of a tissue curve and rate constants. For multiple tissue curves, the rate constants were estimated so as to minimize the differences among the inputs expressed from the individual tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n = 24). The estimated IDIFs reproduced the measured ones well. The differences in the CBF, OEF, and CMRO2 values calculated by the two methods were small (<10%) relative to the invasive method, and the values showed tight correlations (r = 0.97). Simulation showed that errors associated with the assumed parameters were less than ~10%. Our results demonstrate that IDIFs can be reconstructed from tissue curves, suggesting the possibility of a non-invasive technique to assess CBF, OEF, and CMRO2.

  18. Vector generator scan converter

    DOEpatents

    Moore, James M.; Leighton, James F.

    1990-01-01

    High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O (input/output) channel to a vector generator scan converter, which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphics image from vector parameters calculated by the microprocessor, an image buffer memory for storing the reconstructed graphics image, and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty-fold.
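
    The core job of the vector generator hardware, rasterizing a line from its vector parameters into the image buffer, is classically done with Bresenham's algorithm (shown here as an illustration; the patent text does not name the specific algorithm):

```python
def draw_vector(buf, x0, y0, x1, y1):
    """Bresenham's line algorithm: set the pixels of buf (a list of
    rows) along the line from (x0, y0) to (x1, y1), integer-only."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        buf[y0][x0] = 1                  # plot the current pixel
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:                     # step in x
            err += dy
            x0 += sx
        if e2 <= dx:                     # step in y
            err += dx
            y0 += sy
    return buf
```

    Because only the endpoints (the vector parameters) cross the I/O channel while the filled raster is generated locally, the channel bottleneck the patent describes is avoided.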

  19. Benthic Habitat Mapping by Combining Lyzenga’s Optical Model and Relative Water Depth Model in Lintea Island, Southeast Sulawesi

    NASA Astrophysics Data System (ADS)

    Hafizt, M.; Manessa, M. D. M.; Adi, N. S.; Prayudha, B.

    2017-12-01

    Benthic habitat mapping using satellite data is a challenging task for practitioners and academics, as benthic objects are covered by a light-attenuating water column that obscures object discrimination. One common method to reduce this water-column effect is to use a depth-invariant index (DII) image. However, applying the correction in shallow coastal areas is challenging, as a dark object such as seagrass can have a very low pixel value, preventing its reliable identification and classification. This limitation can be solved by applying the classification process separately to areas at different water depth levels. The water depth level can be extracted from satellite imagery using the Relative Water Depth Index (RWDI). This study proposed a new approach to improve mapping accuracy, particularly for dark benthic objects, by combining the DII of Lyzenga's water-column correction method with the RWDI of Stumpf's method. The research was conducted in Lintea Island, which has a high variation of benthic cover, using Sentinel-2A imagery. To assess the effectiveness of the proposed approach for benthic habitat mapping, two different classification procedures were implemented. The first is the commonly applied method in benthic habitat mapping, in which the DII image is used as input data for the entire coastal area regardless of depth variation. The second is the proposed new approach, whose initial step separates the study area into shallow and deep waters using the RWDI image. The shallow area was then classified using the sunglint-corrected image as input data, and the deep area was classified using the DII image as input data. The final classification maps of those two areas were merged into a single benthic habitat map. A confusion matrix was then applied to evaluate the mapping accuracy of the final map.
The results show that the proposed approach can map all benthic objects over all depth ranges and achieves better accuracy than the classification map produced using only the DII.
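    Lyzenga's depth-invariant index for one band pair can be sketched as follows: the attenuation-coefficient ratio k_i/k_j is estimated from log radiances sampled over a uniform bottom (e.g., sand) at varying depths, and the index then depends on bottom type but not depth. This is a generic illustration of the published formulas, not the authors' code; the deep-water radiances and test values are made up:

    ```python
    import numpy as np

    def attenuation_ratio(log_bi, log_bj):
        """Estimate the attenuation-coefficient ratio k_i/k_j from
        log-radiance samples over a uniform bottom (e.g. sand) seen at
        varying depths, via Lyzenga's variance/covariance recipe."""
        c = np.cov(log_bi, log_bj)
        a = (c[0, 0] - c[1, 1]) / (2.0 * c[0, 1])
        return a + np.sqrt(a * a + 1.0)

    def depth_invariant_index(band_i, band_j, deep_i, deep_j, k_ratio):
        """DII for one band pair: deep-water radiance is subtracted before
        taking logs, so the index separates bottom types, not depths."""
        li = np.log(np.clip(band_i - deep_i, 1e-9, None))
        lj = np.log(np.clip(band_j - deep_j, 1e-9, None))
        return li - k_ratio * lj
    ```

    For synthetic radiances following exponential two-way attenuation, pixels of the same bottom type at different depths collapse onto a single DII value, which is exactly what makes the index usable as classifier input.
    
    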

  20. Super-Resolution Enhancement From Multiple Overlapping Images: A Fractional Area Technique

    NASA Astrophysics Data System (ADS)

    Michaels, Joshua A.

    With the availability of large quantities of relatively low-resolution data from several decades of spaceborne imaging, methods of creating an accurate, higher-resolution image from multiple lower-resolution images (i.e., super-resolution) have been developed almost since such imagery has been around. The fractional-area super-resolution technique developed in this thesis has never before been documented. Satellite orbits, like Landsat's, have a quantifiable variation, which means each image is not centered on the exact same spot more than once, and the overlapping information from these multiple images may be used for super-resolution enhancement. By splitting a single initial pixel into many smaller, desired pixels, a relationship can be created between them using the ratio of the area within the initial pixel. The ideal goal for this technique is to obtain smaller pixels with exact values and no error, yielding a better potential result than methods that yield interpolated pixel values with consequential loss of spatial resolution. A Fortran 95 program was developed to perform all calculations associated with the fractional-area super-resolution technique. The fractional areas are calculated using traditional trigonometry and coordinate geometry, and the Linear Algebra Package (LAPACK; Anderson et al., 1999) is used to solve for the higher-resolution pixel values. To demonstrate proof of concept, a synthetic dataset was created using the intrinsic Fortran random number generator and Adobe Illustrator CS4 (for geometry). To test the real-life application, digital pictures were taken, with a Sony DSC-S600 digital point-and-shoot camera on a tripod, of a large US geological map under fluorescent lighting. While the fractional-area super-resolution technique works in perfect synthetic conditions, it did not produce a reasonable or consistent solution in the digital photograph enhancement test.
The prohibitive amount of processing time (up to 60 days for a relatively small enhancement area) severely limits the practical usefulness of fractional-area super-resolution. Fractional-area super-resolution is very sensitive to relative input image co-registration, which must be accurate to a sub-pixel degree. However, if input conditions permit, this technique could be applied as a "pinpoint" super-resolution method, restricted to very small areas with very good input image co-registration.
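    A one-dimensional toy version of the fractional-area idea (the thesis works in 2D and solves with LAPACK; `numpy.linalg.lstsq`, which wraps LAPACK's least-squares driver, stands in here): each coarse pixel value is the fractional-area-weighted mean of the fine pixels it covers, and overlapping sub-pixel-shifted observations stack into a linear system for the fine values:

    ```python
    import numpy as np

    def overlap(a0, a1, b0, b1):
        # length of the overlap between intervals [a0, a1] and [b0, b1]
        return max(0.0, min(a1, b1) - max(a0, b0))

    def coarse_row(n_fine, scale, pos):
        """Fractional-area weights mapping fine pixels [i, i+1) to a coarse
        pixel spanning [pos, pos + scale); parts of the window hanging off
        the grid are treated as zero."""
        w = np.array([overlap(pos, pos + scale, i, i + 1.0)
                      for i in range(n_fine)])
        return w / scale

    # overlapping sub-pixel-shifted coarse observations of one fine signal
    n_fine, scale = 8, 2
    positions = np.arange(-1.5, 8.0, 0.5)
    A = np.vstack([coarse_row(n_fine, scale, p) for p in positions])
    truth = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
    b = A @ truth                              # noiseless coarse pixel values
    recovered, *_ = np.linalg.lstsq(A, b, rcond=None)
    ```

    With enough distinct shifts the system reaches full column rank and the noiseless fine values are recovered exactly, mirroring the thesis's synthetic proof of concept; with noisy, imperfectly co-registered inputs the same system becomes ill-conditioned, which is consistent with the failure reported for the photographic test.
    
    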

  1. An algorithm to estimate building heights from Google street-view imagery using single view metrology across a representational state transfer system

    NASA Astrophysics Data System (ADS)

    Díaz, Elkin; Arguello, Henry

    2016-05-01

    Urban ecosystem studies require monitoring, controlling and planning to analyze building density, urban density, urban planning, atmospheric modeling and land use. In urban planning, there are many methods for building height estimation using optical remote sensing images. These methods, however, depend highly on sun illumination and cloud-free weather. In contrast, high-resolution synthetic aperture radar provides images independent of daytime and weather conditions, although these images rely on special hardware and expensive acquisition. Most of the biggest cities around the world have been photographed by Google Street View under different conditions. Thus, thousands of images from the principal streets of a city can be accessed online. The availability of this and similar rich city imagery, such as StreetSide from Microsoft, represents a huge opportunity in computer vision because these images can be used as input in many applications such as 3D modeling, segmentation, recognition and stereo correspondence. This paper proposes a novel algorithm to estimate building heights using public Google Street View imagery. The objective of this work is to obtain thousands of geo-referenced images from Google Street View using a representational state transfer system, and to estimate the average building height in each image using single view metrology. Furthermore, the resulting measurements and image metadata are used to derive a layer of heights in a Google map available online. The experimental results show that the proposed algorithm can estimate an accurate average building height map from thousands of Google Street View images of any city.

  2. Four-dimensional optical coherence tomography imaging of total liquid ventilated rats

    NASA Astrophysics Data System (ADS)

    Kirsten, Lars; Schnabel, Christian; Gaertner, Maria; Koch, Edmund

    2013-06-01

    Optical coherence tomography (OCT) can be utilized for the spatially and temporally resolved visualization of alveolar tissue and its dynamics in rodent models, which allows the investigation of lung dynamics on the microscopic scale of single alveoli. The findings could provide experimental input data for numerical simulations of lung tissue mechanics and could support the development of protective ventilation strategies. Real four-dimensional OCT imaging permits the acquisition of several OCT stacks within a single ventilation cycle. Thus, the entire four-dimensional information is directly obtained. Compared to conventional virtual four-dimensional OCT imaging, where the image acquisition is extended over many ventilation cycles and is triggered on pressure levels, real four-dimensional OCT is less vulnerable to motion artifacts and non-reproducible movement of the lung tissue over subsequent ventilation cycles, which greatly reduces image artifacts. However, OCT imaging of alveolar tissue is affected by refraction and total internal reflection at air-tissue interfaces. Thus, only the first alveolar layer beneath the pleura is visible. To circumvent this effect, total liquid ventilation can be carried out to match the refractive indices of lung tissue and the breathing medium, which improves the visibility of the alveolar structure, the image quality and the penetration depth, and provides the real structure of the alveolar tissue. In this study, a combination of four-dimensional OCT imaging with total liquid ventilation allowed the visualization of the alveolar structure in rat lung tissue, benefiting from the improved depth range beneath the pleura and from the high spatial and temporal resolution.

  3. Influence of speckle image reconstruction on photometric precision for large solar telescopes

    NASA Astrophysics Data System (ADS)

    Peck, C. L.; Wöger, F.; Marino, J.

    2017-11-01

    Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.

  4. Surprise! Infants consider possible bases of generalization for a single input example.

    PubMed

    Gerken, LouAnn; Dawson, Colin; Chatila, Razanne; Tenenbaum, Josh

    2015-01-01

    Infants have been shown to generalize from a small number of input examples. However, existing studies allow two possible means of generalization. One is via a process of noting similarities shared by several examples. Alternatively, generalization may reflect an implicit desire to explain the input. The latter view suggests that generalization might occur when even a single input example is surprising, given the learner's current model of the domain. To test the possibility that infants are able to generalize based on a single example, we familiarized 9-month-olds with a single three-syllable input example that contained either one surprising feature (syllable repetition, Experiment 1) or two features (repetition and a rare syllable, Experiment 2). In both experiments, infants generalized only to new strings that maintained all of the surprising features from familiarization. This research suggests that surprise can promote very rapid generalization. © 2014 John Wiley & Sons Ltd.

  5. Extended image differencing for change detection in UAV video mosaics

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang; Schumann, Arne

    2014-03-01

    Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes on a short time scale, i.e. the observations are taken at time intervals from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view, and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames to a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking, such as geometric distortions and artifacts at moving objects, have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to those for single video frames and are useful for interactive image exploitation due to the larger scene coverage.
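    The differencing step described above can be sketched as follows for an already registered image pair; the combination weight `alpha` and the threshold factor `k` are illustrative choices, not the paper's calibrated values:

    ```python
    import numpy as np

    def change_mask(img_a, img_b, alpha=0.5, k=2.5):
        """Change mask for a registered image pair: adaptive threshold on a
        linear combination of intensity and gradient-magnitude difference
        images (alpha and k are illustrative parameters)."""
        def grad_mag(img):
            gy, gx = np.gradient(img.astype(float))
            return np.hypot(gx, gy)

        d_int = np.abs(img_a.astype(float) - img_b.astype(float))
        d_grad = np.abs(grad_mag(img_a) - grad_mag(img_b))
        d = (1.0 - alpha) * d_int + alpha * d_grad
        threshold = d.mean() + k * d.std()  # adapts to overall scene difference
        return d > threshold
    ```

    Because the threshold follows the statistics of the combined difference image, globally distributed nuisance differences (illumination drift, compression noise) raise the threshold, while compact changes such as a newly parked vehicle still stand out.
    
    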

  6. Neurophysiology of Flight in Wild-Type and a Mutant Drosophila

    PubMed Central

    Levine, Jon D.; Wyman, Robert J.

    1973-01-01

    We report the flight motor output pattern in Drosophila melanogaster and the neural network responsible for it, and describe the bursting motor output pattern in a mutant. There are 26 singly-innervated muscle fibers. There are two basic firing patterns: phase progression, shown by units that receive a common input but have no cross-connections, and phase stability, in which synergic units, receiving a common input and inhibiting each other, fire in a repeating sequence. Flies carrying the mutation stripe cannot fly. Their motor output is reduced to a short duration, high-frequency burst, but the patterning within bursts shows many of the characteristics of the wild type. The mutation is restricted in its effect, as the nervous system has normal morphology by light microscopy and other behaviors of the mutant are normal. PMID:4197927

  7. Saliency Detection on Light Field.

    PubMed

    Li, Nianyi; Ye, Jinwei; Ji, Yu; Ling, Haibin; Yu, Jingyi

    2017-08-01

    Existing saliency detection approaches use images as inputs and are sensitive to foreground/background similarities, complex background textures, and occlusions. We explore the problem of using light fields as input for saliency detection. Our technique is enabled by the availability of commercial plenoptic cameras that capture the light field of a scene in a single shot. We show that the unique refocusing capability of light fields provides useful focusness, depths, and objectness cues. We further develop a new saliency detection algorithm tailored for light fields. To validate our approach, we acquire a light field database of a range of indoor and outdoor scenes and generate the ground truth saliency map. Experiments show that our saliency detection scheme can robustly handle challenging scenarios such as similar foreground and background, cluttered background, complex occlusions, etc., and achieve high accuracy and robustness.

  8. Simultaneous two-wavelength tri-window common-path digital holography

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Shan, Mingguang; Zhong, Zhi

    2018-06-01

    Two-wavelength common-path off-axis digital holography is proposed with a tri-window in a single shot. It is established using a standard 4f optical image system with a 2D Ronchi grating placed outside the Fourier plane. The input plane consists of three windows: one for the object and the other two for reference. Aided by a spatial filter together with two orthogonal linear polarizers in the Fourier plane, the two-wavelength information is encoded into a multiplexed hologram with two orthogonal spatial frequencies that enable full separation of spectral information in the digital Fourier space without resolution loss. Theoretical analysis and experimental results illustrate that our approach can simultaneously perform quantitative phase imaging at two wavelengths.

  9. Team Electronic Gameplay Combining Different Means of Control

    NASA Technical Reports Server (NTRS)

    Palsson, Olafur S. (Inventor); Pope, Alan T. (Inventor)

    2014-01-01

    Disclosed are methods and apparatuses for modifying the effect of an operator controlled input device on an interactive device to encourage the self-regulation of at least one physiological activity by a person different than the operator. The interactive device comprises a display area which depicts images and apparatus for receiving at least one input from the operator controlled input device to thus permit the operator to control and interact with at least some of the depicted images. One effect modification comprises measurement of the physiological activity of a person different from the operator, while modifying the ability of the operator to control and interact with at least some of the depicted images by modifying the input from the operator controlled input device in response to changes in the measured physiological signal.

  10. Rapid Monte Carlo simulation of detector DQE(f)

    PubMed Central

    Star-Lack, Josh; Sun, Mingshan; Meyer, Andre; Morf, Daniel; Constantin, Dragos; Fahrig, Rebecca; Abel, Eric

    2014-01-01

    Purpose: Performance optimization of indirect x-ray detectors requires proper characterization of both ionizing (gamma) and optical photon transport in a heterogeneous medium. As the tool of choice for modeling detector physics, Monte Carlo methods have failed to gain traction as a design utility, due mostly to excessive simulation times and a lack of convenient simulation packages. The most important figure-of-merit in assessing detector performance is the detective quantum efficiency (DQE), for which most of the computational burden has traditionally been associated with the determination of the noise power spectrum (NPS) from an ensemble of flood images, each conventionally having 10^7-10^9 detected gamma photons. In this work, the authors show that the idealized conditions inherent in a numerical simulation allow for a dramatic reduction in the number of gamma and optical photons required to accurately predict the NPS. Methods: The authors derived an expression for the mean squared error (MSE) of a simulated NPS when computed using the International Electrotechnical Commission-recommended technique based on taking the 2D Fourier transform of flood images. It is shown that the MSE is inversely proportional to the number of flood images, and is independent of the input fluence provided that the input fluence is above a minimal value that avoids biasing the estimate. The authors then propose to further lower the input fluence so that each event creates a point-spread function rather than a flood field. The authors use this finding as the foundation for a novel algorithm in which the characteristic MTF(f), NPS(f), and DQE(f) curves are simultaneously generated from the results of a single run. The authors also investigate lowering the number of optical photons used in a scintillator simulation to further increase efficiency. 
Simulation results are compared with measurements performed on a Varian AS1000 portal imager, and with a previously published simulation performed using clinical fluence levels. Results: On the order of only 10–100 gamma photons per flood image were required to be detected to avoid biasing the NPS estimate. This allowed for a factor of 10^7 reduction in fluence compared to clinical levels with no loss of accuracy. An optimal signal-to-noise ratio (SNR) was achieved by increasing the number of flood images from a typical value of 100 up to 500, thereby illustrating the importance of flood image quantity over the number of gammas per flood. For the point-spread ensemble technique, an additional 2× reduction in the number of incident gammas was realized. As a result, when modeling gamma transport in a thick pixelated array, the simulation time was reduced from 2.5 × 10^6 CPU min if using clinical fluence levels to 3.1 CPU min if using optimized fluence levels while also producing a higher SNR. The AS1000 DQE(f) simulation entailing both optical and radiative transport matched experimental results to within 11%, and required 14.5 min to complete on a single CPU. Conclusions: The authors demonstrate the feasibility of accurately modeling x-ray detector DQE(f) with completion times on the order of several minutes using a single CPU. Convenience of simulation can be achieved using GEANT4 which offers both gamma and optical photon transport capabilities. PMID:24593734
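    The flood-image NPS estimate at the center of this analysis can be sketched as follows. This is a simplified version of the IEC recipe (per-image mean subtraction stands in for the fitted trend removal), and it reproduces the paper's key scaling: with white noise the true NPS is flat, and the per-bin scatter of the estimate shrinks as flood images are added, mirroring MSE ∝ 1/N:

    ```python
    import numpy as np

    def nps_2d(floods, pixel_pitch=1.0):
        """NPS estimate from an ensemble of flood images: average squared
        modulus of the 2D DFT of each mean-subtracted flood, scaled by the
        pixel area over the number of pixels."""
        floods = np.asarray(floods, dtype=float)
        _, ny, nx = floods.shape
        centered = floods - floods.mean(axis=(1, 2), keepdims=True)
        spectra = np.abs(np.fft.fft2(centered)) ** 2
        return spectra.mean(axis=0) * pixel_pitch**2 / (nx * ny)

    # white noise of variance 4 (unit pixel pitch): the true NPS is 4.0
    # at every non-DC frequency bin
    rng = np.random.default_rng(1)
    nps_few = nps_2d(rng.normal(0.0, 2.0, (10, 32, 32)))
    nps_many = nps_2d(rng.normal(0.0, 2.0, (500, 32, 32)))
    ```

    Comparing `nps_few` and `nps_many` bin by bin shows why the paper's optimal SNR came from more flood images rather than more gammas per flood: each added flood averages down the exponential-distributed per-bin fluctuations.
    
    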

  11. Rapid Monte Carlo simulation of detector DQE(f)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Star-Lack, Josh, E-mail: josh.starlack@varian.com; Sun, Mingshan; Abel, Eric

    2014-03-15

    Purpose: Performance optimization of indirect x-ray detectors requires proper characterization of both ionizing (gamma) and optical photon transport in a heterogeneous medium. As the tool of choice for modeling detector physics, Monte Carlo methods have failed to gain traction as a design utility, due mostly to excessive simulation times and a lack of convenient simulation packages. The most important figure-of-merit in assessing detector performance is the detective quantum efficiency (DQE), for which most of the computational burden has traditionally been associated with the determination of the noise power spectrum (NPS) from an ensemble of flood images, each conventionally having 10^7-10^9 detected gamma photons. In this work, the authors show that the idealized conditions inherent in a numerical simulation allow for a dramatic reduction in the number of gamma and optical photons required to accurately predict the NPS. Methods: The authors derived an expression for the mean squared error (MSE) of a simulated NPS when computed using the International Electrotechnical Commission-recommended technique based on taking the 2D Fourier transform of flood images. It is shown that the MSE is inversely proportional to the number of flood images, and is independent of the input fluence provided that the input fluence is above a minimal value that avoids biasing the estimate. The authors then propose to further lower the input fluence so that each event creates a point-spread function rather than a flood field. The authors use this finding as the foundation for a novel algorithm in which the characteristic MTF(f), NPS(f), and DQE(f) curves are simultaneously generated from the results of a single run. The authors also investigate lowering the number of optical photons used in a scintillator simulation to further increase efficiency. 
Simulation results are compared with measurements performed on a Varian AS1000 portal imager, and with a previously published simulation performed using clinical fluence levels. Results: On the order of only 10–100 gamma photons per flood image were required to be detected to avoid biasing the NPS estimate. This allowed for a factor of 10^7 reduction in fluence compared to clinical levels with no loss of accuracy. An optimal signal-to-noise ratio (SNR) was achieved by increasing the number of flood images from a typical value of 100 up to 500, thereby illustrating the importance of flood image quantity over the number of gammas per flood. For the point-spread ensemble technique, an additional 2× reduction in the number of incident gammas was realized. As a result, when modeling gamma transport in a thick pixelated array, the simulation time was reduced from 2.5 × 10^6 CPU min if using clinical fluence levels to 3.1 CPU min if using optimized fluence levels while also producing a higher SNR. The AS1000 DQE(f) simulation entailing both optical and radiative transport matched experimental results to within 11%, and required 14.5 min to complete on a single CPU. Conclusions: The authors demonstrate the feasibility of accurately modeling x-ray detector DQE(f) with completion times on the order of several minutes using a single CPU. Convenience of simulation can be achieved using GEANT4 which offers both gamma and optical photon transport capabilities.

  12. Photonic crystal fiber-generated coherent supercontinuum for fast stain-free histopathology and intraoperative multiphoton imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tu, Haohua; You, Sixian; Sun, Yi; Spillman, Darold R.; Ray, Partha S.; Liu, George; Boppart, Stephen A.

    2017-03-01

    In contrast to a broadband Ti:sapphire laser that mode locks a continuum of emission and enables broadband biophotonic applications, supercontinuum generation moves the spectral broadening outside the laser cavity into a nonlinear medium, and may thus improve environmental stability and more readily enable clinical translation. Using a photonic crystal fiber for passive spectral broadening, this technique becomes widely accessible from a narrowband fixed-wavelength mode-locked laser. Currently, fiber supercontinuum sources have benefited single-photon biological imaging modalities, including light-sheet or confocal microscopy, diffuse optical tomography, and retinal optical coherence tomography. However, they have not fully benefited multiphoton biological imaging modalities with proven capability for high-resolution label-free molecular imaging. The reason can be attributed to the amplitude/phase noise of fiber supercontinuum, which is amplified from the intrinsic noise of the input laser and responsible for spectral decoherence. This instability deteriorates the performance of multiphoton imaging modalities more than that of single-photon imaging modalities. Building upon a framework of coherent fiber supercontinuum generation, we have avoided this instability or decoherence, and balanced the often conflicting needs to generate strong signal, prevent sample photodamage, minimize background noise, accelerate imaging speed, improve imaging depth, accommodate different modalities, and provide user-friendly operation. Our prototypical platforms have enabled fast stain-free histopathology of fresh tissue in both laboratory and intraoperative settings to discover a wide variety of imaging-based cancer biomarkers, which may reduce the cost and waiting stress associated with disease/cancer diagnosis. A clear path toward intraoperative multiphoton imaging can be envisioned to help pathologists and surgeons improve cancer surgery.

  13. THE SLOAN DIGITAL SKY SURVEY STRIPE 82 IMAGING DATA: DEPTH-OPTIMIZED CO-ADDS OVER 300 deg^2 IN FIVE FILTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Linhua; Fan, Xiaohui; McGreer, Ian D.

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ∼300 deg^2 on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.''2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ∼1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ∼90 deg^2 of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).
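    The weighting of single-epoch frames by seeing, transparency, and background noise can be illustrated with a simple scheme. The specific weight formula below, transparency² / (FWHM² · σ²), is a common choice for optimizing point-source depth and is assumed here for illustration, not taken from the paper:

    ```python
    import numpy as np

    def coadd(frames, fwhm, transparency, sky_sigma):
        """Depth-optimized co-addition sketch: weight each single-epoch
        frame by transparency**2 / (FWHM**2 * sky_sigma**2), favoring
        sharp, clear, low-background epochs, and return the weighted mean
        image together with the per-pixel weight image."""
        frames = np.asarray(frames, dtype=float)
        w = (np.asarray(transparency, dtype=float) ** 2
             / (np.asarray(fwhm, dtype=float) ** 2
                * np.asarray(sky_sigma, dtype=float) ** 2))
        weights = np.broadcast_to(w[:, None, None], frames.shape)
        weight_image = weights.sum(axis=0)
        coadded = (weights * frames).sum(axis=0) / weight_image
        return coadded, weight_image
    ```

    In a production pipeline the weights also vary per pixel (masked defects, satellite trails), which is why the released products pair each science image with a weight image recording the relative weight at every pixel.
    
    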

  14. The Sloan Digital Sky Survey Stripe 82 Imaging Data: Depth-Optimized Co-adds Over 300 deg^2 in Five Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Linhua; Fan, Xiaohui; Bian, Fuyan

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ~300 deg^2 on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.''2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ~1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ~90 deg^2 of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).

  15. Pulse-coupled neural network sensor fusion

    NASA Astrophysics Data System (ADS)

    Johnson, John L.; Schamschula, Marius P.; Inguva, Ramarao; Caulfield, H. John

    1998-03-01

    Perception is assisted by sensed impressions of the outside world but not determined by them. The primary organ of perception is the brain and, in particular, the cortex. With that in mind, we have sought to see how a computer-modeled cortex--the PCNN or Pulse Coupled Neural Network--performs as a sensor fusing element. In essence, the PCNN comprises an array of integrate-and-fire neurons with one neuron for each input pixel. In such a system, the neurons corresponding to bright pixels reach firing threshold faster than the neurons corresponding to duller pixels. Thus, firing rate is proportional to brightness. In PCNNs, when a neuron fires it sends some of the resulting signal to its neighbors. This linking can cause a near-threshold neuron to fire earlier than it would have otherwise. This leads to synchronization of the pulses across large regions of the image. We can simplify the 3D PCNN output by integrating out the time dimension. Over a long enough time interval, the resulting 2D (x,y) pattern IS the input image. The PCNN has taken it apart and put it back together again. The shorter-term time integrals are interesting in themselves and will be commented upon in the paper. The main thrust of this paper is the use of multiple PCNNs mutually coupled in various ways to assemble a single 2D pattern or fused image. Results of experiments on PCNN image fusion and an evaluation of its advantages are our primary objectives.
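    The single-PCNN dynamics described above can be sketched as follows. All parameters (linking strength, threshold jump, decay rate, kernel weights) are illustrative choices, not values from the paper:

    ```python
    import numpy as np

    def neighbor_link(y):
        """Linking input: weighted sum of the previous step's pulses in a
        3x3 neighbourhood (diagonals down-weighted)."""
        p = np.pad(y, 1)
        return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
                + 0.5 * (p[:-2, :-2] + p[:-2, 2:] + p[2:, :-2] + p[2:, 2:]))

    def pcnn_fire_counts(stimulus, steps=60, beta=0.2, v_theta=20.0, decay=0.8):
        """Minimal pulse-coupled neural network, one neuron per pixel:
        internal activity U = F*(1 + beta*L) is compared against a dynamic
        threshold that decays between pulses and jumps by v_theta after
        each pulse. Brighter pixels fire more often, and linking pulls
        near-threshold neighbours into synchrony."""
        y = np.zeros_like(stimulus, dtype=float)
        theta = np.full(stimulus.shape, v_theta, dtype=float)
        counts = np.zeros_like(stimulus, dtype=float)
        for _ in range(steps):
            u = stimulus * (1.0 + beta * neighbor_link(y))
            y = (u > theta).astype(float)
            theta = decay * theta + v_theta * y
            counts += y
        return counts
    ```

    Summing the pulse trains over time (`counts`) is exactly the "integrating out the time dimension" step: brighter regions accumulate more pulses, so the time-integrated output recovers the brightness ordering of the input image.
    
    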

  16. Comparative assessment of pressure field reconstructions from particle image velocimetry measurements and Lagrangian particle tracking

    NASA Astrophysics Data System (ADS)

    van Gent, P. L.; Michaelis, D.; van Oudheusden, B. W.; Weiss, P.-É.; de Kat, R.; Laskari, A.; Jeon, Y. J.; David, L.; Schanz, D.; Huhn, F.; Gesemann, S.; Novara, M.; McPhaden, C.; Neeteson, N. J.; Rival, D. E.; Schneiders, J. F. G.; Schrijer, F. F. J.

    2017-04-01

    A test case for pressure field reconstruction from particle image velocimetry (PIV) and Lagrangian particle tracking (LPT) has been developed by constructing a simulated experiment from a zonal detached eddy simulation for an axisymmetric base flow at Mach 0.7. The test case comprises sequences of four subsequent particle images (representing multi-pulse data) as well as continuous time-resolved data which can realistically only be obtained for low-speed flows. Particle images were processed using tomographic PIV processing as well as the LPT algorithm `Shake-The-Box' (STB). Multiple pressure field reconstruction techniques have subsequently been applied to the PIV results (Eulerian approach, iterative least-square pseudo-tracking, Taylor's hypothesis approach, and instantaneous Vortex-in-Cell) and LPT results (FlowFit, Vortex-in-Cell-plus, Voronoi-based pressure evaluation, and iterative least-square pseudo-tracking). All methods were able to reconstruct the main features of the instantaneous pressure fields, including methods that reconstruct pressure from a single PIV velocity snapshot. Highly accurate reconstructed pressure fields could be obtained using LPT approaches in combination with more advanced techniques. In general, the use of longer series of time-resolved input data, when available, allows more accurate pressure field reconstruction. Noise in the input data typically reduces the accuracy of the reconstructed pressure fields, but none of the techniques proved to be critically sensitive to the amount of noise added in the present test case.

  17. Increasing the space-time product of super-resolution structured illumination microscopy by means of two-pattern illumination

    NASA Astrophysics Data System (ADS)

    Inochkin, F. M.; Pozzi, P.; Bezzubik, V. V.; Belashenkov, N. R.

    2017-06-01

    A super-resolution image reconstruction method based on the structured illumination microscopy (SIM) principle with a reduced and simplified pattern set is presented. The method needs only 2 sinusoidal patterns shifted by half a period for each spatial direction of reconstruction, instead of the minimum of 3 required by previously known methods. The method is based on estimating redundant frequency components in the acquired set of modulated images. Digital processing is based on linear operations. When applied to several spatial orientations, the image set can be further reduced to a single pattern for each spatial orientation, complemented by a single non-modulated image shared by all orientations. For the case of two spatial orientations, the total input image set is thus reduced to 3 images, providing up to a 2-fold improvement in data acquisition time compared to the conventional 3-pattern SIM method. Using the simplified pattern design, the field of view can be doubled with the same number of spatial light modulator raster elements, resulting in a total 4-fold increase in the space-time product. The method requires precise knowledge of the optical transfer function (OTF). The key limitation is the thickness of the object layer that scatters or emits light, which must be sufficiently small relative to the lens depth of field. Numerical simulations and experimental results are presented. Experimental results are obtained on a SIM setup with a spatial light modulator based on a 1920x1080 digital micromirror device.

  18. Two-dimensional imaging via a narrowband MIMO radar system with two perpendicular linear arrays.

    PubMed

    Wang, Dang-wei; Ma, Xiao-yan; Su, Yi

    2010-05-01

    This paper presents a system model and method for 2-D imaging via a narrowband multiple-input multiple-output (MIMO) radar system with two perpendicular linear arrays. The imaging formulation for our method is developed through Fourier integral processing, and the parameters of the antenna array, including the cross-range resolution, required size, and sampling interval, are also examined. Different from the spatial sequential procedure of sampling the scattered echoes during multiple snapshot illuminations in inverse synthetic aperture radar (ISAR) imaging, the proposed method utilizes a spatial parallel procedure to sample the scattered echoes during a single snapshot illumination. Consequently, the complex motion compensation of ISAR imaging can be avoided. Moreover, in our array configuration, multiple narrowband spectrum-shared waveforms coded with orthogonal polyphase sequences are employed. The mainlobes of the compressed echoes from the different filter bands can be located in the same range bin, and thus the range alignment of classical ISAR imaging is not necessary. Numerical simulations based on synthetic data are provided for testing the proposed method.

  19. 13-fold resolution gain through turbid layer via translated unknown speckle illumination

    PubMed Central

    Guo, Kaikai; Zhang, Zibang; Jiang, Shaowei; Liao, Jun; Zhong, Jingang; Eldar, Yonina C.; Zheng, Guoan

    2017-01-01

    Fluorescence imaging through a turbid layer holds great promise for various biophotonics applications. Conventional wavefront shaping techniques aim to create and scan a focus spot through the turbid layer. Finding the correct input wavefront without direct access to the target plane remains a critical challenge. In this paper, we explore a new strategy for imaging through a turbid layer with a large field of view. In our setup, a fluorescence sample is sandwiched between two turbid layers. Instead of generating one focus spot via wavefront shaping, we use an unshaped beam to illuminate the turbid layer and generate an unknown speckle pattern at the target plane over a wide field of view. By tilting the input wavefront, we raster scan the unknown speckle pattern via the memory effect and capture the corresponding low-resolution fluorescence images through the turbid layer. Different from wavefront-shaping-based single-spot scanning, the proposed approach employs many spots (i.e., speckles) in parallel to extend the field of view. Based on all captured images, we jointly recover the fluorescence object, the unknown optical transfer function of the turbid layer, the translation step size, and the unknown speckle pattern. Without direct access to the object plane or knowledge of the turbid layer, we demonstrate a 13-fold resolution gain through the turbid layer using the reported strategy. We also demonstrate the use of this technique to improve the resolution of a low numerical aperture objective lens, allowing both a large field of view and high resolution to be obtained at the same time. The reported method provides insight for developing new fluorescence imaging platforms and may find applications in deep-tissue imaging. PMID:29359102

  20. Large-memory real-time multichannel multiplexed pattern recognition

    NASA Technical Reports Server (NTRS)

    Gregory, D. A.; Liu, H. K.

    1984-01-01

    The principle and experimental design of a real-time multichannel multiplexed optical pattern recognition system via use of a 25-focus dichromated gelatin holographic lens (hololens) are described. Each of the 25 foci of the hololens may have a storage and matched filtering capability approaching that of a single-lens correlator. If the space-bandwidth product of an input image is limited, as is true in most practical cases, the 25-focus hololens system has 25 times the capability of a single lens. Experimental results have shown that the interfilter noise is not serious. The system has already demonstrated the storage and recognition of over 70 matched filters - which is a larger capacity than any optical pattern recognition system reported to date.

  1. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  2. Effects of spatial resolution ratio in image fusion

    USGS Publications Warehouse

    Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.

    2008-01-01

    In image fusion, the spatial resolution ratio can be defined as the ratio between the spatial resolution of the high-resolution panchromatic image and that of the low-resolution multispectral image. This paper attempts to assess the effects of the spatial resolution ratio of the input images on the quality of the fused image. Experimental results indicate that a spatial resolution ratio of 1:10 or higher is desired for optimal multisensor image fusion provided the input panchromatic image is not downsampled to a coarser resolution. Due to the synthetic pixels generated from resampling, the quality of the fused image decreases as the spatial resolution ratio decreases (e.g. from 1:10 to 1:30). However, even with a spatial resolution ratio as small as 1:30, the quality of the fused image is still better than the original multispectral image alone for feature interpretation. In cases where the spatial resolution ratio is too small (e.g. 1:30), to obtain better spectral integrity of the fused image, one may downsample the input high-resolution panchromatic image to a slightly lower resolution before fusing it with the multispectral image.

  3. Vibration Pattern Imager (VPI): A control and data acquisition system for scanning laser vibrometers

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Brown, Donald E.; Shaffer, Thomas A.

    1993-01-01

    The Vibration Pattern Imager (VPI) system was designed to control and acquire data from scanning laser vibrometer sensors. The PC-based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor, but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. The sensor itself is not part of the VPI system. A graphical interface program, which runs on a PC under the MS-DOS operating system, functions in an interactive mode and communicates with the DSP and I/O boards in a user-friendly fashion through the aid of pop-up menus. Two types of data may be acquired with the VPI system: single point or 'full field.' In the single point mode, time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and is stored by the PC. The position of the measuring point (adjusted by mirrors in the sensor) is controlled via a mouse input. The mouse input is translated to output voltages by the D/A converter on the I/O board to control the mirror servos. In the 'full field' mode, the measurement point is moved over a user-selectable rectangular area. The time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and converted to a root-mean-square (rms) value by the DSP board. The rms 'full field' velocity distribution is then uploaded for display and storage on the PC.
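The 'full field' reduction described above, a time series per scan point collapsed to a single rms value, amounts to the following one-liner; the array layout (rows x cols x samples) is an assumption for illustration, not the VPI data format.

```python
import numpy as np

def rms_map(samples):
    """Collapse a per-scan-point velocity time series to an rms value.

    samples: hypothetical array of shape (rows, cols, n_samples), one
    time series per scan point, as sampled by the A/D converter.
    Returns the rms 'full field' velocity distribution (rows, cols).
    """
    return np.sqrt(np.mean(np.square(samples), axis=-1))
```

For a pure sinusoidal velocity signal of amplitude A, the rms value is A/sqrt(2), which is what the DSP board's per-point reduction would report.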

  4. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    DTIC Science & Technology

    2015-12-24

    Ripple-Carry RCA Ripple-Carry Adder RF Radio Frequency RMS Root-Mean-Square SEU Single Event Upset SIPI Signal and Image Processing Institute SNR...correctness, where 0.5 < p < 1, and a probability (1−p) of error. Errors could be caused by noise, radio frequency (RF) interference, crosstalk...utilized in the Apollo Guidance Computer is the three input NOR Gate. . . At the time that the decision was made to use in- 11 tegrated circuits, the

  6. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    PubMed Central

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image while preserving brightness and details better than some other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm can not only enhance image information effectively but also preserve the brightness and details of the original image well. PMID:29403529
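The segmentation-then-equalization structure described above can be sketched as follows. This is not the paper's exact algorithm: the segment boundaries at m - s, m, m + s (mean m, standard deviation s) are an assumed reading of "based on the mean and variance", and rank-based equalization within each sub-range stands in for the paper's bin-modification step.

```python
import numpy as np

def mvsihe(img):
    """Sketch of mean/variance-based subimage histogram equalization.

    The intensity range is split at m - s, m, m + s, each of the four
    segments is equalized within its own sub-range, and the results are
    concatenated, so global brightness ordering is preserved.
    """
    img = img.astype(float)
    m, s = img.mean(), img.std()
    top = img.max() + 1e-9                       # half-open upper bound
    bounds = np.clip([img.min(), m - s, m, m + s, top], img.min(), top)
    out = img.copy()
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (img >= lo) & (img < hi)
        if hi <= lo or mask.sum() < 2:
            continue
        vals = img[mask]
        # rank-based (fine-bin) equalization of this segment onto [lo, hi)
        ranks = np.searchsorted(np.sort(vals), vals, side="right") / vals.size
        out[mask] = lo + ranks * (hi - lo)
    return out
```

Because each segment is equalized only within its own sub-range, the mapping is monotone and the output stays inside the input's intensity range, which is how the method keeps overall brightness close to the original.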

  7. A text input system developed by using lips image recognition based LabVIEW for the seriously disabled.

    PubMed

    Chen, S C; Shao, C L; Liang, C K; Lin, S W; Huang, T H; Hsieh, M C; Yang, C H; Luo, C H; Wuo, C M

    2004-01-01

    In this paper, we present a text input system for the seriously disabled by using lips image recognition based on LabVIEW. This system can be divided into the software subsystem and the hardware subsystem. In the software subsystem, we adopted the technique of image processing to recognize the status of mouth-opened or mouth-closed depending the relative distance between the upper lip and the lower lip. In the hardware subsystem, parallel port built in PC is used to transmit the recognized result of mouth status to the Morse-code text input system. Integrating the software subsystem with the hardware subsystem, we implement a text input system by using lips image recognition programmed in LabVIEW language. We hope the system can help the seriously disabled to communicate with normal people more easily.

  8. Optimization of Adaboost Algorithm for Sonar Target Detection in a Multi-Stage ATR System

    NASA Technical Reports Server (NTRS)

    Lin, Tsung Han (Hank)

    2011-01-01

    JPL has developed a multi-stage Automated Target Recognition (ATR) system to locate objects in images. First, input images are preprocessed and sent to a Grayscale Optical Correlator (GOC) filter to identify possible regions-of-interest (ROIs). Second, feature extraction operations are performed using Texton filters and Principal Component Analysis (PCA). Finally, the features are fed to a classifier, to identify ROIs that contain the targets. Previous work used the Feed-forward Back-propagation Neural Network for classification. In this project we investigate a version of Adaboost as a classifier for comparison. The version we used is known as GentleBoost. We used the boosted decision tree as the weak classifier. We have tested our ATR system against real-world sonar images using the Adaboost approach. Results indicate an improvement in performance over a single Neural Network design.
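A minimal GentleBoost round looks like the sketch below. It uses a regression stump (a depth-1 decision tree) fitted by weighted least squares as the weak learner, a simplification of the boosted decision trees used in the project; the classifier interface and threshold search are illustrative, not JPL's code.

```python
import numpy as np

def fit_stump(X, y, w):
    """Weighted-least-squares regression stump: h(x) = a if x[j] > t else b."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            right = X[:, j] > t
            wl, wr = w[~right].sum(), w[right].sum()
            if wl == 0 or wr == 0:
                continue
            b = np.dot(w[~right], y[~right]) / wl    # left-branch output
            a = np.dot(w[right], y[right]) / wr      # right-branch output
            h = np.where(right, a, b)
            err = np.dot(w, (h - y) ** 2)            # weighted squared error
            if best is None or err < best[0]:
                best = (err, j, t, a, b)
    return best[1:]

def gentleboost(X, y, rounds=10):
    """GentleBoost with stumps, labels y in {-1, +1}.

    Each round fits a stump to the weighted problem and adds its
    real-valued output to the ensemble score F; sample weights are
    updated multiplicatively by exp(-y * h).
    """
    n = len(y)
    w = np.full(n, 1.0 / n)
    F = np.zeros(n)
    model = []
    for _ in range(rounds):
        j, t, a, b = fit_stump(X, y, w)
        h = np.where(X[:, j] > t, a, b)
        F += h
        w *= np.exp(-y * h)
        w /= w.sum()
        model.append((j, t, a, b))
    return model, np.sign(F)
```

The real-valued (rather than hard ±1) weak-learner output and the least-squares fitting step are what distinguish GentleBoost from discrete AdaBoost.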

  9. Wavelet denoising of multiframe optical coherence tomography data

    PubMed Central

    Mayer, Markus A.; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.

    2012-01-01

    We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising of the final averaged image. Instead, it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged, and reconstructed. At a signal-to-noise gain of about 100% we observe only a minor sharpness decrease, measured as a full-width-at-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise. PMID:22435103
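The idea of weighting detail coefficients before averaging can be sketched with a single-level Haar decomposition. The cross-frame Wiener-style weight below is a stand-in for the paper's local noise and structure estimate, and even image dimensions are assumed.

```python
import numpy as np

def haar2(x):
    """Single-level orthonormal 2-D Haar decomposition (even dims assumed)."""
    def step(a):  # lowpass/highpass pairs along the last axis
        return ((a[..., ::2] + a[..., 1::2]) / np.sqrt(2),
                (a[..., ::2] - a[..., 1::2]) / np.sqrt(2))
    lo, hi = step(x)
    ll, lh = step(lo.swapaxes(-1, -2))
    hl, hh = step(hi.swapaxes(-1, -2))
    return [c.swapaxes(-1, -2) for c in (ll, lh, hl, hh)]

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2 (perfect reconstruction)."""
    def istep(lo, hi):
        out = np.empty(lo.shape[:-1] + (2 * lo.shape[-1],))
        out[..., ::2] = (lo + hi) / np.sqrt(2)
        out[..., 1::2] = (lo - hi) / np.sqrt(2)
        return out
    lo = istep(ll.swapaxes(-1, -2), lh.swapaxes(-1, -2)).swapaxes(-1, -2)
    hi = istep(hl.swapaxes(-1, -2), hh.swapaxes(-1, -2)).swapaxes(-1, -2)
    return istep(lo, hi)

def multiframe_denoise(frames):
    """Average frames in the wavelet domain with weighted detail bands.

    Detail coefficients that agree across frames (structure) keep their
    weight; coefficients that fluctuate frame-to-frame (speckle) are
    attenuated by a Wiener-like factor before reconstruction.
    """
    coeffs = [haar2(f) for f in frames]
    ll = np.mean([c[0] for c in coeffs], axis=0)   # approximation band: plain average
    details = []
    for band in (1, 2, 3):
        stack = np.array([c[band] for c in coeffs])
        mean, var = stack.mean(0), stack.var(0)
        weight = mean ** 2 / (mean ** 2 + var + 1e-12)
        details.append(weight * mean)
    return ihaar2(ll, *details)
```

Compared with plain frame averaging, the weighting suppresses detail coefficients whose frame-to-frame variance is large relative to their mean, which is where uncorrelated speckle lives.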

  10. Wavelet denoising of multiframe optical coherence tomography data.

    PubMed

    Mayer, Markus A; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y; Tornow, Ralf P

    2012-03-01

    We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising of the final averaged image. Instead, it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged, and reconstructed. At a signal-to-noise gain of about 100% we observe only a minor sharpness decrease, measured as a full-width-at-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise.

  11. Coherent diffractive imaging of solid state reactions in zinc oxide crystals

    NASA Astrophysics Data System (ADS)

    Leake, Steven J.; Harder, Ross; Robinson, Ian K.

    2011-11-01

    We investigated the doping of zinc oxide (ZnO) microcrystals with iron and nickel via in situ coherent x-ray diffractive imaging (CXDI) in vacuum. Evaporated thin metal films were deposited onto the ZnO microcrystals. A single crystal was selected and tracked through annealing cycles. A solid state reaction was observed in both the iron and nickel experiments using CXDI. A combination of the shrink-wrap and guided hybrid-input-output phasing methods was applied to retrieve the electron density. The resolution, determined via the phase retrieval transfer function, was 33 nm (half order). The resulting images are nevertheless sensitive to sub-angstrom displacements. The exterior of the microcrystal was found to degrade dramatically. The annealing of ZnO microcrystals coated with metal thin films proved an unsuitable doping method. In addition, the observed defect structure of one crystal was attributed to an array of defects and was found to change upon annealing.
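The hybrid-input-output step of the phasing procedure can be sketched as below. Here the support is fixed rather than re-estimated by shrink wrap as in the paper, and the feedback parameter beta = 0.9 is an illustrative choice; this is the generic Fienup iteration, not the authors' guided variant.

```python
import numpy as np

def hio(magnitude, support, iters=200, beta=0.9, seed=0):
    """Fienup hybrid input-output phase retrieval (sketch).

    Alternates between imposing the measured Fourier magnitude and the
    real-space support/positivity constraint: pixels that violate the
    constraint are driven down by a beta-scaled feedback term instead of
    being simply zeroed (which would stagnate).
    """
    rng = np.random.default_rng(seed)
    g = rng.random(magnitude.shape) * support          # random start inside support
    for _ in range(iters):
        G = np.fft.fft2(g)
        G = magnitude * np.exp(1j * np.angle(G))       # impose measured magnitude
        g_new = np.fft.ifft2(G).real
        violates = (~support) | (g_new < 0)            # outside support or negative
        g = np.where(violates, g - beta * g_new, g_new)
    return g * support
```

In the guided scheme of the paper, many such runs from different random starts are combined, and shrink wrap progressively tightens the support mask as the estimate improves.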

  12. Pulse-echo ultrasonic imaging method for eliminating sample thickness variation effects

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1995-01-01

    A pulse-echo, immersion method for ultrasonic evaluation of a material is discussed. It accounts for and eliminates nonlevelness in the equipment set-up and sample thickness variation effects, and employs a single transducer, automatic scanning, and digital imaging to obtain an image of a property of the material, such as pore fraction. The nonlevelness and thickness variation effects are accounted for by pre-scan adjustments of the time window to ensure that the echoes received at each scan point are gated in the center of the window. This information is input into the scan file so that, during the automatic scanning for the material evaluation, each received echo is centered in its time window. A cross-correlation function calculates the velocity at each scan point, which is then mapped proportionally to a color or grey scale and displayed on a video screen.
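The per-scan-point velocity computation can be sketched as follows, assuming the two gated echoes and the sampling rate are available as arrays (the variable names and the use of successive back-surface echoes are illustrative assumptions). In pulse-echo mode the sound path between successive back-surface echoes is twice the sample thickness.

```python
import numpy as np

def echo_delay(first_echo, second_echo, fs):
    """Time delay between two digitized echoes via the cross-correlation peak.

    np.correlate in 'full' mode returns lags from -(N-1) to +(N-1); the
    argmax offset by N-1 gives the lag in samples, divided by fs for seconds.
    """
    xc = np.correlate(second_echo, first_echo, mode="full")
    lag = np.argmax(xc) - (len(first_echo) - 1)
    return lag / fs

def velocity(first_echo, second_echo, fs, thickness):
    """Ultrasonic velocity at one scan point: round-trip path = 2 * thickness."""
    return 2.0 * thickness / echo_delay(first_echo, second_echo, fs)
```

The cross-correlation peak is robust to additive noise on the echoes, which is why it is preferred over simple threshold-crossing time pickers for this kind of velocity imaging.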

  13. Optical to optical interface device

    NASA Technical Reports Server (NTRS)

    Oliver, D. S.; Vohl, P.; Nisenson, P.

    1972-01-01

    The development, fabrication, and testing of a preliminary model of an optical-to-optical (noncoherent-to-coherent) interface device for use in coherent optical parallel processing systems are described. The developed device demonstrates a capability for accepting as an input a scene illuminated by a noncoherent radiation source and providing as an output a coherent light beam spatially modulated to represent the original noncoherent scene. The converter device developed under this contract employs a Pockels readout optical modulator (PROM). This is a photosensitive electro-optic element which can sense and electrostatically store optical images. The stored images can be simultaneously or subsequently read out optically by utilizing the electrostatic storage pattern to control an electro-optic light modulating property of the PROM. The readout process is parallel, as no scanning mechanism is required. The PROM provides the functions of optical image sensing, modulation, and storage in a single active material.

  14. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor †

    PubMed Central

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-01-01

    The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode from a single pixel into the different taps of the exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes. PMID:29510599
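The classical estimation step that the multi-tap sensor feeds, at least three images under known light directions solved per pixel under a Lambertian model, can be sketched as:

```python
import numpy as np

def photometric_stereo(images, lights):
    """Classical photometric stereo: surface normals from >= 3 images.

    images: (k, h, w) intensities; lights: (k, 3) unit light directions.
    Assuming a Lambertian surface, I = albedo * (L . n), so solving the
    per-pixel least-squares system gives G = albedo * n; the norm of G
    is the albedo and its direction is the surface normal.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                          # (k, h*w)
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)     # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    n = np.where(albedo > 1e-12, G / np.maximum(albedo, 1e-12), 0.0)
    return n.reshape(3, h, w), albedo.reshape(h, w)
```

The contribution of the paper is in the acquisition, not this solver: the multi-tap CMOS sensor captures the k differently lit exposures with almost identical timing, so the static-scene assumption behind the least-squares step holds even for moving objects.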

  15. Architecture of the parallel hierarchical network for fast image recognition

    NASA Astrophysics Data System (ADS)

    Timchenko, Leonid; Wójcik, Waldemar; Kokriatskaia, Natalia; Kutaev, Yuriy; Ivasyuk, Igor; Kotyra, Andrzej; Smailova, Saule

    2016-09-01

    Multistage integration of visual information in the brain allows humans to respond quickly to most significant stimuli while maintaining their ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing includes the main types of cortical multistage convergence. The input images are mapped into a flexible hierarchy that reflects the complexity of the image data. Procedures for the temporal image decomposition and hierarchy formation are described in mathematical expressions. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates structure on different hierarchical levels in the image. At each processing stage a single output result is computed to allow a quick response of the system. The result is presented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match. The forecasting method works as follows: in the results synchronization block, network-processed data arrive at a database, where a sample of the most correlated data is drawn using service parameters of the parallel-hierarchical network.

  16. Wavelength meter having single mode fiber optics multiplexed inputs

    DOEpatents

    Hackel, R.P.; Paris, R.D.; Feldman, M.

    1993-02-23

    A wavelength meter having a single mode fiber optics input is disclosed. The single mode fiber enables a plurality of laser beams to be multiplexed to form a multiplexed input to the wavelength meter. The wavelength meter can provide a determination of the wavelength of any one or all of the plurality of laser beams by suitable processing. Another aspect of the present invention is that one of the laser beams could be a known reference laser having a predetermined wavelength. Hence, the improved wavelength meter can provide an on-line calibration capability with the reference laser input as one of the plurality of laser beams.

  17. Wavelength meter having single mode fiber optics multiplexed inputs

    DOEpatents

    Hackel, Richard P.; Paris, Robert D.; Feldman, Mark

    1993-01-01

    A wavelength meter having a single mode fiber optics input is disclosed. The single mode fiber enables a plurality of laser beams to be multiplexed to form a multiplexed input to the wavelength meter. The wavelength meter can provide a determination of the wavelength of any one or all of the plurality of laser beams by suitable processing. Another aspect of the present invention is that one of the laser beams could be a known reference laser having a predetermined wavelength. Hence, the improved wavelength meter can provide an on-line calibration capability with the reference laser input as one of the plurality of laser beams.

  18. Quantum design rules for single molecule logic gates.

    PubMed

    Renaud, N; Hliwa, M; Joachim, C

    2011-08-28

    Recent publications have demonstrated how to implement a NOR logic gate with a single molecule using its interaction with two surface atoms as logical inputs [W. Soe et al., ACS Nano, 2011, 5, 1436]. We demonstrate here how this NOR logic gate belongs to the general family of quantum logic gates where the Boolean truth table results from a full control of the quantum trajectory of the electron transfer process through the molecule by very local and classical inputs applied to the molecule. A new molecular OR gate is proposed in which the logical inputs are also single metal atoms, one per logical input.

  19. Micromachined mirrors for raster-scanning displays and optical fiber switches

    NASA Astrophysics Data System (ADS)

    Hagelin, Paul Merritt

    Micromachines and micro-optics have the potential to shrink the size and cost of free-space optical systems, enabling a new generation of high-performance, compact projection displays and telecommunications equipment. In raster-scanning displays and optical fiber switches, a free-space optical beam can interact with multiple tilt-up micromirrors fabricated on a single substrate. The size, rotation angle, and flatness of the mirror surfaces determine the number of pixels in a raster display or ports in an optical switch. Single-chip and two-chip optical raster display systems demonstrate static mirror curvature correction, an integrated electronic driver board, and dynamic micromirror performance. Correction for curvature caused by a stress gradient in the micromirror leads to resolution of 102 by 119 pixels in the single-chip display. The optical design of the two-chip display features in-situ mirror curvature measurement and adjustable image magnification with a single output lens. An electronic driver board synchronizes modulation of the optical source with micromirror actuation for the display of images. Dynamic off-axis mirror motion is shown to have minimal influence on resolution. The confocal switch, a free-space optical fiber cross-connect, incorporates micromirrors having a design similar to the image-refresh scanner. Two micromirror arrays redirect optical beams from an input fiber array to the output fibers. The switch architecture supports simultaneous switching of multiple wavelength channels. A 2x2 switch configuration, using single-mode optical fiber at 1550 nm, is demonstrated with insertion loss of -4.2 dB and cross-talk of -50.5 dB. The micromirrors have sufficient size and angular range for scaling to a 32x32 cross-connect switch that has low insertion-loss and low cross-talk.

  20. A novel structure-aware sparse learning algorithm for brain imaging genetics.

    PubMed

    Du, Lei; Jingwen, Yan; Kim, Sungeun; Risacher, Shannon L; Huang, Heng; Inlow, Mark; Moore, Jason H; Saykin, Andrew J; Shen, Li

    2014-01-01

    Brain imaging genetics is an emergent research field where the association between genetic variations such as single nucleotide polymorphisms (SNPs) and neuroimaging quantitative traits (QTs) is evaluated. Sparse canonical correlation analysis (SCCA) is a bi-multivariate analysis method that has the potential to reveal complex multi-SNP-multi-QT associations. Most existing SCCA algorithms are designed using the soft threshold strategy, which assumes that the features in the data are independent from each other. This independence assumption usually does not hold in imaging genetic data, and thus inevitably limits the capability of yielding optimal solutions. We propose a novel structure-aware SCCA (denoted as S2CCA) algorithm to not only eliminate the independence assumption for the input data, but also incorporate group-like structure in the model. Empirical comparison with a widely used SCCA implementation, on both simulated and real imaging genetic data, demonstrated that S2CCA could yield improved prediction performance and biologically meaningful findings.

  1. Optical information authentication using compressed double-random-phase-encoded images and quick-response codes.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2015-03-09

    In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image with an attached QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by an iterative phase retrieval technique with the QR code. We compare this technique with two other methods proposed in the literature, i.e., Fresnel-domain information authentication based on classical DRPE with a holographic technique, and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of the optical information encryption and authentication system.
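    For readers unfamiliar with DRPE, the classical 4f version (not the simplified interferometer-free scheme of this paper) can be sketched with two unit-modulus random phase masks and a pair of Fourier transforms:

```python
import numpy as np

rng = np.random.default_rng(0)

def drpe_encrypt(img, m1, m2):
    # Classical double random phase encoding: multiply by an input-plane
    # phase mask, Fourier transform, multiply by a Fourier-plane mask,
    # and inverse transform back to the output plane.
    return np.fft.ifft2(np.fft.fft2(img * m1) * m2)

def drpe_decrypt(enc, m1, m2):
    # With unit-modulus masks, conjugation inverts each multiplication.
    return np.fft.ifft2(np.fft.fft2(enc) * np.conj(m2)) * np.conj(m1)

shape = (8, 8)
m1 = np.exp(2j * np.pi * rng.random(shape))  # decryption keys
m2 = np.exp(2j * np.pi * rng.random(shape))

img = rng.random(shape)               # plaintext image
enc = drpe_encrypt(img, m1, m2)       # white-noise-like ciphertext
rec = drpe_decrypt(enc, m1, m2).real  # recovered image
```

    Because both masks have unit modulus, conjugating them inverts the encoding exactly; the paper's contribution lies in authenticating with only a compressed (sparse) version of this phase information plus the QR code.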

  2. Photonic lantern adaptive spatial mode control in LMA fiber amplifiers.

    PubMed

    Montoya, Juan; Aleshire, Chris; Hwang, Christopher; Fontaine, Nicolas K; Velázquez-Benítez, Amado; Martz, Dale H; Fan, T Y; Ripin, Dan

    2016-02-22

    We demonstrate adaptive spatial mode control (ASMC) in few-mode double-clad large-mode-area (LMA) fiber amplifiers by using an all-fiber photonic lantern. Three single-mode fiber inputs are used to adaptively inject the appropriate superposition of input modes into a multimode gain fiber to achieve the desired mode at the output. By actively adjusting the relative phase of the single-mode inputs, near-unity coherent combination is achieved, resulting in a single fundamental mode at the output.

  3. Detecting aircraft with a low-resolution infrared sensor.

    PubMed

    Jakubowicz, Jérémie; Lefebvre, Sidonie; Maire, Florian; Moulines, Eric

    2012-06-01

    Existing computer simulations of aircraft infrared signature (IRS) do not account for the dispersion induced by uncertainty in input data, such as aircraft aspect angles and meteorological conditions. As a result, they are of little use for estimating the detection performance of IR optronic systems: such a scenario encompasses many possible situations that must indeed be addressed but cannot each be simulated individually. In this paper, we focus on low-resolution infrared sensors and propose a methodological approach for predicting the simulated IRS dispersion of poorly known aircraft and for performing aircraft detection on the resulting set of low-resolution infrared images. It is based on a sensitivity analysis, which identifies inputs that have negligible influence on the computed IRS and can be set to constant values; on a quasi-Monte Carlo survey of the code output dispersion; and on a new detection test taking advantage of level-set estimation. The method is illustrated in a typical scenario, i.e., a daylight air-to-ground full-frontal attack by a generic combat aircraft flying at low altitude, over a database of 90,000 simulated aircraft images. Assuming a white-noise or fractional Brownian background model, detection performance is very promising.

  4. On-chip skin color detection using a triple-well CMOS process

    NASA Astrophysics Data System (ADS)

    Boussaid, Farid; Chai, Douglas; Bouzerdoum, Abdesselam

    2004-03-01

    In this paper, a current-mode VLSI architecture enabling skin detection at read-out, without the need for any on-chip memory elements, is proposed. An important feature of the proposed architecture is that it removes the need for demosaicing. Color separation is achieved using the strong wavelength dependence of the absorption coefficient in silicon. This wavelength dependence causes very shallow absorption of blue light while allowing red light to penetrate deep into the silicon. A triple-well process, which allows a P-well to be placed inside an N-well, is chosen to fabricate three vertically integrated photodiodes acting as the RGB color detector for each pixel. Pixels of an input RGB image are classified as skin or non-skin pixels using a statistical skin color model chosen to offer an acceptable trade-off between skin detection performance and implementation complexity. A single processing unit is used to classify all pixels of the input RGB image. This results in reduced mismatch and an increased pixel fill factor. Furthermore, the proposed current-mode architecture is programmable, allowing external control of all classifier parameters to compensate for mismatch and changing lighting conditions.
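    As a rough illustration of the kind of low-complexity pixelwise rule such hardware favors, here is a commonly cited explicit RGB skin heuristic (Kovac/Peer-style); the chip's actual statistical model is not specified in this abstract, so treat the thresholds as assumptions:

```python
def is_skin_rgb(r, g, b):
    """Explicit RGB skin rule (daylight case), an illustrative heuristic.

    Not the paper's on-chip classifier; it merely shows how a pixel can
    be labeled skin/non-skin with a handful of comparisons, i.e. with
    very little hardware.
    """
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)
```

    Each pixel is classified independently, which matches the abstract's single processing unit streaming over the read-out.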

  5. Confluence or independence of microwave plasma bullets in atmospheric argon plasma jet plumes

    NASA Astrophysics Data System (ADS)

    Li, Ping; Chen, Zhaoquan; Mu, Haibao; Xu, Guimin; Yao, Congwei; Sun, Anbang; Zhou, Yuming; Zhang, Guanjun

    2018-03-01

    A plasma bullet is a guided ionization wave (streamer) that forms and propagates, normally in an atmospheric-pressure plasma jet (APPJ). In most cases, only one ionization front is produced in a dielectric tube. The present study shows that two or three ionization fronts can be generated in a single quartz tube by using a microwave coaxial resonator. The argon APPJ plumes, with a maximum length of 170 mm, can be driven by continuous microwaves or microwave pulses. When the input power is higher than 90 W, two or three ionization fronts propagate independently at first; thereafter, they merge to form a central plasma jet plume. At lower input power, by contrast, the plasma bullets move independently. For pulsed microwave discharges, discharge images captured by a fast camera show the ionization process in detail. Another interesting finding is that the brightest plasma jet plumes always appear at the shrinking phase. Both the discharge images and electromagnetic simulations suggest that the confluence or independent propagation of plasma bullets is resonantly excited by locally enhanced electric fields, in terms of the wave modes of traveling surface plasmon polaritons.

  6. Deinterlacing using modular neural network

    NASA Astrophysics Data System (ADS)

    Woo, Dong H.; Eom, Il K.; Kim, Yoo S.

    2004-05-01

    Deinterlacing is the conversion from interlaced to progressive scanning. While many previous algorithms based on weighted sums cause blurring in edge regions, deinterlacing with a neural network can reduce this blurring by learning to recover the high-frequency components, and it is robust to noise. In the proposed algorithm, the input image is divided into edge and smooth regions, and one neural network is assigned to each region. Each network therefore learns only similar patterns, which makes learning more effective and estimation more accurate. Even within each region, however, there are various patterns, such as long edges and texture in the edge region. To address this, a modular neural network is proposed in which two modules are combined at the output node: one handles the low-frequency features of the local area of the input image and the other the high-frequency features. With this structure, the modules learn different patterns and compensate for each other's weaknesses, adapting effectively to the various patterns within each region. In simulations, the proposed algorithm shows better performance than conventional deinterlacing methods and a single-network approach.
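    The region split can be mimicked with classical interpolators standing in for the two learned modules: line averaging where the field is locally smooth, edge-line averaging (ELA) where it is not. This sketch is a hand-coded stand-in, not the paper's neural network, and the 10-grey-level edge threshold is an assumption:

```python
import numpy as np

def deinterlace_field(field_rows):
    """Fill the missing (odd) lines of a frame given its even lines.

    Stand-in for the paper's two modules: plain line averaging in smooth
    regions, edge-line averaging (ELA) in edge regions. The neural
    modules in the paper learn these mappings instead of hard-coding them.
    """
    h, w = field_rows.shape
    out = np.zeros((2 * h - 1, w))
    out[0::2] = field_rows
    for i in range(1, 2 * h - 1, 2):
        a, b = out[i - 1], out[i + 1]
        for j in range(w):
            if j == 0 or j == w - 1 or abs(a[j] - b[j]) < 10:
                out[i, j] = (a[j] + b[j]) / 2.0          # smooth region
            else:
                # ELA: interpolate along the direction of least difference
                diffs = [abs(a[j - 1] - b[j + 1]),
                         abs(a[j] - b[j]),
                         abs(a[j + 1] - b[j - 1])]
                k = int(np.argmin(diffs))
                pairs = [(a[j - 1], b[j + 1]), (a[j], b[j]),
                         (a[j + 1], b[j - 1])]
                out[i, j] = sum(pairs[k]) / 2.0
    return out
```

    On a uniform field both branches reduce to simple averaging, so the reconstruction is exact there; the interesting cases are diagonal edges, where ELA follows the edge direction.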

  7. Dense periodical patterns in photonic devices: Technology for fabrication and device performance

    NASA Astrophysics Data System (ADS)

    Chandramohan, Sabarish

    For the fabrication, focused ion beam parameters are investigated to successfully fabricate dense periodical patterns, such as gratings, on hard transition-metal nitrides such as zirconium nitride. Transition-metal nitrides such as titanium nitride and zirconium nitride have recently been studied as alternative materials for plasmonic devices because of their plasmonic resonance in the visible and near-infrared ranges, their material strength, CMOS compatibility, and gold-like optical properties. Coupling of light on the surface of these materials using sub-micrometer gratings gives additional capabilities for wider applications. Here we report the fabrication of gratings on the surface of zirconium nitride using a 30 keV gallium-ion dual-beam focused ion beam (FIB). Scanning electron microscope imaging and atomic force microscope profiling are used to characterize the fabricated gratings. Appropriate values of FIB parameters such as ion beam current, magnification, dwell time, and milling rate are found for successful milling of dense patterns on zirconium nitride. For the device performance, a real-time image-processing algorithm is developed to enhance the sensitivity of an optical miniature spectrometer. The novel approach in this design is the use of a real-time image-processing algorithm to average the image intensity along the arc-shaped images registered by the monochromatic inputs on the CMOS image sensor. This approach collects light from the entire arc and thus enhances the sensitivity of the device. The algorithm is developed using a SiTiO2 planar waveguide. The accuracy of the mapping from the x-pixel scale of the CMOS image sensor to the wavelength spectrum of the miniature spectrometer is demonstrated by measuring the spectrum of a known LED source with a conventional desktop spectrometer and comparing it with the spectrum measured by the miniature spectrometer. The sensitivity of the miniature spectrometer is demonstrated using two methods.
In the first method, the input laser power is attenuated to 0.1 nW and the spectrum is measured using the miniature spectrometer. Even at this low input power, the spectrum of the monochromatic inputs is observed well above the noise level. The second method is a quantitative analysis that measures the absorption of CdSeS/ZnS quantum dots drop-cast between the gratings of a Ta2O5 planar single-mode waveguide. The expected guided-mode attenuation introduced by a monolayer of quantum dots is found to be approximately 11 times above the highest noise level in the absorption measurements. Thus, the miniature spectrometer is capable of resolving the signal from the noise even with the absorption introduced by only a monolayer of quantum dots.

  8. Tracer Kinetic Analysis of (S)-¹⁸F-THK5117 as a PET Tracer for Assessing Tau Pathology.

    PubMed

    Jonasson, My; Wall, Anders; Chiotis, Konstantinos; Saint-Aubert, Laure; Wilking, Helena; Sprycha, Margareta; Borg, Beatrice; Thibblin, Alf; Eriksson, Jonas; Sörensen, Jens; Antoni, Gunnar; Nordberg, Agneta; Lubberink, Mark

    2016-04-01

    Because a correlation between tau pathology and the clinical symptoms of Alzheimer disease (AD) has been hypothesized, there is increasing interest in developing PET tracers that bind specifically to tau protein. The aim of this study was to evaluate tracer kinetic models for quantitative analysis and generation of parametric images for the novel tau ligand (S)-(18)F-THK5117. Nine subjects (5 with AD, 4 with mild cognitive impairment) received a 90-min dynamic (S)-(18)F-THK5117 PET scan. Arterial blood was sampled for measurement of blood radioactivity and metabolite analysis. Volume-of-interest (VOI)-based analysis was performed using plasma-input models (single-tissue and 2-tissue compartment models [2TCM] and plasma-input Logan) and reference tissue models (simplified reference tissue model [SRTM], reference Logan, and SUV ratio [SUVr]). Cerebellum gray matter was used as the reference region. Voxel-level analysis was performed using basis function implementations of SRTM, reference Logan, and SUVr. Regionally averaged voxel values were compared with VOI-based values from the optimal reference tissue model, and simulations were made to assess accuracy and precision. In addition to the full 90 min, the initial 40 and 60 min of data were analyzed. Plasma-input Logan distribution volume ratio (DVR)-1 values agreed well with 2TCM DVR-1 values (R(2) = 0.99, slope = 0.96). SRTM binding potential (BP(ND)) and reference Logan DVR-1 values were highly correlated with plasma-input Logan DVR-1 (R(2) = 1.00, slope ≈ 1.00), whereas SUVr(70-90)-1 values correlated less well and overestimated binding. Agreement between parametric methods and SRTM was best for reference Logan (R(2) = 0.99, slope = 1.03). SUVr(70-90)-1 values were almost 3 times higher than BP(ND) values in white matter and 1.5 times higher in gray matter. Simulations showed poorer accuracy and precision for SUVr(70-90)-1 values than for the other reference methods.
SRTM BP(ND) and reference Logan DVR-1 values were not affected by a shorter scan duration of 60 min. SRTM BP(ND) and reference Logan DVR-1 values were highly correlated with plasma-input Logan DVR-1 values. VOI-based data analyses indicated robust results for scan durations of 60 min. Reference Logan generated quantitative (S)-(18)F-THK5117 DVR-1 parametric images with the greatest accuracy and precision and with a much lower white-matter signal than seen with SUVr(70-90)-1 images. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
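    The two reference-tissue measures compared above can be sketched with synthetic time-activity curves. This is a generic illustration of reference Logan (with the k2' term neglected) and SUVr, not the study's analysis code, and t_star = 40 min is an assumed linearization start:

```python
import numpy as np

def reference_logan_dvr(t, ct, cref, t_star=40.0):
    """Reference Logan graphical analysis (k2' term neglected).

    Regress int_0^t ct / ct(t) against int_0^t cref / ct(t) over frames
    after t_star; the slope estimates DVR.
    """
    int_ct = np.concatenate(([0.0], np.cumsum(np.diff(t) * (ct[1:] + ct[:-1]) / 2)))
    int_cr = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cref[1:] + cref[:-1]) / 2)))
    late = t >= t_star
    slope, _ = np.polyfit(int_cr[late] / ct[late], int_ct[late] / ct[late], 1)
    return slope

def suvr(t, ct, cref, t0=70.0, t1=90.0):
    # Ratio of mean target to mean reference activity in a late window.
    win = (t >= t0) & (t <= t1)
    return np.mean(ct[win]) / np.mean(cref[win])

# Synthetic curves: target exactly twice the reference, so DVR = SUVr = 2.
t = np.linspace(1, 90, 90)
cref = np.exp(-t / 60.0)
ct = 2.0 * cref
```

    With real data the two measures diverge (SUVr overestimates binding, as the study reports); they coincide here only because the synthetic curves are exactly proportional.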

  9. Single lens laser beam shaper

    DOEpatents

    Liu, Chuyu [Newport News, VA]; Zhang, Shukui [Yorktown, VA]

    2011-10-04

    A single lens bullet-shaped laser beam shaper capable of redistributing an arbitrary beam profile into any desired output profile, comprising a unitary lens comprising: a) a convex front input surface defining a focal point and a flat output portion at the focal point; and b) a cylindrical core portion having a flat input surface coincident with the flat output portion of the first input portion at the focal point, and a convex rear output surface remote from the convex front input surface.

  10. Comparison of the Diagnostic Accuracy of DSC- and Dynamic Contrast-Enhanced MRI in the Preoperative Grading of Astrocytomas.

    PubMed

    Nguyen, T B; Cron, G O; Perdrizet, K; Bezzina, K; Torres, C H; Chakraborty, S; Woulfe, J; Jansen, G H; Sinclair, J; Thornhill, R E; Foottit, C; Zanette, B; Cameron, I G

    2015-11-01

    Dynamic contrast-enhanced MR imaging parameters can be biased by poor measurement of the vascular input function. We have compared the diagnostic accuracy of dynamic contrast-enhanced MR imaging by using a phase-derived vascular input function and "bookend" T1 measurements with DSC MR imaging for preoperative grading of astrocytomas. This prospective study included 48 patients with a new pathologic diagnosis of an astrocytoma. Preoperative MR imaging was performed at 3T and included 2 injections of 5-mL gadobutrol for dynamic contrast-enhanced and DSC MR imaging. During dynamic contrast-enhanced MR imaging, both magnitude and phase images were acquired to estimate plasma volume obtained from the phase-derived vascular input function (Vp_Φ) and the volume transfer constant obtained from the phase-derived vascular input function (K(trans)_Φ), as well as plasma volume obtained from the magnitude-derived vascular input function (Vp_SI) and the volume transfer constant obtained from the magnitude-derived vascular input function (K(trans)_SI). From DSC MR imaging, corrected relative CBV was computed. Four ROIs were placed over the solid part of the tumor, and the highest value among the ROIs was recorded. A Mann-Whitney U test was used to test for differences between grades. Diagnostic accuracy was assessed by using receiver operating characteristic analysis. Vp_Φ and K(trans)_Φ values were lower for grade II compared with grade III astrocytomas (P < .05). Vp_SI and K(trans)_SI were not significantly different between grade II and grade III astrocytomas (P = .08-.15). Relative CBV and dynamic contrast-enhanced MR imaging parameters except for K(trans)_SI were lower for grade III compared with grade IV (P ≤ .05). In differentiating low- and high-grade astrocytomas, we found no statistically significant difference in diagnostic accuracy between relative CBV and dynamic contrast-enhanced MR imaging parameters.
In the preoperative grading of astrocytomas, the diagnostic accuracy of dynamic contrast-enhanced MR imaging parameters is similar to that of relative CBV. © 2015 by American Journal of Neuroradiology.
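    The dependence of dynamic contrast-enhanced parameters on the vascular input function can be seen in the (extended) Tofts forward model; this is a generic sketch with assumed parameter values, not the study's fitting pipeline:

```python
import numpy as np

def extended_tofts(t, cp, ktrans, kep, vp):
    """Tissue concentration from a vascular input function cp(t):

        C_t(t) = vp * cp(t) + ktrans * int_0^t cp(u) exp(-kep (t-u)) du

    A noisy or biased cp (the VIF) propagates directly into the fitted
    ktrans and vp, which motivates the phase-derived VIF in the abstract.
    """
    dt = t[1] - t[0]                       # assumes uniform sampling
    kernel = np.exp(-kep * t)
    conv = np.convolve(cp, kernel)[:len(t)] * dt
    return vp * cp + ktrans * conv

# Constant input: the analytic answer is vp*c + ktrans*c*(1-exp(-kep*t))/kep.
t = np.linspace(0, 5, 5001)
cp = np.ones_like(t)
ct = extended_tofts(t, cp, ktrans=0.25, kep=0.5, vp=0.05)
analytic = 0.05 + 0.25 * (1 - np.exp(-0.5 * t)) / 0.5
```

    In practice the forward model is fitted to measured tissue curves; the constant-input case above just verifies the discrete convolution against the closed-form solution.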

  11. Detection of Neuron Membranes in Electron Microscopy Images Using Multi-scale Context and Radon-Like Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seyedhosseini, Mojtaba; Kumar, Ritwik; Jurrus, Elizabeth R.

    2011-10-01

    Automated neural circuit reconstruction through electron microscopy (EM) images is a challenging problem. In this paper, we present a novel method that exploits multi-scale contextual information together with Radon-like features (RLF) to learn a series of discriminative models. The main idea is to build a framework which is capable of extracting information about cell membranes from a large contextual area of an EM image in a computationally efficient way. Toward this goal, we extract RLF that can be computed efficiently from the input image and generate a scale-space representation of the context images that are obtained at the output of each discriminative model in the series. Compared to a single-scale model, the use of a multi-scale representation of the context image gives the subsequent classifiers access to a larger contextual area in an effective way. Our strategy is general and independent of the classifier and has the potential to be used in any context based framework. We demonstrate that our method outperforms the state-of-the-art algorithms in detection of neuron membranes in EM images.

  12. High Resolution Live Cell Raman Imaging Using Subcellular Organelle-Targeting SERS-Sensitive Gold Nanoparticles with Highly Narrow Intra-Nanogap

    PubMed Central

    Kang, Jeon Woong; So, Peter T. C.; Dasari, Ramachandra R.; Lim, Dong-Kwon

    2015-01-01

    We report a method to achieve high-speed, high-resolution live-cell Raman imaging using small spherical gold nanoparticles with highly narrow intra-nanogap structures responding to NIR excitation (785 nm), together with high-speed confocal Raman microscopy. Three different Raman-active molecules placed in the narrow intra-nanogap showed strong and uniform Raman intensity in solution even under a transient exposure time (10 ms) and low incident laser power (200 μW), which made it possible to obtain a high-resolution single-cell image within 30 s without inducing significant cell damage. The high-resolution Raman images showed the distributions of gold nanoparticles at their targeted sites such as the cytoplasm, mitochondria, or nucleus. High-speed Raman-based live-cell imaging allowed us to monitor rapidly changing cell morphologies during cell death induced by the addition of a highly toxic KCN solution to the cells. These results strongly suggest that SERS-active nanoparticles can greatly improve the temporal resolution and image quality of Raman-based cell imaging, enough to capture detailed cell dynamics and/or the responses of cells to potential drug molecules. PMID:25646716

  13. Accurate and robust brain image alignment using boundary-based registration.

    PubMed

    Greve, Douglas N; Fischl, Bruce

    2009-10-15

    The fine spatial scales of the structures in the human brain represent an enormous challenge to the successful integration of information from different images for both within- and between-subject analysis. While many algorithms exist to register image pairs from the same subject, visual inspection shows their accuracy and robustness to be suspect, particularly when there are strong intensity gradients and/or only part of the brain is imaged. This paper introduces a new algorithm called Boundary-Based Registration, or BBR. The novelty of BBR is that it treats the two images very differently. The reference image must be of sufficient resolution and quality to extract surfaces that separate tissue types. The input image is then aligned to the reference by maximizing the intensity gradient across tissue boundaries. Several lower-quality images can be aligned through their alignment with the reference. Visual inspection and fMRI results show that BBR is more accurate than correlation ratio or normalized mutual information and is considerably more robust to even strong intensity inhomogeneities. BBR also excels at aligning partial-brain images to whole-brain images, a domain in which existing registration algorithms frequently fail. Even in the limit of registering a single slice, we show the BBR results to be robust and accurate.

  14. Dreaming and offline memory processing.

    PubMed

    Wamsley, Erin J; Stickgold, Robert

    2010-12-07

    The activities of the mind and brain never cease. Although many of our waking hours are spent processing sensory input and executing behavioral responses, moments of unoccupied rest free us to wander through thoughts of the past and future, create daydreams, and imagine fictitious scenarios. During sleep, when attention to sensory input is at a minimum, the mind continues to process information, using memory fragments to create the images, thoughts, and narratives that we commonly call 'dreaming'. Far from being a random or meaningless distraction, spontaneous cognition during states of sleep and resting wakefulness appears to serve important functions related to processing past memories and planning for the future. From single-cell recordings in rodents to behavioral studies in humans, recent studies in the neurosciences suggest a new conception of dreaming as part of a continuum of adaptive cognitive processing occurring across the full range of mind/brain states. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. An investigation into the effects of temporal resolution on hepatic dynamic contrast-enhanced MRI in volunteers and in patients with hepatocellular carcinoma

    NASA Astrophysics Data System (ADS)

    Gill, Andrew B.; Black, Richard T.; Bowden, David J.; Priest, Andrew N.; Graves, Martin J.; Lomas, David J.

    2014-06-01

    This study investigated the effect of temporal resolution on the dual-input pharmacokinetic (PK) modelling of dynamic contrast-enhanced MRI (DCE-MRI) data from normal volunteer livers and from patients with hepatocellular carcinoma. Eleven volunteers and five patients were examined at 3 T. Two sections, one optimized for the vascular input functions (VIF) and one for the tissue, were imaged within a single heart-beat (HB) using a saturation-recovery fast gradient echo sequence. The data was analysed using a dual-input single-compartment PK model. The VIFs and/or uptake curves were then temporally sub-sampled (at intervals Δt = 2-20 s) before being subject to the same PK analysis. Statistical comparisons of tumour and normal tissue PK parameter values using a 5% significance level gave rise to the same study results when temporally sub-sampling the VIFs to HB < Δt < 4 s. However, sub-sampling to Δt > 4 s did adversely affect the statistical comparisons. Temporal sub-sampling of just the liver/tumour tissue uptake curves at Δt ≤ 20 s, whilst using high temporal resolution VIFs, did not substantially affect PK parameter statistical comparisons. In conclusion, there is no practical advantage to be gained from acquiring very high temporal resolution hepatic DCE-MRI data. Instead the high temporal resolution could be usefully traded for increased spatial resolution or SNR.

  16. Optical control demonstrates switch-like PIP3 dynamics underlying the initiation of immune cell migration

    PubMed Central

    Karunarathne, W. K. Ajith; Giri, Lopamudra; Patel, Anilkumar K.; Venkatesh, Kareenhalli V.; Gautam, N.

    2013-01-01

    There is a dearth of approaches to experimentally direct cell migration by continuously varying signal input to a single cell, evoking all possible migratory responses and quantitatively monitoring the cellular and molecular response dynamics. Here we used a visual blue opsin to recruit the endogenous G-protein network that mediates immune cell migration. Specific optical inputs to this optical trigger of signaling helped steer migration in all possible directions with precision. Spectrally selective imaging was used to monitor cell-wide phosphatidylinositol (3,4,5)-triphosphate (PIP3), cytoskeletal, and cellular dynamics. A switch-like PIP3 increase at the cell front and a decrease at the back were identified, underlying the decisive migratory response. Migration was initiated at the rapidly increasing switch stage of PIP3 dynamics. This result explains how a migratory cell filters background fluctuations in the intensity of an extracellular signal but responds by initiating directionally sensitive migration to a persistent signal gradient across the cell. A two-compartment computational model incorporating a localized activator that is antagonistic to a diffusible inhibitor was able to simulate the switch-like PIP3 response. It was also able to simulate the slow dissipation of PIP3 on signal termination. The ability to independently apply similar signaling inputs to single cells detected two cell populations with distinct thresholds for migration initiation. Overall the optical approach here can be applied to understand G-protein–coupled receptor network control of other cell behaviors. PMID:23569254

  17. Optical control demonstrates switch-like PIP3 dynamics underlying the initiation of immune cell migration.

    PubMed

    Karunarathne, W K Ajith; Giri, Lopamudra; Patel, Anilkumar K; Venkatesh, Kareenhalli V; Gautam, N

    2013-04-23

    There is a dearth of approaches to experimentally direct cell migration by continuously varying signal input to a single cell, evoking all possible migratory responses and quantitatively monitoring the cellular and molecular response dynamics. Here we used a visual blue opsin to recruit the endogenous G-protein network that mediates immune cell migration. Specific optical inputs to this optical trigger of signaling helped steer migration in all possible directions with precision. Spectrally selective imaging was used to monitor cell-wide phosphatidylinositol (3,4,5)-triphosphate (PIP3), cytoskeletal, and cellular dynamics. A switch-like PIP3 increase at the cell front and a decrease at the back were identified, underlying the decisive migratory response. Migration was initiated at the rapidly increasing switch stage of PIP3 dynamics. This result explains how a migratory cell filters background fluctuations in the intensity of an extracellular signal but responds by initiating directionally sensitive migration to a persistent signal gradient across the cell. A two-compartment computational model incorporating a localized activator that is antagonistic to a diffusible inhibitor was able to simulate the switch-like PIP3 response. It was also able to simulate the slow dissipation of PIP3 on signal termination. The ability to independently apply similar signaling inputs to single cells detected two cell populations with distinct thresholds for migration initiation. Overall the optical approach here can be applied to understand G-protein-coupled receptor network control of other cell behaviors.

  18. Object knowledge changes visual appearance: semantic effects on color afterimages.

    PubMed

    Lupyan, Gary

    2015-10-01

    According to predictive coding models of perception, what we see is determined jointly by the current input and the priors established by previous experience, expectations, and other contextual factors. The same input can thus be perceived differently depending on the priors that are brought to bear during viewing. Here, I show that expected (diagnostic) colors are perceived more vividly than arbitrary or unexpected colors, particularly when color input is unreliable. Participants were tested on a version of the 'Spanish Castle Illusion' in which viewing a hue-inverted image renders a subsequently shown achromatic version of the image in vivid color. Adapting to objects with intrinsic colors (e.g., a pumpkin) led to stronger afterimages than adapting to arbitrarily colored objects (e.g., a pumpkin-colored car). Considerably stronger afterimages were also produced by scenes containing intrinsically colored elements (grass, sky) compared to scenes with arbitrarily colored objects (books). The differences between images with diagnostic and arbitrary colors disappeared when the association between the image and color priors was weakened by, e.g., presenting the image upside-down, consistent with the prediction that color appearance is being modulated by color knowledge. Visual inputs that conflict with prior knowledge appear to be phenomenologically discounted, but this discounting is moderated by input certainty, as shown by the final study which uses conventional images rather than afterimages. As input certainty is increased, unexpected colors can become easier to detect than expected ones, a result consistent with predictive-coding models. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Prototype Focal-Plane-Array Optoelectronic Image Processor

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Shaw, Timothy; Yu, Jeffrey

    1995-01-01

    Prototype very-large-scale integrated (VLSI) planar array of optoelectronic processing elements combines speed of optical input and output with flexibility of reconfiguration (programmability) of electronic processing medium. Basic concept of processor described in "Optical-Input, Optical-Output Morphological Processor" (NPO-18174). Performs binary operations on binary (black and white) images. Each processing element corresponds to one picture element of image and located at that picture element. Includes input-plane photodetector in form of parasitic phototransistor part of processing circuit. Output of each processing circuit used to modulate one picture element in output-plane liquid-crystal display device. Intended to implement morphological processing algorithms that transform image into set of features suitable for high-level processing; e.g., recognition.
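    The shift-based morphological operations such a processor implements (compare the edge-detection record at the head of this section) can be sketched for a 2x2 structuring element; a minimal NumPy illustration, not the VLSI implementation:

```python
import numpy as np

def dilate_2x2(img):
    """Binary dilation by a 2x2 structuring element, computed as the OR
    of four shifted copies of the image -- the digital analogue of the
    four angularly shifted optical projections described above."""
    h, w = img.shape
    padded = np.zeros((h + 1, w + 1), dtype=bool)
    padded[:h, :w] = img
    return (padded[:h, :w] | padded[1:, :w] |
            padded[:h, 1:] | padded[1:, 1:])

def edge_2x2(img):
    # Superimposing the dilated image with the complement of the input
    # leaves only the external edge pixels.
    return dilate_2x2(img) & ~img
```

    Each output pixel depends only on a small fixed neighbourhood, which is why such operations map naturally onto a planar array of identical processing elements.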

  20. Unimolecular Logic Gate with Classical Input by Single Gold Atoms.

    PubMed

    Skidin, Dmitry; Faizy, Omid; Krüger, Justus; Eisenhut, Frank; Jancarik, Andrej; Nguyen, Khanh-Hung; Cuniberti, Gianaurelio; Gourdon, Andre; Moresco, Francesca; Joachim, Christian

    2018-02-27

    By a combination of solution and on-surface chemistry, we synthesized an asymmetric starphene molecule with two long anthracenyl input branches and a short naphthyl output branch on the Au(111) surface. Starting from this molecule, we could demonstrate the working principle of a single molecule NAND logic gate by selectively contacting single gold atoms by atomic manipulation to the longer branches of the molecule. The logical input "1" ("0") is defined by the interaction (noninteraction) of a gold atom with one of the input branches. The output is measured by scanning tunneling spectroscopy following the shift in energy of the electronic tunneling resonances at the end of the short branch of the molecule.

  1. Visualization of local Ca2+ dynamics with genetically encoded bioluminescent reporters.

    PubMed

    Rogers, Kelly L; Stinnakre, Jacques; Agulhon, Cendra; Jublot, Delphine; Shorte, Spencer L; Kremer, Eric J; Brûlet, Philippe

    2005-02-01

    Measurements of local Ca2+ signalling at different developmental stages and/or in specific cell types are important for understanding aspects of brain function. The use of light excitation in fluorescence imaging can cause phototoxicity, photobleaching and auto-fluorescence. In contrast, bioluminescence does not require the input of radiative energy and can therefore be measured over long periods with very high temporal resolution. Aequorin is a genetically encoded Ca(2+)-sensitive bioluminescent protein; however, its low quantum yield prevents dynamic measurements of Ca2+ responses in single cells. To overcome this limitation, we recently reported the bi-functional Ca2+ reporter gene GFP-aequorin (GA), which was developed specifically to improve the light output and stability of aequorin chimeras [V. Baubet, et al., (2000) PNAS, 97, 7260-7265]. In the current study, we have genetically targeted GA to different microdomains important in synaptic transmission, including the mitochondrial matrix, endoplasmic reticulum, synaptic vesicles and the postsynaptic density. We demonstrate that these reporters enable 'real-time' measurements of subcellular Ca2+ changes in single mammalian neurons using bioluminescence. The high signal-to-noise ratio of these reporters is also important in that it affords the visualization of Ca2+ dynamics in cell-cell communication in neuronal cultures and tissue slices. Further, we demonstrate the utility of this approach in ex-vivo preparations of mammalian retina, a paradigm in which external light input must be controlled. This represents a novel molecular imaging approach for non-invasive monitoring of local Ca2+ dynamics and cellular communication in tissue or whole-animal studies.

  2. Ultra-wideband receiver

    DOEpatents

    McEwan, Thomas E.

    1994-01-01

    An ultra-wideband (UWB) receiver utilizes a strobed input line with a sampler connected to an amplifier. In a differential configuration, .+-.UWB inputs are connected to separate antennas or to two halves of a dipole antenna. The two input lines include samplers which are commonly strobed by a gating pulse with a very low duty cycle. In a single ended configuration, only a single strobed input line and sampler is utilized. The samplers integrate, or average, up to 10,000 pulses to achieve high sensitivity and good rejection of uncorrelated signals.
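The sampler's integration of up to 10,000 pulses can be sanity-checked numerically: averaging N repetitions of a correlated pulse in uncorrelated noise improves the voltage signal-to-noise ratio by √N. A minimal numpy sketch, with pulse shape, noise level, and pulse count chosen for illustration rather than taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pulses = 10_000
pulse = 0.01 * np.exp(-np.linspace(-3, 3, 64) ** 2)   # weak repeated UWB pulse
noise_sigma = 1.0

# Each strobe captures the same pulse buried in independent receiver noise.
captures = pulse + rng.normal(0.0, noise_sigma, size=(n_pulses, pulse.size))

# Averaging keeps the correlated signal intact while the uncorrelated
# noise power drops by a factor of n_pulses.
averaged = captures.mean(axis=0)

snr_single = np.abs(pulse).max() / noise_sigma
snr_avg = np.abs(pulse).max() / (noise_sigma / np.sqrt(n_pulses))
print(f"voltage SNR gain from averaging: {snr_avg / snr_single:.0f}x")  # 100x
```

This √N gain is also why uncorrelated interferers are rejected: they do not add coherently across strobes.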

  3. Ultra-wideband receiver

    DOEpatents

    McEwan, Thomas E.

    1996-01-01

    An ultra-wideband (UWB) receiver utilizes a strobed input line with a sampler connected to an amplifier. In a differential configuration, .+-.UWB inputs are connected to separate antennas or to two halves of a dipole antenna. The two input lines include samplers which are commonly strobed by a gating pulse with a very low duty cycle. In a single ended configuration, only a single strobed input line and sampler is utilized. The samplers integrate, or average, up to 10,000 pulses to achieve high sensitivity and good rejection of uncorrelated signals.

  4. Urban area delineation and detection of change along the urban-rural boundary as derived from LANDSAT digital data

    NASA Technical Reports Server (NTRS)

    Christenson, J. W.; Lachowski, H. M.

    1977-01-01

LANDSAT digital multispectral scanner data, in conjunction with supporting ground truth, were investigated to determine their utility in delineating urban-rural boundaries. The digital data for the metropolitan areas of Washington, D.C.; Austin, Texas; and Seattle, Washington were processed using an interactive image processing system. Processing focused on identification of the major land cover types typical of the zone of transition from urban to rural landscape, and definition of their spectral signatures. Census tract boundaries were input into the interactive image processing system along with the LANDSAT single-date and overlaid multiple-date MSS data. Results of this investigation indicate that satellite-collected information has a practical application to the problem of urban area delineation and to change detection.

  5. Classification and pose estimation of objects using nonlinear features

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-03-01

    A new nonlinear feature extraction method called the maximum representation and discrimination feature (MRDF) method is presented for extraction of features from input image data. It implements transformations similar to the Sigma-Pi neural network. However, the weights of the MRDF are obtained in closed form, and offer advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We show its use in estimating the class and pose of images of real objects and rendered solid CAD models of machine parts from single views using a feature-space trajectory (FST) neural network classifier. We show more accurate classification and pose estimation results than are achieved by standard principal component analysis (PCA) and Fukunaga-Koontz (FK) feature extraction methods.

  6. Cascade of Solitonic Excitations in a Superfluid Fermi Gas: From Solitons and Vortex Rings to Solitonic Vortices

    NASA Astrophysics Data System (ADS)

    Ku, Mark; Mukherjee, Biswaroop; Yefsah, Tarik; Zwierlein, Martin

    2015-05-01

We follow the evolution of a superfluid Fermi gas of 6Li atoms after a one-sided π phase imprint. Via tomographic imaging, we observe the formation of a planar dark soliton, and its subsequent snaking and decay into a vortex ring. The latter eventually breaks at the boundary of the superfluid, finally leaving behind a single, remnant solitonic vortex. The nodal surface is directly imaged and reveals its decay into a vortex ring via a puncture of the initial soliton plane. At intermediate stages we find evidence for more exotic structures resembling Φ-solitons. The observed evolution of the nodal surface represents dynamics that occurs at the length scale of the interparticle spacing, thus providing new experimental input for microscopic theories of strongly correlated fermions.

  7. Experimental image alignment system

    NASA Technical Reports Server (NTRS)

    Moyer, A. L.; Kowel, S. T.; Kornreich, P. G.

    1980-01-01

A microcomputer-based instrument for image alignment with respect to a reference image is described, which uses the DEFT (Direct Electronic Fourier Transform) sensor for image sensing and preprocessing. The instrument's alignment algorithm, which uses the two-dimensional Fourier transform as input, is also described. It generates signals used to steer the stage carrying the test image into the correct orientation. This algorithm has computational advantages over algorithms that use image intensity data as input and is suitable for a microcomputer-based instrument, since the two-dimensional Fourier transform is provided by the DEFT sensor.
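The abstract does not give the alignment algorithm itself; a common way to recover misalignment directly from two-dimensional Fourier transforms is phase correlation, sketched below with numpy. The function name and test images are illustrative, not from the paper:

```python
import numpy as np

def phase_correlation_shift(ref, test):
    """Estimate the integer (dy, dx) translation of `test` relative to `ref`
    from their 2-D Fourier transforms (cross-power spectrum method)."""
    cross = np.fft.fft2(test) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12           # keep phase information only
    corr = np.fft.ifft2(cross).real          # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half of each axis to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
print(phase_correlation_shift(img, shifted))   # (5, -3)
```

The recovered offset could then drive stage-steering signals of the kind the instrument generates.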

  8. 3D Clumped Cell Segmentation Using Curvature Based Seeded Watershed.

    PubMed

    Atta-Fosu, Thomas; Guo, Weihong; Jeter, Dana; Mizutani, Claudia M; Stopczynski, Nathan; Sousa-Neves, Rui

    2016-12-01

Image segmentation is an important process that separates objects from the background and also from each other. Applied to cells, the results can be used for cell counting, which is very important in medical diagnosis and treatment and in biological research. Segmenting 3D confocal microscopy images containing cells of different shapes and sizes is still challenging, as the nuclei are closely packed. The watershed transform provides an efficient tool for segmenting such nuclei, provided a reasonable set of markers can be found in the image. In the presence of low contrast variation or excessive noise in the given image, the watershed transform leads to over-segmentation (a single object is overly split into multiple objects). The traditional watershed uses the local minima of the input image and will characteristically find multiple minima in one object unless they are specified (marker-controlled watershed). An alternative to using the local minima is a supervised technique called seeded watershed, which supplies single seeds to replace the minima for the objects. Consequently, the accuracy of a seeded watershed algorithm relies on the accuracy of the predefined seeds. In this paper, we present a segmentation approach based on the geometric morphological properties of the 'landscape' using curvatures. The curvatures are computed as the eigenvalues of the Shape matrix, producing accurate seeds that also inherit the original shape of their respective cells. We compare with some popular approaches and show the advantage of the proposed method.
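A seeded watershed of the kind described can be sketched as a priority flood from the supplied seeds. This minimal numpy/heapq version is not the authors' implementation (which derives seeds from curvature, in 3D); it only illustrates the flooding mechanism on a toy 2D landscape:

```python
import heapq
import numpy as np

def seeded_watershed(landscape, seeds):
    """Marker-controlled watershed by priority flooding: pixels are claimed
    from the supplied seeds in order of increasing landscape height.
    `seeds` maps label -> (row, col)."""
    labels = np.zeros(landscape.shape, dtype=int)
    heap = []
    for lab, (r, c) in seeds.items():
        labels[r, c] = lab
        heapq.heappush(heap, (landscape[r, c], r, c, lab))
    while heap:
        _, r, c, lab = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < landscape.shape[0] and 0 <= nc < landscape.shape[1]
                    and labels[nr, nc] == 0):
                labels[nr, nc] = lab
                heapq.heappush(heap, (landscape[nr, nc], nr, nc, lab))
    return labels

# Two basins separated by a ridge at column 4.
col = np.arange(9, dtype=float)
landscape = np.tile(-np.abs(col - 4.0), (5, 1))      # ridge (high) at column 4
seg = seeded_watershed(landscape, {1: (2, 0), 2: (2, 8)})
```

With one seed per object, each basin is flooded exactly once, which is what prevents the over-segmentation of the unseeded transform.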

  9. High quality image-pair-based deblurring method using edge mask and improved residual deconvolution

    NASA Astrophysics Data System (ADS)

    Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting

    2017-04-01

Image deconvolution is a challenging task in the field of image processing. Using an image pair can produce a better restored image than deblurring from a single blurred image alone. In this paper, a high-quality image-pair-based deblurring method is presented using an improved RL algorithm and the gain-controlled residual deconvolution technique. The input image pair includes a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain the preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary deblurring result then serves as the basic latent image, and gain-controlled residual deconvolution is utilized to recover the residual image. A saliency weight map is computed as the gain map to further control the ringing effects around edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the recovered residual image to the preliminary deblurring result. An optical experimental vibration platform is set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework achieves superior performance in both subjective and objective assessments and has wide application in many image deblurring fields.
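The paper builds on the Richardson-Lucy (RL) algorithm; the baseline iteration it improves on can be sketched as follows (FFT-based circular convolution, no edge mask or gain control; the square image and Gaussian PSF are toy examples, not the paper's data):

```python
import numpy as np

def _fft_convolve(img, psf):
    """Circular convolution of `img` with a small centered `psf` via FFT."""
    pad = np.zeros_like(img)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)).real

def richardson_lucy(blurred, psf, n_iter=50):
    """Baseline Richardson-Lucy iteration (no edge mask, no gain control)."""
    eps = 1e-12
    psf_mirror = psf[::-1, ::-1]              # correlation = conv with flipped PSF
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        ratio = blurred / (_fft_convolve(estimate, psf) + eps)
        estimate = estimate * _fft_convolve(ratio, psf_mirror)
    return estimate

# Demo: blur a bright square with a Gaussian PSF, then restore it.
y, x = np.mgrid[-2:3, -2:3]
psf = np.exp(-(x ** 2 + y ** 2) / 2.0)
psf /= psf.sum()
sharp = np.zeros((32, 32))
sharp[12:20, 12:20] = 1.0
blurred = _fft_convolve(sharp, psf)
restored = richardson_lucy(blurred, psf)
```

The paper's improvements restrict the multiplicative update near strong edges (the edge mask) and damp the residual step (the gain map), both of which suppress the ringing this plain iteration tends to produce.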

  10. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    PubMed Central

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-01-01

With the development of synthetic aperture radar (SAR) technologies in recent years, the huge volume of remote sensing data poses challenges for real-time image processing. Therefore, high performance computing (HPC) methods, especially GPU based methods, have been presented to accelerate SAR imaging. In the classical GPU based imaging algorithm, the GPU accelerates image formation by massively parallel computing, while the CPU only performs auxiliary work such as data input/output (IO); the computing capability of the CPU is thus ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated by deep collaborative multi-CPU/GPU computing. For the CPU parallel imaging part, the advanced vector extensions (AVX) method is introduced into the multi-core CPU parallel method for higher efficiency. For the GPU parallel imaging part, not only are the bottlenecks of memory limitation and frequent data transfers overcome, but several optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves efficiency by a factor of 270 over a single-core CPU and achieves real-time imaging, in that the imaging rate exceeds the raw data generation rate. PMID:27070606

  12. High-dynamic-range scene compression in humans

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2006-02-01

Single-pixel dynamic-range compression alters a particular input value to a unique output value - a look-up table. It is used in chemical and most digital photographic systems, which have S-shaped transforms, to render high-range scenes onto low-range media. Post-receptor neural processing is spatial, as shown by the physiological experiments of Dowling, Barlow, Kuffler, and Hubel & Wiesel. Human vision does not render a particular receptor quanta catch as a unique response; instead, because of spatial processing, the response to a particular quanta catch can be any color. Visual response is scene dependent. Stockham proposed an approach to model human range compression using low-spatial-frequency filters. Campbell, Ginsberg, Wilson, Watson, Daly and many others have developed spatial-frequency channel models. This paper describes experiments measuring the properties of desirable spatial-frequency filters for a variety of scenes. Given the radiances of each pixel in the scene and the observed appearances of objects in the image, one can calculate the visual mask for that individual image. Here, the visual mask is the spatial pattern of changes made by the visual system in processing the input image: the spatial signature of human vision. Low-dynamic-range images with many white areas need no spatial filtering. High-dynamic-range images with many blacks, or deep shadows, require strong spatial filtering. Sun on the right and shade on the left requires directional filters. These experiments show that variable, scene-dependent filters are necessary to mimic human vision. Although spatial-frequency filters can model scene-dependent appearances, the problem remains that an analysis of the scene is still needed to calculate the scene-dependent strengths of each of the filters at each frequency.
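The single-pixel (look-up table) compression the paper contrasts with spatial processing can be sketched as an S-shaped tone curve over log luminance; the midpoint and slope below are illustrative, not values from the paper:

```python
import numpy as np

def s_curve_lut(midpoint=0.18, slope=6.0):
    """A 256-entry S-shaped look-up table mapping scene luminance in [0, 1]
    to display values in [0, 1]; every input maps to one unique output."""
    x = np.linspace(0.0, 1.0, 256)
    y = 1.0 / (1.0 + np.exp(-slope * (np.log10(x + 1e-4) - np.log10(midpoint))))
    return (y - y.min()) / (y.max() - y.min())   # use the full output range

lut = s_curve_lut()
scene = np.array([0.001, 0.18, 0.9])             # deep shadow, mid grey, highlight
rendered = lut[np.clip((scene * 255).astype(int), 0, 255)]
```

Because the mapping is a fixed table, two pixels with the same luminance always render identically, which is exactly the property spatial (scene-dependent) processing in human vision does not have.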

  13. Impact of a single drop on the same liquid: formation, growth and disintegration of jets

    NASA Astrophysics Data System (ADS)

    Agbaglah, G. Gilou; Deegan, Robert

    2015-11-01

One of the simplest splashing scenarios results from the impact of a single drop on the same liquid. The traditional understanding of this process is that the impact generates a jet that later breaks up into secondary droplets. Recently it was shown that even this simplest of scenarios is more complicated than expected, because multiple jets can be generated from a single impact event and there are bifurcations in the multiplicity of jets. First, we study the formation, growth and disintegration of jets following the impact of a drop on a thin film of the same liquid using a combination of numerical simulations and linear stability theory. We obtain scaling relations from our simulations and use these as inputs to our stability analysis. We also use experiments and numerical simulations of a single drop impacting on a deep pool to examine the bifurcation from a single jet into two jets. Using high-speed X-ray imaging methods we show that vortex separation within the drop leads to the formation of a second jet long after the formation of the ejecta sheet.

  14. Relationship between fatigue of generation II image intensifier and input illumination

    NASA Astrophysics Data System (ADS)

    Chen, Qingyou

    1995-09-01

If an image intensifier exhibits fatigue, the imaging properties of the night vision system are affected. In this paper, using the principle of Joule heating, we derive a mathematical formula for the heat generated in the semiconductor photocathode and describe the relationship among the various parameters in the formula. We also discuss the reasons for fatigue of the Generation II image intensifier caused by excessive input illumination.

  15. Improved automatic adjustment of density and contrast in FCR system using neural network

    NASA Astrophysics Data System (ADS)

    Takeo, Hideya; Nakajima, Nobuyoshi; Ishida, Masamitsu; Kato, Hisatoyo

    1994-05-01

The FCR system automatically adjusts image density and contrast by analyzing the histogram of the image data in the radiation field. The advanced image recognition methods proposed in this paper, based on neural network technology, can improve this automatic adjustment. There are two methods, both built on a 3-layer neural network trained with back propagation: in one, the image data are input directly to the input layer; in the other, the histogram data are input. The former is effective for imaging menus such as the shoulder joint, in which the position of the region of interest on the histogram changes with differences in positioning, and the latter is effective for imaging menus such as the pediatric chest, in which the histogram shape changes with differences in positioning. We experimentally confirm the validity of these methods, in terms of automatic adjustment performance, compared with conventional histogram analysis methods.
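The histogram-input variant can be sketched as a small numpy network trained by back propagation. Everything below is a toy stand-in: the 8-bin histogram, network size, and target (invented here as the mass of the four dark bins) are illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: normalized 8-bin histogram -> single adjustment value.
def make_sample():
    h = rng.random(8)
    h /= h.sum()
    return h, h[:4].sum()        # invented target: mass of the dark bins

samples = [make_sample() for _ in range(256)]
X = np.array([s[0] for s in samples])
y = np.array([s[1] for s in samples])[:, None]

# One hidden sigmoid layer, linear output, trained by back propagation.
W1 = rng.normal(0.0, 0.5, (8, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(5000):
    hidden = sigmoid(X @ W1 + b1)
    out = hidden @ W2 + b2
    err = out - y
    gW2 = hidden.T @ err / len(X); gb2 = err.mean(axis=0)   # output-layer grads
    dh = (err @ W2.T) * hidden * (1.0 - hidden)             # backprop through sigmoid
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float((err ** 2).mean())
```

The image-input variant differs only in feeding (downsampled) pixel data to the input layer instead of the histogram.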

  16. The GONG Farside Project

    NASA Astrophysics Data System (ADS)

    Leibacher, J. W.; Braun, D.; González Hernández, I.; Goodrich, J.; Kholikov, S.; Lindsey, C.; Malanushenko, A.; Scherrer, P.

    2005-05-01

The GONG program is currently providing near-real-time helioseismic images of the farside of the Sun. The continuous stream of low-resolution images, obtained from the six Earth-based GONG stations, is merged into a single data series that is the input to the farside pipeline. In order to validate the farside images, it is crucial to compare results obtained from different instruments; we show comparisons between the farside images provided by the MDI instrument and the GONG ones. New additions to the pipeline will allow us to create full-hemisphere farside images; examples of the latter are shown in this poster. Our efforts are now concentrated on calibrating the farside signal so that it becomes a reliable solar activity forecasting tool. We are also testing single-skip acoustic power holography at 5-7 mHz as a prospective means of reinforcing the signatures of active regions crossing the east and west limbs and monitoring acoustic emission in the neighborhoods of the Sun's poles. This work utilizes data obtained by the Global Oscillation Network Group (GONG) Program, managed by the National Solar Observatory, which is operated by AURA, Inc. under a cooperative agreement with the National Science Foundation. The data were acquired by instruments operated by the Big Bear Solar Observatory, High Altitude Observatory, Learmonth Solar Observatory, Udaipur Solar Observatory, Instituto de Astrofísica de Canarias, and Cerro Tololo Interamerican Observatory, as well as the Michelson Doppler Imager on SoHO, a mission of international cooperation between ESA and NASA. This work has been supported by the NASA Living with a Star - Targeted Research and Technology program.

  17. Automatic Detection of Clouds and Shadows Using High Resolution Satellite Image Time Series

    NASA Astrophysics Data System (ADS)

    Champion, Nicolas

    2016-06-01

Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images, because they may alter the quality of products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance ortho-images are used; they were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are first extracted: for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it, and pixels of the input ortho-image are labelled as seeds if the difference of reflectance (in the blue channel) with the overlapping ortho-images exceeds a given threshold. Clouds are then delineated using a region-growing method based on a radiometric and homogeneity criterion. Regarding shadow detection, our method is based on the idea that a shadow pixel is darker than in the other images of the time series. The detection is composed of three steps. First, we compute a synthetic ortho-image covering the whole study area; each of its pixels takes the median value of all input reflectance ortho-images intersecting at that pixel location. Second, for each input ortho-image, a pixel is labelled shadow if the difference of reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Finally, an optional region-growing step may be used to refine the results. Note that pixels labelled as clouds during the cloud detection are not used when computing the median value in the first step; the NIR channel is used for shadow detection because it proved to better discriminate shadow pixels. The method was tested on time series of Landsat 8 and Pléiades-HR images, and our first experiments show the feasibility of automating the detection of shadows and clouds in satellite image sequences.
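The seed-extraction thresholding can be sketched as follows on a co-registered reflectance stack. The thresholds and the plain (rather than cloud-masked) median composite are illustrative simplifications of the IGN method, and the region-growing refinement is omitted:

```python
import numpy as np

def flag_clouds_and_shadows(blue_stack, nir_stack, cloud_thr=0.15, shadow_thr=-0.10):
    """Per-date cloud and shadow seeds from a co-registered reflectance
    time series of shape (n_dates, H, W): a pixel is a cloud seed when much
    brighter than usual in blue, a shadow seed when much darker in NIR."""
    blue_median = np.median(blue_stack, axis=0)   # synthetic "clear" composite
    nir_median = np.median(nir_stack, axis=0)
    cloud = (blue_stack - blue_median) > cloud_thr
    shadow = (nir_stack - nir_median) < shadow_thr
    return cloud, shadow

# Synthetic 5-date series: a cloud on date 2, a shadow on date 3.
n_dates, H, W = 5, 8, 8
blue = np.full((n_dates, H, W), 0.05)
nir = np.full((n_dates, H, W), 0.30)
blue[2, 2:5, 2:5] += 0.40
nir[3, 5:7, 5:7] -= 0.25
cloud, shadow = flag_clouds_and_shadows(blue, nir)
```

Using the temporal median as the clear-sky reference is what makes a single cloudy or shadowed date stand out without any per-scene training.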

  18. Quantitative assessment of multiple sclerosis lesion load using CAD and expert input

    NASA Astrophysics Data System (ADS)

    Gertych, Arkadiusz; Wong, Alexis; Sangnil, Alan; Liu, Brent J.

    2008-03-01

Multiple sclerosis (MS) is a frequently encountered neurological disease with a progressive but variable course affecting the central nervous system. Outline-based lesion quantification in the assessment of lesion load (LL) performed on magnetic resonance (MR) images is clinically useful and provides information about development and change, reflecting overall disease burden. Methods of LL assessment that rely on human input are tedious, have higher intra- and inter-observer variability, and are more time-consuming than computerized automatic (CAD) techniques. At present, the best LL quantification strategies appear to be those based on human lesion identification preceded by non-interactive outlining by CAD. We have developed a CAD system that automatically quantifies MS lesions, displays a 3-D lesion map, and appends radiological findings to the original images according to the current DICOM standard. The CAD system is also capable of displaying and tracking changes and of comparing a patient's separate MRI studies to determine disease progression. The findings are exported to a separate imaging tool for review and final approval by an expert. Capturing and standardized archiving of manual contours is also implemented. Similarity coefficients calculated from LL quantities in the collected exams show a good correlation between CAD-derived results and the expert's reading. Combining the CAD approach with expert interaction may benefit the diagnostic work-up of MS patients because of improved reproducibility in LL assessment and reduced reading time for single MR or comparative exams. Inclusion of CAD-generated outlines as DICOM-compliant overlays in the image data can serve as a better reference in MS progression tracking.
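The abstract does not name the similarity coefficient used to compare CAD-derived and expert outlines; the Dice coefficient is a common choice for that comparison, sketched here on toy binary masks (the masks and the choice of metric are illustrative):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary lesion masks:
    2 * |A ∩ B| / (|A| + |B|), i.e. 1.0 for identical outlines."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

cad = np.zeros((8, 8), dtype=bool); cad[2:6, 2:6] = True        # CAD outline
expert = np.zeros((8, 8), dtype=bool); expert[3:7, 3:7] = True  # expert outline
print(dice_coefficient(cad, expert))   # 2*9 / (16+16) = 0.5625
```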

  19. Natural images dominate in binocular rivalry

    PubMed Central

    Baker, Daniel H.; Graf, Erich W.

    2009-01-01

    Ecological approaches to perception have demonstrated that information encoding by the visual system is informed by the natural environment, both in terms of simple image attributes like luminance and contrast, and more complex relationships corresponding to Gestalt principles of perceptual organization. Here, we ask if this optimization biases perception of visual inputs that are perceptually bistable. Using the binocular rivalry paradigm, we designed stimuli that varied in either their spatiotemporal amplitude spectra or their phase spectra. We found that noise stimuli with “natural” amplitude spectra (i.e., amplitude content proportional to 1/f, where f is spatial or temporal frequency) dominate over those with any other systematic spectral slope, along both spatial and temporal dimensions. This could not be explained by perceived contrast measurements, and occurred even though all stimuli had equal energy. Calculating the effective contrast following attenuation by a model contrast sensitivity function suggested that the strong contrast dependency of rivalry provides the mechanism by which binocular vision is optimized for viewing natural images. We also compared rivalry between natural and phase-scrambled images and found a strong preference for natural phase spectra that could not be accounted for by observer biases in a control task. We propose that this phase specificity relates to contour information, and arises either from the activity of V1 complex cells, or from later visual areas, consistent with recent neuroimaging and single-cell work. Our findings demonstrate that human vision integrates information across space, time, and phase to select the input most likely to hold behavioral relevance. PMID:19289828
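Noise stimuli with a prescribed amplitude-spectrum slope, of the kind this study varied, can be generated by imposing a 1/f^α amplitude spectrum on random phases. This is a standard construction, not the authors' stimulus code; sizes and seeds are illustrative:

```python
import numpy as np

def spectral_noise(shape, alpha=1.0, seed=0):
    """Random-phase noise whose amplitude spectrum falls as 1/f**alpha;
    alpha = 1 corresponds to the 'natural' spatial spectrum."""
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.hypot(fy, fx)
    f[0, 0] = 1.0                                  # avoid division by zero at DC
    amplitude = 1.0 / f ** alpha
    amplitude[0, 0] = 0.0                          # zero-mean stimulus
    phase = rng.uniform(0.0, 2.0 * np.pi, shape)
    img = np.fft.ifft2(amplitude * np.exp(1j * phase)).real
    return (img - img.mean()) / img.std()          # equate RMS contrast energy

natural = spectral_noise((64, 64), alpha=1.0)      # 1/f ("natural") spectrum
steep = spectral_noise((64, 64), alpha=2.0)        # blurrier 1/f^2 spectrum
```

The final normalization matters for the study's logic: stimuli with different slopes must carry equal energy so that any rivalry dominance reflects the spectral shape, not overall contrast.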

  20. Retrieving the Height of Smoke and Dust Aerosols by Synergistic Use of VIIRS, OMPS, and CALIOP Observations

    NASA Technical Reports Server (NTRS)

    Lee, Jaehwa; Hsu, N. Christina; Bettenhausen, Corey; Sayer, Andrew M.; Seftor, Colin J.; Jeong, Myeong-Jae

    2015-01-01

The Aerosol Single scattering albedo and Height Estimation (ASHE) algorithm was first introduced in Jeong and Hsu (2008) to provide aerosol layer height as well as single scattering albedo (SSA) for biomass burning smoke aerosols. One advantage of this algorithm is that the aerosol layer height can be retrieved over broad areas, which had not been available from lidar observations alone. The algorithm utilizes aerosol properties from three different satellite sensors: aerosol optical depth (AOD) and Ångström exponent (AE) from the Moderate Resolution Imaging Spectroradiometer (MODIS), UV aerosol index (UVAI) from the Ozone Monitoring Instrument (OMI), and aerosol layer height from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP). Here, we extend the application of the algorithm to Visible Infrared Imaging Radiometer Suite (VIIRS) and Ozone Mapping and Profiler Suite (OMPS) data, and now include dust layers as well as smoke. Other updates include improvements in retrieving the AOD of nonspherical dust from VIIRS, better determination of the aerosol layer height from CALIOP, and more realistic input aerosol profiles in the forward model for better accuracy.

  1. A highly sensitive fluorescent indicator dye for calcium imaging of neural activity in vitro and in vivo

    PubMed Central

    Tada, Mayumi; Takeuchi, Atsuya; Hashizume, Miki; Kitamura, Kazuo; Kano, Masanobu

    2014-01-01

    Calcium imaging of individual neurons is widely used for monitoring their activity in vitro and in vivo. Synthetic fluorescent calcium indicator dyes are commonly used, but the resulting calcium signals sometimes suffer from a low signal-to-noise ratio (SNR). Therefore, it is difficult to detect signals caused by single action potentials (APs) particularly from neurons in vivo. Here we showed that a recently developed calcium indicator dye, Cal-520, is sufficiently sensitive to reliably detect single APs both in vitro and in vivo. In neocortical neurons, calcium signals were linearly correlated with the number of APs, and the SNR was > 6 for in vitro slice preparations and > 1.6 for in vivo anesthetised mice. In cerebellar Purkinje cells, dendritic calcium transients evoked by climbing fiber inputs were clearly observed in anesthetised mice with a high SNR and fast decay time. These characteristics of Cal-520 are a great advantage over those of Oregon Green BAPTA-1, the most commonly used calcium indicator dye, for monitoring the activity of individual neurons both in vitro and in vivo. PMID:24405482

  2. Correlation of iodine uptake and perfusion parameters between dual-energy CT imaging and first-pass dual-input perfusion CT in lung cancer.

    PubMed

    Chen, Xiaoliang; Xu, Yanyan; Duan, Jianghui; Li, Chuandong; Sun, Hongliang; Wang, Wu

    2017-07-01

To investigate the potential relationship between perfusion parameters from first-pass dual-input perfusion computed tomography (DI-PCT) and iodine uptake levels estimated from dual-energy CT (DE-CT). The pre-experimental part of this study included a dynamic DE-CT protocol in 15 patients to evaluate peak arterial enhancement of lung cancer based on time-attenuation curves, from which the scan time of DE-CT was determined. In the prospective part of the study, 28 lung cancer patients underwent whole-volume perfusion CT and single-source DE-CT using 320-row CT. Pulmonary flow (PF, mL/min/100 mL), aortic flow (AF, mL/min/100 mL), and a perfusion index (PI = PF/[PF + AF]) were automatically generated by in-house commercial software using the dual-input maximum slope method for DI-PCT. For the dual-energy CT data, iodine uptake was estimated by the difference (λ) and the slope (λHU). λ was defined as the difference of CT values between 40 and 70 keV monochromatic images in lung lesions. λHU was calculated by the following equation: λHU = |λ/(70 - 40)|. The DI-PCT and DE-CT parameters were analyzed by Pearson and Spearman correlation analysis, respectively. All subjects were pathologically confirmed lung cancer patients (including 16 squamous cell carcinoma, 8 adenocarcinoma, and 4 small cell lung cancer) by surgery or CT-guided biopsy. Interobserver reproducibility in DI-PCT (PF, AF, PI) and DE-CT (λ, λHU) was good to excellent (intraclass correlation coefficient [ICC]Inter = 0.8726-0.9255, ICCInter = 0.8179-0.8842; ICCInter = 0.8881-0.9177, ICCInter = 0.9820-0.9970, ICCInter = 0.9780-0.9971, respectively). Correlation coefficients between λ and AF and PF were 0.589 (P < .01) and 0.383 (P < .05), respectively. Correlation coefficients between λHU and AF and PF were 0.564 (P < .01) and 0.388 (P < .05), respectively. Both the single-source DE-CT and the dual-input CT perfusion analysis method can be applied to assess the blood supply in lung cancer patients. 
Preliminary results demonstrated that the iodine uptake relevant parameters derived from DE-CT significantly correlated with perfusion parameters derived from DI-PCT.
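The two DE-CT quantities follow directly from the monochromatic CT numbers; the ROI values below are invented for illustration, only the formulas come from the abstract:

```python
# Hypothetical mean CT numbers (HU) of a lesion ROI on monochromatic images.
ct_40kev = 95.0    # 40 keV: iodine attenuates strongly
ct_70kev = 48.5    # 70 keV: close to a conventional 120 kVp image

lam = ct_40kev - ct_70kev        # λ: CT-value difference between 40 and 70 keV
lam_hu = abs(lam / (70 - 40))    # λHU = |λ / (70 − 40)|, the spectral slope
print(lam, lam_hu)               # 46.5 1.55
```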

  3. Output Control Using Feedforward And Cascade Controllers

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1990-01-01

    Report presents theoretical study of open-loop control elements in single-input, single-output linear system. Focus on output-control (servomechanism) problem, in which objective is to find control scheme that causes output to track certain command inputs and to reject certain disturbance inputs in steady state. Report closes with brief discussion of characteristics and relative merits of feedforward, cascade, and feedback controllers and combinations thereof.
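The output-control idea can be illustrated with a discrete-time single-input, single-output sketch: when the disturbance is measurable, a feedforward term cancels it at the plant input, while feedback handles command tracking. The plant, gains, and disturbance below are invented for illustration, not taken from the report:

```python
# First-order SISO plant: y[k+1] = a*y[k] + b*(u[k] + d[k]),
# with a constant measurable disturbance d and proportional feedback.
a, b = 0.8, 0.5

def simulate(use_feedforward, steps=200, r=1.0, d=0.3, kp=2.0):
    y = 0.0
    for _ in range(steps):
        u = kp * (r - y)          # feedback acts on the tracking error
        if use_feedforward:
            u -= d                # feedforward cancels d at the plant input
        y = a * y + b * (u + d)
    return y

y_fb = simulate(False)   # steady state shifted by the disturbance
y_ff = simulate(True)    # identical to the disturbance-free response
```

With feedforward, the steady-state output equals the response with no disturbance at all: the disturbance is rejected exactly, and only the usual proportional-control tracking offset remains.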

  4. Single Particle Analysis by Combined Chemical Imaging to Study Episodic Air Pollution Events in Vienna

    NASA Astrophysics Data System (ADS)

    Ofner, Johannes; Eitenberger, Elisabeth; Friedbacher, Gernot; Brenner, Florian; Hutter, Herbert; Schauer, Gerhard; Kistler, Magdalena; Greilinger, Marion; Lohninger, Hans; Lendl, Bernhard; Kasper-Giebl, Anne

    2017-04-01

The aerosol composition of a city like Vienna is characterized by a complex interaction of local emissions and atmospheric input on regional and continental scales. Identifying the major aerosol constituents for basic source apportionment and air quality issues requires a high analytical effort. Exceptional episodic air pollution events strongly change the typical aerosol composition of a city like Vienna on time scales of a few hours to several days. Analyzing the chemistry of particulate matter from these events is often hampered by the sampling time, and related sample amount, necessary to apply the full range of bulk analytical methods needed for chemical characterization; morphological and single-particle features are also hardly accessible. Chemical imaging has evolved into a powerful tool for image-based chemical analysis of complex samples. As a technique complementary to bulk analytical methods, chemical imaging offers a new way to study air pollution events, resolving major aerosol constituents with single-particle features at high temporal resolution and from small sample volumes. The analysis of chemical imaging datasets is assisted by multivariate statistics, with the benefit of image-based chemical structure determination for direct aerosol source apportionment. A novel approach in chemical imaging is combined chemical imaging, or so-called multisensor hyperspectral imaging, involving elemental imaging (electron microscopy-based energy-dispersive X-ray imaging), vibrational imaging (Raman micro-spectroscopy) and mass spectrometric imaging (time-of-flight secondary ion mass spectrometry) with subsequent combined multivariate analytics. 
Combined chemical imaging of precipitated aerosol particles will be demonstrated with the following examples of air pollution events in Vienna: exceptional episodic events such as the transformation of Saharan dust by the impact of the city of Vienna will be discussed and compared to samples obtained at a high alpine background site (Sonnblick Observatory, Saharan dust event of April 2016). Further, chemical imaging of biological aerosol constituents of an autumnal pollen outbreak in Vienna in November 2016, with background samples from nearby locations, will demonstrate the advantages of the chemical imaging approach. Additionally, the chemical fingerprint of an exceptional air pollution event from a local emission source, caused by the demolition of a building in Vienna, will illustrate the need for multisensor imaging, and especially for the combined approach. Obtained chemical images will be correlated with bulk analytical results. Benefits of the overall methodical approach, combining bulk analytics and combined chemical imaging of exceptional episodic air pollution events, will be discussed.

  5. Black optic display

    DOEpatents

    Veligdan, James T.

    1997-01-01

    An optical display includes a plurality of stacked optical waveguides having first and second opposite ends collectively defining an image input face and an image screen, respectively, with the screen being oblique to the input face. Each of the waveguides includes a transparent core bound by a cladding layer having a lower index of refraction for effecting internal reflection of image light transmitted into the input face to project an image on the screen, with each of the cladding layers including a cladding cap integrally joined thereto at the waveguide second ends. Each of the cores is beveled at the waveguide second end so that the cladding cap is viewable through the transparent core. Each of the cladding caps is black for absorbing external ambient light incident upon the screen for improving contrast of the image projected internally on the screen.

  6. Fuzzy Neuron: Method and Hardware Realization

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J.; Prokop, Norman F.

    2014-01-01

    This innovation represents a method by which single-to-multi-input, single-to-many-output system transfer functions can be estimated from input/output data sets. This innovation can be run in the background while a system is operating under other means (e.g., through human operator effort), or may be utilized offline using data sets created from observations of the estimated system. It utilizes a set of fuzzy membership functions spanning the input space for each input variable. Linear combiners associated with combinations of input membership functions are used to create the output(s) of the estimator. Coefficients are adjusted online through the use of learning algorithms.
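The estimator described above — fuzzy membership functions spanning the input space feeding linear combiners whose coefficients are adjusted online — can be sketched in Python. This is a minimal single-input, single-output illustration with hypothetical triangular memberships and an LMS-style learning rule, not the NASA hardware realization:

```python
import math

def tri_memberships(x, centers, width):
    """Triangular fuzzy membership values of a scalar input x."""
    return [max(0.0, 1.0 - abs(x - c) / width) for c in centers]

def train_fuzzy_estimator(samples, centers, width, lr=0.5, epochs=50):
    """LMS-style online adjustment of the linear-combiner weights."""
    w = [0.0] * len(centers)
    for _ in range(epochs):
        for x, y in samples:
            mu = tri_memberships(x, centers, width)
            err = y - sum(wi * mi for wi, mi in zip(w, mu))
            for i, mi in enumerate(mu):
                w[i] += lr * err * mi
    return w

def predict(x, w, centers, width):
    mu = tri_memberships(x, centers, width)
    return sum(wi * mi for wi, mi in zip(w, mu))

# Learn y = sin(x) on [0, pi] from observed input/output pairs.
centers = [i * math.pi / 8 for i in range(9)]
data = [(i * math.pi / 40, math.sin(i * math.pi / 40)) for i in range(41)]
w = train_fuzzy_estimator(data, centers, math.pi / 8)
```

With memberships spaced one width apart they form a partition of unity, so the trained weights approximate the target function's values at the centers.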

  7. Development of neutron imaging beamline for NDT applications at Dhruva reactor, India

    NASA Astrophysics Data System (ADS)

    Shukla, Mayank; Roy, Tushar; Kashyap, Yogesh; Shukla, Shefali; Singh, Prashant; Ravi, Baribaddala; Patel, Tarun; Gadkari, S. C.

    2018-05-01

Thermal neutron imaging techniques such as radiography and tomography are very useful tools for various scientific investigations and industrial applications. Neutron radiography is complementary to X-ray radiography, since neutrons interact with the nucleus whereas X-rays interact with the orbital electrons. We present here the design and development of a neutron imaging beamline at the 100 MW Dhruva research reactor for neutron imaging applications such as radiography, tomography and phase-contrast imaging. Combinations of sapphire and bismuth single crystals have been used as a thermal neutron filter/gamma absorber at the input of a specially designed collimator to maximize the thermal-neutron-to-gamma ratio. The maximum neutron beam size has been restricted to ∼120 mm diameter at the sample position. A cadmium ratio of ∼250 with an L/D ratio of 160 and a thermal neutron flux of ∼4 × 10^7 n/cm^2·s at the sample position has been measured. In this paper, different aspects of the beamline design, such as the collimator, shielding, sample manipulator and digital imaging system, are described. Nondestructive radiography/tomography experiments on hydrogen concentration in Zr-alloy, aluminium foam, ceramic-metal seals etc. are also presented.
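The quoted collimation figure (L/D ≈ 160) trades beam flux against image sharpness. A small sketch of the standard neutron-radiography relations, assuming the usual geometric-unsharpness formula u = s/(L/D) and flux scaling as (D/L)² (illustrative values, not the beamline's actual geometry):

```python
def geometric_unsharpness(l_over_d, object_to_detector_mm):
    """Penumbra blur u = s / (L/D) for a divergent collimated beam."""
    return object_to_detector_mm / l_over_d

def relative_flux_gain(l_over_d_old, l_over_d_new):
    """Collimated beam flux scales as (D/L)^2: relaxing L/D buys flux."""
    return (l_over_d_old / l_over_d_new) ** 2

blur_mm = geometric_unsharpness(160, 20.0)  # 20 mm sample-detector gap -> 0.125 mm blur
gain = relative_flux_gain(160, 80)          # halving L/D quadruples flux
```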

  8. EEG source imaging during two Qigong meditations.

    PubMed

    Faber, Pascal L; Lehmann, Dietrich; Tei, Shisei; Tsujiuchi, Takuya; Kumano, Hiroaki; Pascual-Marqui, Roberto D; Kochi, Kieko

    2012-08-01

    Experienced Qigong meditators who regularly perform the exercises "Thinking of Nothing" and "Qigong" were studied with multichannel EEG source imaging during their meditations. The intracerebral localization of brain electric activity during the two meditation conditions was compared using sLORETA functional EEG tomography. Differences between conditions were assessed using t statistics (corrected for multiple testing) on the normalized and log-transformed current density values of the sLORETA images. In the EEG alpha-2 frequency, 125 voxels differed significantly; all were more active during "Qigong" than "Thinking of Nothing," forming a single cluster in parietal Brodmann areas 5, 7, 31, and 40, all in the right hemisphere. In the EEG beta-1 frequency, 37 voxels differed significantly; all were more active during "Thinking of Nothing" than "Qigong," forming a single cluster in prefrontal Brodmann areas 6, 8, and 9, all in the left hemisphere. Compared to combined initial-final no-task resting, "Qigong" showed activation in posterior areas whereas "Thinking of Nothing" showed activation in anterior areas. The stronger activity of posterior (right) parietal areas during "Qigong" and anterior (left) prefrontal areas during "Thinking of Nothing" may reflect a predominance of self-reference, attention and input-centered processing in the "Qigong" meditation, and of control-centered processing in the "Thinking of Nothing" meditation.
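The voxel-wise statistic described — a t test on normalized, log-transformed current density values between the two conditions — can be sketched for a single voxel (illustrative numbers, not the study's data):

```python
import math

def paired_t(cond_a, cond_b):
    """Paired t statistic on log-transformed current densities for one voxel."""
    diffs = [math.log(a) - math.log(b) for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# One voxel's current density across 10 hypothetical subjects, two conditions.
qigong  = [2.1, 2.4, 1.9, 2.6, 2.2, 2.5, 2.0, 2.3, 2.7, 2.2]
nothing = [1.6, 1.8, 1.5, 1.9, 1.7, 2.0, 1.6, 1.7, 2.1, 1.8]
t = paired_t(qigong, nothing)
```

In the study this statistic is computed for every voxel of the sLORETA image and then corrected for multiple testing.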

  9. Computer program for single input-output, single-loop feedback systems

    NASA Technical Reports Server (NTRS)

    1976-01-01

Additional work is reported on a completely automatic computer program for the design of single input/output, single-loop feedback systems with parameter uncertainty, to satisfy time-domain bounds on the system response to step commands and disturbances. The inputs to the program are basically the specified time-domain response bounds, the form of the constrained plant transfer function, and the ranges of the uncertain parameters of the plant. The program output consists of the transfer functions of the two free compensation networks, in the form of the coefficients of the numerator and denominator polynomials, and the data on the prescribed bounds and the extremes actually obtained for the system response to commands and disturbances.

  10. A comparison of ordinary fuzzy and intuitionistic fuzzy approaches in visualizing the image of flat electroencephalography

    NASA Astrophysics Data System (ADS)

    Zenian, Suzelawati; Ahmad, Tahir; Idris, Amidora

    2017-09-01

Medical imaging is a subfield of image processing that deals with medical images. It is crucial for visualizing body parts in a non-invasive way using appropriate image processing techniques. Generally, image processing is used to enhance the visual appearance of images for further interpretation. However, the pixel values of an image may not be precise, as uncertainty arises within the gray values of an image due to several factors. In this paper, the input and output images of Flat Electroencephalography (fEEG) of an epileptic patient at varied times are presented. Furthermore, ordinary fuzzy and intuitionistic fuzzy approaches are implemented on the input images and the results of the two approaches are compared.
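The two approaches compared above can be sketched for a single gray value. An ordinary fuzzy set assigns only a membership degree; an intuitionistic fuzzy set adds a non-membership degree and a hesitation degree that models the gray-level uncertainty. The Sugeno-type generator used below for the non-membership is an assumption, since the record does not specify the construction:

```python
def fuzzy_membership(gray, g_max=255):
    """Ordinary fuzzy set: membership is the normalized gray level."""
    return gray / g_max

def intuitionistic(gray, lam=2.0, g_max=255):
    """Intuitionistic fuzzy set via a Sugeno-type generator:
    returns (membership, non-membership, hesitation)."""
    mu = gray / g_max
    nu = (1.0 - mu) / (1.0 + lam * mu)  # non-membership
    pi = 1.0 - mu - nu                  # hesitation: leftover uncertainty
    return mu, nu, pi

mu, nu, pi = intuitionistic(128)
```

The three degrees always sum to one; the ordinary fuzzy case is recovered when the hesitation degree is zero.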

  11. A Method for Evaluating Tuning Functions of Single Neurons based on Mutual Information Maximization

    NASA Astrophysics Data System (ADS)

    Brostek, Lukas; Eggert, Thomas; Ono, Seiji; Mustari, Michael J.; Büttner, Ulrich; Glasauer, Stefan

    2011-03-01

We introduce a novel approach for evaluating neuronal tuning functions, which can be expressed by the conditional probability of observing a spike given any combination of independent variables. This probability can be estimated from experimentally available data. By maximizing the mutual information between the probability distribution of the spike occurrence and that of the variables, the dependence of the spike on the input variables is maximized as well. We used this method to analyze the dependence of neuronal activity in cortical area MSTd on signals related to movement of the eye and retinal image movement.
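The core quantity — mutual information between the spike occurrence and a discretized input variable, estimated from observed data — can be sketched as follows (synthetic spike trains, not MSTd recordings):

```python
import math

def mutual_information(stimulus_bins, spikes):
    """I(spike; stimulus) in bits, from a discretized stimulus sequence
    and a binary spike train of the same length."""
    n = len(spikes)
    p_s, p_r, joint = {}, {}, {}
    for s, r in zip(stimulus_bins, spikes):
        p_s[s] = p_s.get(s, 0) + 1
        p_r[r] = p_r.get(r, 0) + 1
        joint[(s, r)] = joint.get((s, r), 0) + 1
    mi = 0.0
    for (s, r), c in joint.items():
        p_sr = c / n
        # p_sr * log2( p_sr / (p(s) * p(r)) )
        mi += p_sr * math.log2(c * n / (p_s[s] * p_r[r]))
    return mi

stim = [i % 2 for i in range(1000)]   # two stimulus bins, alternating
tuned = stim[:]                       # fires exactly when the stimulus bin is 1
flat = [0] * 500 + [1] * 500          # firing unrelated to the stimulus bin
```

A perfectly tuned cell carries one full bit about this two-bin stimulus; an untuned cell carries none, which is the dependence the estimation procedure maximizes.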

  12. Converging levels of analysis in the cognitive neuroscience of visual attention.

    PubMed Central

    Duncan, J

    1998-01-01

    Experiments using behavioural, lesion, functional imaging and single neuron methods are considered in the context of a neuropsychological model of visual attention. According to this model, inputs compete for representation in multiple visually responsive brain systems, sensory and motor, cortical and subcortical. Competition is biased by advance priming of neurons responsive to current behavioural targets. Across systems competition is integrated such that the same, selected object tends to become dominant throughout. The behavioural studies reviewed concern divided attention within and between modalities. They implicate within-modality competition as one main restriction on concurrent stimulus identification. In contrast to the conventional association of lateral attentional focus with parietal lobe function, the lesion studies show attentional bias to be a widespread consequence of unilateral cortical damage. Although the clinical syndrome of unilateral neglect may indeed be associated with parietal lesions, this probably reflects an assortment of further deficits accompanying a simple attentional imbalance. The functional imaging studies show joint involvement of lateral prefrontal and occipital cortex in lateral attentional focus and competition. The single unit studies suggest how competition in several regions of extrastriate cortex is biased by advance priming of neurons responsive to current behavioural targets. Together, the concepts of competition, priming and integration allow a unified theoretical approach to findings from behavioural to single neuron levels. PMID:9770224

  13. Massively parallel unsupervised single-particle cryo-EM data clustering via statistical manifold learning

    PubMed Central

    Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi

    2017-01-01

Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into the wrong classes with decreasing signal-to-noise ratio (SNR) in the image data, while demanding increased computational cost. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization. PMID:28786986

  14. Massively parallel unsupervised single-particle cryo-EM data clustering via statistical manifold learning.

    PubMed

    Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi; Mao, Youdong

    2017-01-01

Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into the wrong classes with decreasing signal-to-noise ratio (SNR) in the image data, while demanding increased computational cost. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.
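As a point of comparison, the K-means baseline that the record contrasts with GTM can be sketched on tiny synthetic "particle images" (deterministic initialization for reproducibility; the real pipeline operates on large, very low-SNR cryo-EM images where this baseline degrades):

```python
import random

def kmeans(points, k, iters=25):
    """Plain K-means, the reference-free baseline contrasted with GTM.
    Deterministically initialized from the first k samples."""
    centers = [list(points[i]) for i in range(k)]
    dist2 = lambda p, c: sum((a - b) ** 2 for a, b in zip(p, c))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return [min(range(k), key=lambda i: dist2(p, centers[i])) for p in points]

# Two 16-pixel class templates observed under additive Gaussian noise.
rng = random.Random(1)
templates = ([0.0] * 16, [1.0] * 16)
noisy = [[v + rng.gauss(0.0, 0.3) for v in templates[i % 2]] for i in range(40)]
labels = kmeans(noisy, 2)
```

At this SNR the two classes are well separated and K-means recovers them; the paper's point is that this breaks down as noise grows.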

  15. Ultra-wideband receiver

    DOEpatents

    McEwan, T.E.

    1994-09-06

    An ultra-wideband (UWB) receiver utilizes a strobed input line with a sampler connected to an amplifier. In a differential configuration, [+-] UWB inputs are connected to separate antennas or to two halves of a dipole antenna. The two input lines include samplers which are commonly strobed by a gating pulse with a very low duty cycle. In a single ended configuration, only a single strobed input line and sampler is utilized. The samplers integrate, or average, up to 10,000 pulses to achieve high sensitivity and good rejection of uncorrelated signals. 16 figs.

  16. Ultra-wideband receiver

    DOEpatents

    McEwan, T.E.

    1996-06-04

    An ultra-wideband (UWB) receiver utilizes a strobed input line with a sampler connected to an amplifier. In a differential configuration, {+-}UWB inputs are connected to separate antennas or to two halves of a dipole antenna. The two input lines include samplers which are commonly strobed by a gating pulse with a very low duty cycle. In a single ended configuration, only a single strobed input line and sampler is utilized. The samplers integrate, or average, up to 10,000 pulses to achieve high sensitivity and good rejection of uncorrelated signals. 21 figs.
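The samplers' integration of up to 10,000 pulses is what buys the sensitivity: the repetitive pulse adds coherently while uncorrelated noise averages down roughly as one over the square root of the pulse count. A minimal numeric sketch of that averaging (synthetic pulse and noise figures, not the patented circuit):

```python
import random

def averaged_samples(pulse, n_repeats, noise_sigma, seed=0):
    """Integrate n strobed acquisitions of a repetitive pulse: the correlated
    signal survives, uncorrelated noise shrinks roughly as 1/sqrt(n)."""
    rng = random.Random(seed)
    acc = [0.0] * len(pulse)
    for _ in range(n_repeats):
        for i, s in enumerate(pulse):
            acc[i] += s + rng.gauss(0.0, noise_sigma)
    return [a / n_repeats for a in acc]

pulse = [0.0, 0.0, 1.0, 0.0, 0.0]          # repetitive UWB return, SNR = 1 per shot
out = averaged_samples(pulse, 10000, 1.0)   # residual noise std ~ 0.01
```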

  17. Input Scanners: A Growing Impact In A Diverse Marketplace

    NASA Astrophysics Data System (ADS)

    Marks, Kevin E.

    1989-08-01

    Just as newly invented photographic processes revolutionized the printing industry at the turn of the century, electronic imaging has affected almost every computer application today. To completely emulate traditionally mechanical means of information handling, computer based systems must be able to capture graphic images. Thus, there is a widespread need for the electronic camera, the digitizer, the input scanner. This paper will review how various types of input scanners are being used in many diverse applications. The following topics will be covered: - Historical overview of input scanners - New applications for scanners - Impact of scanning technology on select markets - Scanning systems issues

  18. Scheme of Optical Image Encryption with Digital Information Input and Dynamic Encryption Key based on Two LC SLMs

    NASA Astrophysics Data System (ADS)

    Bondareva, A. P.; Cheremkhin, P. A.; Evtikhiev, N. N.; Krasnov, V. V.; Starikov, S. N.

A scheme of optical image encryption with digital information input and a dynamic encryption key, based on two liquid crystal spatial light modulators and operating with spatially incoherent monochromatic illumination, is experimentally implemented. Results of experiments on optical encryption and numerical decryption of images are presented. A satisfactory decryption error of 0.20–0.27 is achieved.

  19. Reconstruction of an input function from a dynamic PET water image using multiple tissue curves

    NASA Astrophysics Data System (ADS)

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Yuka; Nishiyama, Yoshihiro

    2016-08-01

Quantification of cerebral blood flow (CBF) is important for the understanding of normal and pathologic brain physiology. When CBF is assessed using PET with H₂¹⁵O or C¹⁵O₂, its calculation requires an arterial input function, which generally requires invasive arterial blood sampling. The aim of the present study was to develop a new technique to reconstruct an image-derived input function (IDIF) from a dynamic H₂¹⁵O PET image as a completely non-invasive approach. Our technique consists of a formula that expresses the input using a tissue curve with a rate-constant parameter. For multiple tissue curves extracted from the dynamic image, the rate constants were estimated so as to minimize the sum of the differences of the reproduced inputs expressed by the extracted tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n = 29) and was compared to the blood sampling method. Simulation studies were performed to examine the magnitude of potential biases in CBF and to optimize the number of tissue curves used for the input reconstruction. In the PET study, the estimated IDIFs agreed well with the measured ones. The difference between the CBF values calculated by the two methods was small, around <8%, and the calculated CBF values showed a tight correlation (r = 0.97). The simulation showed that errors associated with the assumed parameters were <10%, and that the optimal number of tissue curves was around 500. Our results demonstrate that an IDIF can be reconstructed directly from tissue curves obtained through H₂¹⁵O PET imaging. This suggests the possibility of a completely non-invasive technique to assess CBF in patho-physiological studies.
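The reconstruction idea — expressing the input through each tissue curve and a rate-constant model, then averaging the inputs reproduced from many tissue curves — can be sketched with a one-tissue compartment model (an assumed simplification; the paper's exact formula and estimation of the rate constants may differ):

```python
import math

def tissue_curve(ca, k1, k2, dt):
    """Forward one-tissue compartment model, dCt/dt = K1*Ca - k2*Ct (Euler)."""
    ct, c = [], 0.0
    for a in ca:
        c += dt * (k1 * a - k2 * c)
        ct.append(c)
    return ct

def reproduce_input(ct, k1, k2, dt):
    """Invert the same discretization: Ca = (dCt/dt + k2*Ct) / K1."""
    ca, prev = [], 0.0
    for c in ct:
        ca.append(((c - prev) / dt + k2 * prev) / k1)
        prev = c
    return ca

dt = 0.1
times = [i * dt for i in range(100)]
true_input = [t * math.exp(-t) for t in times]      # gamma-like bolus shape
rates = [(0.5, 0.1), (0.3, 0.05), (0.8, 0.15)]      # hypothetical (K1, k2) per region
curves = [tissue_curve(true_input, k1, k2, dt) for k1, k2 in rates]
estimates = [reproduce_input(c, k1, k2, dt) for c, (k1, k2) in zip(curves, rates)]
idif = [sum(e[i] for e in estimates) / len(estimates) for i in range(len(times))]
```

With the true rate constants the inversion is exact; in practice the rates are unknown and are fitted so that the inputs reproduced from different tissue curves agree with one another.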

  20. Opacity annotation of diffuse lung diseases using deep convolutional neural network with multi-channel information

    NASA Astrophysics Data System (ADS)

    Mabu, Shingo; Kido, Shoji; Hashimoto, Noriaki; Hirano, Yasushi; Kuremoto, Takashi

    2018-02-01

    This research proposes a multi-channel deep convolutional neural network (DCNN) for computer-aided diagnosis (CAD) that classifies normal and abnormal opacities of diffuse lung diseases in Computed Tomography (CT) images. Because CT images are gray scale, DCNN usually uses one channel for inputting image data. On the other hand, this research uses multi-channel DCNN where each channel corresponds to the original raw image or the images transformed by some preprocessing techniques. In fact, the information obtained only from raw images is limited and some conventional research suggested that preprocessing of images contributes to improving the classification accuracy. Thus, the combination of the original and preprocessed images is expected to show higher accuracy. The proposed method realizes region of interest (ROI)-based opacity annotation. We used lung CT images taken in Yamaguchi University Hospital, Japan, and they are divided into 32 × 32 ROI images. The ROIs contain six kinds of opacities: consolidation, ground-glass opacity (GGO), emphysema, honeycombing, nodular, and normal. The aim of the proposed method is to classify each ROI into one of the six opacities (classes). The DCNN structure is based on VGG network that secured the first and second places in ImageNet ILSVRC-2014. From the experimental results, the classification accuracy of the proposed method was better than the conventional method with single channel, and there was a significant difference between them.
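Assembling the multi-channel input — the raw ROI alongside preprocessed versions of it in separate channels — can be sketched as follows. The two transforms here (min-max contrast stretching and a horizontal gradient) are illustrative stand-ins; the paper's actual preprocessing techniques are not specified in this record:

```python
def contrast_stretch(img):
    """Min-max normalize gray levels to [0, 1]."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    span = (hi - lo) or 1
    return [[(v - lo) / span for v in row] for row in img]

def gradient_channel(img):
    """Central-difference horizontal gradient magnitude (zero at borders)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w - 1):
            out[y][x] = abs(img[y][x + 1] - img[y][x - 1]) / 2.0
    return out

def to_multichannel(roi):
    """Stack the raw ROI with two preprocessed views as DCNN input channels."""
    return [roi, contrast_stretch(roi), gradient_channel(roi)]

roi = [[float((x + y) % 256) for x in range(32)] for y in range(32)]  # dummy 32x32 ROI
channels = to_multichannel(roi)
```

Each 32 × 32 ROI thus becomes a channels × height × width tensor, which is the shape a multi-channel DCNN expects in place of the usual single gray-scale channel.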

  1. Identification of single-input-single-output quantum linear systems

    NASA Astrophysics Data System (ADS)

Levitt, Matthew; Guţă, Mădălin

    2017-03-01

    The purpose of this paper is to investigate system identification for single-input-single-output general (active or passive) quantum linear systems. For a given input we address the following questions: (1) Which parameters can be identified by measuring the output? (2) How can we construct a system realization from sufficient input-output data? We show that for time-dependent inputs, the systems which cannot be distinguished are related by symplectic transformations acting on the space of system modes. This complements a previous result of Guţă and Yamamoto [IEEE Trans. Autom. Control 61, 921 (2016), 10.1109/TAC.2015.2448491] for passive linear systems. In the regime of stationary quantum noise input, the output is completely determined by the power spectrum. We define the notion of global minimality for a given power spectrum, and characterize globally minimal systems as those with a fully mixed stationary state. We show that in the case of systems with a cascade realization, the power spectrum completely fixes the transfer function, so the system can be identified up to a symplectic transformation. We give a method for constructing a globally minimal subsystem direct from the power spectrum. Restricting to passive systems the analysis simplifies so that identifiability may be completely understood from the eigenvalues of a particular system matrix.

  2. A virtual size-variable pinhole for single photon confocal microscopy

    NASA Astrophysics Data System (ADS)

    Gao, Guangjun; Khoobehi, Bahram

    2013-03-01

The pinhole is a critical device in single photon confocal microscopy (SPCM), owing to its ability to block background noise scattered from in front of and behind the focal plane. Without a pinhole, the sectioning ability of SPCM is degraded and background noise appears together with the useful signal, sometimes submerging the details of interest. However, a pinhole with too small a diameter blocks part of the signal along with the background noise and decreases the intensity of the image. Therefore, in many cases the pinhole size should be selected carefully. Unfortunately, because of mechanical constraints, a pinhole that can change its size continuously, for example from 10 μm to 100 μm, is unavailable. Most commercial confocal microscopes provide only several discrete pinhole sizes, such as 10 μm, 30 μm and 60 μm. Things are even harder for imaging systems that use the input interface of a single-mode fiber as the pinhole of the SPCM: the pinhole size of these systems is fixed, which severely limits the optimization of system performance. In this paper, we design a size-variable pinhole setup that offers a virtual pinhole with an adjustable diameter, comprising a physical pinhole (or single-mode fiber) and a carefully designed zoom relay (ZR) optical system. The magnification ratio of this ZR can vary smoothly while keeping the conjugation distance unchanged. The aberrations of the ZR are well balanced and diffraction-limited imaging performance is obtained, so that the virtual pinhole can block background scattering noise and pass the in-focus signal effectively and accurately. Simulation results are also provided and discussed.

  3. Three-dimensional image display system using stereogram and holographic optical memory techniques

    NASA Astrophysics Data System (ADS)

    Kim, Cheol S.; Kim, Jung G.; Shin, Chang-Mok; Kim, Soo-Joong

    2001-09-01

In this paper, we implemented a three-dimensional image display system using stereogram and holographic optical memory techniques, which can store many images and reconstruct them automatically. In this system, to store and reconstruct stereo images, the incident angle of the reference beam must be controlled in real time, so we used a BPH (binary phase hologram) and an LCD (liquid crystal display) for controlling the reference beam. The input images are represented on the LCD without a polarizer/analyzer to maintain uniform beam intensities regardless of the brightness of the input images. The input images and BPHs are edited using application software with the same scheduled recording time interval during storage. The reconstructed stereo images are acquired by capturing the output images with a CCD camera behind the analyzer, which transforms phase information into brightness information. The reference beams are obtained by Fourier transform of the BPH, which was designed with an SA (simulated annealing) algorithm, and are represented on the LCD at 0.05 s intervals using application software for reconstructing the stereo images. In the output plane, we used an LCD shutter synchronized to a monitor that displays alternating left- and right-eye images for depth perception. We demonstrated an optical experiment that repeatedly stores and reconstructs four stereo images in BaTiO3 using holographic optical memory techniques.

  4. Vector generator scan converter

    DOEpatents

    Moore, J.M.; Leighton, J.F.

    1988-02-05

    High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold. 7 figs.

  5. Sharpening of Hierarchical Visual Feature Representations of Blurred Images.

    PubMed

    Abdelhack, Mohamed; Kamitani, Yukiyasu

    2018-01-01

    The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.

  6. Three-Dimensional ISAR Imaging Method for High-Speed Targets in Short-Range Using Impulse Radar Based on SIMO Array.

    PubMed

    Zhou, Xinpeng; Wei, Guohua; Wu, Siliang; Wang, Dawei

    2016-03-11

    This paper proposes a three-dimensional inverse synthetic aperture radar (ISAR) imaging method for high-speed targets in short-range using an impulse radar. According to the requirements for high-speed target measurement in short-range, this paper establishes the single-input multiple-output (SIMO) antenna array, and further proposes a missile motion parameter estimation method based on impulse radar. By analyzing the motion geometry relationship of the warhead scattering center after translational compensation, this paper derives the receiving antenna position and the time delay after translational compensation, and thus overcomes the shortcomings of conventional translational compensation methods. By analyzing the motion characteristics of the missile, this paper estimates the missile's rotation angle and the rotation matrix by establishing a new coordinate system. Simulation results validate the performance of the proposed algorithm.

  7. Detection of Unilateral Hearing Loss by Stationary Wavelet Entropy.

    PubMed

    Zhang, Yudong; Nayak, Deepak Ranjan; Yang, Ming; Yuan, Ti-Fei; Liu, Bin; Lu, Huimin; Wang, Shuihua

    2017-01-01

Sensorineural hearing loss is correlated with massive neurological or psychiatric disease. T1-weighted volumetric images were acquired from fourteen subjects with right-sided hearing loss (RHL), fifteen subjects with left-sided hearing loss (LHL), and twenty healthy controls (HC). We treated a three-class classification problem: HC, LHL, and RHL. Stationary wavelet entropy was employed to extract global features from the magnetic resonance images of each subject. Those stationary wavelet entropy features were used as input to a single-hidden-layer feedforward neural network classifier. The results of 10 repetitions of 10-fold cross validation show that the accuracies for HC, LHL, and RHL are 96.94%, 97.14%, and 97.35%, respectively. Our developed system is promising and effective in detecting hearing loss.
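The feature extraction step — entropy of stationary (undecimated) wavelet coefficients — can be sketched in one dimension with a Haar filter. The study's wavelet choice and decomposition depth are not given in this record, so this level-1 Haar version is only an illustration:

```python
import math

def swt_haar_detail(x):
    """Level-1 stationary (undecimated) Haar detail coefficients, circular:
    no downsampling, so the output has the same length as the input."""
    n = len(x)
    return [(x[i] - x[(i + 1) % n]) / 2.0 for i in range(n)]

def wavelet_entropy(x):
    """Shannon entropy of the normalized detail-coefficient energy."""
    d = swt_haar_detail(x)
    energy = [c * c for c in d]
    total = sum(energy) or 1.0
    p = [e / total for e in energy]
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

flat = [1.0] * 8                                       # no detail -> zero entropy
rough = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]   # detail spread evenly
```

A flat signal yields zero entropy while a signal whose detail energy is spread over all coefficients yields the maximum log2(n) bits, which is why the entropy works as a compact global texture feature.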

  8. Burst-induced anti-Hebbian depression acts through short-term synaptic dynamics to cancel redundant sensory signals.

    PubMed

    Harvey-Girard, Erik; Lewis, John; Maler, Leonard

    2010-04-28

Weakly electric fish can enhance the detection and localization of important signals such as those of prey in part by cancellation of redundant spatially diffuse electric signals due to, e.g., their tail bending. The cancellation mechanism is based on descending input, conveyed by parallel fibers emanating from cerebellar granule cells, that produces a negative image of the global low-frequency signals in pyramidal cells within the first-order electrosensory region, the electrosensory lateral line lobe (ELL). Here we demonstrate that the parallel fiber synaptic input to ELL pyramidal cells undergoes long-term depression (LTD) whenever both parallel fiber afferents and their target cells are stimulated to produce paired burst discharges. Paired large bursts (4-4) induce robust LTD over pre-post delays of up to ±50 ms, whereas smaller bursts (2-2) induce weaker LTD. Single spikes (either presynaptic or postsynaptic) paired with bursts did not induce LTD. Tetanic presynaptic stimulation was also ineffective in inducing LTD. Thus, we have demonstrated a form of anti-Hebbian LTD that depends on the temporal correlation of burst discharge. We then demonstrated that the burst-induced LTD is postsynaptic and requires the NR2B subunit of the NMDA receptor, elevation of postsynaptic Ca²⁺, and activation of CaMKIIβ. A model incorporating local inhibitory circuitry and previously identified short-term presynaptic potentiation of the parallel fiber synapses further suggests that the combination of burst-induced LTD, presynaptic potentiation, and local inhibition may be sufficient to explain the generation of the negative image and cancellation of redundant sensory input by ELL pyramidal cells.
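The negative-image idea — a plastic synapse that learns to cancel a predictable, redundant input — can be sketched with a single scalar weight. This is a toy rate-model illustration of the cancellation principle, not the spike-timing burst-LTD rule characterized in the paper:

```python
def learn_negative_image(sensory, parallel_fiber, lr=0.05, epochs=200):
    """Adapt a parallel-fiber weight until it subtracts (cancels) the
    predictable component of the sensory input from the cell's response."""
    w = 0.0
    for _ in range(epochs):
        for pf, s in zip(parallel_fiber, sensory):
            out = s - w * pf        # pyramidal response after cancellation
            w += lr * out * pf      # drives the residual response toward zero
    return w

# A redundant tail-bend signal perfectly predicted by the parallel-fiber copy.
pf = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
redundant = [2.0 * v for v in pf]
w = learn_negative_image(redundant, pf)
residual = [s - w * p for s, p in zip(redundant, pf)]
```

After learning, the weighted parallel-fiber input is the negative image of the redundant signal, and the cell's residual response is essentially zero.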

  9. Robust Mapping of Incoherent Fiber-Optic Bundles

    NASA Technical Reports Server (NTRS)

    Roberts, Harry E.; Deason, Brent E.; DePlachett, Charles P.; Pilgrim, Robert A.; Sanford, Harold S.

    2007-01-01

    A method and apparatus for mapping between the positions of fibers at opposite ends of incoherent fiber-optic bundles have been invented to enable the use of such bundles to transmit images in visible or infrared light. The method is robust in the sense that it provides useful mapping even for a bundle that contains thousands of narrow, irregularly packed fibers, some of which may be defective. In a coherent fiber-optic bundle, the input and output ends of each fiber lie at identical positions in the input and output planes; therefore, the bundle can be used to transmit images without further modification. Unfortunately, the fabrication of coherent fiber-optic bundles is too labor-intensive and expensive for many applications. An incoherent fiber-optic bundle can be fabricated more easily and at lower cost, but it produces a scrambled image because the position of the end of each fiber in the input plane is generally different from the end of the same fiber in the output plane. However, the image transmitted by an incoherent fiber-optic bundle can be unscrambled (or, from a different perspective, decoded) by digital processing of the output image if the mapping between the input and output fiber-end positions is known. Thus, the present invention enables the use of relatively inexpensive fiber-optic bundles to transmit images.
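
    Once the end-to-end fiber map is known, unscrambling reduces to a pixel permutation. A minimal sketch on a flattened image follows; the array names and the mapping convention are illustrative assumptions, not the patent's notation:

```python
import numpy as np

def unscramble(scrambled, mapping):
    """Recover the input-plane image from the output-plane pixels.

    mapping[i] = flattened output-plane index of the fiber whose
    input end sits at flattened input-plane position i.
    """
    return scrambled[mapping]

# a known permutation scrambles the image; indexing with the map undoes it
rng = np.random.default_rng(1)
image = rng.random(16)              # flattened "input plane"
mapping = rng.permutation(16)       # where each input fiber ends up
scrambled = np.empty(16)
scrambled[mapping] = image          # the incoherent bundle scrambles pixels
restored = unscramble(scrambled, mapping)
```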

  10. NeuroSeek dual-color image processing infrared focal plane array

    NASA Astrophysics Data System (ADS)

    McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.

    1998-09-01

    Several technologies have been developed in recent years to advance the state of the art of IR sensor systems, including affordable dual-color focal planes, on-focal-plane biologically inspired image and signal processing techniques, and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor midwave-IR/longwave-IR radiometric response with on-focal-plane 'smart' neuromorphic analog image processing. The readout-and-processing very-large-scale-integration chip developed under this effort will be hybridized to a dual-color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by the application of massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.

  11. High-Fidelity Microstructural Characterization and Performance Modeling of Aluminized Composite Propellant

    DOE PAGES

    Kosiba, Graham D.; Wixom, Ryan R.; Oehlschlaeger, Matthew A.

    2017-10-27

    Image processing and stereological techniques were used to characterize the heterogeneity of composite propellant and inform a predictive burn rate model. Composite propellant samples made up of ammonium perchlorate (AP), hydroxyl-terminated polybutadiene (HTPB), and aluminum (Al) were faced with an ion mill and imaged with a scanning electron microscope (SEM) and x-ray tomography (micro-CT). Properties of both the bulk and individual components of the composite propellant were determined from a variety of image processing tools. An algebraic model, based on the improved Beckstead-Derr-Price model developed by Cohen and Strand, was used to predict the steady-state burning of the aluminized composite propellant. In the presented model the presence of aluminum particles within the propellant was introduced. The thermal effects of aluminum particles are accounted for at the solid-gas propellant surface interface and aluminum combustion is considered in the gas phase using a single global reaction. In conclusion, properties derived from image processing were used directly as model inputs, leading to a sample-specific predictive combustion model.

  12. High-Fidelity Microstructural Characterization and Performance Modeling of Aluminized Composite Propellant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosiba, Graham D.; Wixom, Ryan R.; Oehlschlaeger, Matthew A.

    Image processing and stereological techniques were used to characterize the heterogeneity of composite propellant and inform a predictive burn rate model. Composite propellant samples made up of ammonium perchlorate (AP), hydroxyl-terminated polybutadiene (HTPB), and aluminum (Al) were faced with an ion mill and imaged with a scanning electron microscope (SEM) and x-ray tomography (micro-CT). Properties of both the bulk and individual components of the composite propellant were determined from a variety of image processing tools. An algebraic model, based on the improved Beckstead-Derr-Price model developed by Cohen and Strand, was used to predict the steady-state burning of the aluminized composite propellant. In the presented model the presence of aluminum particles within the propellant was introduced. The thermal effects of aluminum particles are accounted for at the solid-gas propellant surface interface and aluminum combustion is considered in the gas phase using a single global reaction. In conclusion, properties derived from image processing were used directly as model inputs, leading to a sample-specific predictive combustion model.

  13. CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.

    PubMed

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-12-31

    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric, and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates to variation in the performance of classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way compared to the conventional approach.

  14. Joint detection and tracking of size-varying infrared targets based on block-wise sparse decomposition

    NASA Astrophysics Data System (ADS)

    Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu

    2016-05-01

    The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapped blocks, and each block is weighted by the local image complexity and target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and provides the corresponding target existence probabilities for detection. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, since it makes no special assumptions about the size and shape of small targets. Because of the exact decomposition, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter and detect and track size-varying targets in infrared images.

  15. Development of a fusion approach selection tool

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Zeng, Y.

    2015-06-01

    During the last decades, the number and quality of available remote sensing satellite sensors for Earth observation have grown significantly. The amount of available multi-sensor images, along with their increased spatial and spectral resolution, provides new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST), the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. In the meantime, the user has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options to combine remote sensing images, not to mention the selection of the appropriate images, resolution, and bands. Image fusion can be a machine- and time-consuming endeavour. In addition, it requires knowledge about remote sensing, image fusion, digital image processing, and the application. FAST shall provide the user with a quick overview of processing flows to choose from in order to reach the target. FAST will ask for the available images, application parameters, and desired information, and will process this input to come up with a workflow to quickly obtain the best results. It will optimize data and image fusion techniques. It provides an overview of the possible results, from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.

  16. Hierarchical neural network model of the visual system determining figure/ground relation

    NASA Astrophysics Data System (ADS)

    Kikuchi, Masayuki

    2017-07-01

    One of the most important functions of visual perception in the brain is figure/ground interpretation of input images. Figural regions in a 2D image, corresponding to objects in 3D space, are distinguished from the background region extending behind the objects. Previously, the author proposed a neural network model of figure/ground separation built on the premise that local geometric features, such as curvatures and outer angles at corners, are extracted and propagated along the input contour in a single-layer network (Kikuchi & Akashi, 2001). However, such a processing principle has the defect that signal propagation requires many iterations, despite the fact that the actual visual system determines figure/ground relations within a short period (Zhou et al., 2000). In order to speed up the determination of figure/ground, this study incorporates a hierarchical architecture into the previous model. The effect of the hierarchization on computation time was confirmed by simulation. As the number of layers increased, the required computation time was reduced. However, this speed-up effect saturated once the number of layers grew beyond a certain point. This study attempted to explain the saturation effect using the notion of average distance between vertices from the field of complex networks, and succeeded in mimicking the saturation effect by computer simulation.

  17. Experimental Assessment and Enhancement of Planar Laser-Induced Fluorescence Measurements of Nitric Oxide in an Inverse Diffusion Flame

    NASA Technical Reports Server (NTRS)

    Partridge, William P.; Laurendeau, Normand M.

    1997-01-01

    We have experimentally assessed the quantitative nature of planar laser-induced fluorescence (PLIF) measurements of NO concentration in a unique atmospheric pressure, laminar, axial inverse diffusion flame (IDF). The PLIF measurements were assessed relative to a two-dimensional array of separate laser saturated fluorescence (LSF) measurements. We demonstrated and evaluated several experimentally-based procedures for enhancing the quantitative nature of PLIF concentration images. Because these experimentally-based PLIF correction schemes require only the ability to make PLIF and LSF measurements, they produce a more broadly applicable PLIF diagnostic compared to numerically-based correction schemes. We experimentally assessed the influence of interferences on both narrow-band and broad-band fluorescence measurements at atmospheric and high pressures. Optimum excitation and detection schemes were determined for the LSF and PLIF measurements. Single-input and multiple-input, experimentally-based PLIF enhancement procedures were developed for application in test environments with both negligible and significant quench-dependent error gradients. Each experimentally-based procedure provides an enhancement of approximately 50% in the quantitative nature of the PLIF measurements, and results in concentration images nominally as quantitative as LSF point measurements. These correction procedures can be applied to other species, including radicals, for which no experimental data are available from which to implement numerically-based PLIF enhancement procedures.

  18. Novel view synthesis by interpolation over sparse examples

    NASA Astrophysics Data System (ADS)

    Liang, Bodong; Chung, Ronald C.

    2006-01-01

    Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given some images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As it is, the problem has a natural tie to interpolation, even though mainstream efforts on the problem have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(xi,fi):i=1,2,3,...,n} of the function. If the input x is the viewpoint, and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt the example-based interpolation (EBI) mechanism-an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth with minimum oscillations between the examples. We point out that EBI, however, has difficulty in interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism for overcoming the limitation. We also present how the extended interpolation mechanism can be used to synthesize images at novel viewpoints. Real-image results show that the mechanism has promising performance, even with very few example images.
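
    The two properties claimed for EBI, exactness at the examples and smoothness between them, are shared by Gaussian radial-basis-function interpolation, which we use here as a stand-in sketch (this is not the paper's exact formulation; names and the kernel choice are ours):

```python
import numpy as np

def rbf_interpolate(xs, fs, xq, sigma=1.0):
    """Gaussian RBF interpolation: reproduces every example (xi, fi)
    exactly and varies smoothly between the examples."""
    xs = np.asarray(xs, float)
    fs = np.asarray(fs, float)
    # kernel matrix over the example inputs; positive definite for distinct xs
    K = np.exp(-(xs[:, None] - xs[None, :]) ** 2 / (2 * sigma ** 2))
    w = np.linalg.solve(K, fs)            # weights satisfying all examples
    k = np.exp(-(np.asarray(xq, float)[..., None] - xs) ** 2 / (2 * sigma ** 2))
    return k @ w
```

    Querying at an example input returns the example output exactly, because the query kernel row then coincides with a row of K.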

  19. Double image encryption in Fresnel domain using wavelet transform, gyrator transform and spiral phase masks

    NASA Astrophysics Data System (ADS)

    Kumar, Ravi; Bhaduri, Basanta

    2017-06-01

    In this paper, we propose a new technique for double image encryption in the Fresnel domain using the wavelet transform (WT), gyrator transform (GT) and spiral phase masks (SPMs). The two input images are first phase encoded, and each of them is then multiplied with an SPM and Fresnel propagated with distance d1 and d2, respectively. The single-level discrete WT is applied to the Fresnel-propagated complex images to decompose each into sub-band matrices, i.e. LL, HL, LH and HH. Further, the sub-band matrices of the two complex images are interchanged after modulation with random phase masks (RPMs) and subjected to the inverse discrete WT. The resulting images are then both added and subtracted to get intermediate images, which are further Fresnel propagated with distances d3 and d4, respectively. These outputs are finally gyrator transformed with the same angle α to get the encrypted images. The proposed technique provides enhanced security in terms of a large set of security keys. The sensitivity of security keys such as the SPM parameters, GT angle α, and Fresnel propagation distances is investigated. The robustness of the proposed technique against noise and occlusion attacks is also analysed. The numerical simulation results are shown in support of the validity and effectiveness of the proposed technique.

  20. Volume estimation using food specific shape templates in mobile image-based dietary assessment

    NASA Astrophysics Data System (ADS)

    Chae, Junghoon; Woo, Insoo; Kim, SungYe; Maciejewski, Ross; Zhu, Fengqing; Delp, Edward J.; Boushey, Carol J.; Ebert, David S.

    2011-03-01

    As obesity concerns mount, dietary assessment methods for prevention and intervention are being developed. These methods include recording, cataloging and analyzing daily dietary records to monitor energy and nutrient intakes. Given the ubiquity of mobile devices with built-in cameras, one possible means of improving dietary assessment is through photographing foods and inputting these images into a system that can determine the nutrient content of the foods in the images. One of the critical issues in such an image-based dietary assessment tool is the accurate and consistent estimation of food portion sizes. The objective of our study is to automatically estimate food volumes through the use of food-specific shape templates. In our system, users capture food images using a mobile phone camera. Based on information (i.e., food name and code) determined through segmentation and classification of the food images, our system chooses a particular template shape corresponding to each segmented food. Finally, our system reconstructs the three-dimensional properties of the food shape from a single image by extracting feature points in order to size the food shape template. By employing this template-based approach, our system automatically estimates food portion size, providing a consistent method for estimating food volume.

  1. Secret shared multiple-image encryption based on row scanning compressive ghost imaging and phase retrieval in the Fresnel domain

    NASA Astrophysics Data System (ADS)

    Li, Xianye; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2017-09-01

    A multiple-image encryption method is proposed that is based on row scanning compressive ghost imaging, (t, n) threshold secret sharing, and phase retrieval in the Fresnel domain. In the encryption process, after wavelet transform and Arnold transform of the target image, the ciphertext matrix can be first detected using a bucket detector. Based on a (t, n) threshold secret sharing algorithm, the measurement key used in the row scanning compressive ghost imaging can be decomposed and shared into two pairs of sub-keys, which are then reconstructed using two phase-only mask (POM) keys with fixed pixel values, placed in the input plane and transform plane 2 of the phase retrieval scheme, respectively; and the other POM key in the transform plane 1 can be generated and updated by the iterative encoding of each plaintext image. In each iteration, the target image acts as the input amplitude constraint in the input plane. During decryption, each plaintext image possessing all the correct keys can be successfully decrypted by measurement key regeneration, compression algorithm reconstruction, inverse wavelet transformation, and Fresnel transformation. Theoretical analysis and numerical simulations both verify the feasibility of the proposed method.

  2. Design of high energy laser pulse delivery in a multimode fiber for photoacoustic tomography.

    PubMed

    Ai, Min; Shu, Weihang; Salcudean, Tim; Rohling, Robert; Abolmaesumi, Purang; Tang, Shuo

    2017-07-24

    In photoacoustic tomography (PAT), delivering high energy pulses through optical fiber is critical for achieving high quality imaging. A fiber coupling scheme with a beam homogenizer is demonstrated for coupling high energy pulses into a single multimode fiber. This scheme can benefit PAT applications that require miniaturized illumination or internal illumination with a small fiber. The beam homogenizer is achieved by using a cross cylindrical lens array, which imposes a periodic spatial modulation on the phase of the input light. The lens array thus acts as a phase grating which diffracts the beam into a 2D diffraction pattern. Both theoretical analysis and experiments demonstrate that the focused beam can be split into a 2D spot array, which reduces the peak power on the fiber tip surface and thus enhances the coupling performance. The theoretical analysis of the intensity distribution of the focused beam is carried out using Fourier optics. In experiments, coupled energies of 48 mJ/pulse and 60 mJ/pulse have been achieved, with corresponding coupling efficiencies of 70% and 90% in a 1000-μm and a 1500-μm-core-diameter fiber, respectively. The high energy pulses delivered by the multimode fiber are further tested for PAT imaging in phantoms. PAT imaging of a printed dot array shows a large illumination area of 7 cm² under 5 mm of chicken breast tissue. In vivo imaging is also demonstrated on the human forearm. The large improvement in coupled energy can potentially benefit PAT with single-fiber delivery, to achieve large-area imaging and deep penetration detection.

  3. Estimating atmospheric parameters and reducing noise for multispectral imaging

    DOEpatents

    Conger, James Lynn

    2014-02-25

    A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image that was generated by the first phase and removes noise from the surface multispectral image by smoothing out change in average deviations of temperatures.
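
    The first phase's "corrected" image amounts to inverting the per-band radiative transfer relation L_obs = t · L_surf + L_atm once the atmospheric radiance and transmittance estimates are in hand. A minimal sketch under that assumption (the function and variable names are ours, not the patent's):

```python
import numpy as np

def correct_band(observed, path_radiance, transmittance):
    """Estimate surface radiance for one spectral band by inverting
    L_obs = t * L_surf + L_atm."""
    return (observed - path_radiance) / transmittance

# forward-simulate one band with known atmosphere, then recover the surface
surface = np.array([[10.0, 12.0], [8.0, 9.0]])   # true surface radiance
t, L_atm = 0.8, 2.0                              # transmittance, path radiance
observed = t * surface + L_atm
recovered = correct_band(observed, L_atm, t)
```

    The same inversion is applied independently to each spectral band; the second phase then smooths the resulting surface image.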

  4. Synaptic plasticity in a cerebellum-like structure depends on temporal order

    NASA Astrophysics Data System (ADS)

    Bell, Curtis C.; Han, Victor Z.; Sugawara, Yoshiko; Grant, Kirsty

    1997-05-01

    Cerebellum-like structures in fish appear to act as adaptive sensory processors, in which learned predictions about sensory input are generated and subtracted from actual sensory input, allowing unpredicted inputs to stand out [1-3]. Pairing sensory input with centrally originating predictive signals, such as corollary discharge signals linked to motor commands, results in neural responses to the predictive signals alone that are 'negative images' of the previously paired sensory responses. Adding these 'negative images' to actual sensory inputs minimizes the neural response to predictable sensory features. At the cellular level, sensory input is relayed to the basal region of Purkinje-like cells, whereas predictive signals are relayed by parallel fibres to the apical dendrites of the same cells [4]. The generation of negative images could be explained by plasticity at parallel fibre synapses [5-7]. We show here that such plasticity exists in the electrosensory lobe of mormyrid electric fish and that it has the necessary properties for such a model: it is reversible, anti-Hebbian (excitatory postsynaptic potentials (EPSPs) are depressed after pairing with a postsynaptic spike) and tightly dependent on the sequence of pre- and postsynaptic events, with depression occurring only if the postsynaptic spike follows EPSP onset within 60 ms.

  5. Convergence of Cortical and Sensory Driver Inputs on Single Thalamocortical Cells

    PubMed Central

    Groh, Alexander; Bokor, Hajnalka; Mease, Rebecca A.; Plattner, Viktor M.; Hangya, Balázs; Stroh, Albrecht; Deschenes, Martin; Acsády, László

    2014-01-01

    Ascending and descending information is relayed through the thalamus via strong, “driver” pathways. According to our current knowledge, different driver pathways are organized in parallel streams and do not interact at the thalamic level. Using an electron microscopic approach combined with optogenetics and in vivo physiology, we examined whether driver inputs arising from different sources can interact at single thalamocortical cells in the rodent somatosensory thalamus (nucleus posterior, POm). Both the anatomical and the physiological data demonstrated that ascending driver inputs from the brainstem and descending driver inputs from cortical layer 5 pyramidal neurons converge and interact on single thalamocortical neurons in POm. Both individual pathways displayed driver properties, but they interacted synergistically in a time-dependent manner and when co-activated, supralinearly increased the output of thalamus. As a consequence, thalamocortical neurons reported the relative timing between sensory events and ongoing cortical activity. We conclude that thalamocortical neurons can receive 2 powerful inputs of different origin, rather than only a single one as previously suggested. This allows thalamocortical neurons to integrate raw sensory information with powerful cortical signals and transfer the integrated activity back to cortical networks. PMID:23825316

  6. Autonomous system for Web-based microarray image analysis.

    PubMed

    Bozinov, Daniel

    2003-12-01

    Software-based feature extraction from DNA microarray images still requires human intervention on various levels. Manual adjustment of grid and metagrid parameters, precise alignment of superimposed grid templates and gene spots, or simply identification of large-scale artifacts have to be performed beforehand to reliably analyze DNA signals and correctly quantify their expression values. Ideally, a Web-based system with input solely confined to a single microarray image, and a data table as output containing measurements for all gene spots, would directly transform raw image data into abstracted gene expression tables. Sophisticated algorithms with advanced procedures for iterative correction can overcome the inherent challenges of such image processing. Herein we introduce an integrated software system with a Java-based interface on the client side that allows decentralized access and enables the scientist to instantly employ the most up-to-date software version at any given time. This software tool extends PixClust, as used in Extractiff, and incorporates Java Web Start deployment technology. Ultimately, this setup is destined for high-throughput pipelines in genome-wide medical diagnostics labs or microarray core facilities aimed at providing fully automated service to their users.

  7. Monitoring of sludge dewatering equipment by image classification

    NASA Astrophysics Data System (ADS)

    Maquine de Souza, Sandro; Grandvalet, Yves; Denoeux, Thierry

    2004-11-01

    Belt filter presses represent an economical means to dewater the residual sludge generated in wastewater treatment plants. In order to assure maximal water removal, the raw sludge is mixed with a chemical conditioner prior to being fed into the belt filter press. When the conditioner is properly dosed, the sludge acquires a coarse texture, with space between flocs. This information was exploited for the development of a software sensor, where digital images are the input signal, and the output is a numeric value proportional to the dewatered sludge dry content. Three families of features were used to characterize the textures. Gabor filtering, wavelet decomposition and co-occurrence matrix computation were the techniques used. A database of images, ordered by their corresponding dry contents, was used to calibrate the model that calculates the sensor output. The images were separated in groups that correspond to single experimental sessions. With the calibrated model, all images were correctly ranked within an experiment session. The results were very similar regardless of the family of features used. The output can be fed to a control system, or, in the case of fixed experiment conditions, it can be used to directly estimate the dewatered sludge dry content.
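
    Of the three feature families used above, the co-occurrence matrix is the simplest to sketch. A minimal numpy version for a single pixel offset, together with the common contrast statistic derived from it, is shown below; function names are ours, and the paper does not specify which co-occurrence statistics it used:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized grey-level co-occurrence matrix for the offset (dx, dy).
    img must hold integer grey levels in [0, levels)."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_contrast(g):
    """Contrast: large when co-occurring grey levels differ strongly,
    i.e. for coarse, high-variation textures."""
    i, j = np.indices(g.shape)
    return float(np.sum((i - j) ** 2 * g))
```

    A flat region yields zero contrast, while the coarse, well-flocculated sludge texture yields higher values, which is the kind of separation the software sensor's calibration exploits.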

  8. DSP+FPGA-based real-time histogram equalization system of infrared image

    NASA Astrophysics Data System (ADS)

    Gu, Dongsheng; Yang, Nansheng; Pi, Defu; Hua, Min; Shen, Xiaoyan; Zhang, Ruolan

    2001-10-01

    Histogram modification is a simple but effective method to enhance an infrared image. There are several methods to equalize an infrared image's histogram, suited to the different characteristics of different infrared images, such as the traditional HE (Histogram Equalization) method and the improved HP (Histogram Projection) and PE (Plateau Equalization) methods. To realize all of these methods in a single system, the system must have a large amount of memory and extremely high processing speed. In our system, we introduce a DSP + FPGA based real-time processing architecture to accomplish this. The FPGA realizes the part common to these methods, while the DSP handles the parts that differ. The choice of method and its parameters can be input from a keyboard or a computer. By this means, the system is powerful yet easy to operate and maintain. In this article, we present the block diagram of the system and the software flow chart of the methods. At the end, we show an infrared image and its histogram before and after processing by the HE method.
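
    The PE variant mentioned above clips the histogram at a plateau value before building the equalization mapping, which tames the large background peak typical of infrared imagery. A minimal sketch for 8-bit images (the plateau value is a free parameter; the function name is ours):

```python
import numpy as np

def plateau_equalize(img, plateau=50):
    """Plateau equalization: histogram equalization with each histogram
    bin clipped at `plateau` before the CDF lookup table is built."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = np.cumsum(np.minimum(hist, plateau)).astype(float)
    lut = np.round(255 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]
```

    With a very large plateau this reduces to plain HE; a small plateau approaches HP-like behavior, which is why one FPGA datapath can serve all three methods with only the clipping stage differing.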

  9. Chirped or time modulated excitation compared to short pulses for photoacoustic imaging in acoustic attenuating media

    NASA Astrophysics Data System (ADS)

    Burgholzer, P.; Motz, C.; Lang, O.; Berer, T.; Huemer, M.

    2018-02-01

    In photoacoustic imaging, optically generated acoustic waves transport information about embedded structures to the sample surface. Usually, short laser pulses are used for the acoustic excitation. Acoustic attenuation increases for higher frequencies, which reduces the bandwidth and limits the spatial resolution. One could think of more efficient waveforms than single short pulses, such as pseudo-noise codes, chirped excitation, or harmonic excitation, which could enable a higher information transfer from the sample's interior to its surface by acoustic waves. We used a linear state-space model to discretize the wave equation, here Stokes' equation, but this method could be used for any other linear wave equation. Linear estimators and a non-linear function inversion were applied to the measured surface data for one-dimensional image reconstruction. The proposed estimation method allows optimizing the temporal modulation of the excitation laser such that the accuracy and spatial resolution of the reconstructed image are maximized. We have restricted ourselves to one-dimensional models, as for higher dimensions the one-dimensional reconstruction, which corresponds to the acoustic wave without attenuation, can be used as input for any ultrasound imaging method, such as the back-projection or time-reversal method.

  10. Coded-aperture X- or gamma-ray telescope with least-squares image reconstruction. III. Data acquisition and analysis enhancements

    NASA Astrophysics Data System (ADS)

    Kohman, T. P.

    1995-05-01

    The design of a cosmic X- or gamma-ray telescope with least-squares image reconstruction and its simulated operation have been described (Rev. Sci. Instrum. 60, 3396 and 3410 (1989)). Use of an auxiliary open aperture ("limiter") ahead of the coded aperture limits the object field to fewer pixels than detector elements, permitting least-squares reconstruction with improved accuracy in the imaged field; it also yields a uniformly sensitive ("flat") central field. The design has been enhanced to provide for mask-antimask operation. This cancels and eliminates uncertainties in the detector background, and the simulated results have virtually the same statistical accuracy (pixel-by-pixel output-input RMSD) as with a single mask alone. The simulations have been made more realistic by incorporating instrumental blurring of sources. A second-stage least-squares procedure has been developed to determine the precise positions and total fluxes of point sources responsible for clusters of above-background pixels in the field resulting from the first-stage reconstruction. Another program converts source positions in the image plane to celestial coordinates and vice versa, the image being a gnomonic projection of a region of the sky.
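The core least-squares step reduces to an overdetermined linear solve once the object field has fewer pixels than detector elements. An illustrative toy (random binary mask and noiseless counts for simplicity, not the telescope's actual response matrix):

```python
import numpy as np

# Toy sketch: with fewer object-field pixels (16) than detector elements (32),
# the linear system P f = d is overdetermined, and least squares recovers the
# fluxes f (exactly here, since the counts are noiseless and P is assumed to
# have full column rank).
rng = np.random.default_rng(1)
P = rng.integers(0, 2, size=(32, 16)).astype(float)  # binary mask response
f_true = rng.poisson(5.0, size=16).astype(float)     # object-field fluxes
d = P @ f_true                                       # detector counts
f_hat, *_ = np.linalg.lstsq(P, d, rcond=None)        # least-squares image
```

With Poisson noise added to `d`, the same solve gives the minimum-residual estimate, which is the sense in which the record reports improved accuracy over plain correlation decoding.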

  11. Automatic Boosted Flood Mapping from Satellite Data

    NASA Technical Reports Server (NTRS)

    Coltin, Brian; McMichael, Scott; Smith, Trey; Fong, Terrence

    2016-01-01

    Numerous algorithms have been proposed to map floods from Moderate Resolution Imaging Spectroradiometer (MODIS) imagery. However, most require human input to succeed, either to specify a threshold value or to manually annotate training data. We introduce a new algorithm based on AdaBoost which effectively maps floods without any human input, allowing for a truly rapid and automatic response. The AdaBoost algorithm combines multiple thresholds to achieve results comparable to state-of-the-art algorithms which do require human input. We evaluate AdaBoost, as well as numerous previously proposed flood mapping algorithms, on multiple MODIS flood images and on hundreds of non-flood MODIS lake images, demonstrating its effectiveness across a wide variety of conditions.
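The idea of boosting several weak threshold tests into one strong flood classifier can be sketched with a hand-rolled AdaBoost over decision stumps. The thresholds and synthetic reflectance values below are assumptions for illustration, not the paper's:

```python
import numpy as np

def adaboost_stumps(x, y, thresholds):
    """x: 1-D feature, y in {-1,+1}; returns list of (threshold, sign, alpha)."""
    w = np.full(len(x), 1.0 / len(x))           # sample weights
    model = []
    for t in thresholds:
        best = None
        for sign in (+1, -1):                   # pick the better polarity
            pred = np.where(x > t, sign, -sign)
            err = np.sum(w[pred != y])
            if best is None or err < best[0]:
                best = (err, sign, pred)
        err, sign, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)   # stump vote weight
        w *= np.exp(-alpha * y * pred)          # re-weight hard samples
        w /= w.sum()
        model.append((t, sign, alpha))
    return model

def predict(model, x):
    score = sum(a * np.where(x > t, s, -s) for t, s, a in model)
    return np.sign(score)

# synthetic pixels: water has low reflectance (< 0.2), land high (> 0.3)
rng = np.random.default_rng(0)
x = np.concatenate([rng.uniform(0.0, 0.2, 50), rng.uniform(0.3, 1.0, 50)])
y = np.concatenate([np.full(50, 1), np.full(50, -1)])   # +1 = flood/water
model = adaboost_stumps(x, y, thresholds=[0.1, 0.25, 0.5])
acc = np.mean(predict(model, x) == y)
```

Boosting automatically gives the most weight (largest alpha) to whichever candidate threshold separates water from land best, which is how human threshold selection is avoided.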

  12. Electronic cleansing for CT colonography using spectral-driven iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Nasirudin, Radin A.; Näppi, Janne J.; Hironaka, Toru; Tachibana, Rie; Yoshida, Hiroyuki

    2017-03-01

    Dual-energy computed tomography is used increasingly in CT colonography (CTC). The combination of computer-aided detection (CADe) and dual-energy CTC (DE-CTC) has high clinical value, because it can detect clinically significant colonic lesions automatically at higher accuracy than does conventional single-energy CTC. While CADe has demonstrated its ability to detect small polyps, its performance is highly dependent on several factors, including the quality of CTC images and electronic cleansing (EC) of the images. The presence of artifacts such as beam hardening and image noise in ultra-low-dose CTC can produce incorrectly cleansed colon images that severely degrade the detection performance of CTC for small polyps. Also, CADe methods are very dependent on the quality of input images and the information about different tissues in the colon. In this work, we developed a novel method to calculate EC images using spectral information from DE-CTC data. First, the ultra-low dose dual-energy projection data obtained from a CT scanner are decomposed into two materials, soft tissue and the orally administered fecal-tagging contrast agent, to detect the location and intensity of the contrast agent. Next, the images are iteratively reconstructed while gradually removing the presence of tagged materials from the images. Our preliminary qualitative results show that the method can cleanse the contrast agent and tagged materials correctly from DE-CTC images without affecting the appearance of surrounding tissue.

  13. Simulated lumped-parameter system reduced-order adaptive control studies

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.; Lawrence, D. A.; Taylor, T.; Malakooti, M. V.

    1981-01-01

    Two methods of interpreting the misbehavior of reduced order adaptive controllers are discussed. The first method is based on system input-output description and the second is based on state variable description. The implementation of the single input, single output, autoregressive, moving average system is considered.

  14. A comparative study on generating simulated Landsat NDVI images using data fusion and regression method-the case of the Korean Peninsula.

    PubMed

    Lee, Mi Hee; Lee, Soo Bong; Eo, Yang Dam; Kim, Sun Woong; Woo, Jung-Hun; Han, Soo Hee

    2017-07-01

    Landsat optical images have sufficient spatial and spectral resolution to analyze vegetation growth characteristics. However, clouds and water vapor often degrade image quality, which limits the availability of usable images for time-series vegetation vitality measurement. To overcome this shortcoming, simulated images are used as an alternative. In this study, the weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM) method, and the multilinear regression analysis method were tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that images were available within one month before and after the target date. The STARFM method gives good results when the input image date is close to the target date, and careful regional and seasonal consideration is required in selecting input images. During the summer season, clouds make it very difficult to obtain images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not so close to the target date. Average R² values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.
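The weighted average method can be sketched as below. The weighting scheme shown (each image weighted inversely to its day offset from the target date) is an assumption for illustration, not necessarily the paper's exact formula:

```python
import numpy as np

# Hypothetical weighting sketch: simulate the NDVI image at a target date from
# the nearest clear acquisitions before and after it, giving the temporally
# closer image the larger weight.
def weighted_average_ndvi(ndvi_before, ndvi_after, days_before, days_after):
    w_b = days_after / (days_before + days_after)   # closer image -> more weight
    w_a = days_before / (days_before + days_after)
    return w_b * ndvi_before + w_a * ndvi_after

before = np.full((4, 4), 0.30)   # NDVI acquired 10 days before the target date
after = np.full((4, 4), 0.60)    # NDVI acquired 30 days after the target date
sim = weighted_average_ndvi(before, after, days_before=10, days_after=30)
```

With offsets of 10 and 30 days, the earlier image gets weight 0.75, consistent with the paper's finding that the method works best when both inputs fall within a month of the target date.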

  15. Real-time edge-enhanced optical correlator

    NASA Technical Reports Server (NTRS)

    Liu, Tsuen-Hsi (Inventor); Cheng, Li-Jen (Inventor)

    1992-01-01

    Edge enhancement of an input image was achieved by four-wave mixing a first write beam with a second write beam in a photorefractive crystal (GaAs) for VanderLugt optical correlation with an edge-enhanced reference image, by optimizing the power ratio of the second write beam to the first write beam (70:1) and the power ratio of the read beam, which carries the reference image, to the first write beam (100:701). Liquid crystal TV panels are employed as spatial light modulators to change the input and reference images in real time.

  16. Energy dispersive CdTe and CdZnTe detectors for spectral clinical CT and NDT applications

    NASA Astrophysics Data System (ADS)

    Barber, W. C.; Wessel, J. C.; Nygard, E.; Iwanczyk, J. S.

    2015-06-01

    We are developing room temperature compound semiconductor detectors for applications in energy-resolved high-flux single x-ray photon-counting spectral computed tomography (CT), including functional imaging with nanoparticle contrast agents for medical applications and non-destructive testing (NDT) for security applications. Energy-resolved photon-counting can provide reduced patient dose through optimal energy weighting for a particular imaging task in CT, functional contrast enhancement through spectroscopic imaging of metal nanoparticles in CT, and compositional analysis through multiple basis function material decomposition in CT and NDT. These applications produce high input count rates from an x-ray generator delivered to the detector. Therefore, in order to achieve energy-resolved single photon counting in these applications, a high output count rate (OCR) for an energy-dispersive detector must be achieved at the required spatial resolution and across the required dynamic range for the application. The required performance in terms of the OCR, spatial resolution, and dynamic range must be obtained with sufficient field of view (FOV) for the application thus requiring the tiling of pixel arrays and scanning techniques. Room temperature cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) compound semiconductors, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel provided the sensors are designed for rapid signal formation across the x-ray energy ranges of the application at the required energy and spatial resolutions, and at a sufficiently high detective quantum efficiency (DQE). We have developed high-flux energy-resolved photon-counting x-ray imaging array sensors using pixellated CdTe and CdZnTe semiconductors optimized for clinical CT and security NDT. 
We have also fabricated high-flux ASICs with a two dimensional (2D) array of inputs for readout from the sensors. The sensors are guard ring free and have a 2D array of pixels and can be tiled in 2D while preserving pixel pitch. The 2D ASICs have four energy bins with a linear energy response across sufficient dynamic range for clinical CT and some NDT applications. The ASICs can also be tiled in 2D and are designed to fit within the active area of the sensors. We have measured several important performance parameters including: the output count rate (OCR) in excess of 20 million counts per second per square mm with a minimum loss of counts due to pulse pile-up, an energy resolution of 7 keV full width at half-maximum (FWHM) across the entire dynamic range, and a noise floor about 20 keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdZnTe sensors incurring very little input capacitance to the ASICs. We present measurements of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, noise floor, as well as the temporal stability and uniformity under the rapidly varying high flux expected in CT and NDT applications.

  17. Energy dispersive CdTe and CdZnTe detectors for spectral clinical CT and NDT applications

    PubMed Central

    Barber, W. C.; Wessel, J. C.; Nygard, E.; Iwanczyk, J. S.

    2014-01-01

    We are developing room temperature compound semiconductor detectors for applications in energy-resolved high-flux single x-ray photon-counting spectral computed tomography (CT), including functional imaging with nanoparticle contrast agents for medical applications and non-destructive testing (NDT) for security applications. Energy-resolved photon-counting can provide reduced patient dose through optimal energy weighting for a particular imaging task in CT, functional contrast enhancement through spectroscopic imaging of metal nanoparticles in CT, and compositional analysis through multiple basis function material decomposition in CT and NDT. These applications produce high input count rates from an x-ray generator delivered to the detector. Therefore, in order to achieve energy-resolved single photon counting in these applications, a high output count rate (OCR) for an energy-dispersive detector must be achieved at the required spatial resolution and across the required dynamic range for the application. The required performance in terms of the OCR, spatial resolution, and dynamic range must be obtained with sufficient field of view (FOV) for the application thus requiring the tiling of pixel arrays and scanning techniques. Room temperature cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) compound semiconductors, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel provided the sensors are designed for rapid signal formation across the x-ray energy ranges of the application at the required energy and spatial resolutions, and at a sufficiently high detective quantum efficiency (DQE). We have developed high-flux energy-resolved photon-counting x-ray imaging array sensors using pixellated CdTe and CdZnTe semiconductors optimized for clinical CT and security NDT. We have also fabricated high-flux ASICs with a two dimensional (2D) array of inputs for readout from the sensors. The sensors are guard ring free and have a 2D array of pixels and can be tiled in 2D while preserving pixel pitch. The 2D ASICs have four energy bins with a linear energy response across sufficient dynamic range for clinical CT and some NDT applications. The ASICs can also be tiled in 2D and are designed to fit within the active area of the sensors. We have measured several important performance parameters including: the output count rate (OCR) in excess of 20 million counts per second per square mm with a minimum loss of counts due to pulse pile-up, an energy resolution of 7 keV full width at half-maximum (FWHM) across the entire dynamic range, and a noise floor of about 20 keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdZnTe sensors incurring very little input capacitance to the ASICs. We present measurements of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, noise floor, as well as the temporal stability and uniformity under the rapidly varying high flux expected in CT and NDT applications. PMID:25937684

  18. Energy dispersive CdTe and CdZnTe detectors for spectral clinical CT and NDT applications.

    PubMed

    Barber, W C; Wessel, J C; Nygard, E; Iwanczyk, J S

    2015-06-01

    We are developing room temperature compound semiconductor detectors for applications in energy-resolved high-flux single x-ray photon-counting spectral computed tomography (CT), including functional imaging with nanoparticle contrast agents for medical applications and non-destructive testing (NDT) for security applications. Energy-resolved photon-counting can provide reduced patient dose through optimal energy weighting for a particular imaging task in CT, functional contrast enhancement through spectroscopic imaging of metal nanoparticles in CT, and compositional analysis through multiple basis function material decomposition in CT and NDT. These applications produce high input count rates from an x-ray generator delivered to the detector. Therefore, in order to achieve energy-resolved single photon counting in these applications, a high output count rate (OCR) for an energy-dispersive detector must be achieved at the required spatial resolution and across the required dynamic range for the application. The required performance in terms of the OCR, spatial resolution, and dynamic range must be obtained with sufficient field of view (FOV) for the application thus requiring the tiling of pixel arrays and scanning techniques. Room temperature cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) compound semiconductors, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel provided the sensors are designed for rapid signal formation across the x-ray energy ranges of the application at the required energy and spatial resolutions, and at a sufficiently high detective quantum efficiency (DQE). We have developed high-flux energy-resolved photon-counting x-ray imaging array sensors using pixellated CdTe and CdZnTe semiconductors optimized for clinical CT and security NDT. We have also fabricated high-flux ASICs with a two dimensional (2D) array of inputs for readout from the sensors. The sensors are guard ring free and have a 2D array of pixels and can be tiled in 2D while preserving pixel pitch. The 2D ASICs have four energy bins with a linear energy response across sufficient dynamic range for clinical CT and some NDT applications. The ASICs can also be tiled in 2D and are designed to fit within the active area of the sensors. We have measured several important performance parameters including: the output count rate (OCR) in excess of 20 million counts per second per square mm with a minimum loss of counts due to pulse pile-up, an energy resolution of 7 keV full width at half-maximum (FWHM) across the entire dynamic range, and a noise floor of about 20 keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdZnTe sensors incurring very little input capacitance to the ASICs. We present measurements of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, noise floor, as well as the temporal stability and uniformity under the rapidly varying high flux expected in CT and NDT applications.

  19. Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

    PubMed Central

    Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter

    2010-01-01

    Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
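The generative model described above can be sketched in a few lines. This is an illustration only, with an assumed gamma prior on the mixers and hard per-input mixer draws standing in for the paper's probabilistic (soft) assignment:

```python
import numpy as np

# Gaussian scale mixture (GSM) sketch: each filter response is a local Gaussian
# variable multiplied by one of several shared mixer variables, with the mixer
# assigned per input.  Multiplying by a random positive scale produces the
# heavy-tailed marginals characteristic of filter responses to natural images.
rng = np.random.default_rng(2)
n_filters, n_samples, n_mixers = 4, 10000, 2
mixers = rng.gamma(shape=2.0, scale=1.0, size=(n_mixers, n_samples))
assign = rng.integers(0, n_mixers, size=(n_filters, n_samples))  # mixer choice
gauss = rng.normal(size=(n_filters, n_samples))                  # local Gaussians
x = gauss * np.take_along_axis(mixers, assign, axis=0)           # GSM responses
```

Because filters sharing a mixer inherit a common scale, their magnitudes are correlated even though the underlying Gaussians are independent; learning the assignment pattern is the paper's extension.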

  20. L1 Adaptive Control Augmentation System with Application to the X-29 Lateral/Directional Dynamics: A Multi-Input Multi-Output Approach

    NASA Technical Reports Server (NTRS)

    Griffin, Brian Joseph; Burken, John J.; Xargay, Enric

    2010-01-01

    This paper presents an L(sub 1) adaptive control augmentation system design for multi-input multi-output nonlinear systems in the presence of unmatched uncertainties which may exhibit significant cross-coupling effects. A piecewise continuous adaptive law is adopted and extended for applicability to multi-input multi-output systems that explicitly compensates for dynamic cross-coupling. In addition, explicit use of high-fidelity actuator models is added to the L(sub 1) architecture to reduce uncertainties in the system. The L(sub 1) multi-input multi-output adaptive control architecture is applied to the X-29 lateral/directional dynamics and results are evaluated against a similar single-input single-output design approach.

  1. Quantum theory of multiple-input-multiple-output Markovian feedback with diffusive measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chia, A.; Wiseman, H. M.

    2011-07-15

    Feedback control engineers have been interested in multiple-input-multiple-output (MIMO) extensions of single-input-single-output (SISO) results of various kinds due to their rich mathematical structure and practical applications. An outstanding problem in quantum feedback control is the extension of the SISO theory of Markovian feedback by Wiseman and Milburn [Phys. Rev. Lett. 70, 548 (1993)] to multiple inputs and multiple outputs. Here we generalize the SISO homodyne-mediated feedback theory to allow for multiple inputs, multiple outputs, and arbitrary diffusive quantum measurements. We thus obtain a MIMO framework which resembles the SISO theory and whose additional mathematical structure is highlighted by the extensive use of vector-operator algebra.

  2. A micromachined silicon parallel acoustic delay line (PADL) array for real-time photoacoustic tomography (PAT)

    NASA Astrophysics Data System (ADS)

    Cho, Young Y.; Chang, Cheng-Chung; Wang, Lihong V.; Zou, Jun

    2015-03-01

    To achieve real-time photoacoustic tomography (PAT), massive transducer arrays and data acquisition (DAQ) electronics are needed to receive the PA signals simultaneously, which results in complex and high-cost ultrasound receiver systems. To address this issue, we have developed a new PA data acquisition approach using acoustic time delay. Optical fibers were used as parallel acoustic delay lines (PADLs) to create different time delays in multiple channels of PA signals. This makes the PA signals reach a single-element transducer at different times. As a result, they can be properly received by single-channel DAQ electronics. However, due to their small diameter and fragility, using optical fiber as acoustic delay lines poses a number of challenges in the design, construction and packaging of the PADLs, thereby limiting their performances and use in real imaging applications. In this paper, we report the development of new silicon PADLs, which are directly made from silicon wafers using advanced micromachining technologies. The silicon PADLs have very low acoustic attenuation and distortion. A linear array of 16 silicon PADLs were assembled into a handheld package with one common input port and one common output port. To demonstrate its real-time PAT capability, the silicon PADL array (with its output port interfaced with a single-element transducer) was used to receive 16 channels of PA signals simultaneously from a tissue-mimicking optical phantom sample. The reconstructed PA image matches well with the imaging target. Therefore, the silicon PADL array can provide a 16× reduction in the ultrasound DAQ channels for real-time PAT.

  3. Four-dimensional diffusion-weighted MR imaging (4D-DWI): a feasibility study.

    PubMed

    Liu, Yilin; Zhong, Xiaodong; Czito, Brian G; Palta, Manisha; Bashir, Mustafa R; Dale, Brian M; Yin, Fang-Fang; Cai, Jing

    2017-02-01

    Diffusion-weighted Magnetic Resonance Imaging (DWI) has been shown to be a powerful tool for cancer detection with high tumor-to-tissue contrast. This study aims to investigate the feasibility of developing a four-dimensional DWI technique (4D-DWI) for imaging respiratory motion for radiation therapy applications. Image acquisition was performed by repeatedly imaging a volume of interest (VOI) using an interleaved multislice single-shot echo-planar imaging (EPI) 2D-DWI sequence in the axial plane. Each 2D-DWI image was acquired with an intermediately low b-value (b = 500 s/mm²) and with diffusion-encoding gradients in x, y, and z diffusion directions. Respiratory motion was simultaneously recorded using a respiratory bellow, and the synchronized respiratory signal was used to retrospectively sort the 2D images to generate 4D-DWI. Cine MRI using steady-state free precession was also acquired as a motion reference. As a preliminary feasibility study, this technique was implemented on a 4D digital human phantom (XCAT) with a simulated pancreas tumor. The respiratory motion of the phantom was controlled by a regular sinusoidal motion profile. 4D-DWI tumor motion trajectories were extracted and compared with the input breathing curve. The mean absolute amplitude differences (D) were calculated in the superior-inferior (SI) direction and the anterior-posterior (AP) direction. The technique was then evaluated on two healthy volunteers. Finally, the effects of 4D-DWI on apparent diffusion coefficient (ADC) measurements were investigated for hypothetical heterogeneous tumors via simulations. Tumor trajectories extracted from XCAT 4D-DWI were consistent with the input signal: the average D value was 1.9 mm (SI) and 0.4 mm (AP). The average D value was 2.6 mm (SI) and 1.7 mm (AP) for the two healthy volunteers. A 4D-DWI technique has been developed and evaluated on digital phantom and human subjects. 4D-DWI can lead to more accurate respiratory motion measurement. 
This has a great potential to improve the visualization and delineation of cancer tumors for radiotherapy. © 2016 American Association of Physicists in Medicine.

  4. Multi-party quantum summation without a trusted third party based on single particles

    NASA Astrophysics Data System (ADS)

    Zhang, Cai; Situ, Haozhen; Huang, Qiong; Yang, Pingle

    We propose multi-party quantum summation protocols based on single particles, in which participants are allowed to compute the summation of their inputs without the help of a trusted third party and preserve the privacy of their inputs. Only one participant who generates the source particles needs to perform unitary operations and only single particles are needed in the beginning of the protocols.

  5. Learning transitive verbs from single-word verbs in the input by young children acquiring English.

    PubMed

    Ninio, Anat

    2016-09-01

    The environmental context of verbs addressed by adults to young children is claimed to be uninformative regarding the verbs' meaning, yielding the Syntactic Bootstrapping Hypothesis that, for verb learning, full sentences are needed to demonstrate the semantic arguments of verbs. However, reanalysis of Gleitman's (1990) original data regarding input to a blind child revealed the context of single-word parental verbs to be more transparent than that of sentences. We tested the hypothesis that English-speaking children learn their early verbs from parents' single-word utterances. Distribution of single-word transitive verbs produced by a large sample of young children was strongly predicted by the relative token frequency of verbs in parental single-word utterances, but multiword sentences had no predictive value. Analysis of the interactive context showed that objects of verbs are retrievable by pragmatic inference, as is the meaning of the verbs. Single-word input appears optimal for learning an initial vocabulary of verbs.

  6. Iterative pixelwise approach applied to computer-generated holograms and diffractive optical elements.

    PubMed

    Hsu, Wei-Feng; Lin, Shih-Chih

    2018-01-01

    This paper presents a novel approach to optimizing the design of phase-only computer-generated holograms (CGH) for the creation of binary images in an optical Fourier transform system. Optimization begins by selecting an image pixel with a temporal change in amplitude. The modulated image function undergoes an inverse Fourier transform followed by the imposition of a CGH constraint and the Fourier transform to yield an image function associated with the change in amplitude of the selected pixel. In iterations where the quality of the image is improved, that image function is adopted as the input for the next iteration. In cases where the image quality is not improved, the image function before the pixel changed is used as the input. Thus, the proposed approach is referred to as the pixelwise hybrid input-output (PHIO) algorithm. The PHIO algorithm was shown to achieve image quality far exceeding that of the Gerchberg-Saxton (GS) algorithm. The benefits were particularly evident when the PHIO algorithm was equipped with a dynamic range of image intensities equivalent to the amplitude freedom of the image signal. The signal variation of images reconstructed from the GS algorithm was 1.0223, but only 0.2537 when using PHIO, i.e., a 75% improvement. Nonetheless, the proposed scheme resulted in a 10% degradation in diffraction efficiency and signal-to-noise ratio.
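For context, the Gerchberg-Saxton (GS) baseline that PHIO is compared against can be sketched for a phase-only Fourier-transform CGH. This is a generic GS implementation, not the paper's PHIO code:

```python
import numpy as np

# Generic Gerchberg-Saxton loop: alternate between the hologram plane, where
# the phase-only CGH constraint is imposed, and the image plane, where the
# target (binary-image) amplitude is imposed, via FFTs.
def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    field = target_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    for _ in range(n_iter):
        cgh = np.exp(1j * np.angle(np.fft.ifft2(field)))       # phase-only CGH
        img_field = np.fft.fft2(cgh)                           # reconstruct image
        field = target_amp * np.exp(1j * np.angle(img_field))  # target amplitude
    return np.angle(cgh), np.abs(np.fft.fft2(cgh))

target = np.zeros((32, 32))
target[12:20, 12:20] = 1.0            # binary target: bright square
phase, recon = gerchberg_saxton(target)
# fraction of reconstructed energy landing inside the target region
eff = np.sum(recon[target > 0] ** 2) / np.sum(recon ** 2)
```

PHIO differs in perturbing one image pixel at a time and keeping the change only when quality improves, whereas GS always imposes both constraints globally.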

  7. Optically triggering spatiotemporally confined GPCR activity in a cell and programming neurite initiation and extension

    PubMed Central

    Karunarathne, W. K. Ajith; Giri, Lopamudra; Kalyanaraman, Vani; Gautam, N.

    2013-01-01

    G-protein–coupled receptor (GPCR) activity gradients evoke important cell behavior but there is a dearth of methods to induce such asymmetric signaling in a cell. Here we achieved reversible, rapidly switchable patterns of spatiotemporally restricted GPCR activity in a single cell. We recruited properties of nonrhodopsin opsins—rapid deactivation, distinct spectral tuning, and resistance to bleaching—to activate native Gi, Gq, or Gs signaling in selected regions of a cell. Optical inputs were designed to spatiotemporally control levels of second messengers, IP3, phosphatidylinositol (3,4,5)-triphosphate, and cAMP in a cell. Spectrally selective imaging was accomplished to simultaneously monitor optically evoked molecular and cellular response dynamics. We show that localized optical activation of an opsin-based trigger can induce neurite initiation, phosphatidylinositol (3,4,5)-triphosphate increase, and actin remodeling. Serial optical inputs to neurite tips can refashion early neuron differentiation. Methods here can be widely applied to program GPCR-mediated cell behaviors. PMID:23479634

  8. Mapping the Daily Progression of Large Wildland Fires Using MODIS Active Fire Data

    NASA Technical Reports Server (NTRS)

    Veraverbeke, Sander; Sedano, Fernando; Hook, Simon J.; Randerson, James T.; Jin, Yufang; Rogers, Brendan

    2013-01-01

    High temporal resolution information on burned area is a prerequisite for incorporating bottom-up estimates of wildland fire emissions in regional air transport models and for improving models of fire behavior. We used the Moderate Resolution Imaging Spectroradiometer (MODIS) active fire product (MO(Y)D14) as input to a kriging interpolation to derive continuous maps of the evolution of nine large wildland fires. For each fire, local input parameters for the kriging model were defined using variogram analysis. The accuracy of the kriging model was assessed using high resolution daily fire perimeter data available from the U.S. Forest Service. We also assessed the temporal reporting accuracy of the MODIS burned area products (MCD45A1 and MCD64A1). Averaged over the nine fires, the kriging method correctly mapped 73% of the pixels within the accuracy of a single day, compared to 33% for MCD45A1 and 53% for MCD64A1.
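The kriging interpolation step can be sketched with ordinary kriging on a toy configuration. A linear variogram is assumed here purely for illustration; the record states that local variogram parameters were fitted per fire:

```python
import numpy as np

# Ordinary-kriging sketch: interpolate a "day of burning" value at an
# unobserved pixel from scattered active-fire detections, solving the
# standard kriging system [Gamma 1; 1' 0][w; mu] = [gamma0; 1].
def ordinary_krige(pts, days, query, gamma=lambda h: h):
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)                 # pairwise semivariances
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(pts - query, axis=-1))
    w = np.linalg.solve(A, b)[:n]        # kriging weights (sum to 1)
    return w @ days

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
days = np.array([1.0, 2.0, 3.0, 4.0])    # detection day at each fire pixel
center = ordinary_krige(pts, days, np.array([0.5, 0.5]))
```

By symmetry the four detections get equal weight here, so the center pixel interpolates to day 2.5; with a fitted variogram, nearer detections dominate.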

  9. Pre-Flight Radiometric Model of Linear Imager on LAPAN-IPB Satellite

    NASA Astrophysics Data System (ADS)

    Hadi Syafrudin, A.; Salaswati, Sartika; Hasbi, Wahyudi

    2018-05-01

    The LAPAN-IPB satellite is a microsatellite-class spacecraft with a remote sensing experiment mission. The satellite carries a multispectral line imager to capture radiometric reflectance values of the Earth from space. The radiometric quality of the imagery is an important factor for object classification in remote sensing. Before launch (pre-flight), the line imager was tested with a monochromator and an integrating sphere to obtain its spectral response and the radiometric response of every pixel. Pre-flight test data, acquired over a variety of line imager settings, were used to examine the correlation between the radiance input and the digital numbers of the output images. This input-output correlation is described by a radiance conversion model that incorporates the imager settings and radiometric characteristics. The modeling process, from the hardware level up to the normalized radiance formula, is presented and discussed in this paper.

  10. Peak-Seeking Control Using Gradient and Hessian Estimates

    NASA Technical Reports Server (NTRS)

    Ryan, John J.; Speyer, Jason L.

    2010-01-01

    A peak-seeking control method is presented which utilizes a linear time-varying Kalman filter. Performance function coordinate and magnitude measurements are used by the Kalman filter to estimate the gradient and Hessian of the performance function. The gradient and Hessian are used to command the system toward a local extremum. The method is naturally applied to multiple-input multiple-output systems. Applications of this technique to a single-input single-output example and a two-input one-output example are presented.
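The gradient/Hessian-driven command step can be sketched as follows. Here finite differences stand in for the paper's time-varying Kalman filter estimates, and the two-input one-output performance map is a hypothetical quadratic:

```python
import numpy as np

def perf(u):
    """Hypothetical performance map with its peak at u = (1, -2)."""
    return -(u[0] - 1.0) ** 2 - 2.0 * (u[1] + 2.0) ** 2

def fd_grad_hess(f, u, h=1e-4):
    """Central-difference gradient and Hessian (stand-in for filter estimates)."""
    n = len(u)
    g, H = np.zeros(n), np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        g[i] = (f(u + e) - f(u - e)) / (2 * h)
        for j in range(n):
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(u + e + ej) - f(u + e - ej)
                       - f(u - e + ej) + f(u - e - ej)) / (4 * h * h)
    return g, H

u = np.array([0.0, 0.0])
for _ in range(5):                      # Newton steps toward the local extremum
    g, H = fd_grad_hess(perf, u)
    u = u - np.linalg.solve(H, g)
```

On a quadratic map a single Newton step lands on the extremum; the Kalman-filter version earns its keep when the measurements are noisy and the map drifts.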

  11. Modal control of an oblique wing aircraft

    NASA Technical Reports Server (NTRS)

    Phillips, James D.

    1989-01-01

    A linear modal control algorithm is applied to the NASA Oblique Wing Research Aircraft (OWRA). The control law is evaluated using a detailed nonlinear flight simulation. It is shown that the modal control law attenuates the coupling and nonlinear aerodynamics of the oblique wing and remains stable during control saturation caused by large command inputs or large external disturbances. The technique controls each natural mode independently allowing single-input/single-output techniques to be applied to multiple-input/multiple-output systems.

  12. Image Display and Manipulation System (IDAMS) program documentation, Appendixes A-D. [including routines, convolution filtering, image expansion, and fast Fourier transformation

    NASA Technical Reports Server (NTRS)

    Cecil, R. W.; White, R. A.; Szczur, M. R.

    1972-01-01

    The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.
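
    The control-card flow described above can be sketched as a tiny executive that dispatches task routines in turn; the card names and routines here are hypothetical, not the actual IDAMS card format.

```python
# Sketch of the task-card control flow: an "executive" reads a deck, and
# each TASK card selects a processing routine whose PARAM cards follow it.
# Card names and routines are hypothetical stand-ins.

def smooth(image, params):
    return "smoothed(%s)" % image

def expand(image, params):
    return "expanded(%s) x%s" % (image, params.get("FACTOR", "1"))

TASKS = {"SMOOTH": smooth, "EXPAND": expand}

def run_deck(deck, image):
    """Execute each TASK card in turn; PARAM cards feed the pending task."""
    i = 0
    while i < len(deck):
        kind, payload = deck[i]
        if kind != "TASK":
            raise ValueError("expected TASK card, got %r" % kind)
        params = {}
        i += 1
        while i < len(deck) and deck[i][0] == "PARAM":
            key, value = deck[i][1].split("=")
            params[key] = value
            i += 1
        image = TASKS[payload](image, params)   # executive handles each task
    return image

deck = [("TASK", "SMOOTH"), ("TASK", "EXPAND"), ("PARAM", "FACTOR=2")]
print(run_deck(deck, "input.tape"))
```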

  13. Detection and segmentation of multiple touching product inspection items

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Talukder, Ashit; Cox, Westley; Chang, Hsuan-Ting; Weber, David

    1996-12-01

    X-ray images of pistachio nuts on conveyor trays for product inspection are considered. The first step in such a processor is to locate each individual item and place it in a separate file for input to a classifier that determines the quality of each nut. This paper considers new techniques to: detect each item (each nut can be in any orientation, so we employ new rotation-invariant filters to locate each item independent of its orientation); produce separate image files for each item (a new blob coloring algorithm provides this for isolated, non-touching, input items); segment touching or overlapping input items into separate image files (we use a morphological watershed transform to achieve this); and remove the shell by morphological processing to produce an image of only the nutmeat. Each of these operations and algorithms is detailed, and quantitative data for each are presented for the x-ray image nut inspection problem noted. These techniques are of general use in many different product inspection problems in agriculture and other areas.
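
    The blob-coloring step for isolated items can be sketched as 4-connected component labeling on a binary image (toy data; touching items would need the watershed step described above).

```python
# Minimal stand-in for blob coloring: label the 4-connected components of
# a binary image so each isolated item can be written to its own file.
# The grid is toy data, not an actual x-ray segmentation.

def label_blobs(grid):
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not labels[r][c]:
                current += 1
                stack = [(r, c)]          # iterative flood fill
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and grid[y][x] and not labels[y][x]:
                        labels[y][x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

image = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
]
labels, count = label_blobs(image)
print(count)   # three isolated items -> three labels
```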

  14. Cryptosystem for Securing Image Encryption Using Structured Phase Masks in Fresnel Wavelet Transform Domain

    NASA Astrophysics Data System (ADS)

    Singh, Hukum

    2016-12-01

    A cryptosystem for securing image encryption is considered by using double random phase encoding in the Fresnel wavelet transform (FWT) domain. Random phase masks (RPMs) and structured phase masks (SPMs) based on a devil's vortex toroidal lens (DVTL) are used in the spatial as well as the Fourier planes. The images to be encrypted are first Fresnel transformed, and then a single-level discrete wavelet transform (DWT) is applied to decompose them into LL, HL, LH and HH sub-bands. The resulting matrices from the DWT are multiplied by additional RPMs, and the results are subjected to an inverse DWT to obtain the encrypted images. The scheme is more secure because of the many parameters used in the construction of the SPM. The original images are recovered by using the correct parameters of the FWT and SPM. An SPM based on a DVTL increases security by enlarging the key space for encryption and decryption. The proposed encryption scheme is a lens-less optical system, and its digital implementation has been performed using MATLAB 7.6.0 (R2008a). The computed mean squared error between the retrieved and input images shows the efficacy of the scheme. The sensitivity to encryption parameters and the robustness against occlusion, entropy and multiplicative Gaussian noise attacks have been analysed.
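
    A minimal sketch of the single-level 2-D DWT step, using the Haar wavelet as an assumed basis (the abstract does not specify which wavelet is used):

```python
# Single-level 2-D Haar DWT: one level of decomposition splits the image
# into LL, HL, LH and HH sub-bands; in the cryptosystem each band is then
# masked with an RPM and inverse-transformed. Haar is an assumed choice.

def haar2d(img):
    """Return (LL, HL, LH, HH) for an even-sized 2-D list."""
    rows = len(img) // 2
    cols = len(img[0]) // 2
    LL = [[0.0] * cols for _ in range(rows)]
    HL = [[0.0] * cols for _ in range(rows)]
    LH = [[0.0] * cols for _ in range(rows)]
    HH = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            a = img[2 * r][2 * c]
            b = img[2 * r][2 * c + 1]
            d = img[2 * r + 1][2 * c]
            e = img[2 * r + 1][2 * c + 1]
            LL[r][c] = (a + b + d + e) / 2.0   # coarse approximation
            HL[r][c] = (a - b + d - e) / 2.0   # horizontal detail
            LH[r][c] = (a + b - d - e) / 2.0   # vertical detail
            HH[r][c] = (a - b - d + e) / 2.0   # diagonal detail
    return LL, HL, LH, HH

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
LL, HL, LH, HH = haar2d(img)
print(LL)   # 2x2 coarse approximation of the 4x4 input
```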

  15. TDC-based readout electronics for real-time acquisition of high resolution PET bio-images

    NASA Astrophysics Data System (ADS)

    Marino, N.; Saponara, S.; Ambrosi, G.; Baronti, F.; Bisogni, M. G.; Cerello, P.,; Ciciriello, F.; Corsi, F.; Fanucci, L.; Ionica, M.; Licciulli, F.; Marzocca, C.; Morrocchi, M.; Pennazio, F.; Roncella, R.; Santoni, C.; Wheadon, R.; Del Guerra, A.

    2013-02-01

    Positron emission tomography (PET) is a clinical and research tool for in vivo metabolic imaging. The demand for better image quality entails continuous research to improve PET instrumentation. In clinical applications, PET image quality benefits from the time-of-flight (TOF) feature. Indeed, by measuring the photons' arrival times at the detectors with a resolution better than 100 ps, the annihilation point can be estimated with centimeter resolution. This leads to a better noise level, contrast and clarity of detail in the images, using either analytical or iterative reconstruction algorithms. This work discusses a silicon photomultiplier (SiPM)-based, magnetic-field-compatible TOF-PET module with depth of interaction (DOI) correction. The detector features a 3D architecture with two tiles of SiPMs coupled to a single LYSO scintillator on both its faces. The real-time front-end electronics is based on a current-mode ASIC in which a low-input-impedance, fast current buffer achieves the required time resolution. A pipelined time-to-digital converter (TDC) measures and digitizes the arrival time and the energy of the events with time bins of 100 ps and 400 ps, respectively. An FPGA clusters the data and evaluates the DOI, with a simulated z resolution of the PET image of 1.4 mm FWHM.

  16. How bees distinguish patterns by green and blue modulation

    PubMed Central

    Horridge, Adrian

    2015-01-01

    In the 1920s, Mathilde Hertz found that trained bees discriminated between shapes or patterns of similar size by something related to total length of contrasting contours. This input is now interpreted as modulation in green and blue receptor channels as flying bees scan in the horizontal plane. Modulation is defined as total contrast irrespective of sign multiplied by length of edge displaying that contrast, projected to vertical, therefore, combining structure and contrast in a single input. Contrast is outside the eye; modulation is a phasic response in receptor pathways inside. In recent experiments, bees trained to distinguish color detected, located, and measured three independent inputs and the angles between them. They are the tonic response of the blue receptor pathway and modulation of small-field green or (less preferred) blue receptor pathways. Green and blue channels interacted intimately at a peripheral level. This study explores in more detail how various patterns are discriminated by these cues. The direction of contrast at a boundary was not detected. Instead, bees located and measured total modulation generated by horizontal scanning of contrasts, irrespective of pattern. They also located the positions of isolated vertical edges relative to other landmarks and distinguished the angular widths between vertical edges by green or blue modulation alone. The preferred inputs were the strongest green modulation signal and angular width between outside edges, irrespective of color. In the absence of green modulation, the remaining cue was a measure and location of blue modulation at edges. In the presence of green modulation, blue modulation was inhibited. Black/white patterns were distinguished by the same inputs in blue and green receptor channels. Left–right polarity and mirror images could be discriminated by retinotopic green modulation alone. Colors in areas bounded by strong green contrast were distinguished as more or less blue than the background. The blue content could also be summed over the whole target. There were no achromatic patterns for bees and no evidence that they detected black, white, or gray levels apart from the differences in blue content or modulation at edges. Most of these cues would be sensitive to background color but some were influenced by changes in illumination. The bees usually learned only to avoid the unrewarded target. Exactly the same preferences of the same inputs were used in the detection of single targets as in discrimination between two targets. PMID:28539796

  17. Constructing spherical panoramas of a bladder phantom from endoscopic video using bundle adjustment

    NASA Astrophysics Data System (ADS)

    Soper, Timothy D.; Chandler, John E.; Porter, Michael P.; Seibel, Eric J.

    2011-03-01

    The high recurrence rate of bladder cancer requires patients to undergo frequent surveillance screenings over their lifetime following initial diagnosis and resection. Our laboratory is developing panoramic stitching software that would compile several minutes of cystoscopic video into a single panoramic image, covering the entire bladder, for review by a urologist at a later time or remote location. Global alignment of video frames is achieved by using a bundle adjuster that simultaneously recovers both the 3D structure of the bladder and the scope motion using only the video frames as input. The result of the algorithm is a complete 360° spherical panorama of the outer surface. The details of the software algorithms are presented here along with results from both a virtual cystoscopy as well as from real endoscopic imaging of a bladder phantom. The software successfully stitched several hundred video frames into a single panorama with subpixel accuracy and with no knowledge of the intrinsic camera properties, such as focal length and radial distortion. In the discussion, we outline future work in development of the software as well as factors pertinent to clinical translation of this technology.

  18. Browsing Through Closed Books: Evaluation of Preprocessing Methods for Page Extraction of a 3-D CT Book Volume

    NASA Astrophysics Data System (ADS)

    Stromer, D.; Christlein, V.; Schön, T.; Holub, W.; Maier, A.

    2017-09-01

    It is often the case that a document cannot be opened, page-turned or touched anymore due to damage caused by aging processes, moisture or fire. To counter this, special imaging systems can be used. Our earlier work revealed that a common 3-D X-ray micro-CT scanner is well suited for imaging and reconstructing historical documents written with iron gall ink, an ink containing metallic particles. We acquired a volume of a self-made book, without opening or page-turning, in a single 3-D scan. However, when investigating the reconstructed volume, we faced the problem of properly extracting single pages from the volume automatically, in an acceptable time and without losing information from the writings. Within this work, we evaluate different pre-processing methods with respect to computation time and accuracy, both of which are decisive for a proper extraction of book pages from the reconstructed X-ray volume and the subsequent ink identification. The different methods were tested for an extreme case with low resolution, noisy input data and wavy pages. Finally, we present results of the page extraction after applying the evaluated methods.
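
    As one illustrative pre-processing candidate (the abstract does not name the methods it evaluates), global Otsu thresholding can separate bright ink voxels from darker paper in a bimodal intensity histogram:

```python
# Otsu thresholding sketch on a toy 8-bin histogram: choose the bin that
# maximizes the between-class variance, splitting dark "paper" voxels
# from bright "ink" voxels. Illustrative method choice, not the paper's.

def otsu_threshold(hist):
    """Return the bin index maximizing between-class variance."""
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0.0
    sum0 = 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (total_sum - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy histogram: dark paper voxels in bins 0-2, bright ink in 6-7.
hist = [40, 60, 30, 2, 1, 2, 25, 20]
print(otsu_threshold(hist))   # optimal split between the two modes
```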

  19. 3D Convolutional Neural Network for Automatic Detection of Lung Nodules in Chest CT.

    PubMed

    Hamidian, Sardar; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria

    2017-01-01

    Deep convolutional neural networks (CNNs) form the backbone of many state-of-the-art computer vision systems for classification and segmentation of 2D images. The same principles and architectures can be extended to three dimensions to obtain 3D CNNs that are suitable for volumetric data such as CT scans. In this work, we train a 3D CNN for automatic detection of pulmonary nodules in chest CT images using volumes of interest extracted from the LIDC dataset. We then convert the 3D CNN which has a fixed field of view to a 3D fully convolutional network (FCN) which can generate the score map for the entire volume efficiently in a single pass. Compared to the sliding window approach for applying a CNN across the entire input volume, the FCN leads to a nearly 800-fold speed-up, and thereby fast generation of output scores for a single case. This screening FCN is used to generate difficult negative examples that are used to train a new discriminant CNN. The overall system consists of the screening FCN for fast generation of candidate regions of interest, followed by the discrimination CNN.
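
    The sliding-window-versus-FCN equivalence can be illustrated in one dimension: scoring a small dense layer at every window position produces exactly the score map of a single "valid" convolution pass. (The reported near-800-fold saving comes from sharing intermediate feature maps in a deep network; with a single layer the two forms do the same arithmetic.)

```python
# Why converting a fixed-field-of-view net to a fully convolutional one
# works: a dense layer applied to every sliding window equals one
# convolution over the whole input. 1-D toy with made-up sizes/weights.

def dense_on_window(window, weights, bias):
    return sum(w * x for w, x in zip(weights, window)) + bias

def sliding_window_scores(signal, weights, bias):
    k = len(weights)
    return [dense_on_window(signal[i:i + k], weights, bias)
            for i in range(len(signal) - k + 1)]

def conv1d_scores(signal, weights, bias):
    # Same arithmetic expressed as a "valid" convolution over the full input.
    k = len(weights)
    return [sum(weights[j] * signal[i + j] for j in range(k)) + bias
            for i in range(len(signal) - k + 1)]

signal = [0.0, 1.0, 3.0, 2.0, 5.0, 4.0]
weights, bias = [0.5, -1.0, 0.25], 0.1
assert sliding_window_scores(signal, weights, bias) == conv1d_scores(signal, weights, bias)
print(conv1d_scores(signal, weights, bias))
```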

  20. 3D convolutional neural network for automatic detection of lung nodules in chest CT

    NASA Astrophysics Data System (ADS)

    Hamidian, Sardar; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria

    2017-03-01

    Deep convolutional neural networks (CNNs) form the backbone of many state-of-the-art computer vision systems for classification and segmentation of 2D images. The same principles and architectures can be extended to three dimensions to obtain 3D CNNs that are suitable for volumetric data such as CT scans. In this work, we train a 3D CNN for automatic detection of pulmonary nodules in chest CT images using volumes of interest extracted from the LIDC dataset. We then convert the 3D CNN which has a fixed field of view to a 3D fully convolutional network (FCN) which can generate the score map for the entire volume efficiently in a single pass. Compared to the sliding window approach for applying a CNN across the entire input volume, the FCN leads to a nearly 800-fold speed-up, and thereby fast generation of output scores for a single case. This screening FCN is used to generate difficult negative examples that are used to train a new discriminant CNN. The overall system consists of the screening FCN for fast generation of candidate regions of interest, followed by the discrimination CNN.

  1. Frequency conversion of structured light.

    PubMed

    Steinlechner, Fabian; Hermosa, Nathaniel; Pruneri, Valerio; Torres, Juan P

    2016-02-15

    Coherent frequency conversion of structured light, i.e. the ability to manipulate the carrier frequency of a wave front without distorting its spatial phase and intensity profile, provides the opportunity for numerous novel applications in photonic technology and fundamental science. In particular, frequency conversion of spatial modes carrying orbital angular momentum can be exploited in sub-wavelength resolution nano-optics and coherent imaging at a wavelength different from that used to illuminate an object. Moreover, coherent frequency conversion will be crucial for interfacing information stored in the high-dimensional spatial structure of single and entangled photons with various constituents of quantum networks. In this work, we demonstrate frequency conversion of structured light from the near infrared (803 nm) to the visible (527 nm). The conversion scheme is based on sum-frequency generation in a periodically poled lithium niobate crystal pumped with a 1540-nm Gaussian beam. We observe frequency-converted fields that exhibit a high degree of similarity with the input field and verify the coherence of the frequency-conversion process via mode projection measurements with a phase mask and a single-mode fiber. Our results demonstrate the suitability of exploiting the technique for applications in quantum information processing and coherent imaging.

  2. Liouville master equation for multi-electron dynamics during ion-surface interactions

    NASA Astrophysics Data System (ADS)

    Wirtz, L.; Reinhold, C. O.; Lemell, C.; Burgdorfer, J.

    2003-05-01

    We present a simulation of the neutralization of highly charged ions in front of a LiF(100) surface including the close-collision regime above the surface. Our approach employs a Monte-Carlo solution of the Liouville master equation for the joint probability density of the ionic motion and the electronic population of the projectile and the target surface. It includes single as well as double particle-hole (de)excitation processes and incorporates electron correlation effects through the conditional dynamics of population strings. The input in terms of elementary one- and two-electron transfer rates is determined from CTMC calculations as well as quantum mechanical Auger calculations. For slow projectiles and normal incidence, the ionic motion depends sensitively on the interplay between image acceleration towards the surface and repulsion by an ensemble of positive hole charges in the surface ("trampoline effect"). For Ne10+ ions we find that image acceleration dominates and no collective backscattering high above the surface takes place. For grazing incidence, our simulation delineates the pathways to complete neutralization. In accordance with recent experimental observations, most ions are reflected as neutrals or even as singly charged negative particles, irrespective of the charge state of the incoming ion.

  3. Deep Learning-Based Banknote Fitness Classification Using the Reflection Images by a Visible-Light One-Dimensional Line Image Sensor

    PubMed Central

    Pham, Tuyen Danh; Nguyen, Dat Tien; Kim, Wan; Park, Sung Ho; Park, Kang Ryoung

    2018-01-01

    In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined. In other words, a pre-classification of the type of input banknote is required. To address this problem, we proposed a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on the banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods. PMID:29415447

  4. SPM analysis of parametric (R)-[11C]PK11195 binding images: plasma input versus reference tissue parametric methods.

    PubMed

    Schuitemaker, Alie; van Berckel, Bart N M; Kropholler, Marc A; Veltman, Dick J; Scheltens, Philip; Jonker, Cees; Lammertsma, Adriaan A; Boellaard, Ronald

    2007-05-01

    (R)-[11C]PK11195 has been used for quantifying cerebral microglial activation in vivo. In previous studies, both plasma input and reference tissue methods have been used, usually in combination with a region of interest (ROI) approach. Definition of ROIs, however, can be laborious and prone to interobserver variation. In addition, results are only obtained for predefined areas and (unexpected) signals in undefined areas may be missed. On the other hand, standard pharmacokinetic models are too sensitive to noise to calculate (R)-[11C]PK11195 binding on a voxel-by-voxel basis. Linearised versions of both plasma input and reference tissue models have been described, and these are more suitable for parametric imaging. The purpose of this study was to compare the performance of these plasma input and reference tissue parametric methods on the outcome of statistical parametric mapping (SPM) analysis of (R)-[11C]PK11195 binding. Dynamic (R)-[11C]PK11195 PET scans with arterial blood sampling were performed in 7 younger and 11 elderly healthy subjects. Parametric images of volume of distribution (Vd) and binding potential (BP) were generated using linearised versions of plasma input (Logan) and reference tissue (Reference Parametric Mapping) models. Images were compared at the group level using SPM with a two-sample t-test per voxel, both with and without proportional scaling. Parametric BP images without scaling provided the most sensitive framework for determining differences in (R)-[11C]PK11195 binding between younger and elderly subjects. Vd images could only demonstrate differences in (R)-[11C]PK11195 binding when analysed with proportional scaling due to intersubject variation in K1/k2 (blood-brain barrier transport and non-specific binding).

  5. Phase and amplitude beam shaping with two deformable mirrors implementing input plane and Fourier plane phase modifications.

    PubMed

    Wu, Chensheng; Ko, Jonathan; Rzasa, John R; Paulson, Daniel A; Davis, Christopher C

    2018-03-20

    We find that ideas in optical image encryption can be very useful for adaptive optics in achieving simultaneous phase and amplitude shaping of a laser beam. An adaptive optics system with simultaneous phase and amplitude shaping ability is very desirable for atmospheric turbulence compensation. Atmospheric turbulence-induced beam distortions can jeopardize the effectiveness of optical power delivery for directed-energy systems and optical information delivery for free-space optical communication systems. In this paper, a prototype adaptive optics system is proposed based on a famous image encryption structure. The major change is to replace the two random phase plates at the input plane and Fourier plane of the encryption system, respectively, with two deformable mirrors that perform on-demand phase modulations. A Gaussian beam is used as an input to replace the conventional image input. We show through theory, simulation, and experiments that the slightly modified image encryption system can be used to achieve arbitrary phase and amplitude beam shaping within the limits of stroke range and influence function of the deformable mirrors. In application, the proposed technique can be used to perform mode conversion between optical beams, generate structured light signals for imaging and scanning, and compensate atmospheric turbulence-induced phase and amplitude beam distortions.

  6. A New Sparse Representation Framework for Reconstruction of an Isotropic High Spatial Resolution MR Volume From Orthogonal Anisotropic Resolution Scans.

    PubMed

    Jia, Yuanyuan; Gholipour, Ali; He, Zhongshi; Warfield, Simon K

    2017-05-01

    In magnetic resonance (MR), hardware limitations, scan time constraints, and patient movement often result in the acquisition of anisotropic 3-D MR images with limited spatial resolution in the out-of-plane views. Our goal is to construct an isotropic high-resolution (HR) 3-D MR image through upsampling and fusion of orthogonal anisotropic input scans. We propose a multiframe super-resolution (SR) reconstruction technique based on sparse representation of MR images. Our proposed algorithm exploits the correspondence between the HR slices and the low-resolution (LR) sections of the orthogonal input scans as well as the self-similarity of each input scan to train pairs of overcomplete dictionaries that are used in a sparse-land local model to upsample the input scans. The upsampled images are then combined using wavelet fusion and error backprojection to reconstruct an image. Features are learned from the data and no extra training set is needed. Qualitative and quantitative analyses were conducted to evaluate the proposed algorithm using simulated and clinical MR scans. Experimental results show that the proposed algorithm achieves promising results in terms of peak signal-to-noise ratio, structural similarity image index, intensity profiles, and visualization of small structures obscured in the LR imaging process due to partial volume effects. Our novel SR algorithm outperforms the nonlocal means (NLM) method using self-similarity, NLM method using self-similarity and image prior, self-training dictionary learning-based SR method, averaging of upsampled scans, and the wavelet fusion method. Our SR algorithm can reduce through-plane partial volume artifact by combining multiple orthogonal MR scans, and thus can potentially improve medical image analysis, research, and clinical diagnosis.

  7. CMOS single-stage input-powered bridge rectifier with boost switch and duty cycle control

    NASA Astrophysics Data System (ADS)

    Radzuan, Roskhatijah; Mohd Salleh, Mohd Khairul; Hamzah, Mustafar Kamal; Ab Wahab, Norfishah

    2017-06-01

    This paper presents a single-stage input-powered bridge rectifier with boost switch for wireless-powered devices such as biomedical implants and wireless sensor nodes. Realised using CMOS process technology, it employs a duty cycle switch control to achieve high output voltage using boost technique, leading to a high output power conversion. It has only six external connections with the boost inductance. The input frequency of the bridge rectifier is set at 50 Hz, while the switching frequency is 100 kHz. The proposed circuit is fabricated on a single 0.18-micron CMOS die with a space area of 0.024 mm2. The simulated and measured results show good agreement.

  8. Calculation of the final energy demand for the Federal Republic of Germany with the simulation model MEDEE-2

    NASA Astrophysics Data System (ADS)

    Loeffler, U.; Weible, H.

    1981-08-01

    The final energy demand for the Federal Republic of Germany was calculated. The MEDEE-2 model describes, for a given region, the final energy consumption of the domestic, service-industry and transportation sectors as a function of a given distribution of production across individual industrial sectors, of energy-specific values, and of population development. The input data, consisting of constants and variables, and the procedure by which the projections of the input data for the individual sectors are derived, are discussed. The results of the calculations are presented and compared. The sensitivity of individual results to variations of the input values is analyzed.

  9. Analysis of severe storm data

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.

    1983-01-01

    The Mesoscale Analysis and Space Sensor (MASS) Data Management and Analysis System, developed by Atsuko Computing International (ACI) on the MASS HP-1000 computer system within the Systems Dynamics Laboratory of the Marshall Space Flight Center, is described. The MASS Data Management and Analysis System was successfully implemented and is utilized daily by atmospheric scientists to graphically display and analyze large volumes of conventional and satellite-derived meteorological data. The scientists can interactively process various atmospheric data (sounding, single-level, grid, and image) through the MASS (AVE80) software, which shares common data and user inputs, thereby reducing overhead, optimizing execution time, and enhancing the flexibility, usability, and understandability of the total system/software capabilities. In addition, ACI installed eight APPLE III graphics/imaging computer terminals in individual scientists' offices and integrated them into the MASS HP-1000 computer system, providing a significant enhancement to the overall research environment.

  10. Dual regression physiological modeling of resting-state EPI power spectra: Effects of healthy aging.

    PubMed

    Viessmann, Olivia; Möller, Harald E; Jezzard, Peter

    2018-02-02

    Aging and disease-related changes in the arteriovasculature have been linked to elevated levels of cardiac cycle-induced pulsatility in the cerebral microcirculation. Functional magnetic resonance imaging (fMRI), acquired fast enough to unalias the cardiac frequency contributions, can be used to study these physiological signals in the brain. Here, we propose an iterative dual regression analysis in the frequency domain to model single voxel power spectra of echo planar imaging (EPI) data using external recordings of the cardiac and respiratory cycles as input. We further show that a data-driven variant, without external physiological traces, produces comparable results. We use this framework to map and quantify cardiac and respiratory contributions in healthy aging. We found a significant increase in the spatial extent of cardiac modulated white matter voxels with age, whereas the overall strength of cardiac-related EPI power did not show an age effect. Copyright © 2018. Published by Elsevier Inc.

  11. Active Sensor for Microwave Tissue Imaging with Bias-Switched Arrays.

    PubMed

    Foroutan, Farzad; Nikolova, Natalia K

    2018-05-06

    A prototype of a bias-switched active sensor was developed and measured to establish the achievable dynamic range in a new generation of active arrays for microwave tissue imaging. The sensor integrates a printed slot antenna, a low-noise amplifier (LNA) and an active mixer in a single unit, which is sufficiently small to enable inter-sensor separation distance as small as 12 mm. The sensor’s input covers the bandwidth from 3 GHz to 7.5 GHz. Its output intermediate frequency (IF) is 30 MHz. The sensor is controlled by a simple bias-switching circuit, which switches ON and OFF the bias of the LNA and the mixer simultaneously. It was demonstrated experimentally that the dynamic range of the sensor, as determined by its ON and OFF states, is 109 dB and 118 dB at resolution bandwidths of 1 kHz and 100 Hz, respectively.

  12. Three-Dimensional ISAR Imaging Method for High-Speed Targets in Short-Range Using Impulse Radar Based on SIMO Array

    PubMed Central

    Zhou, Xinpeng; Wei, Guohua; Wu, Siliang; Wang, Dawei

    2016-01-01

    This paper proposes a three-dimensional inverse synthetic aperture radar (ISAR) imaging method for high-speed targets in short-range using an impulse radar. According to the requirements for high-speed target measurement in short-range, this paper establishes the single-input multiple-output (SIMO) antenna array, and further proposes a missile motion parameter estimation method based on impulse radar. By analyzing the motion geometry relationship of the warhead scattering center after translational compensation, this paper derives the receiving antenna position and the time delay after translational compensation, and thus overcomes the shortcomings of conventional translational compensation methods. By analyzing the motion characteristics of the missile, this paper estimates the missile’s rotation angle and the rotation matrix by establishing a new coordinate system. Simulation results validate the performance of the proposed algorithm. PMID:26978372

  13. Characterization of Fluorescent Polystyrene Microspheres for Advanced Flow Diagnostics

    NASA Technical Reports Server (NTRS)

    Maisto, Pietro M. F.; Lowe, K. Todd; Byun, Guibo; Simpson, Roger; Vercamp, Max; Danley, Jason E.; Koh, Brian; Tiemsin, Pacita; Danehy, Paul M.; Wohl, Christopher J.

    2013-01-01

    Fluorescent dye-doped polystyrene latex microspheres (PSLs) are being developed for velocimetry and scalar measurements in variable property flows. Two organic dyes, Rhodamine B (RhB) and dichlorofluorescein (DCF), are examined to assess laser-induced fluorescence (LIF) properties for flow imaging applications and single-shot temperature measurements. A major interest in the current research is the application of safe dyes; thus DCF is of particular interest, while RhB is used as a benchmark. Success is demonstrated for single-point laser Doppler velocimetry (LDV) and also imaging fluorescence, excited via a continuous-wave 2 W laser beam, for exposures down to 10 ms. In contrast, when exciting with a pulsed Nd:YAG laser at 200 mJ/pulse, no fluorescence was detected, even when integrating tens of pulses. We show that this is due to saturation of the LIF signal at relatively low excitation intensities, 4-5 orders of magnitude lower than the pulsed laser intensity. A two-band LIF technique is applied in a heated jet, indicating that the technique effectively removes interfering inputs such as particle diameter variation. Temperature measurement uncertainties are estimated based upon the variance measured for the two-band LIF intensity ratio and the achievable dye temperature sensitivity, indicating that particles developed to date may provide about +/-12.5 C precision, while future improvements in dye temperature sensitivity and signal quality may enable single-shot temperature measurements with sub-degree precision.

  14. Ice flood velocity calculating approach based on single view metrology

    NASA Astrophysics Data System (ADS)

    Wu, X.; Xu, L.

    2017-02-01

    The Yellow River is the river in which ice floods occur most frequently in China; ice flood forecasting therefore has great significance for flood prevention work. In the various ice flood forecast models, flow velocity is one of the most important parameters. Despite its significance, its acquisition heavily relies on manual observation or derivation from empirical formulae. In recent years, with the rapid development of video surveillance technology and wireless transmission networks, the Yellow River Conservancy Commission has set up an ice situation monitoring system in which live video is transmitted to the monitoring center through 3G mobile networks. In this paper, an approach to obtaining ice velocity based on single view metrology and motion tracking, using monitoring videos as input data, is proposed. First, the river surface is approximated as a plane; on this assumption, we analyze the geometric relationship between the object side and the image side and present the principle for measuring object-side lengths from the image. Second, we use pyramidal Lucas-Kanade (LK) optical flow to track the moving ice. Combining the results of camera calibration and single view metrology, we propose a workflow to calculate the real velocity of the ice flood. Finally, we implement a prototype system and use it to test the reliability and rationality of the whole solution.
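    The velocity-calculation step can be sketched as below; the function name, the fixed metres-per-pixel plane scale, and the synthetic track are illustrative assumptions (the paper derives the scale from camera calibration and single view metrology rather than using a constant):

    ```python
    import numpy as np

    def ice_velocity(track_px, metres_per_px, fps):
        """Convert a tracked pixel trajectory on the (assumed planar) river
        surface into a speed in m/s.

        track_px      : (N, 2) array of pixel positions from frame-to-frame
                        tracking (e.g. pyramidal Lucas-Kanade optical flow)
        metres_per_px : plane scale factor from calibration / single view metrology
        fps           : video frame rate
        """
        track_px = np.asarray(track_px, dtype=float)
        steps = np.diff(track_px, axis=0)                # per-frame displacement (px)
        dist_m = np.linalg.norm(steps, axis=1) * metres_per_px
        return float(np.mean(dist_m) * fps)              # mean speed in m/s

    # An ice floe drifting 2 px/frame at 0.05 m/px and 25 fps moves 2.5 m/s.
    track = [(10 + 2 * k, 40.0) for k in range(8)]
    v = ice_velocity(track, metres_per_px=0.05, fps=25)
    ```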

  15. Investigating the Role of Global Histogram Equalization Technique for 99mTechnetium-Methylene diphosphonate Bone Scan Image Enhancement.

    PubMed

    Pandey, Anil Kumar; Sharma, Param Dev; Dheer, Pankaj; Parida, Girish Kumar; Goyal, Harish; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-01-01

    99mTechnetium-methylene diphosphonate (99mTc-MDP) bone scan images have a limited number of counts per pixel and hence inferior image quality compared to X-rays. Theoretically, the global histogram equalization (GHE) technique can improve the contrast of a given image, though the practical benefits of doing so have had only limited acceptance. In this study, we investigated the effect of the GHE technique on 99mTc-MDP bone scan images. A set of 89 low-contrast 99mTc-MDP whole-body bone scan images was included in this study. These images were acquired with parallel-hole collimation on a Symbia E gamma camera and then processed with the histogram equalization technique. The image quality of the input and processed images was reviewed by two nuclear medicine physicians on a 5-point scale, where a score of 1 denotes very poor and 5 the best image quality. A statistical test was applied to find the significance of the difference between the mean scores assigned to input and processed images. The technique improves the contrast of the images; however, oversaturation was noticed in the processed images. Student's t-test was applied, and a statistically significant difference between input and processed image quality was found at P < 0.001 (with α = 0.05). However, further improvement in image quality is needed as per the requirements of the nuclear medicine physicians. GHE techniques can be used on low-contrast bone scan images; in some cases, histogram equalization in combination with another postprocessing technique is useful.
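    As a minimal NumPy sketch of the GHE transfer function discussed above (the synthetic low-contrast image and function name are assumptions, not the authors' implementation):

    ```python
    import numpy as np

    def global_histogram_equalization(img):
        """Remap grey levels of an 8-bit image so the cumulative histogram
        becomes approximately linear (the classic GHE transfer function)."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum()
        cdf_min = cdf[cdf > 0][0]                      # first non-empty bin
        lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
        lut = np.clip(lut, 0, 255).astype(np.uint8)
        return lut[img]                                # apply the look-up table

    # Low-contrast synthetic "bone scan": counts squeezed into [90, 120].
    rng = np.random.default_rng(0)
    low = rng.integers(90, 121, size=(64, 64)).astype(np.uint8)
    eq = global_histogram_equalization(low)
    ```

    After equalization the grey levels span the full display range, which is exactly the oversaturation risk the study notes.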

  16. Separation of input function for rapid measurement of quantitative CMRO2 and CBF in a single PET scan with a dual tracer administration method

    NASA Astrophysics Data System (ADS)

    Kudomi, Nobuyuki; Watabe, Hiroshi; Hayashi, Takuya; Iida, Hidehiro

    2007-04-01

    Cerebral metabolic rate of oxygen (CMRO2), oxygen extraction fraction (OEF) and cerebral blood flow (CBF) images can be quantified using positron emission tomography (PET) by administering 15O-labelled water (H2 15O) and oxygen (15O2). Conventionally, these images are measured with separate scans for three tracers, C15O for CBV, H2 15O for CBF and 15O2 for CMRO2, with additional waiting times between the scans to minimize the influence of residual radioactivity from the previous tracers, which results in a relatively long study period. We have proposed a dual tracer autoradiographic (DARG) approach (Kudomi et al 2005), which enables CBF, OEF and CMRO2 to be measured rapidly by sequentially administering H2 15O and 15O2 within a short time. Because quantitative CBF and CMRO2 values are sensitive to the arterial input function, it is necessary to obtain an accurate input function, and a drawback of this approach is that the measured arterial blood time-activity curve (TAC) must be separated into pure water and oxygen input functions in the presence of residual radioactivity from the first injected tracer. This separation previously required frequent manual sampling. The present paper describes two calculation methods, a linear and a model-based method, for separating the measured arterial TAC into its water and oxygen components. To validate these methods, we first generated a blood TAC for the DARG approach by combining the water and oxygen input functions obtained in a series of PET studies on normal human subjects. The combined data were then separated into water and oxygen components by the present methods, and CBF and CMRO2 were calculated using the separated input functions and tissue TACs. Errors in the CBF and CMRO2 values obtained by the DARG approach remained within the acceptable range, i.e., within 5%, when the area under the curve of the input function of the second tracer was larger than half that of the first. Bias and deviation in those values were also comparable to those of the conventional method when noise was imposed on the arterial TAC. We conclude that the present calculation-based methods can be used to quantitatively calculate CBF and CMRO2 with the DARG approach.

  17. Graphics with Special Interfaces for Disabled People.

    ERIC Educational Resources Information Center

    Tronconi, A.; And Others

    The paper describes new software and special input devices to allow physically impaired children to utilize the graphic capabilities of personal computers. Special input devices for computer graphics access--the voice recognition card, the single switch, or the mouse emulator--can be used either singly or in combination by the disabled to control…

  18. Bone marrow cavity segmentation using graph-cuts with wavelet-based texture feature.

    PubMed

    Shigeta, Hironori; Mashita, Tomohiro; Kikuta, Junichi; Seno, Shigeto; Takemura, Haruo; Ishii, Masaru; Matsuda, Hideo

    2017-10-01

    Emerging bioimaging technologies enable us to capture various dynamic cellular activities. As large amounts of data are obtained these days and it is becoming unrealistic to manually process massive numbers of images, automatic analysis methods are required. One of the issues for automatic image segmentation is that image-taking conditions are variable; thus many manual inputs are commonly required for each image. In this paper, we propose a bone marrow cavity (BMC) segmentation method for bone images, as the BMC is considered to be related to the mechanisms of bone remodeling, osteoporosis, and so on. To reduce the manual input needed to segment the BMC, we classify the texture pattern using the wavelet transform and a support vector machine. We also integrate the result of texture pattern classification into the graph-cuts-based image segmentation method, because texture analysis does not consider spatial continuity. Our method is applicable to a particular frame in an image sequence in which the condition of the fluorescent material is variable. In the experiment, we evaluated our method with nine types of mother wavelets and several sets of scale parameters. The proposed method, with graph-cuts and texture pattern classification, performs well without manual inputs by a user.
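    A dependency-free sketch of the wavelet texture idea: a single-level Haar decomposition yields subband energies that separate smooth from textured patches. The paper pairs such features with an SVM and graph-cuts; only the feature-extraction step is shown here, and all names and the synthetic patches are illustrative assumptions:

    ```python
    import numpy as np

    def haar2d(img):
        """Single-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
        a = img[0::2, 0::2].astype(float); b = img[0::2, 1::2].astype(float)
        c = img[1::2, 0::2].astype(float); d = img[1::2, 1::2].astype(float)
        LL = (a + b + c + d) / 4
        LH = (a + b - c - d) / 4
        HL = (a - b + c - d) / 4
        HH = (a - b - c + d) / 4
        return LL, LH, HL, HH

    def texture_features(patch):
        """Normalised subband energies: smooth patches load on LL, while
        textured patches push energy into the three detail bands."""
        bands = haar2d(patch)
        e = np.array([np.mean(b ** 2) for b in bands])
        return e / e.sum()

    rng = np.random.default_rng(1)
    smooth = np.full((16, 16), 100.0) + rng.normal(0, 1, (16, 16))
    texture = rng.normal(100, 30, (16, 16))        # high-variance "marrow" texture
    f_smooth, f_texture = texture_features(smooth), texture_features(texture)
    ```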

  19. Identification of optimal mask size parameter for noise filtering in 99mTc-methylene diphosphonate bone scintigraphy images.

    PubMed

    Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-11-01

    99mTc-methylene diphosphonate (99mTc-MDP) bone scintigraphy images have a limited number of counts per pixel. A noise filtering method based on the local statistics of the image produces better results than a linear filter; however, the mask size has a significant effect on image quality. In this study, we identified the optimal mask size that yields a good smooth bone scan image. Forty-four bone scan images were processed using mask sizes of 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected and the mask sizes that produced significant loss of clinical detail in comparison with the input image were excluded. In the second step, the image quality of 40 sets of images (each set comprising the input image and its corresponding three processed images with 3, 5, and 7-pixel masks) was assessed by two nuclear medicine physicians, who selected one good smooth image from each set. Image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to find statistically significant differences between image quality with the 5 and 7-pixel masks at a 5% cut-off; a statistically significant difference was found at P=0.00528. The identified optimal mask size to produce a good smooth image was 7 pixels. The best mask size for the Jong-Sen Lee filter was thus 7×7 pixels, which yielded 99mTc-MDP bone scan images with the highest acceptable smoothness.
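    The local-statistics filter under study can be sketched as follows, assuming the standard Lee formulation out = m + k·(x − m) with k = s²/(s² + σ²_noise); the mask-size parameter corresponds to `size`, and the median-based noise-variance estimate is an assumption:

    ```python
    import numpy as np

    def lee_filter(img, size=7, noise_var=None):
        """Local-statistics (Lee) filter: blend each pixel toward its local
        mean m according to k = s2 / (s2 + noise_var), where m and s2 are the
        mean and variance inside the size x size mask."""
        img = img.astype(float)
        pad = size // 2
        padded = np.pad(img, pad, mode="reflect")
        # local mean and variance via a sliding-window view
        windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
        m = windows.mean(axis=(2, 3))
        s2 = windows.var(axis=(2, 3))
        if noise_var is None:
            noise_var = float(np.median(s2))       # crude global noise estimate
        k = s2 / (s2 + noise_var)
        return m + k * (img - m)

    rng = np.random.default_rng(2)
    flat = np.full((32, 32), 50.0)
    noisy = flat + rng.normal(0, 5, flat.shape)
    smoothed = lee_filter(noisy, size=7)
    ```

    In flat regions k is small and the output follows the local mean (strong smoothing); near edges the local variance, and hence k, is large, so detail is preserved.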

  20. An Improved Pathological Brain Detection System Based on Two-Dimensional PCA and Evolutionary Extreme Learning Machine.

    PubMed

    Nayak, Deepak Ranjan; Dash, Ratnakar; Majhi, Banshidhar

    2017-12-07

    Pathological brain detection has made notable strides in the past years; as a consequence, many pathological brain detection systems (PBDSs) have been proposed. However, the accuracy of these systems still needs significant improvement to meet the demands of real-world diagnostic situations. In this paper, an efficient PBDS based on MR images is proposed that markedly improves upon recent results. The proposed system makes use of contrast-limited adaptive histogram equalization (CLAHE) to enhance the quality of the input MR images. Thereafter, a two-dimensional PCA (2DPCA) strategy is employed to extract features, and subsequently a PCA+LDA approach is used to generate a compact and discriminative feature set. Finally, a new learning algorithm called MDE-ELM is suggested that combines modified differential evolution (MDE) and the extreme learning machine (ELM) to classify MR images as pathological or healthy. The MDE is utilized to optimize the input weights and hidden biases of single-hidden-layer feed-forward neural networks (SLFNs), whereas an analytical method is used to determine the output weights. The proposed algorithm performs optimization based on both the root mean squared error (RMSE) and the norm of the output weights of the SLFNs. The suggested scheme is benchmarked on three standard datasets and the results are compared against other competent schemes. The experimental outcomes show that the proposed scheme offers superior results compared to its counterparts. Further, the proposed MDE-ELM classifier obtains better accuracy with a more compact network architecture than conventional algorithms.
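    The ELM half of MDE-ELM can be sketched in a few lines: random hidden-layer parameters plus an analytical least-squares solve for the output weights. Note that the actual system optimizes the hidden parameters with MDE rather than leaving them random; everything below (names, toy data) is an illustrative assumption:

    ```python
    import numpy as np

    def train_elm(X, y, n_hidden=40, seed=0):
        """Single-hidden-layer ELM: random input weights/biases, analytic
        least-squares solve for the output weights (no backprop)."""
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)                     # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    # Toy 2-class problem (XOR), which a linear model cannot separate.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([0., 1., 1., 0.])
    W, b, beta = train_elm(X, y)
    pred = elm_predict(X, W, b, beta)
    ```

    Replacing the random draw of `W` and `b` with a population-based search over RMSE plus output-weight norm is the role MDE plays in the paper.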

  1. CCCT - NCTN Steering Committees - Clinical Imaging

    Cancer.gov

    The Clinical Imaging Steering Committee serves as a forum for the extramural imaging and oncology communities to provide strategic input to the NCI regarding its significant investment in imaging activities in clinical trials.

  2. Factor analysis for delineation of organ structures, creation of in- and output functions, and standardization of multicenter kinetic modeling

    NASA Astrophysics Data System (ADS)

    Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.

    1999-05-01

    PET imaging can quantify metabolic processes in vivo; this requires measurement of an input function, which is invasive and labor intensive. A non-invasive, semi-automated, image-based method of input function generation would be efficient and patient friendly, and would allow quantitative PET to be applied routinely; a fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as a processing tool for the definition of temporally changing structures in the field of view. FA has been proposed earlier, but its perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained by serial blood sampling. Dynamic image data from myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients between kinetic parameters obtained with factor and plasma input functions were high, and linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data, eliminating the need for blood sampling; output (tissue) functions can be generated simultaneously. The method is simple, requires no sophisticated operator interaction, and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.
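    Factor analysis of a dynamic study amounts to factorizing the pixels × frames matrix into spatial factor images and factor TACs. As a hedged stand-in (the paper's exact FA algorithm is not reproduced here), a plain multiplicative-update non-negative matrix factorization recovers blood-like and tissue-like curves from synthetic data; all names and the synthetic TACs are assumptions:

    ```python
    import numpy as np

    def nmf(V, k, iters=1000, seed=0):
        """Multiplicative-update NMF, V ~ W @ H, with V (pixels x frames).
        Rows of H play the role of factor time-activity curves."""
        rng = np.random.default_rng(seed)
        W = rng.random((V.shape[0], k)) + 1e-3
        H = rng.random((k, V.shape[1])) + 1e-3
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
            W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
        return W, H

    # Synthetic dynamic study: a fast "blood" TAC and a slow "tissue" TAC,
    # mixed with random per-pixel weights.
    t = np.linspace(0, 10, 60)
    blood = t * np.exp(-t)                         # early peak
    tissue = 1 - np.exp(-0.3 * t)                  # slow uptake
    rng = np.random.default_rng(3)
    mix = rng.random((200, 2))
    V = mix @ np.vstack([blood, tissue])
    W, H = nmf(V, k=2)
    err = float(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
    ```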

  3. The Engineer Topographic Laboratories /ETL/ hybrid optical/digital image processor

    NASA Astrophysics Data System (ADS)

    Benton, J. R.; Corbett, F.; Tuft, R.

    1980-01-01

    An optical-digital processor for generalized image enhancement and filtering is described. The optical subsystem is a two-PROM Fourier filter processor. Input imagery is isolated, scaled, and imaged onto the first PROM; this input plane acts like a liquid gate and serves as an incoherent-to-coherent converter. The image is transformed onto a second PROM which also serves as a filter medium; filters are written onto the second PROM with a laser scanner in real time. A solid state CCTV camera records the filtered image, which is then digitized and stored in a digital image processor. The operator can then manipulate the filtered image using the gray scale and color remapping capabilities of the video processor as well as the digital processing capabilities of the minicomputer.

  4. Image segmentation via foreground and background semantic descriptors

    NASA Astrophysics Data System (ADS)

    Yuan, Ding; Qiang, Jingjing; Yin, Jihao

    2017-09-01

    In the field of image processing, it has been a challenging task to obtain a complete foreground that is not uniform in color or texture. Unlike other methods, which segment the image by only using low-level features, we present a segmentation framework, in which high-level visual features, such as semantic information, are used. First, the initial semantic labels were obtained by using the nonparametric method. Then, a subset of the training images, with a similar foreground to the input image, was selected. Consequently, the semantic labels could be further refined according to the subset. Finally, the input image was segmented by integrating the object affinity and refined semantic labels. State-of-the-art performance was achieved in experiments with the challenging MSRC 21 dataset.

  5. Quantification of 18F-fluorocholine kinetics in patients with prostate cancer.

    PubMed

    Verwer, Eline E; Oprea-Lager, Daniela E; van den Eertwegh, Alfons J M; van Moorselaar, Reindert J A; Windhorst, Albert D; Schwarte, Lothar A; Hendrikse, N Harry; Schuit, Robert C; Hoekstra, Otto S; Lammertsma, Adriaan A; Boellaard, Ronald

    2015-03-01

    Choline kinase is upregulated in prostate cancer, resulting in increased (18)F-fluoromethylcholine uptake. This study used pharmacokinetic modeling to validate the use of simplified methods for quantification of (18)F-fluoromethylcholine uptake in a routine clinical setting. Forty-minute dynamic PET/CT scans were acquired after injection of 204 ± 9 MBq of (18)F-fluoromethylcholine from 8 patients with histologically proven metastasized prostate cancer. Plasma input functions were obtained using continuous arterial blood sampling as well as image-derived methods. Manual arterial blood samples were used for calibration and for correction of the plasma-to-blood ratio and metabolites. Time-activity curves were derived from volumes of interest in all visually detectable lymph node metastases. (18)F-fluoromethylcholine kinetics were studied by nonlinear regression fitting of several single- and 2-tissue plasma input models to the time-activity curves. Model selection was based on the Akaike information criterion and measures of robustness. In addition, the performance of several simplified methods, such as the standardized uptake value (SUV), was assessed. Best fits were obtained using an irreversible compartment model with a blood volume parameter. Parent fractions were 0.12 ± 0.4 after 20 min, necessitating individual metabolite corrections. Correspondence between venous and arterial parent fractions was low, as determined by the intraclass correlation coefficient (0.61). Image-derived input functions obtained from volumes of interest in blood-pool structures distant from tissues of high (18)F-fluoromethylcholine uptake correlated well with the blood-sampling input functions (R(2) = 0.83). SUV showed poor correlation to parameters derived from full quantitative kinetic analysis (R(2) < 0.34). In contrast, lesion activity concentration normalized to the integral of the blood activity concentration over time (SUVAUC) showed good correlation (R(2) = 0.92 for metabolite-corrected plasma; 0.65 for whole-blood activity concentrations). SUV cannot be used to quantify (18)F-fluoromethylcholine uptake. A clinical compromise could be SUVAUC derived from 2 consecutive static PET scans, one centered on a large blood-pool structure during 0-30 min after injection to obtain the blood activity concentrations and the other a whole-body scan at 30 min after injection to obtain lymph node activity concentrations. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
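    The two simplified measures compared above can be written out directly; the units, variable names, and synthetic blood TAC below are assumptions for illustration, not the study's data:

    ```python
    import numpy as np

    def suv(c_lesion_kbq_ml, dose_mbq, weight_kg):
        """Standardised uptake value: lesion concentration normalised by
        injected dose per body weight (1 g/mL tissue density assumed)."""
        return c_lesion_kbq_ml / (dose_mbq * 1000.0 / (weight_kg * 1000.0))

    def suv_auc(c_lesion_kbq_ml, t_min, blood_kbq_ml):
        """Lesion concentration normalised to the time integral (AUC) of the
        blood activity concentration, as in the SUVAUC compromise above."""
        # trapezoidal integral of the blood TAC (kBq/mL * min)
        auc = float(np.sum((blood_kbq_ml[1:] + blood_kbq_ml[:-1]) / 2
                           * np.diff(t_min)))
        return c_lesion_kbq_ml / auc

    t = np.linspace(0, 30, 31)
    blood = 50 * np.exp(-0.1 * t) + 5              # hypothetical blood TAC
    s = suv(c_lesion_kbq_ml=8.0, dose_mbq=204.0, weight_kg=80.0)
    s_auc = suv_auc(8.0, t, blood)
    ```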

  6. Oblique reconstructions in tomosynthesis. II. Super-resolution

    PubMed Central

    Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2013-01-01

    Purpose: In tomosynthesis, super-resolution has been demonstrated using reconstruction planes parallel to the detector. Super-resolution allows for subpixel resolution relative to the detector. The purpose of this work is to develop an analytical model that generalizes super-resolution to oblique reconstruction planes. Methods: In a digital tomosynthesis system, a sinusoidal test object is modeled along oblique angles (i.e., “pitches”) relative to the plane of the detector in a 3D divergent-beam acquisition geometry. To investigate the potential for super-resolution, the input frequency is specified to be greater than the alias frequency of the detector. Reconstructions are evaluated in an oblique plane along the extent of the object using simple backprojection (SBP) and filtered backprojection (FBP). By comparing the amplitude of the reconstruction against the attenuation coefficient of the object at various frequencies, the modulation transfer function (MTF) is calculated to determine whether modulation is within detectable limits for super-resolution. For experimental validation of super-resolution, a goniometry stand was used to orient a bar pattern phantom along various pitches relative to the breast support in a commercial digital breast tomosynthesis system. Results: Using theoretical modeling, it is shown that a single projection image cannot resolve a sine input whose frequency exceeds the detector alias frequency. The high frequency input is correctly visualized in SBP or FBP reconstruction using a slice along the pitch of the object. The Fourier transform of this reconstructed slice is maximized at the input frequency as proof that the object is resolved. Consistent with the theoretical results, experimental images of a bar pattern phantom showed super-resolution in oblique reconstructions. At various pitches, the highest frequency with detectable modulation was determined by visual inspection of the bar patterns. 
The dependency of the highest detectable frequency on pitch followed the same trend as the analytical model. It was demonstrated that super-resolution is not achievable if the pitch of the object approaches 90°, corresponding to the case in which the test frequency is perpendicular to the breast support. Only low frequency objects are detectable at pitches close to 90°. Conclusions: This work provides a platform for investigating super-resolution in oblique reconstructions for tomosynthesis. In breast imaging, this study should have applications in visualizing microcalcifications and other subtle signs of cancer. PMID:24320445
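    A one-dimensional toy version of the super-resolution argument (not the paper's divergent-beam model, and all names are assumptions): a frequency above the detector Nyquist limit is indistinguishable from its alias in a single sampling, but pooling sub-pixel-shifted samplings, which is the net effect of backprojecting many projections, removes the ambiguity:

    ```python
    import numpy as np

    # A 0.7 cycles/pixel input exceeds the 0.5 cy/px Nyquist limit of a unit-
    # pitch detector, so a single image aliases it to |0.7 - 1| = 0.3 cy/px.
    f_true = 0.7
    n_pix, n_shifts = 64, 9

    def sample(offset):
        x = np.arange(n_pix) + offset              # detector pixel centres
        return x, np.cos(2 * np.pi * f_true * x)

    def fitted_amplitude(x, y, f):
        """Least-squares amplitude of a cosine/sine pair at frequency f."""
        A = np.column_stack([np.cos(2 * np.pi * f * x),
                             np.sin(2 * np.pi * f * x)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return float(np.hypot(coef[0], coef[1]))

    # One grid: the samples fit the 0.3 cy/px alias perfectly (unresolvable).
    x0, y0 = sample(0.0)
    alias_fit = fitted_amplitude(x0, y0, f_true - 1.0)

    # Nine shifted grids pooled: effective pitch 1/9 px, ambiguity removed.
    xs = np.concatenate([sample(k / n_shifts)[0] for k in range(n_shifts)])
    ys = np.concatenate([sample(k / n_shifts)[1] for k in range(n_shifts)])
    true_fit = fitted_amplitude(xs, ys, f_true)
    alias_on_pooled = fitted_amplitude(xs, ys, f_true - 1.0)
    ```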

  7. Overview of deep learning in medical imaging.

    PubMed

    Suzuki, Kenji

    2017-09-01

    The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. 
In our experience, MTANNs were substantially more efficient to develop, had higher performance, and required fewer training cases than CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.

  8. Automatic dynamic range adjustment for ultrasound B-mode imaging.

    PubMed

    Lee, Yeonhwa; Kang, Jinbum; Yoo, Yangmo

    2015-02-01

    In medical ultrasound imaging, dynamic range (DR) is defined as the difference between the maximum and minimum values of the signal to be displayed, and it is one of the most essential parameters determining image quality. Typically, DR is given a fixed value and adjusted manually by operators, which leads to low clinical productivity and high user dependency; furthermore, in 3D ultrasound imaging, DR values cannot be adjusted during 3D data acquisition. A histogram matching method, which equalizes the histogram of an input image based on that of a reference image, can be applied to determine the DR value; however, it can lead to an over-contrasted image. In this paper, a new Automatic Dynamic Range Adjustment (ADRA) method is presented that adaptively adjusts the DR value by manipulating input images to resemble a reference image. The proposed ADRA method uses the distance ratio between the log average and each extreme value of a reference image. To evaluate the performance of the ADRA method, the similarity between the reference and input images was measured by computing a correlation coefficient (CC). In in vivo experiments, applying the ADRA method increased the CC values from 0.6872 to 0.9870 and from 0.9274 to 0.9939 for kidney and liver data, respectively, compared to the fixed-DR case. In addition, the proposed ADRA method was shown to outperform the histogram matching method on in vivo liver and kidney data. When using 3D abdominal data with 70 frames, while the CC value from the ADRA method increased only slightly (i.e., 0.6%), the proposed method showed improved image quality in the c-plane compared to its fixed counterpart, which suffered from a shadow artifact. These results indicate that the proposed method can enhance image quality in 2D and 3D ultrasound B-mode imaging by improving the similarity between the reference and input images while eliminating unnecessary manual interaction by the user. 
Copyright © 2014 Elsevier B.V. All rights reserved.
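    The histogram matching baseline mentioned above can be sketched as a quantile mapping; the ADRA method's own distance-ratio rule is not reproduced here, and the synthetic images and function name are assumptions:

    ```python
    import numpy as np

    def match_histogram(src, ref):
        """Monotone grey-level mapping that gives `src` (the input B-mode
        frame) the empirical intensity distribution of `ref`."""
        s_vals, s_idx, s_counts = np.unique(src.ravel(), return_inverse=True,
                                            return_counts=True)
        r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
        s_cdf = np.cumsum(s_counts) / src.size
        r_cdf = np.cumsum(r_counts) / ref.size
        mapped = np.interp(s_cdf, r_cdf, r_vals)   # quantile mapping per level
        return mapped[s_idx].reshape(src.shape)

    rng = np.random.default_rng(4)
    ref = rng.normal(128, 40, (64, 64)).clip(0, 255).astype(np.uint8)
    src = rng.normal(100, 10, (64, 64)).clip(0, 255).astype(np.uint8)
    out = match_histogram(src, ref)
    ```

    Because the mapping forces the full reference distribution onto the input, narrow-histogram inputs get stretched hard, which is the over-contrast failure mode the abstract notes.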

  9. Visual Input to the Drosophila Central Complex by Developmentally and Functionally Distinct Neuronal Populations.

    PubMed

    Omoto, Jaison Jiro; Keleş, Mehmet Fatih; Nguyen, Bao-Chau Minh; Bolanos, Cheyenne; Lovick, Jennifer Kelly; Frye, Mark Arthur; Hartenstein, Volker

    2017-04-24

    The Drosophila central brain consists of stereotyped neural lineages, developmental-structural units of macrocircuitry formed by the sibling neurons of single progenitors called neuroblasts. We demonstrate that the lineage principle guides the connectivity and function of neurons, providing input to the central complex, a collection of neuropil compartments important for visually guided behaviors. One of these compartments is the ellipsoid body (EB), a structure formed largely by the axons of ring (R) neurons, all of which are generated by a single lineage, DALv2. Two further lineages, DALcl1 and DALcl2, produce neurons that connect the anterior optic tubercle, a central brain visual center, with R neurons. Finally, DALcl1/2 receive input from visual projection neurons of the optic lobe medulla, completing a three-legged circuit that we call the anterior visual pathway (AVP). The AVP bears a fundamental resemblance to the sky-compass pathway, a visual navigation circuit described in other insects. Neuroanatomical analysis and two-photon calcium imaging demonstrate that DALcl1 and DALcl2 form two parallel channels, establishing connections with R neurons located in the peripheral and central domains of the EB, respectively. Although neurons of both lineages preferentially respond to bright objects, DALcl1 neurons have small ipsilateral, retinotopically ordered receptive fields, whereas DALcl2 neurons share a large excitatory receptive field in the contralateral hemifield. DALcl2 neurons become inhibited when the object enters the ipsilateral hemifield and display an additional excitation after the object leaves the field of view. Thus, the spatial position of a bright feature, such as a celestial body, may be encoded within this pathway. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Self-aligning and compressed autosophy video databases

    NASA Astrophysics Data System (ADS)

    Holtz, Klaus E.

    1993-04-01

    Autosophy, an emerging new science, explains 'self-assembling structures,' such as crystals or living trees, in mathematical terms. This research provides a new mathematical theory of 'learning' and a new 'information theory' which permits the growing of self-assembling data networks in a computer memory, similar to the growing of 'data crystals' or 'data trees', without data processing or programming. Autosophy databases are educated much like a human child to organize their own internal data storage. Input patterns, such as written questions or images, are converted to points in a mathematical omni-dimensional hyperspace. The input patterns are then associated with output patterns, such as written answers or images. Omni-dimensional information storage results in enormous data compression because each pattern fragment is stored only once. Pattern recognition in text or image files is greatly simplified by the peculiar omni-dimensional storage method. Video databases absorb input images from a TV camera and associate them with textual information. The 'black box' operations are totally self-aligning, where the input data determine their own hyperspace storage locations. Self-aligning autosophy databases may lead to a new generation of brain-like devices.

  11. Development and Validation of the Suprathreshold Stochastic Resonance-Based Image Processing Method for the Detection of Abdomino-pelvic Tumor on PET/CT Scans.

    PubMed

    Saroha, Kartik; Pandey, Anil Kumar; Sharma, Param Dev; Behera, Abhishek; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-01-01

    The detection of abdomino-pelvic tumors embedded in or near radioactive urine containing 18F-FDG activity is a challenging task on PET/CT scans. In this study, we propose and validate a suprathreshold stochastic resonance-based image processing method for the detection of these tumors. The method consists of adding noise to the input image and then thresholding it, which creates one frame of an intermediate image; one hundred such frames are generated and averaged to produce the final image. The method was implemented using MATLAB R2013b on a personal computer. The noisy image was generated using random Poisson variates corresponding to each pixel of the input image. To verify the method, 30 sets of pre-diuretic and corresponding post-diuretic PET/CT scan images (25 tumor images and 5 control images with no tumor) were included. For each pre-diuretic (input) image, 26 images were created at threshold values equal to the mean counts multiplied by a constant factor ranging from 1.0 to 2.6 in increments of 0.1; these were visually inspected, and the image that most closely matched the gold standard (the corresponding post-diuretic image) was selected as the final output image. These images were further evaluated by two nuclear medicine physicians. In 22 of the 25 tumor images, the tumor was successfully detected, and in the five control images no false positives were reported. Thus, the empirical probability of detecting abdomino-pelvic tumors evaluates to 0.88. The proposed method was able to detect abdomino-pelvic tumors on pre-diuretic PET/CT scans with a high probability of success and no false positives.
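    The core of the described method maps directly to code: per-pixel Poisson variates, binarisation at mean counts × factor, and averaging of 100 frames. The synthetic phantom and function name below are assumptions:

    ```python
    import numpy as np

    def ssr_enhance(img, threshold_factor=1.5, n_frames=100, seed=0):
        """Suprathreshold stochastic resonance as described above: Poisson
        noise is drawn per pixel, each noisy frame is binarised at
        (threshold_factor * mean counts), and n_frames frames are averaged."""
        rng = np.random.default_rng(seed)
        thr = threshold_factor * img.mean()
        acc = np.zeros(img.shape, dtype=float)
        for _ in range(n_frames):
            noisy = rng.poisson(img).astype(float)   # Poisson variate per pixel
            acc += (noisy > thr)
        return acc / n_frames                        # detection-probability map

    # Faint "tumour" (mean 12 counts) on a "urine" background (mean 9 counts).
    img = np.full((32, 32), 9.0)
    img[12:20, 12:20] = 12.0
    prob = ssr_enhance(img, threshold_factor=1.3)
    ```

    Because the tumour pixels exceed the threshold more often than the background across the 100 noisy frames, the averaged map lifts the lesion above the urine activity.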

  12. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration

    PubMed Central

    Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D.; Sun, Mingui

    2013-01-01

Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes, and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc.) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image. PMID:24223474

  13. Deep neural network using color and synthesized three-dimensional shape for face recognition

    NASA Astrophysics Data System (ADS)

    Rhee, Seon-Min; Yoo, ByungIn; Han, Jae-Joon; Hwang, Wonjun

    2017-03-01

We present an approach for face recognition using synthesized three-dimensional (3-D) shape information together with two-dimensional (2-D) color in a deep convolutional neural network (DCNN). As 3-D facial shape is hardly affected by the extrinsic 2-D texture changes caused by illumination, make-up, and occlusions, it can provide reliable complementary features that work in harmony with the 2-D color feature in face recognition. Unlike other approaches that use 3-D shape information with the help of an additional depth sensor, our approach generates a personalized 3-D face model by using only face landmarks in the 2-D input image. Using the personalized 3-D face model, we generate a frontalized 2-D color facial image as well as 3-D facial images (e.g., a depth image and a normal image). In our DCNN, we first feed the 2-D and 3-D facial images into independent convolutional layers, where the low-level kernels are learned according to their own characteristics. Then, we merge them and feed them into higher-level layers under a single deep neural network. Our proposed approach is evaluated on the Labeled Faces in the Wild dataset, and the results show that the error of the verification rate at a false acceptance rate of 1% is reduced by up to 32.1% compared with the baseline, where only a 2-D color image is used.

  14. KENIS: a high-performance thermal imager developed using the OSPREY IR detector

    NASA Astrophysics Data System (ADS)

    Goss, Tristan M.; Baker, Ian M.

    2000-07-01

"KENIS", a complete, high-performance, compact and lightweight thermal imager, is built around the "OSPREY" infrared detector from BAE Systems Infrared Ltd. The "OSPREY" detector uses a 384 × 288 element CMT array with a 20 μm pixel size, cooled to 120 K. The relatively small pixel size results in very compact cryogenics and optics, and the relatively high operating temperature provides fast start-up time, low power consumption and long operating life. Requiring a single input supply voltage and consuming less than 30 watts of power, the thermal imager generates both analogue and digital format outputs. The "KENIS" lens assembly features a near diffraction-limited dual field-of-view optical system that has been designed to be athermalized and switches between fields in less than one second. The "OSPREY" detector produces near background-limited performance with few defects and has special pixel-level circuitry to eliminate crosstalk and blooming effects. This, together with signal processing based on an effective two-point fixed pattern noise correction algorithm, results in high quality imagery and a thermal imager that is suitable for most traditional thermal imaging applications. This paper describes the rationale used in the development of the "KENIS" thermal imager, and highlights the potential performance benefits to the user's system, primarily gained by selecting the "OSPREY" infrared detector as the core of the thermal imager.

  15. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration

    NASA Astrophysics Data System (ADS)

    Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D.; Sun, Mingui

    2013-10-01

Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc.) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image.

  16. Resilience to the contralateral visual field bias as a window into object representations

    PubMed Central

    Garcea, Frank E.; Kristensen, Stephanie; Almeida, Jorge; Mahon, Bradford Z.

    2016-01-01

    Viewing images of manipulable objects elicits differential blood oxygen level-dependent (BOLD) contrast across parietal and dorsal occipital areas of the human brain that support object-directed reaching, grasping, and complex object manipulation. However, it is unknown which object-selective regions of parietal cortex receive their principal inputs from the ventral object-processing pathway and which receive their inputs from the dorsal object-processing pathway. Parietal areas that receive their inputs from the ventral visual pathway, rather than from the dorsal stream, will have inputs that are already filtered through object categorization and identification processes. This predicts that parietal regions that receive inputs from the ventral visual pathway should exhibit object-selective responses that are resilient to contralateral visual field biases. To test this hypothesis, adult participants viewed images of tools and animals that were presented to the left or right visual fields during functional magnetic resonance imaging (fMRI). We found that the left inferior parietal lobule showed robust tool preferences independently of the visual field in which tool stimuli were presented. In contrast, a region in posterior parietal/dorsal occipital cortex in the right hemisphere exhibited an interaction between visual field and category: tool-preferences were strongest contralateral to the stimulus. These findings suggest that action knowledge accessed in the left inferior parietal lobule operates over inputs that are abstracted from the visual input and contingent on analysis by the ventral visual pathway, consistent with its putative role in supporting object manipulation knowledge. PMID:27160998

  17. Omniview motionless camera orientation system

    NASA Technical Reports Server (NTRS)

    Martin, H. Lee (Inventor); Kuban, Daniel P. (Inventor); Zimmermann, Steven D. (Inventor); Busko, Nicholas (Inventor)

    2010-01-01

    An apparatus and method is provided for converting digital images for use in an imaging system. The apparatus includes a data memory which stores digital data representing an image having a circular or spherical field of view such as an image captured by a fish-eye lens, a control input for receiving a signal for selecting a portion of the image, and a converter responsive to the control input for converting digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. Various methods include the steps of storing digital data representing an image having a circular or spherical field of view, selecting a portion of the image, and converting the stored digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. In various embodiments, the data converter and data conversion step may use an orthogonal set of transformation algorithms.

  18. Design, Fabrication, and Packaging of Mach-Zehnder Interferometers for Biological Sensing Applications

    NASA Astrophysics Data System (ADS)

    Novak, Joseph

Optical biological sensors are widely used in the fields of medical testing, water treatment and safety, gene identification, and many others due to advances in nanofabrication technology. This work focuses on the design of fiber-coupled Mach-Zehnder Interferometer (MZI) based biosensors fabricated on silicon-on-insulator (SOI) wafers. Silicon waveguide sensors are designed with multimode and single-mode dimensions. Input coupling efficiency is investigated through the design of various taper structures. Integration processing and packaging are performed for fiber attachment and enhancement of input coupling efficiency. Optical guided-wave sensors rely on single-mode operation to extract an induced phase shift from the output signal. A silicon waveguide MZI sensor was designed and fabricated for both multimode and single-mode dimensions. Sensitivity of the sensors is analyzed with respect to waveguide dimensions and materials. An s-bend structure is designed for the multimode waveguide to eliminate higher-order mode power as an alternative to single-mode confinement. Single-mode confinement is experimentally demonstrated through near-field imaging of the waveguide output. Y-junctions are designed for 3 dB power splitting to the MZI arms and for power recombination after sensing, to realize the interferometric function of the MZI. Ultra-short 10 μm taper structures with curved geometries are designed to improve fiber-to-chip insertion loss without significantly increasing device area, and show potential for applications requiring misalignment tolerance. A novel v-groove process is developed for self-aligned integration of fiber grooves for attachment to sensor chips. Thermal oxidation at temperatures from 1050-1150°C during groove processing creates an SiO2 layer on the waveguide end facet, protecting the facet during integration etch processing without additional e-beam lithography. Experimental results show improved insertion loss with the thermal oxidation process compared to dicing preparation and Focused Ion Beam methods.

  19. Detection and Rectification of Distorted Fingerprints.

    PubMed

    Si, Xuanbin; Feng, Jianjiang; Zhou, Jie; Luo, Yuxuan

    2015-03-01

Elastic distortion of fingerprints is one of the major causes of false non-matches. While this problem affects all fingerprint recognition applications, it is especially dangerous in negative recognition applications, such as watchlist and deduplication applications, in which malicious users may purposely distort their fingerprints to evade identification. In this paper, we propose novel algorithms to detect and rectify skin distortion based on a single fingerprint image. Distortion detection is viewed as a two-class classification problem, for which the registered ridge orientation map and period map of a fingerprint are used as the feature vector and an SVM classifier is trained to perform the classification task. Distortion rectification (or, equivalently, distortion field estimation) is viewed as a regression problem, where the input is a distorted fingerprint and the output is the distortion field. To solve this problem, a database (called the reference database) of various distorted reference fingerprints and their corresponding distortion fields is built in the offline stage; in the online stage, the nearest neighbor of the input fingerprint is found in the reference database and the corresponding distortion field is used to transform the input fingerprint into a normal one. Promising results have been obtained on three databases containing many distorted fingerprints: FVC2004 DB1, the Tsinghua Distorted Fingerprint database, and the NIST SD27 latent fingerprint database.
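    The online rectification stage reduces to a nearest-neighbor lookup in the reference database. A minimal sketch (the function name, feature layout and array shapes are assumed for illustration, not taken from the paper's code):

    ```python
    import numpy as np

    def nearest_distortion_field(query, ref_features, ref_fields):
        """Online-stage sketch: find the reference fingerprint whose registered
        orientation/period feature vector is closest to the query's, and return
        its precomputed distortion field for rectifying the input print.

        query:        (d,)          feature vector of the input fingerprint
        ref_features: (n, d)        feature vectors of the distorted references
        ref_fields:   (n, h, w, 2)  per-pixel displacement fields
        """
        dists = np.linalg.norm(ref_features - query, axis=1)  # Euclidean search
        return ref_fields[int(np.argmin(dists))]
    ```

    The returned field is then inverted and applied to warp the distorted input toward a normal fingerprint.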

  20. Successive Single-Word Utterances and Use of Conversational Input: A Pre-Syntactic Route to Multiword Utterances

    ERIC Educational Resources Information Center

    Herr-Israel, Ellen; McCune, Lorraine

    2011-01-01

    In the period between sole use of single words and majority use of multiword utterances, children draw from their existing productive capability and conversational input to facilitate the eventual outcome of majority use of multiword utterances. During this period, children use word combinations that are not yet mature multiword utterances, termed…

  1. Inputs for subject-specific computational fluid dynamics simulation of blood flow in the mouse aorta.

    PubMed

    Van Doormaal, Mark; Zhou, Yu-Qing; Zhang, Xiaoli; Steinman, David A; Henkelman, R Mark

    2014-10-01

Mouse models are an important way of exploring relationships between blood hemodynamics and eventual plaque formation. We have developed a mouse model of aortic regurgitation (AR) that produces large changes in plaque burden with changes in hemodynamics [Zhou et al., 2010, "Aortic Regurgitation Dramatically Alters the Distribution of Atherosclerotic Lesions and Enhances Atherogenesis in Mice," Arterioscler. Thromb. Vasc. Biol., 30(6), pp. 1181-1188]. In this paper, we explore the amount of detail needed for realistic computational fluid dynamics (CFD) calculations in this experimental model. The CFD calculations use inputs based on experimental measurements from ultrasound (US), micro computed tomography (CT), and both anatomical magnetic resonance imaging (MRI) and phase contrast MRI (PC-MRI). The adequacy of five different levels of model complexity ((a) subject-specific CT data from a single mouse; (b) subject-specific CT centerlines with radii from US; (c) the same as (b) but with MRI-derived centerlines; (d) average CT centerlines with averaged vessel radii and branching vessels; and (e) the same as (d) but with averaged MRI centerlines) is evaluated by demonstrating their impact on relative residence time (RRT) outputs. The paper concludes by demonstrating the necessity of subject-specific geometry and recommends as inputs the use of CT or anatomical MRI for establishing the aortic centerlines, M-mode US for scaling the aortic diameters, and a combination of PC-MRI and Doppler US for estimating the spatial and temporal characteristics of the input waveforms.

  2. Using Imaging Spectrometry to Approach Crop Classification from a Water Management Perspective

    NASA Astrophysics Data System (ADS)

    Shivers, S.; Roberts, D. A.

    2017-12-01

We use hyperspectral remote sensing imagery to classify crops in the Central Valley of California at a level that would be of use to water managers. In California, irrigated agriculture uses 80 percent of the state's water supply, with water application rates varying by as much as a factor of three depending on crop type. Therefore, accurate water resource accounting depends on accurate crop mapping. While on-the-ground crop accounting at the county level requires significant labor and time inputs, remote sensing has the potential to map crops over a greater spatial area at more frequent time intervals. Specifically, imaging spectrometry, with its wide spectral range, has the ability to detect small spectral differences at the field-level scale that may be indiscernible to multispectral sensors such as Landsat. In this study, crops in the Central Valley were classified into nine categories defined and used by the California Department of Water Resources as having similar water usages. We used the random forest classifier on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) imagery from June 2013, 2014 and 2015 to analyze the accuracy of multi-temporal images and to investigate the extent to which cropping patterns changed over the course of the 2013-2015 drought. Initial results show accuracies of over 90% for all three years, indicating that hyperspectral imagery has the potential to identify crops by water use group at a single time step with a single sensor, allowing cropping patterns to be monitored in anticipation of water needs.

  3. Contextual descriptors and neural networks for scene analysis in VHR SAR images

    NASA Astrophysics Data System (ADS)

    Del Frate, Fabio; Picchiani, Matteo; Falasco, Alessia; Schiavon, Giovanni

    2016-10-01

The development of SAR technology during the last decade has made it possible to collect a huge amount of data over many regions of the world. In particular, the availability of SAR images from different sensors, with metric or sub-metric spatial resolution, offers novel opportunities in different fields such as land cover, urban monitoring, soil consumption, etc. On the other hand, automatic approaches become crucial for the exploitation of such a huge amount of information. In such a scenario, especially if single-polarization images are considered, the main issue is to select appropriate contextual descriptors, since the backscattering coefficient of a single pixel may not be sufficient to classify an object in the scene. In this paper a comparison among three different approaches for contextual feature definition is presented, so as to design optimum procedures for VHR SAR scene understanding. The first approach is based on the Gray Level Co-Occurrence Matrix, since it is widely accepted and several studies have used it for land cover classification with SAR data. The second approach is based on Fourier spectra and has already been proposed, with positive results, for this kind of problem. The third is based on Auto-associative Neural Networks, which have already proven effective for feature extraction from polarimetric SAR images. The three methods are evaluated in terms of the accuracy of the classified scene when the features extracted using each method are fed as input to a neural network classifier applied to different Cosmo-SkyMed spotlight products.

  4. Transfer learning for diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Benson, Jeremy; Carrillo, Hector; Wigdahl, Jeff; Nemeth, Sheila; Maynard, John; Zamora, Gilberto; Barriga, Simon; Estrada, Trilce; Soliz, Peter

    2018-03-01

Diabetic Retinopathy (DR) is a leading cause of blindness worldwide and is estimated to threaten the vision of nearly 200 million people by 2030. To cope with this ever-increasing population, the use of image processing algorithms to screen for those at risk has been on the rise. Research-oriented solutions have proven effective in classifying images with or without DR, but often fail to address the true need of the clinic: referring only those who need to be seen by a specialist, and reading every single case. In this work, we leverage an array of image preprocessing techniques, as well as Transfer Learning, to re-purpose an existing deep network for our tasks in DR. We train, test, and validate our system on 979 clinical cases, achieving a 95% Area Under the Curve (AUC) for referring Severe DR with an equal-error sensitivity and specificity of 90%. Our system does not reject any images based on their quality, and is agnostic to eye side and field. These results show that general-purpose classifiers can, with the right type of input, have a major impact in clinical environments or for teams lacking access to large volumes of data or high-throughput supercomputers.

  5. IDeF-X ECLAIRs: A CMOS ASIC for the Readout of CdTe and CdZnTe Detectors for High Resolution Spectroscopy

    NASA Astrophysics Data System (ADS)

    Gevin, Olivier; Baron, Pascal; Coppolani, Xavier; Daly, FranÇois; Delagnes, Eric; Limousin, Olivier; Lugiez, Francis; Meuris, Aline; Pinsard, FrÉdÉric; Renaud, Diana

    2009-08-01

The latest member of the IDeF-X ASIC family is presented: IDeF-X ECLAIRs is a 32-channel front-end ASIC designed for the readout of Cadmium Telluride (CdTe) and Cadmium Zinc Telluride (CdZnTe) detectors. Thanks to its noise performance (Equivalent Noise Charge floor of 33 e⁻ rms) and its radiation-hardened design (Single Event Latchup Linear Energy Transfer threshold of 56 MeV·cm²·mg⁻¹), the chip is well suited for soft X-ray energy discrimination and high-energy-resolution, "space proof" hard X-ray spectroscopy. We measured a low energy threshold of less than 4 keV with a 10 pF input capacitor, and a minimal reachable sensitivity of the Equivalent Noise Charge (ENC) to input capacitance of less than 7 e⁻/pF, obtained with a 6 μs peak time. IDeF-X ECLAIRs will be used for the readout of 6400 CdTe Schottky monopixel detectors of the 2D coded-mask imaging telescope ECLAIRs aboard the SVOM satellite. IDeF-X ECLAIRs (or IDeF-X V2) has also been designed for the readout of a pixelated CdTe detector in the miniature spectro-imager prototype Caliste 256, currently foreseen for the high-energy detector module of the Simbol-X mission.

  6. Investigating the Role of Global Histogram Equalization Technique for 99mTechnetium-Methylene diphosphonate Bone Scan Image Enhancement

    PubMed Central

    Pandey, Anil Kumar; Sharma, Param Dev; Dheer, Pankaj; Parida, Girish Kumar; Goyal, Harish; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-01-01

Purpose of the Study: 99mTechnetium-methylene diphosphonate (99mTc-MDP) bone scan images have a limited number of counts per pixel, and hence they have inferior image quality compared to X-rays. Theoretically, the global histogram equalization (GHE) technique can improve the contrast of a given image, though the practical benefits of doing so have gained only limited acceptance. In this study, we have investigated the effect of the GHE technique on 99mTc-MDP bone scan images. Materials and Methods: A set of 89 low-contrast 99mTc-MDP whole-body bone scan images was included in this study. These images were acquired with parallel-hole collimation on a Symbia E gamma camera. The images were then processed with the histogram equalization technique. The image quality of the input and processed images was reviewed by two nuclear medicine physicians on a 5-point scale, where a score of 1 is for very poor and 5 is for the best image quality. A statistical test was applied to find the significance of the difference between the mean scores assigned to the input and processed images. Results: The technique improves the contrast of the images; however, oversaturation was noticed in the processed images. Student's t-test was applied, and a statistically significant difference between input and processed image quality was found at P < 0.001 (with α = 0.05). However, further improvement in image quality is needed as per the requirements of nuclear medicine physicians. Conclusion: The GHE technique can be used on low-contrast bone scan images. In some cases, histogram equalization in combination with another postprocessing technique is useful. PMID:29142344
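    The core GHE mapping passes each gray level through the normalized cumulative histogram, which spreads the concentrated low-count intensities across the full output range. A generic sketch (not the clinical workstation's implementation; the bin count is an assumption):

    ```python
    import numpy as np

    def global_hist_eq(img, n_bins=256):
        """Global histogram equalization sketch: replace each pixel's gray
        level with the value of the normalized cumulative histogram (CDF)
        at that level, stretching contrast over [0, 1]."""
        hist, bin_edges = np.histogram(img.ravel(), bins=n_bins)
        cdf = hist.cumsum().astype(float)
        cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
        # look up each pixel's bin and substitute the CDF value
        idx = np.clip(np.digitize(img.ravel(), bin_edges[1:-1]), 0, n_bins - 1)
        return cdf[idx].reshape(img.shape)
    ```

    The oversaturation reported above is visible in this form: bins holding many pixels produce large CDF jumps, pushing mid-range counts toward the extremes.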

  7. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

Automatic registration of multi-source remote-sensing images is a difficult task, as it must deal with the varying illuminations and resolutions of the images, different perspectives and local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process first finds, in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for fast searching. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, an affine transformation is estimated and applied for rectification. Experiments with QuickBird, SPOT5, SPOT4 and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and accuracy of the proposed technique for multi-source remote-sensing image registration.
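    The coarse-alignment step fits an affine model to the matched point pairs, which in least-squares form is a single linear solve. A sketch independent of the feature detector used (the function name and shapes are assumptions):

    ```python
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares affine model from matched point pairs.
        src, dst: (n, 2) arrays of corresponding points, n >= 3.
        Returns the 2x3 matrix A such that dst ~= [x, y, 1] @ A.T."""
        X = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
        A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # minimizes ||X A - dst||
        return A.T
    ```

    The same per-facet fit, restricted to a triangle's three vertices, yields the piecewise linear transformation applied over each TIN facet in the fine-scale stage.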

  8. Semi-automatic medical image segmentation with adaptive local statistics in Conditional Random Fields framework.

    PubMed

    Hu, Yu-Chi J; Grossberg, Michael D; Mageras, Gikas S

    2008-01-01

Planning radiotherapy and surgical procedures usually requires onerous manual segmentation of anatomical structures from medical images. In this paper we present a semi-automatic and accurate segmentation method that dramatically reduces the time and effort required of expert users. This is accomplished by giving the user an intuitive graphical interface to indicate samples of target and non-target tissue by loosely drawing a few brush strokes on the image. We use these brush strokes to provide the statistical input for a Conditional Random Field (CRF) based segmentation. Since we extract purely statistical information from the user input, we eliminate the need for assumptions on boundary contrast previously used by many other methods. A new feature of our method is that the statistics on one image can be reused on related images without registration. To demonstrate this, we show that boundary statistics provided on a few 2D slices of volumetric medical data can be propagated through the entire 3D stack of images without using the geometric correspondence between images. In addition, the image segmentation from the CRF can be formulated as a minimum s-t graph cut problem, which has a solution that is both globally optimal and fast. The combination of fast segmentation and minimal, reusable user input makes this a powerful technique for the segmentation of medical images.
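    The statistical input extracted from brush strokes can be illustrated with a simple per-class Gaussian intensity model, an illustrative stand-in for the paper's CRF statistics (the single-feature Gaussian model and all names here are assumptions):

    ```python
    import numpy as np

    def stroke_unaries(img, fg_mask, bg_mask):
        """Fit a Gaussian to the intensities under each brush stroke and return
        per-pixel negative log-likelihood costs; these serve as the data (unary)
        terms of the CRF / s-t graph cut. Note that no boundary-contrast
        assumption enters: only stroke statistics are used."""
        def nll(x, samples):
            mu, sigma = samples.mean(), samples.std() + 1e-6
            return 0.5 * ((x - mu) / sigma) ** 2 + np.log(sigma)
        return nll(img, img[fg_mask]), nll(img, img[bg_mask])
    ```

    Because the costs depend only on intensity statistics, the same fitted model can be reapplied to related slices without registration, matching the reuse property described above.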

  9. Investigation of dynamic SPECT measurements of the arterial input function in human subjects using simulation, phantom and human studies

    NASA Astrophysics Data System (ADS)

    Winant, Celeste D.; Aparici, Carina Mari; Zelnik, Yuval R.; Reutter, Bryan W.; Sitek, Arkadiusz; Bacharach, Stephen L.; Gullberg, Grant T.

    2012-01-01

Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon emission computed tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from a database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines.
The spatiotemporal maximum-likelihood expectation-maximization (4D ML-EM) reconstructions gave more accurate reconstructions than did standard frame-by-frame static 3D ML-EM reconstructions. The SPECT/P results showed that 4D ML-EM reconstruction gave higher and more accurate estimates of K1 than did 3D ML-EM, yielding anywhere from a 44% underestimation to 24% overestimation for the three patients. The SPECT/D results showed that 4D ML-EM reconstruction gave an overestimation of 28% and 3D ML-EM gave an underestimation of 1% for K1. For the patient study the 4D ML-EM reconstruction provided continuous images as a function of time of the concentration in both ventricular cavities and myocardium during the 2 min infusion. It is demonstrated that a 2 min infusion with a two-headed SPECT system rotating 180° every 54 s can produce measurements of blood pool and myocardial TACs, though the SPECT simulation studies showed that one must sample at least every 30 s to capture a 1 min infusion input function.
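    The one-compartment model used to estimate K1 relates the blood input TAC to the tissue TAC by a convolution with an exponential washout kernel. A sketch of the forward model on a uniform time grid (function name and discretization are assumptions):

    ```python
    import numpy as np

    def tissue_tac(blood_tac, k1, k2, dt):
        """One-compartment forward model:
            C_t(t) = K1 * integral_0^t exp(-k2 * (t - s)) * C_b(s) ds,
        i.e. tracer enters tissue at wash-in rate K1 and washes out at rate k2,
        evaluated here as a discrete convolution with time step dt."""
        t = np.arange(len(blood_tac)) * dt
        kernel = np.exp(-k2 * t) * dt
        return k1 * np.convolve(blood_tac, kernel)[: len(blood_tac)]
    ```

    Estimating K1 then amounts to least-squares fitting of this model to the measured myocardial TAC given the spline-estimated blood input curve.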

  10. Modeling and comparative study of fluid velocities in heterogeneous rocks

    NASA Astrophysics Data System (ADS)

    Hingerl, Ferdinand F.; Romanenko, Konstantin; Pini, Ronny; Balcom, Bruce; Benson, Sally

    2013-04-01

    Detailed knowledge of the distribution of effective porosity and fluid velocities in heterogeneous rock samples is crucial for understanding and predicting spatially resolved fluid residence times and kinetic reaction rates of fluid-rock interactions. The applicability of conventional MRI techniques to sedimentary rocks is limited by internal magnetic field gradients and short spin relaxation times. The approach developed at the UNB MRI Centre combines the 13-interval Alternating-Pulsed-Gradient Stimulated-Echo (APGSTE) scheme and three-dimensional Single Point Ramped Imaging with T1 Enhancement (SPRITE). These methods were designed to reduce the errors due to effects of background gradients and fast transverse relaxation. SPRITE is largely immune to time-evolution effects resulting from background gradients, paramagnetic impurities and chemical shift. Using these techniques quantitative 3D porosity maps as well as single-phase fluid velocity fields in sandstone core samples were measured. Using a new Magnetic Resonance Imaging technique developed at the MRI Centre at UNB, we created 3D maps of porosity distributions as well as single-phase fluid velocity distributions of sandstone rock samples. Then, we evaluated the applicability of the Kozeny-Carman relationship for modeling measured fluid velocity distributions in sandstones samples showing meso-scale heterogeneities using two different modeling approaches. The MRI maps were used as reference points for the modeling approaches. For the first modeling approach, we applied the Kozeny-Carman relationship to the porosity distributions and computed respective permeability maps, which in turn provided input for a CFD simulation - using the Stanford CFD code GPRS - to compute averaged velocity maps. The latter were then compared to the measured velocity maps. For the second approach, the measured velocity distributions were used as input for inversely computing permeabilities using the GPRS CFD code. 
The computed permeabilities were then correlated with those based on the porosity maps and the Kozeny-Carman relationship. The findings of the comparative modeling study are discussed, and their potential impact on the modeling of fluid residence times and kinetic reaction rates of fluid-rock interactions in rocks containing meso-scale heterogeneities is reviewed.
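
    The Kozeny-Carman step of the first modeling approach maps each porosity value to a permeability estimate. A minimal sketch of that mapping, assuming the classic grain-diameter form of the relation; the grain diameter and the prefactor 180 are illustrative assumptions, not parameters from the study:

```python
import numpy as np

def kozeny_carman_permeability(porosity, grain_diameter=1e-4):
    """Permeability (m^2) from porosity via the classic Kozeny-Carman form
    k = d^2 * phi^3 / (180 * (1 - phi)^2)."""
    phi = np.asarray(porosity, dtype=float)
    return grain_diameter ** 2 * phi ** 3 / (180.0 * (1.0 - phi) ** 2)

# A small synthetic 3D porosity map with one higher-porosity layer
porosity_map = np.full((4, 4, 4), 0.15)
porosity_map[:, :, 2] = 0.25
permeability_map = kozeny_carman_permeability(porosity_map)
```

    Applied voxel-wise to a measured porosity map, this yields the permeability field that is then fed to the CFD simulation.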

  11. Collaborative identification method for sea battlefield target based on deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Zheng, Guangdi; Pan, Mingbo; Liu, Wei; Wu, Xuetong

    2018-03-01

    The target identification of the sea battlefield is a prerequisite for judging the enemy in modern naval battle. In this paper, a collaborative identification method based on convolutional neural networks is proposed to identify typical targets of sea battlefields. Different from the traditional single-input/single-output identification method, the proposed method constructs a multi-input/single-output co-identification architecture based on an optimized convolutional neural network and weighted D-S evidence theory. The simulation results show that
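
    The weighted D-S (Dempster-Shafer) fusion stage rests on Dempster's rule of combination. A minimal unweighted sketch for singleton hypotheses; the class labels and mass values are invented for illustration, and the paper's weighting scheme is not reproduced:

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments over singleton hypotheses
    with Dempster's rule; mass on disagreeing pairs is conflict and is
    renormalised away."""
    combined, conflict = {}, 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            if h1 == h2:
                combined[h1] = combined.get(h1, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Two network branches reporting beliefs over hypothetical target classes
branch_a = {"warship": 0.7, "cargo ship": 0.3}
branch_b = {"warship": 0.6, "cargo ship": 0.4}
fused = dempster_combine(branch_a, branch_b)
```

    Agreeing evidence reinforces itself: both branches lean towards "warship", and the fused belief leans further still.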

  12. Manipulating molecular quantum states with classical metal atom inputs: demonstration of a single molecule NOR logic gate.

    PubMed

    Soe, We-Hyo; Manzano, Carlos; Renaud, Nicolas; de Mendoza, Paula; De Sarkar, Abir; Ample, Francisco; Hliwa, Mohamed; Echavarren, Antonio M; Chandrasekhar, Natarajan; Joachim, Christian

    2011-02-22

    Quantum states of a trinaphthylene molecule were manipulated by putting its naphthyl branches in contact with single Au atoms. One Au atom carries 1-bit of classical information input that is converted into quantum information throughout the molecule. The Au-trinaphthylene electronic interactions give rise to measurable energy shifts of the molecular electronic states demonstrating a NOR logic gate functionality. The NOR truth table of the single molecule logic gate was characterized by means of scanning tunnelling spectroscopy.

  13. Multiplexing 32,000 spectra onto 8 detectors: the HARMONI field splitting, image slicing, and wavelength selecting optics

    NASA Astrophysics Data System (ADS)

    Tecza, Matthias; Thatte, Niranjan; Clarke, Fraser; Freeman, David; Kosmalski, Johan

    2012-09-01

    HARMONI, the High Angular Resolution Monolithic Optical & Near-infrared Integral field spectrograph, is one of two first-light instruments for the European Extremely Large Telescope. Over a 256x128 pixel field-of-view HARMONI will simultaneously measure approximately 32,000 spectra. Each spectrum is about 4000 spectral pixels long, and covers a selectable part of the 0.47-2.45 μm wavelength range at resolving powers of R ≈ 4000, 10,000, or 20,000. All 32,000 spectra are imaged onto eight HAWAII4RG detectors using a multiplexing scheme that divides the input field into four sub-fields, each imaged onto one image slicer that in turn re-arranges a single sub-field into two long exit slits, each feeding one spectrograph. In total we require eight spectrographs, each with one HAWAII4RG detector. A system of articulated and exchangeable fold-mirrors and VPH gratings allows one to select different spectral resolving powers and wavelength ranges of interest while keeping a fixed geometry between the spectrograph collimator and camera, avoiding the need for an articulated grating and camera. In this paper we describe both the field splitting and image slicing optics as well as the optics that will be used to select both spectral resolving power and wavelength range.
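
    The multiplexing arithmetic in the abstract can be checked directly: the 256x128 spaxel field accounts for the "approximately 32,000" spectra, and four sub-fields with two exit slits each account for the eight spectrographs.

```python
field = (256, 128)                 # spatial pixels (spaxels) in the field-of-view
n_spectra = field[0] * field[1]    # one spectrum per spaxel: 32,768

subfields = 4                      # input field divided into four sub-fields
slits_per_slicer = 2               # each image slicer produces two long exit slits
n_spectrographs = subfields * slits_per_slicer
```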

  14. Performance of SEM scintillation detector evaluated by modulation transfer function and detective quantum efficiency function.

    PubMed

    Bok, Jan; Schauer, Petr

    2014-01-01

    In this paper, the SEM detector is evaluated by the modulation transfer function (MTF), which expresses the detector's influence on the SEM image contrast. This is a novel approach, since the MTF was previously used to describe only area imaging detectors or whole imaging systems. The measurement technique and calculation of the MTF for the SEM detector are presented. In addition, the measurement and calculation of the detective quantum efficiency (DQE) as a function of spatial frequency for the SEM detector are described. In this technique, a time-modulated e-beam is used in order to create a well-defined input signal for the detector. The MTF and DQE measurements are demonstrated on the Everhart-Thornley scintillation detector, fitted alternately with YAG:Ce, YAP:Ce, and CRY18 single-crystal scintillators. The presented MTF and DQE characteristics show good imaging properties of the detectors with the YAP:Ce or CRY18 scintillator, especially for a specific type of e-beam scan. The results demonstrate the great benefit of describing SEM detectors using the MTF and DQE. In addition, point-by-point and continual-sweep e-beam scans in SEM are discussed and their influence on the image quality is revealed using the MTF. © 2013 Wiley Periodicals, Inc.
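
    The MTF itself is the normalised Fourier magnitude of the detector's spread function. A sketch with synthetic Gaussian line spread functions; the widths are arbitrary illustrative values, not measured detector data:

```python
import numpy as np

def mtf_from_lsf(lsf):
    """MTF = |FFT of the line spread function|, normalised to 1 at zero
    spatial frequency."""
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]

x = np.arange(-32, 32)
narrow_lsf = np.exp(-x ** 2 / (2 * 1.5 ** 2))   # sharper detector response
wide_lsf = np.exp(-x ** 2 / (2 * 4.0 ** 2))     # blurrier detector response
mtf_narrow = mtf_from_lsf(narrow_lsf)
mtf_wide = mtf_from_lsf(wide_lsf)
```

    The blurrier detector's MTF falls off faster with spatial frequency, which is exactly the contrast loss this kind of analysis quantifies.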

  15. Hybrid imaging worldwide-challenges and opportunities for the developing world: a report of a Technical Meeting organized by IAEA.

    PubMed

    Kashyap, Ravi; Dondi, Maurizio; Paez, Diana; Mariani, Guliano

    2013-05-01

    The growth in nuclear medicine in the past decade is largely due to hybrid imaging, specifically single-photon emission tomography-computed tomography (SPECT-CT) and positron emission tomography-computed tomography (PET-CT). The introduction and use of hybrid imaging has been growing at a fast pace, which has brought many challenges and opportunities for the personnel dealing with it. The International Atomic Energy Agency (IAEA) keeps a close watch on trends in applications of nuclear techniques in health in many ways, including obtaining inputs from member states and professional societies. In 2012, a Technical Meeting on trends in hybrid imaging was organized by IAEA to understand the current status and trends of hybrid imaging using nuclear techniques, its role in clinical practice, and associated educational needs and challenges. The perspectives of scientific societies and professionals from all regions of the world were obtained. Heterogeneity in value, educational needs, and access was noted, and the drivers of this heterogeneity were discussed. This article presents the key points shared during the technical meeting, focusing primarily on SPECT-CT and PET-CT, and shares the action plan suggested by the participants for IAEA to deal with this heterogeneity. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Localization Using Visual Odometry and a Single Downward-Pointing Camera

    NASA Technical Reports Server (NTRS)

    Swank, Aaron J.

    2012-01-01

    Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping (SLAM). Yet the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and its use in power-limited applications. Evaluated here is a technique in which a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
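
    A common building block of such monocular odometry is estimating the inter-frame translation of the ground texture. A sketch using phase correlation on a synthetic texture; real pipelines add feature tracking, sub-pixel refinement, and scale recovery from camera height:

```python
import numpy as np

def phase_correlate(prev_frame, next_frame):
    """Integer-pixel shift of next_frame relative to prev_frame: the
    normalised cross-power spectrum peaks at the translation."""
    a = np.fft.fft2(prev_frame)
    b = np.fft.fft2(next_frame)
    cross = np.conj(a) * b
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    peak = [int(p) for p in np.unravel_index(np.argmax(corr), corr.shape)]
    # shifts past half the frame wrap around to negative values
    for i, n in enumerate(corr.shape):
        if peak[i] > n // 2:
            peak[i] -= n
    return tuple(peak)

rng = np.random.default_rng(0)
ground = rng.random((64, 64))                        # synthetic ground texture
moved = np.roll(ground, shift=(3, -5), axis=(0, 1))  # camera translated between frames
shift = phase_correlate(ground, moved)
```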

  17. Imaging of a Defect in Thin Plates Using the Time Reversal of Single Mode Lamb Waves

    NASA Astrophysics Data System (ADS)

    Jeong, Hyunjo; Lee, Jung-Sik; Bae, Sung-Min

    2011-06-01

    This paper presents an analytical investigation of baseline-free imaging of a defect in plate-like structures using the time reversal of Lamb waves. We first consider flexural wave (A0 mode) propagation in a plate containing a defect, and the reception and time reversal of the output signal at the receiver. The received output signal is composed of two parts: a directly propagated wave and a wave scattered from the defect. The time reversal of these waves recovers the original input signal and produces two additional sidebands that contain the time-of-flight information on the defect location. One of the sideband signals is then extracted as a pure defect signal. A defect localization image is then constructed with a beamforming technique based on the time-frequency analysis of the sideband signal for each transducer pair in a network of sensors. The simulation results show that the proposed scheme enables accurate, baseline-free detection of a defect; experimental studies are needed to verify the proposed method and to apply it to real structures.

  18. Coupled multiview autoencoders with locality sensitivity for three-dimensional human pose estimation

    NASA Astrophysics Data System (ADS)

    Yu, Jialin; Sun, Jifeng; Luo, Shasha; Duan, Bichao

    2017-09-01

    Estimating three-dimensional (3D) human poses from a single camera is usually implemented by searching pose candidates with image descriptors. Existing methods usually suppose that the mapping from feature space to pose space is linear, but in fact, their mapping relationship is highly nonlinear, which heavily degrades the performance of 3D pose estimation. We propose a method to recover 3D pose from a silhouette image. It is based on the multiview feature embedding (MFE) and the locality-sensitive autoencoders (LSAEs). On the one hand, we first depict the manifold regularized sparse low-rank approximation for MFE and then the input image is characterized by a fused feature descriptor. On the other hand, both the fused feature and its corresponding 3D pose are separately encoded by LSAEs. A two-layer back-propagation neural network is trained by parameter fine-tuning and then used to map the encoded 2D features to encoded 3D poses. Our LSAE ensures a good preservation of the local topology of data points. Experimental results demonstrate the effectiveness of our proposed method.

  19. Analysis and visualization of single-trial event-related potentials

    NASA Technical Reports Server (NTRS)

    Jung, T. P.; Makeig, S.; Westerfield, M.; Townsend, J.; Courchesne, E.; Sejnowski, T. J.

    2001-01-01

    In this study, a linear decomposition technique, independent component analysis (ICA), is applied to single-trial multichannel EEG data from event-related potential (ERP) experiments. Spatial filters derived by ICA blindly separate the input data into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain sources. Both the data and their decomposition are displayed using a new visualization tool, the "ERP image," that can clearly characterize single-trial variations in the amplitudes and latencies of evoked responses, particularly when sorted by a relevant behavioral or physiological variable. These tools were used to analyze data from a visual selective attention experiment on 28 control subjects plus 22 neurological patients whose EEG records were heavily contaminated with blink and other eye-movement artifacts. Results show that ICA can separate artifactual, stimulus-locked, response-locked, and non-event-related background EEG activities into separate components, a taxonomy not obtained from conventional signal averaging approaches. This method allows: (1) removal of pervasive artifacts of all types from single-trial EEG records, (2) identification and segregation of stimulus- and response-locked EEG components, (3) examination of differences in single-trial responses, and (4) separation of temporally distinct but spatially overlapping EEG oscillatory activities with distinct relationships to task events. The proposed methods also allow the interaction between ERPs and the ongoing EEG to be investigated directly. We studied the between-subject component stability of ICA decomposition of single-trial EEG epochs by clustering components with similar scalp maps and activation power spectra. Components accounting for blinks, eye movements, temporal muscle activity, event-related potentials, and event-modulated alpha activities were largely replicated across subjects. 
Applying ICA and ERP image visualization to the analysis of sets of single trials from event-related EEG (or MEG) experiments can increase the information available from ERP (or ERF) data. Copyright 2001 Wiley-Liss, Inc.
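
    The "ERP image" itself is a simple construction: stack single-trial epochs, sort them by a behavioural variable, and smooth across neighbouring trials. A sketch on synthetic data; the latencies, trial count, and noise level are invented:

```python
import numpy as np

def erp_image(trials, sort_values, smooth=5):
    """Sort single-trial epochs (trials x time) by a behavioural variable
    and apply a moving average over neighbouring trials."""
    sorted_trials = trials[np.argsort(sort_values)]
    if smooth > 1:
        windows = np.lib.stride_tricks.sliding_window_view(
            sorted_trials, smooth, axis=0)
        sorted_trials = windows.mean(axis=-1)
    return sorted_trials

# Synthetic epochs: an evoked peak whose latency tracks reaction time
rng = np.random.default_rng(1)
rts = rng.uniform(20, 70, size=40)          # hypothetical reaction times (samples)
t = np.arange(100)
trials = np.exp(-(t[None, :] - rts[:, None]) ** 2 / 20.0)
trials += 0.3 * rng.standard_normal((40, 100))
img = erp_image(trials, rts)
```

    In the sorted, smoothed image the peak latency drifts systematically from top to bottom, single-trial structure that plain trial averaging hides.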

  20. Method and apparatus for eliminating coherent noise in a coherent energy imaging system without destroying spatial coherence

    NASA Technical Reports Server (NTRS)

    Shulman, A. R. (Inventor)

    1971-01-01

    A method and apparatus for substantially eliminating noise in a coherent energy imaging system, and specifically in a light imaging system of the type having a coherent light source and at least one image lens disposed between an input signal plane and an output image plane, are discussed. The input signal plane is illuminated with the light source while the lens is rotated about its optical axis. In this manner, the energy density of coherent noise diffraction patterns produced by imperfections such as dust and/or bubbles on and/or in the lens is distributed over a ring-shaped area of the output image plane and reduced to a point where it can be ignored. The spatial filtering capability of the coherent imaging system is not affected by this noise elimination technique.

  1. Performance assessment of multi-frequency processing of ICU chest images for enhanced visualization of tubes and catheters

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohui; Couwenhoven, Mary E.; Foos, David H.; Doran, James; Yankelevitz, David F.; Henschke, Claudia I.

    2008-03-01

    An image-processing method has been developed to improve the visibility of tube and catheter features in portable chest x-ray (CXR) images captured in the intensive care unit (ICU). The image-processing method is based on a multi-frequency approach, wherein the input image is decomposed into different spatial frequency bands, and those bands that contain the tube and catheter signals are individually enhanced by nonlinear boosting functions. Using a random sampling strategy, 50 cases were retrospectively selected for the study from a large database of portable CXR images that had been collected from multiple institutions over a two-year period. All images used in the study were captured using photo-stimulable, storage phosphor computed radiography (CR) systems. Each image was processed two ways. The images were processed with default image processing parameters such as those used in clinical settings (control). The 50 images were then separately processed using the new tube and catheter enhancement algorithm (test). Three board-certified radiologists participated in a reader study to assess differences in both detection-confidence performance and diagnostic efficiency between the control and test images. Images were evaluated on a diagnostic-quality, 3-megapixel monochrome monitor. Two scenarios were studied: the baseline scenario, representative of today's workflow (a single-control image presented with the window/level adjustments enabled) vs. the test scenario (a control/test image pair presented with a toggle enabled and the window/level settings disabled). The radiologists were asked to read the images in each scenario as they normally would for clinical diagnosis. Trend analysis indicates that the test scenario offers improved reading efficiency while providing as good or better detection capability compared to the baseline scenario.
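
    The multi-frequency idea, split into bands, boost the band carrying thin-line detail, recombine, can be sketched with box-blur low-pass filters; the radii and gains are illustrative choices, and the linear boosting below stands in for the paper's nonlinear boosting functions:

```python
import numpy as np

def box_blur(img, r):
    """Box blur of radius r with edge padding (brute-force window sum)."""
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def enhance_bands(img, radii=(1, 4), gains=(2.0, 1.0)):
    """Decompose into base/mid/high bands and boost the high band,
    where thin tube and catheter edges live."""
    low1 = box_blur(img, radii[0])
    low2 = box_blur(img, radii[1])
    high = img - low1            # finest detail band
    mid = low1 - low2            # intermediate band
    base = low2                  # low-frequency background
    return base + gains[1] * mid + gains[0] * high

# Demo: a one-pixel-wide line as a stand-in for a catheter edge
img = np.zeros((32, 32))
img[16, :] = 1.0
enhanced = enhance_bands(img)
```

    With unit gains the bands sum back to the original image exactly; raising the high-band gain amplifies the thin line relative to its background.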

  2. Measurement of myocardial blood flow by cardiovascular magnetic resonance perfusion: comparison of distributed parameter and Fermi models with single and dual bolus.

    PubMed

    Papanastasiou, Giorgos; Williams, Michelle C; Kershaw, Lucy E; Dweck, Marc R; Alam, Shirjel; Mirsadraee, Saeed; Connell, Martin; Gray, Calum; MacGillivray, Tom; Newby, David E; Semple, Scott Ik

    2015-02-17

    Mathematical modeling of cardiovascular magnetic resonance perfusion data allows absolute quantification of myocardial blood flow. Saturation of left ventricle signal during standard contrast administration can compromise the input function used when applying these models. This saturation effect is evident during application of standard Fermi models in single bolus perfusion data. Dual bolus injection protocols have been suggested to eliminate saturation but are much less practical in the clinical setting. The distributed parameter model can also be used for absolute quantification but has not been applied in patients with coronary artery disease. We assessed whether distributed parameter modeling might be less dependent on arterial input function saturation than Fermi modeling in healthy volunteers. We validated the accuracy of each model in detecting reduced myocardial blood flow in stenotic vessels versus gold-standard invasive methods. Eight healthy subjects were scanned using a dual bolus cardiac perfusion protocol at 3T. We performed both single and dual bolus analysis of these data using the distributed parameter and Fermi models. For the dual bolus analysis, a scaled pre-bolus arterial input function was used. In single bolus analysis, the arterial input function was extracted from the main bolus. We also analysed, using both models, single bolus data obtained from five patients with coronary artery disease, and findings were compared against independent invasive coronary angiography and fractional flow reserve. Statistical significance was defined as a two-sided P value < 0.05. Fermi models overestimated myocardial blood flow in healthy volunteers due to arterial input function saturation in single bolus analysis compared to dual bolus analysis (P < 0.05). No difference in distributed parameter myocardial blood flow estimates was observed in these volunteers between single and dual bolus analysis.
In patients, distributed parameter modeling was able to detect reduced myocardial blood flow at stress (<2.5 mL/min/mL of tissue) in all 12 stenotic vessels compared to only 9 for Fermi modeling. Comparison of single bolus versus dual bolus values suggests that distributed parameter modeling is less dependent on arterial input function saturation than Fermi modeling. Distributed parameter modeling showed excellent accuracy in detecting reduced myocardial blood flow in all stenotic vessels.
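
    The central pitfall, arterial input function (AIF) saturation inflating flow estimates, can be illustrated with the simple max-slope estimate MBF ≈ max(dC/dt) / max(AIF). This is not the Fermi or distributed parameter model of the paper, just a minimal sketch of why clipping the input function biases flow upward; all curves are synthetic:

```python
import numpy as np

t = np.linspace(0.0, 30.0, 301)                  # time (s)
aif = 5.0 * (t / 5.0) * np.exp(1.0 - t / 5.0)    # gamma-variate-like AIF (a.u.)
residue = 0.02 * np.exp(-t / 20.0)               # toy tissue impulse response
tissue = np.convolve(aif, residue)[: t.size] * (t[1] - t[0])

mbf_true = np.max(np.gradient(tissue, t)) / np.max(aif)

# Signal saturation clips the AIF peak, shrinking the denominator
aif_saturated = np.clip(aif, 0.0, 0.6 * np.max(aif))
mbf_saturated = np.max(np.gradient(tissue, t)) / np.max(aif_saturated)
```

    Clipping the AIF peak at 60% of its true value inflates this flow estimate by exactly the reciprocal factor, mirroring the Fermi overestimation reported above.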

  3. Is the place cell a "supple" engram?

    PubMed

    Routtenberg, Aryeh

    2015-06-01

    This short note, which honors Nobelists O'Keefe and the Mosers, asks how the patterning of inputs to a single place cell regulates its firing. Because the number of possible input combinations to a single CA1 place cell is very large, the generally accepted view, that inputs to a place cell are relatively restricted and repeat near-identically upon re-presentation of the environment, is rejected. The alternative proposed here is that firing of any roughly 100 excitatory inputs, a subset selected from the 30,000 synapses, leads to CA1 cell firing. Because the number of such subset combinations is astronomically large, even though the same cell dutifully fires when the animal is in an identical location, the inputs that fire the place cell are obligatorily non-identical. This CA1 input combinatorial proposal may help us understand the physiological underpinnings of the memory mechanism arising from supple synapses (Routtenberg (2013), Hippocampus 23:202-206). © 2015 Wiley Periodicals, Inc.
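
    The combinatorial claim is easy to make concrete: the number of distinct 100-input subsets of 30,000 synapses (the note's own figures) is astronomically large, so two visits to the same place need never re-use the same input combination.

```python
from math import comb

n_subsets = comb(30_000, 100)   # distinct 100-synapse combinations
digits = len(str(n_subsets))    # order of magnitude of that count
```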

  4. Logarithmic profile mapping multi-scale Retinex for restoration of low illumination images

    NASA Astrophysics Data System (ADS)

    Shi, Haiyan; Kwok, Ngaiming; Wu, Hongkun; Li, Ruowei; Liu, Shilong; Lin, Ching-Feng; Wong, Chin Yeow

    2018-04-01

    Images are valuable information sources for many scientific and engineering applications. However, images captured in poor illumination conditions have a large portion of dark regions that can heavily degrade image quality. In order to improve the quality of such images, a restoration algorithm is developed here that transforms the low input brightness to a higher value using a modified Multi-Scale Retinex approach. The algorithm is further improved by an entropy-based weighting between the input and the processed results to refine the necessary amplification in regions of low brightness. Moreover, fine details in the image are preserved by applying the Retinex principles to extract and then re-insert object edges to obtain an enhanced image. Results from experiments using low and normal illumination images show satisfactory performance with regard to the improvement in information content and the mitigation of viewing artifacts.
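
    The core of multi-scale Retinex is the mean over scales of log(image) minus log(blurred image). A sketch using box blurs in place of the usual Gaussian surrounds; the scales are illustrative, and the paper's logarithmic profile mapping and entropy-based weighting are not reproduced:

```python
import numpy as np

def box_blur(img, r):
    """Box blur of radius r with edge padding (brute-force window sum)."""
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def multi_scale_retinex(img, radii=(2, 8, 16), eps=1e-6):
    """MSR: mean over scales of log(image) - log(surround)."""
    img = img.astype(float)
    out = np.zeros_like(img)
    for r in radii:
        out += np.log(img + eps) - np.log(box_blur(img, r) + eps)
    return out / len(radii)

# A dark half and a bright half; MSR responds to local contrast,
# not to the absolute illumination level
img = np.full((24, 24), 0.05)
img[:, 12:] = 0.8
msr = multi_scale_retinex(img)
```

    A uniformly lit flat region maps to zero, which is why MSR output is largely independent of the overall illumination level.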

  5. Single-stage three-phase boost power factor correction circuit for AC-DC converter

    NASA Astrophysics Data System (ADS)

    Azazi, Haitham Z.; Ahmed, Sayed M.; Lashine, Azza E.

    2018-01-01

    This article presents a single-stage three-phase power factor correction (PFC) circuit for an AC-to-DC converter using a single-switch boost regulator, improving the input power factor (PF), reducing the input current harmonics, and decreasing the number of required active switches. A novel PFC control strategy, characterised by a simple and low-cost control circuit, was adopted to achieve good dynamic performance and unity input PF and to minimise the harmonic content of the input current; it can be applied to low/medium power converters. Detailed analytical, simulation and experimental studies were therefore conducted. The effectiveness of the proposed controller algorithm is validated by the simulation results, which were obtained using the MATLAB/SIMULINK environment. The proposed system was built and tested in the laboratory using a DSP-DS1104 digital control board for an inductive load. The results revealed that the total harmonic distortion in the supply current was very low. Finally, good agreement between simulation and experimental results was achieved.
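
    The benefit of PFC can be sketched numerically through the standard relation between distortion and power factor: for a sinusoidal supply, true PF = cos(phi) / sqrt(1 + THD^2). The harmonic spectra below are invented illustrations, not measurements from the paper:

```python
import numpy as np

def thd(harmonic_rms):
    """Total harmonic distortion from RMS amplitudes [I1, I2, I3, ...]."""
    return np.sqrt(np.sum(np.square(harmonic_rms[1:]))) / harmonic_rms[0]

def true_power_factor(harmonic_rms, displacement_rad=0.0):
    """True PF = displacement factor x distortion factor."""
    return np.cos(displacement_rad) / np.sqrt(1.0 + thd(harmonic_rms) ** 2)

rectifier = np.array([1.0, 0.0, 0.80, 0.0, 0.55])  # uncorrected: rich in harmonics
with_pfc = np.array([1.0, 0.0, 0.04, 0.0, 0.02])   # corrected: near-sinusoidal
pf_rectifier = true_power_factor(rectifier)
pf_corrected = true_power_factor(with_pfc)
```

    Suppressing the low-order harmonics drives the THD towards zero and the input PF towards unity, which is the behaviour the paper reports.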

  6. A reduced adaptive observer for multivariable systems. [using reduced dynamic ordering

    NASA Technical Reports Server (NTRS)

    Carroll, R. L.; Lindorff, D. P.

    1973-01-01

    An adaptive observer for multivariable systems is presented for which the dynamic order of the observer is reduced, subject to mild restrictions. The observer structure depends directly upon the multivariable structure of the system rather than on a transformation to a single-output system. The number of adaptive gains is at most the sum of the order of the system and the number of input parameters being adapted. Moreover, for the relatively frequent cases in which the number of required adaptive gains is less than the sum of system order and input parameters, the number of these gains is easily determined by inspection of the system structure. This adaptive observer possesses all the properties ascribed to the single-input single-output adaptive observer. Like the other adaptive observers, some restriction on the allowable system command input is required to guarantee convergence of the adaptive algorithm, but the restriction is more lenient than that required by the full-order multivariable observer. This reduced observer is not restricted to cyclic systems.

  7. Self-tuning multivariable pole placement control of a multizone crystal growth furnace

    NASA Technical Reports Server (NTRS)

    Batur, C.; Sharpless, R. B.; Duval, W. M. B.; Rosenthal, B. N.

    1992-01-01

    This paper presents the design and implementation of a multivariable self-tuning temperature controller for the control of lead bromide crystal growth. The crystal grows inside a multizone transparent furnace, with eight interacting heating zones shaping the axial temperature distribution inside the furnace. A multi-input, multi-output furnace model is identified on-line by a recursive least squares estimation algorithm, and a multivariable pole placement controller based on this model is derived and implemented. Comparison between single-input, single-output and multi-input, multi-output self-tuning controllers demonstrates that zone-to-zone interactions can be better minimized by a multi-input, multi-output controller design, which directly affects the quality of the grown crystal.

  8. "Refsdal" Meets Popper: Comparing Predictions of the Re-appearance of the Multiply Imaged Supernova Behind MACSJ1149.5+2223

    NASA Astrophysics Data System (ADS)

    Treu, T.; Brammer, G.; Diego, J. M.; Grillo, C.; Kelly, P. L.; Oguri, M.; Rodney, S. A.; Rosati, P.; Sharon, K.; Zitrin, A.; Balestra, I.; Bradač, M.; Broadhurst, T.; Caminha, G. B.; Halkola, A.; Hoag, A.; Ishigaki, M.; Johnson, T. L.; Karman, W.; Kawamata, R.; Mercurio, A.; Schmidt, K. B.; Strolger, L.-G.; Suyu, S. H.; Filippenko, A. V.; Foley, R. J.; Jha, S. W.; Patel, B.

    2016-01-01

    Supernova “Refsdal,” multiply imaged by the cluster MACS1149.5+2223, represents a rare opportunity to make a true blind test of model predictions in extragalactic astronomy, on a timescale that is short compared to a human lifetime. In order to take advantage of this event, we produced seven gravitational lens models with five independent methods, based on Hubble Space Telescope (HST) Hubble Frontier Field images, along with extensive spectroscopic follow-up observations by HST, the Very Large Telescope, and the Keck Telescopes. We compare the model predictions and show that they agree reasonably well with the measured time delays and magnification ratios between the known images, even though these quantities were not used as input. This agreement is encouraging, considering that the models only provide statistical uncertainties, and do not include additional sources of uncertainty such as structure along the line of sight, cosmology, and the mass sheet degeneracy. We then present the model predictions for the other appearances of supernova “Refsdal.” A future image will reach its peak in the first half of 2016, while another image appeared between 1994 and 2004. The past image would have been too faint to be detected in existing archival images. The future image should be approximately one-third as bright as the brightest known image (i.e., H_AB ≈ 25.7 mag at peak and H_AB ≈ 26.7 mag six months before peak), and thus detectable in single-orbit HST images. We will find out soon whether our predictions are correct.
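
    The quoted magnitudes convert to brightness ratios via the standard relation f = 10^(-0.4 * delta_m); for example, the half-year rise from H_AB ≈ 26.7 to ≈ 25.7 is a factor of about 2.5 in flux, and "one-third as bright" corresponds to a gap of about 1.2 mag.

```python
from math import log10

def flux_ratio(m_faint, m_bright):
    """Flux ratio f_faint / f_bright implied by two AB magnitudes."""
    return 10 ** (-0.4 * (m_faint - m_bright))

rise_to_peak = 1.0 / flux_ratio(26.7, 25.7)  # brightening over the final 6 months
one_third_gap = 2.5 * log10(3)               # mag difference for a 3x flux ratio
```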

  9. 1.2 MW peak power, all-solid-state picosecond laser with a microchip laser seed and a high gain single-passing bounce geometry amplifier

    NASA Astrophysics Data System (ADS)

    Wang, Chunhua; Shen, Lifeng; Zhao, Zhiliang; Liu, Bin; Jiang, Hongbo; Chen, Jun; Liu, Chong

    2016-11-01

    A semiconductor saturable absorber mirror (SESAM) based passively Q-switched microchip Nd:YVO4 seed laser with a pulse duration of 90 ps at a repetition rate of 100 kHz is amplified by single-passing a Nd:YVO4 bounce amplifier, with the seed input power varied from 20 μW to 10 mW. A liquid pure-metal thermal grease is used in place of the traditional thin indium foil as the thermal contact material for better heat-load transfer from the Nd:YVO4 bounce amplifier. The temperature distribution at the pump surface is measured with an infrared imager and compared with numerically simulated results. A highest single-pass output power of 11.3 W is obtained for 10 mW averaged seed power, corresponding to a pulse peak power of ~1.25 MW and a pulse energy of ~113 μJ. The beam quality is well preserved, with M² ≤ 1.25. The simple configuration of this bounce laser amplifier makes the system flexible, robust and cost-effective, showing attractive potential for further applications.
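
    The quoted peak power follows directly from the average power, repetition rate, and pulse duration; a quick consistency check of the abstract's numbers:

```python
p_avg = 11.3        # W, amplified average output power
f_rep = 100e3       # Hz, repetition rate
t_pulse = 90e-12    # s, pulse duration

e_pulse = p_avg / f_rep      # pulse energy: ~113 microjoules
p_peak = e_pulse / t_pulse   # peak power: ~1.26 MW, matching the quoted ~1.25 MW
```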

  10. Using virtual data for training deep model for hand gesture recognition

    NASA Astrophysics Data System (ADS)

    Nikolaev, E. I.; Dvoryaninov, P. V.; Lensky, Y. Y.; Drozdovsky, N. S.

    2018-05-01

    Deep learning has shown real promise for classification efficiency in hand gesture recognition problems. In this paper, the authors present experimental results for a deeply-trained model for hand gesture recognition from hand images. The authors have trained two deep convolutional neural networks. The first architecture produces the hand position as a 2D vector from an input hand image; the second predicts the hand gesture class for the input image. The first proposed architecture produces state-of-the-art results with an accuracy rate of 89%, and the second architecture, with split input, produces an accuracy rate of 85.2%. The authors also propose using virtual data for training a supervised deep model. This technique is aimed at avoiding the use of original labelled images in the training process. The interest of this method of data preparation is motivated by the need to overcome one of the main challenges of deep supervised learning: the need for a copious amount of labelled data during training.

  11. On the Visual Input Driving Human Smooth-Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean

    1996-01-01

    Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.

  12. Color constancy using bright-neutral pixels

    NASA Astrophysics Data System (ADS)

    Wang, Yanfang; Luo, Yupin

    2014-03-01

    An effective illuminant-estimation approach for color constancy is proposed. Bright and near-neutral pixels are selected to jointly represent the illuminant color and utilized for illuminant estimation. To assess the representing capability of pixels, bright-neutral strength (BNS) is proposed by combining pixel chroma and brightness. Accordingly, a certain percentage of pixels with the largest BNS is selected to be the representative set. For every input image, a proper percentage value is determined via an iterative strategy by seeking the optimal color-corrected image. To compare various color-corrected images of an input image, image color-cast degree (ICCD) is devised using means and standard deviations of RGB channels. Experimental evaluation on standard real-world datasets validates the effectiveness of the proposed approach.
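
    The bright-neutral selection can be sketched as follows. The score below (brightness minus a chroma penalty) is a simplified stand-in for the paper's bright-neutral strength, and a fixed percentage replaces the iterative ICCD-based percentage search; all scene values are synthetic:

```python
import numpy as np

def estimate_illuminant(img, percent=2.0):
    """Average the RGB of the top-scoring bright, near-neutral pixels and
    return the unit-norm illuminant estimate."""
    flat = img.reshape(-1, 3).astype(float)
    brightness = flat.sum(axis=1)
    chroma = flat.max(axis=1) - flat.min(axis=1)
    score = brightness - 3.0 * chroma
    n = max(1, int(len(flat) * percent / 100.0))
    top = flat[np.argsort(score)[-n:]]
    rgb = top.mean(axis=0)
    return rgb / np.linalg.norm(rgb)

# Random colours plus a bright neutral patch, under a reddish illuminant
rng = np.random.default_rng(2)
scene = rng.random((32, 32, 3)) * 0.5
scene[:4, :4] = 0.9                      # bright neutral patch
illuminant = np.array([1.0, 0.8, 0.6])
estimate = estimate_illuminant(scene * illuminant)
```

    The bright neutral patch dominates the selected set, so the estimate recovers the illuminant direction; dividing the image by it would complete a simple white-balance correction.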

  13. MMX-I: data-processing software for multimodal X-ray imaging and tomography.

    PubMed

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-05-01

    A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field, and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline, even on a standard PC. To the authors' knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments.

  14. Widespread Decrease of Nicotinic Acetylcholine Receptors in Parkinson's Disease

    PubMed Central

    Ichise, Masanori; Zoghbi, Sami S; Liow, Jeih-San; Ghose, Subroto; Vines, Douglass C; Sangare, Janet; Lu, Jian-Qiang; Cropley, Vanessa L; Iida, Hidehiro; Kim, Kyeong Min; Cohen, Robert M; Bara-Jimenez, William; Ravina, Bernard; Innis, Robert B

    2005-01-01

    Nicotinic acetylcholine receptors (nAChRs) have close interactions with the dopaminergic system and play critical roles in cognitive function. nAChRs were imaged in 10 non-demented Parkinson's disease (PD) patients and 15 age-matched healthy subjects using the single photon emission computed tomography ligand [123I]5-iodo-3-[2(S)-2-azetidinylmethoxy]pyridine. Using an arterial input function, we measured the total distribution volume (V; specific plus non-displaceable) as well as the delivery (K1). PD patients showed a widespread, significant decrease (∼10%) of V in both cortical and subcortical regions without a significant change in K1. These results indicate the importance of extending the study to demented patients. PMID:16374823

  15. Apparatus and method for combining light from two or more fibers into a single fiber

    DOEpatents

    Klingsporn, Paul Edward

    2007-02-20

    An apparatus and method for combining light signals carried on a plurality of input fibers onto a single receiving fiber with a high degree of efficiency. The apparatus broadly comprises the receiving fiber and a plurality of input fiber-lens assemblies, with each fiber lens assembly including an input fiber; a collimating lens interposed between the input fiber and the receiving fiber and adapted to collimate the light signal; and a focusing lens interposed between the collimating lens and the receiving fiber and adapted to focus the collimated light signal onto the face of the receiving fiber. The components of each fiber-lens assembly are oriented along an optic axis that is inclined relative to the receiving fiber, with the inclination angle depending at least in part on the input fiber's numerical aperture and the focal lengths and diameters of the collimating and focusing lenses.

  16. Apparatus and method for combining light from two or more fibers into a single fiber

    DOEpatents

    Klingsporn, Paul Edward

    2006-03-14

    An apparatus and method for combining light signals carried on a plurality of input fibers onto a single receiving fiber with a high degree of efficiency. The apparatus broadly comprises the receiving fiber and a plurality of input fiber-lens assemblies, with each fiber lens assembly including an input fiber; a collimating lens interposed between the input fiber and the receiving fiber and adapted to collimate the light signal; and a focusing lens interposed between the collimating lens and the receiving fiber and adapted to focus the collimated light signal onto the face of the receiving fiber. The components of each fiber-lens assembly are oriented along an optic axis that is inclined relative to the receiving fiber, with the inclination angle depending at least in part on the input fiber's numerical aperture and the focal lengths and diameters of the collimating and focusing lenses.

  17. The NMR phased array.

    PubMed

    Roemer, P B; Edelstein, W A; Hayes, C E; Souza, S P; Mueller, O M

    1990-11-01

    We describe methods for simultaneously acquiring and subsequently combining data from a multitude of closely positioned NMR receiving coils. The approach is conceptually similar to phased array radar and ultrasound, and hence we call our techniques the "NMR phased array." The NMR phased array offers the signal-to-noise ratio (SNR) and resolution of a small surface coil over fields-of-view (FOV) normally associated with body imaging, with no increase in imaging time. The NMR phased array can be applied to both imaging and spectroscopy for all pulse sequences. The problematic interactions among nearby surface coils are eliminated (a) by overlapping adjacent coils to give zero mutual inductance, hence zero interaction, and (b) by attaching low input impedance preamplifiers to all coils, thus eliminating interference among next nearest and more distant neighbors. We derive an algorithm for combining the data from the phased array elements to yield an image with optimum SNR. Other techniques, which are easier to implement at the cost of lower SNR, are explored. Phased array imaging is demonstrated with high resolution (512 x 512, 48-cm FOV, and 32-cm FOV) spin-echo images of the thoracic and lumbar spine. Data were acquired from four-element linear spine arrays, the first made of 12-cm square coils and the second made of 8-cm square coils. When compared with images from a single 15 x 30-cm rectangular coil and identical imaging parameters, the phased array yields a 2X and 3X higher SNR at the depth of the spine (approximately 7 cm).
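
    The optimum-SNR combination weights each coil's signal by the inverse noise covariance and the coil sensitivities; a minimal numpy sketch of a Roemer-style combination alongside the simpler sum-of-squares alternative (array shapes and names are illustrative, and the paper's exact algorithm may differ in detail):

```python
import numpy as np

def sum_of_squares(coil_images):
    # coil_images: (n_coils, H, W) complex; a simple, near-optimal combination
    # that needs no sensitivity or noise information.
    return np.sqrt((np.abs(coil_images) ** 2).sum(axis=0))

def optimal_combine(coil_images, sensitivities, noise_cov):
    # Roemer-style combination p = b^H R^-1 s, voxel by voxel.
    # coil_images, sensitivities: (n_coils, H, W); noise_cov: (n_coils, n_coils).
    Rinv = np.linalg.inv(noise_cov)
    weights = np.einsum('ij,jhw->ihw', Rinv, sensitivities)   # R^-1 b per voxel
    return np.abs(np.einsum('ihw,ihw->hw', coil_images, weights.conj()))
```

    With uniform sensitivities and white, uncorrelated noise the two reduce to the same image up to scale; the covariance-weighted form pays off when coil noise is correlated.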

  18. Recurrent neural networks for breast lesion classification based on DCE-MRIs

    NASA Astrophysics Data System (ADS)

    Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen

    2018-02-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a significant role in breast cancer screening, cancer staging, and monitoring response to therapy. Recently, deep learning methods have been rapidly incorporated into image-based breast cancer diagnosis and prognosis. However, most current deep learning methods make clinical decisions based on 2-dimensional (2D) or 3D images and are not well suited for temporal image data. In this study, we develop a deep learning methodology that enables integration of clinically valuable temporal components of DCE-MRIs into deep learning-based lesion classification. Our work is performed on a database of 703 DCE-MRI cases for the task of distinguishing benign and malignant lesions, and uses the area under the ROC curve (AUC) as the performance metric for that task. We train a recurrent neural network, specifically a long short-term memory network (LSTM), on sequences of image features extracted from the dynamic MRI sequences. These features are extracted with VGGNet, a convolutional neural network pre-trained on ImageNet, a large dataset of natural images. The features are obtained from various levels of the network to capture low-, mid-, and high-level information about the lesion. Compared to a classification method that takes as input only images at a single time-point (AUC = 0.81, se = 0.04), our LSTM method improves lesion classification with an AUC of 0.85 (se = 0.03).

  19. Imaged Document Optical Correlation and Conversion System (IDOCCS)

    NASA Astrophysics Data System (ADS)

    Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.

    1999-03-01

    Today, the paper document is fast becoming a thing of the past. With the rapid development of fast, inexpensive computing and storage devices, many government and private organizations are archiving their documents in electronic form (e.g., personnel records, medical records, patents, etc.). In addition, many organizations are converting their paper archives to electronic images, which are stored in a computer database. Because of this, there is a need to efficiently organize this data into comprehensive and accessible information resources. The Imaged Document Optical Correlation and Conversion System (IDOCCS) provides a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides the search and retrieval capability of document images. The IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives and can even determine the types of languages contained within a document. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited, e.g., imaged documents containing an agency's seal or logo, or documents with a particular individual's signature block, can be singled out. With this dual capability, IDOCCS outperforms systems that rely on optical character recognition as a basis for indexing and storing only the textual content of documents for later retrieval.

  20. Automated detection of open magnetic field regions in EUV images

    NASA Astrophysics Data System (ADS)

    Krista, Larisza Diana; Reinard, Alysha

    2016-05-01

    Open magnetic regions on the Sun are either long-lived (coronal holes) or transient (dimmings) in nature, but both appear as dark regions in EUV images. For this reason their detection can be done in a similar way. As coronal holes are often large and long-lived in comparison to dimmings, their detection is more straightforward. The Coronal Hole Automated Recognition and Monitoring (CHARM) algorithm detects coronal holes using EUV images and a magnetogram. The EUV images are used to identify dark regions, and the magnetogram allows us to determine whether a dark region is unipolar - a characteristic of coronal holes. There is no temporal sensitivity in this process, since coronal hole lifetimes span days to months. Dimming regions, however, emerge and disappear within hours. Hence, the time and location of a dimming emergence need to be known to successfully identify dimmings and distinguish them from regular coronal holes. Currently, the Coronal Dimming Tracker (CoDiT) algorithm is semi-automated - it requires the dimming emergence time and location as an input. With those inputs we can identify the dimming and track it through its lifetime. CoDiT has also been developed to allow the tracking of dimmings that split or merge - a typical feature of dimmings. The advantage of these particular algorithms is their ability to adapt to detecting different types of open field regions. For coronal hole detection, each full-disk solar image is processed individually to determine a threshold for that image; hence, we are not limited to a single pre-determined threshold. For dimming regions we also allow individual thresholds for each dimming, as they can differ substantially. This flexibility is necessary for a subjective analysis of the studied regions. These algorithms were developed with the goal of better understanding the processes that give rise to eruptive and non-eruptive open field regions. We aim to study how these regions evolve over time and what environmental factors influence their growth and decay over short and long time periods (days to solar cycles).
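
    The abstract does not specify how the per-image threshold is derived; a toy sketch assuming the threshold is a fraction of the median disk intensity, with a simple flux-skew test for unipolarity (both rules are illustrative assumptions, not the published CHARM criteria):

```python
import numpy as np

def detect_dark_regions(euv, frac=0.5):
    # Per-image threshold: a fraction of the median disk intensity, so each
    # image gets its own threshold rather than a single pre-determined one.
    threshold = frac * np.median(euv)
    return euv < threshold

def is_unipolar(magnetogram, mask, skew_limit=0.6):
    # Treat a dark region as unipolar when one polarity dominates its flux.
    flux = magnetogram[mask]
    if flux.size == 0:
        return False
    pos = flux[flux > 0].sum()
    neg = -flux[flux < 0].sum()
    return max(pos, neg) / (pos + neg) > skew_limit
```

    A dark region passing the unipolarity test would be labeled a coronal hole candidate; dimmings would additionally be gated on their emergence time and location.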

  1. Staring 2-D hadamard transform spectral imager

    DOEpatents

    Gentry, Stephen M [Albuquerque, NM; Wehlburg, Christine M [Albuquerque, NM; Wehlburg, Joseph C [Albuquerque, NM; Smith, Mark W [Albuquerque, NM; Smith, Jody L [Albuquerque, NM

    2006-02-07

    A staring imaging system inputs a 2D spatial image containing multi-frequency spectral information. The image is encoded in one dimension with a cyclic Hadamard S-matrix. The resulting image is detected with a spatial 2D detector, and a computer applies a Hadamard transform to recover the encoded image.
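
    The S-matrix encode/decode step can be sketched as follows. For illustration the S-matrix is built from a Sylvester Hadamard matrix rather than the cyclic construction the patent uses; the closed-form inverse S^-1 = 2(2 S^T - J)/(n + 1) is a standard result from Hadamard transform spectroscopy:

```python
import numpy as np

def sylvester_hadamard(m):
    # Sylvester construction of H(2^m), entries +/-1.
    H = np.array([[1]])
    for _ in range(m):
        H = np.block([[H, H], [H, -H]])
    return H

def s_matrix(m):
    # Delete the first row/column of H(2^m) and map +1 -> 0, -1 -> 1,
    # giving a 0/1 S-matrix (the open/closed encoding mask pattern).
    H = sylvester_hadamard(m)
    return ((1 - H[1:, 1:]) // 2).astype(float)

def s_matrix_inverse(S):
    # Closed form: S^-1 = 2 (2 S^T - J) / (n + 1), J the all-ones matrix.
    n = S.shape[0]
    return 2.0 * (2.0 * S.T - np.ones((n, n))) / (n + 1)
```

    In the instrument, `S @ x` corresponds to summing the spectral components passed by each mask row onto the detector; the computer then applies `s_matrix_inverse(S)` to the recorded measurements to recover the spectrum.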

  2. A manual for microcomputer image analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rich, P.M.; Ranken, D.M.; George, J.S.

    1989-12-01

    This manual is intended to serve three basic purposes: as a primer in microcomputer image analysis theory and techniques, as a guide to the use of IMAGE©, a public domain microcomputer program for image analysis, and as a stimulus to encourage programmers to develop microcomputer software suited for scientific use. Topics discussed include the principles of image processing and analysis, use of standard video for input and display, spatial measurement techniques, and the future of microcomputer image analysis. A complete reference guide that lists the commands for IMAGE is provided. IMAGE includes capabilities for digitization, input and output of images, hardware display lookup table control, editing, edge detection, histogram calculation, measurement along lines and curves, measurement of areas, examination of intensity values, output of analytical results, conversion between raster and vector formats, and region movement and rescaling. The control structure of IMAGE emphasizes efficiency, precision of measurement, and scientific utility. 18 refs., 18 figs., 2 tabs.

  3. Speckle noise reduction in ultrasound images using a discrete wavelet transform-based image fusion technique.

    PubMed

    Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun

    2015-01-01

    Here, the speckle noise in ultrasound images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed, and their performances were compared in order to derive the optimal input conditions. To evaluate the speckle noise removal performance, an image fusion algorithm was applied to the ultrasound images and comparatively analyzed against the original image without the algorithm. As a result, applying DWT and filtering techniques alone caused information loss, retained noise characteristics, and did not deliver the best noise reduction performance. Conversely, an image fusion method applying SRAD-original conditions preserved the key information in the original image while removing the speckle noise. Based on these characteristics, the SRAD-original input conditions yielded the best denoising performance for the ultrasound images. From this study, the proposed denoising technique was confirmed to have high potential for clinical application.
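
    A minimal single-level Haar-wavelet fusion sketch illustrates the DWT-based fusion idea. The fusion rule shown (averaged approximation coefficients, max-magnitude detail coefficients) is a common choice and not necessarily the one used in the paper:

```python
import numpy as np

def haar2d(x):
    # One-level 2D Haar transform; x must have even height and width.
    a = (x[0::2] + x[1::2]) / 2.0            # row averages
    d = (x[0::2] - x[1::2]) / 2.0            # row differences
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # Exact inverse of haar2d.
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(img1, img2):
    # Average the approximations; keep the larger-magnitude detail coefficient.
    c1, c2 = haar2d(img1), haar2d(img2)
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *details)
```

    In a speckle-reduction pipeline, `img1` and `img2` would be two differently filtered versions of the same frame (e.g., the SRAD output and the original), so the fused result keeps the stronger structural detail from either input.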

  4. A mathematical model of neuro-fuzzy approximation in image classification

    NASA Astrophysics Data System (ADS)

    Gopalan, Sasi; Pinto, Linu; Sheela, C.; Arun Kumar M., N.

    2016-06-01

    Image digitization and the explosion of the World Wide Web have made traditional search an inefficient method for retrieving required grassland image data from large databases. For a given input query image, a Content-Based Image Retrieval (CBIR) system retrieves the similar images from a large database. Advances in technology have increased the use of grassland image data in diverse areas such as agriculture, art galleries, education, and industry. In all of these areas it is necessary to retrieve grassland image data efficiently from a large database to perform an assigned task and to make a suitable decision. This paper proposes a CBIR system based on grassland image properties that uses a feed-forward back-propagation neural network for effective image retrieval. Fuzzy memberships play an important role in the input space of the proposed system, which leads to a combined neuro-fuzzy approximation in image classification. The mathematical model in the proposed work gives more clarity about fuzzy-neuro approximation and the convergence of the image features in a grassland image.

  5. Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera

    NASA Astrophysics Data System (ADS)

    Rahman, Samiur; Ullah, Sana; Ullah, Sehat

    2018-01-01

    Obstacle detection can improve the mobility as well as the safety of visually impaired people. In this paper, we present a system using a mobile camera for visually impaired people. The proposed algorithm works in indoor environments and uses a very simple technique based on a few pre-stored floor images. In an indoor environment, all unique floor types are considered and a single image is stored for each unique floor type. These floor images are considered as reference images. The algorithm acquires an input image frame, then selects a region of interest and scans it for obstacles using the pre-stored floor images. The algorithm compares the present frame and the next frame and computes the mean square error of the two frames. If the mean square error is less than a threshold value α, then there is no obstacle in the next frame. If the mean square error is greater than α, then there are two possibilities: either there is an obstacle, or the floor type has changed. In order to check whether the floor has changed, the algorithm computes the mean square error of the next frame against all stored floor types. If the minimum of the mean square errors is less than the threshold value α, then the floor has changed; otherwise, an obstacle exists. The proposed algorithm works in real-time and 96% accuracy has been achieved.
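
    The decision logic described above reduces to a few mean-square-error tests; a sketch (the threshold α and the return labels are illustrative):

```python
import numpy as np

def mse(a, b):
    # Mean square error between two equally sized frames.
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def classify_frame(current_roi, next_roi, floor_refs, alpha):
    # 1) Consecutive frames similar -> no obstacle.
    if mse(current_roi, next_roi) < alpha:
        return 'clear'
    # 2) Next frame matches some stored floor type -> the floor changed.
    if min(mse(next_roi, ref) for ref in floor_refs) < alpha:
        return 'floor-changed'
    # 3) Otherwise an obstacle is present.
    return 'obstacle'
```
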

  6. Scientific Visualization and Computational Science: Natural Partners

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization is developing rapidly, stimulated by computational science, which is gaining acceptance as a third alternative to theory and experiment. Computational science is based on numerical simulations of mathematical models derived from theory. But each individual simulation is like a hypothetical experiment; initial conditions are specified, and the result is a record of the observed conditions. Experiments can be simulated for situations that cannot really be created or controlled. Results impossible to measure can be computed. Even for observable values, computed samples are typically much denser. Numerical simulations also extend scientific exploration where the mathematics is analytically intractable. Numerical simulations are used to study phenomena from subatomic to intergalactic scales and from abstract mathematical structures to pragmatic engineering of everyday objects. But computational science methods would be almost useless without visualization. The obvious reason is that the huge amounts of data produced require the high bandwidth of the human visual system, and interactivity adds to the power. Visualization systems also provide a single context for all the activities involved, from debugging the simulations, to exploring the data, to communicating the results. Most of the presentations today have their roots in image processing, where the fundamental task is: given an image, extract information about the scene. Visualization has developed from computer graphics, and the inverse task: given a scene description, make an image. Visualization extends the graphics paradigm by expanding the possible input. The goal is still to produce images; the difficulty is that the input is not a scene description displayable by standard graphics methods. Visualization techniques must either transform the data into a scene description or extend graphics techniques to display this odd input.
Computational science is a fertile field for visualization research because the results vary so widely and include things that have no known appearance. The amount of data creates additional challenges for both hardware and software systems. Evaluations of visualization should ultimately reflect the insight gained into the scientific phenomena. So making good visualizations requires consideration of characteristics of the user and the purpose of the visualization. Knowledge about human perception and graphic design is also relevant. It is this breadth of knowledge that stimulates proposals for multidisciplinary visualization teams and intelligent visualization assistant software. Visualization is an immature field, but computational science is stimulating research on a broad front.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuo, J; Su, K; Department of Radiology, University Hospitals Case Medical Center, Case Western Reserve University, Cleveland, Ohio

    Purpose: Accurate and robust photon attenuation derived from MR is essential for PET/MR and MR-based radiation treatment planning applications. Although the fuzzy C-means (FCM) algorithm has been applied for pseudo-CT generation, the input feature combination and the number of clusters have not been optimized. This study aims to optimize both for clinically practical pseudo-CT generation. Methods: Nine volunteers were recruited. A 190-second, single-acquisition UTE-mDixon with 25% (angular) sampling and 3D radial readout was performed to acquire three primitive MR features at TEs of 0.1, 1.5, and 2.8 ms: the free-induction-decay (FID), the first and the second echo images. Three derived images, Dixon-fat and Dixon-water generated by two-point Dixon water/fat separation, and the R2* (1/T2*) map, were also created. To identify informative inputs for generating a pseudo-CT image volume, all 63 combinations, choosing one to six of the feature images, were used as inputs to FCM for pseudo-CT generation. Further, the number of clusters was varied from four to seven to find the optimal approach. Mean prediction deviation (MPD), mean absolute prediction deviation (MAPD), and correlation coefficient (R) of different combinations were compared for feature selection. Results: Among the 63 feature combinations, the four that resulted in the best MAPD and R were further compared along with the set containing all six features. The results suggested that R2* and Dixon-water are the most informative features. Further, including FID also improved the performance of pseudo-CT generation. Consequently, the set containing FID, Dixon-water, and R2* resulted in the most accurate, robust pseudo-CT when the number of clusters equals five (5C). The clusters were interpreted as air, fat, bone, brain, and fluid. The six-cluster result additionally included bone marrow. Conclusion: The results suggested that FID, Dixon-water, and R2* are the most important features. The findings can be used to facilitate pseudo-CT generation by unsupervised clustering. Please note that the project was completed with partial funding from the Ohio Department of Development grant TECH 11-063 and a sponsored research agreement with Philips Healthcare that is managed by Case Western Reserve University. As noted in the affiliations, some of the authors are Philips employees.
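
    The core of the method is standard fuzzy C-means over voxel feature vectors (e.g., FID, Dixon-water, R2*); a generic numpy implementation of FCM, not the authors' code:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    # X: (n_samples, n_features). Returns (memberships U, cluster centers).
    rng = np.random.default_rng(seed)
    U = rng.random((n_clusters, X.shape[0]))
    U /= U.sum(axis=0)                       # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)             # guard against zero distances
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)            # standard FCM membership update
    return U, centers
```

    For pseudo-CT generation, each cluster (air, fat, bone, brain, fluid at 5C) would then be assigned a representative CT number, and each voxel's pseudo-CT value follows from its cluster memberships.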

  8. Deep multi-spectral ensemble learning for electronic cleansing in dual-energy CT colonography

    NASA Astrophysics Data System (ADS)

    Tachibana, Rie; Näppi, Janne J.; Hironaka, Toru; Kim, Se Hyung; Yoshida, Hiroyuki

    2017-03-01

    We developed a novel electronic cleansing (EC) method for dual-energy CT colonography (DE-CTC) based on an ensemble deep convolutional neural network (DCNN) and multi-spectral, multi-slice image patches. In the method, an ensemble DCNN is used to classify each voxel of a DE-CTC image volume into five classes: luminal air, soft tissue, tagged fecal materials, and the partial-volume boundaries between air and tagging and those between soft tissue and tagging. Each DCNN acts as a voxel classifier, taking as input an image patch centered on the voxel. An image patch has three channels that are mapped from a region of interest containing the image plane of the voxel and the two adjacent image planes. Six different types of spectral input image datasets were derived using two dual-energy CT images, two virtual monochromatic images, and two material images. An ensemble DCNN was constructed by use of a meta-classifier that combines the output of multiple DCNNs, each of which was trained with a different type of multi-spectral image patches. The electronically cleansed CTC images were calculated by removal of regions classified as other than soft tissue, followed by a colon surface reconstruction. For pilot evaluation, 359 volumes of interest (VOIs) representing sources of subtraction artifacts observed in current EC schemes were sampled from 30 clinical CTC cases. Preliminary results showed that the ensemble DCNN can yield high accuracy in labeling of the VOIs, indicating that deep learning of multi-spectral EC with multi-slice imaging could accurately remove residual fecal materials from CTC images without generating major EC artifacts.

  9. Assessing the skeletal age from a hand radiograph: automating the Tanner-Whitehouse method

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; van Ginneken, Bram; Maas, Casper A.; Beek, Frederik J. A.; Viergever, Max A.

    2003-05-01

    The skeletal maturity of children is usually assessed from a standard radiograph of the left hand and wrist. An established clinical method to determine the skeletal maturity is the Tanner-Whitehouse (TW2) method. This method divides the skeletal development into several stages (labelled A, B, ..., I). We are developing an automated system based on this method. In this work we focus on assigning a stage to one region of interest (ROI), the middle phalanx of the third finger. We classify each ROI as follows. A number of ROIs which have been assigned a certain stage by a radiologist are used to construct a mean image for that stage. For a new input ROI, landmarks are detected by using an Active Shape Model. These are used to align the mean images with the input image. Subsequently, the correlation between each transformed mean stage image and the input is calculated. The input ROI can be assigned to the stage with the highest correlation directly, or the correlation values can be used as features in a classifier. The method was tested on 71 cases ranging from stage E to I. The ROI was staged correctly in 73.2% of all cases, and in 97.2% of all incorrectly staged cases the error was not more than one stage.
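
    Once the mean stage images are aligned with the input ROI (the ASM alignment step is assumed already done here), the direct assignment rule is just an argmax over correlations; a sketch with hypothetical stage labels:

```python
import numpy as np

def correlation(a, b):
    # Pearson correlation between two equally sized images.
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def assign_stage(roi, mean_stage_images):
    # mean_stage_images: dict mapping stage label -> aligned mean image.
    # Assign the stage whose mean image correlates best with the input ROI.
    scores = {stage: correlation(roi, img) for stage, img in mean_stage_images.items()}
    return max(scores, key=scores.get)
```

    The alternative the paper mentions would pass the full `scores` vector to a trained classifier instead of taking the argmax directly.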

  10. Arterial input function derived from pairwise correlations between PET-image voxels.

    PubMed

    Schain, Martin; Benjaminsson, Simon; Varnäs, Katarina; Forsberg, Anton; Halldin, Christer; Lansner, Anders; Farde, Lars; Varrone, Andrea

    2013-07-01

    A metabolite-corrected arterial input function is a prerequisite for quantification of positron emission tomography (PET) data by compartmental analysis. This quantitative approach is also necessary for radioligands without suitable reference regions in the brain. The measurement is laborious and requires cannulation of a peripheral artery, a procedure that can be associated with patient discomfort and potential adverse events. A non-invasive procedure for obtaining the arterial input function is thus preferable. In this study, we present a novel method to obtain image-derived input functions (IDIFs). The method is based on calculation of the Pearson correlation coefficient between the time-activity curves of voxel pairs in the PET image to localize voxels displaying blood-like behavior. The method was evaluated using data obtained in human studies with the radioligands [(11)C]flumazenil and [(11)C]AZ10419369, and its performance was compared with three previously published methods. The distribution volumes (VT) obtained using IDIFs were compared with those obtained using traditional arterial measurements. Overall, the agreement in VT was good (∼3% difference) for input functions obtained using the pairwise correlation approach. This approach performed similarly to or even better than the other methods, and could be considered in applied clinical studies. Applications to other radioligands are needed for further verification.
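
    A toy version of the pairwise-correlation idea: voxels whose time-activity curves correlate strongly form candidate groups, and the group peaking earliest is taken as blood-like. The grouping and peak heuristics here are simplified assumptions, not the published method:

```python
import numpy as np

def image_derived_input_function(tacs, r_min=0.95, min_group=5):
    # tacs: (n_voxels, n_frames) time-activity curves.
    R = np.corrcoef(tacs)                        # pairwise Pearson correlations
    counts = (R > r_min).sum(axis=1)             # size of each voxel's group
    candidates = np.where(counts >= min_group)[0]
    # Arterial curves peak early and sharply; keep the earliest-peaking group.
    peak_frames = tacs[candidates].argmax(axis=1)
    blood = candidates[peak_frames == peak_frames.min()]
    return tacs[blood].mean(axis=0)              # IDIF = mean blood-like curve
```

    In practice the resulting IDIF would still need scaling (e.g., against venous samples) and metabolite correction before use in compartmental modeling.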

  11. Image processing and recognition for biological images

    PubMed Central

    Uchida, Seiichi

    2013-01-01

    This paper reviews image processing and pattern recognition techniques that will be useful for analyzing bioimages. Although this paper does not provide their technical details, it conveys their main tasks and the typical tools used to handle those tasks. Image processing is a large research area aimed at improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow, and image registration. Image pattern recognition is the technique of classifying an input image into one of several predefined classes, and it also covers a large research area. This paper overviews its two main modules, that is, the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are a very difficult target even for state-of-the-art image processing and pattern recognition techniques due to noise, deformations, etc. This paper is expected to serve as a tutorial guide bridging biology and image processing researchers for their further collaboration to tackle such a difficult target. PMID:23560739

  12. Auto and hetero-associative memory using a 2-D optical logic gate

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor)

    1992-01-01

    An optical system for auto-associative and hetero-associative recall utilizing Hamming distance as the similarity measure between a binary input image vector V(sup k) and a binary image vector V(sup m) in a first memory array using an optical Exclusive-OR gate for multiplication of each of a plurality of different binary image vectors in memory by the input image vector. After integrating the light of each product V(sup k) x V(sup m), a shortest Hamming distance detection electronics module determines which product has the lowest light intensity and emits a signal that activates a light emitting diode to illuminate a corresponding image vector in a second memory array for display. That corresponding image vector is identical to the memory image vector V(sup m) in the first memory array for auto-associative recall or related to it, such as by name, for hetero-associative recall.
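
    The electronic analogue of the optical recall is short: XOR the query with each stored vector, sum the mismatches (the integrated light in the optical system), and recall the output associated with the minimum Hamming distance:

```python
import numpy as np

def associative_recall(query, memory_in, memory_out):
    # memory_in: (n, d) binary memory vectors; memory_out: associated outputs.
    # XOR gives per-bit mismatches; their sum is the Hamming distance, i.e. the
    # integrated light intensity of each Exclusive-OR product plane.
    distances = np.logical_xor(memory_in, query).sum(axis=1)
    return memory_out[distances.argmin()]
```

    With `memory_out = memory_in` this performs auto-associative recall (the noisy input maps back to its stored original); with a separate output array, such as names, it performs hetero-associative recall.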

  13. Translating landfill methane generation parameters among first-order decay models.

    PubMed

    Krause, Max J; Chickering, Giles W; Townsend, Timothy G

    2016-11-01

    Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L 0 ) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weight-averaging of methane generation parameters from waste composition data in single-phase models was effective in predicting cumulative methane generation from -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon was presented (k c ) and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models. 
Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the Intergovernmental Panel on Climate Change (IPCC), which indicates that decreasing the uncertainty of the input parameters will make the model more accurate rather than adding multiple phases or input parameters.
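    The single-phase-versus-multiphase comparison described above can be sketched numerically. This is a minimal illustration with hypothetical waste fractions and one possible k-weighting choice (the paper itself proposes a carbon-mass-based kc), not the study's calibrated inputs:

```python
import numpy as np

# Hypothetical waste fractions: mass M (Mg), methane potential L0 (m3/Mg),
# and first-order decay rate k (1/yr) -- illustrative values only.
waste = [
    {"M": 60.0, "L0": 100.0, "k": 0.10},   # fast-degrading fraction
    {"M": 40.0, "L0": 200.0, "k": 0.03},   # slow-degrading fraction
]

def multiphase_cumulative(t):
    """Multiphase FOD: sum each fraction's first-order methane yield."""
    return sum(w["M"] * w["L0"] * (1 - np.exp(-w["k"] * t)) for w in waste)

def singlephase_cumulative(t):
    """Single-phase FOD with weighted-average L0 and k (here k is weighted
    by each fraction's methane potential M*L0, one of several choices)."""
    M = sum(w["M"] for w in waste)
    L0 = sum(w["M"] * w["L0"] for w in waste) / M
    k = sum(w["M"] * w["L0"] * w["k"] for w in waste) / (M * L0)
    return M * L0 * (1 - np.exp(-k * t))

# Both formulations converge to the same ultimate yield, sum(M*L0) = 14000 m3.
assert abs(multiphase_cumulative(500.0) - 14000.0) < 1.0
# At 50 years the weighted single-phase model tracks the multiphase sum.
rel = singlephase_cumulative(50.0) / multiphase_cumulative(50.0) - 1
assert abs(rel) < 0.15
```

    The two curves differ only in how the decay is distributed over time; the total yield is fixed by mass and L0, which is why the choice of weighting for k dominates the comparison.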

  14. 3D shape recovery from image focus using gray level co-occurrence matrix

    NASA Astrophysics Data System (ADS)

    Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid

    2018-04-01

    Recovering a precise and accurate 3-D shape of a target object with a robust 3-D shape recovery algorithm is an ultimate objective of the computer vision community. The focus measure algorithm plays an important role in this architecture: it converts the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied to recover the depth map. In this paper, we propose using the Gray Level Co-occurrence Matrix (GLCM), along with its statistical features, for computing the focus information of the image dataset. The GLCM quantifies the texture present in the image using statistical features derived from the joint probability distribution of gray-level pairs in the input image. Finally, we quantify the focus value of the input image using a Gaussian Mixture Model. Owing to its low computational complexity, sharp focus measure curve, robustness to random noise, and accuracy, the proposed measure is a superior alternative to most recently proposed 3-D shape recovery approaches. The algorithm is investigated in depth on real image sequences and a synthetic image dataset, and its efficiency is compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we show that this approach, in spite of its simplicity, generates accurate results.
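    The GLCM-as-focus-measure idea can be sketched with a dependency-free example: build a horizontal co-occurrence matrix and use its contrast feature as the focus value, which drops when an image is defocused. The quantization level, offset, and contrast feature are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Quantize the image to `levels` gray levels, build a horizontal
    gray-level co-occurrence matrix, and return the contrast feature
    sum_ij P(i,j)*(i-j)^2 as a focus value."""
    q = np.clip(np.floor(img.astype(float) / 256.0 * levels).astype(int),
                0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                       # count adjacent gray-level pairs
    p = glcm / glcm.sum()                     # joint probability distribution
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, (64, 64))
# crude 3x3 mean blur (edge-padded) to simulate defocus, no SciPy needed
pad = np.pad(sharp, 1, mode="edge").astype(float)
blurred = sum(pad[di:di + 64, dj:dj + 64]
              for di in range(3) for dj in range(3)) / 9.0

# a sharper image yields a higher GLCM-contrast focus value
assert glcm_contrast(sharp) > glcm_contrast(blurred)
```

    In a shape-from-focus pipeline this scalar would be computed per pixel neighborhood across the image stack, and the depth assigned to the frame maximizing it.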

  15. Integrated editing system for Japanese text and image information "Linernote"

    NASA Astrophysics Data System (ADS)

    Tanaka, Kazuto

    The integrated Japanese text editing system "Linernote", developed by Toyo Industries Co., is explained. The system was developed around the concept of electronic publishing. It is composed of an NEC PC-9801 VX personal computer and peripherals. Sentence, drawing, and image data are input and edited under the system's integrated operating environment, and the final text is printed on a laser printer. The handling efficiency of time-consuming work such as pattern input and page make-up has been improved by a draft-image indication method on the CRT. It is the latest DTP system equipped with three major functions, namely typesetting for high-quality text editing, easy drawing/tracing, and high-speed image processing.

  16. A two-step A/D conversion and column self-calibration technique for low noise CMOS image sensors.

    PubMed

    Bae, Jaeyoung; Kim, Daeyun; Ham, Seokheon; Chae, Youngcheol; Song, Minkyu

    2014-07-04

    In this paper, a 120 frames per second (fps) low-noise CMOS Image Sensor (CIS) based on a Two-Step Single-Slope ADC (TS SS ADC) and a column self-calibration technique is proposed. The TS SS ADC is suitable for high-speed video systems because its conversion is more than 10 times faster than that of the conventional Single-Slope ADC (SS ADC). However, mismatch errors arise between the coarse block and the fine block due to the two-step operation of the TS SS ADC; in general, this makes it difficult to implement the TS SS ADC beyond a 10-bit resolution. To reduce such errors, a new 4-input comparator is discussed and a high-resolution TS SS ADC is proposed. Further, a feedback circuit that enables column self-calibration to reduce the Fixed Pattern Noise (FPN) is also described. The proposed chip has been fabricated with 0.13 μm Samsung CIS technology and supports VGA resolution. The pixel is based on the 4-TR Active Pixel Sensor (APS). The high frame rate of 120 fps is achieved at VGA resolution. The measured FPN is 0.38 LSB, and the measured dynamic range is about 64.6 dB.
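    The speed advantage of the two-step conversion can be illustrated with an idealized numeric model (a behavioral sketch, not the paper's circuit): a coarse ramp selects a voltage segment and a fine ramp resolves within it, so a 10-bit conversion needs roughly 2^5 + 2^5 ramp steps instead of 2^10:

```python
def two_step_ss_adc(vin, vref=1.0, bits=10, coarse_bits=5):
    """Idealized two-step single-slope conversion: a coarse ramp finds
    the segment, a fine ramp resolves within it. Comparison count drops
    from 2**bits to 2**coarse_bits + 2**(bits - coarse_bits)."""
    fine_bits = bits - coarse_bits
    lsb = vref / 2 ** bits
    coarse_lsb = vref / 2 ** coarse_bits
    # coarse phase: largest coarse step not exceeding vin
    coarse = min(int(vin / coarse_lsb), 2 ** coarse_bits - 1)
    # fine phase: ramp inside the selected coarse segment
    fine = min(int((vin - coarse * coarse_lsb) / lsb), 2 ** fine_bits - 1)
    return (coarse << fine_bits) | fine

# mid-scale input maps to mid-scale code, matching an ideal single-slope ADC
assert two_step_ss_adc(0.5) == 512
assert two_step_ss_adc(0.1) == int(0.1 * 1024)
```

    The mismatch problem the abstract describes corresponds to the coarse and fine ramps here not sharing exactly the same reference, which the proposed 4-input comparator and self-calibration address in hardware.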

  17. Highly-Integrated CMOS Interface Circuits for SiPM-Based PET Imaging Systems.

    PubMed

    Dey, Samrat; Lewellen, Thomas K; Miyaoka, Robert S; Rudell, Jacques C

    2012-01-01

    Recent developments in the area of Positron Emission Tomography (PET) detectors using Silicon Photomultipliers (SiPMs) have demonstrated the feasibility of higher resolution PET scanners due to a significant reduction in the detector form factor. The increased detector density requires a proportionally larger number of channels to interface the SiPM array with the backend digital signal processing necessary for eventual image reconstruction. This work presents a CMOS ASIC design for signal reducing readout electronics in support of an 8×8 silicon photomultiplier array. The row/column/diagonal summation circuit significantly reduces the number of required channels, reducing the cost of subsequent digitizing electronics. Current amplifiers are used with a single input from each SiPM cathode. This approach helps to reduce the detector loading, while generating all the necessary row, column and diagonal addressing information. In addition, the single current amplifier used in our Pulse-Positioning architecture facilitates the extraction of pulse timing information. Other components under design at present include a current-mode comparator which enables threshold detection for dark noise current reduction, a transimpedance amplifier and a variable output impedance I/O driver which adapts to a wide range of loading conditions between the ASIC and lines with the off-chip Analog-to-Digital Converters (ADCs).

  18. Highly-Integrated CMOS Interface Circuits for SiPM-Based PET Imaging Systems

    PubMed Central

    Dey, Samrat; Lewellen, Thomas K.; Miyaoka, Robert S.; Rudell, Jacques C.

    2013-01-01

    Recent developments in the area of Positron Emission Tomography (PET) detectors using Silicon Photomultipliers (SiPMs) have demonstrated the feasibility of higher resolution PET scanners due to a significant reduction in the detector form factor. The increased detector density requires a proportionally larger number of channels to interface the SiPM array with the backend digital signal processing necessary for eventual image reconstruction. This work presents a CMOS ASIC design for signal reducing readout electronics in support of an 8×8 silicon photomultiplier array. The row/column/diagonal summation circuit significantly reduces the number of required channels, reducing the cost of subsequent digitizing electronics. Current amplifiers are used with a single input from each SiPM cathode. This approach helps to reduce the detector loading, while generating all the necessary row, column and diagonal addressing information. In addition, the single current amplifier used in our Pulse-Positioning architecture facilitates the extraction of pulse timing information. Other components under design at present include a current-mode comparator which enables threshold detection for dark noise current reduction, a transimpedance amplifier and a variable output impedance I/O driver which adapts to a wide range of loading conditions between the ASIC and lines with the off-chip Analog-to-Digital Converters (ADCs). PMID:24301987
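    The channel-reduction idea behind the row/column summation readout can be sketched in a few lines: an 8×8 detector map is collapsed into 8 row sums plus 8 column sums (16 channels instead of 64) while still localizing an event. This is a purely numerical illustration of the addressing principle, not the ASIC's current-mode circuitry:

```python
import numpy as np

def row_col_readout(pixel_charges):
    """Reduce an 8x8 SiPM charge map to 8 row sums + 8 column sums,
    cutting the digitizer channel count from 64 to 16."""
    rows = pixel_charges.sum(axis=1)
    cols = pixel_charges.sum(axis=0)
    return rows, cols

charges = np.zeros((8, 8))
charges[2, 5] = 100.0                     # a single scintillation event
rows, cols = row_col_readout(charges)

# the event position is recovered from the two reduced channel sets
assert (int(np.argmax(rows)), int(np.argmax(cols))) == (2, 5)
```

    The diagonal sums mentioned in the abstract add redundancy for resolving multi-pixel light sharing; the timing pickoff comes from a separate summed fast channel.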

  19. Liouville master equation for multielectron dynamics: Neutralization of highly charged ions near a LiF surface

    NASA Astrophysics Data System (ADS)

    Wirtz, Ludger; Reinhold, Carlos O.; Lemell, Christoph; Burgdörfer, Joachim

    2003-01-01

    We present a simulation of the neutralization of highly charged ions in front of a lithium fluoride surface including the close-collision regime above the surface. The present approach employs a Monte Carlo solution of the Liouville master equation for the joint probability density of the ionic motion and the electronic population of the projectile and the target surface. It includes single as well as double particle-hole (de)excitation processes and incorporates electron correlation effects through the conditional dynamics of population strings. The input in terms of elementary one- and two-electron transfer rates is determined from classical trajectory Monte Carlo calculations as well as quantum-mechanical Auger calculations. For slow projectiles and normal incidence, the ionic motion depends sensitively on the interplay between image acceleration towards the surface and repulsion by an ensemble of positive hole charges in the surface (“trampoline effect”). For Ne10+ we find that image acceleration is dominant and no collective backscattering high above the surface takes place. For grazing incidence, our simulation delineates the pathways to complete neutralization. In accordance with recent experimental observations, most ions are reflected as neutral or even as singly charged negative particles, irrespective of the charge state of the incoming ions.

  20. An intelligent 1:2 demultiplexer as an intracellular theranostic device based on DNA/Ag cluster-gated nanovehicles

    NASA Astrophysics Data System (ADS)

    Ran, Xiang; Wang, Zhenzhen; Ju, Enguo; Pu, Fang; Song, Yanqiu; Ren, Jinsong; Qu, Xiaogang

    2018-02-01

    The logic device demultiplexer can convey a single input signal into one of multiple output channels. The choice of the output channel is controlled by a selector. Several molecules and biomolecules have been used to mimic the function of a demultiplexer; however, the practical application of such logic devices still remains a big challenge. Herein, we design and construct an intelligent 1:2 demultiplexer as a theranostic device based on azobenzene (azo)-modified and DNA/Ag cluster-gated nanovehicles. The configuration of the azo and the conformation of the DNA ensemble can be regulated by light irradiation and pH, respectively. The demultiplexer, which uses light as the input and acid as the selector, can emit red fluorescence or release a drug under different conditions. Depending on the cell type, the intelligent logic device can select the mode of cellular imaging in healthy cells or tumor therapy in tumor cells. The study integrates the logic gate with the theranostic device, paving the way for tangible applications of logic gates in the future.

  1. An intelligent 1:2 demultiplexer as an intracellular theranostic device based on DNA/Ag cluster-gated nanovehicles.

    PubMed

    Ran, Xiang; Wang, Zhenzhen; Ju, Enguo; Pu, Fang; Song, Yanqiu; Ren, Jinsong; Qu, Xiaogang

    2018-02-09

    The logic device demultiplexer can convey a single input signal into one of multiple output channels. The choice of the output channel is controlled by a selector. Several molecules and biomolecules have been used to mimic the function of a demultiplexer; however, the practical application of such logic devices still remains a big challenge. Herein, we design and construct an intelligent 1:2 demultiplexer as a theranostic device based on azobenzene (azo)-modified and DNA/Ag cluster-gated nanovehicles. The configuration of the azo and the conformation of the DNA ensemble can be regulated by light irradiation and pH, respectively. The demultiplexer, which uses light as the input and acid as the selector, can emit red fluorescence or release a drug under different conditions. Depending on the cell type, the intelligent logic device can select the mode of cellular imaging in healthy cells or tumor therapy in tumor cells. The study integrates the logic gate with the theranostic device, paving the way for tangible applications of logic gates in the future.
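    The 1:2 routing logic described above (light as the data input, acidity as the address selector) reduces to a small truth table; the output names below are illustrative labels for the paper's two responses:

```python
def demux_1to2(light_input, acidic):
    """Boolean model of the 1:2 demultiplexer: the single light input is
    routed to one of two outputs by the pH selector (acid = select bit)."""
    fluorescence = light_input and not acidic   # output O0: imaging mode
    drug_release = light_input and acidic       # output O1: therapy mode
    return fluorescence, drug_release

# neutral pH (healthy cell): light is routed to the fluorescence channel
assert demux_1to2(True, False) == (True, False)
# acidic pH (tumor cell): the same light input triggers drug release
assert demux_1to2(True, True) == (False, True)
# no input: neither output fires, whatever the selector
assert demux_1to2(False, True) == (False, False)
```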

  2. Global localization of 3D point clouds in building outline maps of urban outdoor environments.

    PubMed

    Landsiedel, Christian; Wollherr, Dirk

    2017-01-01

    This paper presents a method to localize a robot in a global coordinate frame based on a sparse 2D map containing building outlines and road network information, with no prior location information. Its input is a single 3D laser scan of the robot's surroundings. The approach extends generic chamfer matching, a template-matching technique from image processing, by including visibility analysis in the cost function. Thus, the observed building planes are matched to the expected view of the corresponding map section instead of to the entire map, which makes more accurate matching possible. Since this formulation operates on generic edge maps from visual sensors, the matching formulation can be expected to generalize to other input data, e.g., from monocular or stereo cameras. The method is evaluated on two large datasets collected in different real-world urban settings and compared to a baseline method from the literature and to the standard chamfer matching approach, where it shows considerable performance benefits, as well as the feasibility of global localization based on sparse building outline data.
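    Plain chamfer matching, the technique the paper extends, can be sketched in a few lines: precompute the distance transform of the map's edge pixels, then score each candidate template placement by the mean distance under its points. The toy map below is illustrative; the paper's contribution (visibility analysis in the cost) is not reproduced here:

```python
import numpy as np

def distance_transform(edges):
    """Brute-force Euclidean distance to the nearest edge pixel."""
    pts = np.argwhere(edges)
    h, w = edges.shape
    yy, xx = np.mgrid[0:h, 0:w]
    d = np.full((h, w), np.inf)
    for y, x in pts:
        d = np.minimum(d, np.hypot(yy - y, xx - x))
    return d

def chamfer_score(dist, template_pts, dy, dx):
    """Mean distance-transform value under the shifted template points
    (lower is better; 0 means a perfect edge alignment)."""
    return float(dist[template_pts[:, 0] + dy, template_pts[:, 1] + dx].mean())

# toy map with one L-shaped building outline; the template is the same
# outline expressed in its own coordinate frame (offset by (1, 1))
m = np.zeros((12, 12), bool)
m[3:7, 4] = True
m[3, 4:8] = True
dist = distance_transform(m)
tpl = np.argwhere(m) - [1, 1]

scores = {(dy, dx): chamfer_score(dist, tpl, dy, dx)
          for dy in range(3) for dx in range(3)}
assert min(scores, key=scores.get) == (1, 1)   # best offset realigns the template
```

    The paper's visibility-aware variant would restrict `tpl` to the outline segments actually observable from the candidate pose before scoring.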

  3. TMS of the occipital cortex induces tactile sensations in the fingers of blind Braille readers.

    PubMed

    Ptito, M; Fumal, A; de Noordhout, A Martens; Schoenen, J; Gjedde, A; Kupers, R

    2008-01-01

    Various non-visual inputs produce cross-modal responses in the visual cortex of early blind subjects. In order to determine the qualitative experience associated with these occipital activations, we systematically stimulated the entire occipital cortex using single pulse transcranial magnetic stimulation (TMS) in early blind subjects and in blindfolded seeing controls. Whereas blindfolded seeing controls reported only phosphenes following occipital cortex stimulation, some of the blind subjects reported tactile sensations in the fingers that were somatotopically organized onto the visual cortex. The number of cortical sites inducing tactile sensations appeared to be related to the number of hours of Braille reading per day, Braille reading speed and dexterity. These data, taken in conjunction with previous anatomical, behavioural and functional imaging results, suggest the presence of a polysynaptic cortical pathway between the somatosensory cortex and the visual cortex in early blind subjects. These results also add new evidence that the activity of the occipital lobe in the blind takes its qualitative expression from the character of its new input source, therefore supporting the cortical deference hypothesis.

  4. Effective Interpolation of Incomplete Satellite-Derived Leaf-Area Index Time Series for the Continental United States

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Borak, Jordan S.

    2008-01-01

    Many earth science modeling applications employ continuous input data fields derived from satellite data. Environmental factors, sensor limitations and algorithmic constraints lead to data products of inherently variable quality. This necessitates interpolation of one form or another in order to produce high quality input fields free of missing data. The present research tests several interpolation techniques as applied to satellite-derived leaf area index, an important quantity in many global climate and ecological models. The study evaluates and applies a variety of interpolation techniques for the Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf-Area Index Product over the time period 2001-2006 for a region containing the conterminous United States. Results indicate that the accuracy of an individual interpolation technique depends upon the underlying land cover. Spatial interpolation provides better results in forested areas, while temporal interpolation performs more effectively over non-forest cover types. Combination of spatial and temporal approaches offers superior interpolative capabilities to any single method, and in fact, generation of continuous data fields requires a hybrid approach such as this.
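    The spatial, temporal, and hybrid gap-filling strategies compared above can be sketched with two minimal interpolators; the 3×3 neighborhood and linear time interpolation are illustrative simplifications of whatever kernels the study actually used:

```python
import numpy as np

def temporal_fill(series):
    """Linearly interpolate missing (NaN) time steps for one pixel."""
    s = series.copy()
    idx = np.arange(s.size)
    good = ~np.isnan(s)
    s[~good] = np.interp(idx[~good], idx[good], s[good])
    return s

def spatial_fill(grid):
    """Replace each missing pixel with the mean of its valid 3x3 neighbours."""
    out = grid.copy()
    for y, x in zip(*np.where(np.isnan(grid))):
        nb = grid[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        out[y, x] = np.nanmean(nb)
    return out

# a hybrid scheme applies one pass first and the other to whatever remains,
# choosing the order per land-cover type as the study suggests
series = np.array([1.0, np.nan, 3.0, 4.0])
assert temporal_fill(series)[1] == 2.0        # midpoint of the two good steps

grid = np.array([[1.0, np.nan], [3.0, 5.0]])
assert spatial_fill(grid)[0, 1] == 3.0        # mean of the three valid neighbours
```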

  5. Development and application of computer assisted optimal method for treatment of femoral neck fracture.

    PubMed

    Wang, Monan; Zhang, Kai; Yang, Ning

    2018-04-09

    To help doctors choose a treatment on the basis of mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture oriented to clinical application. The system comprises three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module includes parametric modeling of the bone, the fracture face, and the fixation screws and their positions, as well as input and transmission of model parameters. The finite element mechanical analysis module includes mesh generation, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting, and batch processing. The post-processing module includes extraction and display of batch processing results, image generation for batch runs, execution of the optimization program, and display of the optimal result. The system implements the whole workflow, from the input of fracture parameters to the output of the optimal fixation plan according to the specific patient's fracture parameters and optimization rules, which demonstrates its effectiveness. Meanwhile, the system has a friendly interface and simple operation, and its functionality can be extended quickly by modifying a single module.

  6. Poster - Thur Eve - 05: Safety systems and failure modes and effects analysis for a magnetic resonance image guided radiation therapy system.

    PubMed

    Lamey, M; Carlone, M; Alasti, H; Bissonnette, J P; Borg, J; Breen, S; Coolens, C; Heaton, R; Islam, M; van Proojen, M; Sharpe, M; Stanescu, T; Jaffray, D

    2012-07-01

    An online Magnetic Resonance guided Radiation Therapy (MRgRT) system is under development. The system comprises an MRI scanner capable of travelling between, and into, HDR brachytherapy and external beam radiation therapy vaults. The system will provide online MR images immediately prior to radiation therapy; these images will be registered to a planning image and used for image guidance. To ensure system safety, we performed a failure modes and effects analysis. A process tree of the facility function was developed. Using the process tree and an initial design of the facility as guidelines, possible failure modes were identified, and root causes were identified for each. Severity, detectability, and occurrence scores were then assigned to each possible failure. Finally, suggestions were developed to reduce the possibility of an event. The process tree consists of nine main inputs, each with 5-10 sub-inputs; tertiary inputs were also defined. The process tree ensures that the overall safety of the system has been considered. Several possible failure modes were identified, relevant to the design, construction, commissioning, and operating phases of the facility. The utility of the analysis can be seen in that it has spawned projects prior to installation and has led to suggestions in the design of the facility. © 2012 American Association of Physicists in Medicine.

  7. Solid-state radar switchboard

    NASA Astrophysics Data System (ADS)

    Thiebaud, P.; Cross, D. C.

    1980-07-01

    A new solid-state radar switchboard equipped with 16 input ports which will output data to 16 displays is presented. Each of the ports will handle a single two-dimensional radar input, or three ports will accommodate a three-dimensional radar input. A video switch card of the switchboard is used to switch all signals, with the exception of the IFF-mode-control lines. Each card accepts inputs from up to 16 sources and can pass a signal with bandwidth greater than 20 MHz to the display assigned to that card. The synchro amplifier of current systems has been eliminated and in the new design each PPI receives radar data via a single coaxial cable. This significant reduction in cabling is achieved by adding a serial-to-parallel interface and a digital-to-synchro converter located at the PPI.

  8. High-order motor cortex in rats receives somatosensory inputs from the primary motor cortex via cortico-cortical pathways.

    PubMed

    Kunori, Nobuo; Takashima, Ichiro

    2016-12-01

    The motor cortex of rats contains two forelimb motor areas; the caudal forelimb area (CFA) and the rostral forelimb area (RFA). Although the RFA is thought to correspond to the premotor and/or supplementary motor cortices of primates, which are higher-order motor areas that receive somatosensory inputs, it is unknown whether the RFA of rats receives somatosensory inputs in the same manner. To investigate this issue, voltage-sensitive dye (VSD) imaging was used to assess the motor cortex in rats following a brief electrical stimulation of the forelimb. This procedure was followed by intracortical microstimulation (ICMS) mapping to identify the motor representations in the imaged cortex. The combined use of VSD imaging and ICMS revealed that both the CFA and RFA received excitatory synaptic inputs after forelimb stimulation. Further evaluation of the sensory input pathway to the RFA revealed that the forelimb-evoked RFA response was abolished either by the pharmacological inactivation of the CFA or a cortical transection between the CFA and RFA. These results suggest that forelimb-related sensory inputs would be transmitted to the RFA from the CFA via the cortico-cortical pathway. Thus, the present findings imply that sensory information processed in the RFA may be used for the generation of coordinated forelimb movements, which would be similar to the function of the higher-order motor cortex in primates. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  9. Image segmentation algorithm based on improved PCNN

    NASA Astrophysics Data System (ADS)

    Chen, Hong; Wu, Chengdong; Yu, Xiaosheng; Wu, Jiahui

    2017-11-01

    A modified simplified Pulse Coupled Neural Network (PCNN) model based on the simplified PCNN is proposed in this article. Some work has been done to enrich the model, such as imposing restrictions on the inputs and improving the linking inputs and internal activity of the PCNN. A self-adaptive method for setting the linking coefficient and the threshold decay time constant is also proposed. Finally, we implemented an image segmentation algorithm based on the proposed simplified PCNN model and PSO and tested it on five pictures. Experimental results demonstrate that this image segmentation algorithm performs much better than the SPCNN and Otsu methods.
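    The simplified-PCNN dynamics the abstract builds on can be sketched directly: each neuron's internal activity U = S·(1 + β·L) is compared against an exponentially decaying threshold, and the linking input L is the previous pulses of the 3×3 neighborhood. The parameter values below are illustrative, not the paper's self-adaptive settings:

```python
import numpy as np

def spcnn_segment(img, beta=0.2, v_theta=20.0, alpha=0.5, iters=8):
    """Minimal simplified PCNN: returns the iteration at which each
    pixel first fired, a coarse intensity-based segmentation label."""
    S = img.astype(float)
    h, w = S.shape
    Y = np.zeros((h, w))                      # pulse output
    theta = np.full((h, w), S.max())          # decaying threshold
    fire_iter = np.zeros((h, w), int)
    for n in range(1, iters + 1):
        P = np.pad(Y, 1)
        # linking input: sum of previous pulses in the 3x3 neighbourhood
        L = sum(P[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)) - Y
        U = S * (1 + beta * L)                # internal activity
        Y = (U > theta).astype(float)
        fire_iter[(Y > 0) & (fire_iter == 0)] = n
        theta = theta * np.exp(-alpha) + v_theta * Y
    return fire_iter

img = np.array([[200, 200, 50],
                [200, 200, 50],
                [50, 50, 50]])
labels = spcnn_segment(img)
assert labels[0, 0] < labels[2, 2]   # the bright region fires earlier
```

    Grouping pixels by first-firing iteration yields the segmentation; the paper's contribution is choosing β and the decay constant adaptively rather than fixing them as done here.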

  10. Multiclassifier fusion in human brain MR segmentation: modelling convergence.

    PubMed

    Heckemann, Rolf A; Hajnal, Joseph V; Aljabar, Paul; Rueckert, Daniel; Hammers, Alexander

    2006-01-01

    Segmentations of MR images of the human brain can be generated by propagating an existing atlas label volume to the target image. By fusing multiple propagated label volumes, the segmentation can be improved. We developed a model that predicts the improvement of labelling accuracy and precision based on the number of segmentations used as input. Using a cross-validation study on brain image data as well as numerical simulations, we verified the model. Fit parameters of this model are potential indicators of the quality of a given label propagation method or the consistency of the input segmentations used.
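    The label-fusion step whose convergence the paper models can be sketched as a per-voxel majority vote over the propagated atlas labels (one common fusion rule; the paper's model covers the accuracy gain as the number of inputs grows):

```python
import numpy as np

def vote_fuse(label_volumes):
    """Fuse several propagated label volumes by per-voxel majority vote."""
    stack = np.stack(label_volumes)
    # most frequent label along the classifier axis, voxel by voxel
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, stack)

# three 2x2 "segmentations" that disagree on individual voxels
seg1 = np.array([[1, 0], [1, 1]])
seg2 = np.array([[1, 1], [0, 1]])
seg3 = np.array([[1, 0], [1, 0]])
fused = vote_fuse([seg1, seg2, seg3])

# single-rater errors are outvoted
assert (fused == np.array([[1, 0], [1, 1]])).all()
```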

  11. MMX-I: data-processing software for multimodal X-ray imaging and tomography

    PubMed Central

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-01-01

    A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field, and any of their combinations, thus providing an easy-to-use data-processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline, even on a standard PC. To the authors’ knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments. PMID:27140159

  12. Diffraction-Induced Bidimensional Talbot Self-Imaging with Full Independent Period Control

    NASA Astrophysics Data System (ADS)

    Guillet de Chatellus, Hugues; Romero Cortés, Luis; Deville, Antonin; Seghilani, Mohamed; Azaña, José

    2017-03-01

    We predict, formulate, and observe experimentally a generalized version of the Talbot effect that allows one to create diffraction-induced self-images of a periodic two-dimensional (2D) waveform with arbitrary control of the image spatial periods. Through the proposed scheme, the periods of the output self-image are multiples of the input ones by any desired integer or fractional factor, and they can be controlled independently across each of the two wave dimensions. The concept involves conditioning the phase profile of the input periodic wave before free-space diffraction. The wave energy is fundamentally preserved through the self-imaging process, enabling, for instance, the possibility of the passive amplification of the periodic patterns in the wave by a purely diffractive effect, without the use of any active gain.

  13. Diffraction-Induced Bidimensional Talbot Self-Imaging with Full Independent Period Control.

    PubMed

    Guillet de Chatellus, Hugues; Romero Cortés, Luis; Deville, Antonin; Seghilani, Mohamed; Azaña, José

    2017-03-31

    We predict, formulate, and observe experimentally a generalized version of the Talbot effect that allows one to create diffraction-induced self-images of a periodic two-dimensional (2D) waveform with arbitrary control of the image spatial periods. Through the proposed scheme, the periods of the output self-image are multiples of the input ones by any desired integer or fractional factor, and they can be controlled independently across each of the two wave dimensions. The concept involves conditioning the phase profile of the input periodic wave before free-space diffraction. The wave energy is fundamentally preserved through the self-imaging process, enabling, for instance, the possibility of the passive amplification of the periodic patterns in the wave by a purely diffractive effect, without the use of any active gain.
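    For reference, the classical paraxial Talbot relations behind this result can be written out; this is textbook background, not the authors' generalized derivation:

```latex
% Integer Talbot self-imaging of a 1-D pattern of period p at wavelength \lambda:
z_T = \frac{2p^2}{\lambda}
% Fractional planes (s, m coprime integers) yield intensity patterns of
% reduced period:
z = \frac{s}{m}\, z_T \quad\Longrightarrow\quad \text{period } \frac{p}{m}
% For a separable 2-D pattern each dimension has its own Talbot length,
% z_{T,x} = 2 p_x^2/\lambda, \qquad z_{T,y} = 2 p_y^2/\lambda,
% so a single observation plane must sit at a rational fraction of both --
% which is why the scheme conditions the input phase profile to control the
% two output periods independently.
```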

  14. Texas two-step: a framework for optimal multi-input single-output deconvolution.

    PubMed

    Neelamani, Ramesh; Deffenbaugh, Max; Baraniuk, Richard G

    2007-11-01

    Multi-input single-output deconvolution (MISO-D) aims to extract a deblurred estimate of a target signal from several blurred and noisy observations. This paper develops a new two step framework--Texas Two-Step--to solve MISO-D problems with known blurs. Texas Two-Step first reduces the MISO-D problem to a related single-input single-output deconvolution (SISO-D) problem by invoking the concept of sufficient statistics (SSs) and then solves the simpler SISO-D problem using an appropriate technique. The two-step framework enables new MISO-D techniques (both optimal and suboptimal) based on the rich suite of existing SISO-D techniques. In fact, the properties of SSs imply that a MISO-D algorithm is mean-squared-error optimal if and only if it can be rearranged to conform to the Texas Two-Step framework. Using this insight, we construct new wavelet- and curvelet-based MISO-D algorithms with asymptotically optimal performance. Simulated and real data experiments verify that the framework is indeed effective.
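    The two-step structure can be sketched in the frequency domain: under white noise, the matched-filtered sum of the observations is a sufficient statistic, collapsing the MISO problem to a SISO one with effective blur G = Σ|Hᵢ|². The Wiener filter used in step two below is one illustrative SISO solver, not the paper's wavelet/curvelet estimators:

```python
import numpy as np

def miso_to_siso(observations, blurs):
    """Step 1: collapse observations y_i = h_i * x + n_i (white noise)
    into one SISO observation via the sufficient statistic
    Z = sum_i conj(H_i) Y_i, with effective blur G = sum_i |H_i|^2."""
    n = len(observations[0])
    Hs = [np.fft.fft(h, n=n) for h in blurs]
    Ys = [np.fft.fft(y) for y in observations]
    Z = sum(np.conj(H) * Y for H, Y in zip(Hs, Ys))
    G = sum(np.abs(H) ** 2 for H in Hs)
    return Z, G

def wiener_siso(Z, G, noise_power=1e-3):
    """Step 2: solve the reduced SISO problem with a Wiener-type filter."""
    return np.real(np.fft.ifft(Z / (G + noise_power)))

# noiseless sanity check: two differently blurred copies are jointly inverted
x = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
h1 = np.array([0.5, 0.5])                 # h1 has a spectral null at Nyquist
h2 = np.array([1.0, 0.0, -0.3])           # h2 covers that null
y1 = np.real(np.fft.ifft(np.fft.fft(h1, 8) * np.fft.fft(x)))
y2 = np.real(np.fft.ifft(np.fft.fft(h2, 8) * np.fft.fft(x)))
Z, G = miso_to_siso([y1, y2], [h1, h2])
x_hat = wiener_siso(Z, G, noise_power=1e-9)
assert np.allclose(x_hat, x, atol=1e-3)
```

    The example also illustrates why MISO helps: h1 alone is not invertible at its spectral null, but the summed |H|² stays bounded away from zero.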

  15. OCR enhancement through neighbor embedding and fast approximate nearest neighbors

    NASA Astrophysics Data System (ADS)

    Smith, D. C.

    2012-10-01

    Generic optical character recognition (OCR) engines often perform very poorly in transcribing scanned low resolution (LR) text documents. To improve OCR performance, we apply the Neighbor Embedding (NE) single-image super-resolution (SISR) technique to LR scanned text documents to obtain high resolution (HR) versions, which we subsequently process with OCR. For comparison, we repeat this procedure using bicubic interpolation (BI). We demonstrate that mean-square errors (MSE) in NE HR estimates do not increase substantially when NE is trained in one Latin font style and tested in another, provided both styles belong to the same font category (serif or sans serif). This is very important in practice, since for each font size, the number of training sets required for each category may be reduced from dozens to just one. We also incorporate randomized k-d trees into our NE implementation to perform approximate nearest neighbor search, and obtain a 1000x speed-up of our original NE implementation, with negligible MSE degradation. This acceleration also made it practical to combine all of our size-specific NE Latin models into a single Universal Latin Model (ULM). The ULM eliminates the need to determine the unknown font category and size of an input LR text document and match it to an appropriate model, a very challenging task, since the dpi (dots per inch) of the input LR image is generally unknown. Our experiments show that OCR character error rates (CER) were over 90% when we applied the Tesseract OCR engine to LR text documents (scanned at 75 dpi and 100 dpi) in the 6-10 pt range. By contrast, using k-d trees and the ULM, CER after NE preprocessing averaged less than 7% at 3x (100 dpi LR scanning) and 4x (75 dpi LR scanning) magnification, over an order of magnitude improvement. Moreover, CER after NE preprocessing was more than 6 times lower on average than after BI preprocessing.
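    The core Neighbor Embedding step can be sketched for a single patch: find the k nearest LR training patches, solve for sum-to-one reconstruction weights via the local Gram matrix, and apply the same weights to the paired HR patches. Brute-force search stands in for the randomized k-d trees here, and the patch sizes and regularizer are illustrative:

```python
import numpy as np

def ne_reconstruct(q, lr_train, hr_train, k=3, reg=1e-6):
    """Neighbor-Embedding SR for one LR patch q: locally linear
    reconstruction weights over the k nearest LR training patches are
    transferred to their HR counterparts."""
    d = np.linalg.norm(lr_train - q, axis=1)
    nn = np.argsort(d)[:k]                    # brute-force NN search
    Z = lr_train[nn] - q                      # neighbours centred on q
    G = Z @ Z.T + reg * np.eye(k)             # regularized local Gram matrix
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()                              # sum-to-one constraint
    return w @ hr_train[nn]

rng = np.random.default_rng(1)
lr_train = rng.random((50, 9))                # 3x3 LR patches, flattened
hr_train = rng.random((50, 36))               # paired 6x6 HR patches
# sanity check: a patch seen in training maps back to its own HR pair
hr_patch = ne_reconstruct(lr_train[7], lr_train, hr_train, k=1)
assert np.allclose(hr_patch, hr_train[7])
```

    Swapping the `argsort` line for an approximate k-d tree query is what yields the reported 1000x speed-up, since the search dominates the per-patch cost.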

  16. Combining convolutional neural networks and Hough Transform for classification of images containing lines

    NASA Astrophysics Data System (ADS)

    Sheshkus, Alexander; Limonova, Elena; Nikolaev, Dmitry; Krivtsov, Valeriy

    2017-03-01

    In this paper, we propose an expansion of the input features of a convolutional neural network (CNN) based on the Hough Transform. We perform morphological contrasting of the source image followed by the Hough Transform, and then use the result as input for some of the convolutional filters. Thus, the CNN's computational complexity and number of units are not affected; the morphological contrasting and the Hough Transform are the only additional computational expense of the introduced input-feature expansion. The proposed approach is demonstrated on a CNN with a very simple structure. We considered two image recognition problems: object classification on CIFAR-10 and printed character recognition on a private dataset of symbols taken from Russian passports. Our approach achieved a noticeable accuracy improvement without much computational effort, which can be extremely important in industrial recognition systems or in difficult problems utilising CNNs, such as pressure ridge analysis and classification.
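    The Hough-accumulator feature that gets fed to the extra convolutional filters can be sketched with a tiny line transform: edge pixels vote in (ρ, θ) space, and images containing lines produce sharp peaks there. The angular resolution below is deliberately coarse and illustrative:

```python
import numpy as np

def hough_lines(edges, n_theta=4):
    """Tiny Hough line transform: accumulate edge-pixel votes in
    (rho, theta) space. The accumulator can be stacked alongside the
    image as an additional CNN input feature."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta))
    ys, xs = np.nonzero(edges)
    for t, theta in enumerate(thetas):
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
        for r in rhos:
            acc[r + diag, t] += 1             # one vote per edge pixel
    return acc, thetas

img = np.zeros((8, 8))
img[3, :] = 1                                 # one horizontal line
acc, thetas = hough_lines(img)
r, t = np.unravel_index(np.argmax(acc), acc.shape)
assert np.isclose(thetas[t], np.pi / 2)       # strongest peak at theta = 90 deg
```

    In the paper's pipeline the image is morphologically contrasted first, and the accumulator is consumed by convolutional filters rather than inspected for peaks.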

  17. Tapered waveguides for guided wave optics.

    PubMed

    Campbell, J C

    1979-03-15

    Strip waveguides having half-paraboloid shaped tapers that permit efficient fiber to waveguide coupling have been fabricated by Ag ion exchange in soda-lime glass. A reduction in the input coupling loss has been accomplished by tailoring the diffusion to provide a gradual transition from a single-mode waveguide to a multimode waveguide having cross-sectional dimensions comparable to the core diameter of a single-mode fiber. Waveguides without tapers exhibit an attenuation of 1.0 dB/cm and an input coupling loss of 0.6 dB. The additional loss introduced by the tapered region is 0.5 dB. By way of contrast, an input coupling loss of 2.4 dB is obtained by coupling directly to a single-mode waveguide, indicating a net improvement of 1.3 dB for the tapered waveguides.

  18. Deformable Image Registration based on Similarity-Steered CNN Regression.

    PubMed

    Cao, Xiaohuan; Yang, Jianhua; Zhang, Jun; Nie, Dong; Kim, Min-Jeong; Wang, Qian; Shen, Dinggang

    2017-09-01

    Existing deformable registration methods require exhaustively iterative optimization, along with careful parameter tuning, to estimate the deformation field between images. Although some learning-based methods have been proposed for initiating deformation estimation, they are often template-specific and not flexible in practical use. In this paper, we propose a convolutional neural network (CNN) based regression model to directly learn the complex mapping from the input image pair (i.e., a pair of template and subject) to their corresponding deformation field. Specifically, our CNN architecture is designed in a patch-based manner to learn the complex mapping from the input patch pairs to their respective deformation field. First, the equalized active-points guided sampling strategy is introduced to facilitate accurate CNN model learning upon a limited image dataset. Then, the similarity-steered CNN architecture is designed, where we propose to add the auxiliary contextual cue, i.e., the similarity between input patches, to more directly guide the learning process. Experiments on different brain image datasets demonstrate promising registration performance based on our CNN model. Furthermore, it is found that the trained CNN model from one dataset can be successfully transferred to another dataset, although brain appearances across datasets are quite variable.

  19. Semantic Image Segmentation with Contextual Hierarchical Models.

    PubMed

    Seyedhosseini, Mojtaba; Tasdizen, Tolga

    2016-05-01

    Semantic segmentation is the problem of assigning an object label to each pixel. It unifies the image segmentation and object recognition problems. The importance of using contextual information in semantic segmentation frameworks has been widely realized in the field. We propose a contextual framework, called the contextual hierarchical model (CHM), which learns contextual information in a hierarchical framework for semantic segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at the original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. The contextual hierarchical model is purely based on the input image patches and does not make use of any fragments or shape examples. Hence, it is applicable to a variety of problems such as object segmentation and edge detection. We demonstrate that CHM performs on par with the state of the art on the Stanford background and Weizmann horse datasets. It also outperforms state-of-the-art edge detection methods on the NYU depth dataset and achieves state-of-the-art performance on the Berkeley segmentation dataset (BSDS 500).

  20. Versatile tunable current-mode universal biquadratic filter using MO-DVCCs and MOSFET-based electronic resistors.

    PubMed

    Chen, Hua-Pin

    2014-01-01

    This paper presents a versatile tunable current-mode universal biquadratic filter with four inputs and three outputs employing only two multioutput differential voltage current conveyors (MO-DVCCs), two grounded capacitors, and a well-known method for replacing three grounded resistors with MOSFET-based electronic resistors. The proposed configuration exhibits high output impedance, which is important for easy cascading in current-mode operation. The proposed circuit can be used as either a two-input three-output circuit or a three-input single-output circuit. In the two-input three-output configuration, the bandpass, highpass, and bandreject filtering responses can be realized simultaneously, while the allpass filtering response can be easily obtained by connecting the appropriate output currents directly, without additional stages. In the three-input single-output configuration, all five generic filtering functions can be easily realized by applying different combinations of the three input current signals. The filter permits orthogonal control of the quality factor and resonance angular frequency, and no inverting-type input current signals are required. All the passive and active sensitivities are low. Post-layout simulations were carried out to verify the functionality of the design.
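
    The five generic responses the filter realizes are the standard second-order forms; the sketch below evaluates the textbook lowpass/bandpass/highpass magnitudes with orthogonally tunable w0 and Q. These are generic formulas for illustration, not a model of the MO-DVCC circuit itself.

```python
def biquad_responses(w, w0=1.0, Q=2.0):
    # Textbook second-order lowpass/bandpass/highpass magnitude
    # responses; w0 and Q are the orthogonally tunable resonance
    # frequency and quality factor. Generic forms only -- not a
    # model of the MO-DVCC circuit described in the abstract.
    s = 1j * w
    d = s * s + (w0 / Q) * s + w0 * w0       # common denominator
    return abs(w0 * w0 / d), abs((w0 / Q) * s / d), abs(s * s / d)

lp0, bp0, hp0 = biquad_responses(1.0)        # evaluated at w = w0
lp_dc, bp_dc, hp_dc = biquad_responses(1e-6) # evaluated near DC
```

    At w = w0 the bandpass gain is exactly 1 while the lowpass and highpass gains equal Q, which is the usual peaking behaviour of a second-order section.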

  1. Versatile Tunable Current-Mode Universal Biquadratic Filter Using MO-DVCCs and MOSFET-Based Electronic Resistors

    PubMed Central

    2014-01-01

    This paper presents a versatile tunable current-mode universal biquadratic filter with four inputs and three outputs employing only two multioutput differential voltage current conveyors (MO-DVCCs), two grounded capacitors, and a well-known method for replacing three grounded resistors with MOSFET-based electronic resistors. The proposed configuration exhibits high output impedance, which is important for easy cascading in current-mode operation. The proposed circuit can be used as either a two-input three-output circuit or a three-input single-output circuit. In the two-input three-output configuration, the bandpass, highpass, and bandreject filtering responses can be realized simultaneously, while the allpass filtering response can be easily obtained by connecting the appropriate output currents directly, without additional stages. In the three-input single-output configuration, all five generic filtering functions can be easily realized by applying different combinations of the three input current signals. The filter permits orthogonal control of the quality factor and resonance angular frequency, and no inverting-type input current signals are required. All the passive and active sensitivities are low. Post-layout simulations were carried out to verify the functionality of the design. PMID:24982963

  2. Versatile current-mode universal biquadratic filter using DO-CCIIs

    NASA Astrophysics Data System (ADS)

    Chen, Hua-Pin

    2013-07-01

    In this article, a new three-input three-output versatile current-mode universal biquadratic filter is proposed. The circuit employs three dual-output current conveyors (DO-CCIIs) as active elements together with three grounded resistors and two grounded capacitors. The proposed configuration exhibits low input impedance and high output impedance, which is important for easy cascading in current-mode operation. It can be used as either a single-input three-output or a three-input two-output circuit. In the single-input three-output configuration, the lowpass, bandpass and bandreject responses can be realised simultaneously, while the highpass filtering response can be easily obtained by connecting the appropriate output currents directly, without additional stages. In the three-input two-output configuration, all five generic filtering functions can be easily realised by applying different combinations of the three input current signals. The filter permits orthogonal control of the quality factor and resonance angular frequency, and no component matching conditions or inverting-type input current signals are imposed. All the passive and active sensitivities are low. HSPICE simulation results, based on the TSMC 0.18 µm 1P6M CMOS process technology with ±0.9 V supply voltages, verify the theoretical analysis.

  3. Effects of frame rate and image resolution on pulse rate measured using multiple camera imaging photoplethysmography

    NASA Astrophysics Data System (ADS)

    Blackford, Ethan B.; Estepp, Justin R.

    2015-03-01

    Non-contact imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement, including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six, five-minute, controlled head motion artifact trials in front of a black and a dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates of 60 and 30 frames per second (reduced from 120) and reduced image resolution of 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple-imager array, and the results align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering requirements for systems measuring pulse rate over time windows of sufficient length.
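
    The frame-rate insensitivity reported above is easy to reproduce with a toy estimator: an FFT-peak pulse-rate measurement on a synthetic noisy pulse signal gives the same rate at 120, 60 and 30 samples per second. This is a generic windowed estimator, not the authors' blind-source-separation pipeline; all parameters are illustrative.

```python
import numpy as np

def pulse_rate_bpm(signal, fs):
    # Dominant non-DC frequency via an FFT peak -- a common windowed
    # pulse-rate estimator, used here only to illustrate frame-rate
    # (in)sensitivity; not the authors' processing pipeline.
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[1 + np.argmax(spec[1:])] * 60.0   # skip the DC bin

rng = np.random.default_rng(42)
duration, f_heart = 30.0, 1.2              # 30 s window, 72 bpm pulse
rates = {}
for fs in (120, 60, 30):                   # full and reduced frame rates
    t = np.arange(0, duration, 1.0 / fs)
    sig = np.sin(2 * np.pi * f_heart * t) + 0.1 * rng.standard_normal(t.size)
    rates[fs] = pulse_rate_bpm(sig, fs)
```

    All three sampling rates recover 72 bpm because the cardiac band sits far below even the lowest Nyquist frequency; what matters for frequency resolution is the window length, not the frame rate.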

  4. Imaged document information location and extraction using an optical correlator

    NASA Astrophysics Data System (ADS)

    Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.

    1999-12-01

    Today, the paper document is fast becoming a thing of the past. With the rapid development of fast, inexpensive computing and storage devices, many government and private organizations are archiving their documents in electronic form (e.g., personnel records, medical records, patents, etc.). Many of these organizations are converting their paper archives to electronic images, which are then stored in a computer database. Because of this, there is a need to efficiently organize this data into comprehensive and accessible information resources and provide for rapid access to the information contained within these imaged documents. To meet this need, Litton PRC and Litton Data Systems Division are developing a system, the Imaged Document Optical Correlation and Conversion System (IDOCCS), to provide a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides a means for the search and retrieval of information from imaged documents. IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives and has the potential to determine the types of languages contained within a document. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited, e.g., imaged documents containing an agency's seal or logo can be singled out. In this paper, we present a description of IDOCCS as well as preliminary performance results and theoretical projections.
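
    The optical correlator's core operation is cross-correlation of a page image with a template; its digital analogue can be sketched with FFTs, where the correlation peak marks the best match. A minimal sketch on synthetic data; the page, the "word" stamp, and its location are all made up for illustration.

```python
import numpy as np

def correlate_fft(image, template):
    # FFT-based circular cross-correlation, the digital analogue of
    # the optical correlator: the peak location marks the position
    # where the template matches the page best.
    tpl = np.zeros_like(image)
    tpl[:template.shape[0], :template.shape[1]] = template
    corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(tpl))).real
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(0)
page = rng.random((64, 64)) * 0.1          # faint "page" background
word = np.ones((5, 9))                     # a bright "key word" stamp
page[20:25, 30:39] += word                 # place it at row 20, col 30
loc = correlate_fft(page, word)
```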

  5. Training system for digital mammographic diagnoses of breast cancer

    NASA Astrophysics Data System (ADS)

    Thomaz, R. L.; Nirschl Crozara, M. G.; Patrocinio, A. C.

    2013-03-01

    As technology evolves, analog mammography systems are being replaced by digital systems. The digital system uses video monitors to display mammographic images instead of the screen-film and negatoscope previously used for analog images. This change in the way mammographic images are visualized may require a different approach to training health care professionals in diagnosing breast cancer with digital mammography. Thus, this paper presents a computational approach to training health care professionals that provides a smooth transition between analog and digital technology and also trains them to use the advantages of digital image processing tools to diagnose breast cancer. This computational approach consists of software in which it is possible to open, process and diagnose a full mammogram case from a database holding the digital images of each of the mammographic views. The software communicates with a gold-standard digital mammogram case database. This database contains the digital images in Tagged Image File Format (TIFF) and the respective diagnoses according to BI-RADS™; these files are read by the software and shown to the user as needed. There are also digital image processing tools that can be used to provide better visualization of each single image. The software was built on a minimalist, user-friendly interface concept intended to help in the smooth transition. It also has an interface for inputting diagnoses from the professional being trained, providing result feedback. The system has been completed but has not yet been applied to professional training.

  6. Interferometric Quantum-Nondemolition Single-Photon Detectors

    NASA Technical Reports Server (NTRS)

    Kok, Peter; Lee, Hwang; Dowling, Jonathan

    2007-01-01

    Two interferometric quantum-nondemolition (QND) devices have been proposed: (1) a polarization-independent device and (2) a polarization-preserving device. The polarization-independent device works on an input state of up to two photons, whereas the polarization-preserving device works on a superposition of vacuum and single-photon states. The overall function of either device would be to probabilistically generate a unique detector output only when its input electromagnetic mode was populated by a single photon, in which case its output mode would also be populated by a single photon. Like other QND devices, the proposed devices are potentially useful for a variety of applications, including such areas of NASA interest as quantum computing, quantum communication, and detection of gravity waves, as well as pedagogical demonstrations of the quantum nature of light. Many protocols in quantum computation and quantum communication require the possibility of detecting a photon without destroying it. The only prior single-photon-detecting QND device is based on quantum electrodynamics in a resonant cavity and, as such, depends on the photon frequency. Moreover, the prior device can distinguish only between one photon and no photon. The proposed interferometric QND devices would not depend on frequency and could distinguish between (a) one photon and (b) zero or two photons. The first proposed device is depicted schematically in Figure 1. Its input electromagnetic mode would be a superposition of a zero-, a one-, and a two-photon quantum state.

  7. Data-driven optimal binning for respiratory motion management in PET.

    PubMed

    Kesner, Adam L; Meier, Joseph G; Burckhardt, Darrell D; Schwartz, Jazmin; Lynch, David A

    2018-01-01

    Respiratory gating has been used in PET imaging to reduce the amount of image blurring caused by patient motion. Optimal binning is an approach for using the motion-characterized data by binning it into a single, easy to understand/use, optimal bin. To date, optimal binning protocols have utilized externally driven motion characterization strategies that have been tuned with population-derived assumptions and parameters. In this work, we propose a new strategy to characterize motion directly from a patient's gated scan and use that signal to create a patient/instance-specific optimal bin image. Two hundred and nineteen phase-gated FDG PET scans, acquired using data-driven gating as described previously, were used as the input for this study. For each scan, a phase-amplitude motion characterization was generated and normalized using principal component analysis. A patient-specific "optimal bin" window was derived using this characterization, via methods that mirror traditional optimal window binning strategies. The resulting optimal bin images were validated by correlating quantitative and qualitative measurements in the population of PET scans. In 53% (n = 115) of the image population, the optimal bin was determined to include 100% of the image statistics. In the remaining images, the optimal binning windows averaged 60% of the statistics and ranged between 20% and 90%. Tuning the algorithm through a single acceptance-window parameter allowed its performance in the population to be adjusted toward conservation of motion or reduced noise, enabling users to incorporate their own definition of optimal. In the population of images that were deemed appropriate for segregation, the average lesion SUVmax was 7.9, 8.5, and 9.0 for the nongated, optimal bin, and gated images, respectively. The Pearson correlation of FWHM measurements between optimal bin images and gated images was higher than that with nongated images (0.89 vs. 0.85). Generally, optimal bin images had better resolution than the nongated images and better noise characteristics than the gated images. We extended the concept of optimal binning to a data-driven form, updating a traditionally one-size-fits-all approach to a conformal one that supports adaptive imaging. This automated strategy was implemented easily within a large population and encapsulates motion information in an easy-to-use 3D image. Its simplicity and practicality may make this or similar approaches ideal for use in clinical settings. © 2017 American Association of Physicists in Medicine.
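
    The core of an optimal-binning strategy is selecting the amplitude window that keeps the most counts for the least residual motion. A toy version: the narrowest amplitude window that still holds a target fraction of the events. This captures only the spirit of an "optimal bin"; the paper's patient-specific algorithm is more involved, and the sample amplitudes below are invented.

```python
def optimal_window(samples, fraction=0.6):
    # Narrowest amplitude window that still holds at least `fraction`
    # of the events -- the basic trade-off behind optimal binning
    # (more counts vs. less residual motion). Illustrative sketch only.
    xs = sorted(samples)
    k = max(1, round(fraction * len(xs)))   # events the window must hold
    best = min(range(len(xs) - k + 1), key=lambda i: xs[i + k - 1] - xs[i])
    return xs[best], xs[best + k - 1]

# Respiratory amplitudes clustered near end-expiration, plus a few
# deep breaths (made-up values).
amps = [0.1, 0.12, 0.13, 0.15, 0.16, 0.18, 0.2, 0.7, 0.8, 0.9]
lo, hi = optimal_window(amps, fraction=0.6)
```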

  8. Surprise! Infants Consider Possible Bases of Generalization for a Single Input Example

    ERIC Educational Resources Information Center

    Gerken, LouAnn; Dawson, Colin; Chatila, Razanne; Tenenbaum, Josh

    2015-01-01

    Infants have been shown to generalize from a small number of input examples. However, existing studies allow two possible means of generalization. One is via a process of noting similarities shared by several examples. Alternatively, generalization may reflect an implicit desire to explain the input. The latter view suggests that generalization…

  9. Adaptive Neural Network-Based Event-Triggered Control of Single-Input Single-Output Nonlinear Discrete-Time Systems.

    PubMed

    Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani

    2016-01-01

    This paper presents a novel adaptive neural network (NN) control of single-input and single-output uncertain nonlinear discrete-time systems under event sampled NN inputs. In this control scheme, the feedback signals are transmitted, and the NN weights are tuned in an aperiodic manner at the event sampled instants. After reviewing the NN approximation property with event sampled inputs, an adaptive state estimator (SE), consisting of linearly parameterized NNs, is utilized to approximate the unknown system dynamics in an event sampled context. The SE is viewed as a model and its approximated dynamics and the state vector, during any two events, are utilized for the event-triggered controller design. An adaptive event-trigger condition is derived by using both the estimated NN weights and a dead-zone operator to determine the event sampling instants. This condition both facilitates the NN approximation and reduces the transmission of feedback signals. The ultimate boundedness of both the NN weight estimation error and the system state vector is demonstrated through the Lyapunov approach. As expected, during an initial online learning phase, events are observed more frequently. Over time with the convergence of the NN weights, the inter-event times increase, thereby lowering the number of triggered events. These claims are illustrated through the simulation results.
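
    The event-sampling idea above can be illustrated with a scalar toy system: feedback is retransmitted only when the state drifts more than a threshold from the last transmitted value, so far fewer transmissions occur than time steps while the state still settles. This is a generic dead-zone trigger sketch with made-up gains, not the paper's NN-based trigger condition.

```python
def simulate_event_triggered(steps=200, threshold=0.05):
    # Scalar plant x+ = 0.9*x + u with held (zero-order) feedback:
    # the state is retransmitted only when it drifts more than
    # `threshold` from the last transmitted value. Generic dead-zone
    # event trigger, not the paper's adaptive NN-based condition.
    x = x_sent = 1.0
    events = 0
    for _ in range(steps):
        if abs(x - x_sent) > threshold:   # event: send fresh feedback
            x_sent = x
            events += 1
        u = -0.5 * x_sent                 # controller uses the held value
        x = 0.9 * x + u
    return events, x

events, x_final = simulate_event_triggered()
```

    As in the abstract, events are frequent while the state is large and become sparser as it settles into a small neighbourhood of the origin set by the threshold.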

  10. Neural-adaptive control of single-master-multiple-slaves teleoperation for coordinated multiple mobile manipulators with time-varying communication delays and input uncertainties.

    PubMed

    Li, Zhijun; Su, Chun-Yi

    2013-09-01

    In this paper, adaptive neural network control is investigated for single-master-multiple-slaves teleoperation in consideration of time delays and input dead-zone uncertainties for multiple mobile manipulators carrying a common object in a cooperative manner. Firstly, concise dynamics of teleoperation systems consisting of a single master robot, multiple coordinated slave robots, and the object are developed in the task space. To handle asymmetric time-varying delays in communication channels and unknown asymmetric input dead zones, the nonlinear dynamics of the teleoperation system are transformed into two subsystems through feedback linearization: local master or slave dynamics including the unknown input dead zones and delayed dynamics for the purpose of synchronization. Then, a model reference neural network control strategy based on linear matrix inequalities (LMI) and adaptive techniques is proposed. The developed control approach ensures that the defined tracking errors converge to zero whereas the coordination internal force errors remain bounded and can be made arbitrarily small. Throughout this paper, stability analysis is performed via explicit Lyapunov techniques under specific LMI conditions. The proposed adaptive neural network control scheme is robust against motion disturbances, parametric uncertainties, time-varying delays, and input dead zones, which is validated by simulation studies.

  11. Chemical sensors are hybrid-input memristors

    NASA Astrophysics Data System (ADS)

    Sysoev, V. I.; Arkhipov, V. E.; Okotrub, A. V.; Pershin, Y. V.

    2018-04-01

    Memristors are two-terminal electronic devices whose resistance depends on the history of the input signal (voltage or current). Here we demonstrate that chemical gas sensors can be considered memristors with a generalized (hybrid) input, namely, an input consisting of the voltage, analyte concentrations and applied temperature. The concept of hybrid-input memristors is demonstrated experimentally using a single-walled carbon nanotube chemical sensor. It is shown that, with respect to the hybrid input, the sensor exhibits features in common with memristors, such as hysteretic input-output characteristics. This different perspective on chemical gas sensors may open new possibilities for smart sensor applications.
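
    The hysteretic input-output characteristic referred to above is easy to reproduce with the classic HP-style linear-drift memristor model: under a sinusoidal voltage, the same voltage on the rising and falling half-cycles gives different currents. This toy model is only an illustration of memristive hysteresis, not the hybrid-input sensor model of the paper; all constants are invented.

```python
import math

def memristor_iv(steps=1000, period=1.0):
    # Toy HP-style memristor: resistance interpolates between Ron and
    # Roff as the internal state x (0..1) drifts with the charge
    # passed. Illustrative constants -- not the gas-sensor model.
    Ron, Roff, k = 1.0, 10.0, 2.0
    x, dt = 0.5, period / steps
    vs, currents = [], []
    for n in range(steps):
        v = math.sin(2 * math.pi * n * dt / period)
        R = Ron * x + Roff * (1 - x)
        i = v / R
        x = min(1.0, max(0.0, x + k * i * dt))  # state drifts with charge
        vs.append(v)
        currents.append(i)
    return vs, currents

vs, currents = memristor_iv()
# Nearly the same voltage (~0.5) on the rising and falling parts of the
# cycle, but different currents: the pinched hysteresis loop.
i_rise, i_fall = currents[83], currents[417]
```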

  12. Analysis of spectral light guidance in specialty fibers

    NASA Astrophysics Data System (ADS)

    Zimmer, Arne W.; Raithel, Philipp; Belz, Mathias; Klein, Karl-Friedrich

    2016-04-01

    A novel experimental set-up for measuring the spectral dependency of light guidance in specialty non-active multimode fibers will be introduced. Light coupling into the test fiber is realized and controlled with a micro-structured single-mode (SM) fiber and an imaging system based on a microscope objective. The far- and near-field profiles of the SM fiber will be shown. The inverse far-field method is modified and improved by using three wavelengths simultaneously under the same input conditions; the coupling conditions into the test fiber and the far- and near-fields at the fiber output are observed with cameras. The numerical aperture (NA) and mode conversion, or focal-ratio degradation (FRD), are measured at three wavelengths in the VIS region. For the analysis, the patterns are captured at varying exposure times to increase the dynamic range and finally analyzed using image processing methods. Characteristic parameters of circular and non-circular MM fibers, such as skew-mode propagation and ray conversion, will be discussed, taking the surface roughness into account.

  13. Automated grain extraction and classification by combining improved region growing segmentation and shape descriptors in electromagnetic mill classification system

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian

    2018-04-01

    In this paper, an automatic method of grain detection and classification is presented. As input, it uses a single digital image of milled copper ore obtained with a high-quality digital camera. Grinding is an extremely energy- and cost-consuming process, so granularity evaluation should be performed with high efficiency and low time consumption. The method proposed in this paper is based on three-stage image processing. First, all grains are detected using Seeded Region Growing (SRG) segmentation with a proposed adaptive thresholding based on the calculation of the Relative Standard Deviation (RSD). In the next step, the detection results are improved using information about the shape of the detected grains via a distance map. Finally, each grain in the sample is classified into one of the predefined granularity classes. The quality of the proposed method has been evaluated using samples of nominal granularity, as well as by comparison to other methods.
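
    The first stage above builds on seeded region growing; the basic form of that algorithm fits in a few lines, where a neighbour joins the region while its intensity stays within a tolerance of the seed value. This sketch uses a fixed tolerance on a tiny made-up "grain" image; the paper's contribution is the RSD-based adaptive thresholding layered on top.

```python
from collections import deque

def region_grow(img, seed, tol):
    # 4-connected seeded region growing: a neighbour joins the region
    # while its intensity stays within `tol` of the seed value. Basic
    # SRG only; the paper adds RSD-based adaptive thresholding.
    H, W = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    grown, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < H and 0 <= nc < W and (nr, nc) not in grown
                    and abs(img[nr][nc] - seed_val) <= tol):
                grown.add((nr, nc))
                queue.append((nr, nc))
    return grown

# Tiny made-up image: one bright "grain" in the top-left corner.
grain = [[9, 9, 1, 1],
         [9, 9, 1, 1],
         [1, 1, 1, 9],
         [1, 1, 9, 9]]
region = region_grow(grain, (0, 0), tol=2)
```

    The bright pixels in the lower-right are not collected because they are not 4-connected to the seed, which is exactly how SRG separates touching but distinct grains.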

  14. Learning target masks in infrared linescan imagery

    NASA Astrophysics Data System (ADS)

    Fechner, Thomas; Rockinger, Oliver; Vogler, Axel; Knappe, Peter

    1997-04-01

    In this paper we propose a neural network based method for the automatic detection of ground targets in airborne infrared linescan imagery. Instead of using a dedicated feature extraction stage followed by a classification procedure, we propose the following three-step scheme: In the first step of the recognition process, the input image is decomposed into its pyramid representation, thus obtaining a multiresolution signal representation. At the lowest three levels of the Laplacian pyramid, a neural network filter of moderate size is trained to indicate the target location. The last step consists of a fusion process over the several neural network filters to obtain the final result. To perform this fusion we use a belief network to combine the various filter outputs in a statistically meaningful way. In addition, the belief network allows the integration of further knowledge about the image domain. By applying this multiresolution recognition scheme, we obtain nearly scale- and rotation-invariant target recognition with a significantly decreased false alarm rate compared with a single-resolution target recognition scheme.
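
    The Laplacian pyramid decomposition used in the first step can be sketched compactly: each level stores the band-pass residual between an image and its blurred, upsampled reduction, and the top level stores the low-pass remainder. For brevity this sketch uses 2x2 block averaging instead of the usual Gaussian filtering; the decomposition is still exactly invertible.

```python
import numpy as np

def laplacian_pyramid(img, levels):
    # Pyramid with 2x2 block-average downsampling and nearest-
    # neighbour upsampling (a simplified stand-in for Gaussian
    # filtering); reconstruction below is exact by construction.
    pyr, cur = [], img
    for _ in range(levels):
        h, w = cur.shape
        small = cur.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        up = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
        pyr.append(cur - up)          # band-pass residual at this scale
        cur = small
    pyr.append(cur)                   # low-pass top level
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = np.repeat(np.repeat(cur, 2, axis=0), 2, axis=1) + lap
    return cur

rng = np.random.default_rng(1)
img = rng.random((16, 16))
pyr = laplacian_pyramid(img, 3)
rec = reconstruct(pyr)
```

    Filters trained on the coarser band-pass levels see targets at larger effective scales, which is what gives the scheme its approximate scale invariance.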

  15. How Does the Low-Rank Matrix Decomposition Help Internal and External Learnings for Super-Resolution.

    PubMed

    Wang, Shuang; Yue, Bo; Liang, Xuefeng; Jiao, Licheng

    2018-03-01

    Wisely utilizing internal and external learning methods is a new challenge in the super-resolution problem. To address this issue, we analyze the attributes of the two methodologies and make two observations about their recovered details: 1) they are complementary in both the feature space and the image plane, and 2) they distribute sparsely in the spatial space. These observations inspire us to propose a low-rank solution which effectively integrates the two learning methods and thereby achieves a superior result. To fit this solution, the internal and external learning methods are tailored to produce multiple preliminary results. Our theoretical analysis and experiments prove that the proposed low-rank solution does not require massive inputs to guarantee performance, thereby simplifying the design of the two learning methods for the solution. Intensive experiments show the proposed solution improves upon each single learning method in both qualitative and quantitative assessments. Surprisingly, it shows even greater capability on noisy images and outperforms state-of-the-art methods.
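
    The low-rank machinery such a solution rests on is the truncated SVD: by the Eckart-Young theorem, keeping the top r singular components gives the best rank-r approximation and suppresses sparse deviations. The sketch below shows only this building block on synthetic data, not the paper's actual integration of internal and external SR results.

```python
import numpy as np

def low_rank(M, r):
    # Best rank-r approximation via truncated SVD (Eckart-Young).
    # Building block only -- the paper applies low-rank decomposition
    # to a stack of preliminary super-resolution results.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(2)
base = np.outer(rng.random(20), rng.random(20))       # rank-1 "clean" data
noisy = base + 0.01 * rng.standard_normal((20, 20))   # small deviations
approx = low_rank(noisy, 1)
err_noisy = np.linalg.norm(noisy - base)              # before denoising
err_lowrank = np.linalg.norm(approx - base)           # after rank-1 fit
```

    The rank-1 fit lands closer to the clean matrix than the noisy observation does, which is the mechanism that lets a low-rank model reconcile complementary, sparsely distributed estimates.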

  16. Standoff aircraft IR characterization with ABB dual-band hyper spectral imager

    NASA Astrophysics Data System (ADS)

    Prel, Florent; Moreau, Louis; Lantagne, Stéphane; Bullis, Ritchie D.; Roy, Claude; Vallières, Christian; Levesque, Luc

    2012-09-01

    Remote-sensing infrared characterization of rapidly evolving events generally involves the combination of a spectro-radiometer and infrared camera(s) as separate instruments. Time synchronization, spatial co-registration, consistent radiometric calibration and managing several systems are important challenges to overcome; they complicate target infrared characterization data processing and increase the sources of error affecting the final radiometric accuracy. MR-i is a dual-band hyperspectral imaging spectro-radiometer that combines two 256 x 256 pixel infrared cameras and an infrared spectro-radiometer into one single instrument. This field instrument generates spectral datacubes in the MWIR and LWIR. It is designed to acquire the spectral signatures of rapidly evolving events. The design is modular. The spectrometer has two output ports configured with two simultaneously operated cameras to either widen the spectral coverage or increase the dynamic range of the measured amplitudes. Various telescope options are available for the input port. Recent platform developments and field-trial measurement performance will be presented for a system configuration dedicated to the characterization of airborne targets.

  17. Atlas-based automatic measurements of the morphology of the tibiofemoral joint

    NASA Astrophysics Data System (ADS)

    Brehler, M.; Thawait, G.; Shyr, W.; Ramsay, J.; Siewerdsen, J. H.; Zbijewski, W.

    2017-03-01

    Purpose: Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. Methods: The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Results: Intra-reader variability as high as 10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for the expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with the expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. Conclusions: The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with the expert radiologist without the need for time-consuming and error-prone manual selection of landmarks.

  18. Atlas-based automatic measurements of the morphology of the tibiofemoral joint.

    PubMed

    Brehler, M; Thawait, G; Shyr, W; Ramsay, J; Siewerdsen, J H; Zbijewski, W

    2017-02-11

    Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Intra-reader variability as high as ~10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for the expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with the expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with the expert radiologist without the need for time-consuming and error-prone manual selection of landmarks.

  19. A hexagonal orthogonal-oriented pyramid as a model of image representation in visual cortex

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1989-01-01

    Retinal ganglion cells represent the visual image with a spatial code, in which each cell conveys information about a small region in the image. In contrast, cells of the primary visual cortex use a hybrid space-frequency code in which each cell conveys information about a region that is local in space, spatial frequency, and orientation. A mathematical model for this transformation is described. The hexagonal orthogonal-oriented quadrature pyramid (HOP) transform, which operates on a hexagonal input lattice, uses basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The basis functions, which are generated from seven basic types through a recursive process, form an image code of the pyramid type. The seven basis functions, six bandpass and one low-pass, occupy a point and a hexagon of six nearest neighbors on a hexagonal lattice. The six bandpass basis functions consist of three with even symmetry, and three with odd symmetry. At the lowest level, the inputs are image samples. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing square root of 7 larger than the previous level, so that the number of coefficients is reduced by a factor of seven at each level. In the biological model, the input lattice is the retinal ganglion cell array. The resulting scheme provides a compact, efficient code of the image and generates receptive fields that resemble those of the primary visual cortex.
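The subsampling arithmetic described above can be sketched numerically. The function below is illustrative only (a lattice size that is a power of 7 and the level count are assumed for the example); it checks that keeping six bandpass coefficients plus one low-pass coefficient per seven-point cell yields a critically sampled code.

```python
import math

def hop_counts(n_samples, n_levels):
    """Coefficient bookkeeping for a HOP-style pyramid.

    Each hexagonal cell of 7 lattice points yields 6 bandpass and 1 low-pass
    coefficient; only the low-pass values feed the next level, so the lattice
    shrinks by a factor of 7 per level and its spacing grows by sqrt(7).
    """
    bandpass_per_level = []
    n = n_samples
    for _ in range(n_levels):
        cells = n // 7
        bandpass_per_level.append(6 * cells)
        n = cells                       # low-pass coefficients feed the next level
    spacing_growth = math.sqrt(7) ** n_levels
    return bandpass_per_level, n, spacing_growth
```

For 7^4 = 2401 input samples over 4 levels this gives 6·(343 + 49 + 7 + 1) = 2400 bandpass coefficients plus 1 residual low-pass value, i.e. exactly as many coefficients as samples, as expected for an orthogonal pyramid code.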

  20. Computerized tomography using video recorded fluoroscopic images

    NASA Technical Reports Server (NTRS)

    Kak, A. C.; Jakowatz, C. V., Jr.; Baily, N. A.; Keller, R. A.

    1975-01-01

    A computerized tomographic imaging system is examined which employs video-recorded fluoroscopic images as input data. By hooking the video recorder to a digital computer through a suitable interface, such a system permits very rapid construction of tomograms.

  1. The research of multi-frame target recognition based on laser active imaging

    NASA Astrophysics Data System (ADS)

    Wang, Can-jin; Sun, Tao; Wang, Tin-feng; Chen, Juan

    2013-09-01

Laser active imaging is suited to conditions such as no temperature difference between target and background, pitch-black night, and poor visibility. It can also detect a faint target at long range or a small target in deep space, with the advantages of high definition and good contrast; in short, it is largely immune to the environment. However, because of the long distance, limited laser energy, and atmospheric backscatter, it is impossible to illuminate the whole scene at once, so the target in every single frame is unevenly or only partly illuminated, which makes recognition more difficult. At the same time, the speckle noise that is common in laser active imaging blurs the images. In this paper we study laser active imaging and propose a new target recognition method based on multi-frame images. First, multiple laser pulses are used to obtain sub-images of different parts of the scene. A denoising method combining homomorphic filtering with wavelet-domain SURE thresholding is used to suppress speckle noise, and blind deconvolution is introduced to obtain low-noise, clear sub-images. These sub-images are then registered and stitched to form a completely and uniformly illuminated scene image. After that, a new target recognition method based on contour moments is proposed: the Canny operator is used to extract contours; for each contour, seven invariant Hu moments are calculated to generate feature vectors; and the feature vectors are input into a BP neural network with two hidden layers for classification. Experimental results indicate that the proposed algorithm achieves a high recognition rate and satisfactory real-time performance for laser active imaging.
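The contour-moment step above is reproducible in a few lines of NumPy. The sketch below computes the seven Hu invariant moments of an intensity patch directly (the paper computes them per contour, and the test image here is hypothetical); OpenCV's `cv2.HuMoments` returns the same quantities.

```python
import numpy as np

def hu_moments(img):
    """Seven Hu invariant moments of a 2-D intensity array."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):                      # central moment
        return ((x - cx) ** p * (y - cy) ** q * img).sum()

    def eta(p, q):                     # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])

# Hypothetical target silhouette: the invariants are unchanged by rotation
# and translation, which is what makes them usable as recognition features.
img = np.zeros((32, 32))
img[5:20, 8:15] = 1.0
img[10:14, 15:25] = 0.5
```

Invariance is what the feature vector relies on: `hu_moments(img)` matches `hu_moments(np.rot90(img))` to floating-point precision.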

  2. Comparison of CT-derived Ventilation Maps with Deposition Patterns of Inhaled Microspheres in Rats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, Rick E.; Lamm, W. J.; Einstein, Daniel R.

    2015-04-01

Purpose: Computer models for inhalation toxicology and drug-aerosol delivery studies rely on ventilation pattern inputs for predictions of particle deposition and vapor uptake. However, changes in lung mechanics due to disease can impact airflow dynamics and model results. It has been demonstrated that non-invasive, in vivo, 4DCT imaging (3D imaging at multiple time points in the breathing cycle) can be used to map heterogeneities in ventilation patterns under healthy and disease conditions. The purpose of this study was to validate ventilation patterns measured from CT imaging by exposing the same rats to an aerosol of fluorescent microspheres (FMS) and examining particle deposition patterns using cryomicrotome imaging. Materials and Methods: Six male Sprague-Dawley rats were intratracheally instilled with elastase to a single lobe to induce a heterogeneous disease. After four weeks, rats were imaged over the breathing cycle by CT then immediately exposed to an aerosol of ~1µm FMS for ~5 minutes. After the exposure, the lungs were excised and prepared for cryomicrotome imaging, where a 3D image of FMS deposition was acquired using serial sectioning. Cryomicrotome images were spatially registered to match the live CT images to facilitate direct quantitative comparisons of FMS signal intensity with the CT-based ventilation maps. Results: Comparisons of fractional ventilation in contiguous, non-overlapping, 3D regions between CT-based ventilation maps and FMS images showed strong correlations in fractional ventilation (r=0.888, p<0.0001). Conclusion: We conclude that ventilation maps derived from CT imaging are predictive of the 1µm aerosol deposition used in ventilation-perfusion heterogeneity inhalation studies.
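The reported agreement (r = 0.888) is a Pearson correlation over regional fractional-ventilation values. A minimal sketch of that comparison, with hypothetical regional values standing in for the CT and FMS measurements:

```python
import numpy as np

# Hypothetical fractional ventilation per 3-D region: CT-derived map vs.
# fluorescent-microsphere deposition (illustrative numbers, not study data).
ct_ventilation = np.array([0.12, 0.08, 0.15, 0.05, 0.20, 0.18, 0.22])
fms_deposition = np.array([0.11, 0.09, 0.14, 0.06, 0.19, 0.17, 0.24])

# Pearson correlation coefficient between the two regional maps.
r = np.corrcoef(ct_ventilation, fms_deposition)[0, 1]
```

A value of r near 1 indicates that regions ventilating more also received proportionally more aerosol, which is the study's validation criterion.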

  3. Active action potential propagation but not initiation in thalamic interneuron dendrites

    PubMed Central

    Casale, Amanda E.; McCormick, David A.

    2012-01-01

    Inhibitory interneurons of the dorsal lateral geniculate nucleus of the thalamus modulate the activity of thalamocortical cells in response to excitatory input through the release of inhibitory neurotransmitter from both axons and dendrites. The exact mechanisms by which release can occur from dendrites are, however, not well understood. Recent experiments using calcium imaging have suggested that Na/K based action potentials can evoke calcium transients in dendrites via local active conductances, making the back-propagating action potential a candidate for dendritic neurotransmitter release. In this study, we employed high temporal and spatial resolution voltage-sensitive dye imaging to assess the characteristics of dendritic voltage deflections in response to Na/K action potentials in interneurons of the mouse dorsal lateral geniculate nucleus. We found that trains or single action potentials elicited by somatic current injection or local synaptic stimulation led to action potentials that rapidly and actively back-propagated throughout the entire dendritic arbor and into the fine filiform dendritic appendages known to release GABAergic vesicles. Action potentials always appeared first in the soma or proximal dendrite in response to somatic current injection or local synaptic stimulation, and the rapid back-propagation into the dendritic arbor depended upon voltage-gated sodium and TEA-sensitive potassium channels. Our results indicate that thalamic interneuron dendrites integrate synaptic inputs that initiate action potentials, most likely in the axon initial segment, that then back-propagate with high-fidelity into the dendrites, resulting in a nearly synchronous release of GABA from both axonal and dendritic compartments. PMID:22171033

  4. Cat colour vision: evidence for more than one cone process

    PubMed Central

    Daw, N. W.; Pearlman, A. L.

    1970-01-01

1. The ability of cats to distinguish colours was investigated at mesopic and photopic levels to test the hypothesis that cats discriminate wavelength by using rods in conjunction with a single type of cone. 2. Cats were trained to distinguish red from cyan, and orange from cyan at the mesopic level. They retained the ability to make this discrimination when the coloured stimuli were placed against a background bright enough to saturate the rods. 3. One cat was also tested after being exposed to a bright white light of 9000 cd/m2 for a period of 5 min, and was found to be able to distinguish red from cyan. 4. These results suggest that cats have more than one type of cone. Subsequent recordings from single units in the lateral geniculate nucleus showed that there are rare opponent colour units in layer B with input from a green-absorbing cone and a blue-absorbing cone. PMID:5500987

  5. Compact universal logic gates realized using quantization of current in nanodevices.

    PubMed

    Zhang, Wancheng; Wu, Nan-Jian; Yang, Fuhua

    2007-12-12

    This paper proposes novel universal logic gates using the current quantization characteristics of nanodevices. In nanodevices like the electron waveguide (EW) and single-electron (SE) turnstile, the channel current is a staircase quantized function of its control voltage. We use this unique characteristic to compactly realize Boolean functions. First we present the concept of the periodic-threshold threshold logic gate (PTTG), and we build a compact PTTG using EW and SE turnstiles. We show that an arbitrary three-input Boolean function can be realized with a single PTTG, and an arbitrary four-input Boolean function can be realized by using two PTTGs. We then use one PTTG to build a universal programmable two-input logic gate which can be used to realize all two-input Boolean functions. We also build a programmable three-input logic gate by using one PTTG. Compared with linear threshold logic gates, with the PTTG one can build digital circuits more compactly. The proposed PTTGs are promising for future smart nanoscale digital system use.
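The key idea above, that a staircase (quantized) current characteristic gives a gate whose threshold repeats periodically in the input sum, can be illustrated with a toy model. The staircase step and the odd/even decision below are illustrative assumptions, not the device physics; they show how a single periodic-threshold element computes n-input parity (XOR), which no single linear threshold gate can realize.

```python
import math

def staircase_current(v, step=1.0):
    """Idealized quantized channel current: a staircase function of the
    control voltage, as in an electron-waveguide or SE-turnstile device."""
    return math.floor(v / step)

def pttg(bits):
    """Periodic-threshold gate: fires when the quantized current for the
    summed inputs lands on an odd step, giving n-input parity (XOR)."""
    return staircase_current(sum(bits)) % 2
```

Here `pttg([a, b])` is the two-input XOR and `pttg([a, b, c])` the three-input parity, each realized by one periodic-threshold element.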

  6. Automatic detection and recognition of multiple macular lesions in retinal optical coherence tomography images with multi-instance multilabel learning

    NASA Astrophysics Data System (ADS)

    Fang, Leyuan; Yang, Liumao; Li, Shutao; Rabbani, Hossein; Liu, Zhimin; Peng, Qinghua; Chen, Xiangdong

    2017-06-01

Detection and recognition of macular lesions in optical coherence tomography (OCT) are very important for retinal disease diagnosis and treatment. As one kind of retinal disease (e.g., diabetic retinopathy) may involve multiple lesions (e.g., edema, exudates, and microaneurysms) and eye patients may suffer from multiple retinal diseases, multiple lesions often coexist within one retinal image. Therefore, a single-lesion-based detector may not support the diagnosis of clinical eye diseases. To address this issue, we propose a multi-instance multilabel-based lesions recognition (MIML-LR) method for the simultaneous detection and recognition of multiple lesions. The proposed MIML-LR method consists of the following steps: (1) segment the regions of interest (ROIs) for different lesions, (2) compute descriptive instances (features) for each lesion region, (3) construct multilabel detectors, and (4) recognize each ROI with the detectors. The proposed MIML-LR method was tested on 823 clinically labeled OCT images with normal macula and maculae with three common lesions: epiretinal membrane, edema, and drusen. For each input OCT image, our MIML-LR method can automatically identify the number of lesions and assign the class labels, achieving an average accuracy of 88.72% for cases with multiple lesions, which better assists macular disease diagnosis and treatment.

  7. SU-G-BRA-06: Quantification of Tracking Performance of a Multi-Layer Electronic Portal Imaging Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Y; Rottmann, J; Myronakis, M

    2016-06-15

Purpose: The purpose of this study was to quantify the improvement in tumor tracking, with and without fiducial markers, afforded by employing a multi-layer (MLI) electronic portal imaging device (EPID) over the current state-of-the-art, single-layer, digital megavolt imager (DMI) architecture. Methods: An ideal observer signal-to-noise ratio (d’) approach was used to quantify the ability of an MLI EPID and a current, state-of-the-art DMI EPID to track lung tumors from the treatment beam’s-eye-view. Using each detector’s modulation transfer function (MTF) and noise power spectrum (NPS) as inputs, a detection task was employed with object functions describing simple three-dimensional Cartesian shapes (spheres and cylinders). Marker-less tumor tracking algorithms often use texture discrimination to differentiate benign and malignant tissue. The performance of such algorithms is simulated by employing a discrimination task for the ideal observer, which measures the ability of a system to differentiate two image quantities. These were defined as the measured textures for benign and malignant lung tissue. Results: The NNPS of the MLI was ∼25% of that of the DMI, at the expense of decreased MTF at intermediate frequencies (0.25≤

  8. The Effect of Illumination on Stereo DTM Quality: Simulations in Support of Europa Exploration

    NASA Astrophysics Data System (ADS)

    Kirk, R. L.; Howington-Kraus, E.; Hare, T. M.; Jorda, L.

    2016-06-01

    We have investigated how the quality of stereoscopically measured topography degrades with varying illumination, in particular the ranges of incidence angles and illumination differences over which useful digital topographic models (DTMs) can be recovered. Our approach is to make high-fidelity simulated image pairs of known topography and compare DTMs from stereoanalysis of these images with the input data. Well-known rules of thumb for horizontal resolution (>3-5 pixels) and matching precision (~0.2-0.3 pixels) are generally confirmed, but the best achievable resolution at high incidence angles is ~15 pixels, probably as a result of smoothing internal to the matching algorithm. Single-pass stereo imaging of Europa is likely to yield DTMs of consistent (optimal) quality for all incidence angles ≤85°, and certainly for incidence angles between 40° and 85°. Simulations with pairs of images in which the illumination is not consistent support the utility of shadow tip distance (STD) as a measure of illumination difference, but also suggest new and simpler criteria for evaluating the suitability of stereopairs based on illumination geometry. Our study was motivated by the needs of a mission to Europa, but the approach and (to first order) the results described here are relevant to a wide range of planetary investigations.

  9. A phonology-free mobile communication app.

    PubMed

    Kondapalli, Ananya; Zhang, Lee R; Patel, Shreya; Han, Xiao; Kim, Hee Jin; Li, Xintong; Altschuler, Eric L

    2016-11-01

Aphasia - loss of comprehension or expression of language - is a devastating functional sequela of stroke. There are as yet no effective methods for rehabilitation of aphasia. An assistive device that allows aphasia patients to communicate and interact at speeds approaching real time is urgently needed. Behavioral and linguistic studies of aphasia patients show that they retain normal thinking processes and most aspects of language. They lack only phonology: the ability to translate (input) and/or output sounds (or written words) such as "ta-ble" into the image of a four-legged object with a top at which one works or eats. We have made a phonology-free communication mobile app that may be useful for patients with aphasia and other communication disorders. Particular innovations of our app include calling Google Images as a "subroutine" to allow a near-infinite number of choices (e.g. food or clothing items) for patients without having to make countless images, and the use of animation for words, phrases or concepts that cannot be represented by a single image. We have tested our app successfully in one patient. The app may be of great benefit to patients with aphasia and other communication disorders. Implications for Rehabilitation: We have made a phonology-free mobile communication app. This app may facilitate communication for patients with aphasia and other communication disorders.

  10. Temporal resolution and motion artifacts in single-source and dual-source cardiac CT.

    PubMed

    Schöndube, Harald; Allmendinger, Thomas; Stierstorfer, Karl; Bruder, Herbert; Flohr, Thomas

    2013-03-01

The temporal resolution of a given image in cardiac computed tomography (CT) has so far mostly been determined from the amount of CT data employed for the reconstruction of that image. The purpose of this paper is to examine the applicability of such measures to the newly introduced modality of dual-source CT as well as to methods aiming to provide improved temporal resolution by means of an advanced image reconstruction algorithm. To provide a solid base for the examinations described in this paper, an extensive review of temporal resolution in conventional single-source CT is given first. Two different measures for assessing temporal resolution with respect to the amount of data involved are introduced, namely, taking either the full width at half maximum of the respective data weighting function (FWHM-TR) or the total width of the weighting function (total TR) as the base of the assessment. Image reconstruction using both a direct fan-beam filtered backprojection with Parker weighting and a parallel-beam rebinning step is considered. The theory of assessing temporal resolution by means of the data involved is then extended to dual-source CT. Finally, three different advanced iterative reconstruction methods that all use the same input data are compared with respect to the resulting motion artifact level. For brevity and simplicity, the examinations are limited to two-dimensional data acquisition and reconstruction. However, all results and conclusions presented in this paper are also directly applicable to both circular and helical cone-beam CT. While the concept of total TR can directly be applied to dual-source CT, the definition of the FWHM of a weighting function needs to be slightly extended to be applicable to this modality. The three different advanced iterative reconstruction methods examined in this paper result in significantly different images with respect to their motion artifact level, despite exactly the same amount of data being used in the reconstruction process. The concept of assessing temporal resolution by means of the data employed for reconstruction can nicely be extended from single-source to dual-source CT. However, for advanced (possibly nonlinear iterative) reconstruction algorithms the examined approach fails to deliver accurate results. New methods and measures to assess the temporal resolution of CT images need to be developed to be able to accurately compare the performance of such algorithms.
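The FWHM-TR measure discussed above is straightforward to evaluate numerically. The weighting function below is an assumed triangular stand-in (not an actual Parker weighting); the point is only how the FWHM of a sampled weighting function is read off, and how it differs from the total TR (the full support width).

```python
import numpy as np

def fwhm(t, w):
    """Full width at half maximum of a sampled weighting function w(t)."""
    half = w.max() / 2.0
    above = np.where(w >= half)[0]          # samples at or above half maximum
    return t[above[-1]] - t[above[0]]

t = np.linspace(0.0, 1.0, 1001)                       # fraction of one rotation
w = np.maximum(0.0, 1.0 - np.abs(t - 0.5) / 0.25)     # triangular weighting, support 0.5
```

Here the total TR (support of `w`) is 0.5 of a rotation while the FWHM-TR is 0.25, illustrating how the two measures rank the same data differently.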

  11. Single DMD time-multiplexed 64-views autostereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Loreti, Luigi

    2013-03-01

Based on a previous prototype of a real-time 3D holographic display developed last year, we developed a new concept for an auto-stereoscopic multiview display: a 64-view, wide-angle (90°), full-colour 3D display. The display is based on an RGB laser light source illuminating a DMD (Discovery 4100, 0.7") at 24,000 fps, with an image deflection system built around an AOD (Acousto-Optic Deflector) driven by a piezo-electric transducer that generates a variable standing acoustic wave in the crystal, which acts as a phase grating. The DMD projects 64 points of view of the image onto the crystal cube in fast sequence. Depending on the frequency of the standing wave, the input picture sent by the DMD is deflected at a different viewing angle. A holographic screen at the proper distance diffuses the rays vertically (60°) and horizontally passes (1°) only the rays directed toward the observer. A telescope optical system enlarges the image to the required dimensions. VHDL firmware that renders 64 views (16-bit 4:2:2) of a CAD model (obj, dxf, or 3Ds) and depth-map-encoded video images in real time (16 ms) was developed on the resident Virtex-5 FPGA of the Discovery 4100 SDK, eliminating the need for image transfer and high-speed links.

  12. Infrared dim and small target detecting and tracking method inspired by Human Visual System

    NASA Astrophysics Data System (ADS)

    Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian

    2014-01-01

Detecting and tracking dim and small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and infrared imaging precise guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track infrared dim and small targets. In general, the HVS involves at least three mechanisms: a contrast mechanism, visual attention, and eye movement. However, most existing algorithms simulate only one of these mechanisms, which leads to various drawbacks. A novel method that combines all three HVS mechanisms is proposed in this paper. First, a group of Difference-of-Gaussians (DoG) filters, which simulate the contrast mechanism, is used to filter the input image. Second, visual attention, simulated by a Gaussian window, is applied at a point near the target, named the attention point, to further enhance the dim small target. Finally, the Proportional-Integral-Derivative (PID) algorithm is introduced to predict the attention point in the next frame, simulating human eye movement. Experimental results on infrared images with different types of backgrounds demonstrate the high efficiency and accuracy of the proposed method for detecting and tracking dim and small targets.
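The eye-movement step, predicting the next attention point with a PID loop, can be sketched as follows. The gains are illustrative assumptions (the paper does not publish a tuning), and a 1-D position stands in for the 2-D attention point.

```python
class PID:
    """Discrete PID controller with illustrative, assumed gains."""
    def __init__(self, kp=0.5, ki=0.05, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error):
        self.integral += error
        derivative = error - self.prev_err
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def predict_next(predicted, measured, controller):
    """Move the attention-point prediction toward the measured target position."""
    return predicted + controller.step(measured - predicted)

# Track a target repeatedly detected at x = 10; the prediction converges there.
pid = PID()
prediction = 0.0
for _ in range(100):
    prediction = predict_next(prediction, 10.0, pid)
```

In the full method the converged prediction centers the Gaussian attention window for the next frame, so the enhancement follows the moving target.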

  13. Image processing and recognition for biological images.

    PubMed

    Uchida, Seiichi

    2013-05-01

This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although the paper does not provide technical details, it conveys the main tasks and the typical tools used to handle them. Image processing is a large research area concerned with improving the visibility of an input image and extracting valuable information from it. As its main tasks, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow, and image registration. Image pattern recognition, the technique of classifying an input image into one of a set of predefined classes, is also a large research area. This paper overviews its two main modules: the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are a very difficult target even for state-of-the-art image processing and pattern recognition techniques, owing to noise, deformations, etc. This paper is intended as a tutorial guide to bridge biology and image processing researchers for further collaboration on such a difficult target. © 2013 The Author Development, Growth & Differentiation © 2013 Japanese Society of Developmental Biologists.

  14. Image classification at low light levels

    NASA Astrophysics Data System (ADS)

    Wernick, Miles N.; Morris, G. Michael

    1986-12-01

    An imaging photon-counting detector is used to achieve automatic sorting of two image classes. The classification decision is formed on the basis of the cross correlation between a photon-limited input image and a reference function stored in computer memory. Expressions for the statistical parameters of the low-light-level correlation signal are given and are verified experimentally. To obtain a correlation-based system for two-class sorting, it is necessary to construct a reference function that produces useful information for class discrimination. An expression for such a reference function is derived using maximum-likelihood decision theory. Theoretically predicted results are used to compare on the basis of performance the maximum-likelihood reference function with Fukunaga-Koontz basis vectors and average filters. For each method, good class discrimination is found to result in milliseconds from a sparse sampling of the input image.
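A minimal numerical sketch of correlation-based two-class sorting under photon-limited imaging. The bar patterns, photon rate, and the difference-of-classes reference below are hypothetical stand-ins (the paper derives its reference function from maximum-likelihood decision theory):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 8x8 reference patterns standing in for the two image classes.
class_a = np.zeros((8, 8)); class_a[:, 2:6] = 1.0   # vertical bar
class_b = np.zeros((8, 8)); class_b[2:6, :] = 1.0   # horizontal bar

# Difference reference weights the pixels that discriminate the classes
# (a crude surrogate for the maximum-likelihood reference function).
reference = class_a - class_b

def correlate(image, ref):
    """Cross-correlation score at zero shift."""
    return float((image * ref).sum())

# Photon-limited observation of a class-A scene: Poisson counts with only
# a few photons per bright pixel.
photon_image = rng.poisson(class_a * 3.0)
decision = "A" if correlate(photon_image, reference) > 0 else "B"
```

Even with a sparse sampling of the input image, the sign of the correlation already separates the two classes, which is the effect the abstract reports at millisecond timescales.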

  15. Encrypting Digital Camera with Automatic Encryption Key Deletion

    NASA Technical Reports Server (NTRS)

    Oakley, Ernest C. (Inventor)

    2007-01-01

    A digital video camera includes an image sensor capable of producing a frame of video data representing an image viewed by the sensor, an image memory for storing video data such as previously recorded frame data in a video frame location of the image memory, a read circuit for fetching the previously recorded frame data, an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor and an encrypted data output port, and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory and overwriting the video frame location storing the previously recorded frame data.
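The claim describes encrypting each new sensor frame with the previously recorded frame as the key. A toy sketch with XOR as a stand-in cipher (the patent does not specify the encryption circuit's algorithm, so this is an assumption for illustration):

```python
import numpy as np

def encrypt_frame(frame, key_frame):
    """Encrypt the sensor frame using the previously recorded frame as key."""
    return np.bitwise_xor(frame, key_frame)

def decrypt_frame(cipher, key_frame):
    """Recover the frame, given the same key frame."""
    return np.bitwise_xor(cipher, key_frame)

rng = np.random.default_rng(0)
prev_frame = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)  # frame already in memory
new_frame = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)   # frame from the sensor
cipher = encrypt_frame(new_frame, prev_frame)                   # then overwrites prev_frame's slot
```

Because the write circuit overwrites the key frame with the new ciphertext, the key is automatically destroyed after use, which is the "automatic encryption key deletion" of the title.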

  16. Valve system incorporating single failure protection logic

    DOEpatents

    Ryan, Rodger; Timmerman, Walter J. H.

    1980-01-01

    A valve system incorporating single failure protective logic. The system consists of a valve combination or composite valve which allows actuation or de-actuation of a device such as a hydraulic cylinder or other mechanism, integral with or separate from the valve assembly, by means of three independent input signals combined in a function commonly known as two-out-of-three logic. Using the input signals as independent and redundant actuation/de-actuation signals, a single signal failure, or failure of the corresponding valve or valve set, will neither prevent the desired action, nor cause the undesired action of the mechanism.
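Two-out-of-three logic is simple to state in code; the sketch below mirrors the valve behavior, where any single failed signal neither blocks a commanded action nor triggers an uncommanded one.

```python
def two_out_of_three(a, b, c):
    """Actuate only when at least two of the three independent inputs agree."""
    return (a and b) or (a and c) or (b and c)
```

With all three signals commanding actuation, one stuck-low input (True, True, False) still actuates; a single spurious high input (True, False, False) does not.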

  17. The adaptive observer. [liapunov synthesis, single-input single-output, and reduced observers

    NASA Technical Reports Server (NTRS)

    Carroll, R. L.

    1973-01-01

An adaptive means of determining the state of a linear time-invariant differential system with unknown parameters is described: the generation of state estimates from available measurements, for use in systems whose control depends on measurements that are unavailable. A single-input single-output adaptive observer and a reduced adaptive observer are developed. The basic ideas behind both the adaptive observer and the nonadaptive observer are examined. The Liapunov synthesis technique is surveyed and applied to the adaptation algorithm of the adaptive observer.

  18. Numerical analysis of wavefront aberration correction using multielectrode electrowetting-based devices.

    PubMed

    Zohrabi, Mo; Cormack, Robert H; Mccullough, Connor; Supekar, Omkar D; Gibson, Emily A; Bright, Victor M; Gopinath, Juliet T

    2017-12-11

We present numerical simulations of multielectrode electrowetting devices used in a novel optical design to correct wavefront aberration. Our optical system consists of two multielectrode devices preceded by a single fixed lens. The multielectrode elements function as adaptive optical devices that can be used to correct aberrations inherent in many imaging setups, biological samples, and the atmosphere. We are able to accurately simulate the liquid-liquid interface shape using computational fluid dynamics. Ray tracing analysis of these surfaces shows clear evidence of aberration correction. To demonstrate the strength of our design, we studied three different input aberration mixtures that include astigmatism, coma, trefoil, and additional higher-order aberration terms, with amplitudes as large as one wave at 633 nm.

  19. Robust decentralized controller for minimizing coupling effect in single inductor multiple output DC-DC converter operating in continuous conduction mode.

    PubMed

    Medeiros, Renan Landau Paiva de; Barra, Walter; Bessa, Iury Valente de; Chaves Filho, João Edgar; Ayres, Florindo Antonio de Cavalho; Neves, Cleonor Crescêncio das

    2018-02-01

    This paper describes a novel robust decentralized control design methodology for a single inductor multiple output (SIMO) DC-DC converter. Based on a nominal multiple input multiple output (MIMO) plant model and performance requirements, a pairing input-output analysis is performed to select the suitable input to control each output aiming to attenuate the loop coupling. Thus, the plant uncertainty limits are selected and expressed in interval form with parameter values of the plant model. A single inductor dual output (SIDO) DC-DC buck converter board is developed for experimental tests. The experimental results show that the proposed methodology can maintain a desirable performance even in the presence of parametric uncertainties. Furthermore, the performance indexes calculated from experimental data show that the proposed methodology outperforms classical MIMO control techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
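The pairing input-output analysis described above is commonly done with the relative gain array (RGA); the paper does not state which pairing measure it uses, so the RGA is an assumed, standard choice here, and the gain matrix below is purely illustrative.

```python
import numpy as np

def relative_gain_array(G):
    """RGA = G (elementwise *) transpose(inverse(G)).

    Diagonal elements near 1 indicate that pairing output i with input i
    gives weak loop coupling; rows and columns always sum to 1.
    """
    return G * np.linalg.inv(G).T

# Hypothetical steady-state gain matrix for a single-inductor dual-output
# (SIDO) converter: rows = outputs, columns = control inputs.
G = np.array([[2.0, 0.5],
              [0.4, 1.5]])
rga = relative_gain_array(G)
```

For this matrix the diagonal RGA elements are about 1.07 and the off-diagonal ones about -0.07, so the diagonal pairing (input i controls output i) minimizes coupling, which is the goal of the decentralized design.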

  20. The ideal imaging AR waveguide

    NASA Astrophysics Data System (ADS)

    Grey, David J.

    2017-06-01

Imaging waveguides are a key development helping to create the Augmented Reality revolution. They have the ability to use a small projector as an input and produce a wide field of view, large eyebox, full colour, see-through image with good contrast and resolution. WaveOptics is at the forefront of this AR technology and has developed and demonstrated an approach which is readily scalable. This paper presents our view of the ideal near-to-eye imaging AR waveguide. This will be a single-layer waveguide which can be manufactured in high volume and at low cost, and is suitable for small form factor applications and all-day wear. We discuss the requirements of the waveguide for an excellent user experience. When enhanced (AR) viewing is not required, the waveguide should have at least 90% transmission, no distracting artifacts, and should accommodate the user's ophthalmic prescription. When enhanced viewing is required, the waveguide additionally needs excellent imaging performance: resolution to the limit of human acuity, a wide field of view, full colour, and high luminance uniformity and contrast. Imaging waveguides are afocal designs and hence cannot provide ophthalmic correction. If the user requires this correction then they must wear contact lenses, prescription spectacles, or inserts. The ideal imaging waveguide would need to cope with all of these situations, so we believe it must be capable of providing an eyebox at an eye relief suitable for spectacle wear which covers a significant range of population inter-pupillary distances. We describe the current status of our technology and review existing imaging waveguide technologies against the ideal component.
